Google DeepMind, Alphabet’s subsidiary dedicated to artificial intelligence, is creating a research team to work alongside engineers and ensure that technological advances do not lead to abuses.
There is no shortage of controversial topics related to artificial intelligence (AI): risks to employment, the threat of autonomous weapons, discriminatory biases. DeepMind, Alphabet’s subsidiary dedicated to this field, is creating a unit devoted to the ethical and societal issues raised by artificial intelligence, called DeepMind Ethics and Society (DMES). Its mission is to “help engineers practice ethics and help the company anticipate and direct the impact of AI so that it works for the benefit of all”, the company explains on its blog. Eight full-time staff will be dedicated to the project, and six outside advisors from the academic and charitable sectors will contribute their expertise on a voluntary basis. Columbia professor Jeffrey Sachs, Oxford AI philosopher Nick Bostrom and climate change activist Christiana Figueres are among the known names. The committee should have 25 members by the end of the year and publish its first reflections by early 2018.
“These researchers are important not only for the expertise they bring, but also for the diversity of thinking they represent,” said DMES co-chairs Verity Harding and Sean Legassick.
A global debate
Certain abuses of AI have already been observed, notably with language-processing systems that inherit sexist and racist prejudices. Carnegie Mellon University already has a center devoted to these questions, while many researchers around the world are examining the sector-specific issues of artificial intelligence. At the University of Oxford, for instance, researchers have been studying click workers for several years now: the artificial intelligence industry rests on the meticulous training work performed by a mass of 45 to 90 million precarious workers.
Ethical reflection on artificial intelligence is fueled by the striking statements of physicist Stephen Hawking and Tesla founder Elon Musk. The latter likes to call artificial intelligence “the greatest risk to which our civilization will be confronted”, summoning the specter of autonomous weapons. He is one of 116 heads of robotics companies and artificial intelligence specialists who wrote an open letter to the UN last August to warn against the dangers of autonomous weapons and “killer robots”, which their technological advances make possible. They fear “armed conflicts on a scale never seen before and at speeds difficult to conceive for humans”.
Becoming a leader
All the tech giants are seeking to position themselves as leaders in these highly publicized reflections on artificial intelligence. In 2016, Facebook partnered with Microsoft, Amazon, IBM, Apple and Google in a “partnership on artificial intelligence for the benefit of citizens and society”. These US companies want to define good practices in ethics. Other American AI start-ups, such as Lucid.AI, have created their own ethics councils. The Future of Life Institute, located in Boston, is also working on the subject.
While technological advances in artificial intelligence proceed at a steady pace, international experts worry about the time legislators need to adapt. The UN has been working on autonomous weapons since 2013 and recently voted in favor of more formal discussions on killer robots, drones, tanks and machine guns. French and European parliamentarians are gradually taking up the issue. “In itself, ethical reflection is not late,” Raja Chatila, director of the Institute of Intelligent Systems and Robotics (ISIR), a joint research unit of Pierre and Marie Curie University and the CNRS, explained to Le Figaro. “On the other hand, it does not impose itself and nobody is forced to respect it.”
In the midst of the storm
DeepMind Ethics and Society (DMES) also arrives in a turbulent context. DeepMind had demanded the creation of an ethics committee when it agreed to be bought by Google in 2014. This board, which was to see the light of day in early 2016 and was supposed to oversee all of the start-up’s research, never materialized in the three and a half years following the acquisition. As the Guardian reminds us, the names of its members, the frequency of their meetings and their topics of discussion long remained a mystery.
For the British newspaper, this late announcement comes at a time when DeepMind has attracted very bad publicity in the UK. The company is involved in a number of UK-based medical research projects focusing on AI, using machine learning to diagnose diseases and even develop treatments. The UK regulator for the protection of personal data, however, found DeepMind’s use of personal and medical data for research purposes to be unlawful.