Technology is neither good nor bad; it is just a tool, and like any tool it can be used for different purposes. So since the arrival of artificial intelligence (AI), there have been people who have used it to improve processes, diagnose diseases, or spare other species unnecessary suffering.
However, there have also been companies and governments trying to design lethal autonomous weapons, which use AI to decide on their own whether or not to try to kill a person. Fortunately, experts in the field have opposed these efforts for years.
The Future of Life Institute
On Wednesday, July 18, the Future of Life Institute (FLI), an organization focused on the use of technology for the betterment of humanity, issued a pledge condemning the development of lethal autonomous weapons and appealing to governments to prevent it.
“AI has great potential to help the world if we stigmatize and prevent its abuse,” said Max Tegmark, president of the FLI, in a press release. “AI weapons that decide to kill people autonomously are as disgusting and destabilizing as biological weapons, and should be treated in the same way,” he added.
The pledge, in which signatories promise to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” has been signed by 170 organizations and 2,464 individuals, among them Elon Musk, co-founder of OpenAI; Jaan Tallinn, co-founder of Skype; and Stuart Russell, a leading AI researcher.
The signatories who have attracted the most attention are Demis Hassabis, Shane Legg, and Mustafa Suleyman, the three founders of Google DeepMind, the American company's main AI research group. As you may recall, Google has been embroiled in a controversy over its work on weapons development for the US.
Beyond the technology experts, 26 United Nations member states have already endorsed a global ban on lethal autonomous weapons, though several other world powers, including Russia, the United Kingdom, and the United States, have not yet joined.
This is not the first time AI experts have come together to sign a pledge against the development of autonomous weapons. This one, however, has more signatories, and some of the new additions are quite prominent names in the AI space.
Unfortunately, even if every nation in the world agreed to ban lethal autonomous weapons, that would not necessarily prevent people, or even governments, from continuing to develop such weapons in secret.
As the FLI itself warned in the past with a terrifying video, the development of smart weapons can spiral out of control if it falls into the wrong hands. Hence the efforts to ban these dystopian “killer robots.”