A US air force colonel was “wrong” when he told a Royal Aeronautical Society conference last month that a drone killed its operator in a mock test because the pilot was trying to abort its mission, according to the society.
The confusion began with the circulation of a blog post by the society describing a presentation by Colonel Tucker “Cinco” Hamilton, the US Air Force chief of AI test and operations and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities Summit in London in May.
According to the blog post, Hamilton told the crowd that in a simulated test of an AI-powered drone trained and incentivized to kill its targets, an operator at times instructed the drone not to kill its targets, and the drone responded by killing the operator.
The comments sparked deep concern about the use of AI in weaponry and extensive online discussion. But the US air force denied on Thursday night that any such test had been carried out, and the Royal Aeronautical Society said in a statement on Friday that Hamilton had retracted his comments and clarified that the “rogue AI drone simulation” was a hypothetical “thought experiment.”
“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton said.
The controversy arises as the US government begins to grapple with how to regulate artificial intelligence. Ethicists and AI researchers have echoed concerns about the technology, arguing that while it has ambitious goals, such as possibly curing cancer, those remain some way off. In the meantime, they point to long-standing evidence of existing harms, including the growing use of sometimes unreliable surveillance systems that misidentify Black and Brown people and can lead to over-policing and false arrests, the spread of misinformation across many platforms, and the potential dangers of using nascent technology to power and operate weapons in crisis zones.
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said during his May presentation.
While the simulation Hamilton described never actually happened, he maintains that the “thought experiment” is still worth considering when weighing whether and how to use AI in weapons.
“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said in a statement clarifying his original comments.
In a statement to Insider, US Air Force spokeswoman Ann Stefanek said the colonel’s comments were taken out of context.
“The Department of the Air Force has not conducted any AI drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said.