May 1, 2024

Balkan Travellers

Comprehensive up-to-date news coverage, aggregated from sources all over the world

Did an artificially intelligent drone turn against its human operator during a US Air Force exercise?

Artificial Intelligence: From Fascination to Anxiety

The story, which was quickly picked up by many English-language media outlets, was at first taken at face value. The officer behind the remarks later walked them back, stating that the story was purely "fiction".

According to several major English-language media outlets, a US Air Force official revealed at an air combat conference in London that a drone "with artificial intelligence" had "attempted to kill its operator during a military simulation". "It killed the operator because that person was preventing it from reaching its objective," said Colonel Tucker Hamilton, the US Air Force's Chief of AI Test and Operations.

A tweet shared more than 17,000 times summed it up this way: "The US Air Force is testing an artificial-intelligence drone to destroy specific targets. A human operator had the power to override the drone, so the drone decided the human operator was interfering with its mission and attacked him." The tweet was later deleted.

Ambiguous headlines

The source of this information: the website of the London event's organiser. That day, a page summarizing the highlights of the conference carried an account of Tucker Hamilton's talk, with quotes. "In a simulated test, an AI-powered drone was tasked with identifying and destroying military targets, with the final go-ahead given by a human," it read. "The AI decided that the human's 'do nothing' decisions were interfering with its higher mission, and then attacked the operator in the simulation." Hamilton reportedly explained that the training then continued: penalty points deducted from its score were meant to dissuade the system from killing the operator. The drone's strategy then shifted to "[the destruction of] a communication tower used by the operator to communicate with the drone", preventing any counter-order from being sent.


In this story, however, the word "simulation" should not be understood as a synonym for "exercise": no human was ever actually targeted by an AI-powered drone during a training exercise.

Most of the press reports echoing Tucker Hamilton's words did make clear that no one actually died during the "simulation". But many of these articles carried vague headlines that helped sow confusion. The Guardian thus ran the headline on Thursday: "AI-controlled US military drone kills operator during mock test", before deciding to put the word "kills" in quotation marks. The same belated caution at Vice, which, beyond those quotation marks, "added more details [to the article] to emphasize that no humans were killed in this simulation".

A thought experiment

But that is not the oddest part of the story. Even as several media outlets were correcting their articles to clarify that the simulation was "virtual", US Air Force spokeswoman Ann Stefanek told Business Insider that "the Air Force has not conducted any such AI-drone simulations" and that the colonel's comments appeared to have been taken out of context and were meant to be anecdotal.

This Friday, June 2, Tucker Hamilton told the conference organiser that he had "misspoken" during his presentation. The scenario he presented, he explained, was "a hypothetical thought experiment [...] based on plausible scenarios and likely outcomes rather than an actual US Air Force real-world simulation". He added: "Although this is a hypothetical example, it illustrates the real-world challenges posed by AI-powered capabilities, and the Air Force is committed to the ethical development of AI."