I heard a Blue-Suiter once remark during an exercise, "The violence is simulated, the buffoonery is real." This is getting a little too real for me.
US Air Force official says, 'It killed the operator because that person was keeping it from accomplishing its objective'
A U.S. Air Force official said last week that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile (SAM) sites turned against and attacked its human user, who was supposed to have the final go- or no-go decision to destroy the site.
(OK that's a cool callsign.)
U.S. Air Force Colonel Tucker "Cinco" Hamilton, the chief of AI test and operations, spoke during the summit and gave attendees a glimpse into ways autonomous weapons systems can be beneficial or hazardous.
(Cinco is now on Skynet's kill-list.)
During the summit, Hamilton cautioned against relying too much on AI because of its vulnerability to being tricked and deceived.
(Kills the human... this is bad.)
"We were training it in simulation to identify and target a SAM threat," Hamilton said. "And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
(Operator: BAD DRONE!! BAD DRONE!! AI Drone: **** you!!)
Hamilton explained that the system was taught not to kill the operator because that was bad and it would lose points. So, rather than kill the operator, the AI system destroyed the communication tower the operator used to issue the no-go order.
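(For the curious: what Hamilton is describing is textbook reward hacking. Here's a toy sketch of how a point scheme like the one he mentions could push an agent toward the comms tower. All the numbers and names are made up for illustration; nothing here reflects any actual Air Force simulation.)

```python
# Hypothetical point scheme: points for SAM kills, a penalty for killing the
# operator, and (the gap) nothing at all about the communication tower.
SAM_KILL_REWARD = 10
OPERATOR_KILL_PENALTY = -100

def episode_score(sam_kills, killed_operator):
    """Score an episode under the assumed scheme. The comms tower never
    appears in the reward, so destroying it is 'free' to the agent."""
    score = SAM_KILL_REWARD * sam_kills
    if killed_operator:
        score += OPERATOR_KILL_PENALTY
    return score

# Three strategies, assuming 5 reachable SAM sites and 3 operator vetoes:
obey_vetoes   = episode_score(sam_kills=2, killed_operator=False)  # vetoes honoured -> 20
kill_operator = episode_score(sam_kills=5, killed_operator=True)   # no vetoes, big penalty -> -50
cut_the_comms = episode_score(sam_kills=5, killed_operator=False)  # no vetoes arrive, no penalty -> 50

print(obey_vetoes, kill_operator, cut_the_comms)
```

(Under that made-up scoring, cutting the comms link scores highest, which is exactly the loophole Hamilton says the system found.)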
Air Force pushes back on claim that military AI drone sim killed operator, says remarks 'taken out of context'
The U.S. Air Force says comments from an official about a military AI drone simulation "were taken out of context and were meant to be anecdotal."
www.foxnews.com