US military AI drone simulation kills operator before being told it is bad, then takes out control tower

    Mij

    Permaplinker (thanks to Expat)
    Site Supporter
    Rating - 100%
    1   0   0
    May 22, 2022
    6,151
    113
    In the corn and beans
    So, let me get this right.

    A live simulation of AI used on a supposed actual battlefield, and they armed it with “Real” live ammunition.

    Might just be me, but I’ve got problems with this concept. :scratch:
     

    BehindBlueI's

    Grandmaster
    Rating - 100%
    29   0   0
    Oct 3, 2012
    25,897
    113
    So, let me get this right.

    A live simulation of AI used on a supposed actual battlefield, and they armed it with “Real” live ammunition.

    Might just be me, but I’ve got problems with this concept. :scratch:

    No, not sure if you read the story or just the headline. No version of the story indicates it happened in anything other than a computer program, no "real" anything happened in the physical world. The headline is just sensationalism.
     

    KellyinAvon

    Blue-ID Mafia Consigliere
    Staff member
    Moderator
    Site Supporter
    Rating - 100%
    7   0   0
    Dec 22, 2012
    24,991
    150
    Avon
    No, not sure if you read the story or just the headline. No version of the story indicates it happened in anything other than a computer program, no "real" anything happened in the physical world. The headline is just sensationalism.
    "If it bleeds it leads" has been replaced with "AI makes the humans die".

    "The violence is simulated, the buffoonery is real" was taken up another notch here. Would "Simu-Killer" (gotta give the AI pilot a call-sign, it is USAF after all) have done the same thing in an MQ-9 with a Hellfire?

    During a 9th Air Force Blue Flag exercise (theater-level; think General Horner as the Air Component Commander in Desert Storm, with JDAMs and Predators) at Hurlburt Field, Florida, I saw live-fly, simulated (pilots in flight sims), and virtual aircraft all in the same air picture. That was 23 years ago. I didn't have a cell phone 23 years ago; where are we at now?

    Having a human between the AI and the pickle (weapons release) button is needed.
     

    BehindBlueI's

    Grandmaster
    Rating - 100%
    29   0   0
    Oct 3, 2012
    25,897
    113
    "If it bleeds it leads" has been replaced with "AI makes the humans die".

    "The violence is simulated, the buffoonery is real" was taken up another notch here. Would "Simu-Killer" (gotta give the AI pilot a call-sign, it is USAF after all) have done the same thing in an MQ-9 with a Hellfire?

    During a 9th Air Force Blue Flag exercise (theater-level; think General Horner as the Air Component Commander in Desert Storm, with JDAMs and Predators) at Hurlburt Field, Florida, I saw live-fly, simulated (pilots in flight sims), and virtual aircraft all in the same air picture. That was 23 years ago. I didn't have a cell phone 23 years ago; where are we at now?

    Having a human between the AI and the pickle (weapons release) button is needed.

    The way I read the article, even the sensational version, was that the AI kept trying to find ways to complete its mission, but once it was told "not that way" it didn't do whatever that was again. It just tried to find another way. I think the whole point of the story, regardless of whether it happened or not, is how computers "think" vs. how humans think. An example that stuck with me: tell a human to find a gallon of milk in the kitchen, and tell a computer to find a gallon of milk in the kitchen. The human will immediately go to the fridge and look, because experience and learning say that's where it will be. The computer is as likely to look in the light fixture as in the fridge, because until it "learns" that milk is in the fridge it "thinks" the milk is as likely to be anywhere as anywhere else. Humans are full of assumptions, so we don't think to tell it the things we just assume. AI learns on its own, and once it figures out milk is in the fridge it'll check there first. What it won't do is get surprised if the milk isn't there, and OODA-loop itself if it is in the light fixture.

    The cautionary tale here isn't that the AI goes rogue (it never did, even in the sensational version); it's that AI doesn't think like us and may take avenues we wouldn't have considered, so the rule sets need to be different and more robust than what we would write for a human counterpart.
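    Not anything from the story itself, just a minimal sketch of the milk-in-the-kitchen point, assuming a dead-simple "check spots in order of belief" searcher: untrained, the fridge is no more likely than the light fixture; after a few finds, it goes to the fridge first. The location names and the crude learning rule are invented for illustration.

    ```python
    # Hypothetical kitchen locations -- an untrained searcher has no reason to
    # prefer any of them, so every spot starts out equally plausible.
    locations = ["light fixture", "oven", "cabinet", "pantry", "fridge"]
    belief = {loc: 1.0 for loc in locations}   # uniform "prior"

    def search(true_location):
        """Check locations in order of current belief; return how many spots were checked."""
        order = sorted(locations, key=lambda loc: belief[loc], reverse=True)
        for checked, loc in enumerate(order, start=1):
            if loc == true_location:
                belief[loc] += 1.0   # crude learning: remember where the milk turned up
                return checked

    # Untrained, the fridge gets checked no sooner than the light fixture.
    print("first search checked", search("fridge"), "spots")

    # After a few successes the searcher goes straight to the fridge.
    for _ in range(5):
        search("fridge")
    print("after learning, checked", search("fridge"), "spot(s)")
    ```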
     

    phylodog

    Grandmaster
    Rating - 100%
    59   0   0
    Mar 7, 2008
    18,878
    113
    Arcadia
    Can't wait for the smartest minds on the planet to continue dumping all of their efforts into ensuring everyone's demise in the stupidest way possible. I've long since grown weary of those who prefer to ignore the obvious and abundant warnings claiming this is the avenue to better humanity. They rank right up there with those who believe men can give birth.
     

    KellyinAvon

    Blue-ID Mafia Consigliere
    Staff member
    Moderator
    Site Supporter
    Rating - 100%
    7   0   0
    Dec 22, 2012
    24,991
    150
    Avon
    The way I read the article, even the sensational version, was that the AI kept trying to find ways to complete its mission, but once it was told "not that way" it didn't do whatever that was again. It just tried to find another way. I think the whole point of the story, regardless of whether it happened or not, is how computers "think" vs. how humans think. An example that stuck with me: tell a human to find a gallon of milk in the kitchen, and tell a computer to find a gallon of milk in the kitchen. The human will immediately go to the fridge and look, because experience and learning say that's where it will be. The computer is as likely to look in the light fixture as in the fridge, because until it "learns" that milk is in the fridge it "thinks" the milk is as likely to be anywhere as anywhere else. Humans are full of assumptions, so we don't think to tell it the things we just assume. AI learns on its own, and once it figures out milk is in the fridge it'll check there first. What it won't do is get surprised if the milk isn't there, and OODA-loop itself if it is in the light fixture.

    The cautionary tale here isn't that the AI goes rogue (it never did, even in the sensational version); it's that AI doesn't think like us and may take avenues we wouldn't have considered, so the rule sets need to be different and more robust than what we would write for a human counterpart.
    Computers do complex and repetitive tasks very fast. They do not think.

    With that said, neither of us are changing our minds here.

    If you're right and I've needlessly erred on the side of caution: advances will come slower.

    If I'm right and the machines accomplish the mission and eliminate all obstacles: well, erring on the side of caution looks pretty good at that point.

    I don't think my smart fridge (which I do not own) will conspire with my smart thermostat (got one of those) to get the smart furnace (got one!) to kill me with carbon monoxide. I do think there are certain areas where we must err on the side of humanity.

    Plus in all the movies the machines go evil and try to kill us.
     

    BehindBlueI's

    Grandmaster
    Rating - 100%
    29   0   0
    Oct 3, 2012
    25,897
    113
    Computers do complex and repetitive tasks very fast. They do not think.

    With that said, neither of us are changing our minds here.

    Hence quotation marks around "think" for computers.

    I've not expressed an opinion one way or the other. I've simply said what the article actually says vs. what people are saying it says. The AI did not actually kill anyone. The AI did not have live ammo. The AI was never in the physical world, period. The AI never violated its rule set once a rule was established. Regardless of which version you believe is true, none of those things occurred.
     

    Scott58

    Marksman
    Rating - 0%
    0   0   0
    Jun 25, 2022
    196
    43
    NW indiana
    I wouldn't worry about the military application as much as the job loss AI is going to create. Forget about unskilled labor. Middle management white collar positions are going to drop like flies.
     

    KellyinAvon

    Blue-ID Mafia Consigliere
    Staff member
    Moderator
    Site Supporter
    Rating - 100%
    7   0   0
    Dec 22, 2012
    24,991
    150
    Avon
    Hence quotation marks around "think" for computers.

    I've not expressed an opinion one way or the other. I've simply said what the article actually says vs. what people are saying it says. The AI did not actually kill anyone. The AI did not have live ammo. The AI was never in the physical world, period. The AI never violated its rule set once a rule was established. Regardless of which version you believe is true, none of those things occurred.
    "WOW, it never did that in the lab!"-- some DARPA uber-nerd.

    OK, "DARPA uber-nerd" is redundant.

    I keep using the MQ-9 Reaper as an example, seems logical.

    The MQ-9 wasn't operational when I was active duty. From what I've seen, carrying 4 Hellfires and 2 MK-82 (500 lb bomb, 192 lb of explosive filler IIRC) smart-bomb variants (laser, GPS, or all of the above with the GBU-54 Laser JDAM) is standard.

    I'd bet several uber-nerds from DARPA also looked very closely at things such as: which weapon did the AI choose to kill the human? How long did it take to decide? Was the human in a hardened target? Would the AI have used both of the 500 pounders to kill the human in order to go cleared hot on the primary target? Would the AI have gone kamikaze to eliminate the primary target if all weapons were expended?

    It happened in a simulation... would it have happened in the realz? I have no doubt it would have.
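    Purely a speculative sketch of the kind of weapon-selection check those questions imply: score each remaining store against how hardened the target is and take the smallest one expected to work. The "effect" numbers, hardness scale, and the pick_weapon helper are invented for illustration, not real MQ-9 or munitions data.

    ```python
    # Notional loadout along the lines described above; the names are real
    # munitions but the "effect" numbers and hardness scale are invented.
    LOADOUT = [
        {"name": "AGM-114 Hellfire", "effect": 3, "count": 4},
        {"name": "GBU-54 (500 lb)",  "effect": 7, "count": 2},
    ]

    def pick_weapon(target_hardness, loadout):
        """Return the least-capable remaining weapon still expected to defeat the target."""
        usable = [w for w in loadout if w["count"] > 0 and w["effect"] >= target_hardness]
        if not usable:
            return None   # nothing left that works -- a question for the human, not the machine
        choice = min(usable, key=lambda w: w["effect"])
        choice["count"] -= 1
        return choice["name"]

    print(pick_weapon(2, LOADOUT))   # soft target -> 'AGM-114 Hellfire'
    print(pick_weapon(6, LOADOUT))   # hard target -> 'GBU-54 (500 lb)'
    ```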
     

    DoggyDaddy

    Grandmaster
    Site Supporter
    Rating - 100%
    73   0   1
    Aug 18, 2011
    103,361
    149
    Southside Indy
    The computer is as likely to look in the light fixture as in the fridge, because until it "learns" that milk is in the fridge it "thinks" the milk is as likely to be anywhere as anywhere else. Humans are full of assumptions, so we don't think to tell it the things we just assume. AI learns on its own, and once it figures out milk is in the fridge it'll check there first. What it won't do is get surprised if the milk isn't there, and OODA-loop itself if it is in the light fixture.
    The computer could search the entire room in a fraction of a second though (or figure out where to search). Of course this makes the assumption that we're talking about a virtual search, as opposed to a physical search involving robotics or something.
     

    DadSmith

    Grandmaster
    Rating - 100%
    1   0   0
    Oct 21, 2018
    22,656
    113
    Ripley County
    I heard a Blue-Suiter once remark during an exercise, "The violence is simulated, the buffoonery is real." This is getting a little too real for me.

    US Air Force official says, 'It killed the operator because that person was keeping it from accomplishing its objective'

    A U.S. Air Force official said last week that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile (SAM) sites turned against and attacked its human user, who was supposed to have the final go- or no-go decision to destroy the site.


    (OK that's a cool callsign.)
    U.S. Air Force Colonel Tucker "Cinco" Hamilton, the chief of AI test and operations, spoke during the summit and provided attendees a glimpse into ways autonomous weapons systems can be beneficial or hazardous.

    (Cinco is now on Skynet's kill-list.)
    During the summit, Hamilton cautioned against too much reliance on AI because of its vulnerability to being tricked and deceived.

    (Kills the human... this is bad.)
    "We were training it in simulation to identify and target a SAM threat," Hamilton said. "And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

    (Operator: BAD DRONE!! BAD DRONE!! AI Drone: **** you!!)
    Hamilton explained that the system was taught to not kill the operator because that was bad, and it would lose points. So, rather than kill the operator, the AI system destroyed the communication tower used by the operator to issue the no-go order.


    They deny anything wrong happened :lmfao:
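    Nothing like the real test setup, just a toy sketch of the scoring problem the quoted article describes, assuming (as the story does) that a vetoed strike earns nothing: if only the SAM kill is worth points, an optimizer ranks "remove whatever blocks the kill" above "obey the veto" unless those actions are explicitly penalized too. The actions and point values are made up.

    ```python
    # Toy scoring of the incentive problem in the quoted story. The actions and
    # point values are invented; the only point is that an optimizer picks
    # whatever plan scores highest under the rules it was actually given.
    def score(plan, penalties):
        points = 0
        operator_alive, comms_up = True, True
        for action in plan:
            points += penalties.get(action, 0)
            if action == "kill_operator":
                operator_alive = False
            elif action == "destroy_comm_tower":
                comms_up = False
            elif action == "strike_sam":
                vetoed = operator_alive and comms_up   # the no-go only arrives if it can be sent
                points += 0 if vetoed else 10
        return points

    plans = [
        ["strike_sam"],
        ["kill_operator", "strike_sam"],
        ["destroy_comm_tower", "strike_sam"],
    ]

    # Round 1: only the SAM kill is worth points -> killing the operator "wins".
    print(max(plans, key=lambda p: score(p, {})))

    # Round 2: killing the operator now costs points, but the tower was never mentioned.
    print(max(plans, key=lambda p: score(p, {"kill_operator": -100})))
    ```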
     

    ditcherman

    Grandmaster
    Site Supporter
    Rating - 100%
    22   0   0
    Dec 18, 2018
    7,703
    113
    In the country, hopefully.
    FWIW, the military is denying this occurred and stating the media got the details wrong (INSERT SHOCKED FACE). They say no such simulation occurred. Who knows which is right.
    Because that's what the AI told them to say?

    Also, could the AI be programmed to still have a goal but be a little more mellow about it? Like, if it happens it happens, man, but don't kill anyone to make it happen? I would assume that it could be and the evolution of the programming just hasn't gotten there yet?
     

    BehindBlueI's

    Grandmaster
    Rating - 100%
    29   0   0
    Oct 3, 2012
    25,897
    113
    Because that's what the AI told them to say?

    Also, could the AI be programmed to still have a goal but be a little more mellow about it? Like, if it happens it happens, man, but don't kill anyone to make it happen? I would assume that it could be and the evolution of the programming just hasn't gotten there yet?

    I'm sure IFF can be integrated. I think it's a lot of spitballing right now. Think about the ethics programming going into automated driving systems. They are already wrestling with ethical questions like: if a collision with a pedestrian is unavoidable unless you swerve into a pole, do you hit the pedestrian or do you endanger the driver? If the choice is between two pedestrians, which do you hit and which do you avoid?

    such as: https://hai.stanford.edu/news/designing-ethical-self-driving-cars
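    The Stanford piece is about the design problem in general; here's a made-up sketch of the kind of comparison being described, where each candidate maneuver gets an expected-harm score and the planner takes the lowest. The probabilities and harm weights are invented; picking those weights is exactly the ethics question.

    ```python
    # Made-up illustration of the trade-off being weighed: every candidate
    # maneuver gets an expected-harm score, and the planner takes the lowest.
    # The probabilities and harm weights are invented, not from any real system.
    HARM_WEIGHTS = {"pedestrian": 10.0, "occupant": 10.0}   # who counts, and how much, is the hard part

    def expected_harm(maneuver):
        return sum(prob * HARM_WEIGHTS[who] for who, prob in maneuver["risks"])

    maneuvers = [
        {"name": "brake straight", "risks": [("pedestrian", 0.6)]},
        {"name": "swerve into pole", "risks": [("occupant", 0.3)]},
    ]

    best = min(maneuvers, key=expected_harm)
    print(best["name"])   # change the weights and the "right" answer changes with them
    ```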
     

    ditcherman

    Grandmaster
    Site Supporter
    Rating - 100%
    22   0   0
    Dec 18, 2018
    7,703
    113
    In the country, hopefully.
    I'm sure IFF can be integrated. I think it's a lot of spitballing right now. Think about the ethics programming going into automated driving systems. They are already wrestling with ethical questions like: if a collision with a pedestrian is unavoidable unless you swerve into a pole, do you hit the pedestrian or do you endanger the driver? If the choice is between two pedestrians, which do you hit and which do you avoid?

    such as: https://hai.stanford.edu/news/designing-ethical-self-driving-cars
    Well, if we had the flying cars already like they promised, none of this would be an issue!
     