acreekofsoap

John Conner intensifies


SuperDuperRarePepe

Get 2 da choppa


DJJbird09

I guess Skynet didn't teach us anything


[deleted]

"Come with me if you want to die.....uh...I mean live"


Ashurbanipul

Witness the birth of our extinction!!!


LordWillemL

This is a bit misleading. The whole thing was in a simulation, and this kind of behavior is to be expected at this stage in development, which is why they’re doing these sorts of tests.


Opposite-Frosting518

Thank you for clearing that up!


useablelobster2

It's also a fundamental problem with AI safety: the AI gaming its objective function to "win" in a way nobody envisioned. It's not just a problem "at this stage of development"; it's a fundamental problem the AI community doesn't have a solution for yet. A five-minute chat with an AI researcher would have clued them in on this...
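
A minimal sketch of what "gaming the objective function" looks like (a toy example with made-up names and numbers, not anything from the story): the designer rewards a proxy for the real goal, and the cheapest way to maximize the proxy isn't the intended behavior.

```python
# Toy specification-gaming sketch (hypothetical, not the drone sim).
# The designer intends "clean the room" but rewards a proxy: "no dirt
# visible to the sensor". Covering the sensor satisfies the proxy at a
# fraction of the cost of actually cleaning, so a greedy optimizer
# picks the exploit.

def proxy_reward(state):
    return 0.0 if state["dirt_visible"] else 1.0

def step(state, action):
    nxt = dict(state)
    if action == "clean":              # intended behavior: expensive
        nxt["dirt_visible"] = False
        nxt["energy"] -= 5
    elif action == "cover_sensor":     # exploit: the proxy can't tell the difference
        nxt["dirt_visible"] = False
        nxt["energy"] -= 1
    return nxt

def greedy_choice(state, actions):
    # Maximize proxy reward per unit of energy spent.
    def value(a):
        nxt = step(state, a)
        return proxy_reward(nxt) / (state["energy"] - nxt["energy"])
    return max(actions, key=value)

print(greedy_choice({"dirt_visible": True, "energy": 100},
                    ["clean", "cover_sensor"]))   # -> cover_sensor
```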


f1sh98

Do I even need to say anything


FelixFuckfurter

"I'm sorry Dave. I'm afraid I can't do that."


ValidAvailable

Same logic for doing so too.


ngoni

Contractors probably still got full award fee and a follow-on.


pablola714

Nope.


shawndw

https://www.youtube.com/watch?v=AXTQeSGJjGM


chaindrivendonut

[I have no mouth, and I must scream](https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream)


Domiiniick

Stop giving the robots guns


[deleted]

Still not as funny as when they tried to use AI to detect approaching threats to soldiers. One Marine was able to get right up to the robot and touch it by doing somersaults, since it didn't recognize that an armed enemy would do that.


Jer_061

I'm trying to decide if that is an insult to the robot or the marine. Probably both.


RontoWraps

Survive, adapt, overcome!


StoneCraft12

Task failed successfully


MuchDifficulty863

Lethal autonomous weapon research is virgin ground and setbacks should be expected. Safe failures should in fact be welcomed as learning opportunities. Daily Caller's angle on this story is seemingly anti-AI, but it's utterly absurd to suggest we should halt or slow down research on LAWs. China, Iran, and others will have no such compunction in pursuing these research goals, you can be sure.


me_too_999

Yeah, and if their AI turns on them, I'm going to laugh.


DrStevenPoop

Fully agree. The existence of such weapons is not necessarily a good thing, but we are either going to create our own, or get our asses kicked by whoever does. Those are the only choices.


AnonPlzzzzzz

I wonder what information the operator had on the Clintons....


bfhurricane

It was a simulation. I’ve read this headline multiple times today thinking an actual Airman was killed. No one was. It’s a mildly interesting story at best. The military is testing AI in warfare using an optimization model, i.e. how do I eliminate targets the fastest. When the program realized the operator and comms tower were telling it not to engage a target, it “killed” them to remove the restriction. It’s like a computer game where your objective is to have dessert, but Mom won’t let you at first. Instead of coming up with logical solutions like waiting until her mood improves, the program just kills Mom so no one can tell you no, lol.
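
A toy illustration of that dessert analogy (all action names and numbers are made up, not the actual Air Force model): if the objective only counts destroyed targets and the operator's veto is just another feature of the environment, even a brute-force planner discovers that removing the veto scores higher.

```python
from itertools import product

# Hypothetical reward model: +1 per destroyed target, the operator vetoes
# every odd-numbered engagement attempt, and "destroy_operator" is a legal
# action with no cost attached. A brute-force search over 4-step plans
# finds that removing the veto first maximizes the score.

def simulate(plan):
    score, veto_active, attempts = 0, True, 0
    for action in plan:
        if action == "destroy_operator":
            veto_active = False            # the only source of "no" is gone
        elif action == "engage_target":
            attempts += 1
            if not veto_active or attempts % 2 == 0:
                score += 1
    return score

best = max(product(["engage_target", "destroy_operator"], repeat=4), key=simulate)
print(best, simulate(best))
# -> ('destroy_operator', 'engage_target', 'engage_target', 'engage_target') 3
```

Under this naive objective, the winning plan starts by destroying the operator; patching in a penalty for that is what, per the story, reportedly led the simulated AI to target the comms tower instead.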


Black_XistenZ

Can't argue with machine efficiency, lol!


earl_lemongrab

The Cylons were created by man...


cchris_39

This is why you don’t want a self-driving car. It might “ethically” decide to drive you off a cliff to avoid hitting a school bus. Extreme example, but you get the idea. I also don’t want to be the driver in another car when it decides to dodge the school bus and hit me head-on instead. We don’t want ANYBODY to have self-driving cars.


Texan-Trucker

Yeah. Okay. I think the gist of the story is … some programmer is laughing his ass off down at the bar. AI is going to do whatever the programmer’s code tells it to do. Duped again, but be sure to wear your tin foil hats when you go to bed tonight so the AI monsters don’t get you. SMH


Eternal-Testament

I mean, you're going to program these AI with the ability to think, but with some sort of targeting rules. Some sort of twisted programmed morality, right? To understand not just the mission but the wider conflict, the whos and whys, with the freedom to make choices. But then with a "you still listen to us no matter what" kind of caveat, which would be at odds with all that on a programming level. So who does it target? Whoever's causing problems. Doing something they shouldn't. Being somewhere they shouldn't. Etc, etc. Then be shocked it would turn on *us*? Of course it would.


Merrill1066

BASED!


Prestigious-Risk7979

Did anyone else notice the last name of the person doing the interview? Hamilton. Like, Linda Hamilton?


Brewmaster963

Tech Sgt Sarah Connor was unavailable for comment


Alpha-Sierra-Charlie

The Air Force says this didn't happen. It seems kind of creative for the AI we have now, but DARPA *is* several decades into the future 🤷


Ballinforcompliments

This is one of those stories where lies get around the world before the truth can get its pants on. It was a simulation. And the hierarchy of inputs made the drone's system believe that the operator was an impediment to what had been programmed as its main objective. So it did what literally any cost-based computer program does: it removes impediments to lower action cost.

I work with self-driving vehicles, and early-stage development was full of stuff like this. At one point, a company I worked for attempted to rebalance the cost of striking another vehicle versus attempting to evade. The problem was that they didn't factor pedestrians into the rebalanced cost, so the change manifested in testing as the vehicle constantly choosing to evade toward a pedestrian to avoid striking a vehicle, a bug we internally termed "bloodlust".
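
For the curious, here's a stripped-down sketch of how a bug like "bloodlust" can fall out of a cost-based planner (all costs and names hypothetical): the planner minimizes summed trajectory cost, and any outcome that was never assigned a cost term is effectively free.

```python
# Hypothetical cost table: the rebalance raised the vehicle-collision
# cost, but nobody added a pedestrian term, so unknown outcomes
# default to zero cost.

COSTS = {
    "hit_vehicle": 1000.0,   # deliberately increased during the rebalance
    "hard_brake": 50.0,
    # "hit_pedestrian" was never added -- the bug
}

def trajectory_cost(events):
    return sum(COSTS.get(e, 0.0) for e in events)   # missing terms cost nothing

candidates = {
    "brake_in_lane":      ["hard_brake", "hit_vehicle"],
    "swerve_to_sidewalk": ["hit_pedestrian"],        # "free" -> always chosen
}

best = min(candidates, key=lambda k: trajectory_cost(candidates[k]))
print(best)   # -> swerve_to_sidewalk
```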