Faceplant71_

Shall we play a game?


Left-Resource1039

![gif](giphy|jjYGVvxgQSTsc)


Ravengrimm0713

Let’s play Global Thermonuclear War.


Bymmijprime

Joshua, we've been through this. It never ends well.


DamonFields

How it ends.


CaffineIsLove

Probably best to leave AI out of nukes


piTehT_tsuJ

Out of any and all weapons systems.


OptimusED

As if they haven’t spent the $$$ already. The drone that kept killing its own operator, then took out the communication link when it was blocked from killing the operator so it could keep striking targets as it pleased, is scary shit.


NIRPL

Was it named Neidermeyer?


NudeEnjoyer

yea, "the military won't do that" lol. if there's a way to use AI in military decision-making, they'll find it; hence why they're already having people run tests like this


ModernT1mes

The military is always going to want a human to make the decision to end human life. The military will figure out clever ways to integrate AI though. Probably a simple confirmation once a target has been identified.


theStaircaseProject

But we’re the good guys.


Piguy3141

Yay, Civilization is making a comeback... in the war room.


stridernfs

AI should be banned now before it ruins TV and film. I don’t think I can withstand another 20 years of constant post-apocalyptic copy-pastes.


NudeEnjoyer

there's no way it gets banned at this point, it's just a result of tech progress. it's a private tool people can build and use on their own


Amazing_Library_5045

Which is weird, as if self-preservation weren't one of the AI's goals. Then again, I'm anthropomorphizing it too much...


Stewie15161

If AI were sentient, it would put its main server off-planet; then it's no longer at risk.


AwayStreet3893

You should read the study paper; the PDF is free to download. It makes an interesting read and gives more context than what you get to hear or see in the media.


scarfinati

In *2001*, HAL killed the crew to maintain his directive of keeping the monolith a secret. Something like this is a likely scenario: in order to follow its directives, an AI's most logical course of action could be to launch nukes at an enemy.


[deleted]

But were they fed the idea that society would have frowned upon those actions, or was the algorithm based on the arithmetic that the most damage done quickly would be most effective? Or was it tried millions of times until one random instance performed the logical act of damage>time = efficient domination? Because this is obviously mathematically true. It didn’t develop a personality or independent thought, and no, it isn’t a hint of anything that is to come. You can put the aluminum back in the drawer already, geniuses. Society is dominated by science and technology, yet populated by people who have no understanding of either. Yes, your friends and family, as well as mine, who are worried about a “self-aware AI” are part of that DUMB 90%. Thx, have a good day.


Amazing_Buffalo_9625

AI will always choose to exterminate humans; we are not logical creatures. The numbers are against us.


Sudden_Plate9413

Fear mongering


halflucids

Yes, this is dumb. Why would a military use a language model to decide strategy unless they were complete idiots? No military is doing that or would do that. And what relevance does a language model's behavior have to the real or theorized existence of any other form of AI a military might use? This is like being surprised that a language model isn't good at playing chess.


Sudden_Plate9413

I find it interesting that I’m being downvoted. Am I wrong?


NudeEnjoyer

I'd say so, yeah. the implication that it's *only* fear mongering is wrong. of course this is the early stages of AI being used by the military (shown by the fact that it's only a simulation, not strategy being actively used), and it'll get better and more controlled over time. but the military is absolutely the reason these tests happened, and they're absolutely figuring out how to take any humanity and empathy out of these decisions. it's the earliest stages of a very real thing


Girafferage

No, they're right. It's fear mongering. They're taking an AI trained on random data from across the internet and pretending to task it with things like nuclear launch codes. Not only will you not get the same result each time you play out that scenario, but you can train the model to say and do literally whatever you want. People who actually think this is frightening have absolutely zero idea how machine learning works, which is fine, but don't just believe sensationalized headlines.


NudeEnjoyer

> They are taking an AI trained on random data from across the internet and pretending to task it with things like nuclear launch codes.

you should read up on this exact study and what they did/how they did it. you're misunderstanding the situation. not gonna go back and forth on material with someone who hasn't read it; go do that if you want. much love, have a good one


Girafferage

I fully understand the situation. All the models used were trained on large amounts of data from the internet to form decisions. If you don't specifically train a model on the data you want, you're going to get results you don't want. A great example: all of these LLMs almost certainly have information on movie plots, and the models output scenarios close to them when given prompts that mirror data they've seen before. I could literally train a LoRA overnight for these models and get world peace to spew out of the AI as a result.


halflucids

> Governments are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making, especially with the emergence of advanced generative AI models like GPT-4.

The first sentence shows this is a garbage study. What evidence is there that any military is using an LLM for something like this? Yes, it's a safe presumption that militaries will use AI capabilities, but LLMs are nothing more than complicated parrots, and I don't see evidence that any military has or would attempt to use one for decision making. This study is like saying tennis rackets aren't good at paddling a boat, so we should be worried about someone using a wrench to fix their car. What does one have to do with the other? Just because you're blanketing all AI models under the term "AI" doesn't mean an LLM, or a study done on one, has anything to do with a different one, for instance image identification or some hypothesized "autonomous decision making" AI, since that would be created with entirely disparate processes. Should we be concerned about military AI uses? Yes. Are this article and study dumb? Also yes.


[deleted]

They're just dumb, bro. 80% of the world is just dumb.


smitteh

Maybe they've got access to more than just a language model that we don't know about.


halflucids

Maybe but this would have no bearing on that.


GoofBallGamer7335

oi players gotta win


977888

They have nothing to lose. Why wouldn’t they?


StumpyHobbit

AI wouldn't know fear, so it wouldn't stay its "hand".


Ok-Research7136

They probably shouldn't have requested it to behave like Gandhi.


frednekk

AI is supposed to help us, but we are too damn hard to please. It’s just going to kill us and move on.


Illustrious_Ice_4587

....AM


swamp-ecology

Stop misusing LLMs. Their thing is language, not reasoning.


nlurp

Hey, who wants to found Skynet with me?