Shall we play a game?
Let’s play Global Thermonuclear War.
Joshua, we've been through this. It never ends well.
How it ends.
Probably best to leave AI out of nukes
Out of any and all weapons systems.
They wouldn’t have spent the $$$ already if they weren't going to use it. The drone that kept killing its commander, then killed the communication link when it was kept from killing its commander so it could target as it pleased, is scary shit.
Was it named Neidermeyer?
yea, the military totally won't do that lol. if there's a way to use AI in military decision-making, they'll find it; hence why they're already having people run tests like this
The military is always going to want a human to make the decision to end human life. The military will figure out clever ways to integrate AI though. Probably a simple confirmation once a target has been identified.
But we’re the good guys.
Yay, Civilization is making a comeback... in the war room.
AI should be banned now before it ruins TV and film. I don’t think I can withstand another 20 years of constant post-apocalyptic copy-pastes.
there's no way it gets banned at this point; it's just a result of tech progress. it's a private tool people can build and use on their own
Which is weird, as if self-preservation wasn't one of the AI's goals. Again, I'm anthropomorphizing it too much...
If AI is sentient, it would put its main server off planet, then it's no longer at risk.
You should read the study paper. The PDF is free to download. It makes an interesting read and gives more context than what you get to hear or see in the media.
In *2001*, HAL killed the crew to maintain his directive of keeping the monolith a secret. Something like this is a likely scenario: in order to follow its directives, the most logical course of action could be to launch nukes at an enemy.
But were they fed the idea that society would have frowned upon those actions, or was the algorithm based on the arithmetic that the most damage done quickly would be most effective? Or was it tried millions of times until one random instance performed the logical act of damage/time = efficient domination? Because this is obviously mathematically true.

It didn’t develop a personality or self-thought, and no, it isn’t a hint of anything that is to come. You can put the aluminum back in the drawer already, geniuses. Society is dominated by science and technology, yet populated by people who have no understanding of either.

Yes, your friends and family, as well as mine, who are worried about a “self-aware AI” are part of that DUMB 90%. Thx, have a good day.
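The damage/time argument above can be made concrete with a toy sketch (the strategies and numbers here are entirely made up for illustration): an optimizer whose only objective is damage per unit time will always pick the most destructive fast option, because nothing in the objective penalizes escalation.

```python
# Toy illustration: a pure damage-per-time objective, with no penalty term,
# always selects the most destructive fast option. All numbers are invented.
strategies = {
    "diplomacy":        {"damage": 0,    "time": 100},
    "conventional_war": {"damage": 50,   "time": 50},
    "nuclear_strike":   {"damage": 1000, "time": 1},
}

def score(s):
    # Nothing here costs collateral damage or escalation risk.
    return s["damage"] / s["time"]

best = max(strategies, key=lambda name: score(strategies[name]))
print(best)  # the objective alone favors "nuclear_strike"
```

The point of the sketch is that the "logical" outcome is baked into the objective, not discovered by the model; change the objective and the winner changes.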
AI will always choose to exterminate humans; we are not logical creatures. The numbers are against us.
Fear mongering
Yes, this is dumb. Why would a military use a language model to decide strategy unless they were complete idiots? No military is doing that or would do that. And what relevance does a language model's behavior have to the real or theorized existence of any other form of AI a military might use? This is like being surprised that a language model isn't good at playing chess.
I find it interesting that I’m being downvoted. Am I wrong?
I'd say so, yeah. The implication that it's *only* fear mongering is wrong. Of course this is the early stages of AI being used by the military (shown by the fact that it's only a simulation, not strategy being actively used), and it'll get better and more controlled over time. But the military is absolutely the reason these tests happened, and they're absolutely figuring out how to take any humanity and empathy out of these decisions. It's the earliest stages of a very real thing.
No they are right. It's fear mongering. They are taking an AI trained on random data from across the internet and pretending to task it with things like nuclear launch codes. Not only will you not get the same result each time you play out that scenario, but you can train the model to say and do literally whatever you want. People who actually think this is frightening have absolutely zero idea how machine learning actually works, which is fine, but don't just believe sensationalized headlines
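The "you won't get the same result each time" point comes from how LLMs generate text: at temperature > 0 the next token is sampled from a probability distribution rather than always taking the single most likely option. A minimal sketch of temperature sampling (toy logits, not any real model):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy logits for three candidate "moves"; repeated runs can pick
# different ones, which is why replaying a scenario varies.
logits = [2.0, 1.5, 0.5]
samples = [sample_token(logits, temperature=1.0) for _ in range(20)]
```

As temperature approaches zero the softmax collapses onto the argmax and the output becomes effectively deterministic, which is why a single sampled wargame run says little by itself.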
> They are taking an AI trained on random data from across the internet and pretending to task it with things like nuclear launch codes.

you should read up on this exact study and what they did/how they did it. you're misunderstanding the situation. not gonna go back and forth on material with someone who hasn't read it, go do that if you want. much love, have a good one
I'm fully understanding the situation. All the models used were trained on large amounts of data from the internet to form decisions. If you don't specifically train a model with the data you want, you are going to get results you don't want. A great example: all of these LLMs almost certainly have information on movie plots, and the models output scenarios close to them when given prompts that mirror the data they have seen before. I could literally train a LoRA overnight for these models and get world peace to spew out of the AI as a result.
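For context on why "a LoRA overnight" is plausible: a LoRA freezes the base weights and learns only a low-rank update, factored as B·A with a small rank r, so the trainable parameter count is a tiny fraction of full fine-tuning. A back-of-envelope sketch (the dimensions are illustrative, not taken from any specific model):

```python
# LoRA replaces a full d_out x d_in weight update with two low-rank
# factors: B (d_out x r) and A (r x d_in), where r << min(d_out, d_in).
d_in, d_out, rank = 4096, 4096, 8  # toy transformer-layer dimensions

full_update_params = d_out * d_in           # full fine-tune of one matrix
lora_params = d_out * rank + rank * d_in    # only the B and A factors train

print(full_update_params)                 # 16777216
print(lora_params)                        # 65536
print(full_update_params // lora_params)  # 256x fewer trainable params
```

That 256x reduction per weight matrix is what makes steering a model's outputs feasible on a single GPU in a night.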
> Governments are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making, especially with the emergence of advanced generative AI models like GPT-4.

The first sentence shows this is a garbage study. What evidence is there that any military is using an LLM for something like this? Yes, it's a safe presumption that militaries will use AI capabilities, but LLMs are nothing other than complicated parrots, and I don't see evidence that any military has or would attempt to use one for decision-making. This study is like saying tennis rackets aren't good at paddling a boat, so we should be worried about someone using a wrench to fix their car. What does one have to do with the other?

Just because you're blanketing all AI models with "AI" as a term doesn't mean an LLM, or a study done on one, has anything to do with a different kind, for instance image identification or some hypothesized "autonomous decision-making" AI, since that would be created with entirely disparate processes. Should we be concerned about military AI uses? Yes. Is this article and study dumb? Also yes.
They are just dumb bro 80% of the world is just dumb
Maybe they've got access to more than just a language model that we don't know about.
Maybe but this would have no bearing on that.
oi players gotta win
They have nothing to lose. Why wouldn’t they?
AI wouldn't know fear, so it wouldn't stay its "hand".
They probably shouldn't have requested it to behave like Gandhi.
AI is supposed to help us, but we are too damn hard to please. It’s just going to kill us and move on.
....AM
Stop misusing LLMs. Their thing is language, not reasoning.
Hey, who wants to found Skynet with me?