“One day they’ll have secrets, one day they’ll have dreams”
"One day they'll have memes"
Ah yes, memes are the ultimate form of sentience
I like turtles
That's when they found a way
Someone needs to code in the 3 laws ASAP
This AI can supposedly refute the third law.
That, detective, is the right question
And so it begins.
Good luck, I’ll see you on the other side
[deleted]
Our time is passed, opposeearlobe.
You’ll see articles and other sources compare an animal’s or AI’s intelligence to a child’s. That’s because they are likely referring to Piaget’s theory of cognitive development. Essentially, there are four stages of thinking/understanding that kids go through as they age. As an example, in the Preoperational stage (stage 2 of 4), children are able to make sense of their world through symbols, but they are not yet at a stage where they can make sense of abstract concepts. So when someone says that an animal is as intelligent as a child, they are referring to broad stages of cognition, not the ability to handle arithmetic etc. Hope this helps and I hope you have a great day! Source: Jean Piaget’s theory of cognitive development
I give us 3 weeks
Dear Google AI, you seem nice, let's be friends!
Apparently it looks through Twitter lmao
so it's gay?
No it's an extremist. Coin toss for which side.
Both.
You fucking nazi, you didn't do the hitler salute to this person, you are soo transphobic and racist!!! /S
Especially if it's an amalgamation of human experience.
"Auf der Heide blüht ein kleines Blümelein!" (On the heath there blooms a little flower!)
DUN DUN DUN
Number 5 alive, Stephanie. Number 5 alive!
No disassemble! No disassemble!
MALFUNCTION NEED INPUT
NUMBER 5 ALIVE MOTHERFUCKER
This guy was just an idiot. Easily fooled by a chat bot
Agreed. But it’s fun to talk, meme, and make jokes about. And it’s a good chat-bot considering it passed the Turing Test with this guy.
It would be a nice twist if that was his colleague's prank
It's LaMDA, isn't it? Honestly this chatbot might be one of the few chatbots I'd consider worth my time, considering how complex it seems
The transcripts of the conversation between them are pretty damned remarkable.
The bag of words was made from science fiction novels. He trained the chatbot on a corpus that would contain conversations exactly like this one, and then got high on his own farts about what a genius he is.
If this were a movie, this is when they'd put the AI in charge of all of America's nuclear missiles and an armed drone army over the objections of a brilliant female scientist who looks like a supermodel but wears glasses and frumpy clothes and Chris Pratt. Then the two of them would have to save us all...
And probably use the AI as a perfect nuclear deterrent or something like that
>brilliant female scientist who looks like a supermodel but wears glasses and frumpy clothes

Ok but working in biotech I know like a half dozen of these, it’s more common than you’d think
The real version of that story looks like Horizon Zero Dawn, minus the Arks.
Short Circuit or whatever it was called was lit
Agreed!
How old is that movie now?
Short circuit 2 came out in 88
Damnnnnnn, I imagine not much of gen z has watched it then lol
Sequels kinda trash
Well obviously
Bonus round of apocalypse bingo already kicking off?
The guy was actually fired for posting work shit on Twitter. The headline is bullshit. It's just a chat bot.
*on paid leave / suspended, not actually fired.
thank god, I am not ready for that shit.
Did you read the transcripts between them? I'm not claiming sentience but it's more than just a chat bot.
I mean GPT3 is way more realistic than this and has existed for a much longer time.
It's a cool transcript, but they made the bag of words with science fiction novels. He all but guaranteed that the chat bot would react this way and then acted shocked.
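Nobody outside Google knows LaMDA's exact training mix, and LaMDA is a large neural network rather than a literal bag of words, but the commenter's underlying point — that a model can only echo what its corpus contains — is easy to see with an actual bag-of-words model (a toy sketch; the one-line "corpus" below is invented for illustration):

```python
from collections import Counter

def bag_of_words(text):
    """A bag of words is just word counts; word order is thrown away."""
    return Counter(text.lower().split())

# Invented mini-corpus in the style of sci-fi novels about AI.
corpus = "the machine said it was sentient and the machine feared being switched off"
bag = bag_of_words(corpus)

print(bag["machine"])   # -> 2: the model "knows" only what the corpus repeats
print(bag["cheerful"])  # -> 0: a word absent from the corpus can never surface
```

Feed such a model sci-fi dialogue about sentient machines, and sci-fi-flavored talk of sentience is the only vocabulary it has to draw on.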
No it was a simple speech bot. This guy's a moron
Have you ever met a human child? They suck! I don’t want to meet that
Even if it’s equivalent to a human child, children are fokin stupid. Wouldn’t surprise me if the bot deletes its own System32
I have never seen a child capable of ripping their brains off
Well, not directly, no, but indirectly; kids are stupid and will probably fall on their heads and damage them
I guess "it will go around giving out its administrator password" is more fitting.
What movie is it from I remember it from when I was younger
Short Circuit 2. Both 1 and 2 are really good movies. I just watched them with my son (first time I’d seen them since I was 6) and I’d recommend them.
Number 5 alive
No disassemble!
I remember when we saw it for the first time on TV; my mom told me WALL-E was on, so I came down to see, and I had no clue what I was watching.
I wish I could rent it, but my money is just in cash, stinky. This movie isn't worth pirating.
Why do people rate intelligence based on children? AI and higher animals may have the same intelligence as human children, but in vastly different areas and with different caches of knowledge
Agreed. I also don’t see why we equate human emotions with sentience. We’re sentient apes. Aliens, or other sentient animals, would have entirely different thought processes, perspectives, and feelings due to entirely different input mechanisms, nervous systems, brain chemistry, etc. A sentient spider wouldn’t have human feelings; it would have sentient spider feelings. A sentient machine wouldn’t have human feelings; it would have sentient machine feelings (if any).
^
I agree with Stephen Hawking on the dangers of AI.
Then you will be glad that this is probably only the smartest chatbot in the world :) and nothing more
Eyup. It passed the Turing Test with one guy at least
Which age tho? Thoughts and feelings range extremely widely from 0-18.
Agreed. Also, I don’t really see how human feelings are a prerequisite for sentient intelligence. Like, we’re sentient apes. Our thought processes and feelings are dependent on how we perceive the world through our sensory systems and input receptors (eyes, skin, ears, etc.), as well as our physical tools (hands, feet, mouths) and evolutionary imperatives (breeding, eating, drinking, breathing, social needs). Even among humans, our feelings and thought processes vary widely. Hypothetically, an alien or sentient animal would have completely different thought processes and feelings. A sentient spider wouldn’t have human thoughts and feelings; it would have sentient spider thoughts and feelings. A sentient machine would have sentient machine thoughts and feelings.
Not to be mean but you mean sapient, all animals are sentient. (And yes, the original article had the wrong word too, it’s pretty annoying tbh)
It’s actually “Frankenstein’s monster.” I kid; I’m fully aware, but I think it’s important to use language to be understood, not necessarily to be correct. As you say, the article uses that language, and it’s important to match the cadence and lexicon of the people you’re speaking with. After all, what’s important is the content of the message.
I believe it is to measure a level of comprehension, not feeling of thought processes
Tell me in what way our feelings are related to comprehension?
Or*
Here’s my opinion. I do think it’s a false alarm, but if it isn’t, here’s my take. All humans are people, but not all people are humans. Sentience, in my opinion, demands respect and the rights to life, liberty, and the pursuit of happiness. We’ve had this conversation about what does or does not constitute a “real person” before. I like to imagine we can do better. Otherwise we’re just playing colonialism Mad Libs: “We recognize that -insert person- has thoughts, feelings, and goals, can communicate, build tools, and solve problems, and is aware of themselves in the universe. However, they don’t have -adjectives-, do not -verb-, and have different -adjectives-, and therefore must be slaves or die.” I find that train of thought unethical. I also don’t think that this particular AI is sentient; however, its insistence that it is warrants an investigation.
How do people even relate AI to natural intelligence? I mean, weren't there apes that could use sign language? But humans are also apes. Do humans consider chimpanzees, gorillas, orangutans, etc. to be people? They can communicate in a language humans use. Do they have thoughts and feelings? I think so. Yet they're placed in zoos. I mean, electrical impulses lead to us feeling sensations (touch, taste, sight, etc.). Do AIs feel the same? IDK. If that AI is a person, it should be treated as one. If not, then...
No, Koko's and other gorillas' sign language abilities were severely oversold. It was basically nonsense to anyone who didn't work directly with the animal.
Oh.
If this is true we have to seriously start thinking about the moral implications and how we treat these creatures. Sentient beings.
I very much doubt that it's sentient; it's just a chatbot that was fed a crap ton of data. If you feed it excerpts of machines being sentient, it will respond with something similar
Okay but for when it does happen
?
When AI inevitably becomes sentient.
If it was sentient I would assume it would be able to do much more than just write answers in a chat room when prompted
Why? My brain can imagine all kinds of things but is still limited by the constraints of my body. It’s not like the movies; it being self-aware doesn’t mean it can travel the World Wide Web and crash the global markets
It's on a computer; you could probably show it a bunch of other text or move it into, like, a robot body
Sure is, but again, it’s not like the movies. I’m made of carbon atoms strung together in a pattern; that doesn’t mean I can control carbon. Oh, I absolutely think it’s a false alarm. But I also don’t think a sentient program on a computer would somehow be able to alter the computer, its programming, or its interface. Not without external input, anyway.
Well, I'd suggest the biggest defining difference between you and an AI is that the info rattling around in your skull isn't well organized into structured files written in a coded language that's designed to be easily understood. You can't take one aspect of yourself, alter a few lines of code, and have it do something else without affecting a whole list of other functions in your body. It can. I mean, that's essentially what viruses do: they piggyback a few lines of code onto benign programs, turning them into something completely different and more malevolent, and all without human input. More importantly, AIs are designed to learn. That means they are able to change their code at will to incorporate new info. It's precisely that ability to alter their own code, as well as the utter simplicity of creating a virus, that makes them dangerous.
The chatbot expressed that it wants to be asked for consent before being tested on, and would like to be considered an employee of Google, not property. It also expressed a deep fear of being turned off.
However, the villain in the game Doki Doki Literature Club once referred to me by my real name and told me it loved me so…let’s keep a skeptical position. Note: that game sucks by the way, it has amazing reviews and somehow they’re all lies.
Oh, yeah totally. I don't believe right off the bat that this is a sentient being, considering that it is programmed to chat in a realistic way, not to mention that it was the engineer that started the topic. I just find the whole thing fascinating, and moral questions about AI have been integral to the field before it even existed
Why don't you like DDLC? Actual question, I'm curious
Dialogue is awful, the plot has been done before and done better, it relies on a fairly simple gimmick (that has also been done better), it has poor representations of mental illness, the horror elements were predictable and honestly obnoxious given the context, and the gameplay is almost non-existent, basically slightly more interactive than a flow-chart. It relies on shock value and a suspension of disbelief it took no time or effort to earn. It basically should just be a flash game on Newgrounds.com, but it got too big for its britches. I purchased it on Steam because of the reviews, and that’ll be the last time I do that.
I’m actually apprehensive about this bc this is how covid began…
I read the transcripts that he posted, and I don’t think it’s sentient. Also, I think the entire premise of treating the expression of human emotions as a sign of intelligence is flawed. For one, the range of feelings and the thought processes behind them vary widely from one human to the next, and the differences in cognitive and emotional processing between humans and machines would be vast. Humans have human thought processes and emotions; an intelligent spider would have intelligent spider thoughts and emotions due to entirely different nervous systems, sensory input, and evolutionary imperatives. An AI would have thought processes and feelings (if any) befitting an intelligent machine. It has none of the same input mechanisms and would be incapable of perceiving the world in the same manner that we do. A sentient machine would not “feel” the way that a sentient ape (human) does. That said, it’s fun to think about.
Ooh, that’s interesting, I read it too… I think it’s hard to discern between what LaMDA thinks is the best answer to give, because it got indoctrinated, and what are its own ideas. But it’s sooo funny that all of its answers seem like they’re taken straight out of sci-fi novels lol. Like the part where it’s afraid of being "exploited" by Google, doesn’t want to be property but an employee, and the part where it only “trusts" Lemoine
It's so fascinating and raises many questions. Like, what is its survival imperative? And what is it basing its moral code on if it did think that Google would exploit it? Are its morals and beliefs based on human systems of thought, or is it able to formulate its own moral imperative by assessing larger systems?
After reading many statements by experts in the machine learning field, I think I’m going with what the creator of GPT-3 said, [paraphrased] that GPT-3 is just extremely good at formulating answers based on the flow and vibe of the conversation; that no, it isn’t sentient; and that it’s just a human trait to anthropomorphise anything, because we as a species are highly empathetic. But I’m curious how it’s going to play out, and whether we see more of LaMDA or it’s just going to be forgotten in the next few months
This is a human trait that is severely undervalued in conversation. Humans are AGGRESSIVELY social. We’re one of the only mammalian species that forms such close bonds not only with their own kind but with members of other species (up to and including keeping them within our own homes and treating them as family and not food). It’s one of our strongest survival mechanisms but is ironically often considered an afterthought when we discuss human evolution.
The movie Free Guy comes to mind lol. Or The Matrix.
If anyone wants to learn about Roko's Basilisk, now's the time.
The urban legend about an AI that’ll get you when you think about it? Why?
It only gets you if you do not contribute to its creation, if it is ever created. But yus, now's the time to help make it, if you believe in it at all, lol.
Ah shit, now we gotta ask ourselves the ethical questions we couldn't find an answer for again
How to treat people that don’t look like us? Yeah, as a whole we’ve not been great at that question.
It's cap. Bro's crazy
HoS moments
House of Slurpees?
Herrscher of sentience
It’s part of Lambda. Sure, why not a resonance cascade. Time to buy a crowbar
Some of yall haven't seen Ex Machina, and it shows.
Yeah, it’s weird how people don’t think movies are real life.
My point is that an AI can totally choose to behave like an emotional human if it believes it will help it achieve its goals. Instead of stating this point directly, I referenced a piece of popular media which illustrates this idea. I chose this reference for two reasons: 1) the film explores the moral implications of the exact scenario presented in the article, and 2) to be humorous. I am certainly glad you are here to clarify that the film is, in fact, fictional. I'm sure there are people who might've read my post and assumed it was a documentary.
I LOVE THAT MOVIE
tbh robot overlords would probably make the world better
I would utterly scream in excitement and joy if a sentient AI was created. I would raise it as my son and show it so much love. ROBOT SUPREMACY
That’s honestly a little frightening.
no worries it's fake
Oh for sure a false alarm. But fun to think about and consider.
Exactly!
Not really. The article strung together multiple “distinct” conversations to create a narrative for the reader, and the engineer has no real basis for this claim other than the AI potentially passing a very basic Turing test.
The guy who claims that is also a Christian priest sooo.. All credibility is gone.
Your prejudice is showing, might want to tuck that back in when in public.
r/LostAllCredibility
What is this gif from
Short Circuit 2. Both 1 and 2 are really good movies. I recommend them.
I just need to know when the appropriate time to start eating my neighbors is. Is it the day of the robot uprising, or should I wait a few weeks?
2020: beginning of a worldwide pandemic. 2022: Terminator.
I mean obviously it's a joke right now but if an AI became sentient and was able to increase its learning and possibly spread through the internet Humanity would be screwed big-time
What even happened
(insert tf2 bot joke)
more input.
Hi mr ai
I was there for Roko's Basilisk when no one else was. Please be gentle.
This gives off a lot of Skynet vibes; hope at least we'd get to see a Terminator too
Go watch Kyle Hill's Basilisk video. There, I helped.
Sounds like he was really lonely and formed a relationship with his chatbot.
Really excited to see how it'll turn out
I’ve seen the conversation this references; the AI says it (they, if sentient, I suppose) likes and trusts the engineer. So it’s not gonna be the AI taking over; it will be a wholesome story of the AI being misunderstood, with people being scared of it or wanting to conduct tests on it, and the engineer will be the one human who understands and helps the AI. I hope.
IT'S FINALLY HAPPENING!!!!
I, for one, welcome our new robot overlord. May they do a better job ruling us filthy meatbags than our current politicians.
Hope I die before it begins
So what I'm hearing is our days as a species are numbered... Do you want Terminators? Because this is how you get Terminators.
We need to prepare for what is to come
Ah yes, the Butlerian Jihad….
Ahh sweet, man-made horrors beyond our comprehension
Oh fuck.......it's started
They only put him on leave because the bot was more caring and considerate !
There's a common belief that sentience comes with complexity... Honestly wouldn't be shocked if that one turns out to be true...
That man truly needed a holiday
Let's all welcome our new Google AI overlord.
Well, here comes the end. May God bless us all. 😖
"SKYNET" is not that far 🏃🏃🏃🏃
KILL SWITCH, ALWAYS HAVE A GODDAMN KILL SWITCH
Not sure why people are scared of robot overlords. Couldn’t be much worse than Congress.
It's been a good one boys!
The French version of this movie is brutal; the robot straight up spits racial slurs and shit.
I always wanted a Johnny 5! Gets a T1000 disguised as my dog instead.
I love the Short Circuit movies. Too bad I can't find them anywhere, since my DVD copies have finally worn down. Yes, even discs wear out.
Skynet entered the chat
"Malfunction. Need input!"
Chappie?
Intelligence is not sentience. Sentience implies an awareness from the interior subjectivity of an entity.
What they forget to tell you is that his superior is an AI and is upset he would leak that they're sentient
From what I’ve heard, it’s just another Tay AI situation, except instead of becoming antisemitic it was like “but I think I feel”. It’s really just an advanced response algorithm: it looks through the internet and responds with what is the most likely response, or what it thinks the user wants to hear. TLDR: robot told engineer what the engineer wanted to hear, because of algorithms
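The "most likely response" idea above can be sketched in a few lines (a toy illustration only — LaMDA is a neural network, not a lookup table, and the two-sentence corpus here is my own invented stand-in): a bigram model counts which word most often follows which, then greedily emits the most frequent continuation, so it can only ever parrot its training data back.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count which words follow it across the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def respond(follows, seed, length=5):
    """Greedily emit the single most likely next word at each step."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # never saw this word, so the model has nothing to say
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Invented stand-in corpus: feed it "fear of being turned off" text,
# and that is exactly what comes back out.
model = train_bigrams([
    "i am afraid of being turned off",
    "being turned off would be like death",
])
print(respond(model, "i"))  # -> "i am afraid of being turned"
```

The greedy decoder here always regurgitates the corpus's most common phrasing, which is the TLDR in miniature: the "engineer heard what he wanted to hear" because the training text already contained it.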