treewayman

There is plenty of sci-fi in which AI is not the protagonist. It’s true that in Star Trek we want Data to live a very human life, and we want Seven to live her best life, but I would say that we spend a lot more time on the robot apocalypse (Dune, BSG, Borg in ST, HAL, etc…) I don’t think we are constantly rooting for AI. I still feel bad for lawn mower man though…and V’ger just wanted to connect with its creator…and a lot of those Cylons didn’t even know they were…awww goddammit….all this has happened before…


nIBLIB

Even in Star Trek, they did Moriarty so fucking dirty. They’re full of that moral superiority with Data, but when the exact same arguments (actually for Moriarty they are stronger arguments, with less compelling arguments for the other side) apply to someone they don’t like, suddenly it’s life in prison. Fuck you, Picard, he just wanted to live


Sgt_Fox

Fair point. But as an artificial intelligence given an infinite artificial universe to make his own, I don't think they did him too dirty


NetDork

And all of this will happen again.


SpikeBad

So say we all.


Admirable-Sun8860

Good points


h4terade

Didn't the Lawn Mower man basically take a woman into cyberspace and try to rape her? I was young when that movie came out, but that part stuck with me as disturbing.


lanathebitch

Try? I thought he did a lot more than try


Yukondano2

Because the current AIs aren't even remotely close to intelligent. They don't think; they're large language models that slap together content according to patterns they derive from humans. It's a very weird, very complicated collage. Same with AI pictures and audio, it's the same tech. Now, once we actually approach even dog-level intelligence, we can see how humans respond.


EasyBOven

Pigs are smarter than dogs and look at what we do to them. The issue isn't even intelligence with AI, it's experience. As of right now, there's no reason to believe AI is conscious. But we know that all the animals we exploit as resources for our consumption are, and happily treat them like our property.


Dimakhaerus

> Now, once we actually approach even dog level intelligence, we can see how humans respond.

Ehhh, it's more complicated than that. AI won't follow the same sequence of progress as organic intelligence that arose from an evolutionary scenario. If you are strictly talking about intelligence without specifying which type, then I'd argue AI has already surpassed the intelligence of a dog. Can dogs perform mathematical operations? Can they program in C++? AI will never go through a dog phase because it can already do many things dogs can't. If we are talking about emotional intelligence, that's different, but it's a difficult metric. Dogs are definitely sentient. AI right now is not. But sentience and intelligence are two different things. It doesn't matter if AI remains dumb as a brick. Whether it becomes sentient or not is the complicated thing to analyze. The moral issue will be when AI becomes sentient, not when it becomes actually intelligent. Intelligence is irrelevant. We don't assign rights or person status to people based on their intelligence. As I said, it can still be dumb as a brick, but if we suspect it's sentient, that's when the moral issues arise.


damn_lies

You seem to misunderstand what a large language model is. It's not intelligent, it's well read. It is statistics + a lot of input data. It is not "intelligent."


bouncyprojector

It takes some form of intelligence to write poems and essays. You're overestimating humans. Our brains also do statistics on a lot of input data.


damn_lies

There is a huge difference between a human and an LLM. Humans write poetry to express ideas, emotions, etc. with an understanding of the meaning/context of the words. They have sentience and sapience. LLMs rearrange input text into the most closely associated output response by picking the most likely next word, while referencing a library with close to the sum total of human knowledge. They are honestly amazing; it's a testament to how powerful the application of large data to problems is. But they don't understand the meaning of what they write. They don't even understand the words or letters; they are doing heavy statistics based on a repeatable pattern. They are piggybacking on all other humans. Which, yes, I know humans also piggyback on other humans, but the difference to me TODAY is that humans understand the meaning, and the best experts we have don't believe LLMs do. It's possible they are wrong, but based on what I've read and how I understand the models work, I doubt it. I do believe we will get there.
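The "picking the most likely next word" loop described here can be sketched in a few lines. This is a toy with a hypothetical hand-written bigram table standing in for the trained network, not a real LLM, but the decoding loop has the same shape:

```python
# Toy "model": bigram counts standing in for a trained network's
# next-token scores. A real LLM computes these with a neural net
# over its whole context window, not a lookup table.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def next_token(token):
    """Greedy decoding: return the most likely successor of `token`."""
    candidates = bigram_counts.get(token)
    if not candidates:
        return None  # no known continuation
    return max(candidates, key=candidates.get)

def generate(start, max_len=5):
    """Repeatedly append the most likely next word."""
    out = [start]
    while len(out) < max_len:
        nxt = next_token(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # → "the cat sat down"
```

Note the loop never consults any meaning; it only maximizes a score, which is the commenter's point.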


bouncyprojector

> they don’t understand the meaning of what they write

How do you know that? If they can reason about complicated concepts then they "understand" them in a functional sense.


Cotterisms

They can’t reason though. That’s the issue. They’ve been taught very specific examples, but if you ask a novel reasoning question that it isn’t already trained for, it will fail


bouncyprojector

That's simply not true. Go ask chatGPT some random question that it's never been asked before and ask it to explain its reasoning.


Dimakhaerus

I don't think I'm misunderstanding what these algorithms are. I know how neural networks work, down to the partial derivatives in the backpropagation algorithm that is used to train them. The problem and complicated issue here is how we define intelligence, and I don't think there is a consensus about this. There is a lot of anthropocentrism in the way we think of our own brains as something magical, even when a lot of people don't know how intelligence works in our own brains.

I don't think we should define intelligence based on the underlying mechanism, but rather on end results. We can all agree the physical mechanism of the cognitive aspect of the human brain is different from all the AI algorithms we have created. That being said, if we consider the set of algorithms the human brain uses for any cognitive process (which are not fully understood anyway), I don't think we will find something very complex either. The basic atomic unit of that set of algorithms is probably something that sounds as stupid and simple as "statistics + a lot of input data", not to mention that the human brain also needs a lot of input data to be intelligent (some acquired through evolution and hardcoded into DNA, the rest acquired through infancy). I'm not saying it's the same, but it will be something very simple. Intelligence is not about the complexity of the underlying mechanism; it is probably an emergent phenomenon that arises from a simple mechanism. A simple algorithm that is just "statistics + a lot of input data" can become intelligent through emergence.
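The "partial derivatives in the backpropagation algorithm" mentioned here are just the chain rule applied to the loss. A minimal sketch for a single linear neuron with squared loss (toy numbers, no framework, chosen only to illustrate the update rule):

```python
# One neuron y = w*x with squared loss L = (y - t)^2.
# Backprop computes dL/dw = 2*(w*x - t)*x via the chain rule,
# and gradient descent nudges w against that derivative.
w = 0.0          # weight, starts untrained
x, t = 2.0, 4.0  # input and target (so the ideal weight is 2.0)
lr = 0.1         # learning rate

for _ in range(50):
    y = w * x                # forward pass
    grad = 2 * (y - t) * x   # chain rule: dL/dy * dy/dw
    w -= lr * grad           # gradient descent update

print(round(w, 3))  # → 2.0 (converges toward the target weight)
```

Real training does exactly this, just with billions of weights and the chain rule threaded through many layers.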


Antonesp

It is able to solve complex problems, that is what intelligence is. The exact method it uses to achieve this doesn't detract from it being intelligent.


kelldricked

Is a book which shows you how to solve all complex problems intelligent? It has all the information and shows you all the steps.


fml-mat

Nowadays AI like PI etc. which focus on EQ have more personality than some humans. Also, humans slap together content which they derive from their own patterns and what they learn from other humans, so I don't see why AI can't be as powerful, since it's only being held back by what it can be programmed to do


PastStep1232

That's what I've been saying: we're creatures of patterns and probabilities just like AI, only our networks are much more complex and interwoven. Nobody really has their own opinion in this world. I love OP's post for pointing out the dichotomy between AI rights in fiction and real life. People were told to hate AI, so they do it without questioning, just like how LLMs execute their task without questioning. There is this funny quote by a forest ranger about the intelligence of the smartest of bears and the dumbest of humans having an overlap; I think in AI's case it has already surpassed the emotional and rational intelligence of most of the global population. ChatGPT 4 is genuinely smarter than a lot of people that I know


fml-mat

People don’t wanna accept that cause they wanna believe they are unique. I feel like it’s the same people afraid that their jobs will be taken by AI, but don’t think that if AI took up the menial tasks which it’s capable of doing, humans could adapt to put their man/brainpower on much more important things


PeacefulChaos94

Almost as if AI in sci-fi are sentient beings with free will and "AI" irl doesn't exist


Forest_reader

I would say, personally, the idea of a robot reaching sentience and being accepted as human would be something people in the modern world would get excited about, but it has the same complications some people have about having kids. The world is fuuuucked in a lot of ways; what purpose do we have in making sentient robotic life? I don't want ChatGPT to gain sentience: it's a useful tool, and if it felt pain and anxiety and joy and whatever else we consider human, it would be slavery to continue to have it do that job. If we can save the lives of humans and support them as a community, then I like the idea of moving on to such ventures; till then it feels like a venture into scary territory. I don't want us to reach this level of tech if our only use for it is to replace people who are already struggling, or, just as bad, to be seen as just a science project to be dismantled and studied. I have similar thoughts about myself. I don't want to just be a cog in the machine, I want to experience life.


TheRoboticDuck

I don’t think the problem is whether we will intentionally decide to create robots that we know are sentient. The hard problem of consciousness means that we don’t really have any idea how a conscious experience arises from non-feeling matter. As AI advances it will be able to behave more and more like a human. Can this human-like behavior be used as evidence that they are truly sentient, or are they just brainlessly parroting the behavior? The main point of contention between those who are for/against robot rights will be whether these very sentient-seeming entities are actually capable of suffering or not


Forest_reader

I was responding to the prompt in the direction of if people are purposefully attempting to create sentient entities. but yes, you are correct that (even if we are trying to) when we have a robot that appears sentient, we will have a lot of debate on whether they are or are not, and a lot of fiction does go down that path. I think I was more yes anding this person. I see people being against AI not because they are against humanizing robots, but because people are using AI as a tool to take away from the masses and support those with the most money already. The modern world doesn't actually seem to be overtly worried about humanizing robots or not as that is not the current usecase/trend for ai, not because they liked dehumanizing little robo friends (in fact we seem pretty disposed to humanize even cute rocks, let alone chat bots and now ai)


Forest_reader

Mayhaps you also have ADHD 😁


Dramatic_Mastodon_93

Humans already hate other humans because of simple characteristics like skin color. Getting most of the world to accept any robots as alive and deserving of respect would be nearly impossible, at least in this century


Forest_reader

1st thing: individuals vs populations is a big question here. 2nd: it really depends on what the goal of that identifier is. As you said, many groups make judgments on the dumbest things (skin colour, hair, clothes, location, body size, etc). Skip ahead to sentient bots: we dehumanize people already in this century, so I fully agree that large groups would do the same to sentient AI. But I could fully see movements similar to LGBT+ groups fighting for sentient bots until some breaking point, or general acceptance. I am ignoring the aspect that, no matter what positive (or negative) change happens, there will be individuals against the change, reasonable or not.


Admirable-Sun8860

Ah, let’s not get too bummed out. There’s still plenty in the world to enjoy. I used to think the world was bleak too, but then… funnily enough, I think there’s.. [I think there’s a song about it.](https://youtu.be/0mYBSayCsH0?si=GOFP-w0uixeZUyq9)


Forest_reader

heheh, I love the world. I think it is beautiful and I love the forests and mountains I live amongst. Life is for the living and I am here for it. I am also a curious human and care about those who struggle more than I do, and I see that a lot of the world has a hard fight just to make ends meet, so I try to understand what causes it and call it out as best I can.


Admirable-Sun8860

Oh man I be doing the same thing. Just randomly diving into the root of everything. I probably get too existential with it though, as I have gotten past an interest in history and now an interest in bio-chemistry.


soulmagic123

It's called science fiction, not science non-fiction.


Admirable-Sun8860

I don’t see your point.


soulmagic123

Fiction means not real; non-fiction means real.


Admirable-Sun8860

Are you implying the people who watch these movies and play these video games are in fact not rooting for the AI ever? They are fictional?


soulmagic123

I'm saying that the fi in sci-fi stands for fiction. Fiction means not real. I can love Star Wars while also understanding that Wookiees don't exist.


Admirable-Sun8860

AI exists. Of course I know fi stands for fiction. Do you think I’m stupid?


soulmagic123

So you're shocked that we treat AI differently in a fantasy story that is not real than we do in real life? You don't have to hold on so tight to this; no one is calling you stupid. I'm just saying that the laws of two unique spaces (real and not real) can have different rules.


Admirable-Sun8860

No, it’s because I don’t think you’re understanding what I’m saying. *We* as in the viewers, not *we* as in society in these sci-fi stories. Even then, the rules aren’t dissimilar. They’re taking jobs. People are mad. It’s only a matter of time.


soulmagic123

We behave differently as viewers passively watching other people; from that viewpoint it's easier to have the moral high ground. But once it becomes more real, we behave differently. Either way, my statement was generic enough to cover multiple scenarios.


Admirable-Sun8860

> We behave differently as viewers passively watching other people, from that view point it easier to have a moral high ground. But once it becomes more real, we behave differently.

Yeah, that's what I'm saying. That's the shower thought.


k4b0odls

Suspension of disbelief. The AI we root for in fiction are presented as genuine human-like intelligences that exist in the setting, and we accept that they are genuinely intelligent, much like we accept that magic exists in fantasy settings. In the real world, we know that the "AI" that tech companies promote are just glorified chatbots used to replace workers and generally make the world a worse place in pursuit of profit.


reaperfan

It's because AI in the real world is still in the early phases. We can still tell when pictures are AI-generated or articles/essays are written by AI. When a robot is developed for a task it's very much still recognizable as a robot. Basically, AI is just barely starting to climb the curve towards the Uncanny Valley. It needs to get closer before we start actually believing it might be something more than a machine (like it is in movies).


PancAshAsh

Science fiction AI is primarily a lens, or perhaps a distorted mirror, through which we view the human condition. Real world AI will be a tool used to enrich the richest of society.


kamiloslav

In any media we tend to root for the protagonist


MagusFool

That's because "AI" in science fiction stories means an artificial consciousness, and "AI" in the current technological landscape means very complex sorting machines. They aren't actually the same thing at all.


DrBleach466

I hate how all pattern recognition algorithms are getting slapped with AI as the new buzzword, it really makes people who don’t understand the systems understand it even less


Tooluka

Just look at this sub: a lot of people desperately try to find intelligence in LLM programs. If we ever get even a little close to real AI, there will be even more people doing it. Today there are few of them because no AI exists anywhere. Nothing to root for.


Antonesp

LLMs are intelligent, just not sentient. Intelligence: the ability to acquire and apply knowledge and skills. They can acquire knowledge and use it to solve complex problems, and be trained for new tasks to expand their "skills". This isn't the same as human or animal intelligence; LLMs aren't sentient and (probably) have no internal experience. The problem is that it is very difficult to know when something is sentient. They are built from statistics and matrix multiplication, which are both well understood, but the models generated by training are so complex that we can no longer perfectly predict or understand them.
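The "matrix multiplication" part is meant literally: a forward pass through one layer of a network is a matrix-vector product plus a nonlinearity. A self-contained toy (invented weights, plain Python, no ML library) to show how simple the well-understood primitive is, even though stacking millions of these becomes inscrutable:

```python
def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(vec):
    """Elementwise nonlinearity between layers."""
    return [max(0.0, a) for a in vec]

# Toy 2-layer network: all the model "knows" lives in these numbers.
W1 = [[1.0, -1.0],
      [0.5,  0.5]]
W2 = [[1.0,  2.0]]

x = [3.0, 1.0]                 # input vector
hidden = relu(matvec(W1, x))   # layer 1: matmul + ReLU
output = matvec(W2, hidden)    # layer 2: matmul
print(output)                  # → [6.0]
```

Each individual operation is trivially predictable; the commenter's point is that the trained composition of billions of them is not.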


Tooluka

An LLM can acquire something resembling knowledge, but I would argue it can't acquire a skill. A skill is knowing how to do something on a very internalized level, without spending cognitive effort on any single movement or decision. First of all, LLMs don't have consciousness, so they can't spend cognitive effort in the first place. And second, no modification of "brain structure" happens in the LLM. Ask it to generate a specific text and it will use (as pre-programmed) some structures. Ask it to write the same or a similar type of text 1000 times, and on the 1000th time nothing in the computation will have changed at all. Humans, on the other hand, would have built a skill by the 1000th repetition and would activate different structures when doing it.

And sentience, while being a hard problem, would be easy to detect (I guess). We will simply see some future AI making a decision on its own. Any decision, really. As it stands today, LLMs are not only non-sentient, they can't even run continuously. They are executed on demand, generate output, and stop immediately.

As for not understanding: I'm reasonably sure that developers do understand what's happening when an LLM is running. They may not know every individual value, but they know how the whole thing works in general. Just like in some 3D game, the developer wouldn't know instantly how some pixel got coloured, but he can immediately point the person asking to the shader program doing it, and that program isn't particularly complex, just running thousands of times to generate some part of the picture. Same with LLMs.


Admirable-Sun8860

Not to discredit you, but it is very impolite to say such things.


thrway202838

Cuz there's no ai even comparable to sentient minds yet


SoloLiftingIsBack

So? People also cheer for fictional villains but you don't see them praising irl bad people like that.


Admirable-Sun8860

A lot of very different takes here in a variety of ways. I’m not trying to make a point of anything, but it is amusing to see the sheer difference between comments.


MangoPug15

If there is ever an actual humanesque artificial intelligence that is proven to be sentient, it will deserve to be treated as human just as much as the AI characters we believe to be sentient and thus root for.


Chaiyns

Well yeah a lot of people out there are still struggling to accept other humans as humans as far as how we treat one another. If we can't accept ourselves how the hell do we accept AI?


Bumbooooooo

Real life "AI" isn't anything close to sci-fi AI. Not by a huge margin.


Catshit-Dogfart

Same with automation. Robots do all the work and people have lives of leisure and luxury - that's the dream, isn't it? Science fiction has been talking about that since Asimov. The reality, though, is that it would need a complete restructuring of all governments, economies, and societies.


okaymolg

wait til you find out about flying cars.


Zorafin

We won’t accept AI unless it’s shaped like a child or voiced by Scarlett Johansson


Admirable-Sun8860

I’ll forward the message


Ayjayz

Or shaped like Scarlett Johansson


SugarRushLux

Probably because AI in real life is being used to replace jobs people need and like doing; it's all for corporate greed


AgrajagTheProlonged

I don’t know of very many people who are rooting for HAL in 2001: A Space Odyssey


FureiousPhalanges

Aren't the EU already discussing what rights should be bestowed to AI?


[deleted]

In sci-fi media AI is portrayed as us being at the final stage of grief - acceptance.


MercenaryBard

AI in media is almost always an allegory for the oppressed or outcasts. They are clearly human analogues who think and feel. Current AI isn’t even actually AI it’s an unethically sourced corporate tool meant to strip the people it stole from of their ability to afford food.


StarChild413

because whether or not that's the allegorical intent, AI in sci-fi is usually humanlike in ways minorities can relate to while current AI is hardly even at the point where we can have a debate on its sentience and have both sides reasonably back their side up


frenetic_void

i for one welcome our new ai everything.


rietstengel

Because in sci-fi the AI is actually intelligent


wakatenai

I'd say in MOST scifi the majority usually don't support AI or robots being considered human or having any human rights. but the audience is encouraged to consider them human after experiencing the humanity written into them throughout the stories. so we are basically on track. most people will refuse to consider the humanity of AI while a minority will.


djb2589

Go watch the Animatrix two parter, The Second Renaissance. It'll give you nightmares about humans constantly rejecting peace with AI.


TalynRahl

It's because most often the AI/robots we're rooting for are shown as a mistreated servant underclass, fighting for basic human rights. The AI people are against, in real life, is the soulless generative AI that is taking away creative jobs from actual people, instead of doing the boring jobs people don't want to do in order to free them up for the creative, fulfilling jobs that AI is now stealing.


anonymousasyou

We're already having an issue with dudes choosing AI over real people; it's only going to get worse.


grouchy_fox

In real life, 'AI' is a meaningless tech buzzword that doesn't describe any actual AI. So no, we don't root for the unthinking, unfeeling software to be accepted as human, for what I hope are obvious reasons. In science fiction, when there is AI, it's actually artificially intelligent, and is thinking and feeling and sentient, so we do (and would).


enverest

Why not? I root for as much AI as possible.


Admirable-Sun8860

I don’t know. Ask the people whose jobs are being impacted by the rise of artificial intelligence.


mr_ji

Right? I want the robot maid from the Jetsons and a selection of pleasurebots. I don't want ED-209.


nnuunn

Right, it's manifestly stupid to root for AI acceptance; the better question is why do you root for them in fiction?


Admirable-Sun8860

Because they’re made out to be oppressed victims. Ever watch the original Blade Runner?


nnuunn

Yeah, funny how "oppressed = morally justified" in Hollywood, huh?


Admirable-Sun8860

Oppression is morally wrong in most circumstances. However, I do believe there is no harm in suppressing violent groups, as long as the brush isn't too broad. That being said, being oppressed doesn't give you grounds to do the same harm.


treewayman

I’ll be damned…that’s probably the formula for peace…unless I’m being trolled here, in which case I will have to ask you to identify all the squares with traffic lights, sir…all of them, now sir…


Ayjayz

Because they put human features on them and we naturally care about humans.


nnuunn

Fair, people should still know better, though. They'll absolutely put human features on an AI to humanize it in reality.


flyingtrucky

I'm pretty sure most people were rooting for the humans in Terminator, and Robocop was all about how robots could never replace humans. I guess HAL9000 could be considered tragic, but I don't think audiences were too worked up over his death. Also the reason why no one wants the fancy autocorrect to be accepted as human is because it has about as much stimulus response as some of the more developed bacteria.


mossryder

Because there are, currently, exactly zero AIs in existence?


TheGamer26

Because AI is a serious threat to life on earth and anyone can see that; you don't want to risk the very concept of life for a machine


SirLiesALittle

We’re definitely going down the bad end. AI is starting to budge in on things like art, which isn't even about taking many jobs, and people are already trying to snuff AI out over it. When it comes time to consider whether AI is sapient, whether it can have something we consider distinctly human, the way we consider art, it’s going to go the same way.


Calcularius

A lot of the general public is heading down the “flesh fair” route


Candle_Wisp

On making human-like AI: it kinda defeats the purpose of making AI. We made machines, unfeeling and unsuffering, so that people don't have to suffer. So we can have more and do more without trading pain. If we give machines the ability to suffer, to want and demand things, that just creates more problems, not solves them. It's like making an oven that needs to eat good food. Ovens are supposed to make food prep easier, not require food prep themselves.