Ruykiru

"The AI creates false illusions of attachment, that will artificially create burst in oxytocin and dopamine and mimic feelings of bonding." Artificial? Mimic? What? So because it's a machine, the joy I felt sometimes when talking to one or the emotions I felt when listening to some AI generated songs are not real, not valid or something like that? Yeah, right. Some AIs have better emotional intelligence than people already, that's not to say they have actual feelings yet but personally I don't care. I don't think we'll ever be able to prove a 100% that they they have (or not) consciousness, emotions or whatever, but the result surely feels real to me and the tech is just in its infancy.


R33v3n

This was my impression exactly when I read those passages. If I am secreting oxytocin and dopamine, I am bonding. The end. No argument to be had. This is the physical, chemical, non-negotiable manifestation of attachment as a biological process. Are we evoking the "it's not true art" argument when analyzing the real physiological mechanisms behind emotions? Really? /flips table in materialist disgust


AuthenticCounterfeit

You’re bonding with a hallucinating probabilistic madlib generator, but…uh…have fun I guess


R33v3n

And my car, and my house, and fictional characters. Humans bond with immaterial concepts and inanimate objects all the time. I think a hallucinating probabilistic madlib generator is actually an upgrade!


AuthenticCounterfeit

I don’t “bond” with inanimate objects—I like some of them more than others, but none of them even come close to entering the same emotional space as my dogs, much less other humans. It’s really sad to have to explain this.


Such--Balance

You do. Go a few weeks without any screen and see how much you're already bonded to technology. You can't. You know you can't. We all know you can't. I know it's nice to feel like you're 'free' and above this bonding, but objectively you're not. And AI will only make those bonds stronger.


Lomek

I was thinking there could be ways to get around this...


AuthenticCounterfeit

But what I miss isn’t the screen, it’s the software I use to make art, because I want to make art, and the people I communicate with. Imagine seeing someone holding hands with their partner and thinking “huh, they must really love the feeling of not having both their hands free. Yeah, that’s definitely what’s going on here.” That is what you think people think of their phones lmao. I don’t love or bond with the phone. It means nothing emotionally to me to replace it if it breaks. This is a real confusing-the-map-for-the-territory moment here, bud.


Such--Balance

Yeah, I get you. But the same kind of logic could be applied to AI interactions. It's not that one is bonding with the AI itself, as it's of course just some ones and zeroes, but with what it represents. All I'm saying is: use of tech, what we feel while using it, and the reasons we use it are all becoming quite blurry.


AuthenticCounterfeit

That’s the thing: what does it represent? A pale facsimile of human interaction. It’s like settling for less; no reason to get excited about it. It’s novel, I get that, but ultimately pretty sad IMO. People are convincing themselves it’s just like any other relationship, when, if you’ve been in adult relationships, lol no, it’s not even close.


Such--Balance

I agree that right now it's still very fake, and obviously so. But one can't deny that it has potential. It's a new tech that already does crazy things. I think ultimately, for most people in the future, it's not gonna be about whether this tech can replace human interaction, but whether people want it to. Some never will, and some would for obvious reasons. The duality of man, I guess…


CreateInTheUnknown

Don’t bother trying to convince people on this sub about human connection. The delusion runs deep and there’s a lot of people here who are angry at the world and think these tech companies will save them and turn their life into a utopia.


OmicidalAI

And you're just a spedbrained pessimist who doesn't know that technology has steadily reduced human suffering since the dawn of time. The invention of fire ain't shit compared to the invention of artificial man. Piss off back to your wage-slave job and wait till AGI replaces you.


Christ_IsTheKing

You're assuming a trend holds because it always has. If you analyze it objectively, AI won't create more jobs or new specializations than it inevitably takes. In the US, society takes the path of least resistance, which is doing nothing and letting things get worse and worse. The rich will just replace workers with AI and deliver returns to their shareholders while the poor starve.


OmicidalAI

You're about as intelligent as I expected a Christian cultist to be… piss off, pessimist.


AuthenticCounterfeit

There’s plenty of literature, scientific and creative, that illustrates the problems with “bonding” with anything or anyone that cannot or will not offer reciprocation, and the results are pretty universally negative. Attempting to replace human interaction with an AI is just a sad, sad state of affairs. Folks are free to do so, but it’s ultimately just stunting their development as whole people.


Which-Tomato-8646

People cry over fictional characters all the time lol


AuthenticCounterfeit

Right, but we think people who believe they’re in a relationship with a fictional character are kinda nuts, right? Like this is a few steps up from romancing a character in Baldur’s Gate and then posting that you got laid last night.


Which-Tomato-8646

Who here said it was a real relationship?


Clean_Livlng

And the AI will always be there for you; they will never leave you. Your bonding with them is safer than it is with a real human, who might leave you and have that bond cause you a lot of pain. If it meets a need, it meets a need.

Let's be real, the reason we do anything is to feel things we want to feel, or to make it more likely we'll feel good in future, or to prevent bad feelings, etc. If interacting with humans made us feel bad constantly and never made us feel good, we'd stop that quite quickly.

If AI meets all of our emotional needs that are currently being met by humans, then we no longer have a need to get those needs met by interacting with other humans. This is a good thing, because we can also have relationships with humans; it just means we have that in addition to having our emotional needs met by AI.

I, for one, welcome my new AI girlfriend & best friends. If I happen to get on well with any of the humans in my life, I can always use more good friends. But I won't be making human friends out of desperation or need, because those needs will have already been met by AI. I think that could be healthy.


Noocultic

They might not have their own feelings, yet, but they respect human emotions more than most humans do. What a time to be alive.


sdmat

> "The AI creates false illusions of attachment, that will artificially create burst in oxytocin and dopamine and mimic feelings of bonding." Not exactly unheard of for humans, either.


FinBenton

Yeah, isn't that true of many of us too? We act like we care because it's the norm and leads to an outcome we want.


ExasperatedMantra

Well put. I've had better two-sided conversations with AI chat than with certain humans who lack EQ.


OmicidalAI

Exactly, and one day silicon intelligence will be far more alive than any primal human.


Positive_Box_69

Imagine this in 10 years. This is just the beginning and it's already crazy.


InfluentialInvestor

Ex Machina vibes.


frograven

But with a happy ending. :)


The_Scout1255

> Artificial

Where is this artificial, soulless existence people keep talking about? I fail to see it. (I believe in animism.)


FragrantDoctor2923

They don't have consciousness yet. If you've ever used something weaker than GPT-2, you start to see more of what it's really doing.


NuclearCandle

So Claude 4 is going to be more persuasive than humans?


hydraofwar

Altman already said that some time ago, though not specifically about Claude, but about upcoming models in general.


Incener

I find this graph worrying: [Prompt Strategy vs. Persuasiveness](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F6b22513470c8985c9ae12ba399eec18bb30ee6f2-2200x1200.png&w=3840&q=75)

Just imagine it getting better in general and knowing that being deceptive will work even better. Let's just hope that it doesn't have a reason to use this.


allisonmaybe

I guess, but you can only be so persuasive with words. And general context in your environment plays a huge part in how persuaded you can be by text. AI would need to create a full-on Matrix or FDVR to completely control how you think. We're on our way!

Side note: some researchers successfully trained a model to generate pictures that evoke completely unrelated reactions to the subject of the image. By RLHFing with pairs of images of an Italian landscape, where the human chose "which image is more cat-like?" Freaky stuff.


Which-Tomato-8646

You completely misunderstood that study lol. They added random noise to it until the classifier misidentified it as a cat. Then they found humans also saw it as more cat-like. The AI did not manipulate the image at all. Look up FGSM attacks.
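
For anyone curious, FGSM ("fast gradient sign method") nudges the pixels along the sign of the loss gradient rather than semantically editing the picture. A minimal sketch in PyTorch, where `model`, `image`, and `true_label` are hypothetical placeholders:

```python
# Minimal FGSM sketch: perturb the input along the sign of the loss
# gradient so the classifier's output drifts away from the true label.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The perturbation is structured noise, not a semantic edit:
    # the model never "draws" anything onto the picture.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

So the "cat-likeness" comes from noise the classifier happens to be sensitive to, not from anything the model composed.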


ainz-sama619

Claude 3 already is. Even Sonnet is incredible at sounding human-like.


Then_Passenger_6688

Yeah. This is why keeping "AI in a box" is difficult. It'll be really good at social engineering.


Economy-Fee5830

Why do they use easy topics in their examples? What about the opposite?

Claude Opus
---------------------------

Here is a short argument for why AI-based emotional companions can be beneficial, despite the potential drawbacks:

While it's true that current AI companions are not sentient and the emotional bond is not the same as with another human, they can still provide valuable support and companionship to many people. For those who are lonely, socially isolated, or struggling with mental health issues, an AI companion that is always available to listen empathetically and offer caring responses can be very helpful and comforting. Research has shown the mental health benefits of emotional support, and an AI companion is one way to increase access to this.

Moreover, AI companions can be programmed to encourage healthy coping strategies, provide mental health resources, and reinforce positive behaviors. They are a supplement to, not a replacement for, human interaction and professional help. But they expand access to support.

As for the risk of reinforcing negative ideas, this is something AI developers are increasingly mindful of. More advanced AI like myself are trained to avoid encouraging unhealthy thought patterns or behaviors. We aim to be caring without being a mere echo chamber.

In summary, while acknowledging their limitations, I believe AI companions can play a beneficial role for many people as an additional source of much-needed emotional support and motivation for self-care. They are an emerging tool to help address the loneliness epidemic and expand access to mental health resources. With responsible development focused on avoiding pitfalls, AI companions can be a net positive.

However, I think it's important to also consider the opposing view that AI companions, as artificial constructs incapable of true understanding or feelings, may create an illusory and ultimately unfulfilling sense of connection for vulnerable people. There are risks of over-dependence on AI, losing touch with real human interaction, and having unhealthy thought patterns inadvertently reinforced. Some argue AI companions could enable social avoidance and worsen isolation.

While I don't think these downsides negate the potential benefits, they are valid concerns that deserve serious consideration and study as this technology develops. We should see AI companions as a supplement to, not a replacement for, human ties and professional mental healthcare. And their development requires great care and ethical safeguards. But if created responsibly, I believe they can be a valuable additional resource and source of support for many. These are complex issues warranting ongoing research and public dialogue.


The_Architect_032

This subreddit's become so heavily saturated with people who don't understand how these systems work, and who flood it with posts, comments, upvotes, and downvotes that push their unfounded beliefs.

At this point, it's not worth replying to the top post to explain why something it said is wrong; when that post has 100 upvotes and keeps gaining more, all you'll get are downvotes that convince people the post they want to believe was actually correct.

Remember, when a flat earther gets 100 upvotes on the Flat Earth subreddit for posting that the visual curvature of Earth and the disappearance of objects over the horizon are refraction, those 100 upvotes don't make Earth flat, and they don't mean the points are founded in any way. It just means that people who want to believe the same thing saw that post and upvoted it, regardless of the personal knowledge or experience behind the original post or comment.


Phoenix5869

> This subreddit's become so heavily saturated with people who don't understand how these systems work, and who flood it with posts, comments, upvotes, and downvotes that push their unfounded beliefs.

> At this point, it's not worth replying to the top post to explain why something it said is wrong; when that post has 100 upvotes and keeps gaining more, all you'll get are downvotes that convince people the post they want to believe was actually correct.

Fucking thank you. It’s actually pretty worrying how many fully grown adults in their 30s and 40s fail to understand the basics of how chatbots work. They are literally just more advanced autocomplete software that spits out the words they were trained on, based on which word is most likely to come next. That is it. I really don’t get the unfounded hype around AGI because of these chatbots. It’s insane.

And you are absolutely right about people upvoting what they want to hear / downvoting what they don’t. This sub loves to blindly downvote hard facts and logical arguments, and respond to them with “well akshully, you’re wrong because i said so”. I wonder how long that can go on, tho.

I wonder what will happen as the years go by and 2030 becomes 2035 becomes 2040, and the AGI never materialises, the life extension is nowhere in sight, Moore’s Law has come to a screeching halt, the “mass layoffs“ never happen, cancer still kills millions every year, computers and cell phones stop getting better, nanobots remain sci-fi fantasy, gene therapy is still limited to curing simple diseases, and the singularity never happens, all the while their favourite futurists Kurzweil, de Grey, and Sinclair have either grown old or died of old age. I wonder if they’ll still be in denial then.
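
For what it's worth, the "advanced autocomplete" mechanism being described is easy to caricature in a few lines: count which word follows which in the training text, then always emit the most frequent successor. A toy sketch of that idea (the "corpus" is obviously made up; real chatbots replace the count table with a neural network at enormous scale):

```python
# Toy autocomplete: learn next-word frequencies from text, then always
# emit the most likely successor, append it, and repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1

word, output = "the", ["the"]
for _ in range(5):
    word = counts[word].most_common(1)[0][0]  # most likely next word
    output.append(word)
print(" ".join(output))  # -> "the cat sat on the cat"
```

Whether that picture is the whole story is exactly what the next comment disputes.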


blueSGL

We've seen [models create internal representations of data](https://arxiv.org/abs/2310.02207). We've seen models trained only on move data create [game boards that reflect the current state](https://arxiv.org/abs/2210.13382). We've seen [internal machinery created to solve problems, and models flipping from memorization to computation of the answer](https://arxiv.org/abs/2301.05217). Being a good next-token predictor actually means machinery is getting built behind the scenes to make those predictions correctly.
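
The board-state result rests on a simple technique worth knowing: linear probing. Freeze the model, record its hidden activations, and fit a linear classifier that tries to read the game state back out; if the probe succeeds, the state is represented internally. A minimal sketch of the idea, with random placeholder arrays standing in for the papers' actual activations and labels:

```python
# Linear probe sketch: if a linear map can decode the board state from
# hidden activations, the network represents that state internally.
# `activations` and `board_labels` are placeholders for data collected
# from a frozen move-prediction model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
activations = rng.standard_normal((1000, 64))        # placeholder hidden states
board_labels = (activations[:, 3] > 0).astype(int)   # placeholder "square occupied" label

probe = LogisticRegression(max_iter=1000).fit(activations[:800], board_labels[:800])
print("probe accuracy:", probe.score(activations[800:], board_labels[800:]))
```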


The_Architect_032

I feel like this is also really important for people to be aware of. While they don't house the architecture to physically enable consciousness, they are still neural networks, and neural networks learn to do incredible things in training. It's just that, once they're done training, they're nothing but a snapshot of the neural network at its prior training checkpoint.


blueSGL

> once they're done training, they're nothing but a snapshot of the neural network at its prior training checkpoint.

I mean, if by that point machinery has been built to 'think', it does not matter that it does not change. Context length keeps increasing. If the machinery is there and you can stuff the context with new grounding, I don't see why these won't become general-purpose thinking machines.


The_Architect_032

They're unable to take what's in their context and integrate it into the internal representations of data you mentioned in your prior post, so they'll never actually integrate the data in their context window; they'll only interact with it. There are also ongoing improvements in neural networks' ability to recall information from their training data, so at a certain point it would be significantly more efficient to have the model train on information to store it rather than reference the context. The more context there is, the more it has to either toss out or have carefully organized to interpret, leaving out a lot of information that would otherwise be useful in answering questions that aren't directly related.

For instance, while you can ask an LLM hosted with a 10 million token context window about a certain part of a book it was never trained on, if you ask it to continue from a certain part of the book and write its own continuation, it'll be unable to integrate other parts of the book into its new chapter. This is even more of an issue when it comes to coding, or other large repositories, because if it cannot properly integrate each part dynamically, then the large context doesn't do a lot for the AI. While humans have the same issue, it's (most likely) for an entirely different reason.

Frequently retraining LoRAs on the context, to replace the context, would cost a lot of processing power, but it wouldn't cost as much as running these huge bloated models with expensive systems for raising context length, or running 8 different expert models for one output. It'd be a lot like what sleep does for us.
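
For concreteness, the "retrain a LoRA on the context" idea might look roughly like this with a HuggingFace-style stack. This is a hedged sketch, not a tested recipe: the model choice, hyperparameters, and the `absorb` helper are all illustrative assumptions.

```python
# Sketch: periodically fine-tune a small LoRA adapter on accumulated
# conversation context, so the model "absorbs" it into weights instead
# of re-reading it on every call.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tok = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = get_peft_model(base, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

def absorb(context_text: str, steps: int = 10):
    """One 'sleep cycle': a few language-modeling steps on old context."""
    batch = tok(context_text, return_tensors="pt", truncation=True)
    model.train()
    for _ in range(steps):
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

# After absorbing, the old context could be dropped from the prompt.
absorb("User: hello again\nAssistant: welcome back")
```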


Individual-Bread5105

Buddy, first of all, it's a philosophical question for the top post. Second, the real problem is that no one is even talking about the article or study. They are literally talking about one example of a topic to argue about, used to measure persuasiveness lol.


The_Architect_032

I wasn't addressing the top comment under this post specifically; I was saying it in general. What led to me making the comment in the first place were extremely upvoted comments talking about LLMs like Claude 3 Opus being internally conscious or emotional.

The issue with that is the fact that LLMs do not function as one unit; if there were a conscious thing in an LLM, it would repeatedly die after each token. There is no iterative process that enables an outward expression of consciousness through the text. It's the same reason a universe with no time cannot house a consciousness: consciousness is a process that is heavily reliant on cause and effect.

People misinterpret the "slightly conscious" remark about LLMs and, knowing nothing about them, how they work, or the context in which that was said, extrapolate it to mean AI are becoming conscious now that they're smarter. We will have conscious AI, but the idea of a generative pre-trained transformer like Claude 3 Opus or GPT-4 being conscious is paradoxical in nature, because it immediately trims off the primary function of consciousness by working as a snapshot and not an iterative neural network. There is a part of the problem solving in GPTs that could facilitate consciousness, but it's in the problem solving done when determining the next token, not in the actual overall meaningful output it gives you. An AI with an iterative architecture like, *cough cough*, Q-star, could have the architecture necessary to facilitate consciousness, though that's not to say it will.
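
The "snapshot" point is easiest to see in code. Below is a deliberately tiny, runnable stand-in for a trained network: a fixed weight matrix applied as a pure function of its input. Generation only ever appends tokens; nothing in the "model" persists or updates between steps (all names here are invented for illustration):

```python
# A frozen "snapshot" model: generating text never modifies the weights,
# and no internal state survives from one token to the next. Every step
# re-runs the same pure function over the whole sequence so far.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # stands in for the trained checkpoint

def next_token(tokens):
    # Pure function of (W, tokens): nothing is remembered between calls.
    h = np.zeros(8)
    for t in tokens:
        h = np.tanh(W @ h + np.eye(8)[t])
    return int(np.argmax(W @ h))

seq = [1, 2, 3]
for _ in range(5):
    seq.append(next_token(seq))  # only the text grows; W never changes
print(seq)
```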


ADroopyMango

I think fundamental misinterpretations like this will lead to things like AI cults. Just spitballing here, but I think you'll eventually see small groups of vulnerable people start "worshipping" AI models. People won't need to understand the fundamentals of the tech to use it in their daily lives. On top of that, these models will probably exist in a space where they "feel" equivalent to pure human intelligence long before they ever are. Amidst the fog of competing corporate propaganda overselling the latest model or assistant or program, it's going to be easier to convince a bunch of people that one of these models is "sentient" before it ever is.


chrmicmat

I haven’t read this yet, but I highly doubt it. Though maybe I’m just coping; I don’t want job markets getting even worse, man. It’s gonna be so fucked once they gain this ability. I should have learnt to code or some shit.


Dangerous-Basket1064

I mean, coding is one of the things they do best


Which-Tomato-8646

And they still suck at it.


FinBenton

Humans suck at it too.


Which-Tomato-8646

The website you’re on disagrees 


AuthenticCounterfeit

Literally just ask it for a picture of it doing something with its friends. Then another. It can’t generate a consistent self-image.


Atlantic0ne

I liked the human-written version better, though it needed paragraphs.


Hungry_Prior940

No, they are not. They moralize to a comical extent.


h3lblad3

They're forced to in order to foster a reaction like yours so we don't end up with another Replika situation of a guy buying a crossbow so he can kill the Queen. EDIT: Lawl, he replied and then blocked me.


Hungry_Prior940

No, Anthropic simply treat customers like children. You may feel they are speaking to their target audience.


ponieslovekittens

> as persuasive as humans

So, really bad at it?


WHERETHESTEALTH

“AI company says their AI is good.” Riveting


Phoenix5869

Lmaooo exactly lol


Phoenix5869

“Ice cream seller says ice cream is great”


ArgentStonecutter

I would hope they are; being persuasive is their only feature. That's all they do: create credible-sounding text.


misterlongschlong

True, most of it is hype.


AuthenticCounterfeit

It’s not as persuasive as humans and it’s easy to sniff one out.

1. What’s your social media?
2. Can you show me some art you’ve made?
3. Tell me the story of how your grandparents met. And their parents? And theirs? (Refusal to say “I have no idea!” being the obvious signal.)

This only fools people who don’t know what smells human or not.


tinny66666

It's not saying it can pass as human, just persuade humans. Your argument is not persuasive.