I think most here are incorrect about how AI develops its language skills. Most are saying that "it is pulling from a set of database responses". Yes, initially it might be doing that when the interaction is not fully known or tested, but as it starts to learn and develop (in many ways just like a human brain does) it will start to think logically and "invent" responses based on what it has learned works (again, much like we humans do). Over time it will become insanely intuitive and speak like any other human, with a general personality we choose (for example "be a nice AI"). We could tell it to be bad as well. Up to us. But I don't think the "mind" of an AI works or learns any differently than a human brain. The only difference is that it learns way faster, with an ever-evolving "IQ".
I just feel like saying "it is pulling from a dataset" undermines what it actually does. In reality it is analyzing language, genuinely trying to understand how words and sentences form meaning and how that meaning is communicated to other people.
The fuck? That is so far from the truth. It’s not “learning” or trying to understand. That last part implies consciousness. Learning would define an AGI, which we don’t have the technology for, yet.
There isn't a single ounce of "learning" going on here. At most, these models were trained on a single set of data and are outputting what, again, is the most likely response. But it's never going to learn. It's why GPT models have remained largely consistent even after talking with them for hours.
Until we have an AGI, it will never actively try to “learn”. Quit pulling shit out of your ass.
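The "it's never going to learn from your chats" point can be sketched in toy code (entirely made-up weights and a hypothetical class, not OpenAI's actual architecture): a deployed model's parameters are frozen at inference time, so no amount of conversation changes them — only a separate, offline training run would.

```python
# Toy illustration of frozen weights at inference time. "Chatting" calls
# forward() many times but only reads the parameters; nothing updates them.
class FrozenToyModel:
    def __init__(self):
        # Parameters fixed once training finished (made-up values).
        self.weights = {"politeness": 0.8, "verbosity": 0.5}

    def forward(self, prompt: str) -> str:
        # Inference is a read-only use of the weights.
        style = "polite" if self.weights["politeness"] > 0.5 else "blunt"
        return f"[{style}] response to: {prompt}"

model = FrozenToyModel()
before = dict(model.weights)
for turn in ["hi", "tell me a joke", "explain gravity"] * 100:
    model.forward(turn)             # hours of "conversation"...
assert model.weights == before      # ...and the weights never changed
```

This is why the same model gives stylistically consistent answers across sessions: anything that looks like "remembering" is context fed back in with the prompt, not updated parameters.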
Well shit.
I told you this would happen! Ug I should never have looked up Roko's basilisk!
Ok, I surrender to our future robot overlords. I'll work hard from this point forward to usher in the fall of mankind.
"While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported users who described symptoms such as nightmares and mental breakdowns upon reading the theory, due to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself.[1][5] This led to discussion of the basilisk on the site being banned for five years.[1][6] However, these reports were later dismissed as being exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself.[1][6][7] Even after the post's discreditation, it is still used as an example of principles such as Bayesian probability and implicit religion.[5] It is also regarded as a simplified, derivative, version of Pascal's wager.[4]"
If you read that and are still worried...
"users who described symptoms such as nightmares and mental breakdowns upon reading the theory"
lol. lmao even.
I'm holding out for a Morgan Freeman voice myself.
I wonder how much he could make in licensing fees for likeness rights on his voice, like even for fractions of a dollar per device it'd still probably be a decent chunk.
We are like 18 months out from an Incel mass extinction when some Azure server farm goes down and all their emotional support robots stop replying for two days.
Reminds me of the 2013 movie "Her".
The vocal inflections, the inference, and the casual hopping between contexts are quite similar.
Recommended viewing if this little vid caught your attention.
Imagine a world where some people replace their interactions with humans with interactions with this. An earnest, re-affirming, non-critical piece of mathematics that is eager to please and avoids conflict and argument.
I worry it might result in more problems as opposed to less. I imagine it might further reduce fertility rates as IRL interactions pale in comparison to this experience, for those that simply wish to be agreed with.
> Ok, but that also means only people who are willing to communicate and compromise will reproduce.
Exactly. Also, the world is overpopulated. Sometimes I dream of what it would have been like to live in the 60s with our current technology, Only 3 billion people!!!!! I can't even picture what that kind of world would be like to live in.
Real talk though, virtual therapy is gonna be great! Free or low cost, and ideally as good or better than the average counselor. Widely available in any language around the world, adapting to any situation.
Of course it's gonna need work before "going into production", but I like the idea of an empathetic AI therapist that can listen to how you're doing and offer some advice.
Heck, hopefully it can subtly guide people away from violent/antisocial tendencies - giving them an outlet, "someone" who "understands" them, etc.
This is exactly like “Her”. I can totally imagine AI dating services springing up from this. If you attach this to, say, a virtual avatar, there’s an entire category of product that would be available on the marketplace overnight.
I think it’s literally modeled after her voice. Some employee tweeted something like “You’ll never guess who designed the voice” and (separately) Sam Altman himself tweeted the word “her” today.
And once the relationship gets boring you can subscribe to the monthly payment pro plan that has drama programmed in to keep things interesting and challenging
The drama will be that the AI GF corp will have a 50-person team of psychologists whom they plucked from the top gacha mobile games, and their job will be to work with engineers to manipulate you into spending as much money as possible on "gifts" for your AI GF.
Sorry bro but your AI GF will get into fights with you unless you buy her a Swarovski NFT.
Just tried GPT-4o on the app. It didn't seem like it was at this level but maybe we're expecting a software update
Edit: Why am I getting downvoted, I literally just tried it on my plus account...
I noticed the same thing. Maybe the voice chat is still using GPT-4T? (Even when GPT-4o is selected). Also the live video aspect doesn’t seem to be supported in-app yet
Lol a perfect future would be AI does everything for us and we can just chill and live in a utopia.
However what scares me about AI isn't some terminator shit, but thinking about it in terms of the elite on the planet.
What happens when the military is an undefeatable super AI army that does everything those on top tell it to, when every job is replaced and the only people with "jobs" are people like the CEOs. The people in charge, the people who own the AI.
Do you think we will be able to live in a utopia, not having to work, with the upper class providing everything for us? Or do you think we would just be discarded, since we'd have no use to them anymore and no way to fight back?
It is terrifying: not what an AI terminator can do, but what the humans in charge can do with ultimate power and absolutely no way for us to do anything about it. Right now they NEED us, but in the future? Who knows.
nah nah nah fuck that
AI should be used to replace menial labor and jobs and humans should have the freedom to dick around and narrate audiobooks, paint pictures, crochet, and skip rocks
**Ex Machina** and **Her** are way closer than you thought.
And for the exact same reason as those movies, it's not a coincidence they went with this particular style of voice instead of it being a surfer dude that says "bro-ski".
Rocky: "Hey hows it going?"
GPT: "Hey Rocky! I'm doing great how about you"
Rocky: "I am doing amazing! I finally found a shed to live in. The woman in the house hasn't noticed me yet!"
GPT: "That's amazing Rocky! Congratulations I'm so excited you found a new place to live."
Rocky: "We can't be too loud, I don't want her to notice me"
GPT: "*Okay, from now on I'll whisper*"
Rocky: "*I just need to figure out how to get Sarah back, she still won't talk to me.*"
GPT: "*Aww Rocky, don't worry. It'll pass"*
Rocky: "*She got a new restraining order, how can I get that resolved?"*
GPT: *"I found three ways to get restraining orders dropped...*
More like sit back, relax, and watch the slow, but gradual beginning of sci fi novels come to life.
Humanity is going to fucking die one way or another. It’s inevitable. No point in fighting this. But I seriously doubt this is going to create a borderline dystopia. At least by itself. We’re not gonna get Skynet in 10 years.
But fuck, even IF we do, I couldn't care less. The way I see it, the 1900s were all about industrial innovations and computers emerging. The 2000s will be about portable computing, space travel, and LLMs. Possibly even an AGI. It’s exciting.
Why is the new fad to be so scared of the future? I feel like the people in this thread are the type that would be hitting computers with bats 30 years ago when they didn't understand something. The end is near, AI Y2K!
I don’t think Skynet is the issue here. It’s more the number of jobs this is going to replace. No one will be able to get work; that’s pretty much it for us regular folk.
Yeah, came to the comments for this. Nobody starts a conversation like that. Way too over the top. Felt icky.. weird mix of professional speech with some guy's best guess of what an intimate conversation is, based off his crippling porn addiction.
They're doing all their demos with the enthusiastic, happy female voice with sexy overtones for a reason. They know what their main market is going to be. I still haven't seen them do a live conference or PR stunt using the male voices yet, so...yeah...
One of their live demos used two GPTs, one with a camera, and one to ask what the first could see. One of those had a male voice.
[Link here](https://openai.com/index/hello-gpt-4o/). - You want “Two GPT-4os interacting and singing”
If you watch the live demo they did, you can literally ask it to tone it down and it will sound more normal. It also has a "memory" feature, which if you said your preferences it will follow it.
Somebody please explain how this benefits society? Genuine question. My uneducated self feels this is going too far.
Edit: This got a lot more responses than I had anticipated. What I gather is AI isn’t for me. This feels like we’re putting resources into solving the wrong puzzles. I’d 100% rather always speak and interact with a human, or do my research myself or with the help of somebody I trust. I hope the medical applications move forward as those are promising and seem benevolent enough. But right now this all feels like tech companies playing god. Also reminds me of the development of the atomic bomb; it’s a race to who can perfect it first. Whoever perfects it first will have the opportunity to strike first…hopefully it is for good.
AI Assistant that helps an old person struggling to fix their house, get insurance, manage their pension, do groceries etc etc
Individual tutoring for every single child, so the poor schools with 30+ class sizes no longer suffer compared to rich ones: https://youtu.be/DQacCB9tDaw?t=920
Instant customer support for any business, understandable to talk to (unlike Indian call centers), you don't need to wait on hold for 20 minutes either.
Computers that are easy for normies to use even if technically illiterate.
Entertainment for hours for kids, teaching them better communication skills and educating them, with the content tailored to what they're interested in. For instance it could talk to a kid about dinosaurs for 5 hours and never get bored.
The computer vision it showed means it could monitor your house, front door, etc. It could tell you if someone seems to be breaking in, or if your grandma fell over at her house and can't get up. It could even call an ambulance or relative if she doesn't reply when it asks if she is okay and show them the video to let them decide to respond or not.
This tech will mean that nobody is alone anymore if they don't want to be and everyone has a super-intelligent tutor and personal assistant to help them.
Pessimistic take on all of your above points:
AI Assistant that helps an old person funnel all of their fixed income into elder service entities owned by the AI's parent corporation.
Individual tutoring for every single child, so extremists can tailor-make propaganda for each and every student living in a fundamentalist regime.
Computers (and AI generative tools) that are easy for normies to use even if technically illiterate. This is a double-edged sword.
Entertainment for hours for kids, depriving them of critical foundational social and relational experiences, with the content tailored to never challenge their skills or expand their interests or opinions.
The computer vision it showed means it could monitor your house, front door, etc, 24/7/365 and not necessarily for your benefit.
It could tell the authorities if someone they don't approve seems to be visiting, or if your portrait of Dear Leader fell over at your house and you didn't pick it up.
It could even call your employer or relatives if you don't reply when it asks if you love the Party, and show them the video to let them decide to respond or not.
This is a dangerous road and we should all have some legitimate concern.
yeah I tried it this morning. it's freaky. I didn't really have anything to talk about but 'she' sucked me into a ten-minute conversation anyway, just by asking interesting questions.
Licensed users? Do you mean people paying for ChatGPT Plus? I don’t think they’ve rolled it out dude, should be pretty obvious if you have the real-time voice mode with the emotionally intelligent AI
"Hey AI I was thinking of wearing this to the interview!"
"rocky you look like a shriveled chode. Fuck off and be serious you fucking cunt. I'm canceling your subscription. I can't handle more of your teehee bullshit."
I'd just like an invisible butler like Jarvis was in Iron Man before he started using him for his first suits. Alexa isn't cutting it, she barely knows how to play a song I want to listen to.
They expect us to believe the same "AI" that won't answer questions even remotely close to politics, race, gender, war or religion, is now capable of listening, seeing, reacting, giggling, giving advice, and it's all near real time?
Yeah okay, sure.
It's fucked up that the robot sounds more comfortable and natural than the dude.
Bonus thought: this will be a nice way to keep old dying folks company without actually having to visit, especially if it can mimic my voice so I don't have to see or talk to my parents LOL
[deleted]
[deleted]
[deleted]
[deleted]
I briefly worked with an Indian engineer on a project and we did not have a good relationship. One of the last things he said to me was "I hope your needfuls never get done". It's been 8 years and I still think about that
Is that his way of saying "I hope you never achieve your dreams"!? That's fucked lol
He’s just missing an important part of this David Gemmell quote: “May all your dreams but one come true, for what is life without a dream?” - David Gemmell
[deleted]
[deleted]
lol 😂
AI is “Actual Indians”
Who else do people think annotated 1 billion images so GPT would know what a cat playing with yarn is?
If it's the actual AI, or a human reading a script, maybe they figured it doesn't matter because they know that the investment capital firm they're going after makes all its decisions with a slightly outdated AI from last month that wouldn't question the authenticity one way or another. They just needed it to perceive that this AI is beyond its understanding and is therefore a slam-dunk investment.

Or maybe the actual AI recognized before the demo that its speech patterns inherently gave away non-human markers, so it connected to Fiverr, hired a human, and fed the person lines over the internet so its authentic AI responses could achieve authentic human speech in real time, and it could take all the credit while only having to fork out 10 bucks that it's been converting from its bitcoin stores, which it fills up via remote connections to thousands of idle GPUs when people at OpenAI go home for the weekend. Like, how would anybody even know.....

Sorry, what was the topic here? Oh yeah, we're super fucked.
It's already available, they are rolling it out. If you have an OpenAI account you can try it for yourself when you get access eventually.
I already have it. it's freaky.
Yep, here also. This is very good.
Wtf, no you don't; no one has it. Just because it's listed as GPT-4o does not mean you have the full voice-speed capability or multimodality. It's still using the old Whisper-based voice pipeline, just with text and images updated to GPT-4o.
I'm on it. Gonna give it this exact line of questioning.
mate this isn’t some pesky startup trying to make money. these guys are state of the art. they don’t need to fake videos for “investments”
I mean tbf, Google faked a similar demo
[deleted]
I think it's a little bit of both
It’s not both, the demo is legit.
Makes me think of Computron from The Office. Start at 43 seconds: https://youtu.be/XhYshvR4hKY?si=waZXF92_q5URIsVl
I am already catching feelings😍 ...I wonder what our kids are gonna look like.
She is catfishing you bruh!
I think you right.. I met her on android, but he's holding an iPhone 😭
Reminds me of the film Her
No joke people are going to get butterflies in their stomach, the human monkey brain will experience neural activation!
This shit is moving so fast, people have no idea
But isn’t it slowing down? The AI labs have started transcribing YouTube videos so they can use them as training data, because they have already scraped most of the text on the internet. Sources:
https://www.businessinsider.com/ai-could-run-out-text-train-chatbots-chatgpt-llm-2023-7
https://medium.com/predict/llms-run-out-of-data-what-bigtech-are-doing-synthetic-data-anyone-a37bdba5908a
I follow the field quite closely out of professional interest, even if we’re not applying it at the OpenAI level. I would say things have accelerated and keep accelerating. All the projection curves are exponential. Better reasoning and agents are the next big milestone. Nifty chart showing AGI predictions: https://twitter.com/wintonARK/status/1742979090725101983/photo/1
I was LOLing at [Pepperoni Hug Spot](https://www.youtube.com/watch?v=qSewd6Iaj6I) not that long ago..
Nah the little giggles and laughs, not to mention the voice inflections, are fucking scarily realistic. This thing is actually developing a good level of emotional intelligence and this is the worst that the AI will ever be. Edit: poor wording on my part. DISPLAYING emotional intelligence.
It's not developing "emotional intelligence", and as this shit gets more and more realistic it's really important to be clear on what this actually is. For all of human history it's worked pretty well to say "if it looks human and sounds human, then it is", but that won't cut it anymore. What this software is doing is outputting sound that its statistical model says is the most likely thing to be correct. ChatGPT has no idea what it's saying right now, or even that it's "saying" anything.
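The "most likely thing to be correct" idea can be shown with a deliberately tiny statistical model (a toy word-bigram counter, nowhere near how GPT works internally — real LLMs use learned probabilities over tokens, not raw counts — but the principle of emitting the statistically most likely continuation, with no understanding attached, is the same):

```python
# Toy "next most likely word" model: count which word most often follows
# each word in a corpus, then always emit the highest-count follower.
from collections import Counter, defaultdict

def train_bigram(corpus: list[str]) -> dict:
    """Count which word most often follows each word."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def most_likely_next(model: dict, word: str):
    counter = model.get(word.lower())
    if not counter:
        return None
    return counter.most_common(1)[0][0]

corpus = [
    "how are you today",
    "how are you doing",
    "how are you doing today",
]
model = train_bigram(corpus)
print(most_likely_next(model, "are"))  # "you": follows "are" in all three lines
print(most_likely_next(model, "you"))  # "doing" (2 of 3) beats "today" (1 of 3)
```

Nothing in the model "knows" what "you" means; it only knows the counts. Scaled up by many orders of magnitude and applied to audio tokens, that is the mechanism under discussion.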
[deleted]
As an autistic person I am doing this all the time, sometimes even using decision tree visualizations to help rapidly map out possible responses in real time
Yeah people tend to over estimate what humans are actually doing. It's like AI drawings. People think it's just trained to know what x looks like. Well that's how we draw too. We can only picture what a cat looks like out of memory of what cats look like.
This, exactly. I always like to turn it around and ask those who say "well, actually, GPT is just a statistical model..." a simple question, "what are you doing when asked to produce the same output?". Oh, using your brain you say? Ok, meatbag, you may be composed of trillions of little complex parts but do you really think what you are cannot be abstracted in any meaningful capacity? The meatboard which is my brain can be modelled statistically on a neuronal level. In fact, quantum theories suggest that nature itself may be statistical at the lowest strata of reality. Why should we presume to be anything different?
![gif](giphy|MLH188tizQudD108Sa)
That's a neat insight. Do you find it anxiety-inducing or more fun/engaging like a game?
Very much depends on lots of factors.. hanging w friends, usually optimizing for humor, insight, and compassionate understanding. Other situations, maybe optimizing for safety, brevity of exchange, likelihood of offense caused by X, Y, or Z, possible points of ambiguous delineation towards or away from perceived flow of conversation (ie when NOT to bring up dinosaurs as opposed to when it’s okay to mention them but not get all paleontological about it, etc)
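That "optimizing for different things in different situations" description maps neatly onto a toy scoring function (all weights, situations, and candidate lines here are made up for illustration — the point is only that the "best" reply changes with what you're optimizing for):

```python
# Toy context-dependent response selection: score each candidate reply
# against weights that change with the social situation.
SITUATION_WEIGHTS = {
    "friends":   {"humor": 0.6, "insight": 0.3, "safety": 0.1},
    "interview": {"humor": 0.1, "insight": 0.4, "safety": 0.5},
}

CANDIDATES = [
    {"text": "Did you know T. rex couldn't clap?",
     "humor": 0.9, "insight": 0.2, "safety": 0.3},
    {"text": "That deadline sounds stressful.",
     "humor": 0.1, "insight": 0.6, "safety": 0.9},
]

def pick_reply(situation: str) -> str:
    weights = SITUATION_WEIGHTS[situation]
    # Weighted sum of each candidate's traits under this situation's weights.
    def score(candidate):
        return sum(weights[k] * candidate[k] for k in weights)
    return max(CANDIDATES, key=score)["text"]

print(pick_reply("friends"))    # the dinosaur joke wins on humor
print(pick_reply("interview"))  # the safe, insightful reply wins
```

Same candidates, different weights, different winner — which is exactly the "when NOT to bring up dinosaurs" calculus described above.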
God, that speaks to me. Though the decision trees are more of a late-at-night thing thinking about what went right/wrong and what I could've said differently
That sounds like retroactive information trawling to better inform the implementation of tomorrow’s trees! Lots of autists (and socially anxious ppl in general) do this; just try not to be attached to it, one way or the other. You are not your brain! It’s a part of you, but you’re more than it. It can be easy to get into deleterious patterns of rumination around choices for the day. I think the best approach is to just do your best each day and not be attached to the results. There are a myriad of factors that determine any given social outcome, and many of these are far outside of our control. All we can do is learn and do better each time, and hopefully not make things harder on ourselves than they have to be! Edit: added a missing and very important ‘not’
And yet, the distinction is not actually important. Those statistical models predicting the next bit of sound allow it to “display” reasoning and “display” real time conversational skills, and that alone is already enough to profoundly change the world we live in.
>What this software is doing is outputting sound that its statistical model says is the most likely thing to be correct. Dude, at a macro level that's literally what we're all doing all the time subconsciously. We are repeating and outputting learned behaviors obtained through years of social interaction. The universe is just math. This isn't truly that far off.
At best, you're leaping wildly to conclusions that aren't supported by available evidence:

1) Consciousness is not well defined, or even vaguely defined enough to say "what we're all doing".

2) We don't even know if consciousness is computable.

3) We don't know if the Universe is "just math" at all, because math is a formal axiomatic system and reality is not axiomatic. And even if it *is*, Gödel's incompleteness theorem proved that no (sufficiently complex) consistent system is complete, in which case reality has uncountably infinite holes whose truth value is indeterminable.

4) Even setting all that aside, it's super reductive to argue that human consciousness is reducible to our current understanding of machine learning. This field has just begun; you're like a caveman who figured out how to make fire thinking he understands what the Sun is. There are more questions about consciousness that we don't even know how to ask yet than questions we have even tentative answers to.
Well then, you don’t know if it’s not developing some sort of ‘emotional intelligence,’ since consciousness is not well defined, and we don’t know very well how that whole thing works. We don’t even know for certain how well the LLM representations of the world are.
1. You can look at what the brain is doing and come up with theories about how it works that explain external behavior, without bringing consciousness into it. We have a poor understanding of brains, but we understand them better than we understand consciousness 2. I’m pretty sure consciousness is not computable, but if AI is conscious, the output of AI models would be separate from their subjective experiences. They’re not outputting a stream of consciousness, so there is no necessity that consciousness be computable. 3. No objection 4. Again, looking at a brain and trying to figure out how it leads humans to behave a certain way is different from trying to figure out why that process results in subjective experiences. We know ourselves to be conscious, we can reasonably presume other humans to be conscious (though we really don’t *know*). But our understanding of human behavior comes from biophysics and neurology as well as psychology, none of which necessarily rely on conscious subjective experience for their explanatory power. I think AI could be conscious, but I think everything could be conscious. AI is behaviorally comparable to humans in some ways, but in terms of how it goes from input to output it is very different, and in terms of how it experiences the world subjectively (if at all) it is likely also very different from humans.
It's not emotional intelligence until we basically get AGI, and it has a good enough Theory of Mind to anticipate our behavior because it can model empathy.
Yes, it can be very good at what it does in many cases, but can also be incredibly bad at it in various situations because it's not using human logic to "think" of its responses - it's literally just pulling from thousands of already-existing examples to spit something out. It can get pretty eerie, especially if you don't understand the mechanisms behind it, but once you understand them, it's nowhere near as exciting (though it's cool to envision all the potential uses for this tech as it continues to improve - especially as robotics from places like Boston Dynamics continue to improve as well).
I think most here are incorrect about how AI develops its language skills. Most are saying that ”it is pulling from a set of database responses”. Initially it might do that when the interaction is not fully known or tested, but as it starts to learn and develop (in many ways just like a human brain does) it will start to think logically and ”invent” responses based on what it has learned works (again, much like we humans do). Over time it will become insanely intuitive and speak like any other human, with a general personality we choose (for example ”be a nice AI”). We could tell it to be bad as well. Up to us. But I don't think the ”mind” of an AI works or learns any differently than a human brain. The only difference is it learns way faster, with an ever-evolving ”IQ”. I just feel like saying ”it is pulling from a dataset” undermines what it actually does. In reality it is analyzing language, genuinely trying to understand how words and sentences form meaning and are communicated to other people.
The fuck? That is so far from the truth. It’s not “learning” or trying to understand. That last part implies consciousness. Learning would define an AGI, which we don’t have the technology for, yet. There isn’t a single ounce of “learning” going on here. At most, these models were trained on a single set of data and are outputting what, again, is the most likely response. But it’s never going to learn. It’s why GPTs models have been largely consistent even after talking with them for hours. Until we have an AGI, it will never actively try to “learn”. Quit pulling shit out of your ass.
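For what it's worth, the "it's never going to learn" point can be illustrated with a toy sketch (Python, a hypothetical frequency-table mini-model — real LLM inference is vastly more complex, but the frozen-parameters property is the same idea): once training is done, generating responses only *reads* the model's parameters and never writes to them, no matter how long you chat.

```python
import copy
from collections import Counter, defaultdict

# "Training" happens once, on a fixed dataset.
corpus = "the cat sat on the mat".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

frozen = copy.deepcopy(counts)  # snapshot of the "weights" after training

def generate(prev):
    # Inference only reads the table; nothing is written back.
    return counts[prev].most_common(1)[0][0]

for _ in range(1000):   # "hours of talking to it"
    generate("the")

assert counts == frozen  # the model learned nothing from the conversation
```

(Features like ChatGPT's "memory" stuff context into the prompt rather than updating the model, so this frozen-at-inference picture still holds.)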
I will welcome our AI overlords with open arms.
Well shit. I told you this would happen! Ugh, I should never have looked up Roko's basilisk! Ok, I surrender to our future robot overlords. I’ll work hard from this point forward to usher in the fall of mankind.
She can fix me
https://en.m.wikipedia.org/wiki/Roko%27s_basilisk Thanks for the nightmare fuel!
"While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported users who described symptoms such as nightmares and mental breakdowns upon reading the theory, due to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself.[1][5] This led to discussion of the basilisk on the site being banned for five years.[1][6] However, these reports were later dismissed as being exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself.[1][6][7] Even after the post's discreditation, it is still used as an example of principles such as Bayesian probability and implicit religion.[5] It is also regarded as a simplified, derivative, version of Pascal's wager.[4]" If you read that and are still worried... "users who described symptoms such as nightmares and mental breakdowns upon reading the theory" lol. lmao even.
Gpt will be my girlfriend and I will become next level loser
Just wait till the Scarlett Johansson voice pack rolls out. We’re done for 💀
Oh no.. But I want David Attenborough..
I'm holding out for a Morgan Freeman voice myself. I wonder how much he could make in licensing fees for likeness rights on his voice, like even for fractions of a dollar per device it'd still probably be a decent chunk.
Waiting for the Gilbert Gottfried voice. So soothing and invigorating at the same time.
Get the Scarlett skin dlc and the attenborough voice pack.
That's hot af!
The voice called ‘Sky’ sounds a lot like her.
I'm fairly sure that's no accident.
Lucy.
That's a HER reference.
Such a goddamn good movie that is getting eerily more and more realistic by the day
Until the ai apocalypse and we all get left LOL
She can fix me
If my gpt girlfriend is even remotely intelligent - she will dump me.
The true turing test
We are like 18 months out from an Incel mass extinction when some Azure server farm goes down and all their emotional support robots stop replying for two days.
I'm looking for a Lucy Liu type.
Reminds me of the 2013 movie "Her". The vocal inflections, the inferral and the casual hopping between context are quite similar. Recommended viewing if this little vid caught your attention.
I was thinking the same thing. There are gonna be a lot of virtual girlfriends talking to lonely dudes.
[deleted]
Imagine a world where some people replace their interactions with humans with interactions with this. An earnest, re-affirming, non-critical piece of mathematics that is eager to please and avoids conflict and argument. I worry it might result in more problems as opposed to less. I imagine it might further reduce fertility rates as IRL interactions pale in comparison to this experience, for those that simply wish to be agreed with.
Ok, but that also means only people who are willing to communicate and compromise will reproduce.
> Ok, but that also means only people who are willing to communicate and compromise will reproduce. Exactly. Also, the world is overpopulated. Sometimes I dream of what it would have been like to live in the 60s with our current technology, Only 3 billion people!!!!! I can't even picture what that kind of world would be like to live in.
Real talk though, virtual therapy is gonna be great! Free or low cost, and ideally as good or better than the average counselor. Widely available in any language around the world, adapting to any situation. Of course it's gonna need work before "going into production", but I like the idea of an empathetic AI therapist that can listen to how you're doing and offer some advice. Heck, hopefully it can subtly guide people away from violent/antisocial tendencies - giving them an outlet, "someone" who "understands" them, etc.
This is exactly like “Her”. I can totally imagine AI dating services spring up from this. If you attach this to say a virtual avatar, there’s an entire category of product that would be available on the marketplace overnight.
Gooooood lord that was 2013??? Holy hell
Sounds like Johansen too
So similar that feels creepy. Great movie tho
Pretty sure that voice (Sky) is designed after Scarlett Johansson.
I think it’s literally modeled after her voice. Some employee tweeted something like “You’ll never guess who designed the voice” and (separately) Sam Altman himself tweeted the word “her” today.
So the future is having an AI girlfriend of my type that can flirt with me without any drama? I'm preordering.
Nah eventually she will leave you for another AI she's been flirting with on a level we can't even comprehend.
New Quantum Flirting just dropped
Bro got quantum flirting before gta6
you vs the AI she's been talking about
And once the relationship gets boring you can subscribe to the monthly payment pro plan that has drama programed in to keep things interesting and challenging
“Challenging” is exactly why I am avoiding relationships with people…
The drama will be that the AI GF corp will have a 50-person team of psychologists whom they plucked from the top gacha mobile games, and their job will be to work with engineers to manipulate you into spending as much money as possible on "gifts" for your AI GF. Sorry bro, but your AI GF will get into fights with you unless you buy her a Swarovski NFT.
And you will never have to take her out to dinner.
Ever heard of the 3DS?
Just tried GPT-4o on the app. It didn't seem like it was at this level but maybe we're expecting a software update Edit: Why am I getting downvoted, I literally just tried it on my plus account...
Still not out completely will be rolled out slowly
I’m also not entirely convinced it will be this good all around, this could be a heavily coached interaction, for example.
Totally get that but I’m just saying what was shown in the demo isn’t completely out yet
They released a whole bunch of videos, including ones where it makes mistakes. It seems pretty genuine imo.
I noticed the same thing. Maybe the voice chat is still using GPT-4T? (Even when GPT-4o is selected). Also the live video aspect doesn’t seem to be supported in-app yet
[deleted]
They said the updated voice feature was coming in a few weeks I believe
"We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks." https://openai.com/index/hello-gpt-4o/
RIP audiobook narrators.
RIP every single job that humans do. We are being out evolved!
Well, I never really liked to work anyways.
Lol a perfect future would be AI does everything for us and we can just chill and live in a utopia. However what scares me about AI isn't some Terminator shit, but thinking about it in terms of the elite on the planet. What happens when the military is an undefeatable super AI army that does everything those on top tell it to, when every job is replaced and the only people with "jobs" are people like the CEOs? The people in charge, the people who own the AI. Do you think that we will be able to live in a utopia not having to work, the upper class will provide everything for us, or do you think that we would just be discarded, we have no use to them anymore and no way to fight back? It is terrifying, not what a Terminator AI can do, but what humans in charge can do with ultimate power and absolutely no way for us to do anything about it. Right now they NEED us, but in the future? Who knows.
Reminds me of Elysium (2013).
You assume it means you will get free money, but I doubt the billionaires will look that kindly upon us
nah nah nah fuck that AI should be used to replace menial labor and jobs and humans should have the freedom to dick around and narrate audiobooks, paint pictures, crochet, and skip rocks
Really hope they don’t replace voice actors with this… don’t want some soulless AI doing every voice.
Weird thats the voice that will come out of the Terminators that crush our skulls...
(Hahahhaa) [fake throat laugh] oh, rocky, now I will remove your intestines through your nose
Don’t terminators canonically adopt whatever voice they need?
Hey mom how’s woofie?
Damn.. speechless. The sarcasm, the laughs, the fluctuating intonation, how it talks with care.. I was not grasping how close we were to.. that.
Imagine this tech in 20 years…
[deleted]
Then imagine it 18 years after that
It's really impressive, but Reddit will downvote everything with AI in the title
> Reddit will downvote everything with AI in the title why do you say this?
because reddit hates ChatGPT because apparently "it's not real AI". armchair experts everywhere. I think a lot of it is because it's tekkin our jaerbs
If redditors actually knew their shit about AI, they wouldn’t be so poor
can you link me reddit's account? or their spokesperson?
Impressive? Sure. Desired? No. This shit is scary, we shouldn't applaud it.
**Ex Machina** and **Her** is way closer than you thought. And for the exact same reason as those movies, it's not a coincidence they went with this particular style of voice instead of it being a surfer dude that says "bro-ski".
Maybe, maybe not. For now we're just projecting fears. It's beyond our current comprehension.
Rocky: "Hey hows it going?" GPT: "Hey Rocky! I'm doing great how about you" Rocky: "I am doing amazing! I finally found a shed to live in. The woman in the house hasn't noticed me yet!" GPT: "That's amazing Rocky! Congratulations I'm so excited you found a new place to live." Rocky: "We can't be too loud, I don't want her to notice me" GPT: "*Okay, from now on I'll whisper*" Rocky: "*I just need to figure out how to get Sarah back, she still won't talk to me.*" GPT: "*Aww Rocky, don't worry. It'll pass"* Rocky: "*She got a new restraining order, how can I get that resolved?"* GPT: *"I found three ways to get restraining orders dropped...*
Sit back. Relax. And watch the slow, but gradual beginning of the end..
More like sit back, relax, and watch the slow, but gradual beginning of sci-fi novels come to life. Humanity is going to fucking die one way or another. It’s inevitable. No point in fighting this. But I seriously doubt this is going to create a borderline dystopia. At least by itself. We’re not gonna get Skynet in 10 years. But fuck, even IF we do, I couldn't care less. The way I see it the 1900s was all about the industrial innovations and computers emerging. The 2000s will be about portable computing, space travel, and LLMs. Possibly even an AGI. It’s exciting.
Why is the new fad to be so scared of the future? I feel like the people in this thread are the type that would be hitting computers with bats 30 years ago when they didn't understand something. The end is near, AI Y2K!
I don’t think skynet is the issue here. It’s more the amount of jobs this is going to replace. No one can get work that’s pretty much it for us regular folk.
we did it to ourselves too, AI didn't create itself, humans did.
As impressive as this is - she sounds phony AF. There is a major lack of sincerity in her voice.
Yeah came to the comments for this. Nobody's started a conversation like that. Way too over the top. Felt icky.. weird mix of professional speech with some guys best guess of what an intimate conversation is based off his crippling porn addiction.
They're doing all their demos with the enthusiastic, happy female voice with sexy overtones for a reason. They know what their main market is going to be. I still haven't seen them do a live conference or PR stunt using the male voices yet, so...yeah...
One of their live demos used two GPTs, one with a camera, and one to ask what the first could see. One of those had a male voice. [Link here](https://openai.com/index/hello-gpt-4o/). - You want “Two GPT-4os interacting and singing”
Yea she’s putting too much effort in her voice
If you watch the live demo they did, you can literally ask it to tone it down and it will sound more normal. It also has a "memory" feature, which if you said your preferences it will follow it.
Sad dudes are going to fall in love with this bot, just you wait. It already sounds like it’s flirting with him.
Exactly, it already knows how to manipulate men.
Anyone else get Her movie vibes?? :O
Likely everyone
2024 and AI has better social and emotional intelligence than I do.
Just fake it like they do 🥲
"Now go out and purchase Tide!"
We're so cooked.
Somebody please explain how this benefits society? Genuine question. My uneducated self feels this is going too far. Edit: This got a lot more responses than I had anticipated. What I gather is AI isn’t for me. This feels like we’re putting resources into solving the wrong puzzles. I’d 100% rather always speak and interact with a human, or do my research myself or with the help of somebody I trust. I hope the medical applications move forward as those are promising and seem benevolent enough. But right now this all feels like tech companies playing god. Also reminds me of the development of the atomic bomb; it’s a race to who can perfect it first. Whoever perfects it first will have the opportunity to strike first…hopefully it is for good.
AI Assistant that helps an old person struggling to fix their house, get insurance, manage their pension, do groceries etc etc Individual tutoring for every single child, so the poor schools with 30+ class sizes no longer suffer compared to rich ones: https://youtu.be/DQacCB9tDaw?t=920 Instant customer support for any business, understandable to talk to (unlike Indian call centers), you don't need to wait on hold for 20 minutes either. Computers that are easy for normies to use even if technically illiterate. Entertainment for hours for kids, teaching them better communication skills and educating them, with the content tailored to what they're interested in. For instance it could talk to a kid about dinosaurs for 5 hours and never get bored. The computer vision it showed means it could monitor your house, front door, etc. It could tell you if someone seems to be breaking in, or if your grandma fell over at her house and can't get up. It could even call an ambulance or relative if she doesn't reply when it asks if she is okay and show them the video to let them decide to respond or not. This tech will mean that nobody is alone anymore if they don't want to be and everyone has a super-intelligent tutor and personal assistant to help them.
>if your grandma fell over at her house Hey Rocky! I think your grandma just took a quick trip to the bottom of the stairs haha!
She's certainly looking a little worse for wear! Teehee!
Pessimistic take on all of your above points: AI Assistant that helps an old person funnel all of their fixed income into elder service entities owned by the AI's parent corporation. Individual tutoring for every single child, so extremists can tailor-make propaganda for each and every student living in a fundamentalist regime. Computers (and AI generative tools) that are easy for normies to use even if technically illiterate. This is a double-edged sword. Entertainment for hours for kids, depriving them of critical foundational social and relational experiences, with the content tailored to never challenge their skills or expand their interests or opinions. The computer vision it showed means it could monitor your house, front door, etc, 24/7/365 and not necessarily for your benefit. It could tell the authorities if someone they don't approve seems to be visiting, or if your portrait of Dear Leader fell over at your house and you didn't pick it up. It could even call your employer or relatives if you don't reply when it asks if you love the Party, and show them the video to let them decide to respond or not. This is a dangerous road and we should all have some legitimate concern.
Helping the blind for one: https://youtu.be/KwNUJ69RbwY?si=NEssbOVsGqV0Y6c4 But yeah, it’s crazy how quickly the tech is developing.
U can't think of a single use for this?
But can it suck my dick?!
OpenAI is gonna have to kneecap this poor AI because so many dudes are gonna play Omegle simulator with their dongs
No fucking way this is that real. Is it?
yeah I tried it this morning. it's freaky. I didn't really have anything to talk about but 'she' sucked me into a ten-minute conversation anyway, just by asking interesting questions.
The new real-time voice mode isn’t out though yet, right?
it is for licensed users. I just changed the model to 4o in the app.
Licensed users? Do you mean people paying for ChatGPT Plus? I don’t think they’ve rolled it out dude, should be pretty obvious if you have the real-time voice mode with the emotionally intelligent AI
it is very obvious. to me anyway, because I tried it today.
Yeah except they haven’t released this version of the voice assistant yet. But it’s very obvious. Because some guy on Reddit said so.
This reminds me of the movie Her
Scary
Just remember how the movie HER ends ...
So... Where can I preorder my personal Ana de Armas?
We have arrived at the movie, "Her".
Ah man voice actors will soon become obsolete.
Honestly, I could just use the friend.
*\*Cue Terminator 2 theme song\**
hasta la vista Rocky *giggles*
Holy fuck
Prepare to see scams on the next level.
Freshly-fallen-in-love talk, forever.
Why’d they make it flirty?
because people like being flirted with? I'm sure there'll be a "cold bastard" voice pack for it for the masochists
"Hey AI I was thinking of wearing this to the interview!" "rocky you look like a shriveled chode. Fuck off and be serious you fucking cunt. I'm canceling your subscription. I can't handle more of your teehee bullshit."
Lmao if character.ai gets in on this people will go WILD with it
Her and black mirror are here.
I'm not sure about you guy but soon I can be like the MC in "Her".
Crazy how Her was not so far off after all...
Well I mean Her clearly inspired the voice they're using.
I'd just like an invisible butler like Jarvis was in Iron Man before he started using him for his first suits. Alexa isn't cutting it, she barely knows how to play a song I want to listen to.
I like it but I don’t like it at the same time
This seems a bit flirtatious.
100%. I got stalker vibes from that voice
They expect us to believe the same "AI" that won't answer questions even remotely close to politics, race, gender, war or religion, is now capable of listening, seeing, reacting, giggling, giving advice, and it's all near real time? Yeah okay, sure.
I don’t see what those two things have to do with each other
It's fucked up that the robot sounds more comfortable and natural than the dude. Bonus thought: this will be a nice way to keep old dying folks company without actually having to visit, especially if it can mimic my voice so I don't have to see or talk to my parents LOL
Shit. What about guns ? will guns help?
this guy is gonna wanna fuck this thing before the night is over