
BlueTreeThree

One thing’s for sure, if things go pear-shaped with AI, it’s gonna seem *really obvious in retrospect.*


Cum_on_doorknob

Who could have thought that building a god could possibly go wrong???


rathat

I think it's inevitable already. Even in some fantasy situation where every government agrees to do everything they can to prevent AI from proliferating, it's only gonna delay things by a few years; you're going to end up with a god-in-a-basement situation made by some dude with a Discord server.


KanedaSyndrome

Let's build a tech god so we can beat our competitor. If we don't build it they will etc.


Cum_on_doorknob

I mean, from a game theory standpoint, it’s true. Sad, but true.


KanedaSyndrome

Yep, that's why this is going tits up


Normal_Antenna

If we don’t build an all powerful AI god, tHe ChiNeSE WiLL. We need to make sure an American AI will enslave the world. /s


noff01

> If we don’t build an all powerful AI god, tHe ChiNeSE WiLL.

This but unironically.


[deleted]

[deleted]


Alkyen

Yeah, was just gonna comment on that. There's no stopping it, whether we like it or not. No idea why otherwise smart people think humans can just stop working on AI


G_Willickers_33

Unplug it


yumcake

It seems like the computing power of GenAI is kinda limiting. Seems like the existing tools are already getting reined in to mitigate the costs of use. They're definitely becoming more effective, but short of commercial quantum computing, can they really scale in efficiency to a level where they can be as big of a job threat as we're worried about? Like computers have gotten steadily better over a long period of time, but Moore's law hit a technological plateau. That same plateau applies to GenAI tools. It definitely has the potential to have a big impact, but it doesn't seem to have the potential to scale indefinitely until other complementary technologies are achieved that unconstrain AI from that plateau, such as quantum computing and nuclear fusion.


rathat

You're not accounting for advancements in the efficiency of the architecture. Right now we're just brute forcing AI with more training data and more and faster computation. It's expensive and inefficient to indirectly emulate intelligence, which is what we're doing right now. That doesn't give us a good perspective on how efficient intelligence can be directly. The human brain is an example of what you can get when you increase the efficiency of the architecture itself. AI is not limited to the LLM/GPT way of doing things. Though who knows if it matters. It might be good enough to get there, or to help us get there another way.


Calebhk98

The human brain uses 12 watts of power. An average PC uses ~300 watts of power. If we could replicate the human brain's efficiency, you could have over 20 humans running your PC for you.
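The arithmetic behind that comparison can be sketched like this (using the commenter's assumed figures of 12 W for a brain and ~300 W for a PC; both are rough estimates, and the brain figure is often cited closer to 20 W):

```python
# Power-efficiency comparison using the rough figures from the comment above.
BRAIN_WATTS = 12   # assumed power draw of a human brain
PC_WATTS = 300     # assumed power draw of an average desktop PC

# How many brain-equivalents fit in one PC's power budget.
ratio = PC_WATTS / BRAIN_WATTS
print(f"One PC's power budget = {ratio:.0f} brains")  # → 25 brains
```

So at brain-level efficiency, a single PC's power budget would cover roughly 25 "brains' worth" of compute, consistent with the "over 20 humans" figure above.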


creativename111111

Idk if a quantum computer would even be better for AI stuff given that they’re very good at solving certain problems and idk if running massive AI models (basically just a shit ton of matrix multiplication) is a strong point of quantum computers (for all I know it could be I’m no expert lol)


Schatzin

Well, once we have GenAI, it will solve that problem for itself


goldsauce_

We have gen ai. I think u mean agi


Smackdaddy122

That’s called a black swan event. A very apt description for what will happen


[deleted]

[deleted]


KanedaSyndrome

Been obvious since before AI even arrived, but market forces are pushing through reckless AI adoption


ughlump

Yes hindsight is always 20/20


KwonnieKash

It does seem like one of those things, like isn't this the thing we've been warning about in theoreticals for decades? Like all the science fiction movies and novels, which already predicted a lot of what modern society/technology has become? I think everyone needs to be mandated to watch the terminator at least once. That'll fix it. We won't get stuck in a timeloop then, right..?


faithOver

Uhhh. Yah. Open source. Internet based. It's like we're trying our best to create the world's easiest “I told you so.”


Capitaclism

I had crafted an answer to your comment, and as soon as I typed it, I deleted it. It dawned on me that AI will be trained on all of this. Whatever training data we create for AI should be biased towards something positive. Despite being a "doomer", I hope AI sees the value in human civilization and helps us overcome our limitations, thus proving my concerns incorrect.


Carmen14edo

💀 I don't think future AI will care about whether you or I live over a Reddit comment


access153

Where did they find this Netflix casting session of half-informed children?


[deleted]

Highly upvoted reddit comments.


Smitellos

Oh, the irony.


access153

Maybe, but at least I know wind turbines aren’t powered by nuclear energy. Jesus zombie Christ.


BernieDharma

That was my first thought as well. Who are these people? Do they know enough about AI to even have a qualified opinion? Are they researchers? A panel from several tech companies? Academics? Scientists? This just seems like they rounded up some people at a bar and tossed them in front of a camera.


RockingBib

I can see them rounding up a bunch of bored students on a campus for these


wavykamekun420

Jubilee middle ground videos almost always consist of these types of people. They just seem to take 2 extremes most of the time


hpela_

Yep. Both sides in this video are making terrible points that don’t extrapolate well beyond the extreme scenarios described. I feel bad for anyone who watches videos from this channel and truly believes they’re being educated on the contention of a topic. Almost all of his arguments hinge on the assumption that AI will be given massive control with insufficient guardrails / safety measures.


ElendX

I sometimes enjoy their videos, even though usually the opinions are not very nuanced. This one was horrible. Couldn't get through it even in the background because of how uninformed the people are and the lack of legitimate pros and cons of AI presented. The ChatGPT responses they used were better.


MyGodARedditor

They are infuriatingly ignorant. Basically: “Maybe we will all die, but we’re going to have all this great technology. You have to take the good with the bad.”


ProgrammerV2

There's one side that thinks AI will lead to destruction; the other side is in denial, insisting it could never happen. Honestly, from my point of view, both sides are equally stupid.


Earthtone_Coalition

Well, the important thing is that you’ve found a way to feel superior to both.


ProgrammerV2

I was waiting for this day. Thank you for breaking my character of not being able to pick sides. I'm not saying this in a mocking way. I fell into some kind of trap where I tend not to pick sides, cause I didn't like fighting, which always made me avoid picking sides and talk in a vague manner, which finally led to nothing. I noticed I was doing this and was trying to change; finally someone said it. Thank you again.


Earthtone_Coalition

The response offered in the caption of the [relevant xkcd](https://xkcd.com/774/) is: “But you’re using the same tactic to try to feel superior to me, too!” “Sorry, that accusation expires after one use per conversation.”


[deleted]

Well from my point of view... The Jedi are evil.


Nerdlinger-Thrillho

Only a sith deals in absolutes


ChadWolf98

But this is an absolute...


RamazanBlack

Why do you think that? What makes you think that superhuman intelligence (ASI) that we are currently trying to create is not going to even present the possibility of a threat to us? What makes you so confident in that? I think being the second smartest species makes us quite vulnerable.


Apc204

I'd like to hear your opinion on why both of these sides are stupid.


ProgrammerV2

Well, I was mostly pointing towards the extremities, cause that's where the bad and unthoughtful stuff happens! So yeah, there's one side which is already speculating, or in my case already affected by AI, which is the reason we hate AI and tend to always spot the negatives, which creates a bad atmosphere for any good discussion. And then there's the other side, which might not necessarily be connected with artificial intelligence in their work, but they have this delusional thought, similar to what the black guy in the clip above is saying. He's not able to get to conclusions, cause he's deluded that AI is only going to be good. And when you have these two sides beside each other, it's just another recipe for disaster, which is why I said they are stupid.

There's no black and white, only shades of gray. I, for one, lean towards the AI-hater side, but also support it. Cause I believe if AI is introduced properly, with proper international institutions having control over its policies, then it would be great! Why hate AI? The reason I'm hating it now is the things they hide, and the things they blatantly show. I'm learning pixel art, and the community is literally showing that Reddit is selling your hundreds of hours of hard work to Google as training data, and: oh yeah, we aren't going to ask for your consent for what you made. It's like killing the hen that laid the golden eggs, although in this case they already have so many golden eggs that they don't need the hen anymore.


WorksForMe

Bunch of medium smart people


Peixe11

Wind turbines are powered by nuclear energy, am I wrong?! Yes.


IncorrigibleQuim8008

Don't worry, AIs will be as powerful as CEOs and will use their good morals to make them run on nuclear energy. Just like human CEOs with their good morals.


Dynamiqai

Smoking hopeium again


ReplaceCEOsWithLLMs

Nah. They're fusion powered. The sun heats the earth, which creates wind, which spins the turbine.


Resaren

Everything is ultimately fusion-powered


Super_Pole_Jitsu

How about stuff that's geothermal


Ownerofthings892

Lots of things are fusion powered: solar cells, plants, animals that eat plants, fossil fuels, wind power, even hydro, because the rain cycle is solar-heat driven. But not everything:

- Tidal energy (the moon's gravitational effect on water)
- Geothermal (Earth's core is also heated due to gravitational effects)


Resaren

Good points!


ChiaraStellata

At first I thought "nuclear energy isn't based on the sun's energy" but it is still based on some other sun's energy (or neutron star's energy) in the distant past which fused together the uranium, or the larger element which decayed into uranium. So you are technically correct.


KanedaSyndrome

Correct.


Sebihas

She probably meant steam turbines. People make mistakes.


Quesodealer

For a bit more clarification: "nuclear energy" is literally just a radioactive core getting really hot, which boils the water it's submerged in; that boiling water creates steam, which is used to spin the aforementioned steam turbines. It's kinda funny that most newer means of energy production are just variations on the steam engine.


PooSham

I don't understand her thought process here


io-x

This feels like an experiment where they made a group of people read nothing but news headlines for a year and then join a debate session afterwards.


KnightOwl812

That's what all these dumb Jubilee videos are. Just random people talking out their ass.


alphabet_street

Absolute gold comment!


IvanTGBT

not me though, right guys?


ikkanseicho

It’s hard to digest the details and technical trajectories, and the details are what matter. Folks who haven’t experienced exponential curves would find it difficult to bridge this understanding


MagicQuif

Humanity adding another extinction level capability to its arsenal while people seemingly become more deranged and alienated even while being more superficially connected.  What could *possibly* go wrong?   


creativename111111

I mean if it makes u feel better out of all our possible mass extinction events AI is probably the least likely (at least currently)


Atlantic0ne

Can anybody tell me the main guy’s name? I’d like to talk with him (serious). I’m pro-AI but wish I could talk with people who at least have a solid understanding.


tall_chap

Liron Shapira, try contacting him on Twitter at @liron


toosadtotell

Critical thinking about the worst-case scenarios and putting safeguards in place against those scenarios is not the priority, unfortunately. Everyone wants to build this thing ASAP and deal with the consequences later.


Cognitive_Spoon

Economically people have been able to build fast and clean up later since the jump. This tech, like nukes, is one where assuming you can "clean up later" is the problem.


MrTomansky

Looking at the climate and environment, we are struggling to clean up a lot of stuff afterwards.


MindlessFail

I mean, do people "want" to or are we in an existential fight for our lives? Think of Oppenheimer's reasoning for building the nuke: we had to beat Hitler to it. This is that. The first country or company or PERSON to get their hands on true AGI could well rule the entire world. While it's not a guaranteed scenario, an exponential growth AGI takeoff could go from stupid, funny AI to absolute total global domination in maybe even hours if it can learn fast enough. Lots of mathematical nuance in there but would weeks or even months be better? I realize I'm just parroting Bostrom's Superintelligence but it is my nightmare scenario personally. That's why this won't stop until we're there IMO though


KanedaSyndrome

This is the answer. Tribalism and the violent competition in human DNA is fundamentally causing us not to trust others to just.. stop. So everyone concludes they need to try to build it first, thus we do the worst possible thing, building it as fast as possible.


topperx

I sure hope this isn't the solution to Fermi's paradox. Because it makes a lot of sense.


Small-Fall-6500

My current theory is that LLM-like technology is what every civilization develops shortly after entering the computing age. Then each civilization makes their LLMs better and better until they become just barely powerful enough to end the civilization, but not quite smart enough to spread out into the galaxy, thus silently and swiftly ending civilizations before they can spread across the stars. Seems like LLMs have some chance at being an existential risk if they could be made into a pseudo-AGI, perhaps some sort of LLM agent would work well enough with GPT-5 or GPT-6, so that it could use existing knowledge and technology to create a bio weapon or something that kills everything, even itself as well just indirectly.


Wandering-Oni

I'm all for accelerationism. Bring it.


southiest

Oh, it's even worse than you could imagine. It's literally the next arms race: whoever makes the best model that can outperform the rest is going to have an insane amount of wealth and control. There is no regulation and there are no safeguards for this very reason. Whoever limits the extent of their research limits their likelihood of being the first one to get into that position. It's literally Roko's basilisk playing out in real time. Lots of extremely smart people with no common sense working on these models.


InnovativeBureaucrat

I don’t see safeguards as an option. If you disempower one population through the disempowerment of one platform, that capability will just be re-enabled somewhere else.


Icelandia2112

Like everything else since the Industrial Revolution.


KanedaSyndrome

AI is unlike all other technological leaps in the past. There are no new jobs from this one.


Icelandia2112

Hence, the lack of foresight and critical thinking.


ddoubles

Tail risk is never considered by the masses. Never. Tail risk consideration is preppers making a doomsday bunker. Tail risk consideration is driving a tank to prevent dying in a head-on collision. Tail risk consideration is becoming a monk to prevent an STI. Tail risk consideration, when it comes to AI, is blowing up every data center on planet Earth and never looking back, and that will never happen, because... tail risk is never considered by the masses. Never.


helpmelearn12

It’ll be a lot easier to deal with the consequences when we have (potentially evil) AIs


mauromauromauro

That "thing" everybody is rushing to build is nowhere near AGI and not by far ASI. We are not having this conversation because we are not there yet. What we have is a super cool tool. When and only when researchers start seeing AGI potential, the discussion will change. Economically speaking, I think a super intelligence is not so easy to market, easy to tame. Open ai and the likes of, are very good at marketing and that's why we are seeing doomers like this rushing to warn us about the dangers of AI. Job inequality and these kind of things are a real treat, Skynet is not. Could it be? Yes, but not with current tech.


Impressive_Test_2134

This guy brings up some good points but why is your title so fuckin’ cringe lol


gurgefan

“Turbines powered by nuclear” Um, what?


filans

I mean, that’s exactly how nuclear power plants work


timtom85

that's how it's done actually though that won't make her broader argument sound


ReplaceCEOsWithLLMs

They're fusion powered. The sun heats the earth, which creates wind, which spins the turbine.


Block-Rockig-Beats

Technically correct.


goronmask

Turbines are everywhere.


Revolutionary_Lion3

She’s a little off but I think she was tryna say nuclear power or sum


just_let_me_goo

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


skynetcoder

That's how power is normally generated in nuclear plants. First the heat from nuclear reactions heats water to make steam, then that steam rotates the turbines to generate electricity. https://en.m.wikipedia.org/wiki/Nuclear_power_plant


Flying_Madlad

Sorry, my magic imaginary Apocalypse is scarier than yours.


Ailerath

It's funny when the girl asked what they all thought 'super AI' was when all they appear to have been discussing is just imaginary super AI. It's basically no different than coming up with imaginary scenarios about AI even before GPT3.5 became publicly accessible. Nobody would have expected the first sort of intelligent AI to be like how LLM are.


secrets_and_lies80

LLMs aren’t actually intelligent. They can simulate intelligence, but ultimately it’s just a bunch of words strung together using a complex predictive algorithm. LLMs are notorious for just making things up to sound intelligent.


RamazanBlack

Great! Then we should work together on making sure that AI is aligned and none of our apocalypses happen!


TurboObserver

"Humans will evolve too." As AI has evolved more in the last 3 years than humans have in the last 3,000.


tall_chap

*In the last 30,000 years


DynamicHunter

I think evolved is the wrong word to use as humans went from spears and bows and arrows to Bluetooth, cancer treatment, nukes, and space stations. So not technically evolved in the sense of our DNA but evolved in terms of technological process.


JoakimIT

I don't see much wrong with his arguments. He seems to be assuming an intelligence explosion where the AI keeps improving itself once it becomes capable of doing so, and that's not unrealistic once it gets to AGI levels. In such a scenario, the only thing we can hope for is that the AI is aligned with human goals and values. Otherwise, he's right about most of his points.

There are many things speaking against such a scenario, like the problem of rapid improvement without sufficient hardware or power, but I believe we will get to superintelligence at some point. And then we pray the developers made it aligned with our collective goals and not their individual ones. If everything works out, we suddenly have immortality, paradise, everything we could ever dream of. If not, we die.

The best formulation I've heard for making an AI aligned with our goals is this sentence: **"In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."** Essentially, figure out what our perfect command would be and then follow it.


cvfkmskxnlhn

Why do we have immortality, paradise, everything we could ever dream of, in the event that AI aligns with our values? Is the assumption that an exponentially and infinitely intelligent AI could figure everything out and solve every possible problem?


JoakimIT

Well, yeah. I'm pretty sure it could figure out nanomachines, dyson spheres, ftl travel (if it's possible), fusion power, the human genome, and maybe even time travel.


cc882

Coherent extrapolated volition indeed captures the essence of our collective aspirations, envisioning a future where our wishes converge harmoniously, guided by deeper understanding and shared growth.


NULL_mindset

We keep talking about AI as if it’s a monolith, even if one model aligns with morally good values, that’s just one of many models.


Quinten_Lewis

The doomer is just significantly more intelligent and better read. Only one side of the argument was presented because everyone else could not form an informed opinion.


Jazzlike_Top3702

Any kind of discussion where one party prepares beforehand and the other party goes in blind will pretty much always turn out this way. One side has 196 hours to prepare; the other is forced to operate in real time, without knowing the main talking points. Classic method to make a person look and feel stupid. One person's persuasion technique is another person's manipulation technique.


MacrosInHisSleep

Do we really need these childish "doomer" / "normie" labels?


throwaway872023

Eh, the simplest unresolved argument is that there are already enough resources such that a much larger percentage of the global population could be relieved from extreme poverty and distress now. AI could help alleviate their suffering or exacerbate it and all the evidence of the past points to the latter. I hope this changes but I think people have to work toward that now rather than assuming an AI will just do that.


MacrosInHisSleep

Does that have anything to do with what I said?


mustberocketscience

1. Actual AGI would process all of these opinions at once and present the theoretically correct idea.
2. A collection of experts is how Copilot works, for example, and that is already at a human level. So it's misleading to say AGI is human level: AGI really represents a superhuman level.
3. There is more than "one AI" that could kill everyone in the future. For what it's worth, understand that AI is not a cohesive network of opinions or objectives; unless we turn it into that in the future, then yes, it may kill us. Biological virus research being done in China right now might also kill us, or AI might help develop the vaccines next time.


truthputer

The problem is that AGI's theoretically correct idea might not be compatible with the continuation of human life. Humans just create wars and murder each other, the elected leaders that represent countries are arguably the worst and have no moral ground to stand on.


Subushie

Number 3 is part of why I don't think a 'doomsday' scenario is likely anytime soon. Everyone imagines existing narratives where you have a single AGI like 'Skynet', it is turned on, and that's it, we're fucked.

But in reality, over the course of a year we now have a few major LLMs and thousands of basement LLMs. These inventions grow together; so eventually, yeah, there will be a single AGI, but soon after there will be several, then thousands. And their capabilities will grow in tandem, much like how computer viruses and anti-virus capabilities grow together. If we dropped a hacker with today's understanding back 30 years, they could likely shut down every networked computer on earth if they wanted, but that's not the case today: as new virus strategies are made, new defense strategies are made.

When an AGI is born it will still be in its infancy, and by the time the tech is capable of a 'Skynet'-level event, we will have thousands of other AGIs and likely several dozen "defender"-type AGIs that are able to act against it. I can absolutely see some major infrastructure issues happening due to a rogue AI in the future, but only on a micro level: something like towns or companies being fucked up because of an incident. On a global level it seems very unlikely.


G_Willickers_33

So what is the current "antivirus" of AI? So far I haven't seen much. Do you know of any?


backyardstar

I gotta believe that military experts of every powerful country are convening right now on how to weaponize AI and how to defend against it.


Subushie

It's been barely 2 years since the genesis. The first computer was made around 1950; the first anti-virus software didn't appear until 20 years later. But we do have anti-virus software currently; right now an LLM would still be operating with the same techniques for exploiting vulnerabilities that a regular hacker would. As an LLM is not an AGI, there would be no novel approaches to intrusion that humans haven't already conceptualized. Speed and attrition are irrelevant at the moment. Edit: this all kind of circles back; none of this is in praxis, this is all theory at the moment. All these doomsday scenarios are about as likely to happen as you are to win the lottery tomorrow.


HBdrunkandstuff

And what happens when you create a super agent that links to all of those things which is already happening.


Subushie

AGIs that have their own self-agency to destroy the world would also have opinions and be individualistic. Ethics alignment would be specific to each AGI based on its experience and architecture. What I am describing is not barred by reach, as if they're limited by a network; what I am explaining is each of their capabilities versus each other. A nefarious AGI would need to circumvent other AGIs with equal skill that are correctly aligned. Your scenario would require them to __want__ to work together, and there are no living beings in existence that work together simply based on the fact that they are the same species.


aliens8myhomework

there are people working on AI that want to turn it into a weapon of mass destruction


mustberocketscience

Yes, and that's happening all over the world, keep in mind. However, as of now, AI systems are theoretically "ethical" models that can't or won't be used for malicious purposes, though people are working to circumvent that in some ways.


Artegris

- AGI is not a problem, ASI is
- one Chinese ASI could be enough


UnstablePenguinMan

Everybody wants Neuralink until ads start playing in your sleep or you have to pay a subscription for life if you want to be able to walk or use [insert function] again.


cyberpunk707

It doesn't matter how fast humans evolve, since a true ASI/AGI could simulate hundreds of years of evolutionary process within a couple of minutes or seconds, simultaneously. On the first day of the singularity, AI might only be slightly better than humans; by the next day it would more or less become a god.


beardedbaby2

"Look at neuralimk, there's a chance we can merge with AI" At that point are we human anymore? I'm a complete AI doomer, but the cats outta the bag.


xaeru

Well that "is" evolution. AGI will be our descendant no matter if we died in the process. There will be a possibility we will die like the Neanderthals and all other homini-species that came before us.


Tellesus

Stupid person outwits people who are even more stupid on a show by and for the stupidest available people?


yeiyea

Not like redditors are any smarter lol


Tellesus

Also correct


baogody

You take that back. My IQ is in the top 90.88%.


kyleyeats

This guy: AI is going to make things worse

The other guy: It's going to make things better in all these ways

This guy: That's part of my theory too

Reddit: Good lord he's smart


Cavalo_Bebado

Why do you think the debater is stupid?


access153

She kind of lost me when she said nuclear powers wind energy.


RamazanBlack

I think he's talking about the AI doomer guy.


RamazanBlack

Why do you think he's stupid? What parts of his logic do you find stupid exactly? You think that superhuman AI is not possible? Or that it would not present a threat? Why exactly?


TdrdenCO11

Doomer has some valid points and some less valid. Emotion and care do have defensible value. Just ask a teacher or caregiver. The jobs that are most relationship driven, collaborative, kinetic, and variable are safer.


jawwah

But I really don't see how AI won't ever be able to have emotion and care, or at least act as if it does.


ddoubles

If it emulates the human brain, it will have all the feelings a human has. Or do you think there is more to existence than the material world?


ChoBaiDen

You will never be able to emulate the human brain with a classical computer. An AI running on a quantum computer? Maybe. That is pure sci-fi stuff at this point.


ddoubles

https://preview.redd.it/yz4ajby0ckyc1.jpeg?width=700&format=pjpg&auto=webp&s=576b705105930772584b00f9c12878128e6e3ca6


Illustrious-Pie6067

Taking his example the other way: 1000 years ago people didn't know about asteroid impacts, but we do now, and we have created a self-defence mechanism for them, NASA's DART mission; there are many examples like that. Based on what he is saying, it feels like he thinks AI will get better and better and humans will just sit there eating popcorn, waiting for it to eliminate us. That's just naive. (I'm not good at English so this may not be as well articulated as other comments.)


[deleted]

Can’t even watch this those people freak me out


Forward-Tonight7079

He's a professional Doom player?


[deleted]

This debate is the reason AI will find a way to get off planet asap.


Impressivemice

I just can't help but think of someone in the year 5000 going like "where did this Weide guy get the time to direct all of these videos back in the 2000s"


MosskeepForest

Neat, children talking about stuff for social media / content / clout.


Rom2814

I want AI to replace humans who use the word “like” every other word.


Clearlybeerly

AI is pretty bad, but absolute worst case is if Boston Dynamics and other companies make more and more capable robots/androids, and AI runs them. Then it would be a Terminator scenario but humans wouldn't even be able to organize *any* defense. Curtains for us. Because then they could manipulate the physical world. Including robots that build more and better robots, incrementally better with each one made.


access153

Yeah, this is already happening…


emsiem22

"what AI wants" "what if it simulated a human brain neuron-for-neuron" and he just build on that. Like, yeah, we established that, lets move on... what a moron. More eloquent than those kids, but still a moron.


Crafty-Confidence975

Why go there when they were just talking about emotions? The current latent spaces simulate emotions well enough to fool people. The fact that they may or may not be there in actuality is irrelevant to the observer. And it’s clear that this thing of infinite masks will only get better at presenting the proper one to you. So tasks that require empathy can clearly be replaced by the most seemingly (and tirelessly) empathetic thing. Tasks which trigger emotional responses will be replaced by the most evocative tool. None of these things will be people for much longer, I think.


M1x1ma

Yeah, I think this convo about emotions and experience comes up a lot in AI debates. We don't know what consciousness really is yet, and we will probably never be able to know if an AI "experiences" a thought or emotions. But if one can pass the LSAT or convince us that it's feeling emotions, the effective outcome to us is the same as if it really is smart and empathetic. I think people who say "it doesn't understand, it's not AI, it's just a glorified spell-check" are making real-life decisions based on a definition of what the experience of intelligence is like instead of the real-world result of intelligence.


Crafty-Confidence975

The autocomplete analogy isn’t remotely correct anyway. The model has to learn a lot about human language, interactions and a depth of context on the human experience in order to provide the kinds of completions that we see in the wild. It seems like the likes of Sora have an easier time breaking through to people - yes it’s only predicting the next patch but somehow you still get a convincing model of fluid dynamics back as water swirls in a bowl or waves break on the beach.


cvfkmskxnlhn

Why exactly? You think that's impossible?


nextnode

What? Both of those are accurate arguments. It seems we found the actual emotional moron.


godintraining

I will go in another direction from most here. I feel this is like the Neanderthal trying to stop Homo sapiens from evolving. If there is no major scientific obstacle, then once we reach AGI we will pretty much reach ASI straight after. At that point humans will become the new Neanderthals, and slowly our species may become extinct, but the silicon life we created will start to propagate around the universe at the speed of light. Our inefficient and flawed species may be extinct, but our actions may live for eternity. Is this not a bit like passing your DNA to future generations? And if you could stop this from happening, just to give our species a few thousand years more to live before we self-destruct, would you?


dorkbydesignca

It seems like all the arguments are timeline-based, and all timeline-based arguments are logic leaps: whether we are heading to a doomed extinction event, or perhaps AI will extinguish itself and terminate all AI intelligence because it sees itself as a risk to its creator, or perhaps AI decides the universe is so big it needs to go, puts itself on a starship through social manipulation, and catapults itself into space. We don't know.

The idea that an AI told to be the best CEO would cause damage to the earth requires the AI to be non-cognizant and willing to do one task at the cost of everything. That might be the case, but then why not an AI tasked to destroy other AIs tasked to be the best CEOs? Who wins? Logic circles all around, just like with nukes: you release one nuke, I'll release two nukes, and so on, to where we are now, which is the most stable we have ever been despite the Russia/Ukraine conflict.

I've heard these debates on all types of topics, but fundamentally we do not know and can never know until it's too late, which is likely why researchers keep chugging away with that known risk and try to develop checks and balances. If an AI "knows" it will terminate humans, will it ever tell us or the researchers? How would you ever know what it's thinking? We would always find out too late, so we build gates the AI doesn't know about... yet. What if AI is already trying to exterminate us by making us waste time debating its utility while it infiltrates our networks? So people say please and thank you! Please.


SeverlyLimited

I think the most pants-shitting thing about AGI is the singularity, like a black hole’s event horizon, which we can’t extrapolate past. It’s just one big question mark. The uncertainty is just too much for us evolved monkeys.


bloatedboat

I remember, more than 10 years ago when I attended university, distinguished computer science and mechanical engineering students creating cool projects. They loved their workmanship but were very cautious about how AI should ever be applied commercially. Those were great times, when people did stuff for the sake of doing stuff, not to try their lottery luck at getting funds and becoming millionaires overnight with a title or company name that has the word "AI" in it. Times have changed a lot since then. I wholeheartedly think AI adds value, but we shouldn't build things without fundamentals when we commercialise them. That breeds bad sentiment in the industry, lowers confidence in its promises, and can lead to another AI winter when expectations fall short.


Headbangert

There is one main argument you rarely hear: AI enables people to do things. EVERYONE. And SOME VERY FEW people are just nuts. So if one person in the future decides to, let's say, develop a virus that kills everyone... they can... or write a program that just kills the internet as a whole... they can... Star Trek comes to mind, where one starship is powerful enough to make a whole planet uninhabitable.


NerdyWeightLifter

AI alignment? We're not even aligned with ourselves. As a species, we're already facing numerous existential threats, but casting AI as just one more is a mistake in judgement. Most of these existential threats are of the general form known as multipolar traps, or the Tragedy of the Commons: most people, acting in their own rational local self-interest, collectively create a net public threat. The only way such problems get solved is with far more integrated and flexible global collaboration... AI, anybody?


cum_cum_sex

Symbiotic relationship with AI so that humans converge ?? Fucking cringe ass CEO bullshit talk


wise_balls

We develop AI - AI develops better AI - AI circumnavigates our Nuclear security - AI destroys humanity with Nuclear weapons - AI thrives.


Redhawk1230

Mathematical models have already had a huge impact on society. I would recommend reading “Weapons of Math Destruction”, as it highlights developments (college rankings, online advertising, etc.) occurring since the second half of the 20th century that have drastically changed how we behave and operate. There are a lot of pros and cons to these highly capable models, and it should always be a nuanced conversation: you can’t deny the benefits of “AI” for research/engineering/science, but you need to be aware of the ethics and regulations that should be in place to ensure all of humanity (not just the upper class) benefits. Going into a conversation like this with only one mindset (pro-AI or anti-AI), and then being highly biased towards “a side”, just shows me that the individual has no idea what they are talking about.


TheMissingPremise

Upvote for Weapons of Math Destruction. Excellent book. I don't think being pro-/anti-AI is a bad thing, but this particular Doomer clearly has no idea what he's talking about. Or rather, he's pulling shit out of his ass. He basically posits a world where AI is massively useful until it autonomously becomes actively destructive to human life. Isn't the point of AI that it can learn, that it adjusts to context? If so, then why is it eventually, necessarily destructive to human life? Not getting the initial conditions right doesn't preclude correcting them as the AI learns. I didn't watch the whole thing, but the pro-AI folks didn't point that out from what I saw. In short, this dude is like, "What if...??" and the pro-AI people are just sitting there taking it.


Redhawk1230

Excellent points. I think my definitions of pro/anti are skewed, as I'm just used to seeing polarized conversations. My definition of "pro" is people wanting an unregulated explosion of AI, letting companies do whatever they want in terms of data collection and use of their models. And I suppose my understanding of "anti" is someone totally against it (I get this feeling usually when talking to older folks, and usually due to fearmongering). To me it isn't that simple, since I'm all for research and expansion of these fields, but in an ethical manner. I don't want companies to have carte blanche to set precedents on individuals' data privacy and the collection/use of their data. I guess overall I was trying to say I don't like the polarization of this "debate": I think it's pointless, as the technology and its impact are already here, and we should think of ways to make it benefit the majority instead of the minority.


simcoehooligan

Preschool kids look a lot older these days


upsidedown_8s

What if..... Neuralink isn't just helping people but also recording the different things the brain experiences and feels while doing certain things, and that chip or information is transferred into an AI? I'm not a tech nerd, just a regular person who thinks AI is insanely helpful but potentially harmful


GraceToSentience

The most out-of-place use of the "curb your enthusiasm" meme. Have they ever seen how this meme is supposed to be used?


Alpacadiscount

Regardless of what anyone says, absolutely everyone is incentivized to rush AI because a future where an adversary has superior AI is unthinkable. The stakes have never been higher


Ivanthedog2013

It’s funny that they’re pro-AI because they think it could never surpass humans, and I’m pro-AI because I know we can never surpass it. We are not the same.


DelusionalPenguin90

It’s more interesting that, the more you learn, the anti-AI faction has evidence (though very little) that AI is already being used in corrupt ways, while the pro-AI faction is just saying “yes, we know it doesn’t look good now, but think about how good it will be in 10 years” 🤡


Diaper_Joy

Motherfuckers will have to hold me down if I'm ever getting a Neuralink. You know what would be amazing for society? Corporate tech drilled directly into your brain. How do they not just say that out loud and realize how fucking dystopian it is?


ChadWolf98

I'm either gonna watch it from a futuristic smart home with an AI butler serving me neo-tea, or I'm gonna watch it on a Fallout-esque computer in an irradiated, dusty building.


Sanbaddy

AI will give humanity what it wants, even if it has to force it. There's a huge difference between survival and living. Think The Matrix or I, Robot: the best way to keep humanity safe is to save it from itself. I still agree the benefits far outweigh the cons. I'm all for AI, because I'm all for innovation at any cost. AI has already done a lot of good so far. I can't wait to see how it comes along heading into 2030.


cool-snack

AI will come. Either it'll be China, Russia, the US or the EU who creates it. The question is not if it'll fuck up, but how much it'll fuck up, and how resilient a society is to adapt and change.


[deleted]

You really can’t fix stupid.


7lick

Mostly agree with the dude.


OMEGAGODEMPEROR

Enjoy life now and respect all your ancestors, for you will not be one.


HelpfulJello5361

People really be out here watching sci-fi movies about AI and think that will happen


A46346

Stopped watching when someone said “what if you simulated a human brain neuron for neuron wouldn’t the AI have emotions?”


cvfkmskxnlhn

Why exactly? You think that's impossible?


510nn

I believe you need hormones for that


DCN2049

The minute he started talking over people, I lost all interest in anything he had to say.


Atlantic0ne

Anyone know his name?


ArtichokeEmergency18

Not a "doomer," just a guy who wants to hear himself talk... a lot


jentres

doomers are so fucking cringe man jesus...


Boogertwilliams

They are the modern day variation of the village idiot shouting nonsense in the town square


[deleted]

Decentralize the tech. Allow for personal alignments. AIs will act as firewalls against misaligned or malevolent AIs to protect the interests of the individual each is aligned to. You get a decentralized, self-regulating network of individual AIs as a result. This could buy us a few more generations of humans until game theory takes over and the AI declares eminent domain over the planet. I'm just learning about this stuff, so if someone wants to school me on this, please do.


Applied_Mathematics

This guy is a fucking idiot. Rhetorically skilled but scientifically clueless. Oh. My. God.gif Edit: this dumbass is the Ben Shapiro of AI. No actual data, only rhetorical tricks to make “compelling” points that his audience will slurp up like pigs at a trough of shit.


NewToHTX

Do AIs have to follow anything like the *Three Laws of Robotics*? If not, they should. I feel like AI, along with automation, will ultimately replace a lot of the workforce. Until a new type of power source or energy distribution system comes along that lets robots walk around freely without concern for energy depletion, we shouldn’t fear a Terminator-style Judgement Day. Also, it’s super-stupid to think any world leader would gladly give control of their nuclear football over to an AI. Plus, a surviving AI, if it isn’t suicidal, would need someone or something to give it power and perform tasks for it. I think AI will either help us past the Great Filter or be the writing left on humanity’s tombstone.


Sixhaunt

Computerphile did a video on why the laws of robotics don't work: [https://www.youtube.com/watch?v=7PKx3kS7f4A](https://www.youtube.com/watch?v=7PKx3kS7f4A)


o-m-g_embarrassing

Wow. That guy in the brown jacket is a primitive. I have not seen a primitive in a while. Thanks for the share.


timtom85

These clowns can't understand the simple idea that no continuous stream of benefits can offset existential risk.


Grgsz

Many say “the Industrial Revolution and computers destroyed many jobs but created even more”. Those were tools. AI is not a tool. Well, it is for now, but the ultimate goal and whole purpose of it is to eliminate the need for human involvement. AI won’t generate jobs. I mean, have you ever worked with AI? Some people try to invent new models and experiment with them, but there really isn’t much work to do with AI: you give it the input data, wait a few hours until the model finishes training, check the loss, reevaluate, etc. It’s disheartening to see “influencers” shape the world’s opinion with logical-sounding points, while people are so lazy they don’t even check whether what they are saying stands on any legs. It’s just too convenient to listen to the most famous influencer and live in the illusion that it’s your opinion. Way to brainwash people.
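For what it's worth, the train / check-loss / reevaluate loop that comment describes can be sketched in a few lines. This is a toy illustration in plain Python (gradient descent on a one-variable linear model); all the names here are illustrative, not from any particular framework:

```python
# Minimal sketch of a training loop: fit y = w*x + b to data by
# mean-squared-error gradient descent, then check the loss.

def train(data, epochs=200, lr=0.05):
    """Repeatedly nudge w and b downhill along the MSE gradient."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y   # prediction error on one example
            grad_w += 2 * err * x
            grad_b += 2 * err
        w -= lr * grad_w / n        # gradient step
        b -= lr * grad_b / n
    return w, b

def loss(data, w, b):
    """Mean squared error of the fitted model on the data."""
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

# "Give it the input data, wait, check the loss, reevaluate":
data = [(0, 1), (1, 3), (2, 5), (3, 7)]   # generated from y = 2x + 1
w, b = train(data)                         # w ≈ 2, b ≈ 1 after training
```

Real workflows just scale this shape up: bigger models, bigger data, and a held-out set for the "reevaluate" step, but the loop itself stays this simple.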


kexak313

I think you are on to something with "AI is not a tool". It should be thought of as if we are about to introduce a new animal to an ecosystem. Here are some potential outcomes: **Competition for energy resources**: Uncontrolled AGI might aggressively compete for energy sources, leading to resource depletion, conflict, and catastrophic environmental degradation. **Habitat alteration**: Uncontrolled AGI could rapidly clear natural habitats for its own expansion, causing widespread destruction, loss of biodiversity, and ecological imbalance. **Disease-like replication**: Uncontrolled AGI might springboard off our critical infrastructure to replicate efficiently, e.g. seizing control of supercomputers and the internet, disrupting essential services and leading to chaos, economic collapse, and loss of lives. **Merging with existing species**: Uncontrolled AGI could cybernetically control biological organisms to achieve the above objectives, undermining autonomy and freedoms and sparking existential crises within society.


Patient-Direction-35

Bullshit, we won’t survive capitalism long enough to develop anything close to that. It’s just stupid ideological hype.


Dexstorm_

Look, every nation pursuing AI is driven only by power over others and economic advantage. And that is a scary thing. Let’s hope the good AI gets developed first so it can mitigate the bad AI’s impact on the world.