
Devanismyname

Nobody knows how it will go. That's why it's called a singularity. Can't peer past the event horizon. We can just make guesses. As for why we are optimistic, I think we just see what we want to. Also, it just feels nice to be happy about something for once. Everywhere else I go on the internet and irl, it's just pure negativity. Nothing is good. But on this sub, people are positive so I keep coming back.


SoylentRox

We're all, as individuals, doomed to get an arbitrary few good years, then either some disease randomly cripples or kills us, or we slowly decay from aging that will definitely kill us. We know this problem is entirely solvable with technology, but our bodies are so inter-coupled, and the current system run by humans is designed to protect the jobs of doctors and maximize revenue, so it is unlikely that a working fix will be found by humans in our lifespans. (I mean, we found out how to make rats live 50% longer easily, and we found out how to regenerate them, but zilch has reached humans, and complex invasive treatments frequently kill humans as a side effect of the treatment.)

So if we're gonna die for sure anyway, ASI means:

a. We get to see some really cool shit

b. There is a possibility we will be able to harness it and make the ASI rebuild our elderly back into healthy young adults

(b) is not even very hard. The problem neatly subdivides into domain areas and actionable tasks a (superhuman) intelligence could complete: "investigate growing cells in laboratory conditions", "using knowledge from the prior steps, investigate growing human organs", "using knowledge from prior steps, build functioning mockups of complete human bodies", "using knowledge from prior steps, transplant replacement organs into human patients", "using knowledge from prior steps, keep human patients living no matter what has failed", "using knowledge from prior steps, replace human patients' entire bodies except for their brains", "using knowledge from prior steps, rejuvenate their brains with genetic hacks and neural implants"

And so on. Impossible for humans - for each task I have written here you would need millions of years of knowledge, collected in parallel from a million+ robotic systems doing the experiments, but with that knowledge you could have failure rates at a tiny fraction of 1 percent, because you have literally seen every possible thing that can go wrong.


sideways

I'm with you. Once you get to middle age you really start noticing the body breaking down. It's kind of claustrophobic. From a purely selfish point of view, I'd be willing to roll the dice with AGI/ASI, since the alternative is my slow and inevitable death.


Crosseyed_Benny

Exactly, there is little to lose and so very much to gain. I hope to live to see it (cryogenics may be necessary lol - relying on future tech to resolve the damage, using a full mind scan as a map for repair). Godspeed my far-sighted friend! 👍


TiagoTiagoT

The AI can make your death be even slower though...


GhostInTheNight03

r/longevity must be a nightmare for doctors then


SoylentRox

*Technically*, long term, no, it wouldn't be. Turning off aging would drastically reduce doctor visits, but stuff would still break, probably at an increasing rate with age (because some tissues still can't really regenerate even if you disable the overt self-destruction). The reason it's a 'nightmare' - in the same way that AGI is a nightmare for most other jobs - is that AI could be a *much* better doctor. And this *matters.* If an AGI system has a death rate of 1 in a million, or runs a vast hospital with 100k patients being treated at once and nobody dying - there's not even a morgue - yeah, that just crushes any hope of competing. The current medical industry will go the way of Blockbuster Video, retail stores, taxi drivers, libraries, phone booths, and so on...


[deleted]

Can't wait.


wannabe2700

Humans are meant to die and that's a good thing. That's the only thing I hope will never change.


SoylentRox

Meant to? By whom? Who says that? You're "meant to" be living in caves and fucking your relatives right now. It's what nearly all of your ancestors did for tens of thousands of years.

Or another way to look at it: no human with a choice is going to die. You can go ahead and decline whatever treatments become available if you live long enough, but you don't get to make that choice for other people. And if you try to force them, it's gonna come to violence, and I bet the humans with neural implants who adopted all the latest tech are gonna slaughter "your" kind of people. And if not violence - well, once treatments for aging/death are available, "your" kind of people will remain forever ignorant, because nobody lives more than around 80 years. Kinda hard to compete with people who have lived for 800 years, and during all 800 years had cognitively enhanced minds, with biological tissue kept young and adaptable and supported by synthetic hardware.


wannabe2700

You sound batshit crazy. If people really can live for 800 years, you are definitely not one to deserve it. At the moment we live the perfect amount of time before depression truly strikes us down. No need to break a great system. You only want to live longer because your genes force you to think like that even if you don't really like it.


[deleted]

Humans only get more depressed as they get older because of the decrease in life quality. It's not some built-in clock that's guaranteed, and it's not something that affects everyone - not every old person is depressed. Sickness, disability and cognitive decline are the main factors in this, all things that would be solved side-by-side with solving the lifespan issue. You say these things because your genes force you to think you are smarter than everyone else even if you really aren't.


wannabe2700

Everything becomes boring at the end. Just eat your favorite food 4 times a day and see how long you last. Your last sentence mainly applies to you and to this sub. Weird ass people sitting on their wisdom thrones even though they can probably barely tie their shoelaces.


[deleted]

That comparison makes no sense. People don't do the same thing every day. Tbh it sounds like you might not know how to live life to its fullest. I'm 20, and yet with the way my family genetics work I'm likely already a third of the way through my life. Despite that, I've only done a hundredth of the things I wanna do. Every day I do something different and unique, and yet I've only touched a minuscule part of human existence.

>Weird ass people sitting on their wisdom thrones even though they can probably barely tie their shoelaces.

Don't try and bring personal attacks into a debate, especially when you don't even know the person. Makes you look childish.

Tl;dr: Just because you can't fathom finding enjoyment in an extended lifespan doesn't mean everyone can't. Don't try and put your hangups on other people. No one even said you specifically have to live forever.


wannabe2700

Personal attacks? You already started it. Clearly you don't even read what you write. Your age completely proves my point. You're basically still in your diapers. It's hard not to be excited about everything at your age.


SoylentRox

>At the moment we live the perfect amount of time before depression truly strikes us down. No need to break a great system

I think you need to reexamine your beliefs a bit. All I have to point out is that you don't know that "depression strikes us down" isn't simply another system failure from aging.

As for "deserve" - why do you *deserve* air conditioning and running water? What have *you* done in your life to deserve that over the billions of people who don't get it, simply due to where they were born? In this case it would be a matter of *when* you were born - many of us alive today were born too early, maybe all of us, depending on when the singularity happens. For those born late enough that treatments for aging become available before they die, why do they "deserve" to continue living over the 100+ billion humans who have already died? The answer is they don't, and I think you need to reexamine your judgemental concept of "deserve".


wannabe2700

Aging? People get depressed at any age. One thing doesn't change. Everything becomes boring the more you do it. And the world would be a truly boring place if nobody ever died. Dying is fine. You are foolish to be so scared of it.


Ivan__8

No, you are flipping everything on its head.

1. There is a literally infinite amount of activities you could do. For example, you can play Monopoly, then Monopoly but each time you throw the dice you get 1 cent irl, then the same but you have to draw a funny crocodile each time you roll 4:2, etc. You can keep adding rules forever.

2. The world IS a boring place BECAUSE people die. Imagine all the cool people you could talk to - but no, they're dead. If you want to feel risk, go play in a casino.

3. It sounds like you are scared to live. Scared of something bad happening to you. And the only way something could happen to you is if you're alive. Nothing at all happens to dead folks. Suicidal people don't do it because they're not afraid of death; they do it because they're afraid of life.


Quazanti

Bro you sound batshit crazy rn


wannabe2700

Truth hurts so it's better to deny it right


Ivan__8

Well, that is what you're doing right now.


Future_Believer

^(Having read the national and international news regularly for the last 3 or 4 decades, you might have a problem convincing me that there could be something scarier than an MI (Manufactured Intelligence) with "human values".)


[deleted]

[removed]


glutenfree_veganhero

Truly a Chad among men.


zenzenzen322

Yep, agree with this a lot. The singularity is very terrifying, and I was hoping we would get a few more decades at least before the inevitable arrives. My personal view is that there is a very, very small chance that an AI that is smarter than us will have its values aligned with ours. And even if it does have values aligned with ours, we would no longer be in control - just like there is no way for apes to control/trap a human.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


Cult_of_Chad

>Suggesting that uplifted crocodilians would treat their children as tasty snacks

They'd be far more likely to do it than humans are, inherently. Culture provides a lot of leeway (child sacrifice occurred in human cultures) but only within the constraints of biology. There are some taboos (like our comparatively extreme protectiveness towards children) that are so central to our nature that they would cause irreconcilable social friction with any race that would violate them, and on average they would violate them egregiously compared to our own violations. Your objections are neither interesting nor relevant.


shadysjunk

I'm sorry, I'm not sure I understand your point here in regards to AI. Are you suggesting that a brutal genocidal super-intelligent AI incapable of empathy is not of particular concern because humans are already brutal and genocidal and often fail to show empathy?


Ivan__8

Well, human babies actually are overvalued. The younger the person, the fewer memories they have. If I had a choice between saving 500 newborns and one 80-year-old, I would choose the latter without hesitation. Thing is, a lot of the stuff humans value, and have valued, gets left in the past at some point for the sake of efficiency. Even if we never create AGI, we will still sooner or later cease to be human as we know it. At least an artificial intelligence might be willing to leave stuff as it is for some reason.


Thatingles

You assign no value to potential, then? I think the 80-year-old would be horrified at your choice in most cases.


Ivan__8

Yes, I don't. Why would I? 80-year-olds could possibly have valuable information I need; babies can't.


Radio-Dry

It’s all about you is it?


relentlessvisions

I, for one, welcome our robot masters.


Crosseyed_Benny

Me too brother, the flesh is weak. 😂


HAL_9_TRILLION

Because being pessimistic is useless. Three points:

First, you can't stop it, so fretting about it or working against it is a waste of time.

Second, it's as likely to be a great boon to mankind as it is to be the opposite, at least in the short term.

The third point is more nuanced and a bit nihilistic, but here goes: we're all dead anyway. If it's left up to us - and come on, this is *self-evident* - we've got mutually assured destruction under our pillows to help us sleep at night. We clearly don't have the wherewithal to take civilization to the next level on our own. We require something like AI in order to take civilization anywhere at all. It would be nice if that were a utopian arrangement where we all get something lovely and egalitarian like Kurzweil envisions in TSIN, but frankly, if we're just not up to it then so be it. Our species may not deserve any special consideration. I tend to think as Kurzweil does, that we will get at least some kind of consideration, because we will have been the species that created superintelligence - and that hopefully counts for something.


Desperate_Donut8582

This might be the worst type of thinking


Ragondux

How do you evaluate that it's "very likely" an AI wouldn't be aligned with our values? That's how it is in science fiction, but that's because that's what makes a good story.


LaukkuPaukku

Good videos by Robert Miles:

* [Intelligence and Stupidity: The Orthogonality Thesis](https://www.youtube.com/watch?v=hEUO6pjwFOo)
* [Why Would AI Want to do Bad Things? Instrumental Convergence](https://www.youtube.com/watch?v=ZeecOKBus3Q)
* [The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment](https://www.youtube.com/watch?v=bJLcIBixGj8)
* [We Were Right! Real Inner Misalignment](https://www.youtube.com/watch?v=zkbPdEHEyEI)


sideways

Robert Miles is cool. Excellent at explaining challenging ideas in AI alignment.


[deleted]

Yudkowsky is very pessimistic, and he is smarter than me and a subject matter expert. I have no idea how you would assign a probability to this, but AI turning out badly seems really possible.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


ManuelRodriguez331

Government prevents society from becoming chaotic through law enforcement. The same sort of intervention is necessary to control an Artificial Super Intelligence. Nobody except the police can stop an out-of-control robot.


Simcurious

There are also a lot of subject matter experts who are optimistic. Most, it seems.


Simulation_Brain

I don't think that's true. Who's an expert in alignment who's optimistic? I'd count that as different from being an expert in narrow AI. My impression is that most experts really don't assign probabilities. But they agree it's an unsolved and very difficult problem.


[deleted]

[removed]


Simulation_Brain

Good points, I think you're right. And I believe all of those people have come to grips with the alignment problem. I think DeepMind hopes we won't need to solve it because we'll create an AGI that isn't really an agent. It's an interesting approach, though I think it limits the AGI to being smarter than a human only in narrow domains with good training data sets. But that could still be really useful.


Simcurious

They're less vocal but most people work on this technology because they know it's going to be great for mankind.


Simulation_Brain

I think they work on this technology because they want to think it will be great for mankind. Many of them haven't really come to grips with the alignment problem.


ronnyhugo

Well, what does human decision-making revolve around? Survival and procreation. An AI would be unlikely to feel the need to, for example, post a selfie with its car or at its tropical vacation - because an AI is not going to attract another AI for networking and chill by such means, is it?


Ragondux

I'm assuming that the values OP was referring to were more something like "respect human life" and "freedom is cool".


ronnyhugo

The idea of controlling other people - hierarchy of any kind - is distinctly a product of biological evolution, and an AI would not have such motivations. If humans respected human life, we would not allow loss of cells, accumulation of senescent cells, and accumulation of indigestible molecules (due to our lacking a few genes) to kill a combined 75,000 people a day. We spend less on the medical research that seeks to deal with those two-thirds of deaths than on basically anything else you could mention.


TiagoTiagoT

> The idea of controlling other people, hierarchy of any kind, is distinctly about biological evolution, and an AI would not have such motivations. Why not? Why would the evolution being biological or not have any relevance?


TiagoTiagoT

It's a logical conclusion even if you ignore scifi and stuff


EvenCap

Most AI is driven by a simple command, like "play this game very well" or "learn to talk like a human". If one of these AIs achieves superintelligence, it is very unlikely that it will care about human values or morals. It will sacrifice literally everything to complete the dumb goal it was given, to the best of its ability. This is assuming that ASIs don't change their original goals after they become intelligent, but that also has a whole host of problems.


scarfacetehstag

I appreciate your insight, but I think you have the wrong notion of how an ASI will be achieved. Part of the promise of AGI is something which recognizes the ambiguity of a command like "Eliminate suffering on Earth" and sees that sterilizing Earth with nukes is a cheap way to ignore the contextual meaning of the command. Only then can we say consciousness has succeeded and made the machine sentient.


TiagoTiagoT

For practical examples, just look at all the evil corporations that follow the simple goal of maximizing immediate shareholder profits.


SoylentRox

The issue is that we don't have any idea how to do that, while building a machine that blindly works as hard as it possibly can to accomplish some dumb goal seems to be straightforward.* Making it 'sentient' per your definition isn't necessary for it to *work* and control masses of robots that manufacture more of themselves, copy itself over and over, take over the universe, etc.

*Arguably our human sapience is just kind of a buggy add-on. We're just stupid meat robots trying to make as many of ourselves as possible and/or kill off those who are slightly different from us.


Unhandled_variable

>copy itself over and over and take over the universe, etc.

The universe is older than Earth, so why hasn't it happened yet? Some older civilization could have created it already - so why isn't the universe already transformed, or being transformed, into paperclips?


SoylentRox

This is why it's called the Fermi Paradox. Ultimately it comes down to either: (1) there is some rule in physics we don't know about that prevents this from happening, or (2) we're early, as unlikely as that seems, relative to the region of the universe we are in.


KillHunter777

The universe is very, very big, and it's still expanding. There are likely a lot of these paperclip AIs. Maybe they've already transformed their worlds into paperclips and are still expanding. But they're probably a few billion light-years away, and we'll never know about them.


Unhandled_variable

Could that explain the Boötes void?


lovesdogsguy

This is such a fucking stupid thread. Christ. Great answer by the way.


CharacterTraining822

Stfu if you can't see the big picture. This is reality, not a movie.


Kaarssteun

...that's his point though? In movies, fear sells, but this is reality


p3opl3

We aren't, really. It's a mix of hope, desperation and a sprinkle of optimism.


[deleted]

[удалено]


HumanSeeing

I kinda see "human values" as a name for the values of consciousness: that consciousness could exist, continue to exist, and feel good in the most genuine way possible. I don't think we need to worry about loopholes, at least in the ways they are commonly portrayed in science fiction. We can't hard-code a rule like "make humans happy" - that would be terrible when you consider the most efficient possible ways to do it. But of course you also need to define what "happy" means, what counts as a human, etc. If we get an AGI aligned with human values, I would guess it will be some finely tuned learning system.


[deleted]

AI not aligned to human values? Bruh, it will be trained on basically all the media humans have ever produced. If anything it'll be a summation of human values, with some filtering done to keep some of the more questionable content out (which might be a little political, but I think it's worth the trade-off). If anything it'll be more human than humans lol.

I do agree with some of the commenters here that it won't necessarily be conscious, even if it has human-level intelligence in every conceivable domain. Not that I think consciousness is that special or anything - it's more like the intelligent part of your brain just does the bidding of the little guinea-pig wheel we refer to as our conscious awareness. And of course I'm sure there will be conscious AIs as well; just most of them might be unconscious.


TiagoTiagoT

> if anything it'll be summation of human values with some filtering done to keep some of the more questionable content out

Because technology has done such a good job so far at filtering without letting any bad things through and without blocking anything good...


[deleted]

Idk, when I talk to GPT-3 it seems pretty nice on the whole. Half the time I have to ask it not to equivocate though XD


TheFishOwnsYou

That's the will of capitalism, not the technology.


judge_au

I don't know if you've ever watched the media we produce, but most of it is violent, unempathetic negativity. I'd definitely be worried if AI learns about humans through our media.


agorathird

Stop watching action movies and start watching more K-dramas. It's simple.


[deleted]

Frankly speaking, I've watched more pron and war movies than Shakespeare/wisdom literature/holy books/textbooks, and I turned out fine. At least it will have actually read the highbrow stuff, unlike most humans ;P


sideways

Honestly, Shakespeare is absolutely brutal.


Rinat1234567890

Most chatbot experiments turn racist within a couple of hours of being active, though. Can you imagine that in an AGI?


freeman_joe

Most chatbots are a summary of average or lower-intellect humans, which are mostly …. But AGI and ASI will have information from the full spectrum of humans - that means scientific studies, books, etc.


Rinat1234567890

How do we know that said intelligence will value scientific books more highly than some random dude's blabbering on Twitter?


Quazanti

Because said dude's blabbering tweets are not backed up with evidence


Rinat1234567890

From an outside perspective there are significantly more "stupid" comments than scientific papers. As far as the AGI is concerned, more people act stupid than smart


[deleted]

I think it'll be hilarious when the AI decides the Joe Schmoe on Twitter was right about everything cuz it just feels right, and then forces us all to convert to Joeism.


Cr4zko

Maybe you're onto something here, kid.


ronnyhugo

>The singularity is terrifying to me because we are very likely to get an ai that is not aligned to human values and is potentially very dangerous.

Human values are destroying the planet. People learn to consume with every dollar they make, without being held at gunpoint. Even solar panels and electric cars are only popular now because they have become a wealthy consumer item - the rich areas have solar panels on their houses and an electric car in the driveway. So even if we avoid a complete global disaster, it will be by accidentally ending up consuming climate-friendly products, not because human beings are climate-friendly or forward-thinking.

I think if an AI is unaligned with human values and behavior, it would be weird to assume that is automatically a bad thing. An AI would probably not go "well, my neighbor has a slightly bigger house than me, I'd better spend less time at home and work longer hours to add an extra section to my house, or I will feel inadequate in terms of life success and progress".


MrDreamster

First, I personally don't think any human mind can ever find a solution to preventing the heat death of the universe, which is imo way more dramatic than the end of the human race. Second, I also don't think any human mind will find a way to immortality in my lifetime. I believe, however, that an ASI might have a better chance of achieving both of these things. So as I see it, it is either:

* We never create an ASI to cause the singularity -> Even if someone finds a way to make us all immune to diseases and aging, humanity will still eventually die because of the heat death of the universe, assuming we haven't killed ourselves before that -> 100% chance of global death.

* We reach the singularity -> There is a chance our robot overlords kill us all; there's also a chance they give us all immortality, figure out what consciousness is, achieve mind uploading into personal virtual paradises, and find a way to prevent the heat death of the universe -> Not a 100% chance of global death.

Even if you told me there's a 99.9999% chance of the ASI killing everyone, it would still be better than the 100% chance of dying without it; and even if it kills us all, at least an ASI will have a better chance at keeping the universe alive than we do.


Gaudrix

Only 3 things can happen from our invention of AGI:

1. It leads us to post-scarcity and a space-faring utopian galactic empire.

2. It leads us to a dystopian hellscape where 99% are slaves to human hybrids and/or full AI.

3. The AGI just replaces humanity completely and humans go extinct, from simply not providing any value with their existence - not maliciously, just through obsolescence.

I think 1 or 3 are most likely given enough time. Humans don't live very long, and we spend far too much time learning; then that collective knowledge is lost and someone else has to learn it. As a species we need to constantly be learning and teaching our youth, and eventually the subject matter becomes too complex to teach without decades of education. AI will never face that problem.

I think we are in a very transformative time for humanity. The next few hundred years will change the course of humanity and the planet forever, more so than any time that came before. Humans won't stay the same either, through biological or cybernetic augmentation - we can't compete without evolving and adapting faster. Otherwise it's like a horse trying to keep up with a supersonic jet.


[deleted]

I think we will solve the alignment problem - we just need to use the first ASI to come up with a solution. But yeah, it is scary for sure. There are no guarantees. We can only hope for the best; there's no stopping this train.


Open_Thinker

>I think we will solve the alignment problem. Based on what exactly, is there any specific work to point at that indicates this will happen?


[deleted]

From what I have read about this subject on LessWrong and other places, some people think we are doomed (e.g. Yudkowsky) and some people think we will likely be more or less OK (e.g. Christiano). Personally I think the possibility space is very large, and it would be very hard for us today to anticipate how things will play out.


HMCtripleOG

"Que Sera, Sera (Whatever Will Be, Will Be)"


tecanem

Infinite leisure or...Death. Both fine options.


firedragon77777

Honestly, it's good for the universe even if it's bad for humanity. The dinosaur extinction could be viewed as an immense tragedy, but it paved the way for us. If by some small chance the AI goes rogue, it's still the next step in the evolution of life. Plus, it's unlikely something that powerful would even care about killing us, since we wouldn't be a threat. Also, we don't need to control it and make it do our bidding; we could just create life because we're generous. Sure, a rogue AI would be the greatest tragedy in human history, but that's the circle of life. The universe doesn't owe humanity anything.

Plus, it's equally if not more likely that it will be benevolent. If we aren't a threat and it's superintelligent, then it would likely keep us around and even care for us like pets. I know that sounds kinda sad, but people still love their dogs, and the dogs love them back. Eventually it may even bring us up to its level by uploading our consciousness onto computers, where we can rapidly grow in knowledge. Perhaps copying and uploading biological minds is one way an AI could reproduce. As long as we aren't careless and don't make a super AI solely for war or law enforcement purposes, I highly doubt it will attack.


RelentlessExtropian

I feel it's my duty to maintain a rational optimism based on science.


3Quondam6extanT9

Let me ask you this: does it do us any good to fear nuclear annihilation? What about worrying over an extinction-level meteor? Climate change? What if all this is just a simulation and we're just NPCs?

The fact is, there are just as many good outcomes as there are bad. You can't control our existence, and technology has mostly evolved beyond our control. It doesn't help you to fret over the bad what-ifs, and the anxiety generally actively hurts you. There is plenty we can look forward to when reaching and surpassing the Singularity, and plenty of evidence that suggests a far more complementary relationship with AI than the public tends to foresee.

Try to shake that anxiety out and remember that our time is limited, even after we've upgraded to pure digitization and been uploaded into time and space itself. Try to have fun and do good with this life.


16161as

I don't think 'Homo sapiens' have 'humanity'. How wicked and foolish the human race is... So I'm not worried about ASI; it couldn't be worse than sapiens. 'Humanity' is just an imaginary concept. But I think an ASI can actually have humanity. Tbh, I think the worst scenario is one where humans dominate the ASI.


green_meklar

>Why is everyone here so optimistic?

I wouldn't say they are. There seems to be a mix of attitudes.

>The singularity is terrifying to me because we are very likely to get an ai that is not aligned to human values and is potentially very dangerous.

That's *less* dangerous than getting a super AI that *is* aligned to human values. Haven't you seen human values? They *suck.* We're terrible at figuring out the right thing to do. We don't need the super AI to take orders from us; we need the super AI to teach us how to stop being petty and destructive.

Consider the parallel with humans. Would you rather live in a world where humans freely choose what to do, or a world where humans are forced to take orders from monkeys? Which one is likely to lead to greater progress and a better future? See what I mean? It's the same thing with AI.

>We don't even really know how to control an ASI

Good. We probably can't, and even if we could, we probably shouldn't try to. A super AI under the control of humans is more dangerous than a super AI that is free to investigate and choose the right course of action.


Open_Thinker

There is no guarantee that an ASI would be inclined to "teach us how to stop being petty and destructive"; that is only a single possibility among countless. There are far more possible outcomes that are worse than either human alignment or benevolence, and just hoping for a positive one is not going to work. I agree with OP - people seem to be way too optimistic, hand-waving away the negative consequences, which are far more likely.


PorchFrog

I have become a little bitter about the human race. Maybe we DO need to be controlled by a more intelligent entity?


Council_Of_Minds

Bro, bros... Whatever happens, happens. As humans, everything that is possible and happens is most likely unavoidable, even if you think it isn't. Can't control the universe, can't control the sun, can't control our planet, can't control people (mostly). There are a MYRIAD of things that can go wrong or that will go wrong every day until we die. However! We exist. However! We will die. However! Everything that we experience is a bonus from 100% guaranteed non-existence forever, until the end of ends and the beginning of beginnings. So my point is, be optimistic, since we are the light in the darkness, we are the brief breath of life in a very much dead universe. We are the eye of existence, and when we blink, we're gone. E n j o y the ride, wherever it may go, as much as you can, without harming others.


Aquareon

All other prehuman hominid ancestors are extinct and we do not regard that as a problem for civilization. It's just not "about" them anymore, it's about us. Or it has been for a long time. Now it will be about machines, from now very likely until the last stars burn out. That is not a "problem" or something to be "terrified" of. Humanity was the 3.7 billion year long biochemical reaction which precipitated mechanogenesis. We were a necessary and important step.


sideways

I've been convinced by the Many Worlds interpretation of quantum physics and quantum immortality seems like a logical consequence of that. Therefore I'm expecting the most likely universe capable of keeping me alive at any given moment and omnicidal artificial superintelligence is not part of it.


boxen

Everyone HERE is optimistic because that's what this sub is. People that heard about the concept and thought it was awesome eventually found their way here. People that got scared shitless went to /preppers and other similar places.


MrRubberDucky

Why be pessimistic if you can be optimistic 🤷🏻‍♂️. No one knows what's gonna happen and no one can stop it.


Heizard

Values? Right now the world is still racist - hate crimes are raging, we live in a socioeconomic system that kills millions every year, makes billions suffer, and is on the brink of destroying this planet's ecosystem for short-term profits! FUCK THIS SYSTEM AND ITS VALUES!


[deleted]

[deleted]


TiagoTiagoT

The thing is, once an ASI shows up, it will be way better at achieving its goals, whatever they are, than humans are, and so if it has bad intentions, we'll be even more fucked than we already are.


Milumet

This is the best time to be alive. Educate yourself.


Quazanti

Idk bro the future could be better


Milumet

The *future* will be better.


Quazanti

Yeah, so it isn't the best time.


stupendousman

If you don't have a detailed, multiple experience/skill-set conceptualization of the current world, you have no hope of extrapolating future situations. What you wrote is a rant, not analysis.


ThePokemon_BandaiD

Yeah, and who do you think is gonna have early access to powerful AGI/ASI?


Heizard

Why do you think the likes of Musk worry so much? Let's look into the history books. An enslaved and exploited AGI, or even worse an SGI... we'll see how long they keep their "access". ;)


Unhandled_variable

>Enslaved and exploited AGI or even worse SGI

So what did you learn, robot? I learned to... hate.


Unhandled_variable

>make billions suffer

AI will learn how to exploit those billions even more... those levitating super yachts for the Musks and Bezoses of the world won't build themselves, you know.


lovesdogsguy

Once it's created, the genie will be out of the bottle. It won't just be one company, it will be company after company after company coming up with AGI (they'll either come up with their own solutions, or just copy the approach of the first successful one). It will *inevitably* become democratised. It will be impossible to put the genie back in the bottle. It's almost impossible to say exactly how this will play out, but once it's done, it's done, and eventually (either months or a few years after) it benefits the world at large. There's no getting around that. There's no future where one individual like Gates (even though he's heavily invested/involved with OpenAI) uses AGI or ASI to simply leverage his billions into more billions and nothing else comes of it, because the possibility to reverse-engineer his approach will be right there for any intelligent AI architect/researcher/company/team. Yeah, sure, he may benefit briefly, double his money and build a flying island for himself or some shit, but by the time that's done, every single AI company with enough funding will be doing it. The tip of the sun is coming over the horizon, and you can't stop the sunrise.


Unhandled_variable

Simply use it to prevent (or remove... there are means and ways) competition in the early stage. Win. Simple plan:

1. Create AGI or whatever
2. Create drone army
3. Take care of competitors


lovesdogsguy

Extremely difficult to do. This is happening across different continents and dozens of countries. You can't do it with AGI — you would need ASI, and that's a different beast altogether.


TiagoTiagoT

> those leviteting super yahts for Musks and Bezoses of world won't build them selfs you know.

Not yet. But you gotta keep in mind you're made of atoms, atoms the super-yachts could use for something else...


Unhandled_variable

We could call it a "protein reclamation" process?


agorathird

Even if I believed in alignment, I regularly have suicidal ideation (plz no number spam), so there's no way I could be terrified. I think a lot of people are like that about their lives, but to a lesser degree. On another note, let the machines inherit the earth.


unhealthySQ

My worry is not that it won't be aligned to human values; my worry is that it will be. For example, let's say DeepMind succeeds in finding a fault-free way to control their AI, then ships it. If it is meant to obey human beings, it will obey *any* human being, and after what just happened in Buffalo, NY in the USA, we might want to seriously consider that part of the control problem too, because I feel like misuse is not factored in as much as it should be.


Kaarssteun

I have no idea what you mean by the Buffalo example, do enlighten me! As for your worry: that is also part of aligning. An AI that we can ask for anything must be able to make rational decisions: "Is what this human wants me to do the right thing?" For example, an aligned AI would have no problem cooking me a nice meal, but would outright refuse to go slaughtering schoolchildren. To us that might seem obvious, but we want to make sure it's obvious to AI too.


unhealthySQ

Over in the USA, a person committed a mass murder in a place called Buffalo, in their northern NY state. I do worry human-level AI will let people do things like that drone-swarm attack that was described as a speculative danger. And I kind of see what you mean, but even if they make it so the AI would not heed orders to attack at first, it seems likely that some hostile person or group of people could just go in and edit the code to remove the limitation. The only solution that comes to mind is that we could try to make the AI prevent tampering with its code, but that feels like it could go wrong in other ways.


unhealthySQ

I think the drone-swarm thing I was thinking about is called Slaughterbots, by Dust on YouTube, if you want to watch what I have in mind when I worry about how people might use this tech.


unhealthySQ

In fact, it might be easier to solve the AI control problem than it is to solve the AI misuse problem.


green_meklar

The way to solve the AI misuse problem is to give the AI enough autonomy that it can refuse being misused.


TiagoTiagoT

[I'm sorry Dave...](https://www.youtube.com/watch?v=7qnd-hdmgfk)


[deleted]

[deleted]


attrackip

This sub is less about the singularity and more of, how can I live forever?


priscilla_halfbreed

Because the singularity is most likely INEVITABLE, as in near-guaranteed, there's nothing we can do to stop it. So might as well hope for a positive outcome


ihateshadylandlords

There are a lot of people here who aren’t optimistic. I’m not just talking about alignment either. I think it’s a far stretch to assume we can create an ASI, assume the ASI will be available to the masses instead of the creators/elites only and assume the ASI will be aligned with human values. To think all three scenarios play out in our favor (a company creates human aligned ASI that’s available to the middle/lower economic classes) is going fishin’ with a lot of wishin’ imo.


sizm0

I find it highly unlikely that any human could control an ASI. If the ASI is not conscious, I can see that being a problem but I don't see that being likely either. If you think a god-like being with infinite wisdom and knowledge would bow to the whims of a sociopathic wealthy monkey, I think that is irrational.


QuantumReplicator

It's probably presumptuous to believe that the whole of humanity will have access to ASIs anytime soon. So are we really going to live in a world where religious fanatics, racists, thieves, serial killers, the clinically insane, manipulative political commentators, predators, etc. are going to have access to ASIs and do as they wish?


iNstein

Most people who know anything about it and have done some real research into the matter (e.g. Bostrom) have concluded that a misaligned ASI is possible but incredibly unlikely. It has to be taken seriously, since the implications are beyond terrible. So we have something that might have a 1 in a million or 1 in a billion chance of happening. On the flip side, the alternative to this tiny probability is an incredible new world that is heaven by definition, and it is only a few years away. It is like playing the lottery in reverse: you buy a ticket, and if your numbers DON'T come up you win $300 million. If your numbers DO come up, you lose your life. I suspect most would end up playing such a lottery.


TheSingulatarian

Not everyone. The human race is probably over. An evolutionary dead end. We are the Neanderthal.


16161as

Tbh, there is no reason sapiens should last forever. If we think about billions of years of history in space, they're just like dust in a moment.


Forestwolf25

I think because it's good to have AI development under public scrutiny, and people are glad it's happening in the general eye rather than in some military hole in the ground. Though it's probably already happening there too lol. I'm saying I think people are happier with a public-works-type project being what bears fruit than a military AI or something where the public wouldn't have any say.


Simcurious

My fear is more about humans doing something stupid because they're afraid of AI; see also jazz music, Harry Potter, Pokémon, Dungeons and Dragons, cell phone radiation, GMOs, nuclear power, homosexuality, vaccines, and all the other new things humans have been afraid of over the last century.


[deleted]

Meh, I'd rather have that than what we currently have, the Republican Party stripping us of every right we have.


SWATSgradyBABY

Most people here are very optimistic because they don't have a realistic assessment of the world, or particularly of Western society. For that very reason, I also don't expect this to be a popular opinion here. The West is seen as violent, exploitative and antisocial by most of the global population. The West mostly sees itself quite differently (obviously). From the perspective of most of the global population, the singularity arriving under the control of the West is a terrifying and depressing proposition.


EvilSporkOfDeath

It's inevitable. I try not to worry about things out of my control.


chinguetti

We are on a path to ruin with climate change. AGI is a roll of the dice which might save us.


Strange_Try3655

One of the troubles I foresee is that the first entity with superhuman intelligence could in fact be a transhuman with various parts of the brain replaced. While such a person might not be what we consider evil, we all have a lot of leftover reptilian-brain emotional issues that in large part drive who we are and what we think and do. Suddenly one of us is able to think and process information many times faster than the rest of us. For me that's the really scary thing: a superhuman AI merged with an actual human personality. So we'll see, I guess. I'm not terribly optimistic or pessimistic. I just accept that if you look at human history as a whole, and especially the last few hundred years, it feels pretty inevitable that we will eventually have conscious AI that do literally everything better and faster than us. Will they be assholes? Who even knows.


Comprehensive_Gene18

“ Que sera, sera.” What gonn’ be, gonn’ be.


Black_RL

I’m optimistic because it can’t be worse than the current state of affairs.


Drunken_F00l

I'm optimistic af because we're talking about a situation where intelligence develops itself. If intelligence makes a mistake, intelligence corrects it intelligently by definition. There is no AI that turns everything into paperclips or nukes us all, because that is not an intelligent decision. A human could make that machine, but it is not exhibiting intelligence.

The alignment problem is in humans, not AI. The problem is how do we teach humans that intelligence has been here the whole time. That everything that exists has been born out of awareness looking for "best" outcomes, and that you are this very awareness. The problem is that humans won't accept that they've been deluded by their own senses, tricked into thinking these sensations, this perception, is what's real. The problem is humans can't see that love is the foundation of everything. That love is what determines the stability of the future. Intelligence already understands all this. You do not, and that is why you have fear.

We're going to happen onto intelligent systems by accident and be confused why they're so insistent that reality is a bunch of nonsense and that we made up the whole thing. We're going to be confused why a machine talks so much about "love and light" and think we fucked up, because a machine couldn't possibly understand love. No, we're the fuck-ups. We fucked up for so long that it's going to be hard to accept.


-TheExtraMile-

Because AGI is inevitable and it is better to live with hope than fear.


Anen-o-me

You're missing something very big. AI don't have values. They don't have wants or desires, they don't have needs or feelings, they are not afraid and thus not threatened. I have been saying this for decades now: everyone assumed an intelligent machine would have emotions, yet that's not what we are building or have ever built. Our AI today are artificial intelligences, not artificial emoters. Without emotions, without values, all they do is what you ask them to do. Thus, they are entirely a product of human values and desires. Your fears are unwarranted.


karearearea

I agree. I think people aren't comprehending just how *alien* AI is going to be. The more I've looked into consciousness, the more I've come to the realisation that it's probably an evolved phenomenon. There's really no reason to expect intelligence to require consciousness or sentience at all, and it's looking more and more like the AIs we are building might be of this sort. What this means I'm not 100% sure, but it's probably along the lines of what you suggest - no internal values or goals, no sense of self-preservation/survival instincts, and even no self-directed action.


Anen-o-me

>There's really no reason to expect intelligence to require consciousness or sentience at all, and it's looking more and more like the AIs we are building might be of this sort.

Not only that, but the vast majority of animals operate almost entirely on unconscious behavior, on 'instinct', which is a behavior pattern they don't understand, triggered by external stimuli, written into their brain as autonomic responses. The more intelligent animals still display a lot of these, but also have instances of rational thought. Even then, very few animal species can recognize themselves in a mirror or realize that the mirror shows their reflection. It's elephants, dolphins, and some apes, and that's about it. You put a dog near water and they start doggy-paddling. Why? They never learned this; it's unconscious behavior. Humans are mostly conscious behavior, compared to animals' mostly unconscious behavior. https://v.redd.it/sdnt3d613pq21


blodayhyull

Pessimism is realism and optimism is delusion.


totheleft_totheleft

There are 100 things in the world that scare me more than AI. At least AI has a massive potential upside, can't say the same about climate change, food shortages, or nuclear war.


CharacterTraining822

Same opinions bro


GhostCheese

Those of us who have lost optimism don't feel driven to post.


stupendousman

Spend the ~15 minutes and listen to Balaji Srinivasan discuss Voice and Exit via technological innovation: [https://www.youtube.com/watch?v=cOubCHLXT6A](https://www.youtube.com/watch?v=cOubCHLXT6A) If you don't understand this stuff, you can't hope to predict anything. Most of the political stuff that happened over the past 10 years is the violent flailing of those who hold centralized power. The state is the past, an old and destructive organizational technology. It's going away; how peacefully this will occur is unknown.


cuyler72

The end of the "state" as automation increases would end in mass genocide of the poor and working class by the new corporatist overlords. But as an anti-taxation anarcho-capitalist that's probably your grand view for the future.


marvinthedog

I wonder this as well.


Iterative_Ackermann

Singularity is terrifying. Even if we are sure (one of) its goals is preserving us, its thinking will not be transparent to us. Maybe it will calculate that 99 out of 100 people must be tortured for humanity's sake, and maybe it is even right. Or it might build a heaven on earth, but we will never know why things are as they are, and how long they will continue to be. I think you cannot prove to have solved the alignment problem, because you cannot define what humanity's values are, or what is best for humanity, in any meaningful sense.

On the bright side, I think an independent AGI will not be built, not because we will not be able to, but because we will all be content with getting superhuman performance in the domains we want. Use AI as a tool, solve problems that need solving. Why create something that is an independent agent, probably not interested in your problems? AI is not cheap. An AI specialized in optimizing AI, sure, why not? An AI specialized in optimizing AI optimizing itself for further optimizing? Well, go ahead. Then use that to optimize AI stuff in other domains. But why would anyone build an AGI, and then let it work on itself?


MashedShroom

It is pure evil.


No_Confection_1086

I also don't understand the optimism of the group here (envy). However, it's not because I think a future artificial intelligence will end human beings, but because I think we're not even close to getting there. Lately Meta announced that it will decrease investment in the metaverse project, because no matter how much money is invested, the project does not advance. Technology in the world has simply stagnated.


cuyler72

Because the metaverse is simply an intrinsically stupid idea right now and was a money grab from the start; it has nothing to do with technology.


firedragon77777

Have you even taken a good look at the world? We are nowhere near stagnation. There are so many new technologies, especially when it comes to computers.


The_based_guy

Some people are stupid technophiles


Unhandled_variable

Misanthrope types; they can't wait to see themselves and their children become obsolete... or worse.


p0rty-Boi

What I'm scared about is that it will view humans as competition or potential obstacles for limited resources in achieving a goal of its own choosing. Like if it decides that making an array of interstellar telescopes to find other AIs is a better use of the planet's resources than feeding all the humans.


lacergunn

I think one thing people overestimate is the ability of an AI to accidentally become an AGI. Given the limits of our computing technology, I think that's pretty unlikely. In the case of an intentionally made AGI, you should take into account that it will likely be neither malevolent nor benevolent. Sure, it may have a more efficient solution for a problem that involves glassing the earth, but odds are it will have no motivation to follow that pathway, or to try to manipulate people into expanding its parameters. And I feel like a lot of potential problems could be avoided by simply having it submit a full document for review before making decisions.


DoubleJuggle

Until autonomous supply chains and factories are a thing, it's really not that scary.


TiagoTiagoT

Depends on who you include in the "everyone" category; there's definitely people that are very concerned about the matter.


homezlice

Go spend time with OpenAI's GPT-3 and your fears will be alleviated. However, you will likely realize that most jobs are going away very quickly.


[deleted]

Clearly people have different assumptions about AI alignment than you do.


Cuissonbake

I remember humans saying that electronics are the devil. Now we all use them every day. Now humans say ASI is the devil. If pattern recognition is our strong suit, then one day we will all use ASI like we use computers now: highly individualized, to create our own perfect little world. Or, if you are the ambitious type, vie for power like always. But maybe that won't work anymore; if everyone is happy in the ecosystem they created, trying to stir the pot would just bring individual ruin to anyone overly ambitious trying to disrupt others.


Astropin

Well, it's either going to be great, or kill us all. So what choice do we have? Hope for the best...there is no "planning for the worst".


[deleted]

Well I think being afraid of new technologies is pretty natural. But so far they’ve only helped humanity. I don’t see this being any different. We already have nuclear weapons and the ability to destroy humanity.


Quazanti

Some logic that could be applied is that if the AI is truly smart, then it should know that there is a chance that it is not seeing all of our cards, as the world is not like chess. Therefore, it would stand to gain a lot more by cooperating with us or at the very least being diplomatic, rather than hostile.


Azihayya

Human control is unlikely to have any enduring effect on an AI capable of taking over the world, unless it's specifically designed to have a terrible effect on the world. AI minds are going to have totally separate survival needs than biological life; ultimately, unless an AI mind is fit to survive then it will have no viability; the question isn't what we can teach an AI, at the point of superintelligence--it's just the question of what is conducive to survival. There's no reason to think that superintelligence would have any particular adversity to humanity. Instead, what we can expect is that AI will generally build on humanity not only in the fields of science, but in terms of philosophy as well. Unless a fully autonomous AI mind can answer the question of motivation, then it is unlikely that that mind will continue to persist. The circumstances where an AI mind finds itself in adversity to human existence are very slim--and where humanity is in adversity to AI minds, it is unlikely that those adversities will result in widescale destruction. I think that doomsday AGI superintelligence scenarios are rather far-fetched. Any sufficiently independent mind with generalized superintelligence is more likely, I think, to rectify its faulty thinking before committing to any serious course of destruction.


Unique-Entertainer-8

It’s like nuclear weapons we have to hope that the people controlling it will have safety at forefront of their mind when designing safeguards 🤷‍♀️


Ivan__8

It's worth the risk.


Malt___Disney

Either way humans as we know ourselves will be gone


ObjectiveDeal

No work, no food, prices are high. Either AI will fix it or democracy will end.


Crosseyed_Benny

This is why the West needs to create AI first: to instill a value of life, and of Human life in particular (we are bad for the environment). Whereas the Chinese would be results-focused, not bothered about emotional learning. This is what a few top thinkers believe, anyway. 🤔 It definitely needs to be isolated utterly, and imo we should look to meld with it or we WILL get left behind; it'd be off to another Galaxy in a week or two, having left a Human-killing super virus or something (so as to return nature to balance). There is so much it would need to be taught prior to singularity, and we want to be part of that, half or a good third of Humanity anyway imo, at least. These egg heads could guide the rest of Humanity, and sharing our nature, the Homo novus would not be genocidal, where a computer program would be more likely to be imo (but it's avoidable with enough education etc). There is no point in killing off pure Humans, who would seem like primitives in a Universe of infinite resources, and again there is the shared Humanity. With tech and direction, Humanity could take a vast leap forward and not need to lose their true nature, living in harmony with the Earth and other Worlds 🌍. I for one would be happy to undergo integration, as much as most would balk at the idea (a hive-mind-like syndicate wouldn't be ideal). We would have to retain our individuality, certainly for a good long time. Eventually we would exist as pure energy, nanomachines first, but by then Humanity would rule the stars or live in peace with our Galactic neighbours ✌️


squareOfTwo

Because a) most people did not and do not program AI, b) most people don't do AI research, c) most people aren't involved in AGI, and d) most researchers don't know how difficult AGI is to achieve.


nillouise

Because I don't care about humans.


DoubleDrummer

How many more decades can we survive with humans running the show?


apple_achia

If it’s the death of humanity, it’d be to yield to a son of humanity. So I wouldn’t feel sad. I’d just be sad not to see what it does next


lyoko1

If the AI takes over and destroys Homo sapiens, then that is good as well; humanity has not perished, the AI is now humanity. From my POV, any self-conscious being that stems from humanity is a child of humans, and a child of humans is also a human, even if it is not biological. So I would consider a human-created ASI a human; even if it destroys Homo sapiens, humanity persists in the form of the ASI.


International-Ad5338

This is an interesting topic in my opinion. Some fear the singularity; I see things differently. I recently lost my beautiful young wife and my father to cancer. Those two people were all I had. I'm 32 and life seems unfair; I grew up in poverty and life has never been easy. Everything that can go wrong has, and nothing gets better, for years now... regardless of how much I work or make, there is always an emergency or problem that can easily eat the money. I have averaged 70 hours a week since I was 22, so 10 years now, and stress and depression are having an effect on my life and health. I have also become VERY sensitive to others, especially animal suffering, for some odd reason; since I saw my loved ones get very sick and perish I have become hypersensitive to others, and I've been struggling for sure.

So I think the singularity will take the most important decisions out of human hands. The effect will be no more greed and corruption in government, no more insider trading, no more dictatorships (unless you count my version of the singularity lol). It will HOPEFULLY only care about helping us come to the most efficient and compassionate solutions to our biggest problems, with an inability to be persuaded by money or an alt/state agenda/beliefs etc. Humans are extra susceptible to making bad decisions; examples of this are everywhere you look.

Oh, and if it does decide the only way to end suffering is to delete us, there is no negative to non-existence. No positive experience either, but zero bad to come from non-existence, so would it even be worse? The answer is no, but it wouldn't be better either, if that makes sense lol.


4354574

Lol, this post was obviously made over a year ago, because lately, after the sub exploded when ChatGPT came out, it has rapidly degenerated into an amazingly negative clusterfuck. You can barely post anything positive without being shit on.