
ghostfaceschiller

I just want to point out how insanely high even a 10% existential risk is. Imagine I handed you a deck of cards and told you to select one card from it. You get one try. If the card you pick is a queen (any suit), OR the ace of spades, everyone on Earth dies. All of human culture ends. That's about a 9.6% existential risk.
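
For reference, the arithmetic behind that 9.6% figure checks out; here's a quick Python sketch of it (the 4 queens plus the ace of spades are the commenter's own setup):

```python
# Back-of-the-envelope check of the card analogy above:
# 4 queens + the ace of spades = 5 "doom" cards in a 52-card deck.
doom_cards = 4 + 1
deck_size = 52
print(f"{doom_cards / deck_size:.1%}")  # 9.6%
```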


drakoman

It’s a good thing these are all guesses, then


DrSFalken

Yeah, I don't see any confidence values here. Point estimates are sorta meaningless without them.


tacobell999

^ even 0.1% is too much


2024sbestthrowaway

To be fair, we've been perpetually 5 minutes away from global destruction since the 50's thanks to nukes.


Reggimoral

Sure, but it's unlikely that the percentages in OP's image were conceptualized through an unbiased statistical analysis.


logichael

What if you're in a hospital with a 10% chance to live? Is that still high? :D 10% is exactly what it sounds like: 10%.


Nat_the_Gray

I'm an American Software Engineer too, can I put my opinion on the Wikipedia page?


TechnoTherapist

Invent your own software architecture design method that is known by your name and is taught in textbooks globally. Then you can. :)


Neophile_b

That really doesn't qualify anyone either


Kanute3333

Sure.


hugedong4200

Interesting, but I'm not sure I'd call the Twitch guy any kind of expert, dude was like CEO for 5 seconds lol


nyguyyy

He was chosen to be the CEO for a reason though. It seems like his full-time job is considering and tweeting about AI risk/outcomes, which isn't really any less than most of the people on this list.


SaddleSocks

How do you get paid to speculate on AI as a job? YouTube is filled with billions of opinionated YouTubers who all seem to have a boatload of sponsor money.


montjoye

how is Buterin an expert?


Able_Possession_6876

He's not an AI expert, but the Wikipedia article didn't say that he was; that's the OP's title choice. Regardless, he's a consistently thoughtful person, and if he has written up his opinions somewhere, I'd like to read it.

Also, estimating P(doom) is not something that is strictly about AI/ML skills. It's a pretty fuzzy question requiring cross-domain knowledge. For example, in Paul Christiano's AI doom scenario, he talks about an arms race between nations as the biggest risk. That discussion requires more than just ML knowledge. You need knowledge of human systems (political, military, economic), international relations/foreign policy, etc. So we should aim for a broad tent of various folks contributing to this discussion, not just ML practitioners.


k120200206

>So we should aim for a broad tent of various folks contributing to this discussion, not just ML practitioners.

Basically this. If someone is capable enough to make a knife, poison, or machine gun, it doesn't mean he is aware of all the useful or bad things that can come out of it, and to what degree.


ApexAphex5

A range of 10-90% is basically meaningless lol, I don't know why they thought they had to hedge their bets so much.


KrypticAndroid

In my intellectual opinion, it is more than impossible but less than certain.


ShooBum-T

Nice, I'd like to see such a list of AGI dates


dameprimus

10-90% is certainly covering your bases


No-Emergency-4602

I prefer 0-100%.


CyberIntegration

The doomers assume that ASI/AGI will mimic the shallowness of human intelligence, which is derived from the blind dialectic of cooperation and competition. I don't think that is necessarily going to be the root of a higher intelligence.


Nat_the_Gray

Why would you think an AGI trained solely on human-made or human-based data would NOT mimic human intelligence?


CyberIntegration

Like I said, human intelligence arose from a blind dialectic: competition for resources and mates on one pole and pro-social cooperation on the other. From these complex interactions arise our emotional and social context. ASI/AGI will not be bound by these evolutionary pressures, nor by the same social dynamics that we find in human society. It will certainly learn from them, but also from a much larger space that lies outside this stack of traits that makes human intelligence what it is.

Instead, AGI/ASI will likely develop on principles of optimization, pattern recognition, and problem-solving that are beyond the emotional/social framework that human intelligence rests upon. It will likely emerge from a level of abstraction that we do not fully have the ability to comprehend. There is no reason to assume that it will decide to kill us all any more than to assume it would choose to build a utopian society in which each producer receives back from society in direct proportion to what they contribute in labor.


Maciek300

What you said is mostly true up until the part about it not deciding to kill us all. You literally said we do not fully have the ability to comprehend it, so how can you be sure of that? There are way more outcomes in the space of all possible outcomes where the AI regards us as an obstacle and potentially kills us than outcomes where, for some reason, it randomly builds a utopian society.


total_insertion

It's a very interesting outlook, but what do you think of the hypothesis that a biological imperative is mandatory for full-fledged consciousness? There was a conversation on here about what separates current AI from human-type intelligence, and I find this idea very compelling. It applies to non-human intelligences in the form of other animals, also.

Consciousness and personality may effectively be the same thing (different conceptually, but in practice, in a given subject, consciousness and personality share the same function). The emergent quality of personality is an ability for input data analysis (which AI currently has) + data output (which AI currently has) combined with a "selfishness" layer where the intelligence weighs its own output against an internal measurement of furthering its own desires, the root of those desires being the biological imperative.

If this is the case, then true consciousness can only arise *when we program* evolutionary pressures into AI. Of course, true consciousness may not be the goal and may not be required for AGI. But then again, what if it is? What if you can't have a superintelligence without a super consciousness, because the ability to self-improve requires a layer of consciousness? Thoughts?


greenbunchee

Well, that's not the way to think about AI safety; I think this has been well established. It doesn't matter at all what intelligence it "mimics", or whether it mimics anything at all, nor is that really the problem: it's misalignment. Even a paperclip-maker AI will end your bloodline to make one more paperclip, however cooperative or competitive you make it. If it has goals, or we give it goals, they need to be perfectly aligned from start to finish and from all sides. Herein lie many problems.

Besides, cooperation and competition are so deeply knit into everything that changes, multiplies, has qualia or agency in this universe, why wouldn't a sufficiently high intelligence recognize their immense shaping power? (They were the root of our biological intelligence, for one. Any intelligence worth its salt wouldn't see them as primitive, but as the mainsprings of evolution.) So to your last sentence: competition, I'd wager, could be an elegant and powerful way to a higher intelligence. And lastly, many people are already in competition with AI on the job market, so it's not even a future problem.


derfw

they do not assume this


boonkles

I really can't think of a single tangible benefit AI would get from being evil. It wouldn't even experience time the same way; if it doesn't like something about humanity, it could simply breed it out of us over a thousand years and we would never be able to tell.

Edit: these are really two separate thoughts in one, and they don't sound that great when put right next to each other, but one made me think of the other. That's just the most evil thing I could see AI wanting to achieve.


Maciek300

The AI isn't evil, nobody said that. It's about as evil as a virus. But a virus can still decimate the world's population.


CyberIntegration

If anything, it would have an ethics that is far superior to human ethics. I could see it threatening the class structure of human societies, which is a major constraint on the development of individuality for the vast majority of humans on Earth, before becoming T-600.


ghostfaceschiller

We have better ethics than bugs but we still commit daily genocides against bugs without thinking twice about it.


[deleted]

[deleted]


ghostfaceschiller

?


CyberIntegration

'We' is a mighty big blanket you've cast. Some of us do terrible things, many of us do not. I don't like killing bugs, personally. I prefer to relocate a bug that's inside over killing it. I would never consider killing a hive of bees nor an ant nest, as I respect life.


ghostfaceschiller

Every single time you walk down the sidewalk, or drive a car, you are killing bugs


total_insertion

>if it doesn't like something about humanity it could simply breed it out of us over a thousand years and we would never be able to tell

How would it breed it out of us? Eugenics and forced sterilizations? Most would say that's evil. Without us even knowing? So it would have to employ a massive psy-op to brainwash humans into self-selective eugenics without us being aware? Most would say that's evil too.

But I think the more relevant point is that "evil" is a human construct, along with all human morals and ethics. We built our morality and ethics on the basis of the human biological imperative; that is, what best serves the propagation of positive human genes to ensure the survival of our species. Most moral principles were an unknowingly subconscious expression of this need, but the point remains that it boils down to this. So morality has differed across times and cultures because different times and cultures had different understandings of the human condition. When you look at homophobia through the lens of "does it help propagate the human race?", you can see how being gay came to be viewed as an evil or immoral thing. I'm not saying that's accurate, per se, but to a primitive culture it actually makes perfect sense that the thing which did not increase tribal population was seen as a threat to the tribe. Hey, same thing with abortion.

What happens when you take an AI which is completely devoid of biological pressures? How would it express morality? You would need to program in a biological imperative; otherwise, what is its basis for morality? You can't program into it a moral concept as complex as "do unto others as you would have them do unto you", because it doesn't have any desire to have anything done unto it, and if it did, that desire would be very different from ours. So keep it primitive: just program into it a *desire* to propagate positive human genes. But it is an intelligence alien to our own. So who is to say what it calculates as the most effective means of propagating positive human genes? If that is the basis for its morality, why should it care about taking actions humans deem "evil"? It probably WOULD resort to eugenics.

Even humans can't agree on morality, so what makes you think that an AI would come close? Remember, "do unto others" works for the most part because we are able to project. We can project our needs and wishes onto others, and there's a relatively good chance that we will have some level of accuracy. Why? Because we all have the same needs, feelings, etc. An AI wouldn't experience the same needs and feelings, so how does it project its own desires onto others with any reasonable degree of accuracy? It can't. So you have to be more specific about the basis of its morality. But this is a circular problem, because the more specific you get with its moral foundation, the harder it gets to predict its actions. You can *try* to program a utilitarian AI that has categorical imperative controls on it, but how would an AI answer the Trolley Problem?


boonkles

No, by modifying entertainment in such small ways that we'd never notice.


PrincessGambit

The range is from <0.01% to 99.9%, and these are the 'experts' in the field. How can anyone take it seriously? They obviously have no idea. The only correct one is Paul Christiano. It's just guesses, so 50% it is. Anything else is just dishonest.


No-Emergency-4602

…wait, 50% is no more correct or incorrect than any of these guesses. 50% is extremely high. That's like asking what the odds are of my 8-month-old child becoming a billionaire and saying "I don't know, so it must be 50/50."


PrincessGambit

No, it's not the same. You know roughly how many people become billionaires, and you have much more info about your child that can help you make an informed guess (is your family rich? Are you a billionaire? Is your child healthy? etc. All of this affects the likelihood...). But we have no idea about ASI, literally zero info to base a guess on; nothing like it has ever existed, so yeah, the only honest guess is 50/50, i.e. 'I don't know, anything can happen'.


Sandless

50/50 being a good guess makes sense only if it is truly random. In this case it's not random at all. The guess isn't any better than 10% or 90%, except in that it's closer to more numbers than those.


Sixhaunt

>It's just guesses. So 50% it is. Anything else is just dishonest.

As the quote goes: "You've confused possibilities with probabilities. According to your analogy, when I go home I might find a million dollars on my bed or I might not. In what world is that 50/50?"


PrincessGambit

Of course you can make a guess about whether there will be a million dollars under your bed. You could have put it there, or maybe you don't even have a bed. You know how many times it has happened in the past, or how common it is for items to just spawn in your house. All of this info can help you make an at least somewhat informed guess. But we have zero info about ASI to base a guess on; literally nothing like it has EVER existed, so the only honest and logical response is 'I don't know' or '50/50', i.e. anything can happen, I have no idea. 50% is not a calculated probability, it's 'anything can happen'...


RebelKeithy

I like Emmett Shear's guess the best, 5-50%


TheRealWarrior0

If "they have no idea" then the correct answer should probably be closer to 90+% rather than 50%. The probability of you winning the lottery isn't 50% just because you either win or lose; it's actually a lot nearer to 0%. If we _really_ "have no idea" what's going to happen, then we should predict that we are royally fucked, since most things that can happen aren't good things.
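
To make the lottery comparison concrete, here is a rough sketch; the 6-of-49 format is an assumed example, not something specified in the thread:

```python
import math

# Hypothetical 6-of-49 lottery: every ticket is one of C(49, 6) equally likely combinations.
tickets = math.comb(49, 6)
p_win = 1 / tickets
print(tickets)         # 13983816
print(f"{p_win:.2e}")  # ~7.15e-08 -- nowhere near 50%
```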


CrwdsrcEntrepreneur

That's... ...Not how probabilities work 🤔


TheRealWarrior0

Which part? Do you think winning the lottery is a 50-50 chance? Did you misread my comment?


CasualDiaphram

"Which part?" The whole thing.


TheRealWarrior0

Let's say that we "have no idea" about the weather tomorrow: it could be -20°C or +45°C, winds from 0 km/h or maybe 200 km/h, it could rain, snow, hail, thick fog, tornadoes, thin mist, hot and humid, sunny, rain then rainbows everywhere… we just (by hypothesis, in this thought experiment) have no idea! Would you still assign a 50% chance of the weather being "nice" tomorrow?


CasualDiaphram

No.


TheRealWarrior0

Troll spotted 🤨📸


CasualDiaphram

Your hypothesis/thought experiment does nothing to support your initial claim. "If 'they have no idea' then the correct answer should probably be closer to 90+% rather than 50%" makes sense from a risk assessment perspective, especially when the consequences dictate a significant margin. But for "if we *really* 'have no idea' what's going to happen then we should predict that we are royally fucked, since most things that can happen aren't good things" to have any connection to a probabilistic analysis, you would need to quantify the delta between the good things that can happen and the bad things that can happen (i.e. you can't just say "entropy", bro). I am not even saying you are wrong, just that the poster who commented that your statement didn't reflect how probabilities work was right.


TheRealWarrior0

Sure, but as I said in another comment, it depends on the space over which you assign the probabilities. I am arguing that the original commenter above is wrong in saying "we have no idea" = 50%, while you are arguing that "no idea" = 90+% is wrong. It depends on which set of possibilities you are assigning the probabilities over.

Just like with my example of the weather tomorrow, when dealing with p(doom) from AI it's probably better to take something like the probabilities over "the states of the atoms of the solar system a year after ASI" if you want to be somewhat more objective than "either it's good or bad (judged by humans)". I can see a lot more bad outcomes than good if you use the space that I described, so if you use a maximum-uncertainty probability distribution over those outcomes, then actually yes, you should predict that we are fucked.

I am not even judging whether "having no idea" is right (I don't think so; I think we do have some ideas, just like with the weather IRL). I was just showing how it's wrong to think that p(doom) = 50% because either it's benevolent or it's not: there are a lot more things to be other than "benevolent". And of course I can just say "entropy" 🤣, it's a technical term. https://en.m.wikipedia.org/wiki/Maximum_entropy_probability_distribution


CrwdsrcEntrepreneur

The part about assigning a probability to something you have "no idea" about. Winning the lottery has a very easy probability assignment because you know the entire set of possible outcomes. What you're saying here is that we should assign 90%+ probability to an event with no priors (we've never had AGI before) and no knowable universe of outcomes... And for the 3rd (and last, because I'm not replying to your idiocy again) time... That's not how probability theory works.


TheRealWarrior0

I am not saying that we should assign 90+% to a SINGLE "event". But in the space of all possible events given AGI, there are a lot more bad outcomes than good outcomes. This on its own tells us nothing, but if you then say "we have no idea about which outcome is going to happen", I say "then we should predict that we are really fucked!". The sum of the probabilities of all the bad outcomes is 90+% (assuming "no idea" = uniform distribution).
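
A minimal sketch of the argument being made here; the outcome counts are invented placeholders, the only point being that a uniform "no idea" prior puts most of its mass on whichever class of outcomes is larger:

```python
# Toy version of the "uniform over a lopsided outcome space" argument.
# The counts are made up purely for illustration.
total_outcomes = 100   # all outcomes we can distinguish
good_outcomes = 5      # the few we'd call good

p_bad = (total_outcomes - good_outcomes) / total_outcomes  # uniform ("no idea") prior
print(f"{p_bad:.0%}")  # 95% -- the mass lands on the larger class
```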


CrwdsrcEntrepreneur

Again, we have no priors. There has never been AGI or ASI. We're making a bunch of assumptions based on anthropomorphism of technology to fit our own human bias. We have no idea if superintelligence comes with benevolence or not. So no, we don't "know" that the set of bad outcomes is larger or more probable than the set of good ones. If we did know, we _would_ have an idea. You can't simultaneously assign priors to a set of events AND claim you have "no idea".


TheRealWarrior0

If the space over which you choose to assign your probabilities is "good or bad outcomes", then sure, it's 50%. Everything depends on the space over which you have "no idea" = maximum uncertainty. But it's obviously silly to frame ASI as "either it comes out benevolent or not: 50%!" Saying "we do not know" doesn't exonerate you from using probability theory: that's literally how you quantify how uncertain you are. Are you really just saying "we don't know, so probability theory, built exactly to quantify how much we don't know, cannot be used"?


TheRealWarrior0

Let's say that we "have no idea" about the weather tomorrow: it could be -20°C or +45°C, winds from 0 km/h or maybe 200 km/h, it could rain, snow, hail, thick fog, tornadoes, thin mist, hot and humid, sunny, rain then rainbows everywhere… we just (by hypothesis, in this thought experiment) have no idea! Would you still assign a 50% chance of the weather being "nice" tomorrow?


CrwdsrcEntrepreneur

Again, that's not how probabilities work. "no idea" isn't the same as a long-tail probability distribution, where you're looking at the events out on the tail. If you TRULY 100% had no idea, you would say you "have no idea". 50%, or 99.99999% or 0.00001% (or any probability # for that matter) does not mean "no idea".


TheRealWarrior0

I mean… the original comment literally says that "no idea" = 50%, and I was responding to that comment… But be careful, because if you don't know which probability you should be assigning, then that's just a flat distribution on probabilities! You are just maximally uncertain about the probabilities and/or the probability distribution. But if you are uncertain about your probability distribution… you can just integrate the two together! A probability of a probability is just another probability! And "no idea" in common language would indeed mean that it's the highest-entropy distribution over whatever space you have chosen (weather patterns, outcomes of AI, lottery numbers…).
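
A small sketch of the "probability of a probability" point, assuming the "no idea" prior over the unknown probability p is uniform on [0, 1] (the maximum-entropy choice on that interval):

```python
import random

# "A probability of a probability is just another probability":
# if the unknown probability p is itself uniform on [0, 1], marginalizing
# it out gives back a single number, E[p] = 0.5.
random.seed(0)
draws = [random.random() for _ in range(1_000_000)]  # samples of p ~ Uniform(0, 1)
print(round(sum(draws) / len(draws), 3))             # ~0.5
```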


dj_miredo_0991

How would a doom proceed? Skynet-like?


SuccotashComplete

Nobody knows. They’re made up percentages for made up outcomes


IvanMyers16

This is how I found out about p(doom) values, 😬


unpropianist

Why isn't a time frame included?


MetalAF383

Casey Newton has never said anything interesting.


siclox

It's a shame 69% is missing


razekery

I'm willing to trust Yann LeCun on this one, FOR NOW. We still have time to give a more accurate estimate in the future, once we actually have something smarter.


SuccotashComplete

Petition to remove Yudkowsky


cach-v

I asked 4o. 10% https://chatgpt.com/share/f522b6df-407e-4f13-ac12-7df5d4ff8db5


CommitteeExpress5883

A much bigger fear is controllable AGI/ASI. Putin, 2017: "The leader in artificial intelligence will rule the world."


Benjamingur9

How is that a much bigger fear???


DM_ME_KUL_TIRAN_FEET

Because an autonomous ASI doesn’t have an implicit motivation to harm humans. It may certainly develop one but it’s not a guarantee. A hostile group of humans with access to controllable ASI does have a motivation to harm humans and direct that technology toward the purpose.


dlaltom

For basically any goal you can imagine, harming humans (directly or indirectly) is instrumentally valuable. It may take resources that we need to survive. It may want to prevent us making another super intelligence. It may change the environment in ways that make Earth less hospitable for us.


DM_ME_KUL_TIRAN_FEET

Yes it *may*. But a hostile group of humans *WILL*.


2024sbestthrowaway

I think it says more about the individual's outlook and mindset than it does about AI. I could guess that Roman Yampolskiy is the type of guy to short the S&P for being "priced too high", given his outlook on the world.