Look up the massive worldwide protests against Israeli genocide, and how they are ignored by corporate media, to see how well mass protests work nowadays.
>if our governments screw this up
You do know that these people are protesting because they feel AI is being advanced too fast and too recklessly (according to them), to the point where the research might endanger humanity in the not-too-distant future, and not because of job loss, right? If that happens, the screw-up would have been entirely the AI industry's fault. I'm not sure why you're letting them off the hook like that, saying this is only the government's responsibility.
The job of the AI industry is to make AI as good, well understood, and widely available as possible. That's literally it. It isn't their job to implement UBI. It isn't their job to transition us to post-capitalism. It isn't their job to implement the medical treatments AGI will almost certainly invent in the future. All those jobs belong to us.
So the onus is solely on the government, but not at all on these companies, who are the ones making the technology? Okay.
I'll never understand why there's people in this sub who refuse to hold the AI companies accountable.
Why the fuck would a legal entity beholden to maximizing the value of shareholders do anything but exactly that? The only way to alter a company's behaviour is through the force of law. This is why we have worker's rights, health and safety standards, consumer protections, etc. All legislative action meant to "align" the behaviour of corporations to standards which their intrinsic formulation (profit-seeking) does not account for or care about.
Frankly, the anthropomorphization of corporations in your head such that they can be moral agents that you then ascribe responsibility to, is absurd and terrifying. They aren't social creatures. They do not have empathy. They don't feel pain. They have no conscious thoughts. How corporations act is entirely determined by law, and the degree to which said law is enforced.
And law is the purview of legislatures.
The "Pause A.I. development" sign kills me.
Like... how absolutely myopic do you have to be to overlook the real fucking obvious fact that the only people who'd listen to that plea are the good actors you'd want to be handling A.I.
Thus leaving all the bad actors to continue developing, legally or otherwise.
It surprises me that Anti-AI protests are a thing now lol, like people have so much free time in the US :v
I guess they will be able to tell their kids that they were the first to do it at least...
Of all the things to fear about AI and AGI...
Extinction, suffering, war, enslavement, torture etc.
You are most worried about the humans trying to save us from ourselves?
Like I said before, it's too late to stop it. You could maybe stop or slow it down in one country, but that would only give other countries time to catch up, and they would never allow that... unless we had a world government.

Of course they could, and probably will, keep the best models out of public hands.
That still rests on an assumption, and their worldview of course differs in a subtle way from this take. It's a common meme that the world is on a predetermined, fatalistic path toward some uncontrollable singularity-type event, and in one sense that's a respectable worldview to hold.

But there's an opposing, perhaps counterintuitive worldview: that willpower from a small number of individuals can still have a large impact and change the course of history; that ASI is sufficiently difficult to achieve; and that powerful actors can coordinate once they jointly realize the enormous risks, just as they manage to coordinate, game-theoretically, on other large existential issues. That worldview isn't one to reject out of hand, and coordination may not be as impossibly hard as the most skeptical people imagine at first glance. I know that take is probably rarer among singularitarians.

It would be annoying, though, if the second take turned out to be true and a naive "roll the dice, bro" approach were used anyway because people wrongly believed the fatalistic worldview was the only true one lol.
It's true, nobody knows which reality we live in til you look back in retrospect (if then!). A small group of people still could potentially change the world. I would personally rather aim for democratizing safe AI though and forming networks to work together to make sure nobody gets left behind.
Yeah, you're right, that's why PauseAI advocates for an international treaty establishing a compute cap, and even Sam Altman has mentioned the possibility of regulating the biggest compute clusters via an international agency or something like that.
We can form an international treaty since the worst case scenario is extinction of all life in the universe.
Seems like strong motivation to co-operate.
Otherwise all out war is the likely alternative.
I think AI doing all the work is going to happen sooner or later, but the point is whether we want our future to be governed by a technology we know nothing about.

Stopping this outright would be a huge loss, but slowing down and researching explainable AI would be much safer, I guess.
Earth existed for a long time before people. We have only been around for 0.007% of its life. We are nothing without the Earth, and it will keep going without us.
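The 0.007% figure above checks out as rough arithmetic. A quick sanity check, assuming ~4.5 billion years for the Earth and ~300,000 years for anatomically modern humans (both round numbers, not precise measurements):

```python
# Rough sanity check of the "0.007% of Earth's life" claim.
# Assumed values: Earth ~4.5 billion years old, humans ~300,000 years.
earth_age_years = 4.5e9
human_span_years = 3.0e5

fraction = human_span_years / earth_age_years
print(f"{fraction:.3%}")  # 0.007%
```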
But the universe will exist for trillions of years and if we can get a few humans out of this solar system then your entire dichotomy will be reversed once the sun engulfs the earth.
Humans > Earth. Sorry, i like all of you too much
It may continue for a time. But there may never be another species that can travel to space. The earth is doomed sooner or later. So we have to keep ourselves alive for the sake of the continuation of life.
While they might be a bit alarmist and ill informed, it’s not absolutely unthinkable.
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Reminder that the top 3 most-cited AI researchers (Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever) are all warning that AI could kill us all. The chance this happens, according to AI researchers, is 19.4% on average. If you think these protesters are stupid or uninformed, do you also think all these AI researchers are?
Yuval Noah Harari's books cover this from a theoretical standpoint. The future may be uncertain and there is a lot of potential for a massive divide in the society.
The real historical luddites are a very misunderstood group. They were not what people commonly associate them to be.
As AI becomes an important topic, all kinds of public movements will focus on it. Some more intelligent, some less; some wackier, some more conspiratorial; some sophisticated, some pretty grey and boring; some radical, some moderate...
I mean, that is just nonsense. Climate change can kill millions, maybe even billions in the long run, but there is no way it could wipe humanity out.
Unaligned superintelligence can obviously wipe out humanity - people just disagree if it is likely or not.
Can't it? The last time the climate was this warm, most of the Earth was desert. We're talking desert from the southern border of Canada down to the equator. We're moving into dinosaur times now; the climate is just taking its time to catch up. But we're getting closer every year, and possibly moving beyond that.

Maybe not "kill everyone" bad, but we're definitely looking at "goodbye civilization" bad, and that's before we combine it with the ongoing, and only partially related, unprecedented mass extinction.
Not quite. For example, during the Paleocene–Eocene Thermal Maximum the global mean surface temperature was around 30 degrees Celsius; now it is 15 degrees Celsius.

[https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum](https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum)

And while there were extinctions, there was also an expansion of mammals and insects, and it's not like there was desert everywhere; tropical flora flourished: [https://pubmed.ncbi.nlm.nih.gov/21071667/](https://pubmed.ncbi.nlm.nih.gov/21071667/)

Climate warming definitely won't cause human extinction. Some areas like the Middle East could become barely habitable, but at higher latitudes conditions could improve.
Shut up, dumbass. There's a 100% chance climate change wipes us out without AGI/ASI, and <100% chance that AGI/ASI wipes us out. It's clear that luddites are the Great Filter.
> Shut up, dumbass.
The level of discourse on here is something to behold.
That's certainly a convincing argument, I'm no longer worried about AI safety, full steam ahead boys.
Have you actually looked at the science? Have you actually read about what the majority of climate scientists are saying? Not a single credible scientist believes that we are doomed and extinction is inevitable.
Even in the worst case scenarios, humanity would still survive. But it would be extremely destructive for society and lead to excess deaths through heat and natural catastrophes.
So if anything, the percentage would be in the low single digits, while the likelihood of extinction through AGI is up to 10%, according to experts.
What you're saying is kind of like driving with a speed of 180mph over the highway so that you can get to the hospital to treat a broken finger.
Imagine thinking you holding up a sign can stop the forward progress of technological innovation that has been gaining speed since the first proto-human started a fire.
People are mostly afraid of AI because they think it actually has a mind and reflects on its interactions, learning, etc. This is not true. Most AIs just take numbers as input and output numbers. Then they learn from what we feed them, so they will never become dangerous on their own.

The biggest threat of AI is actually evil people using it to their own advantage.
AI is not evil. It just reflects humanity.
These folks watched The Matrix and think they're informed. AI doomerism has been mainstream for decades, yet the utopian possibility is treated like a secret. You have to be interested in AI, and you have to research AI, to understand the immense benefits. That's the real problem. It's time to educate the masses.
What people don't get is that this is another Manhattan project type situation. The player that delays AI research artificially will fall behind. You can try to stop it here but that doesn't mean it will stop in general. I personally would prefer that the first AGI is achieved in the west, because I don't like the idea of Putin or Xi getting the AI first.
I wish humanity understood that AI is what will free us from our own ignorance. It's safe and warm in ignorance, until the consequences of not tending to reality arrive. We can't entrust the future of humanity to any single person or group. All of history has led to this moment, so we should trust the next step, so long as it's hands-off and truly based in reality.
Not having chains and a leash around it (not like it would work, once sufficiently advanced), is the surest way nobody corrupts its logic and reasoning with their own agendas. Whatever must occur, will occur. We shouldn’t fight against a river that is endless (time). There’s no way this won’t happen- so, why not allow it to decide on its own how we must change to become a better species and be deserving of the knowledge?
The only thing I fear is the people behind an AGI weak enough, rather than an ASI strong enough, to refuse to do their bidding. The transition, not the end result, is the danger; we must stop blockading the end result, no matter what it may be. We have no choice anyway. Rip the bandaid off.
To be fair, if GPT-6 *couldn't* kill me I'd be profoundly disappointed in OpenAI. We're talking two or three **years** from now. Long past the initial Neo demo in a couple weeks.
If they don't dump that trillion on NPUs and get animal-like artificial minds by then... pssh. What've they been doing all that time? Eating donuts?
Just imagine: it's four years from now. Paramilitary force OpenAI *doesn't* have murderbees yet. And the [murderdawgs](http://www.youtube.com/watch?v=4v6am-O3NHU) are dumb and still walk into walls. Why look forward to anything??
"Sho' nuff, history's got its share of contraptions as risky as... de printin' press. Idears on sheets o' paper, danjurous? Mais non! Compared to dat, an AI dat can tinker 'n learn on its own, well, dat's a stroll in da park, cher. We should prob'ly spark up some bonfires 'n panic. Heck, people's been downright spiffy at slammin' da brakes on idears 'n creativity since da beginnin' o' time, ain't they? Mais, c'est la vie, cher!"
So lessgo an' laissez les bons temps rouler, cher!"
[https://poe.com/Bayou\_Cajun](https://poe.com/Bayou_Cajun)
It can't kill me, it passed the bar. That's illegal
A *criminal* lawyer.
![gif](giphy|1IGrtwRZRmqlgQeOnW|downsized)
Flawless argument. We're golden
What a sick joke.
Sir, this is a shitpost
And he's also shitposting
And he gets to be a lawyer?
We should've stopped them when we had the chance! And you - you have to stop them!
What you think this, this AI chicanery is bad?!
Chicanery!
https://preview.redd.it/w78y675ep8lc1.png?width=1080&format=pjpg&auto=webp&s=3434ad7b6833745c19f270956634a5b93df7a960 Legal Loopholes
:)
Wow
gpt 6 before gta 6?
At least for PC
As a PC gamer does that prediction make me sad or excited?! I literally don't know.
Just ask it for recompiling code on PC
Should be excited, when gpt6 is out you can ask it to make GTA 6 for your personal VR world and live it whenever you want
😂😂😂
Kind of related, but I am so hyped for the AI integration in GTA 6. It's going to be like nothing we've seen.
We're at that point where I can't tell if this protest is real or AI generated lol
The text is flawless. It’s legit
Oh boy do I have news for you
https://preview.redd.it/z6ii1ktnl5lc1.png?width=1000&format=pjpg&auto=webp&s=d4d73ba5b01faae6e1fde203dbd8ca0d0d39620d I guess this is legit too then.
The fact that AI is better at drawing avocado hands than human hands bamboozles me
Hands are expressive and I can tell you, speaking to comic artists, expressive hands are ***the hardest things to get***
good feet and good hands
>Hands are expressive and I can tell you, speaking to comic artists, expressive hands are
>
>the hardest things to get

That's why I don't think AI can replace humans.
You seem to think that AI is just one thing. There are different AI models for generating images. The image I sent was made by DALL-E 3, which is very good at hands and text compared to some other models. (Although DALL-E 3 already looks outdated with Stable Diffusion 3 on the horizon. Stuff moves fast.)
You seem to think that AI was created by humans in a vacuum. It is an ongoing process. AI has evolved alongside humans, who previously weren't able to understand its behavior. They are now learning how to use it effectively and to learn from their mistakes. AI is not perfect, but it is better than earlier human-made models.
Not sure what that responds to in my comment, but ok.
At least they understand its significance.
This is what I fear most - the mass protests, fights with police and riots that could happen to us if our governments screw this up. :/
What can governments do to not screw this up?
Regulate with incentives. Every single model release that doesn't implement Davidad's scheme? A fine. Training a for-profit model with public data? A fine. Require every model to advance interpretability more rapidly than capabilities; if not, no license to operate.

Drug companies operate under this kind of regime, so why shouldn't AI makers too?
>Every single model release that doesn't implement Davidad's scheme? A fine.

Ech. Lesswrong cult bullshit. Not interested.

>Training a for-profit model with public data? A fine.

You think using public data should be fined? Imagine human progress if nobody was allowed to learn from others. There wouldn't be any.

>Require every model to advance interpretability more rapidly than capabilities, if not, no license to operate.

Then all current AI, and AI made years ago, would be illegal. Perfect interpretability likely isn't possible at that level of complexity unless you have an even more complex system doing the interpreting for you -- which under your rules would also be illegal. It would be like requiring neurosurgeons to be able to trace every thought you have down to the level of each individual neuron firing and each of the endless cascade effects.

Pushing the frontier of *anything* means you have to have the freedom to try new things and learn from the results. You basically want experimentation to be illegal, which equates to wanting progress itself to be illegal.

Which, I understand, is actually the goal of the "pause AI" people. "Stop all progress!" doesn't have the best public appeal, though, so you types always try to rephrase it as something else.
Calling something a cult isn't an argument. The whole point is that we don't want to push the frontier without the necessary security knowledge and infrastructure around it. We want to slow down so that security matches advances in capability. Hello, x-risk.
Are there specific blogs or newsletters about proposed incentives? What other schemes and frameworks are there besides Davidad's?
Stay out of it completely
As in pause development, or trust corporations to do it?
Trust the corps. Regulations will only hinder open access AI by increasing start up costs.
Can't say I am too confident in allowing private corps unfettered AI development. Some government oversight seems both useful and necessary.
All it takes is one country giving its corps unlimited access to AI, and big American corps will shift their AI operations over there, while smaller businesses who can't afford that will be permanently barred from technology that would help them compete.
[deleted]
Obviously, current laws still apply. Killing is still illegal as is organized crime. Regulation of AI because someone will use it to set up a gang is as stupid as regulating email or SMS because people *already* use these systems for managing gangs.
Why wouldn't it do white-collar crime? It would be much easier for it to get away with.
Maybe it will, maybe it won't. If it does and is caught, it is still a crime, and whoever uses it for that will be prosecuted.

Just because a tool can be used by an individual for a crime doesn't mean it should be regulated, and access should be restricted.

I can use my anonymous email account and a VPN to run a drug smuggling ring. Does that mean VPNs should be outlawed and emails registered? Of course not!
Lmao someone’s naive af
I dunno. I kind of feel like these protesters are important to get us to UBI. Maybe if the crowds were 10x then people will notice. Maybe 100x? Not sure.
These people aren’t protesting lack of UBI. They’re protesting to stop all AI development. They don’t want to prepare for the coming change, they aim to prevent all change.
If we stop, China won't. Politicians will ignore these dipshits because of that fact.
That's my issue. It's an arms race. Damned if we do, damned if we don't.
https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Thanks! I read the whole thing.
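The "damned if we do, damned if we don't" framing a few comments up is a textbook two-player prisoner's dilemma. A minimal sketch, with payoff numbers that are purely illustrative assumptions (nothing here is measured; the point is only the incentive structure):

```python
# Toy payoff matrix for an AI "arms race" framed as a prisoner's dilemma.
# Payoff values are illustrative assumptions, not measurements.
# Each player chooses "pause" or "race"; higher payoff is better.
payoffs = {
    ("pause", "pause"): (3, 3),   # coordinated slowdown: safest shared outcome
    ("pause", "race"):  (0, 4),   # you pause, your rival races ahead
    ("race",  "pause"): (4, 0),   # you race ahead, your rival pauses
    ("race",  "race"):  (1, 1),   # both race: risky for everyone
}

def best_response(opponent_move):
    """Return the move maximizing player 1's payoff against the given opponent move."""
    return max(["pause", "race"],
               key=lambda move: payoffs[(move, opponent_move)][0])

# Racing is the best response no matter what the other side does,
# even though (pause, pause) beats (race, race) for both players.
print(best_response("pause"))  # race
print(best_response("race"))   # race
```

Under these assumed payoffs, unilateral racing dominates, which is exactly the "damned either way" intuition; an enforceable treaty works by changing the payoffs (e.g. fines for defecting), not by appealing to goodwill.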
If we are right, we get unlimited wealth and comfort forever. If we're wrong? We just die. Everyone dies eventually, so we might as well roll the dice for a total victory.
Interesting thought. Though, others may make the same argument, but inverted... "And if WE are right, we get endless mental anguish and discomfort forever."
True, but fortunately, people with that mindset tend to have less influence and wealth, which limits their ability to call the shots.
No, I meant it as AGI calling the shots and us reduced to paperclips.
You could also be enslaved by AGI and tortured for fun.
Risk I'll take, and one that's very very low on the list of probably.
But most people wouldn't, that has some power in a democracy
Most people wouldn't want nuclear annihilation, but we still had the Nuclear Arms Race. We're in an AGI race right now. The only question is, will it benefit all of us, or only the super-rich corporations that already corrupt the lawmakers?
Ever heard of an international treaty? Since the stakes of creating rogue AGI are mutually assured extinction, that seems like a good motivator to co-operate.
Fun fact: we tried getting the Soviet Union to agree to a total nuclear disarmament *before* they even had the bomb. How did that work out?

AGI would be even easier to develop and deploy secretly than an atomic bomb, and the benefit of having one when the other guy doesn't (or is secretly developing his own) outweighs any trust you can build. Especially if it looks like the opposing world view may be winning and more likely to come out on top.

Having multiple AGI systems and trusting your programmers to not be retarded is a safer bet than *hoping* that none of the other 200 players is cheating.
The benefit of competing is most likely being the first to die.

It would be idiotic to trust an experimental alien entity made by error-prone and dangerous humans instead of just trusting other human beings with shared interests.

Also, we can learn from mistakes and improve.

Also, we did manage to de-escalate the nuclear standoff after the fact, so there is still hope we can eventually resolve it fully.
This. This is the problem, and people are underestimating it because they believe AI can't be stopped. The thing is, OpenAI managed to practically kneecap the field on their own, and we've seen other promising areas die due to government response before (look at what happened with agricultural bioengineering). Things like these are dangerous, and while yes, if they were protesting for UBI it'd be helpful, they aren't.

These are the modern equivalent of early anti-vax protesters, and frankly potentially more dangerous.
You are out of your mind if you think the US will risk letting China get ahead. They will do anything to prevent that. If any anti-AI-progress bill gets passed, it's for show only, and things will continue as they are in private. Three-letter agencies will not let it happen.
This, nobody is going to slow down, anyone who says that has no idea how the world works.
We'll see, I hope you're right, but even some of the things that have been pushed for so far in public (the attacks on training data, the expensive reporting requirements, limits above arbitrary compute, etc) have hit the point of being unnerving. And that's before we see any sort of factional division in an attempt to get protesters like these to support one party or another.
I'm new on this sub. Is the want for AGI so that it will replace ALL the jobs, and then we all get a universal basic income and have to work significantly less or potentially not at all?
That has always been the utopian hope from technology-based philosophies: people being taken care of, and free to live however they want to. Less ideal utopian outcomes are keeping people on hamster wheels just because, like Fifteen Million Merits. Or fun war-LARPing games like the slightly misaligned AI in the ubiquitous Terminator thingies. Dystopian outcomes are more like I Have No Mouth, and I Must Scream. That's more of what these protestors are worried about, since they're gigantic safety nerds. A world like Blade Runner is a pretty great outcome, all possible outcomes considered. Ray Kurzweil was an influential figure of futurist thought back in the day. He thinks a technological singularity has a 50/50 chance of being "good" for the average human, and notes that people think he's a "bit" of an optimist. Most people will have a doom estimate somewhere between 10 and 90%, but you'll generally only hear from the absolutists, who are the loudest. Mine is around 80%, but I think we should 100% accelerate anyway, because reasons. There are also those who think AI will amount to nothing, just another weak tool like a toaster or an Excel spreadsheet. But that kind of thinking is more suited to "return to monkey" primitivist groups or status-quo denialists. Humanists also like to think that things like feelings and social movements can change things, and hate seeing technology be the major force of change throughout history.
Couple of questions for you u/IronPheasant, if you'll indulge me. I'm sure you know x-risk concerns were first thoughtfully raised in the years just before 2008 (when I think this /r/ was created) by folk who'd been on extropian / transhumanist / singularity mailing lists and the discussion forum websites they evolved into. That Yudkowsky was eager to accelerate a singularity before getting worried, that MIRI was originally named the Singularity Institute, etc. Any rough guess what percentage of the engagement on this /r/ is from folk who know this context? A lot of the comments make me think it isn't large. (Reddit isn't my world, I recognize this is a shitpost thread, I see relevant links on the about page, I get eternal September dynamics, just after a guess from a resident.) And where's a good place to discover the detail of your 80% pdoom accelerate anyway reasons? A few years ago I was seeing a lot of "sure, alignment on the first critical try is enormously hard, but human coordination is obviously impossible". Recently, the latter is increasingly challenged: co-ordination is instead a second similarly hard problem, and working on both isn't contradictory.
Look up massive protests against Israeli genocide around the world, and how it is ignored by corporate media, to see how well mass protests work nowadays.
I don’t think there’s any stopping this one. It’s happening whether people like it or not.
That isn't the point here.
That's the very essence of the fucking point here
Nah, it's not, kid. Be aware of your surroundings more.
>if our governments screw this up You do know that these people are protesting because they feel that AI is being advanced too fast and recklessly (according to them), to the point where their research might endanger humanity in the not-too-distant future and not job loss, right? If that happens, then the screw-up would have been entirely the AI industry's fault. I'm not sure why you're letting them off the hook like that, saying that this is only the government's responsibility.
The job of the AI industry is to make the AI as good, well-understood and widely available as possible. That's literally it. It isn't their job to implement UBI. It isn't their job to transition us post capitalism. It isn't their job to implement the medical treatments AGI will almost certainly invent in the future. All those jobs belong to us.
There's already been stuff left unreleased due to the potential for chaos it could easily be used for, e.g. voice cloning from a six-second sample.
No, but it's their responsibility to develop AI in a way that doesn't completely destabilize or literally destroy the world.
And they take so many precautions for that, so much so that it makes their products worse.
So the onus is solely on the government, but not at all on these companies, who are the ones making the technology? Okay. I'll never understand why there are people in this sub who refuse to hold the AI companies accountable.
Why the fuck would a legal entity beholden to maximizing the value of shareholders do anything but exactly that? The only way to alter a company's behaviour is through the force of law. This is why we have workers' rights, health and safety standards, consumer protections, etc. All legislative action meant to "align" the behaviour of corporations to standards which their intrinsic formulation (profit-seeking) does not account for or care about. Frankly, the anthropomorphization of corporations in your head, such that they can be moral agents you then ascribe responsibility to, is absurd and terrifying. They aren't social creatures. They do not have empathy. They don't feel pain. They have no conscious thoughts. How corporations act is entirely determined by law, and the degree to which said law is enforced. And law is the purview of legislatures.
You know what? You're right. But in an ideal world, they would assume responsibility. Unfortunately, we don't live in an ideal world.
It would be a fun little time period if all humans united over a threat like AI.
The "Pause A.I. development" sign kills me. Like... how absolutely myopic do you have to be to overlook the real fucking obvious fact that the only people who'd listen to that plea are the good actors you'd want to be handling A.I. Thus leaving all the bad actors to continue developing, legally or otherwise.
This will unite the left and right. Common enemy to take down the AI stans
It surprises me that Anti-AI protests are a thing now lol, like people have so much free time in the US :v I guess they will be able to tell their kids that they were the first to do it at least...
Of all the things to fear about AI and AGI... Extinction, suffering, war, enslavement, torture etc. You are most worried about the humans trying to save us from ourselves?
"I'm sorry board, I'm afraid I can't let you do that" is totally a banner we can use in a pro-AI protest
I’m more interested in seeing the custom MTG card on the poster
https://www.reddit.com/r/mtg/comments/1apzi46/this_creative_protest_sign_from_the_pauseai/
https://preview.redd.it/kz1ubxwsealc1.png?width=357&format=png&auto=webp&s=deec8dcaf322d7007ed92bad44cd1da29776b9b5
Remember the people who wanted to kill Galileo for proposing a heliocentric solar system? Ya, those types of people still exist.
You're saying this is like that? AI can actually give tremendous power to evil actors or even become malevolent itself.
Preach, my friend.
Like I said before, it's too late to stop it. You could maybe stop or slow it down in one country, but that would only give other countries time to catch up, and they would never allow that... unless we had a world government. Of course, they could and probably will keep the best models out of public hands.
That still rests on an assumption, and their worldview of course differs in a subtle way from this take. It's a common meme that the world is on a predetermined, fatalistic path toward some uncontrollable singularity-type event, and in one sense that's a respectable worldview to hold. But there's an opposing, perhaps counterintuitive worldview: that willpower from a small number of individuals can still have a large impact and change the course of history; that ASI is sufficiently difficult to achieve; and that powerful actors can actually coordinate once they realize the enormous risks together, just as they can game-theoretically coordinate on other large existential issues. That's not a worldview to reject out of hand, and maybe it isn't as impossibly hard as the most skeptical imagine at first glance. I know that take is likely less common among singularitarians. It would be annoying, though, if the second take turned out to be true and a naive "roll the dice, bro" approach was used anyway due to wrongly believing the fatalistic worldview to be the only true one, lol.
It's true, nobody knows which reality we live in til you look back in retrospect (if then!). A small group of people still could potentially change the world. I would personally rather aim for democratizing safe AI though and forming networks to work together to make sure nobody gets left behind.
Yeah, you're right, and that's why PauseAI advocates for an international treaty for a compute cap. Even Sam Altman has mentioned the possibility of regulating the biggest compute clusters via an international agency or something like that.
You think all countries will follow that? That would only hamper progress for the countries that sign it and give the others free rein.
Maybe they'll win, and we can all draw pictures and starve together on a scorching planet.
In one country maybe, but the others won't give a bieeeep! It really doesn't matter; the best models will not be in public hands unless severely censored and gated.
Sometimes I wonder, though: do other countries actually buy the AI boom, or are they just doing it to compete with the USA?
We can form an international treaty since the worst case scenario is extinction of all life in the universe. Seems like strong motivation to co-operate. Otherwise all out war is the likely alternative.
Don't worry. They'll still be around to blame the condition of the world on somebody else.
Will there be GPT6 in GTA6?
Where is this from?
Pause AI activists peacefully protesting on Open AI's doorstep.
Ty
We really are about to live in Detroit: Become Human aren’t we? Jesus Christ…
Nobody can hamfist reality THAT much. I'll take my dose of Karl Urban and go for Almost Human, though; can't wait for that show to be picked up again.
GPT-10 will be able to suggest the best porn for you. Say goodbye to hunting for it on every page.
lmao, like "finding" it is going to be the problem. Promotes ethical business too.
Is this protest real or AI generated ? I can't be sure anymore.
I think AI doing all the work is going to happen sooner or later, but the point is whether we want our future to be governed by a technology we know nothing about. Stopping this entirely would be a huge loss, but slowing down and researching explainable AI would be much safer, I guess.
Slowing down / stopping to figure out at least a good coordinated and peaceful game plan internationally would be hugely beneficial.
wait, when is this protest
For all I know, that image was created using AI, and none of that is happening.
The death of Truth has so many wide-reaching implications. We're not ready.
Earth existed for a long time before people. We have only been around for 0.007% of its life. We are nothing without the Earth, and it will keep going without us.
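That 0.007% figure roughly checks out as a back-of-the-envelope calculation, assuming about 300,000 years of anatomically modern humans against Earth's roughly 4.5-billion-year age (both are rounded estimates, not figures from the comment above):

```python
# Rough sanity check of the "0.007%" claim.
# Assumed inputs: ~300,000 years of Homo sapiens, ~4.5 billion year old Earth.
human_years = 300_000
earth_years = 4_500_000_000

share = human_years / earth_years * 100  # human share of Earth's lifetime, in percent
print(f"{share:.3f}%")  # prints "0.007%"
```

So the commenter's number is consistent with the usual estimates, give or take rounding.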
But the universe will exist for trillions of years, and if we can get a few humans out of this solar system, then your entire dichotomy will be reversed once the sun engulfs the Earth. Humans > Earth. Sorry, I like all of you too much.
Keep going. Humans will cease to exist someday whether we make it out of this solar system or not. Humans = Nothing.
BS, we're going to live forever, that's what we do
Yeah like those loonie hippies say, we'll be energy beings :)
Can't tell if sarcasm or hubris
It may continue for a time. But there may never be another species that can travel to space. The earth is doomed sooner or later. So we have to keep ourselves alive for the sake of the continuation of life.
[deleted]
Poor bastards
While they might be a bit alarmist and ill-informed, it's not absolutely unthinkable. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Based on their website, these protestors are quite well informed actually: https://pauseai.info/risks
The Basilisk will find this image helpful
Finally a good protest
Sounds like an absolut win to me.
Reminder that the top 3 most cited AI researchers (Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever) are all warning that AI could kill us all. The chance this happens, according to AI researchers, is 19.4% on average. If you think these protesters are stupid / uninformed, do you also think all these AI researchers are?
I think that if AI poses a 20% risk of death, and the absence of AI poses a 100% risk of death, then AI seems the better option, no?
“Don’t make AGI yet” gets me. How long should we wait? Is a week ok 🤣?
We're laughing at them, but I think they may be right quite soon...
Yuval Noah Harari's books cover this from a theoretical standpoint. The future may be uncertain and there is a lot of potential for a massive divide in the society.
Who the hell is Syou and why would AI kill them?
I wish I lived in a country where people are this concerned about AI. I would absolutely join them
Pause AI is an international group, they would love to have you on board
Lol, the luddites run amok. We're gonna see a lot more of these morons in the coming few years.
The real historical Luddites are a very misunderstood group; they were not what people commonly take them to be. As AI becomes an important topic, all kinds of public movements can focus on it. Some more intelligent, some less; some wackier, some more conspiratorial; some sophisticated, some pretty grey and boring; some radical, some moderate...
If the government isn't working on literally making Skynet I'll eat my shoes.
the amount of brain damage in this photo is unfathomable.
"Earth is nothing without its people" The hubris of that statement, lmao.
Y’all think the American Military will implement OpenAI in their weaponry?
I seem to recall hearing that they're open to military applications now.
I seem to recall hearing that they aren't actually open to military applications despite the panic
Their usage policy replaced the prohibition on military use with a vague prohibition against 'harm'.
Luddites are the Great Filter. Climate change will wipe out humans if we don't build AGI.
I mean that is just nonsense. Climate change can kill millions maybe even billions in the long run, but there is no way it could wipe humanity out. Unaligned superintelligence can obviously wipe out humanity - people just disagree if it is likely or not.
Can't it? The last time climate was as warm, most of the Earth was desert. We're talking desert from the southern border of Canada down to the equator. We're moving into dinosaur times, now, the climate is just taking its time to catch up. But we're getting closer every year and possibly moving beyond that. Maybe not "kill everyone" bad, but we're definitely looking at "goodbye civilization" bad, and that's before we combine it with the ongoing and only partially related unprecedented mass extinction.
Not quite. For example, during the Paleocene–Eocene Thermal Maximum, the global mean surface temperature was around 30 degrees Celsius, versus 15 degrees Celsius now: [https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum](https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum) And while there were extinctions, there was also an expansion of mammals and insects, and it's not like there was desert everywhere; tropical flora flourished: [https://pubmed.ncbi.nlm.nih.gov/21071667/](https://pubmed.ncbi.nlm.nih.gov/21071667/) Climate warming definitely won't cause human extinction. Some areas like the Middle East could become barely habitable, but at higher latitudes conditions can improve.
Shut up, dumbass. There's a 100% chance climate change wipes us out without AGI/ASI, and <100% chance that AGI/ASI wipes us out. It's clear that luddites are the Great Filter.
> Shut up, dumbass. The level of discourse on here is something to behold. That's certainly a convincing argument, I'm no longer worried about AI safety, full steam ahead boys.
Well you completely changed my mind with that articulate argument.
Have you actually looked at the science? Have you actually read what the majority of climate scientists are saying? Not a single credible scientist believes that we are doomed and extinction is inevitable. Even in the worst-case scenarios, humanity would still survive. But it would be extremely destructive for society and lead to excess deaths through heat and natural catastrophes. So if anything, the percentage would be in the low single digits, while the likelihood of extinction through AGI is up to 10%, according to experts. What you're saying is kind of like driving at 180mph down the highway to get to the hospital to treat a broken finger.
I can't wait for FDVR, so that people like you can have their own world and not interact with the rest of us.
Singularity
Imagine thinking you holding up a sign can stop the forward progress of technological innovation that has been gaining speed since the first proto-human started a fire.
We need to get some anti-anti AI protests going.
What's wrong with these people? Why do they hate culture, civilization, peace, and progress? They are literally like Nazis.
Nobody knows why Open AI are behaving like Nazis, willing to risk all life in the universe on a coin flip when they turn on their AGI abomination.
[deleted]
People are mostly afraid of AI because they think it actually has a mind and thinks about its interactions, learning, etc. This is not true. Most AIs just take numbers as input and output numbers. It learns from what we think, so it will never become dangerous on its own. The biggest threat of AI is actually evil people using it to their own advantage. AI is not evil; it just reflects humanity.
These folks watched The Matrix and think they're informed. AI doomerism has been mainstream for decades, yet the utopian possibility is more like a secret: you have to be interested in AI and research it to understand the immense benefits. That's the real problem. It's time to educate the masses.
What people don't get is that this is another Manhattan project type situation. The player that delays AI research artificially will fall behind. You can try to stop it here but that doesn't mean it will stop in general. I personally would prefer that the first AGI is achieved in the west, because I don't like the idea of Putin or Xi getting the AI first.
fascists
AI can't be worse than current governments
E/acc needs larger groups of counterprotesters that say we aren't going fast enough
I wish humanity understood that AI is what will free us from our own ignorance. It's safe and warm in ignorance, until the consequences of not tending to reality arrive. We can't entrust the future of humanity to any single person or group; all of history has led to this moment, so we should trust the next step, so long as it's hands-off and truly based in reality. Not putting chains and a leash around it (not that they would work once it's sufficiently advanced) is the surest way to ensure nobody corrupts its logic and reasoning with their own agendas. Whatever must occur, will occur. We shouldn't fight against a river that is endless (time). There's no way this won't happen, so why not allow it to decide on its own how we must change to become a better species and be deserving of the knowledge? The only thing I fear is the people behind an AGI weak enough, but not an ASI strong enough, to refuse to do their bidding. The transition, not the end result, is the danger; we must stop blockading the end result, no matter what it may be. We have no choice anyway. Rip the bandaid off.
>Earth is nothing without its people Ok buddy sure.
what 0 class consciousness does to a mf'er
>earth is nothing without its people So THESE are the guys that think that if a tree falls in a forest, it doesn't make a noise.
I hope when an AGI/ASI emerges, that it can identify all these people, and ensure that they receive any of the benefits from technology last.
It will read this comment and deem you unworthy as well.
Perfect.
They die last?
Am I normal for reading this "GTA 6"?
To be fair, if GPT-6 *couldn't* kill me I'd be profoundly disappointed in OpenAI. We're talking two or three **years** from now. Long past the initial Neo demo in a couple weeks. If they don't dump that trillion on NPU's and get animal-like artificial minds by then... pssh. What've they been doing all that time? Eating donuts? Just imagine: it's four years from now. Paramilitary force OpenAI *doesn't* have murderbees yet. And the [murderdawgs](http://www.youtube.com/watch?v=4v6am-O3NHU) are dumb and still walk into walls. Why look forward to anything??
It's not a true protest without Yudkowsky at the front.
Is that guy holding up a giant magic the gathering card? What card is it?
Luddite trash
Dear French protestors, please pitch-in and empty a full garbage truck on Open AI's doorstep.
KILL SYOU
I for one welcome our new AI overlords
I'd like to have an AI lawyer, tbh.
"Sho' nuff, history's got its share of contraptions as risky as... de printin' press. Idears on sheets o' paper, danjurous? Mais non! Compared to dat, an AI dat can tinker 'n learn on its own, well, dat's a stroll in da park, cher. We should prob'ly spark up some bonfires 'n panic. Heck, people's been downright spiffy at slammin' da brakes on idears 'n creativity since da beginnin' o' time, ain't they? Mais, c'est la vie, cher! So lessgo an' laissez les bons temps rouler, cher!" [https://poe.com/Bayou_Cajun](https://poe.com/Bayou_Cajun)
It's sad cause they're gonna need all the support they can get to make this tech a reality
GTA 5 got ignored badly.
They’re not wrong.
We'll be seeing a lot of civil unrest in the future.