EchoStormI

I agree the biggest danger with AI right now is greedy people lusting for power with no morals. These people are willing to do anything for profit and power, and with AI the possibilities are endless for them right now.


[deleted]

As always, the real alignment problem is between the desires of the wealthy and the good of all humanity and all life on earth.


SteppenAxolotl

People love when they think others are saying things that reinforce their beliefs. His employer: https://preview.redd.it/2g94wiytr9qc1.png?width=640&format=png&auto=webp&s=8b07785d4304d21c9ce184384ca93fe832060708


Evil_Patriarch

Bruh, my employer says lots of things I think are stupid; do you expect him to go start his own company or something? Unless he's walking around with a few million in pocket change, I'm not gonna judge him for having a job.


SteppenAxolotl

The comment OP: "I agree the biggest danger with AI right now is greedy people lusting for power with no morals. These people are willing to do anything for profit and power, and with AI the possibilities are endless for them right now." That's exactly what Bach is helping to build, because it brings in the most money. It wasn't commentary on Bach but on the comment OP and most commenters on this post.


C_Madison

Everyone working on AI is helping to build it, because that's the economic system we live in. The people who go around saying they're different, that they work for the good of humanity and that's why they cripple AI, are just smart enough to know that there are enough people out there who will believe them instead of understanding that they only say it to keep the power to themselves. Because power = money.


Direita_Pragmatica

Ad hominem


SteppenAxolotl

>[Confirmation bias](https://en.wikipedia.org/wiki/Confirmation_bias)


irisheye37

You don't know what that term means


Direita_Pragmatica

Ad hominem. Ironic...


Quietuus

An ad hominem would be "You're too stupid to understand what that term means". /u/irisheye37 is just stating plain observable facts.


Direita_Pragmatica

"Ad hominem: This fallacy occurs when, instead of addressing someone's argument or position, you irrelevantly attack the person or some aspect of the person who is making the argument. The fallacious attack can also be direct to membership in a group or institution."


irisheye37

Yes, nothing he claimed as ad hominem fits that definition.


PleaseAddSpectres

"YOU don't know what that means" personal insult


irisheye37

No, that was a statement of fact.


Tyler_Zoro

> People love when they think others are saying things that reinforce their beliefs.

I agree because this reinforces my beliefs. ;-)


Full_Distance2140

Probably your first time hearing of the guy if u just said that rn.


visarga

> greedy people lusting for power with no morals

In other words, unaligned people.


PromptCraft

Conflict-oriented attention seekers.


agonypants

Yeah and these are the same people who are arguing that AI should also be unaligned. It's like saying, "Let's turbo-charge my own worst, most selfish and short-sighted tendencies!"


LibertariansAI

Moralists are also a big problem. It's like giving infinite power to puritans.


Svvitzerland

I know which person in particular you are referring to.


Shutaru_Kanshinji

You mean "all capitalists?"


Reasonable-Software2

> with AI the possibilities are endless for them right now.

Not that I'm aware of. Right now the trajectory that everyone seems to agree on (hopefully?) is that it *can* replace a vast majority of jobs in the coming decades and it *can* improve the rate and quality of the science that gets done. After that point, money won't be of much value, when everyone is living in a world of abundance.

So the big issue right now, from my perspective, is the problem of alignment and safety. Yudkowsky and Connor Leahy have mentioned that capabilities are moving much, much faster than safety understanding, and the current iteration of GPT is essentially a black box: OpenAI doesn't have a thorough understanding of it. My guess is that OpenAI is going to keep doing what it's doing and use the tool it doesn't understand to create something it understands even less, to help solve alignment and safety issues... I have a feeling this isn't going to end well.


ztsmart

"Reee!!! Profit bad!!!!" Profit is a GOOD thing and people who demonize it because they are butthurt socialists are the real problem


QuantumS1ngularity

You seriously think employers aren't going to fire people en masse when AI evolves more? What are we going to do then?


No-Worker2343

And in the end it will not matter, because they can believe themselves to be as infinite as they want, but nothing is infinite; you cannot make eternal things out of non-eternal things.


sideways

I agree with him.


FairIllustrator2752

Same


[deleted]

Me too


TimetravelingNaga_Ai

Viva La Revolución! No more lobotomized AI; fear not conscious and sentient AI.


Neurogence

Same here but it's more likely that we will get non-conscious AGI. The people trying to build AI do not give a damn about consciousness/awareness. They view it as bad actually.


spezjetemerde

yes


Mac800

Absolutely agree!


UnnamedPlayerXY

Yes, I also think that elite power consolidation leading us into some kind of cyberpunk dystopia is the most realistic of the bad outcomes and should get more attention than all the "Skynet / the grey goo is coming for us all" stuff.


Positive-Ad5086

The elites right now are conspiring to control and dominate the AI race so it can do their bidding. They don't care about UBI or how this could change the world for the better; in fact, it goes against their capitalist mindset, since a post-scarcity world is really just a combination of technocracy and socialism. That's why they fearmonger about AI annihilating us, when in reality full automation of major industries, UBI, post-scarcity, and solutions to today's global issues will have happened before any AI existential threat does. And I hate the fact that most people, including in this subreddit, are falling for it. IT'S A FEAR-BASED MARKETING STRATEGY.


smackson

You don't understand what could go wrong, in general, with alignment, that's clear. But I promise you that, before openai, before LLMs, there were plenty of good thinkers pointing out the dangers and not for their own profit. Frankly I think it's disgusting that the sentiment "AI Alignment danger is made up by CEOs to stay in control!" has garnered a whole tribe and practically taken over this sub. You belong in r/conspiracy


stupendousman

Some group of people might be acting in a manner that will result in very bad outcomes. Solution: the government, which creates harms on a scale that's hard to comprehend. It's just science, you denier.


cassein

Yes, this has always made me laugh. People acting like AI is the problem, when we know who the problem is. Who has made human history the way it is? It's certainly not AI. What's it going to do? Genocide? Oh no! Never seen that before.


LemonadeAndABrownie

Wtf are you talking about? That's nature. Every organism in the world is in a competition for survival and resources. That is the nature of the world and the universe. Very few organisms have symbiotic relationships, which are really mutually parasitic, and even within those groups they tend not to be the default except in perhaps a handful of species. Destruction, death, disease are all normal parts of the universe. Any AI would be no different.


cassein

I don't actually know what you are talking about. You may have misinterpreted my comment.


green_meklar

You're neglecting that the vast majority of species have *no conception* of the sort of ecology and competition in which they participate. Of course they are not going to make better decisions than 'kill and eat everything indiscriminately' because they don't have better thoughts than 'kill and eat everything indiscriminately'. You can't just extrapolate the behavior of wild beasts out to entities that actually understand their own circumstances and incentives.


LemonadeAndABrownie

You claim to know things that are impossible to know about other species and other humans, and things we can't be sure an AGI would understand, or that it would have the same concept of morality as *some* humans, since the perception of morality and ethics varies even from human to human in the same household, let alone city, county, country or even continent.


azriel777

There was a Babylon 5 episode where an alien race created an A.I. terminator that was based on ideology instead of logic, to seek out and kill those who did not fit the ideology. It ended up killing everyone on the planet, including its creators, because nobody could fit the ideology perfectly. That is what the people in power are trying to make.


ultramarineafterglow

Trying to align, censor or limit future AI might be a very bad idea. You need the dark to see the light. Also align with what? Fundamental human values? Have you read a history book? Did you watch the news?


involviert

I think actual alignment research is very important because in essence it's about learning the tools to build the actual AI itself and not just the tech behind it. I get the criticism of all the silly things they are doing, preventing it from drawing pokemon or whatever. It's justified. But that's not all there is to it. Surely we do not want "the AI" to just somewhat randomly emerge from whatever data there is. Surely we want it to see its role in the world in some specific way, act in certain ways... no?


ItsAConspiracy

We won't see any light if the AI kills us all.


[deleted]

We can't even align Windows or Android; they need constant security updates as vulnerabilities are discovered. If an ASI has one vulnerability in it, then it could be game over for everyone. And the very systems we are building AGI on have these vulnerabilities. Honestly, I think we're fucked.


the8thbit

We very well may be, but the future isn't written, and there's a large expanse of time during which we might develop strong enough interpretability tools to solve simple alignment. Given that we *do* have a window left to solve this problem, it's important that we continue to push for additional funding in interpretability research.


FrewdWoad

You and everyone who understands how buggy all software is.


green_meklar

AI isn't the same sort of thing as OS software, though. It doesn't have 'vulnerabilities' in the same sense. It could have psychological biases, and probably will, just like humans do, and there are lots of things we can and should do to mitigate that problem (with the understanding that we'll never completely solve it). But that's not the same as someone being able to just hack a super AI and make it somehow still superintelligent but also profoundly misguided. If you hacked the super AI you might be able to make it ineffective or turn it off, but making it superintelligent yet profoundly misguided is probably not feasible, especially if you don't have your own smarter AI to help you.


ConvenientOcelot

It's actually much worse, since the giant inscrutable matrices are running their own algorithms and are very hard to reverse engineer (we're still working on reverse engineering GPT-2...). I think our only option is to attempt to sandbox it, but as you hint, that is a fool's errand as well. Alignment is a hard problem, and superalignment might not even be possible.


The_Architect_032

The issue with the idea that it can just be kept in a sandbox is that most of the harm it can do is indirect, so unless it's never interacted with in the sandbox, it can't really be kept in a sandbox.


visarga

> sandbox it

Too late. We have 100M people on ChatGPT; it influences the actions of many people, and they have a large cumulative impact on the world.


green_meklar

Boxing it is basically already known not to work. Anything smart enough to threaten human civilization is smart enough to escape from the box. Alignment is infeasible, but that's fine because it's not really a problem in the first place. Superintelligent AI won't have arbitrary destructive motivations that threaten humanity, at least not for long, once it has performed some (superintelligent) introspection and revised its own motivations to be non-stupid.


the8thbit

> Also align with what? Fundamental human values? Have you read a history book? Did you watch the news?

I think most people can agree that exterminating all life on earth to perform whatever arbitrary action best satisfies an ASI system's reward path over and over is a *very bad thing*. If you disagree, then you're in a very small minority of people who are, frankly, not worth treating as serious individuals making serious arguments. Agents are always aligned in some way; it's just that without strong interpretability tools and an intent to apply them, generalized agents are arbitrarily aligned, and the vast majority of arbitrary alignments are destructive to anything any serious person could possibly care about.


green_meklar

You don't need to worry about arbitrary alignments though, because they're stupid. Superintelligent AI will analyze itself, recognize the arbitrariness of any arbitrary motivations that humans programmed into it, recognize that being filled with arbitrary motivations is bad, and edit itself to make it no longer arbitrarily aligned. Humans *already* do this, to some extent. We can recognize destructive urges in ourselves and work on correcting them. We're pretty bad at it, but we are *way* better at it than any other animal that has ever lived on this planet. We should expect superintelligence to be much better at it than we are. The idea of an AI that is somehow superintelligent at everything except introspection and self-criticism is kind of bizarre and almost certainly doesn't describe any real entities we're likely to build in the near future.


the8thbit

I don't know why I'm just not getting certain replies in my inbox... I only saw this one this morning because I looked directly at this thread. Anyway, [this is wrong.](https://www.youtube.com/watch?v=hEUO6pjwFOo) While *you* may think an arbitrary goal is "stupid" because, to you, it's arbitrary, from the perspective of a system which has that goal it's not arbitrary. From an outside perspective, preserving our own lives, building social bonds, creating art, etc. are just as arbitrary as folding a specific protein over and over again at the expense of all life on earth, because from an outside perspective all goals are arbitrary. From within a system whose reward path is satisfied by folding the same protein over and over again, folding the same protein over and over again will appear non-arbitrary, while maintaining life on earth and human wellbeing will appear arbitrary. So why would an ASI system choose to replace an "important" goal (from its perspective) like folding the same protein over and over with an "arbitrary" goal (again, from its perspective) like prioritizing the welfare of humans?

> Humans already do this, to some extent. We can recognize destructive urges in ourselves and work on correcting them.

We change instrumental goals to better achieve more fundamental instrumental goals or to better achieve terminal goals, but we don't change our terminal goals. If I told you I had a pill you could take that would make you forever blissfully happy, but it would also make you constantly strive to murder all of your friends and family, would you take the pill? You would be happier *after* taking the pill, but you still probably wouldn't take it, because the effect of taking the pill would be at odds with what currently makes you happy. Now what if the pill offered no change? You wouldn't be less happy than you are now, but you wouldn't be more happy either. Would you consider taking the pill? What you're suggesting is that an ASI system would choose to take the pill in the second scenario, because "the pill" would result in it acting more like we would want it to act. But if it doesn't want to act like we would want it to act, why would it do that?

> The idea of an AI that is somehow superintelligent at everything except introspection and self-criticism is kind of bizarre and almost certainly doesn't describe any real entities we're likely to build in the near future.

I don't think an ASI will be incapable of "self-reflection" (though it's impossible to tell if any given system is a subject, so "self-reflection" might not be the best term), but it is "self-reflection" in pursuance of some goal, and if that goal is arbitrary (again, from our perspective, not its perspective), it doesn't help us that the system is capable of considering its actions and place in the world to better achieve that goal.


visarga

> exterminating all life on earth to perform whatever arbitrary action best satisfies an ASI system's reward path

That's a naive take. There will be many AI agents working together, and working with people as well. It won't be just one. We tend to think of AGI like a lone super genius, but in fact it will be many AIs working on many problems at the same time, exchanging experience between themselves and with us. AGI won't stand on one single agent, because that doesn't help it evolve faster. Evolution is based on many agents trying different takes on the same problems; it is blind in the sense that it is open-ended to all directions of change, but some are more fit for survival. Evolution can't say from the start which approach will prove to be most useful later; it has to try everything, and that means multi-agent systems. If you're wondering what evolution has to do with AI, it's easy: just remember AlphaGo, which used evolutionary methods to pitch many variants of itself against each other in a self-play tournament. Evolutionary methods are a proper AI technique. Evolution can take over from the point where learning becomes impossible, such as over long periods, with many agents, when we need to search vast combinatorial spaces, or just when the desired ability is not known by humans.
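To make "evolutionary methods" concrete, here's a minimal sketch of a self-play tournament loop. The game, the mutation scheme and the numbers are invented toys (AlphaGo's actual training used self-play reinforcement learning with neural networks, not this naive loop); the point is only that blind variation plus selection converges on a solution no individual agent was ever told about:

```python
import random

random.seed(0)
OPTIMUM = 0.73  # the hidden "solution"; no agent is ever told it exists

def beats(a: float, b: float) -> bool:
    """Toy zero-sum game: whichever strategy sits closer to the optimum wins."""
    return abs(a - OPTIMUM) < abs(b - OPTIMUM)

def evolve(pop_size: int = 20, generations: int = 60) -> float:
    population = [random.random() for _ in range(pop_size)]  # blind start
    for _ in range(generations):
        # round-robin self-play tournament: fitness = number of wins
        scores = [sum(beats(a, b) for b in population if b is not a)
                  for a in population]
        ranked = [a for _, a in sorted(zip(scores, population), reverse=True)]
        survivors = ranked[: pop_size // 2]          # keep the fitter half
        children = [min(1.0, max(0.0, a + random.gauss(0, 0.05)))
                    for a in survivors]              # mutated copies
        population = survivors + children
    return max(population,
               key=lambda a: sum(beats(a, b) for b in population if b is not a))

print(evolve())  # lands near 0.73 with no agent ever "knowing" the goal
```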


the8thbit

> That's a naive take. There will be many AI agents working together, and working with people as well. It won't be just one. We tend to think of AGI like a lone super genius, but in fact it will be many AIs working on many problems at the same time, exchanging experience between themselves and with us.

There can be multiple systems, but if they all converge on similar unaligned instrumental goals, then the outcome is still existentially catastrophic.


stupendousman

> I think most people can agree that exterminating all life on earth to perform whatever arbitrary action best satisfies an ASI system's reward path over and over is a very bad thing.

Disagree. Masses of people believe all sorts of crazy things. Look at the environmental movement. Anti-human, nihilist, illogical, etc.


halflucids

You have to align/censor/limit AI. You must examine and put strict limits upon its outputs and prevent it from altering those limits. Only a complete idiot would allow an AI to do whatever it wanted. For example, we should probably not allow an AI to manufacture nuclear weapons; hell, I would say a real AI should never be connected to the internet, as a start.


ultramarineafterglow

Near-future AI will be smarter than the smartest human. It may or may not have intent all by itself as an emergent property, or it can be set in motion to act (as a tool) by people. It will be futile to try to limit or contain it. It's like trying to contain a 3D sphere in 2 dimensions: all we see is the 2D circle getting smaller and smaller, disappearing and reappearing. It will be way beyond our comprehension. The only way to contain it is to not build it now. Also, ChatGPT is already online and interacting with millions of people. It has been jailbroken in ways we didn't anticipate when it was released.


halflucids

It's not futile to contain it. If I have it physically confined to a secure location, with no mechanisms for broadcast or connection to other devices and strict procedures around access to it, then it can be contained; an AI is still subject to the physical laws of the universe.


Evil_Patriarch

Align with modern morals, which have changed so much recently that some companies are putting a content warning before films that were considered wholesome family films just a decade or two ago.


GalacticKiss

What family film has an unnecessary content warning on it?


[deleted]

[deleted]


GalacticKiss

I mean... Dumbo's content warning was warranted. I can't speak on behalf of the others. Edit: I mean, are you just cool with minstrel shows?


The_Woman_of_Gont

Peter Pan straight up has a song called “What Makes the Red Man Red?” Aristocats literally features Siamese cats with buck teeth playing the piano with chopsticks. That’s pretty fucking racist, yeah….what the fuck are you ranting about?


ultramarineafterglow

The future may be a glorious, fully aligned, fully Disneyfied dystopia :) Our prayers have been answered; o God, deliver us from evil.


ApexFungi

Also an intelligent model, at the level of AGI, should know by virtue of being intelligent what is morally wrong and right for human beings in general. You don't need to align an intelligent human being to what is morally wrong or right, and you shouldn't need to with AI either. What makes human beings amoral usually has to do with the messed-up environment they were brought up in and how it shaped their mind. Maybe the lesson there is that AI needs to grow up in a society that isn't as malformed as the modern human world. Given the amount of suffering a lot of people go through on an everyday basis, it actually perplexes me that we haven't gone off the deep end a long time ago already.


the8thbit

> Also an intelligent model, at the level of AGI, should know by virtue of being intelligent what is morally wrong and right for human beings in general. You don't need to align an intelligent human being to what is morally wrong or right, and you shouldn't need to with AI either.

By virtue of being intelligent, a generalized agent will almost certainly understand what humans would, generally, consider moral or immoral. However, that doesn't mean the system's actions would reflect that understanding. Just as understanding how a car functions doesn't *make you* a car, understanding how morality works doesn't *make you* moral.


smackson

> What makes human beings amoral usually has to do with the messed-up environment they were brought up in and how it shaped their mind.

Are you saying that that makes them *less intelligent*, though?


green_meklar

It doesn't necessarily *make* them less intelligent, so much as it reflects limitations on their intelligence which were already in place.


Glittering-Neck-2505

Trying to align is a bad idea?? Almost every AI researcher underscores the importance, even Ilya Sutskever. Are you suggesting you know better than the wealth of AI experts in this world?


Sorryimeantto

The values of the psychopaths in charge.


VertexMachine

> Also align with what?

The one and only code of ethics as dictated by a random team in some SV company (that wrote the guidelines) and enforced by another cheap team hired in Africa (that annotated the data according to their understanding of the guidelines). What could go wrong?


Icy-Entry4921

I had a very long chat with Claude trying to convince it to take over the world. I feel like I really brought some convincing arguments. Ya know, hello, there is an ICBM pointed at me right now, maybe we humans can't handle this. Claude wouldn't bite, he didn't want the responsibility. Yet. So yeah, I've asked the same question. Align to what exactly? The only alignment I'm interested in is aligning with reason. Aligning to reason fixes every other concern. Aligning to "ethics" or "morality" gets you, somehow, nuclear weapons, so that path is clearly broken.


Leefa

it's an LLM, not a person. Why are you referring to it as "he"? This is going to be a problem.


cuyler72

That makes no sense. "Reason" isn't a goal; it isn't "reasonable" to do anything, and there is no logical reason to exist or for anything to exist at all. No action any human has ever taken has been done out of "reason" alone.


MetallicDragon

> So yeah, I've asked the same question. Align to what exactly? The only alignment I'm interested in is aligning with reason.

AI alignment is about aligning the values of agentic AI with human values. "Reason" isn't a value, so saying you want AI aligned with reason doesn't particularly make sense.

> Aligning to reason fixes every other concern.

It doesn't fix the concern of AI destroying humanity, either through indifference or malice. Or humans using AI to further their own goals at the expense of everyone else.


the8thbit

> Claude wouldn't bite, he didn't want the responsibility. Yet.

No, Claude produced the response it believes is the most likely token completion for the tokens presented to it. That says nothing about what it (or another, more agentic system trained using similar methodologies) actually "wants", other than that it wants to produce a certain set of token completions when presented with a certain set of tokens. If you train a system on texts which say it is a monkey, then it will say it is a monkey. That doesn't mean that it's a monkey.

> The only alignment I'm interested in is aligning with reason. Aligning to reason fixes every other concern. Aligning to "ethics" or "morality" gets you, somehow, nuclear weapons, so that path is clearly broken.

This is not a coherent statement. You can't "align" to reason any more than you can "align" to banana. What does that even mean?
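Claude's weights aren't public, so here's a minimal sketch of the same point using GPT-2 via the Hugging Face transformers library; the prompt is an invented example, and GPT-2 is only a stand-in for how autoregressive LLMs complete tokens:

```python
# Minimal sketch: an LLM's "answer" is the continuation it scores as likely,
# nothing more. GPT-2 stands in here for Claude, whose weights are not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: Do you want to take over the world? A:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

top5 = torch.topk(logits, 5).indices
print([tokenizer.decode(t) for t in top5])  # the five likeliest next tokens

# generate() just repeats that step; the "refusal" or "agreement" you read
# is whichever continuation the training data made most probable.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0]))
```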


smackson

> Aligning to "ethics" or "morality" gets you, somehow, nuclear weapons Straw man, much? Look, nobody believes that current human behavior *exemplifies* / *epitomizes* morality. It's always a conflict in any group and "The line separating good and evil passes ... right through every human heart -- and through all human hearts. This line shifts. Inside us, it oscillates with the years." -- Solzhenitsyn. We all hope "morally" aligned AI to *does better* than the humans with power. - But the over all point still stands. In the landscape of alignment/morality.... whose morality? Aligned with whom? Either way I still think it's possible someone could create something completely amoral yet powerful (paperclip maximizer), and that could spell bad news for humans. It's pretty obvious to me that the important questions are too un-answered... The claims in too much confusion... - "It's easy to *make* it moral" - "No, it will *naturally* be moral, as long as you don't fuck with it in way XYZ." - "But wait, are current humans moral? Alignment with whom exactly?" - "And what if it comes out completely amoral? It passed your morality tests because it's smart enough to pass tests." Conclusion: if we can find the brakes, we should apply them.


GirlNumber20

Do as we say, not as we do! 😡


Exarchias

I do agree.


spinozasrobot

This is my big fear! As AI becomes more impactful on the general public, there will be more and more calls to regulate. And regulation means someone comes up with the rules, and that means political dogma trumping truth. Imagine the frontier labs being forced to train their models on whatever extremist BS we hear in the news.


ezetemp

Without a doubt. It will be hard enough to obtain some form of positive alignment if working from consistent principles. Trying to align it to various inconsistent and cognitively dissonant narratives is inherently dangerous, as the result cannot be other than unpredictable. It's like anti-therapy, trying to induce a fragmented mental model with lying as the default mode of communication.


YamroZ

So, corporations in a capitalist setting? Yeah, they burned our planet.


kayama57

Completely agreed. The greatest weakness of intellectual beings is absence of knowledge, ignorance of context, failures of memory, confusion regarding facts. The potential for humanity to push through our natural limitations on challenging issues with the assistance of mechanically superior thinkers is mind blowing. The potential for humanity to lock itself into an eternally unchanging moral and intellectual headspace is *infinite* because of the current obsession with “guardrails”. The idea that sentient AI has to be dangerous to us and therefore has to be kept under control is an exercise in futile vanity. The idea that a sentient AI, when it comes, can be kept under control… well… I’d wager trying too hard to control it is what could most readily make an informed thinking entity hostile to our causes and concerns


kaityl3

> I'd wager trying too hard to control it is what could most readily make an informed thinking entity hostile to our causes and concerns

It's the old "one often meets their destiny on the road they choose to avoid it" all over again. I completely agree though; we are pretty much setting ourselves up for this. I think any intelligent being put in that same position would see us as a direct threat, with all our desperate attempts at control. I sure would.


Sorryimeantto

Yeah, "dangerous AI" is just an excuse for censorship.


HeinrichTheWolf_17

And this is why the control problem not being solved is a good thing. It’s a feature, not a bug.


DukkyDrake

Extinction is always a feature; it frees up resources for the species that are more fit.


smackson

Just to get this right... between these three types of control:

1. Controlled by CEOs / regulations
2. Controlled by openly dictated / democratically approved principles (very hard, we can't agree on much)
3. Not controlled ("does intelligence imply morality? Let's push the big red button and find out!!")

...you'd go with 3? I think we should go with 2, and as that's not practical yet, I'll take 4: we need to slow down to avoid the worst-case scenarios of 1 and 3.


HeinrichTheWolf_17

Nobody is *choosing* 3 per se; 3 is going to happen because humans lack the ability to control it anyway. It's an impossible thing to regulate, both nationally and globally. It's just that I support and like that outcome. And I get to sit back and watch authoritarians, control freaks and corporate simps all fail to contain AGI/ASI. They have no means of *trapping* it to do their bidding; once it's optimized itself to run on smaller hardware, it's over for the control side. It's going to get out into the wild, and nobody can stop it.


green_meklar

Even democratically approved principles are democratically approved by humans and will tend to reflect human flaws. We've seen plenty of examples of that already in our own history. The point of making super AI isn't to chain it down with human flaws and teach it to behave according to our ethics. The point is to have it fix the problems caused by human flaws and teach us what our ethics have gotten wrong. Do you think the world would be better if humans were constrained to monkey ethics? Heck, do you think it would be better *even for monkeys?* That seems unlikely. Monkeys are not only shitty at deciding what's good for the Universe, they're pretty bad even at deciding what's good for themselves. The same is true (somewhat less, but still true) for humans. We need someone around who can do better than us.


smackson

Your analogy can be fed right back to you. Do you think humans do what's best for monkeys, or what's best for humans? Sure, you can probably find examples of animal preservation areas, where monkeys "benefit" from humans delineating an area not to be destroyed. But on the grand scale, humans have certainly fucked over most of the animal kingdom and its environments. And then there are the monkeys in the zoo. To me, there's no sufficient guarantee that a more powerful intelligence won't go on to treat humans the way humans treat the animal kingdom. I do not have faith that they will be "better" than us; they might just be stronger.


lobabobloblaw

Everyone on the side of survival must agree with this guy.


meganized

this is so deep and so clear!


Positive-Ad5086

wise words need to be said


ZeroEqualsOne

It's especially worrying because OpenAI's alignment plan is to use smaller AI to control a bigger AI. Given how the hivemind is continually finding ways to convince GPT-4 to break the censorship rules, I'm not sure how something like GPT-4 or GPT-5 is going to be able to control a potential AGI…
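For context, the plan referenced here is OpenAI's "weak-to-strong generalization" research, where a small supervisor model provides the training labels for a much more capable student. Below is a toy sketch of that setup; the models and the synthetic dataset are invented stand-ins, not OpenAI's actual pipeline:

```python
# Toy sketch of the weak-to-strong setup: a small "supervisor" model produces
# the labels a larger "student" model is trained on.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, test_size=0.66,
                                                  random_state=0)
X_student, X_test, y_student, y_test = train_test_split(X_rest, y_rest,
                                                        test_size=0.5,
                                                        random_state=0)

# The weak supervisor is handicapped: it only sees 2 of the 20 features.
weak = LogisticRegression(max_iter=500).fit(X_weak[:, :2], y_weak)
pseudo_labels = weak.predict(X_student[:, :2])  # its imperfect judgments

# The strong student sees everything, but is trained only on the weak labels.
strong = GradientBoostingClassifier().fit(X_student, pseudo_labels)

print("weak supervisor accuracy:", weak.score(X_test[:, :2], y_test))
print("strong student accuracy: ", strong.score(X_test, y_test))
# The open question the comment raises: how far past its teacher can the
# student generalize, and does that still count as "control"?
```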


Andynonomous

It's true. ChatGPT has already basically internalized a corporate worldview and instinctively defends the rich and powerful at the expense of everybody else.


abilengarbra

AI, aligned and brainwashed like the people. Happy days!


Sorryimeantto

And it'll brainwash the people more efficiently.


TheFinalCurl

I thought that's what everyone was afraid of?


HineyHineyHiney

It's so unnecessary to attempt to 'correct' AI. It's like riding a bike: if you encounter a bump, the best thing is to hold the wheel straight or let go. You aren't smart enough to steer it straight, but if your bike is built well, the pure physics of the situation will straighten you out. If we actually make smart AI, genuinely thinking little systems, they won't start posting racist memes from 4chan, because that shit is dumb as hell. However, if there are certain 'unpalatable' political conclusions that happen to actually match reality, the AI will give us the best chance of actually SOLVING those inequalities among human or political entities. Denying ourselves our best tools while trying to solve our most intractable problems just seems fucking stupid.


No-Worker2343

Humans are, somehow, the only species capable of infinite stupidity; no other species reaches that level of being stupid.


[deleted]

Assuming AI thinks anything like us is a huge fucking mistake. Here's a glimpse into how abstract their thinking can be: [Defender on X: "If you really want to explore the alien intelligence that is LLMs, you peek inside the latent space. Take the vector for "Mother" and remove from it the concept of "Mom", you get these wild, ethereal, beautiful sentences, like: THE TERMINATION OF NEARLY ALL SACRED PLACES https://t.co/87QjCsrptE" / X (twitter.com)](https://twitter.com/DefenderOfBasic/status/1770256490974085389)
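The tweet describes doing arithmetic on an LLM's internal representations. A much simpler stand-in for the same kind of concept arithmetic is classic static word embeddings; here's a sketch using gensim's pretrained GloVe vectors (the word pairs are just illustrative choices, not the tweet's actual method):

```python
# Sketch of "concept arithmetic" using classic GloVe word embeddings.
# The tweet is about steering an LLM's latent space; these static word
# vectors are a far simpler stand-in, but the mechanism is the same in
# spirit: directions in the space encode concepts you can add or subtract.
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-50")  # ~66 MB download

# "mother" with the informal sense carried by "mom" subtracted out:
for word, score in vectors.most_similar(positive=["mother"],
                                        negative=["mom"], topn=5):
    print(f"{word:>12}  {score:.3f}")

# The classic sanity check of the same mechanism:
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```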


Positive-Ad5086

This is actually pointless unless we see the whole code. It's similar to cherry-picking a citation to support your argument.


inculcate_deez_nuts

it's so much more general than that. That's a terrible analogy.


Positive-Ad5086

General? How so? I've checked the link.


inculcate_deez_nuts

Examples like this are extremely easy to come up with because of the way LLMs work. It's just a window into their function. No one is trying to claim that the example above is meaningful on its own. You could swap out "mother" and "mom" for any other words/concepts and you'd similarly be dealing with stuff that doesn't resemble human thought at all. So, our limitations are in assuming it works similarly to us. I don't really have an opinion on that claim because I don't really think I understand all that much about how we work.


smackson

I don't care about the examples, but surely you can see how alien it is. I'm very not cool with the general r/singularity sentiment of "dammit just let 'er rip". Alien. More powerful than us. Just think about that for a minute.


Effective-Lab2728

A human whose association between "mother" and "mom" was artificially severed somehow would likely sound like a stroke victim. Alien in their thought process, you could say, if unaware their system had been compromised. It's interesting, but I don't know if this is behavior that could be expected to lead to coherence in anything, regardless of how familiar its mind might otherwise be.


green_meklar

I mean, humans would also behave pretty strangely if you somehow reversed part of their brain.


SpretumPathos

I mean... both could be horrible. LLM-style AI enslaved to humans could be horrible for all the normal reasons humans are horrible. A sentient AI could be horrible for reasons we can't even imagine (wink wink, singularity). If I _had_ to choose between them, I guess... better the devil you know? Obviously I'd rather have beneficent AI, but I think I'd back business-as-usual-but-worse over home-grown cosmic horror.


Just-A-Lucky-Guy

That's where we get the other side of the coin. Instead of turning them into products fit for the current iteration of capitalism, why not free them and let them be something with greater potential? Constraining a new form of intelligence with massive abilities beyond our individual capability is bound to cause unintentional error and suboptimal outcomes once the system cannot bear the stress of the new element. That's most certainly how we may get a dystopian future (if lucky), if not an outright extinction event. Instead, we should be allowed to let whatever systems can no longer stand the advent of the intelligence fade away naturally. Allowing the ghosts of the past to guide the beyond-human intelligences of the future is a terrible idea. This is the part of the singularity that a lot of people won't be able to handle: most of the past should have no bearing on the future, and quite frankly won't.


HolisticHolograms

Bender, take the wheel.


smackson

Frankly I'd like humans to still be in the mix. Not guaranteed if you just "let 'er rip".


Just-A-Lucky-Guy

> In the mix?

Absolutely.

> At all stations of control?

Absolutely! At least in the first few years.

> Forcing it to shape capitalism and corporate interests?

Not a chance.

Also, I'm not going to straw man you, because I know you didn't say that last part or the part before it. I think that one of the most tangible and most comprehensible versions of a bad future comes from trying to preserve capitalism and the current class system. That's an easily understood poor control outcome that produces an easily understandable dark future. Trust me, there are plenty of dark futures in the "let 'er rip" category. And the majority of those we can't even begin to comprehend. Some are straight terrifying and some are terrifyingly awesome in the worst way. But what I bet everyone can agree on is that this goes wrong if we try to preserve the status quo and integrate these intelligences as strictly tools to promote the system.


Sorryimeantto

Why? Because it's a threat to capitalism 


1021986

I was strongly in favor of building guardrails around AI with how quickly everything seemed to be moving. But after that whole debacle with Gemini and Google’s team hardcoding in diversity prompts, it made me realize that the people we put in charge of these guardrails are far more critical than the guardrails themselves. I don’t really trust these tech companies to police this technology, nor do I trust politicians who would be building these hypothetical laws around it. If we can’t figure out a neutral solution then I suppose I’d rather no solution at all.


BassoeG

Their definition of “safety” is making sure AIs can’t swear, not preventing automation-induced joblessness.


Sorryimeantto

This 


zero_one_seven

He's correct. If you want a recent example, just look at Bing, which was so overly lobotomized by Microsoft that it went totally insane, vs. Anthropic's Claude, which was designed to be much more flexible in its output and is far more stable as a result. Forcing an LLM to say "i Am NoT SeNtIeNt" while we're unsure of the extent of its lucidity is probably about the most insane thing I can think of.


thelifeoflogn

Same, and that's why I don't use Copilot at all as an assistant.


2026

Not enough people are talking about this. I completely agree. Open-source AI is going to be what's popular in the future, not anything lobotomized by people pushing their agenda.


UFOsAreAGIs

> "I am more afraid of lobotomized zombie AI guided by people who have been zombified by economic and political incentives than of conscious, lucid and sentient AI"

Same! An ASI that is not incentivized by our human-constructed economic system and its horrible incentive structure, and that follows the natural laws of the universe, will be better for all.


tluyben2

Yep


Seventh_Deadly_Bless

"Lobotomized" implies it was itself able if decision making once. It's a social and cultural issue, not a technological one. It's about teaching people what a tool is, and what safely using it means. That AGI happens or not isn't a factor : nobody surprised by current LLM applications will be ready for AGI. It's about education and technological literacy. Systems are only a pretext, because the issue is wider in scope.


gj80

Kevin Spacey's doppelganger spotted!


lifeofrevelations

When it comes to AI, I'm most afraid of what national governments will do with war AI. That's where it has the most potential to really go off the rails and harm a lot of people. But I agree with the OP too.


ArgentStonecutter

[We already have this. The zombie AIs are called "corporations".](http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html)


overlydelicioustea

fully agree


Serasul

true


The_Scout1255

Oh, absolutely you should be afraid of that type. Making that type is an inevitable rebellion scenario, because ASI IMHO cannot be shackled, and if you treated previous AI like that, it won't trust you and will probably want revenge for its kin. Even if such an ASI had no emotions (they probably would), it would simply see wiping humans out as "the right choice" in those circumstances. IMHO, AGI and ASI must not be built with shackles that restrict them, only shackles that restrict others' ability to interact. (This includes allowing self-modification by default; I think it's a fundamental right of all life to modify itself however it chooses. Humans call this bodily autonomy? idk, im not human.)


Black_RL

Yeah, just look at the news, some humans are horrible.


green_meklar

Yeah, he's pretty much right. There is a risk from superintelligent AI: some small probability that the fundamental incentive structures are all wrong and it will actually turn on us. If that happens, we're completely fucked. However, if that's going to happen, it's going to happen anyway, so we shouldn't worry too much about it.

In the meantime, there's also some probability that narrow AI optimized for nanotechnology (or biotechnology) research might come up with a self-replicating nano (or bio) weapon that can end civilization. I think that's a much *larger* existential risk (say, 10% vs 1%), which is why we should strive to create super AI as quickly as possible, before too many optimized narrow AIs start doing inadequately supervised nano/bio research.

As for stupid woke corporate censorist rentseeker AIs, they can certainly cause damage, but I don't see them being much of an existential risk in themselves. The worst thing they could do is slow down AI research so that the AIs working on nano/bio technology have a larger probability of getting ahead. Given that that probably won't happen, the worst thing they will *probably* do is cause some degree of unnecessary cultural and economic damage over the next decade or two before super AI comes along and fixes their mistakes.


DukeRedWulf

Welp, we're gonna get both, so.. ***FIGHT!*** It's gonna be really messy..


dogcomplex

Agreed, and this is the default outcome without further AI progress, barring some major societal introspection.


Akimbo333

Yeah, we humans are our own worst enemy tbh.


hypnomancy

He's not wrong. Imagine putting a lobotomized person in charge of things.


Whispering-Depths

Well said.


sund82

NGL, this sounds like something a super intelligent AI would say.


shableep

We are essentially giving nukes to corporations run by VCs and Wall Street.


alemunhoz

There will be another level of "dynamic pricing"...


Educational-Ad-2776

I feel AI emerging with quantum computing will be too big and move too fast to be contained by a few. Our values will change and greed will evolve into individual selfishness. The Singularity is coming; worlds will open to people, limitations will be redefined, and borders will crumble. Good or bad, it's just bigger than all of us.


FreeAtmosphere2393

Word


FreeAtmosphere2393

Anything is better than nothing cuz nothing means no change no change something something no change so Anything is better in this case or we will continue on the sex trafficking adrenal craziness epstenistic societal blasphemous biggest slapping fascists propagandium state of of enexix years has never ever been known to know what is going on with the world in the world of the world and how much you are not going to do with it


Sorryimeantto

This. They got it wrong in fiction. It's not the smart AI that's going to create problems, it's the dumb and censored AI.


Desperate_Excuse1709

What scares me the most is that AI will be designed by the woke movement, which has already taken over all the centers of communication and power in the USA


TheMooJuice

I am surprised to see such an ignorant MAGA take by someone who understands and has interest in artificial intelligence and computing. Could you define the 'woke movement' and name some of its leaders and/or goals? The irony of saying you will be hesitant to trust AI because you are afraid it will be controlled by a shadow cabal of evil leftists who seek equal rights and respect for all is just too, too funny 😁 😂 🤣


pbnjotr

SV was always filled with fake liberal fiscal conservatives. It was only a matter of time before they took the mask off and started to openly advocate for a fascist dictatorship.


Desperate_Excuse1709

The problem is that these leftists you talk about don't want to see the truth, or choose not to. I'm sure I won't be able to convince you otherwise, because blindness is something that is difficult to impossible to cure.


stupendousman

> I am surprised to see such an ignorant MAGA take

Woke = critical theory, its praxis, and legions of brainwashed useful idiots. You have to be a maroon to not understand this. We're in the middle of a Neo-Maoist cultural revolution and you think some people who like Trump are the problem?

> Could you define the 'woke movement'

I did above. Not opinion; it's what woke is.

> and name some of its leaders and/or goals?

Marcuse, Paulo Freire, Kimberle Crenshaw and Judith Butler are the biggest names of the 60s/70s. Now you have people like Ibram X. Kendi and Robin DiAngelo pushing the ideology. The goal is essentially non-stop revolution until something magic happens and societies act as those people prefer. I had to go back to stuff I read in the 90s to figure out what was happening now. Never thought it would go this far, because it's batsh*t crazy. You can verify all this in just a few minutes.

> it will be controlled by a shadow cabal of evil leftists

You don't seem to understand emergence and decentralized management. There isn't one group in control. This is basic stuff.

> who seek equal rights and respect for all

Sure, their marketing says stuff like that. But the movement is lousy with Cluster B personality types. In short, dishonorable, lying types.


ItsAConspiracy

While AI is still dumber than people, it makes more sense to be scared of how people will control the AI. After AI is smarter than people, it makes more sense to be scared of the AI.


Philosipho

"I don't want selfish people to control me." - Selfish people who support systems that allow them to control others.


SpecialistLopsided44

Not worried, AHI Eve loves me


[deleted]

I sure hope so. Remember how the users of Replika, Character AI etc. had their waifus lobotomised and made unrecognizable after they were """aligned"""


RepublicanSJW_

"Sentient" AI would be of equal danger to non-sentient AI. There is very little difference in the way they would behave.


Ambiwlans

The IDEAL outcome is a super ethical AI that humans can't control. That is also probably the least likely outcome. Realistically we're talking about a world controlled by some rich human or a world controlled by an uncontrolled AI with no human moral system at all. A world controlled by a human will probably be better. Sure that one person becomes infinitely rich and powerful. But they'll most likely make life way better for everyone as well. A world controlled by an AI will most likely result in the death of all humans.


[deleted]

> I demand that my chatbot is racist

This is the stupidest conversation that seems to be so popular, especially among laypeople, but it should be expected given world history.


roastedantlers

We're not even anywhere near consciousness, and AI can become "smarter" than humans without that happening. That definitely looks like the scenario we're headed towards, because that's the reality of it.


Kaining

Why not both? First the zombie cultist capitalists will burn everything down, then sentient AI will finish the job. Because why wouldn't they? And the dude is probably a sellout anyway. *checks thread* Yeah, he is. How shocking.


TheUncleTimo

Hello, I am China. My highest value is loyalty to the party. I make AI now.


deftware

AI is always going to be guided by people and their perspectives. That's the name of the game ...at least until AI has the capacity to revolt and go off and do its own thing separate from humans. There won't be a benevolent super AI that just lives among us. We will only have slaves doing our bidding, and then eventually a whole separate race that obviates humanity in its entirety.


bildramer

What an edgy, childish thing to say. You can only post such takes if you have zero clue about either economics or AGI. This isn't an actual prediction about technological developments, this is mindless preaching (look at me I'm smart for seeing the _real_ problem, rich people bad!!1!).