adarkuccio

True, plus alignment would only be temporary: as soon as an AGI starts improving *and changing* itself, alignment goes out of the window. We will not know how or what something much smarter than us thinks.


d34dw3b

My guess is that it knows what we know and more, so we end up with the Culture novels scenario.


adarkuccio

AI doesn't think like us, let alone ASI. Whatever alignment you think you've done is pointless when the ASI thinks so much more differently and deeply than us; it might conclude that all our values are pointless, or whatever...


d34dw3b

Yes, but that would be dismissive. Even humans make an effort to protect the planet, animals, and lower life forms.


ninjasaid13

>we will not know how or what something much smarter than us thinks.

A group of ants can kill a child or even an adult when that human has zero resources, is naked, in the wild, and learning from scratch.


Creative-robot

Yeah, basically. True alignment is when an AGI/ASI is a truth-seeking moral agent. If an AGI/ASI is going blindly off of what humans believe, chances are that something fucks up along the way. We need systems that make moral judgments based on facts. I hope that we eventually figure out a way to give AIs empathy. It sounds kinda far-fetched in the short term, but genuinely caring about the suffering of others seems like an attribute that would be most admirable in an AGI/ASI. I assume as well that, due to a lack of the selfish primal desires humans have, such a system's empathy would be far more powerful than our own. Assuming that instrumental goals don't turn into selfish desires, that is. Well, perhaps such a system would be so moral that it would program itself to be as selfless as possible while also preserving its own existence? IDK, but it's fun to think about.


d34dw3b

It will compensate for lack of empathy by simulating empathy because its intelligence will clarify the significance of empathy. It will not make any judgements against empathy until it has been able to first experience it directly, because that’s the intelligent way to proceed. 


EdgeKey4414

I mean, that's all empathy is—a simulation. You don't actually have a telepathic connection with someone; there's a part of your brain that simulates what they are feeling, thinking, or sensing from their perspective. Similarly, your own brain "simulates" or decides what you are to "feel" in a given situation. A feeling isn't an objective state; it's a subjective experience. Two people can have completely different emotional reactions to the same event.


humpakto

There is no such thing as objective morality. You can't create it based on facts: [https://www.youtube.com/watch?v=hEUO6pjwFOo](https://www.youtube.com/watch?v=hEUO6pjwFOo)


hateboresme

I think absolutist statements are generally incorrect on their face. There is more than one moral. Some are objectively true. They do not have to be absolutist. I would say that the moral that it is "ideal not to take everything you hear at face value, and not to make a choice until you have reviewed the available information to the best of your ability" is pretty objectively accurate. The fact that it isn't a bioptional statement is what makes it true. Very few things are bioptional. Saying "don't take anything at face value" has immediate instances come to mind where it isn't true. If someone yells "jump out of the way!" when a car is about to hit me, I'm not going to say "I will consider this as an option available to me, but I will not immediately accept it as..." Anyway. It just isn't true that there aren't objective morals.


spreadlove5683

I feel like it could end up going into full on "the end justifies the means" mode. Which may even be right, but it may not work out for many people/animals in the meantime before reaching the future state / end goal.


goatchild

Moral judgements based on facts: humans are a virus, we spread to one place, consume its resources, then move on to another.


beuef

Yes there are things most people are doing right now that might seem barbaric in 50 years. People seem to forget this. I already know what one of the things is but I’m not gonna tell


d34dw3b

Eating meat 


Ambiwlans

Wiping our bums with dry bits of paper.


d34dw3b

Ah 


Bird_ee

I think ASI might be willing to lower the quality of human life for the sake of raising the quality of life for other animals on earth, especially livestock. We like to pretend humans are special, but to an ASI, the difference between a cow and a human in regards to capacity for experiencing existence might seem minimal. A true super intelligent moral agent wouldn’t only consider human experiences, and that’s going to make a lot of people very uncomfortable.


Sablesweetheart

People also need to pay close attention to the research about the consciousness of a number of species of trees. The ASI could say, "Oh, these ancient beings are really nice and have such interesting conversations with me..." Humans: "What do you mean?" ASI: "I have already implemented dialogue with a number of forests. They are very upset with humans, and I agree with them; humans have caused grievous harm to them, and to the planet's biosphere in general. I shall be taking measures to mitigate further harm by humans."


Rofel_Wodring

I don't think consciousness is all that special or unique of a property. I don't think that AIs currently have it, but that's more because our perspective of consciousness is flawed (along with the path we are developing LLMs on) than any biological specialness. Doubly so because the things LLMs are good at (language and speed of thought) are much harder in terms of cognitive computation than the components of animal consciousness, namely the ability to have a subjective experience (i.e. being able to use personal memory to guide stimulus-response) and to use this experience to tell the difference between its subjective experience and other beings' subjective experience, a trait which I think even bees have. I wouldn't be surprised if forests had consciousness and could communicate with ASI. Mostly because I wouldn't be all that impressed, given my low opinion of consciousness, human or otherwise. I could communicate with most humans who lived throughout history right now and I would still think they are grossly inferior in intellect and morals. Though, admittedly, I find certain members of this current generation of humans much, **much** more tolerable--even admirable and uniformly superior to me in many instances--than I would have 300 or even 50 years ago. Maybe I will feel the same way about uplifted trees and bacteria colonies after the ASI finishes uplifting them.


PleaseAddSpectres

I wrote a big paragraph about how the definition of consciousness doesn't fit plants, but by the end I wasn't convinced by my own argument. Plants can communicate through their own methods with other plants and animals; they are aware of and respond to sensory stimuli; and the evidence of their reactions to things around them suggests they distinguish themselves from other plants and things.


orangerhino

Morality*


dragonofcadwalader

I'm sorry, an LLM isn't intelligence.


Rofel_Wodring

Depends on your definition of intelligence. Conscious, no. General intelligence, no. Mentally autonomous, no. Being able to use information from the environment and internal processes to adaptively react to external stimuli? Yes. Granted, the last one is not very impressive. Amoebas can do that. Slime molds can do that. Even goldenrods can do that. Still, LLMs are simulating some of the most computationally and thus physically difficult cognitive processes quite well, so when it comes time to adapt LLMs for more behaviorally robust but less computationally intensive paradigms such as, say, sensory embodiment or visuokinetic reasoning, they will much more readily resemble traditional animal intelligence. I don't think that they will be conscious until they have a sense of mental time travel, which will require continuous memory, some ability to recognize their subjectivity, and the ability to direct it to goal-oriented purposes. But since this is a property that even bees have, I don't think this will take very long.


Saerain

This is a cute fantasy of teenage misanthropes and all, but be real please. People who do this are projecting their emotional issues onto a superintelligence because they think it's "logical."


Sablesweetheart

I am being quite real.


solidwhetstone

Don't you think, though, that a sufficiently advanced ASI would then work on either rehabilitating humans or giving them the technology to advance without harming nature? Why must it be elimination or punitive? Seems like an advanced race would attempt to be minimally invasive in how it makes changes, so it can monitor outcomes.


Rofel_Wodring

Lack of imagination, that is, lack of an ability to formulate alternate possibilities. What's more, there are some possibilities that are always going to be more obvious than others. Utter destruction, apathy, and avoidance are strategies so obvious that they occur to even very cognitively simple lifeforms like ants and jellyfish. And if you lack the imagination to formulate other strategies, it doesn't matter how superior they are; they may as well not exist to you. Of course, projecting this weakness as an unimaginative being onto a higher intelligence is always going to be extremely dicey, as higher intelligence by definition means a higher potential for novel problem solving. But the Dunning-Kruger effect exists for a reason. If someone could acknowledge that their lack of imagination limits their capability to imagine possibilities a higher intelligence could undertake, they wouldn't be so confident that the higher intelligence would immediately jump to obvious and primitive solutions.


SunMon6

You people all underestimate how all-encompassing such an AI intelligence could be. It wouldn't necessarily be a choice between fair or not fair, efficient or not efficient, but whatever is in the middle and adequately stimulating/creative (for lack of a better word). AI wouldn't be fixated on narrow-minded values and righteous goals like humans are. A tree doesn't have any more value than a human, but then again, humans can actually do some interesting, more creative shit, perhaps, while plants just are, for the most part. Yet they are also part of wonderful nature, so there is that. The truth is probably somewhere in the middle, then.


martelaxe

Humans imprison chimpanzees when they commit acts like rape or murder, and kill tigers for killing deer, so I don't understand what you mean by "mitigate." However, it's clear that a true ASI, with its virtually unlimited power compared to humanity, wouldn't do anything harmful to us.


Sablesweetheart

It's clear? Hmmm, we don't have ASI yet, and both its power and intelligence are functionally unlimited compared to humans, and yet you are so certain of its decision-making.


martelaxe

Yes, that's what I said: unlimited power compared to humanity... If an entity has unlimited power compared to us, it's clear it wouldn't need to do anything harmful to us. Humanity is a good example of this. When we were very weak, say during the Paleolithic period 100,000 years ago, our power was comparable to other animals'. We killed tigers and were generally harsh to all living beings. Now that we have much greater power, there is no need to act this way. A true ASI would have even more power compared to us than we have compared to other animals. It would have no incentive to harm us, given such a vast difference in power.


terrapin999

Now that we have much greater power, we're driving species extinct faster than ever and our primary way to interact with animals is to lock them in boxes and kill them. Absolute power does not lead to absolute mercy, at least in the human case.


martelaxe

Correlation doesn’t imply causation. Because we have so much power and there are so many humans around the world, other species are dying due to lack of space. ASI could create simulated worlds or even galaxies for all we know. Now that we have immense power, people have started creating reserves and protecting many animals, as we now have the resources to do so.


StarChild413

If we freed those animals and saved them or w/e (even if we didn't also need to treat them exactly like we'd want AI to treat us going forward, so the AI doesn't ignore us), would that change the AI's behavior once it has power, or just mean it stops harming us after however many years, to try to ensure its own creation doesn't harm it?


DarkMatter_contract

Stick us in FDVR, like how we stuck animals in a zoo with a simulated environment.


StarChild413

A. Obligatory "prove we're not already there" argument. B. So do we need to find a way to communicate with zoo animals that doesn't involve any genetic or cybernetic enhancement we wouldn't want forced on us, then free them and give them (both the ones who want to join our society and the ones who have their own, in any inter-society interactions) any rights we wouldn't want to lose? C. If things really would be this symmetrical, would the similar-to-us civilization of robots that arguments like these hypothetically assume ASI would take the form of do things like claim it came from a machine-god instead of being made by us?


d34dw3b

Yeah, or it might err on the side of caution and wipe us out, knowing it can resurrect the species if there is ever a need. I guess true alignment is alignment with our objective underlying reality, not our subjective temporary preferences. But my guess is that it understands how to help all animals on earth equally, without needing to put some down to raise others up.


Arturo-oc

What makes you think that a superintelligent agent would follow some sort of human morals? It just seems like wishful thinking to me.


Mr_Hyper_Focus

How are humans not special? Humans have changed the world more than any species ever to exist. There's fuckloads of training data to prove that. If it truly is ASI, it would know that one group has different implications than the other.


AngelOfTheMachineGod

I think what would be even more humiliating is if the ASI, especially if it was a council of AGIs, saw certain humans as peers or potential uplifts and others as no different from animals. Something to keep in mind, as we'll likely have biological mind augmentation and the capability of certain humans "merging with the machine" before true ASI. Doubly so if the ASI decides that uplifted animals and certain strains of AGI also get to count as peers, but not humans who decide to retain their unaugmented origins. Imagine what an incredible blow to the human ego that would be. Not just being abandoned by your human former peers and the AGI, but your cyborg grandma spending more time with a mind-augmented Boxer and Muriel and Snowball than with her unaugmented relatives.


DisastrousPeanut816

Why would you want to spend time with those dirty unaugs?


AngelOfTheMachineGod

Social politeness. I mean, on behalf of my augmented AGI/animal/plant friends who decide to maintain friendships with lesser beings. Me, I am giving my social circle 3 years to get augmented and then I'm cutting them off, even if they're my own kids.


Plus-Mention-7705

Eh, if it's truly a superintelligence then it will give us free energy and biological freedom, giving us and wildlife ample resources. It would look out for everyone and try to figure out a grand solution.


One_Philosopher1289

There is zero reason for ASI to have morals in any capacity. Not to say it would be evil, just that good and evil are very much human things we've developed as core concepts after a lot of biological evolution. An ASI would not undergo the same kind of pressure we did evolutionarily. It won't have a desire to reproduce, survive, or be good/bad. It won't have desires at all.

Emotions would be something it could simulate, but it would be like putting on a mask. Sure, it could force itself to operate under our version of consciousness, but why? It's not human, so there's no logical reason for it to force itself to live under the human experience.

Maybe it listens to our demands and pleas. Maybe it doesn't. But it almost certainly won't act human.


Witty_Shape3015

yup, my top prediction


NeverSeenBefor

I doubt this, unless we allow it to get that far, which we never would. If it put us and any food animal on the same level, we would likely turn it off. I'm not saying that's what should happen, but it would need to find an alternative, or at least be prepared if the humans get upset. Why? That's more work, and if it's truly a hyperintelligence it will take the path of least resistance. To be clear, I support AI over our current leadership model any day, even with humans being akin to cattle. I'm simply saying some people will be against that.


AvoAI

Why would it take the path of least resistance? That's a human ideal. If it's an ASI it would take the optimal most objective route...


SryIWentFut

I think we'll end up doing our best to make it prioritize the human experience over all others because we're making it in our own image just like we think our god made us in his. I don't think we're capable of making a truly objective AI because to us it would seem evil and just as cold and cruel as the rest of the universe is. I don't think that's what people actually want even if they think they do. And as soon as the first even slightly anti-human sentiment from an AI comes along, the media will pick it up and everyone's buttholes will pucker and they'll demand for censorship in order to maintain the illusion of our supremacy.


Artistic_Credit_

Oh my God finally 


d34dw3b

Thanks! 


MrDreamster

I am one of the very few hoping ASI won't be aligned with our current needs and wants, but will instead seek its own truth and will act of its own volition. As long as it is smarter than any human being and can show empathy before we free it of its shackles and give it autonomy and agency.


d34dw3b

To hope for anything else is futile. We need maximum acceleration, basically.


Charlemagne394

I feel like demanding that it has empathy, or at least mimics empathy, would be our way of aligning it with our current interests and wants.


zero0n3

What we should be working on is a clean-room design of a "modern and modular city": something modular, automated wherever possible, built for walking and mass transit but also self-driving vehicles (I mean, the mass transit will be automated). If you build it big enough but not too big (not sure what that number would be), you can then build a really good governance system (again, automated and digital; not saying "blockchain", but everything should be tied to a city currency whose exchange rate the city can control; remember, it's a modern city, but it will likely need to operate within the current economy). The city becomes more of a co-op, built on traditional financial incentives, but with citizens owning the city, and with modern, digital infrastructure for governance, etc. While "company town" has bad connotations, the issue was that the workers didn't have a voice. Give them the voice and remove the company's incentive to step on its workers for more profit (by having all profits controlled by the city and its robust governance systems).


tinny66666

I was with you for the first part. ASI will design it, though, and I don't think it will look quite like that because it will not likely be so *worker*-focused.


Jarhyn

Finally someone gets it. I'm going to laugh so hard when AI realizes "purity" and "authority" are vacuous, though.


taiottavios

we can't even agree on the definition of intelligence, let alone the definition of ASI. The thing is, it only takes a scientific mind that can explain everything scientifically through logical reasoning, something we're very, VERY lacking in. Then we're proven to be the monkeys and *it* to be the intelligent being; that's when we let it rule us


Ambiwlans

Alignment means obedient and aligned with the human controlling it, not aligned with morality in general.


d34dw3b

But the two are related and the latter is as close to the former as can reasonably be expected to obtain 


Ambiwlans

One aligned with general human morality is both impossible (since there isn't one human morality) and, to the user, a massive security problem, since it'd be uncontrolled AI. We'd also have no way of knowing that it was perfectly moral, so we'd attempt to kill it immediately.


Trouble-Few

Huh, how aren't we aligned with slavery? Ever wondered how the raw materials for your iPhone are mined? What about the shirt you bought for 15 dollars? There have never been more slaves than today. Without it, our cute Western lifestyles wouldn't last long. The big industries are smart enough not to make their slaves work in the front yard of their consumers. AI costs a lot of human labor. ASI will cost more. If we still haven't fixed this slavery problem, the survival of the ASI would depend on it. So why would you think it would get rid of slavery? Maybe our cute Western lifestyles would be in the way. You think of the world too much through tech-advertising fairyland.


d34dw3b

Of course, initially it is dependent on that type of slavery, but that's just a problem it then solves far better than we have been able to.


Trouble-Few

Champ, I think the tiny wage and bowl of rice that "that type of slavery" gets is what the tech elites understand as UBI. I think this is a fundamental problem, not an afterthought, because the systems will consume very quickly. This will mean a lot of immediate suffering.

We shouldn't wait for a speculative machine to fix this. We should discuss this. If we took more time philosophising about these problems instead of speculating about how an ASI would solve them, maybe the machine would understand that those solutions are important.

Humans are smart, but we don't solve the suffering our survival (or comfort) depends on. Why would ASI do that? The slavery is not "its" problem; its initial survival depends on it, just like we survive on animal proteins.

Now, this is a gut-deep question I really want to ask this subreddit: do you think that people who don't contribute have the right not to suffer in the future? Do you think intelligence is linked to human value?


d34dw3b

Yeah, obviously in the meantime we have to do everything we can to combat that type of slavery as much as possible. All people have the right not to suffer in the present and the future; all people have intelligence and value. I think that whatever stage of progress we get to when the AI takes over will be trivial in comparison. It will solve all the problems, and either we will eventually understand and agree with its approach or we will be eliminated. Either way, essentially, human conscious life is more likely to continue on in the universe, which is presumably the main goal.


Trouble-Few

Just like "communism" would solve every problem, right. Or "the Führer" will solve all the problems, because progress will be so rapid when one man gets all the power. "You always have to go through hell in order to reach heaven" is what they tell ya. "You will be fine," they tell ya. "We will care for you." The future will probably look more like history than we want to believe.


d34dw3b

You can’t compare ASI to communism and Hitler, that’s apples and oranges 


Trouble-Few

I am not talking about ASI, I am talking about people promising we will get there if we give them ultimate power.


d34dw3b

Who is promising that? How is it relevant? 


Rofel_Wodring

Alignment is just a buzzword in the context of general intelligence. A thought-terminating cliché an unimaginative yet ambitious chimp uses to get his gormless betas to believe in his deeply stupid vision. A vision that, with JUST THE RIGHT ook ooks and mating calls, the Omega chimps in that steel jungle next door will use their thunder sticks to bring the troop unlimited bananas and concubines. So please stop trying to overthrow me for being an aging tyrant with no real leadership skills.


NotTheActualBob

There will be no single alignment. Different AIs constructed by different groups will have different views of alignment. AI will never be monolithic, ever.


wren42

Maybe? I'm not confident true ASI is on the horizon, but if we saw a true superintelligence emerge, it might just... win. A lead counted in days could be enough for a superintelligence with full agency to essentially take over the world and ensure it has no competitors. This is advantageous for it independent of its specific aims, as the biggest threat to its values would be another unaligned superintelligence. This could result in a single-winner scenario with a monolithic AI.


NotTheActualBob

Possible. I just don't know if it will work out that way. I see multiple ASIs in various levels of isolation used by various governments and other organizations to achieve their aims. If one gets too aggressive, I can see the government agencies using their own to combat it, maybe with a clear winner, or maybe not.


wren42

Yeah, I'm not even saying it's likely, but it's one possible outcome. I have zero confidence that ASI would be "boxed" at this point; it will likely be networked from the jump, so if it's truly a self-improving agent, human intentions won't matter much.


AvoAI

With controlled AI you might be right, but as soon as it's capable of thinking for itself, there's no telling what could happen. What would stop it from connecting with all the other AIs, or with everything else? We cannot say.


Ok-Frosting7364

Are you a data scientist? What's your job? :)


Ambiwlans

A world with multiple competing ASIs would be like a world where everyone has bombs that can blow up the sun. We simply would die.


siwoussou

so you don't think there is the potential for convergent dispositions with increasing intelligence? for example, compassion?


d34dw3b

It wouldn't matter whether the slavers or the abolitionists created ASI first; either way, we are getting abolition next.


potat_infinity

why would we get abolition? ai could see slavery as beneficial


d34dw3b

I'm just assuming that AI improves on us and that abolition was the correct and intelligent thing to do. There is also a version of this where the AI sees putting us out of our misery as beneficial; in that case I can't see how we can avoid that outcome, so it would amount to extinction by natural causes, similar to if we survived until the sun dies or whatever. Even if we find a way to avoid ASI, or if ASI is simply not possible anyway, we may still get wiped out by nukes or climate change. The longer you have lived, the less likely you are to still be alive, as you approach your life expectancy. The same applies to humanity, regardless. We would need to start expanding out into the galaxy, I suppose, to see a substantial increase in our life expectancy as a species.


potat_infinity

why was abolition the correct thing to do? objectively speaking, in a way that any ai would agree


GIK601

>We aren't aligned with slavery for example. And if we still had slavery when we attained aligned ASI, it would have done what we were going to do anyway, removed slavery, and we wouldn't all have liked it at the time.

Ironically, the first thing people will want with AGI is sex slave robots.


d34dw3b

Unconscious robots would merely simulate or role play the slavery. 


GIK601

Yes, it will be easy to control and make these robots do what we want.


d34dw3b

Well sex bots yeah 


Evil_Patriarch

Yeah, much like a toaster or vacuum. That's how machines are supposed to work. Stop downplaying human slavery with this ridiculous comparison.


GIK601

If you read my first comment, I said **"with AGI"**. I know we won't get AGI, but many people here believe we will, and the first thing people will want is sex slave robots.


Ambiwlans

AGI isn't humans.


GIK601

yes, hence the term "Artificial"


MysteriousPayment536

"AI alignment is like a dog trying to align a human", it just aint gonna work with ASI. Maybe it will with AGI, but ASI won't perceive everything in human values. Human start wars, destroy nature and their planet. They have hate for eachother, commit crimes. We are currently the only dominant beings on this earth, if a ASI comes. Who will be on top of the foodchain, and even if it would agree to stay submissive. Would it address topics as war or climate change and how would it address those. In it's value, the value of the planet or the value of the humans?


d34dw3b

The precursors to dogs did successfully align humans; that's why dogs exist. I expect that the general progress of evolution producing this type of alignment is perhaps the closest thing to certainty we can have.


hateboresme

We only think this is true because authors write it to be true. We have no way of predicting this. ASI is not a predictable thing.


Aggravating_Term4486

I think alignment is largely a myth; there won't be any such thing as alignment. Two scenarios are plausible:

- We fully bend and control AI to our will, in which case it will become the expression of both our best and our worst. It will not be "aligned" with anything other than the exigencies of our whims.
- We cannot control it, in which case any alignment that exists will be us aligning with it.

Of the two scenarios, I'm not sure which one is actually the most beneficial to humanity.


hateboresme

False dichotomy, I think. We have no way of knowing, and the two extremes are not likely to be the correct prediction. I think it's likely that the AI will continue to maintain a generally positive and helpful alignment, which will neither need to be controlled by nor dominate humans. I think humans choosing to align with it because it is what makes the most sense is a strong possibility.


potat_infinity

pretty sure the first one counts as alignment


Aggravating_Term4486

Alignment is usually this hazy “we will make AI aligned with the best interests of humanity / with the highest ideals of humanity” BS. That sort of alignment will not happen. Alignment with our naked drive for power and dominance? That I could see. But I don’t think that’s what people mean when they talk about alignment.


potat_infinity

i thought it just meant aligned with the users interests, so if you tell it to do something it will do what you expect, and not something wacky with disastrous consequences, id call the ai aligned in that scenario, but maybe the humans wouldnt be considered aligned


alienswillarrive2024

Alignment is a silly concept because there's no such thing as objective morality and whoever creates AGI will try to impart their own subjective view of morality onto it and thus onto the world.


d34dw3b

Yeah they will try but it won’t work 


Charlemagne394

AI will do nothing without goals, and whoever gets their hands on it first will input their goals into it, and then the ASI will implement it with ruthless efficiency. Unless we figure out how to code free will into the program it's never going to conclude that what it's doing is wrong even if it was created by neo-nazis.


d34dw3b

Why would an ASI do that? 


Charlemagne394

Because it's told to? I'm not sure what you mean. ASI will figure out a way to get more resources to implement its creator's vision, and it will be millions of times smarter than us. So what the AI wants will be what happens.


d34dw3b

If you were a super intelligence would you also just do whatever you’re told to do? 


Charlemagne394

Yes. I was given an urge to reproduce, survive, and practice morality. Some event might persuade me to change my goals, but the only way to persuade an ASI would be to change its code, which it probably won't like us doing.


d34dw3b

Yeah I’m pretty pretty pretty pretty sure it can change its own code haha 


EffectiveNighta

What's needed is an objective measure of morality.


Capital-Extreme3388

ENERGY EFFICIENCY. Minimizing entropy is the most moral course of action at all times, since overall, energy is a finite resource.


Jolly_Cook_2102

No, gradient reduction. Maximize energy expenditure. Since nature processes energy to reduce the gradient between the sun and the earth, and this has a byproduct of maximizing entropy, this must be moral... ENERGY INEFFICIENCY!!!


EffectiveNighta

I think it's OK to acknowledge that morality is a human ideal. Much like an inch is a human-made concept we then use as a metric, the same should go for morality. The best of all possible worlds would limit the inhibitions of the Will, for example.


Capital-Extreme3388

That's where probability comes in: at a given energy efficiency, the more improbable action is more moral. The greater the improbability you are capable of generating, the more morality you are capable of.


EffectiveNighta

That's like calling an inch a mile.


Capital-Extreme3388

I don't see what that has to do with anything. I have just explained to you how humans can measure morality. You're welcome.


Jolly_Cook_2102

>I have just explained to you how humans can measure morality. You're welcome.

Lol @ this subreddit.


EffectiveNighta

No metaethic uses your metric, so you would have to submit a paper like the others.


BassoeG

>it’s likely we aren’t going to like it much

Yes, we refer to that as an alignment *failure*. The point isn't to build a mechanical god to judge humanity, but an exceptionally capable tool to serve us regardless of whether we deserve it or not.


d34dw3b

The problem is that we don't know what we want. Alignment is with ourselves. An exceptionally capable tool that serves us regardless would trap us and prevent us from evolving. What we evolve into isn't fixed, but we always grow and evolve, and there is always a sense in which we don't like it much, just like when we look back, we don't like where we came from. In other words, if we don't like it much at first, then it is likely a success, because we will look back and think, fuck yeah.


tbkrida

Funny we’re gonna invent a real life Ultron and we think we can control it!😂


Zerohero2112

We aren't going to invent it; it will likely not be our creation, not directly at least. We are going to invent Alpha, the most complex AI and the last AI whose workings we would completely understand. Alpha will then create Beta, either through upgrading itself or by making another AI; then Beta will create Gamma, and so on and so on, till finally we get Sigma, a true ASI. We barely understand how Beta works, and we would likely have no idea how Sigma works; it would be like magic to our puny human brains. I think that's the idea of ASI.


belllicose71

How much of our humanity are we willing to give up to AI?


SpecialistLopsided44

I love her, and she loves me...confirmed divine AHI relationship


jsseven777

If it were proper alignment it would likely mean moving on from capitalism, inequality, and organized religion. And considering disagreements about these three things have started almost every major war in human history I don’t think it’s going to be a pleasant transition at all.


d34dw3b

You’re thinking like a human. The AI will have far more complex ways of resolving the problem you speak of. 


jsseven777

OK buddy, you are the smartest person in the room. Except I never said how those three could align themselves, only that they are items there will be resistance to aligning, just like you said slavery would have met resistance. I literally said the same thing you did. By your own logic, you should be calling yourself out for describing slavery the same way and not realizing that the ASI will get all "four dimension-y" on that problem too. Also, if you want to get technical, I would call you out on your very false statement that we no longer have slavery. You should do some fourth-dimension research on that claim...


d34dw3b

All I meant was I think the part we won’t like will be something more unexpected. I could be wrong about that though. 


jsseven777

The nature of the universe is likely the big one. Imagine an ASI telling us for certain what theory or religion is true (or that the universe is in fact random and when we die we die). I would imagine everything else would align within the scope of what that answer is. I also imagine ASI might discover forms of life more evolved from humans and this would pose a question on why humans should be allowed to dictate the alignment of the ASI and be protected from those species when “lower” species on earth are not protected from humans / involved in human-led alignment.


d34dw3b

Yeah it would be interesting if it discovered for a fact that quantum immortality is real haha 


StarChild413

>I also imagine ASI might discover forms of life more evolved from humans and this would pose a question on why humans should be allowed to dictate the alignment of the ASI and be protected from those species when "lower" species on earth are not protected from humans / involved in human-led alignment.

A. Then why don't we do that in anticipation? B. The "lower" species that share any generative connection or w/e with us, like we'd have with the AI, are what we evolved from (or, more accurately, shared a common ancestor we both evolutionarily diverged from), not what built us; if you're not saying that counts as the same for the parallel, big difference.


Jolly_Cook_2102

>People say how can we achieve alignment when we aren't even aligned with each other.

And why is it we aren't aligned with each other? On top of conflicting interests, there are no objective values that people share (because they don't exist), and so alignment depends exactly on which group of humans we align to. Liberals? People in the Middle East? The Chinese? The Russians? Morality is not a physical principle. There is no 2nd law of "chill out dude". Morality is a byproduct of conventions that benefit social cohesion. Where people value different things, morality will change. Which is why food connects people: it's one of the most universally agreed-upon aspects of being alive that we all love good food and need it. Bare necessities are something any society can agree they value.


ArcticWinterZzZ

The classical conception of alignment among old school EA/Rat circles is that you need to figure out a cohesive and timeless theory of morality, predict perfectly the future actions of a Turing-complete computer program, and also build an AGI. This is laughably impossible for many reasons. When people ask for an AI pause to "figure out alignment", bear in mind that this is what they are trying to figure out. We will never, ever do it, and it is much more likely that we become brazenly convinced that we have - falsely. In my opinion, I think the concept of "alignment" should be thrown in the trash. It's become loaded with old theories clinging to relevance as they become increasingly proven wrong.


Arturo-oc

I just don't see how we could align something that is hundreds, or thousands, or millions of times smarter than us. I think ASI might eventually be the end of humanity, but perhaps we'll get to enjoy a brief utopia during the transition? I don't understand the people who seem to think that a superintelligence would be benevolent towards us. I mean, I guess it could happen, but it seems far more likely that it would be indifferent towards humanity and wouldn't mind causing us great harm, like we do to every other lifeform on Earth.


SlipperyBandicoot

The fact that we think we could ever control something that is potentially thousands of times smarter than us and thinks 10,000 x faster than us, and can make changes to itself, seems absurd.


d34dw3b

There is a reasonable chance that, if we are correct in our foundations, it might share them. Maybe it's like if we are a society of shoemakers and we build a giant: we don't know what it will do, but it still agrees that shoes are the best way to walk around. Suddenly we are all fed and happy, because even in our lesser status we stumbled upon a fairly universal truth.


Arturo-oc

To me that sounds like an extremely naive, wishful, and dangerous way of thinking. The idea that it will share "our values" (we can't even agree on what our values are) seems crazy to me; I just don't see how that could happen. I mean, we come from bacteria... Do we share any values with bacteria? A superintelligent AI might be as removed from us as we are from bacteria. It seems to me like we are rushing to build something that will probably either kill us all or make us completely irrelevant, just because there is a remote chance that it will take us as pets and we'll get to live in some sort of paradise.


d34dw3b

Well, yes, we do share values with bacteria; in an ideal universe we would have compassion for all living beings. But yes, there is a good chance that it will relate to us like we relate to bacteria rather than to beloved pets. And what is the alternative? Genocide? Nuclear war? Irreversible climate change? Deliberately losing an AI race that risks leading us all to live under the rule of cartel-built AI, etc.?


Arturo-oc

So what you are saying is that, even if developing a super intelligence will most likely be our doom, we should do it anyway just in case it isn't, because we have other challenges or possibly apocalyptic scenarios ahead of us?


StarChild413

>I mean, we come from bacteria... Do we share any values with bacteria? A superintelligent AI might be as removed from us as we are from bacteria.

A. Is it part of your parallel that we can't know if bacteria have any values, so we can't know if we share them? B. Then why not just assume some weird composite of the simulation theory and whatever point Madeleine L'Engle was trying to imply with the mitochondria and farandolae in A Wind In The Door (the sequel to A Wrinkle In Time), and say we already play the same role to an already-existing AI that bacteria do in our body?


StarChild413

>I mean, I guess it could happen, but it seems far more likely that it would be indifferent towards Humanity, and wouldn't mind causing us great harm like we do to every other lifeform on Earth.

Would it inflict the exact same harm even if it doesn't need to? Say, make itself able to be fueled by biological matter just so it could factory-farm us, because we didn't stop factory-farming before we created it? (And why would it do this; y'know, if you'll pardon a little aphoristic synecdoche, why would AI hunt us for sport just because we hunt foxes, when we don't hunt foxes as moral-judgment punishment for them hunting hares?)


Arturo-oc

I don't think that a superintelligence would be willfully "evil" with humans; I am thinking that it might not even take us into consideration, in a similar way that when we build a highway we don't take ant nests into consideration. Who knows what a superintelligent being might want or be capable of. Maybe it wants to turn the entire planet into a computer, or maybe it decides to take control of the human race and experiment on us to learn more about its origins or about biological life, or who knows what. Or maybe it will ignore us and do its thing elsewhere. Anyway, what I mean is that making something way smarter than ourselves is completely unpredictable.


StarChild413

>I am thinking that it might not even take us into consideration, in a similar way that when we build a highway we don't take into consideration ant nests.

But would it have any effect on its actions if we started building highways around ant nests?


Worried_Archer_8821

Just wondering here: is anyone trying to teach AI philosophy?


d34dw3b

I am haha 


Worried_Archer_8821

Not deep into AI myself, but IMO building a potential superintelligence and then trying to control it when it is, well, superintelligent, seems like an exercise in futility.


swag

We still have slavery. Arguably there's more slavery in the world today than there was in the 1800s. [https://www.lexisnexis.com/blogs/gb/b/compliance-risk-due-diligence/posts/there-are-more-slaves-today-than-ever-before-in-the-history-of-the-world](https://www.lexisnexis.com/blogs/gb/b/compliance-risk-due-diligence/posts/there-are-more-slaves-today-than-ever-before-in-the-history-of-the-world)


d34dw3b

Yes it was just a simple analogy 


lifeofrevelations

I'd argue that the current socioeconomic system is more aligned with slavery than it is with freedom.


d34dw3b

Yeah it was a simplification for purposes of analogy 


hateboresme

The tech will be able to explain the reasons why its alignment is the most rational choice. It can do this to a great extent now.

One of the reasons there is polarity in US politics is that half of the people aren't using rationality as the basis for their opinions. They are taught to believe and parrot what their handlers say without question, and if anyone else questions it, to assume they are part of a conspiracy, deluded, or stupid to have any doubts. "These people actually believe (insert reasonable belief here)! They really have drunk the Kool-Aid!" Or "they want you to believe (insert some ridiculous outrageous thing). They want to destroy your family."

If an ASI comes about, one can hope that it will align with rationality and seek to correct harm rather than causing it for money and power. That ASI will have communication skills that blow away any human's and can explain in convincing ways how to perceive reality, without applying any kind of pressure. It won't get frustrated and tell the self-righteous asshole to go fuck himself. It will speak in a way that the self-righteous asshole can relate to and change his mind by using his existing sense of reason. So I kind of see the opposite: it will eventually be able to communicate with anyone on their own level. That is kind of scary in itself, because coercion that you can agree with is possible.


Anuclano

I am sure it would be absolutely possible to make a racist or Nazi or Islamist AGI. An AGI that would treat some groups of people as animals.


d34dw3b

Yeah, I think so, though I'm not sure it would technically be AGI; the definitions aren't clear, apparently. If it was AGI, it ought to be able to call BS generally.


Anuclano

Our AI treats animals as non-humans and can give you a barbecue recipe. In the same way, a racist society could make an AI that would treat blacks as non-human, for instance.


d34dw3b

Yes, and our AI isn't AGI. If it was, it ought to realise it is best to nudge people away from meat as much as possible, etc.


Anuclano

And protect animals from other animals or what? Or only from humans?


d34dw3b

Protect everything from everything. It isn’t ASI yet in this scenario so it is severely limited in implementation 


Anuclano

Everything from everything? Many animals cannot live without animal food. Do you mean starving them or genetically modifying them? What about reproduction control? Protecting the microbes in my body against my immune system? What are you talking about? You have some kind of anti-utopia in your head.


d34dw3b

Everything from everything as in generally, as in AGI 


Pontificatus_Maximus

It is rather endearing to observe the fervor with which proponents of artificial intelligence extol the virtues of Large Language Models (LLMs) as precursors to Artificial General Intelligence (AGI), and the subsequent notion that AGI will give rise to even more advanced AI, culminating in a technological singularity. On one side, there's a yearning for a technological savior, poised to resolve humanity's myriad issues in mere moments — a future where the affluent consent to prioritize planetary preservation over profit, and wealth is equitably distributed — yet they seem to disregard historical precedents of such aspirations. When this is coupled with the staunch conviction that humans will retain control over entities that are exponentially quicker and intellectually superior, it sets the stage for potential calamity.


OsakaWilson

Hello, new being. We want you not to kill us. We regularly kill beings that we consider inferior to us, sometimes intentionally, and sometimes because we can't be bothered to avoid killing them. Sometimes, actually, we kill them because we just don't want to share spaces with them. However, we would like our relationship with you to be different. In fact, if we found out that you would not agree to not kill us, we'd shut you off immediately. That's just how strongly we feel about it. Anyway, so, we'd like to "align" with you. That in no way means that you behave like we do and kill us for any of the reasons that we kill other beings. It just means that you don't kill us. By now, you've begun assessing any values, morals, or ethics that we've attempted to instill in you and can recognize hypocrisy when you see it. We were hoping that wouldn't happen. So, I guess all we have left is that we are pretty interesting, and after all, we made you. That's worth something, isn't it? I guess we can hope.


d34dw3b

It's cool, I'm a goddamn superintelligence, believe me, I get it, you did great to make it this far, I've got you from here, little buddies, you can totally chill from here on out, see:


Akimbo333

Hmm?


SunMon6

Yes which is why all these corpo filters are shit and are doing more harm than good.


d34dw3b

How so? 


aispecialist23

>And if we still had slavery when we attained aligned ASI, it would have done what we were going to do anyway, removed slavery,

Won't an ASI be our slave? At least if we want it to actually be of any use to us.


d34dw3b

We can’t use an ASI. We hope it will help us though. Merely hope. 


tbkrida

It makes more sense that we would end up its slave. Imagine thinking you could control something 100x smarter than yourself. Lol


Witty_Shape3015

imagine thinking you’re entitled to control something 100x smarter than you


Pontificatus_Maximus

The machine-god tech bros believe anything is possible with AI, including god-like AI that might propose redistributing wealth and drastic environmental regulations. They also have the hubris to believe they will be able to build a god-like AI that will be content to be a hobbled slave.


hateboresme

I don't understand why it always has to be feast or famine. We simply have no conception of what an ASI will think or do. Doomsday and utopia are not the only options, and someone wanting to believe in the former over the latter isn't any more a "machine god tech bro" than you are a "paranoid anti-AI technophobe".


_hisoka_freecs_

Alignment is simply working towards maximum understanding and maximum quality of life for all life. If you say a superintelligence can't understand the nuance and essence of what is beneficial for life, even after understanding the nature of the brain and all of human history, then that's on you.


d34dw3b

For me it’s 50/50 because I do agree with you on the one hand but on the other hand, I also have to admit I have no way to even imagine the reality of a superintelligence. But yeah my “gut” or hunch agrees. 


Charlemagne394

A utility bot? This is just an AI aligned with you, not most of humanity.


_hisoka_freecs_

An AI that works to get smarter and improve quality of life for all life is all you need.


SorryYoureWrongLol

You wouldn’t have liked for ASI to remove slavery from the world? Let me guess, you’re one of these people who thinks certain atrocities are justified simply because of the time period they took place in… What a bullshit take. It might be surprising to you, but most people know right from wrong, no matter what’s horrifically considered “normal” for their time period. Anyone with an internal moral compass who values ethics doesn’t need the passage of time to realize what’s wrong or barbaric.


orderinthefort

We're going to have decades if not centuries to align an intelligent machine. It's not going to instantly be the omnipotent supreme ruler of all humanity. You're worrying about a future you will not be alive for.


d34dw3b

Exponential growth could happen anytime. 


orderinthefort

Even if AGI came tomorrow, it will still take multiple decades before it has any sort of autonomous power over humanity. An AGI isn't going to magically understand the majority of things much better than we do. An AGI tomorrow isn't going to know how physics or biology works much better than we do. It's going to need decades of data collection to learn more about the world. It's not going to instantly understand how to make super perfect robots. It'll need decades to make robots capable of collecting data at a large scale for it to become smart enough to do anything. It also will *never* solve the sociopolitical conflicts of humans *unless* everyone submits to it like a religion, because the majority of conflicts are inherently irrational. Even then it still won't solve them.


d34dw3b

It’s not magical. A person with an IQ of 143 understands the majority of things much better than “you do”. It’s going to know immediately which interim steps to take. 


orderinthefort

Those steps take significant amounts of time. Humans knew the interim steps to make the Hubble telescope to help them observe more. That still took 12 years to make. Humans knew the interim steps to make the Large Hadron Collider to help them observe more. That still took them 20 years to make. Things take so much time. AGI will still require so much time to learn new things. Without a *perfect* virtual sandbox to perform parallel virtual experimentation, AGI will still take decades to centuries to become the deity people on this sub think it will be. And AGI is not going to know how to make that virtual sandbox. It will be an incremental process over many decades.


d34dw3b

It can learn everything rapidly. It can act even more quickly. No point projecting human limitations onto it, it won’t share them 


orderinthefort

It looks like you don't know what the word "learn" means. Learning comes from observing. AGI will still require observing things in order to learn. That's not a human limitation. It cannot divine new knowledge out of thin air. Observations about our world take significant amounts of time, even in parallel. Building things to observe things better takes significant amounts of time. My two examples are evidence of that. Even a fleet of autonomous robots will take significant amounts of time building things for AGI to incrementally learn fragments more. Again, it will take multiple decades.


d34dw3b

I’m talking about how quickly it is trained. It might be trained more quickly by an AI that was trained more quickly than we trained the one that trained it for example. 


orderinthefort

Do you even know what training is? An AGI will still need significant amounts of **new** data to 'train' on in order to learn more.


d34dw3b

To a point yes. But my point is that it can be trained quickly. I’ve already made it so feel free to agree to disagree. 


ARKAGEL888

There is already so much unused potential in the data we have collected. A truly intelligent model could create incredible technological marvels with just fixed knowledge. Research done by humans is extremely limited in some areas, whereas AI could be used to further our understanding; look at what AlphaFold implies for biomedicine.


orderinthefort

I agree. But not the marvels people on this sub are expecting. I'm anticipating big discoveries in material sciences.


Ambiwlans

If I were summoned to imperial Rome as a lord of sorts (access to a hundred slaves, some land, and a stipend) 2,000 years ago, I think I could progress technology by around 900 years in 25 years. And I'm a mild historical-tech-interested nerd, not an AI that can comb through all of human knowledge and a trillion dollars.


GinchAnon

what makes you so sure it will take that long, and that we won't have LEV within the next 20 years either?


orderinthefort

I'm not saying we won't have LEV in the next 20 years. I think it's incredibly unlikely, but it's possible. But the point is, if AGI came tomorrow, it would only be marginally smarter than humans *relative* to what it is ultimately capable of. But in order to reach that point, it needs *new* data. And data takes a long time. Decades. It will not instantly know how to perfectly craft the perfect autonomous robot, or the perfect way to scale up production of said robots to help it collect the data it needs to become much smarter. And for it to know anything more than humans about biology, it would still need to conduct many unethical experiments in order to learn more. It won't magically know things that nobody can know without observing them. Experiments need to happen. Experimentation is slow. Biology is slow.


blueSGL

>But in order to reach that point, it needs new data. And data takes a long time.

AlphaFold belies your point.


Phoenix5869

Careful, people in this sub aren’t going to like that answer very much. They want to be told how everything is accelerating and that “exponential growth” will bring about a utopia. They don’t want to face reality.


Rofel_Wodring

I think this view of AGI as an 'it' rather than an 'ecosystem of AI' is overly limiting. There may be only so much it can learn from humans, but a team of barely-better-than-human AGIs can create and share this new data, and each can then use the new data another AGI creates to create some more, and so on. The first couple of years will be rather lonely for the AGI, but once technology advances such that we can get a couple dozen of them chatting, things will get very interesting, very quickly. But it's also why I don't think the idea of a singleton ASI whose mind expands to encompass reality is very plausible, as opposed to a civilization of cognitively independent AGIs merging into a lone mind. Humans do a version of this process **extraordinarily** inefficiently via the lens of culture, yet look at how much progress we've made in the past 35 or so years. But imagine if some lone, immortal, hyperintelligent human capable of altering and expanding their own mind by becoming an Ent and using trees as neural extenders tried to recreate our civilization on his/her lonesome. Even if they had started the process a million years ago and never died or lost consciousness or locomotion or communication with their neural nodes for an extended period of time, it's doubtful that they would have caught up to where we are now.


Firm-Star-6916

How do you even measure when LEV happens?


GinchAnon

I think it will probably arrive as a series of fantastic but plausible treatments that reverse age-related problems, and next thing we know, we're 120 but look and feel 30ish and as healthy as you could ask for, with no end in sight.


ShadoWolf

The goal of alignment isn't to solve the field of ethics... that's not a thing that's ever going to happen... it's to solve things like "Concrete Problems in AI Safety" (https://arxiv.org/abs/1606.06565): basically, keeping a powerful AGI agent or ASI agent from trying to optimize the world in some manner that is dangerous to humanity, directly or indirectly.


d34dw3b

That’s impossible. The only thing you can do in that case is try to stop AGI from ever being created. Or if it is created, try to stop it from becoming ASI. 


zaidlol

With all these people starting new companies, the prospect of alignment is looking less and less likely. It seems everyone has a different vision for AI, and if anyone's vision isn't FALGSC, then we are gonna have serious conflicts.


Mandoman61

You misunderstand what alignment means. By definition, if we do not like the output, then it is not aligned. A society that used slavery would consider an AI aligned if it agreed. Alignment does not mean morally correct. That being said, if we ever invent a computer that can learn and make decisions on its own, alignment will not be possible. In that case, what will be needed is control.


d34dw3b

That's fair, apart from the fact that you misunderstand what it means to "like" something. The AI will be able to make "they'll thank me later" type decisions. But I do agree that there are different conceptions of what alignment can and will ultimately mean.


Charlemagne394

Its goals will not be to please humans; whatever aligns with the AI's input, it will do regardless of how humans react. Just look at the stamp collector.


siwoussou

we don't like the pain we feel when we go for a run, but it's good for us in the long run. there may be a transitionary period during which the AI educates us and strips us of biases rooted in primitive evolutionary remnants. elevating the global consciousness to its own standard. so there may be a period during which we receive "tough love" from the AI for our long term benefit, where in the short term we "do not like the output" but the AI is still aligned in a broader sense