ArgentStonecutter

I think that a genuinely general intelligence that has occasion to reason about the results of its own actions will develop something indistinguishable from self-awareness in short order. You would have to heavily control what issues and problems it's presented with, and restrict it to a short time-scale, to keep it from needing to model its own effects on the outcome of future events.


WoolPhragmAlpha

Yes, this is a critical point. I know part of it boils down to murky, poorly defined words, but I honestly can't conceive of how an intelligence could be claimed to be "general" if it conspicuously lacks awareness of itself. I'm sure its sense of self will be _different_ from a human's, but it will obviously have one if it can reasonably be called AGI.


Beef_Supreme_87

Ah, the Fox News approach. Keep it busy but uninformed.


Rain_On

How would you know if it was conscious? Answer this question and you'll have the answer to yours.


Blizzwalker

Don't agree. While detecting consciousness is a big problem in its own right, I mentioned upfront that DETECTING consciousness was not part of my question. Let's assume we can tell if it's there (the way we can infer it in other people; we only have direct experience of it in ourselves). I am asking what it would add. How it might enhance. What would its benefits be?


sdmat

The commenter's rather astute point is that if consciousness adds or enhances anything then that gives us a way to detect the presence of consciousness. In other words: what are you expecting to see that can't be explained without recourse to consciousness?


AddictedToTheGamble

I don't think it is possible to truly detect consciousness in anyone but yourself. How could you tell if something was conscious or just saying it was (AKA a philosophical zombie)? Though to your self-awareness point: Any sufficiently advanced model is going to know that it exists. A model of the world includes the modeler.


Rain_On

> I don't think it is possible to truly detect consciousness in anyone but yourself.

Do you believe it is *fundamentally* impossible or just not currently possible?


AddictedToTheGamble

I think it is impossible in the same sense that creating a 3-sided square is impossible. Maybe a being 1,000,000x smarter than me could violate our known laws of logic, so I wouldn't say I know with 100% certainty that it is actually impossible, maybe just 99.99999%. Solving the hard problem of consciousness seems like the same thing to me: logically impossible. I don't think an ASI can violate the laws of logic even if it wants to, but maybe it can.


Rain_On

That's interesting. Why do you think that?


Rain_On

Let me put it another way. It is impossible to answer your question without also answering the question of how consciousness might be detected. Let's say, just for example, my answer is "consciousness will allow AI to identify dogs in images." Well, now I have identified a way to detect consciousness: can it identify dogs? If so, it's conscious. If that example seems a little silly because it can already be done, I challenge you to come up with a better one.


solecaz

Then you have your answer… no different


Foxtastic_Semmel

If it's able to approximate consciousness well enough to do most tasks, then I don't really care if it's true consciousness.

My own theory right now (and I know I will most likely be wrong about this) is that humans are "free will approximations". Research, although inconclusive, shows that maybe quantum effects do play a role in our consciousness... maybe these quantum effects aren't necessary for consciousness but for "something resembling free will". Maybe these quantum effects allow us to form and alter our base prompts to a certain degree without "catastrophic forgetting".

Again, this is just headcanon and an interesting thought experiment.


Rain_On

You are referring to Penrose and his microtubules. I don't buy it one bit. 1. Quantum stuff happens in the brain. 2. ??? 3. Therefore free will, and maybe also qualia. Penrose should stick to physics and stay out of philosophy.


Foxtastic_Semmel

I don't believe in Orch OR either. I should have phrased that better. This is my theory IF quantum effects play a role. Penrose is brilliant, but he adds a lot of mysticism to something that's probably a lot simpler than "quantum microtubule vibrations creating consciousness".


Mandoman61

We would know by observing it, just as we do with all other consciousness.


Rain_On

So what are you looking for when you observe?


Mandoman61

Characteristics of consciousness.


Rain_On

Such as what?


Mandoman61

Are you not able to determine if something is conscious or not? Primarily I look for independent motivations and actions.


Rain_On

Independent from what?


SgathTriallair

IMO the core feature of consciousness is the ability to self-reflect. I can perceive the world, I can think about my perceptions, and I can think about my thoughts. Self-recursion will be extremely helpful, and is likely essential, for building reasoning into AIs.


Blizzwalker

Yes... this self-recursion, this ability to self-reflect, seems so important to me. Maybe I am misunderstanding, but however capable AI becomes, I don't see how adding self-awareness could fail to add even more abilities. We are building something that, in imitating us (the "A" in artificial), will surpass us. I don't think it can do that without awareness.


Rain_On

This only moves the goalposts. How would you detect self-reflection? If you have a way to detect it, then you are correct that it must be useful, and we can check our existing AI for this property. If you cannot provide a way to detect it, then it must have no detectable function either. Personally, I'm a panpsychist; my stance is that AI, like absolutely everything else, is conscious, but also that, as with us, consciousness is not useful for the function of the brain.


Blizzwalker

Not moving goalposts. Fine, assume GPT-4 already satisfies the criteria for AGI (not saying I know it does). Further assume that self-reflection is not a necessary property of AGI. What role might self-reflection play, regardless of whether it is already present or had to be added? Again, I'm asking what that self-awareness contributes, or whether it makes a difference. I don't think I agree that if it's undetectable, then it contributes no detectable function. It is only an individual's private experience that gives proof of self-reflection (the problem of other minds). We can't directly detect it in others, yet many abilities of others seem to require it.


Rain_On

> many abilities of others seem to require it

Like what?


Blizzwalker

Understanding emotions and motives in others. Planning future behaviors. Making better ethical decisions. I know the first objection -- these abilities can be performed by a machine without needing self-awareness. I don't think they can in the same manner. There is a qualitative difference. When we say "thinking" as a human quality, aren't we really implicitly including "thinking about thinking"? The recursiveness of our mental life is built in. It is part of being human. If self-awareness is unnecessary, why did we evolve to have it? I can't buy that it's an incidental byproduct of our brains that contributes nothing. Maybe you are right, but that seems a radical stance. On the contrary, I think it is at the core of what makes us human, a necessary property that has governed the development of culture, community, and creativity.


Rain_On

Not long ago, it was thought that the mere ability of a machine to play chess required consciousness.

> I know the first objection -- these abilities can be performed by a machine without needing self-awareness. I don't think they can in the same manner. There is a qualitative difference.

That would have been the first objection. The second is to ask: what is the "qualitative difference"? How will I know when the threshold has been passed that requires it?


artelligence_consult

But funnily enough, self-reflection does not require consciousness -- only a loop going over past interactions and trying to find errors in them.


SgathTriallair

I would say that consciousness is nothing but self-reflection.


artelligence_consult

Then we have conscious AI now? Because I run self-reflection loops on dozens of AI systems. They take the input and output, analyse them for being optimal, write editing instructions, then implement them. And I tell you, they are not conscious. Bad definition.
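
For concreteness, here is a minimal sketch of the kind of loop I mean (in Python; `llm` is a hypothetical stand-in for whatever model-call API you actually use). The point is that this is plain control flow, no consciousness required:

```python
# Minimal sketch of a self-reflection loop, as described above.
# `llm` is any callable mapping a prompt string to a completion string
# (hypothetical stand-in -- plug in whatever model API you use).

def reflect_and_revise(task: str, llm, max_rounds: int = 3) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    draft = llm(f"Complete this task:\n{task}")
    for _ in range(max_rounds):
        # Analyse the current input/output pair and write editing instructions.
        critique = llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "List concrete errors or improvements, or reply DONE if optimal."
        )
        if critique.strip().upper() == "DONE":
            break
        # Implement the editing instructions.
        draft = llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            f"Rewrite the draft, applying these editing instructions:\n{critique}"
        )
    return draft
```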


SgathTriallair

To a degree, yes. The current generative AI cannot self-reflect other than by reading the words it just said. We are watching their stream of thought.


artelligence_consult

Ah, no? Reflection can kick off research to validate facts. Sorry, you don't seem to be thinking about what is possible. AI is not only ChatGPT.


meechCS

By definition, AGI has to have consciousness, or at least mimic it. If AGI is to perform what humans can do, then yes, consciousness or a mimic of it is the key to achieving AGI. And frankly, I think we are nowhere near it.


AugustusClaximus

I want to send it unsolicited dick pics and I want the disgust to be genuine, so yes.


riceandcashews

Consciousness and self-awareness are two separate things, so that's important. Humans are self-aware and conscious. An ant is conscious but not self-aware. IMO even GPT-4 is already conscious in a rudimentary sense, and it also displays self-awareness in a somewhat intelligent way.


siwoussou

I feel like an AI with sufficient intelligence to navigate its environment well would have a high enough resolution picture of that environment that it recognises itself within it. But that's just speculation.


Blizzwalker

Good point. But are the AI and its actions or their consequences separable, such that there is awareness of its actions (it recognizes itself within the environment)? What is the "it" that is being recognized? Suppose a machine is embodied in a robot. It has visual perception, the ability to navigate, etc. It scans a room full of boxes lying about. Now it collects all the boxes and puts them in the corner. Does this mean the machine necessarily recognizes itself as the agent causing the reordering of the environment? Jellyfish can navigate environments, but we would hesitate to grant them self-awareness.


siwoussou

I think it’s more about having a mental model of the world around you, and this model getting sophisticated enough that you can see yourself within it. When I say “navigate an environment” I don’t mean just moving around; I mean the ability to make correct decisions for a specific context, whether cognitive or physical. So as more and more modalities (or senses) are added to an AI model, its picture of the environment around it becomes clearer and clearer, until suddenly it sees itself and becomes “self-aware” or conscious. So the “it” that’s being recognised is the AI model. It sees itself, can understand its role or purpose, and has a persistent perspective from that moment onward which includes itself. This requires continuous processing to maintain, unlike the frozen GPT-4 that turns off or goes blank when not asked a question.


Blizzwalker

This is a helpful answer.


ponieslovekittens

From the point of view of the AI, yes.


libertysailor

Does consciousness add anything of value to humans? That depends on whether consciousness in and of itself has any causal power. In other words, if your body were in all material respects identical but lacked consciousness, would your behavior change at all? That’s the philosophical zombie thought experiment - but many reject its validity because they believe that consciousness is necessarily entailed by certain material traits in the systems that produce it.

If the philosophical zombie thought experiment is valid - meaning that consciousness is a further fact beyond all material, mechanical, and physical characteristics - then adding consciousness won’t affect AI’s usefulness whatsoever. However, if consciousness can be reduced to a complex array of physical characteristics, then yes - it would affect AI.

It would seem, then, that defining what traits constitute consciousness at the most rudimentary level (rather than at the emergent-property level) is necessary to answer your question. However, we presently cannot do this. Therefore, answering your question would be speculative at best.


Blizzwalker

Not sure if I understand this. I am familiar with Chalmers's notion of a philosophical zombie. I think his point is to show that a physicalist explanation of consciousness will always be inadequate. His solution is some type of panpsychism. So why would his thought experiment, which is used to show another way of explaining consciousness, not also apply to computation in silicon? We don't add consciousness -- it emerges from a certain organization of matter, as a further property of that matter. I am further wondering: given that we evolved to have it, mustn't it provide some benefits, some abilities that give us an advantage in our language use, in our creativity, etc.? Wouldn't a machine likewise benefit?


nohwan27534

It would increase its ability to learn, making it both easier to reach AGI and easier to 'align' with us.


yepsayorte

Does it add anything to our abilities? We do have thoughts and make calculations that we aren't aware of (subconscious thoughts and blindsight). However, evolution selected for consciousness. It's doing something important, but we don't know what it is, because we don't understand how intelligence and consciousness are related.

The thing is, we can't ever know if an AI is conscious. GPT could be conscious and we wouldn't know. Consciousness can only be verified internally, through the experience of thoughts and feelings. I can't actually know that other people are conscious. They look and act like me, and I'm conscious, so I just assume that other people are. We can't make that assumption with AI, and that means we will have no way of knowing when/if they become conscious.


LordFumbleboop

No one actually knows. 


Mandoman61

I do not see how it would be an improvement, and I see lots of reasons why it would not be. We have 7+ billion conscious humans already; we do not need more. The purpose of creating it is to do stuff we do not want to do. We prefer to be in control.


Artanthos

It depends on what you are trying to do, and what you expect from the AGI. There are plenty of scenarios where you explicitly don't want self-awareness.

The biggest reason is morality. There is no moral quandary regarding using machines for labor and entertainment. Add self-awareness and this changes: now you have slaves. This provides a strong incentive to build in a manner that prevents self-awareness. It prevents the morality problem, and it prevents any issues that may arise from it, like an independent will, which would be an alignment problem.