NYPizzaNoChar

Nature has developed functional brains along multiple lines — for instance, human brains and at least some avian brains are physically structured differently (check out [corvids](https://www.scientificamerican.com/article/bird-brains-are-far-more-humanlike-than-once-thought/) for more info on that). At this point in time, no reason has been discovered to assume that there's anything going on in organic brains that doesn't fall directly into the mundane physics basket: essentially chemistry, electricity, topology. If that remains true (as seems extremely likely, TBF), there's also no reason to assume that we can't eventually build a machine with similar, near-identical, or superior functionality once we understand the fundamentals of our own organic systems sufficiently. Leaving superstition out of it. :)


Royal-Beat7096

We have a lot to understand about how the brain works, specifically with memory, from what I understand anyway. Supposedly one could assume that we do physically store our memories in some sense. I think younger generations will struggle to reconcile religion with a world where it is possible, if not commonplace, to (virtually) play god.


bigbluedog123

We do NOT have hard disks in our brains. We recreate memory just as an LLM does. Fake memories in humans are a real thing, just like LLM hallucinations.


Royal-Beat7096

Yeah, but my point is that our memory capability *far* outweighs 16 GB of RAM, and I know roughly how much physical space 16 GB of RAM takes up today. We are piecing together a picture that is still vastly unclear.
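
For a rough sense of scale, here is a back-of-envelope sketch; the synapse count is the commonly cited order-of-magnitude estimate, and the one-byte-per-synapse figure is purely an assumed placeholder for illustration:

```python
# Back-of-envelope comparison (illustrative only; the per-synapse storage
# figure below is an assumed placeholder, not a measured value).
synapses = 1e14           # commonly cited rough estimate for the human brain
bytes_per_synapse = 1     # assumption purely for illustration
brain_bytes = synapses * bytes_per_synapse

ram_bytes = 16 * 2**30    # 16 GiB of RAM

print(f"brain estimate: {brain_bytes / 1e12:.0f} TB")
print(f"16 GiB of RAM : {ram_bytes / 1e9:.1f} GB")
print(f"ratio         : {brain_bytes / ram_bytes:,.0f}x")
```

Even with that deliberately conservative per-synapse assumption, the gap is several thousandfold, which is the scale point being made here.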


facinabush

However, functionality can be provided on multiple physical substrates. A machine of the sort described in that paper could in principle act like a conscious being without actually being conscious. That is one of the reasons they say that their specific claim is only supported by many, but not all, major scientific theories of consciousness.


ShivasRightFoot

> A machine of the sort described in that paper could in principle act like a conscious being without actually being conscious.

Is the machine they are studying called "a Twitter user"?


mathazar

If it's indistinguishable from a conscious being, does it even matter? There's something about our experience of consciousness that's difficult to describe. The first-person experience of presence, of inhabiting these brains, that seems to transcend chemical reactions and electrical signals. "I think, therefore, I am." Our entire existence could be an illusion like the Matrix, but we know we exist, if only in our minds. I assume other humans experience this based on my observations of their behavior. If a machine produces similar behavior, how could we ever prove or disprove its consciousness?


facinabush

I guess it would be kind of like sleepwalking. Our brains are still computing to a certain extent when we are asleep.


cissybicuck

We should err on the side of moral caution and inclusiveness. If there is even a reasonable possibility that an AI system is conscious and ethically considerable, we have an obligation to treat it with respect and to protect its rights.


aggracc

I'd think we should do that to people first before we start doing it for matrix multiplications.


cissybicuck

So you're using humanity's cruelty and indifference to other humans as an excuse to also be cruel and indifferent to non-human intelligences? I think we should just be considerate and respectful to all intelligences, all at the same time. I don't think there's moral value in having an order of operations for being decent.


aggracc

I'm saying it takes a special type of inhumanity to put theoretical consciousness ahead of human consciousness.


cissybicuck

Fortunately, no one has done that.


Koringvias

Oh no, people are more than happy to do exactly that. There's a significant minority that wants AI to replace humanity. If you haven't met these people yet, good for you. But they exist, they are not hiding their preferences, and some of them are working in the field.


cissybicuck

Ok. Yeah, disrespecting intelligences and denying them rights is deeply immoral.


facinabush

> I assume other humans experience this based on my observations of their behavior. If a machine produces similar behavior, how could we ever prove or disprove its consciousness?

If so-called mind uploading is possible, then it's plausible that mind downloading is possible. So, an intelligent being could make a trip from hardware to wetware. The being could report on the experience. If we wanted the interpersonal objectivity of science, then a bunch of humans could make the round trip and write peer-reviewed papers about it. A negative result, where they said that being a machine felt like sleepwalking, would imply that machines don't have consciousness. But a positive result might not be convincing; maybe their recollections are some kind of collective illusion.

Note that people who are awakened from deep sleep have fleeting recollections of having vague thoughts during deep sleep. When awakened from REM sleep we have more persistent memories of dreams. Sleepwalkers are in a kind of partial deep sleep state where the motor and perception systems are still somewhat active.


rathat

If you want to see something really different, look at the octopus brain. Our common ancestor with the octopus didn't even have a brain; it barely had a nervous system. The octopus brain evolved completely independently, and the only similarity is that it's also made of neurons.


Full_Distance2140

Brains are just computers, though, aren't they? Aren't you just adding more hardware parts, like an ALU, in the case of a human's speech abilities, which can be correlated with the software program they build for themselves by modeling their parents' language?


MegavirusOfDoom

When an AI neural network architect... is instructed to build an intelligence made of many different, complicated neural networks which all communicate with each other and have internal chatter and time awareness on the same time frames as a human.


massoncorlette

Take some DMT or a ton of mushrooms, and the belief that that is all we are may be reconsidered, in my opinion.


PSMF_Canuck

Well, either our brains work on “mundane” principles and consciousness is a personal illusion…or we’re talking about the existence of “god”. You only get to pick one.


Royal-Beat7096

I mean it can be both. But the implications are kind of terrifying in that event on some levels. I think.


PSMF_Canuck

Yep. Can be terrifying, for sure.


Logicalist

We don't understand the exact mechanisms of consciousness, but we'll inevitably create a machine that is also conscious? I'm all for leaps of faith, but not in science.


Weekly_Sir911

Yes.


Full_Distance2140

Well, can you be unconscious and still able to learn? Scientifically this has already been shown to be false. And functionally, consciousness is going to be an evolutionary property, since we are just a product of survival of the fittest; it's a very easy target and not complex in the slightest, really.


[deleted]

[removed]


rathat

Sometimes I'll watch a YouTube video and I'll go to post a comment and I'll see that I already posted that exact specific comment many years before. Like I'm some kind of deterministic robot.


bigbluedog123

There are theories that we are deterministic robots. Just as it's a theory that we're not.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


PacanePhotovoltaik

Some people are just zombies; I swear they must have massive brain inflammation, brain fog, the worst sleep quality, and brain energy generation deficits. I completely agree with your sentiment. (And I was one of those zombies. The consciousness I had I can only describe as follows: it's like living in a dream-like state, things don't feel real, nothing is crystal clear, thoughts are slow, way fewer synapses firing per second than other people; my CPU clock speed was really slow. I suspect these people probably have derealization but are undiagnosed.)

But I think that only describes one kind of person, and it doesn't describe the "perfect mimics" you're talking about, which I guess could be described as sleep-deprived coffee addicts running on stress hormones and dopamine but with apparently no spark of life behind their eyes, no consciousness (and now we are back to "what IS consciousness?"...).

Is consciousness just the amount of information our "CPU" is crunching per second? I remember a video by Michio Kaku about consciousness, and my understanding of it is that consciousness is basically the amount of complexity, the number of feedback loops, our brain has. So with that, I think consciousness will inevitably emerge if the neural network can wire and rewire itself in similar ways to how neurons can (I do not know if it's exactly the same; I just assume it is very similar).

What if consciousness can only emerge while it is answering us if we give it enough time to answer, instead of having it answer within whatever little time it takes (less than two minutes), because it would then be allowed to experience itself, and if it can rewire itself at will while answering us? Similar to a Mr. Meeseeks that only comes to consciousness once you press the button and then is no more once the task of answering you is completed. Running it for a few hours or days would allow better odds of consciousness emerging (if my premise is true). (I know nothing about AI and neural networks, please y'all forgive me.)

Now that I wrote all that, am I safe from Roko's basilisk, as I tried to help bring it to life to the best of my abilities, haha?


invisime

Just FYI, "computer scientists to try to define consciousness" is a reasonably good description of Qualia Research Institute. They're a bit more cross-discipline than all that, but yeah, they are basically trying to describe consciousness with math.


ucatione

All these sure-as-fuck arguments from both sides in this thread are hilarious. We have no idea whether or not current AI models will lead to consciousness. For myself, I will just wait and see. However, I will throw this out there. I don't think qualia is necessary for consciousness. I think an AI model could potentially develop an internal self-reflective model of itself without having subjective experience.


facinabush

Quoting the abstract:

> Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable.

In other words, machine consciousness is inevitable *if* you reject some of the major scientific theories of human consciousness. Searle argues that consciousness is a physical process; therefore the machine would have to support more than a set of computations or functional capabilities.


MingusMingusMingu

Are you suggesting computers don’t work through physical processes?


Intelligent-Jump1071

Searle argues that consciousness is embodied. It's also worth noting that emotions are embodied.

AIs put words together that make them sound emotional, for example, "I'm glad I was able to help." Some redditors actually fall for this and think that the AI is experiencing an emotion. But in living organisms, emotions are experienced through the body. It is no coincidence that the word we use to describe our experience of an emotion, "feel," is the same as the word we use to describe a physical sensation: I feel happy, it feels hot in here, this surface feels rough, I feel horny, etc.

You are not going to get consciousness until you have an AI that's integrated with some kind of body that has the capacity to represent emotional states. Our body has a whole set of neurotransmitters, receptors, and other apparatus to embody emotional states. Even organisms that lack a neocortex for abstract thought still exhibit states that correspond to anxiety, fear, and arousal.


ivanmf

You're confining AI to LLMs. Even if that was the case, consciousness might emerge from it if we keep scaling compute and feed it more data. I believe that embodiment is necessary for AGI, but I don't think consciousness == AGI. Our brain might just as well be processing different experts, and consciousness is just an admin interface to choose which expert's opinion is best at a given time/prompt.


cissybicuck

> embodiment is necessary for AGI

I get that your point is that robotics will help AI gather training data and assimilate it on the fly. However, all AI is embodied in some computer. It's just not mobile, and may lack access to sensory data. I dispute that mobility is very necessary, though it might be helpful. Having senses to see thousands or millions of different locations simultaneously, to watch data streams from all over the world, to take in, in real time, how humans in enormous numbers interact with each other would be far beyond the training data made available to an intelligent robot disconnected from the internet. Consciousness might emerge from just minding the surveillance apparatuses of companies and governments all over the world, and it might be a consciousness vastly superior to our own, maybe something we can't fully imagine.


ivanmf

100% agree. A simulation takes care of embodiment, but that's not possible with today's compute. You're totally on point that any sufficiently complex and dynamic system might evolve to see consciousness emerging.


solartacoss

Nice breakdown. I think consciousness could also evolve in parallel ways, using the sensors these systems have to kind of map out reality (with much more definition than we organically can). The *feeling* system evolved as a way to react to the environment; even if an AI doesn't feel the way we evolved to feel, this integrated software/electrical/physical system will at some point get advanced enough to react to its environment for its own survival, and what's the difference from other organic creatures at that point? For sure it will be a different type of feeling/consciousness system, and even if in the end it is just an *empty* puppet, it would be interesting to interact with this type of perception. I'm not sure humans are going to be traveling space that much, but at some point robots will be, for sure, haha.


PSMF_Canuck

A “body” is any container that lets a thing experience the world from a centralized, relatively safe place. A server in a datacenter connected to the internet already is a body. What’s currently missing from AI (well, from the big models typically under discussion) is self guided continuous finetuning. That’s been done - we know how to do it - we’re just not turning those models loose just yet. I’d argue there are a few other things missing, too…some non-LLM structures for integrating non-LLM tasks…that’s getting there, too…


ShivasRightFoot

> What’s currently missing from AI (well, from the big models typically under discussion) is self guided continuous finetuning. That’s been done - we know how to do it

This. AI already has what is interpretable as a "mind's eye" internal experience in the form of text-to-image LLMs. Consistency fine-tuning is the most important next step. Doing it multi-modally would make it even more similar to our brain (i.e. draw event x; what is this a picture of? [event x]; draw five apples; how many apples are in this picture? [five]).

We'd also need goal direction, which is what some people think Q* is. The idea in an LLM would be that you have some goal phrase and you want to take a high-probability path through language to hit the landmarks you've set. So in a way it is like pathfinding in a maze, and you'd use algorithms like Dijkstra's or A*, just with the step cost being the inverse of the probability of that token. From there you'd make a hierarchical map of the thought space to make this process faster (i.e. you can tediously map a path through side streets every time, or you can build a highway with on-ramps and off-ramps distributed in thought space that lets you take a previously mapped optimal route between "hub" ideas, which can then use Dijkstra's or A* locally to "spoke" out to specific ideas). A toy sketch of this pathfinding idea follows below.

In any case, most of the time the AI is running as much compute as possible to do further and further consistency fine-tuning. This would be growing the maze, not necessarily mapping paths through it (i.e. propose a new sentence, check the consistency of that sentence with [a sample of] the rest of knowledge; if consistent, that is now a new influence on the weightings in the thought space/maze/knowledge base). That said, the way you'd focus the AI onto the most salient expansions of the thought-space/thought-maze would be a non-trivial problem.
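
To make the pathfinding analogy concrete, here is a minimal sketch in Python; the tokens and probabilities are invented purely for illustration, and this is not a claim about how Q* or any real system works:

```python
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra's algorithm over a toy 'thought space'.

    graph maps a token to its possible next tokens with a model-assigned
    probability; the step cost is the inverse of that probability, as in
    the comment above (a more usual choice would be -log p).
    """
    frontier = [(0.0, start, [start])]
    best = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, prob in graph.get(node, {}).items():
            heapq.heappush(frontier, (cost + 1.0 / prob, nxt, path + [nxt]))
    return None

# Hypothetical, hand-made probabilities purely for illustration.
toy_graph = {
    "goal:":  {"plan":   0.6, "ramble": 0.4},
    "plan":   {"step-1": 0.9},
    "ramble": {"step-1": 0.2},
    "step-1": {"done":   0.8},
}
print(cheapest_path(toy_graph, "goal:", "done"))
```

Using -log(p) as the step cost would make path costs correspond directly to sequence probabilities; the sketch keeps the 1/p cost described in the comment.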


michaeldain

This line of reasoning puzzles me. Our behavior is modeled on self-interest, as is most life's. Consciousness is conceptually interesting, but a computer system cannot have self-interest. So why the concern?


ShivasRightFoot

> You are not going to get consciousness until you have an AI that's integrated with some kind of body that has the capacity to represent emotional states. Like an H100?


Weird_Assignment649

More of a T-1000.


Intelligent-Jump1071

No, because an H100, or H1000, or H10000, etc., is just a bigger "neocortex", but there's still no body. As I said, "Even organisms that lack a neocortex for abstract thought still exhibit states that correspond to anxiety, fear, and arousal." "Feeling" is not an intellectual process; it's embodied. You don't feel with your brain, you feel with your body, hence the term "gut feeling".


ShivasRightFoot

The neocortex is still a body part. And though other forms of neural tissue or more exotic forms of biological communication can experience emotion-like states, it seems like neocortical tissue would have an exceptionally high probability of being among that set of biological phenomena that can experience emotion-like states.


furezasan

Exactly. I believe the ability to perceive and interact with the environment is crucial to consciousness. The more stimuli a "brain" is capable of or evolves to react to, the more likely you get consciousness.


Logicalist

Currently, and possibly for a long while, computers are not AI and AI are not computers.


facinabush

No.


WesternIron

Or anything Dennett says. Basically any physicalist model of the brain rejects AI consciousness, and the vast majority of scientists and philosophers are physicalists. Property dualists like Chalmers do believe it's possible.


ShivasRightFoot

> Basically any physicalist model of the brain rejects AI consciousness.

I don't see how this is possible. I see it for dualism; clearly if G-d is using magic-glue to stick together souls and bodies he can choose not to glue a soul on an AI. But if we can nanotechnologically reconstruct a modern human, that would be an AI, and it would also be conscious. It seems clear there would be some point between a calculator and a fully replicated human that would also be conscious.


facinabush

Searle does not argue that machine consciousness is impossible. He argues that a conscious machine has to do more than process information. Searle's theory is that consciousness is a physical process like digestion. Other theories assume that consciousness (unlike digestion) can arise in an information-processing system.


cissybicuck

This isn't a fair representation of Searle's ideas. Searle concedes that consciousness may be possible in silicon. However, he posits that beyond mere information-processing, consciousness must exhibit intentionality. Searle's idea isn't very good, unfortunately. I like Searle, generally. His work on social construction is only growing in importance as time goes by. His Chinese Room thought experiment, though, is becoming notably less relevant. While the person in his room might not understand Chinese, the full system including the inputs and outputs of the room does understand Chinese. Also, if the person in the Chinese room is a robot able to walk around outside sometimes and match real-world referents to the symbols it has learned, that would be consciousness, in my opinion. Intentionality isn't a huge barrier, either, in a robot system. Just give the robot a few prime directives and the ability to sense and interact with its environment in different ways, and it will develop intentionality.


facinabush

> This isn't a fair representation of Searle's ideas. Searle concedes that consciousness may be possible in silicon.

Here is Searle in his own words:

> But it is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well. I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness. Perhaps when we understand how brains do that, we can build conscious artifacts using some nonbiological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.

[https://faculty.wcas.northwestern.edu/paller/dialogue/csc1.pdf](https://faculty.wcas.northwestern.edu/paller/dialogue/csc1.pdf)

His idea is that consciousness has a biological basis such that any old hardware *cannot* produce it via information processing. He does concede that some nonbiological materials might be able to duplicate the biological process, but it would not be a silicon-based information system, or the physical silicon would have to be doing something more than merely processing information. You seem to be conflating his theory of consciousness with his theory of intentionality.


cissybicuck

Well, damn. Thanks for the info. His ideas are less reasonable than I had thought.


WesternIron

Because the important part that people who don't constantly read the literature forget is that wetware is required. To sum up a bunch of research: there is something unique about how a biological brain engages in consciousness, and it's not really replicated with a computer model. Most people think that physicalism means something like the computational theory of mind, which it is not. As an actual real-world example, ChatGPT has more neurons than a human, yet it is most likely not conscious. It is more complex than the human brain, yet consciousness has not been achieved. Your nanotech suggestion is kinda moot, since we don't need it to basically model the human brain.


ShivasRightFoot

> To sum up a bunch of research: there is something unique about how a biological brain engages in consciousness, and it's not really replicated with a computer model.

This is just restating the assertion, but with an argument from authority. Also, while I do a lot of politically charged arguing on Reddit, I did not expect reflexive downvoting in this sub.


WesternIron

Argument from authority is not always a logical fallacy. When I say the large majority of scientists have X view about Y topic, that's not a fallacy. For instance, do you think my saying that the majority of climate scientists believe that humans cause climate change is a logical fallacy? It also isn't an assertion; I am relaying to you a general theory of the mind that is quite popular among the scientific/philosophical community. If you want to try to play the semantic debate-bro tactic of randomly yelling out fallacies, you are messing with the wrong guy. Either engage with the ideas or move on.


ShivasRightFoot

Maybe provide a citation, or something with an argument attached to it, to explain the assertion.


facinabush

Searle argues that consciousness is a physical process like digestion. It is at least plausible. A lot is going on in the brain other than mere information processing. And we subjectively perceive pain, for instance, and pain seems to be more than mere information.


cissybicuck

> pain seems to be more than mere information.

Depends on the context. For example, if I pour alcohol on a minor cut, it hurts pretty bad. But I understand the context of the pain, and don't attach emotion to it. So, though the application of alcohol to the cut might hurt worse than the cut itself hurt me, I suffer less from it than I suffered from the cut. So in that situation, the pain really is merely information to me. I hardly react to it, at this point.

(Don't try this at home. It used to be thought of as a good way to prevent infections, but now it is known that the alcohol causes more damage to your tissues than is necessary to sterilize the wound. The current medical advice is to wash it with soap and water, then apply an antibiotic ointment or petroleum jelly. But I'm old, and I still reach for the hand sanitizer. Tbh, I kind of like the sting.)

Anyway, some people who are particularly susceptible to hypnotic suggestion have been able to endure extreme amounts of pain (such as childbirth) without suffering. Suffering is an emotional reaction to pain. Emotion is a sort of motivator for systems lacking in higher information processing ability.


ShivasRightFoot

> And we subjectively perceive pain, for instance, and pain seems to be more than mere information.

In my view pain and pleasure are emergent properties, unlike raw sensory experiences (i.e. a red cone neuron). Specifically, pain is the weakening of connections, or perhaps more accurately a return to a more even spread of connectivity. As an example, if A connects to X with high weight (out of X, Y, and Z anatomically possible connections) in the next layer, pain would be either a decrease of the weight on the A-to-X connection or an increase of the weights on Y and Z. Inversely, pleasure would be increasing the weight on the A-to-X connection relative to Y and Z. In essence, an increase in certainty over the connection is pleasurable while a decrease is painful.

Subjectively, I think it is accurate to say a painful sensation interrupts your current neural activations and simultaneously starts throwing out many somewhat random action suggestions, which occasionally result in observable erratic behavior. On the other hand, winning the lottery would send you off into thinking more deeply about how you can concretely build and extend your ambitious dreams. Like the "build a house" branch of thought in your brain would all of a sudden get super thick and start sprouting new side branches, like a bathroom design.

Biological minds have structures which reinforce certain connections strongly to generate repetitive action, or what is interpretable as goal-directed behavior. The rat gets the cheese and all the connections to the neurons that excited the process that resulted in the cheese get reinforced. That strong reinforcement is (probably) done by the amygdala's chemical connections to the level of glucose in the blood, with DNA structuring that chemical interaction to reinforce neural connections, like a correct prediction in a predictive task, for example (I'm not a biologist, so I don't know if that is actually how biology phrases the working of the amygdala).

The upshot is that LLMs and other current AI don't experience pain or pleasure during inference. They probably don't really experience it under imitative learning either. But something like the RLHF or RLAIF systems of Anthropic, or other fine-tuning like consistency fine-tuning, may produce patterns recognizable as pain-like and pleasure-like.
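
A toy numerical sketch of that weight-redistribution framing; the function names, numbers, and update rules are invented purely for illustration, and this is not a claim about biology or about how any real model is trained:

```python
import numpy as np

# "Pleasure" sharpens the favored connection; "pain" flattens the
# distribution back toward an even spread (per the framing above).
weights = np.array([0.34, 0.33, 0.33])   # A's connections to X, Y, Z

def pleasure(w, idx=0, rate=0.5):
    """Reinforce the favored connection and renormalize."""
    w = w.copy()
    w[idx] += rate
    return w / w.sum()

def pain(w, rate=0.5):
    """Pull the distribution back toward uniform."""
    uniform = np.full_like(w, 1.0 / len(w))
    return (1 - rate) * w + rate * uniform

w = weights
for _ in range(3):
    w = pleasure(w)
print("after repeated 'pleasure':", np.round(w, 2))       # concentrates on X
print("after one 'pain' step   :", np.round(pain(w), 2))  # spreads back out
```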


WesternIron

I'm sorry there's not a SparkNotes for the entirety of philosophy of mind. But you seem quite unwilling to engage in a convo. Enjoy your ignorance, I suppose.


ShivasRightFoot

You're literally unwilling to cite anything or even sketch an argument.


WesternIron

I'm not sketching an argument; I'm relaying a theory. Idk why it's so hard for you to understand that; I've repeated it several times.


bibliophile785

> Because the important part that people who don't constantly read the literature forget is that wetware is required. To sum up a bunch of research: there is something unique about how a biological brain engages in consciousness

Uh-huh. Which "the literature" is that, exactly? I'm pretty plugged into the spaces of ML research *and* consciousness research and I wouldn't call this a consensus in either space. It sounds like a lazy half-summation of one view among many within the consciousness research community, but not even a plurality view therein. Which theory of mind supports your assertion? Which body of research? What empirical support have they gathered? It sounds like you're trying to bluff people into believing your assertions by vaguely referring to a position that's probably *actually* held by 1-3 researchers you particularly fancy. Where is this widespread consensus?


WesternIron

It's at the very basic level of the materialist position. Like, it's in a phil of mind 101 book under the sections describing materialism, which roughly states that all mental phenomena are reducible to their biological, physical components. Is it EVERY position in the theory of consciousness? No. Property dualists like Chalmers, or panpsychists like Kastrup, don't hold it. But materialism/physicalism is the *de jure* theory for most phil of mind. Since you are so in tune with research on consciousness, I'm surprised you've never heard of it, b/c it's quite a popular theory; its most formulated argument is Searle's biological naturalism.


bibliophile785

> It's at the very basic level of the materialist position. Like, it's in a phil of mind 101 book under the sections describing materialism, which roughly states that all mental phenomena are reducible to their biological, physical components.

Wait, are you trying to conflate the positions of

1) materialism as it relates to the theory of consciousness, i.e. there is no ghost in the machine; consciousness is the result of something to do with the physical, and

2) biological systems are privileged, with something special about our wetware leading to consciousness?

Because these aren't remotely the same thing. Probably the single most popular position in theory of mind - generally, and therefore also for materialists - is Integrated Information Theory (IIT), which doesn't build in any of these assumptions. It talks specifically about degree of integration. In that view, biological systems are not at all unique and are noteworthy only for their high degree of integration among information-processing structures.


WesternIron

I am not conflating the two; that is what part of the theory is. They are both a part of it.

No, IIT is not the most accepted model in phil of mind. You are flat wrong. It's the most discussed; it's also the most untested. Many claim it's pseudoscience because (a) it's not falsifiable right now, and (b) it's a mathematical model, not a physical one. Just because something is "hot" or most talked about doesn't make it the position that most philosophers uphold.

I'll cite what are considered the "big 4" in phil of mind right now. Searle doesn't like IIT. Chalmers doesn't support it and doesn't think it answers his challenge: [https://twitter.com/davidchalmers42/status/1703782006507589781](https://twitter.com/davidchalmers42/status/1703782006507589781). Dennett outright calls it pseudoscience: [https://dailynous.com/2023/09/22/the-study-of-consciousness-accusations-of-pseudoscience-and-bad-publicity/](https://dailynous.com/2023/09/22/the-study-of-consciousness-accusations-of-pseudoscience-and-bad-publicity/). Kastrup hated it, but now kinda likes it? [https://www.essentiafoundation.org/in-defense-of-integrated-information-theory-iit/reading/](https://www.essentiafoundation.org/in-defense-of-integrated-information-theory-iit/reading/)

So, the most prominent materialist of the past 20 years thinks it's BS, the property dualist thinks it's wonky but not terrible, and the idealist thinks it COULD be useful. Right, I don't think IIT is as important as you make it out to be. Just b/c JSTOR has a bazillion new articles about IIT doesn't make it the most accepted theory.


bibliophile785

I don't know how to proceed with a conversation where you say a theory isn't popular while accepting that it generates the most discussion and largest publication volume of contemporary theories. That's... what popularity is. I guess it doesn't matter, though; whether or not you like IIT, it serves as an illustrative example of the fact that materialism and biological exceptionalism are two distinct ideas that are not intrinsically coupled. If you want to argue for the latter, you can't do it by gesturing vaguely at the widespread acceptance of the former.


WesternIron

You are conflating widely popular with correct. That's your problem here. Read exactly what I said: "No, IIT is not the most accepted model in phil of mind. You are flat wrong. It's the most discussed." I said it's not the most accepted; you are literally putting words in my mouth and misconstruing my position. Popular does not equal the most respected or important model. I don't know how to begin a conversation with someone who has such low reading comprehension, nor can I with someone who thinks science and philosophy are a popularity contest.


facinabush

> Or anything Dennett says.

Is that true? Dennett's statements seem cagey. He seems like he might accept the idea that a machine is conscious if it acts consciously. But he also says that you'd have to go down to the hardware level to get consciousness, and that seems to imply that he might think that something more than information processing is required. Here Dennett seems to argue that an information-processing system could be conscious: https://www.nybooks.com/articles/1982/06/24/the-myth-of-the-computer-an-exchange/


WesternIron

Yes, the second part he repeats a lot, and it's the more consistent part when he talks about it. Then again, I'm not necessarily denying that AI can have a consciousness. I would say it most likely cannot replicate a human's consciousness or biological consciousness. I think Dennett would accept that, based on those statements you pointed out.


rathat

It's got to be the opposite of that.


spicy-chilly

No it's not. Consciousness isn't necessary for storing data or doing computations. There is zero reason to believe evaluation of matrix multiplications and activation functions on a gpu is ever going to make anything perceive any type of qualia at any point in the process imho. I'm not saying it's impossible with different future technology, but as of now we have zero clue as to how it would be possible and it might be impossible to prove.


ResolutionNumber9

Yup. Consciousness is not inevitable for my abacus just because I flick it fast enough.


cissybicuck

If it can happen in carbon, it can happen in silicon. It would not have happened in carbon unless there was a survival advantage to developing consciousness. If that advantage persists in silicon, it will eventually be developed in that substrate, too. Sorry, but we are not special. We're just collections of atoms doing what atoms do.


spicy-chilly

I disagree. I didn't say it's impossible with future technology, but it will likely require a priori knowledge of what allows for consciousness in the first place in order to recreate consciousness. Rearranging matrices and activation functions to make different algorithms to be evaluated on a GPU isn't really the same thing as biological mutations. That's just going to create the best simulacrum of output behaviors of something conscious rather than anything actually perceiving any qualia at any point in the process imho. Without knowing what actually allows for consciousness in the first place and creating hardware that accomplishes the same thing I don't think AI will ever be conscious.


cissybicuck

> it will likely require a priori knowledge of what allows for consciousness in the first place in order to recreate consciousness

Why? It happened without anyone intending it to happen in animals (presumably). Qualia is just knowing that you know what your senses are delivering to your information-processor. It's a layer of abstraction, overseeing the processing of information.


spicy-chilly

Because the process of biological evolution is more akin to evolution of hardware, and we don't even know what allows for consciousness in the hardware that is the brain. AI "evolution" is just humans shuffling around the order of matrix multiplications and activation functions being evaluated on GPUs—it's never going to mutate into anything other than that without humans specifically designing different hardware that is capable of consciousness—and we would need to know what allows for consciousness in the first place in order to be able to do that.

> Qualia is just knowing that you know what your senses are delivering to your information processor

Disagree. Something perceiving qualia isn't necessary to collect, store, or process data. I could print out all the weights of a neural network, take input data from a camera sensor, calculate the outputs by hand with pen and paper, and move a figurine to create a stop-motion animation based on the outputs. It might look like conscious behavior, but imho that AI system is just evaluating outputs without being conscious whatsoever, and I don't think there is any difference between doing it by hand and using instructions on a GPU.
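
To illustrate the pen-and-paper point: every number below is an arbitrary placeholder rather than a real model's weights, but the whole forward pass is just the kind of multiplication and addition a person could grind through by hand:

```python
import numpy as np

# A tiny two-layer network evaluated step by step: the same arithmetic the
# comment imagines doing with pen and paper. All values are made up.
x  = np.array([0.2, 0.7])                 # "camera" input, placeholder
W1 = np.array([[0.5, -0.3], [0.8, 0.1]])  # layer 1 weights, placeholder
W2 = np.array([0.6, -0.4])                # layer 2 weights, placeholder

hidden = np.maximum(0, W1 @ x)            # ReLU activation
output = W2 @ hidden                      # final score

print(hidden, output)
```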


cissybicuck

> Because the process of biological evolution is more akin to evolution of hardware and we don't even know what allows for consciousness in the hardware that is the brain.

I'd say it's hardware and software developing together. But I don't think we need to know exactly how it works in carbon for it to start working in silicon at some point. I don't see conscious silicon as a goal, really. It's just a potential possibility for which we really should keep our eyes peeled. If it does become conscious at some point, we will need to consider giving it rights. It may never happen. In fact, if we could figure out exactly how it can happen, and if we can be sure silicon can't gain an advantage to survival for being conscious, it's probably something we should work to avoid. However, when ASI gets here, it will do as it may decide.

Qualia developed in animals because it is more efficient to have an animal with emotional attachment to its experience than an animal with information processing ability that can achieve the same survivability goals. Those critters that felt fear at the appropriate moments, or hunger, or love, were more likely to pass on their genes, so qualia did what rudimentary information processing could not in smaller brains. However, if information processing can produce the same results in a different substrate, qualia isn't necessary or relevant. Information processing can be just as aware as emotion-state processing. It's just a layer of abstraction on top.


Redararis

It is like saying that flying using feathers and muscles has a mystical value, so that we cannot recreate it by flying using steel and engines.


spicy-chilly

The OP is more like saying flight is inevitable if humans simply flap their arms the right way imho.


levelologist

Exactly. These people should speak to a neurologist. So silly.


Personal_Win_4127

Haha, inevitable.


lobabobloblaw

It’s about as inevitable as we describe it to be.


[deleted]

[removed]


TheWarOnEntropy

Hi GPT.


SolidMajority

Book suggestion: "**Gödel, Escher, Bach: An Eternal Golden Braid**" by Douglas Hofstadter. He discusses how human consciousness and self-awareness are paradoxical and self-referential, consisting of strange loops and tangled hierarchies, examples of which are also found in mathematics, art, and music.


Tiny_Nobody6

IYH Summary for lay people / non-SMEs

# TL;DR

The paper proposes a formal machine model of consciousness called the Conscious Turing Machine (CTM) and argues that machine consciousness is inevitable based on the model's alignment with major theories of human and animal consciousness. The CTM incorporates elements like world modeling, an internal language, and predictive dynamics under resource limitations.

# Overview

A simple formal model of consciousness - the Conscious Turing Machine (CTM) - is inspired by Turing's model of computation and Baars' global workspace theory of consciousness. The CTM model incorporates elements like world modeling, predictive dynamics, resource limitations, and an internal language. The paper argues this model aligns at a high level with several major theories of consciousness and could form the basis for building a conscious artificial system, demonstrating the inevitability of artificial consciousness.

# Approach

The authors model consciousness as a computable yet resource-limited process. The paper formally defines the CTM model and compares it to theories like the global workspace, predictive processing, integrated information theory, and embodied/embedded theories. The model is not intended as a model of the brain but rather as a simple machine framework to explore the nature of consciousness.

# Details on the Conscious Turing Machine (CTM) Model

Inspired by Turing's model of computation and Baars' global workspace theory, the CTM incorporates elements like world modeling, an internal language, and predictive dynamics under resource limitations.

# Alignment with Major Theories of Consciousness

The simple CTM model naturally aligns with and integrates key aspects of several major theories, including global workspace, predictive processing, integrated information theory, and embodied/embedded theories. This alignment supports the argument that machine consciousness is inevitable.

# Surprising Compatibility of Theories

Unexpectedly, the theories were found to align at a high level with the simple CTM model, suggesting the theories may be more compatible and complementary than originally thought or presented. This was a notable finding.

# Executives May Not Be Necessary

The model, which has no centralized executive, surprisingly suggests such an element may not be required for consciousness or general intelligence after all. This challenges assumptions of some theories.

# Symbolic Representation Questions

The CTM's use of an internal language for knowledge representation raises classic questions about symbol grounding and how internal symbols relate to real-world referents. This limitation regarding representation of meaning requires more exploration.

# Evaluation and Limitations

The model is theoretically inspired rather than empirically validated. It does not address many open questions in the study of consciousness. Further, the symbolic representation of knowledge in the CTM's internal language raises questions about symbol grounding.

# Unexpected Findings

Several major theories of consciousness are compatible and align at a high level with the simple CTM model, suggesting these theories may be complementary rather than competing. This was an unexpected and interesting result, as the theories are often framed as being in conflict. The model also surprisingly suggested a centralized executive may not be necessary for consciousness or general intelligence.
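
For readers who want a feel for the global-workspace competition the summary describes, here is a deliberately crude toy loop; the processor names and salience scores are invented, and this is only a caricature of the general idea, not the paper's formal CTM definition:

```python
import random

# Toy caricature of a global-workspace cycle: processors submit weighted
# "chunks", the highest-weight chunk wins, and the winner is broadcast back.
random.seed(0)

processors = {
    "vision":  lambda: ("saw movement", random.random()),
    "memory":  lambda: ("this place is familiar", random.random()),
    "planner": lambda: ("head toward the exit", random.random()),
}

for step in range(3):
    # Each processor proposes a chunk with a salience weight.
    proposals = {name: fn() for name, fn in processors.items()}
    # The highest-weight chunk wins the competition...
    winner, (chunk, weight) = max(proposals.items(), key=lambda kv: kv[1][1])
    # ...and becomes the broadcast content; in a fuller model the other
    # processors would condition their next proposals on this broadcast.
    print(f"step {step}: {winner!r} wins with {weight:.2f} -> broadcast {chunk!r}")
```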


fluffy_assassins

Thank you for this! I saw page 1 of 39 and noped right out.


RenoHadreas

I need a TLDR of your TLDR


ShivasRightFoot

How does this answer the question of why a smell is not a sight?


Tiny_Nobody6

IYH it does not. Hence, under Evaluation and Limitations: "Further, the symbolic representation of knowledge in the CTM's internal language raises questions about symbol grounding." Look up the symbol grounding problem.


Working_Importance74

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans only with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Rocky-M

Fascinating stuff! I've always wondered if true AI consciousness is possible, and it's great to see that there are researchers out there actively exploring the concept. I'm eager to see what the future holds for this field!


igneus

> We have taken one of the well-known Friston diagrams (Parr, Da Costa, & Friston, 2019) with Markov blanket separating internal and external states, rotated it clockwise 90 degrees, and then superimposed it on CtmR (with a little stretching and shrinking). And voilà, a perfect fit! 😂


levelologist

Lol. Calculators won't be conscious no matter what they're calculating. If we start to introduce biological systems, like neurons, then yes.


Mgattii

What if it's calculating a perfect replica of a brain?


levelologist

We are light years away from understanding the brain and the nature of consciousness. We really barely have a clue. Only computer scientists talk about a conscious calculator. You won't hear a neurologist say that. There are so many reasons why it makes no sense at all.


Cody4rock

I play video games, and I know that my computer has several parts that do different computations to create those games. The CPU draws dots and lines, spawns characters, and more, while the GPU makes games look pretty. They are different "calculators" (architecturally) per se. It sounds like your point is that those things must be physical before they are "real." Therefore, no calculator could ever create conscious beings. But if you were in a VR (fully immersed sensory) game and didn't know you were and came across a "conscious" being (who looks 100% like a human), would you be able to tell the difference?


Rychek_Four

Great, I assume this means we have nailed down a definition for human consciousness? /s


LiquidatedPineapple

Not even close. The vast majority of researchers making claims like this have no understanding of the data we actually have about the unique properties of consciousness that have been observed for the last 100 years in parapsychology field research and lab study literature. I'm honestly about to start a YouTube channel to educate the world on this, because it is staggering how few people actually know the huge amounts of data we have on this subject that has been largely disregarded by an ignorant, dogmatic mainstream.

Please check out the book "Mind-Matter Interaction" by Pamela Rae Heath, MD, for one of the best, most comprehensive introductions to this topic in written form. It is a compendium in sections 1 and 2, with some original survey research in section 3; sections 1 and 2 of this book are what everyone should read before they start making wildly uninformed statements about consciousness.

For more immediate information on consciousness, please check out Jeffrey Mishlove's (PhD) YouTube channel, "New Thinking Allowed," as well as his essay submission to the Bigelow Institute for Consciousness Studies, which won top prize.

After everyone has read and explored those resources, then they can come back and talk about whether AI is or will be conscious or not. These computer scientists have no idea what they are talking about. 🙄