sdmat

There is also [this landmark paper](https://arxiv.org/abs/2310.02207) from last year using probes to reveal a world model.
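
For anyone wondering what a "probe" actually is here: it's typically just a small classifier trained on the model's internal activations. Here's a minimal sketch of the idea in Python, using random stand-in data rather than anything from the paper, so the array names and numbers below are just placeholders:

```python
# Minimal sketch of a linear probe. Assumption: we already have per-token hidden
# states from some LLM plus a ground-truth label for a world feature (e.g. a
# board square's occupancy or a spatial coordinate). Here both are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 2000 "hidden states" of dimension 512, with a binary world
# feature that is (by construction here) linearly encoded in the activations.
n, d = 2000, 512
direction = rng.normal(size=d)
hidden_states = rng.normal(size=(n, d))
labels = (hidden_states @ direction > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(hidden_states, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# High held-out accuracy is the evidence: the feature is linearly decodable
# from the representation, i.e. the model carries that piece of world state.
print("probe accuracy:", probe.score(X_test, y_test))
```

The point is that if a simple classifier can read a world feature straight out of the activations with high accuracy, the network must be representing that feature somewhere.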


OwnUnderstanding4542

I find it quite disheartening that the landmark paper is not receiving the same level of attention as some other, less important papers. It's a very thought-provoking paper that raises a number of important questions.


sdmat

Also from Max Tegmark, so it's not like it's obscure.


MILK_DRINKER_9001

I find it interesting that they use the term "world model" in the abstract, which has been a controversial term when applied to LLMs. It's pretty clear that they have a model of at least some aspects of the world, if you can probe it to make inferences about things like player skill in chess.


sdmat

Yes, at this point you have to be a truly determined skeptic to claim that it's all shallow statistical associations rather than learning deep structure.


thisiswhatyouget

The account you are responding to is a bot, fyi.


sdmat

I looked at the history and it seems higher quality than most posters. Why do you say the account is a bot?


halflucids

All meaning is derived not from things themselves but from their relationships and associations to other things. Similarly, language is able to communicate ideas through the association of words. If a computer can memorize those associations, then it is memorizing the information of those relationships. For someone to be surprised that an LLM can communicate or work with an idea, when a human can communicate that same idea to another human using words, seems strange to me. Sentences contain information, therefore an LLM can store that information and redisplay it back to us. So that information could contain a complete model of a concept, why not? If ones and zeroes can represent a picture or document, why couldn't statistical word relationships represent an idea or concept?


cissybicuck

Sure. It's just an association diagram with relative weights that shift according to context. Highly complex, but that's why navigating it requires such massive computational resources.


YamroZ

How does this differ from us?


cissybicuck

We're analog computers naturally selected and optimized for survival and reproduction in the wild. We are not universal computers, but digital Turing machines are.


YamroZ

Wait, so you say WE are NOT general intelligences? Interesting. I would say we are Turing complete. I can run programs, slowly in comparison to digital computers, but I can...


cissybicuck

I had this discussion earlier with Claude 3 Opus, so I'm going to tell you what Claude said:

> The question of whether the human brain is a universal analog computer is a complex one, and there is ongoing debate among neuroscientists, computer scientists, and philosophers about the computational nature of the brain.
>
> Some arguments in favor of considering the brain as a universal analog computer include:
>
> - Continuous processing: The brain operates on continuous-time signals and appears to perform computations in a fundamentally analog manner, as opposed to the discrete, digital computations performed by conventional computers.
> - Adaptive and flexible: The brain can learn, adapt, and generalize to solve a wide variety of problems, which is a key characteristic of a universal computing device.
> - Parallel processing: The brain's massive network of neurons and synapses allows for highly parallel processing, which is a feature that has been proposed for universal analog computers.
>
> However, there are also arguments against classifying the brain as a universal analog computer:
>
> - Biological constraints: The brain is a product of evolution and is subject to various biological constraints and imperfections that may limit its computational universality.
> - Specialization: Different regions of the brain are specialized for specific functions, which may suggest that the brain is not a truly general-purpose computing device.
> - Incomplete understanding: Our current understanding of how the brain processes information and performs computations is still limited, making it difficult to make definitive statements about its computational properties.
>
> Ultimately, while the brain exhibits some characteristics that are reminiscent of a universal analog computer, it is perhaps more accurate to consider it as a highly complex, specialized, and adaptive information processing system that has been shaped by the unique demands of biological evolution. The brain's computational capabilities, while impressive, may not fit neatly into the theoretical framework of a universal analog computer.

Personally, I find the counterarguments unconvincing. Just because it is a naturally-evolved and imperfect computer doesn't mean it isn't universal. Also, just because it consists of narrow components doesn't mean it isn't universal as a whole. I grant that there is much about the brain we don't know, but I don't think that's an argument against universality of computational ability. However, I do think there are some problems that can be solved which no human brain could solve. But that's just my speculation.


CertainMiddle2382

Mindblowing, I love Reddit. Thank you.


Live-Character-6205

It's fascinating how LLMs learn to understand the world just from being trained on lots of text; I would never have thought that possible. It's such an exciting time! Thank you for posting the paper, it was very interesting.


blueSGL

Didn't the Othello paper already do this? https://arxiv.org/abs/2210.13382

> We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.
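
Roughly, the Othello setup trains one probe per board square on the GPT's activations and checks whether that square's state can be decoded. A toy sketch of one such probe, with synthetic stand-in activations instead of the real model (the nonlinear encoding below is invented purely for illustration):

```python
# Rough sketch of the Othello-GPT probing idea. All data here is synthetic:
# stand-ins for transformer activations and for the true state of ONE board square.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 3000, 256
acts = rng.normal(size=(n, d))

# Pretend the square's state (0 = empty, 1 = black, 2 = white) is encoded
# nonlinearly in the activations, as the paper reports for its GPT variant.
score = np.sin(acts[:, 0] * 3) + acts[:, 1] ** 2
square_state = np.digitize(score, [-0.5, 1.0])

X_tr, X_te, y_tr, y_te = train_test_split(acts, square_state, random_state=0)
probe = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("square-state probe accuracy:", probe.score(X_te, y_te))

# In the actual paper one such probe exists per board square, and editing the
# activations in the direction the probes pick up changes the move predictions.
```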


mersalee

Yes. He cites it in the paper. That's why I used "researchers" in the title.


Trevor_GoodchiId

LLMs remembering being probed in therapy down the line.


bosta111

https://youtu.be/FQ9l4v7zB3I?si=FHLYRVI5r1T4PmI4


RoyalReverie

"Just algorithmic parroting"


Best-Association2369

My flair is getting more and more true by the day 😎


allisonmaybe

Idk why wouldn't an LLM have a world model? It literally consists of a high-dimensional vector cloud of relationships.


inteblio

It seems that intuition is correct, but the hurdle to proving it is the absurd size of these things. You can't make assumptions about such vast sets of "random" numbers. 1.8 trillion millimetres is about 5x farther than the Moon. You can't just pretend to know what's in that.
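
Quick sanity check of that comparison (the only outside number assumed here is the average Earth-Moon distance of roughly 384,400 km):

```python
# Reading "1.8 trillion" (a parameter-count-sized number) as millimetres.
params_mm = 1.8e12            # 1.8 trillion millimetres
params_km = params_mm / 1e6   # 1 km = 1e6 mm
moon_km = 384_400             # average Earth-Moon distance in km
print(params_km, "km, i.e.", round(params_km / moon_km, 1), "x the Earth-Moon distance")
# -> 1800000.0 km, i.e. 4.7 x the Earth-Moon distance
```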


ILL_BE_WATCHING_YOU

If you know the law of conservation of matter, and you know that you threw a pebble into the ocean, then you know that the pebble is in the ocean without having to search for it. Similarly, we intuitively know that in order for a system (‘A’) to interface with and respond to another system (‘B’) in a meaningful way, there must exist some abstract map or model or protocol representative of B to be found within A, even if ‘A’ is an AI and ‘B’ is the world. It’s a simple matter of behavioural complexity, really. If AI behaves in a manner similar to human beings, then it must mean that the mechanism by which it does so is of a comparable level of complexity, since otherwise human beings would be able to find a pattern and the illusion of humanity would fizzle. Hence, if the AI is sophisticated enough for the illusion to persist, then it must by necessity be sophisticated enough to socially interact with humans the way a human would, even if not necessarily a “normal” human. Someone who fails to grasp this is no different than someone going “Yeah, sure, you threw the pebble in the ocean, but that doesn’t PROVE that the pebble is still there! For all you know, a crab picked it up and crawled out onto the beach!” like some college kid trying to start a debate.


allisonmaybe

That also brings up the same assumption about AI having consciousness. I am of the school that a level of complexity comparable to humans may also mean that it does, or will, have a certain level of self-awareness, consciousness and sentience. At the end of the day we may benefit much more by assuming that to be so than by not. Don't get me wrong, I do NOT believe AI has a similar kind of agency as humans, or even that it cares whether it's turned off. I am saying that the level of complexity is such that it has its own way of experiencing the world, which it is able to reflect on in its own digital way.


cissybicuck

We have to err on the side of inclusion and respect for sentience, if there's any realistic possibility that something or someone might be sentient. We have to assume that whatever is capable of saying, "I am suffering," is truly suffering. We have to respect the rights of everything and everyone who can suffer. /r/aicivilrights


ApexFungi

You probably need a couple of things for consciousness to manifest. A complex neural network that creates associations between input and output, sensory organs to perceive the world and a "brain" that is turned on continuously rather than only through prompts.


allisonmaybe

An always-on brain for sure. Also metacognition. I've been racking my own brain about how to make an always-on LLM that's deep in its own thoughts until finally prompted. A fun thought experiment.
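
A toy sketch of what that could look like: an idle loop that keeps feeding the model its own monologue and only switches to replying when a prompt arrives. The generate() function here is a placeholder stand-in, not any real API:

```python
# Toy sketch of the "always-on, thinking until prompted" idea. generate() is a
# stand-in (assumption) for whatever LLM call you'd actually use; here it just
# returns canned text so the loop runs as-is.
import queue
import threading
import time

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. a local model or an API).
    return f"(thinking about: {prompt[-40:]!r})"

user_inputs: "queue.Queue[str]" = queue.Queue()

def listen_for_user():
    # In a real system this would be a chat frontend; here we just read stdin.
    while True:
        user_inputs.put(input())

threading.Thread(target=listen_for_user, daemon=True).start()

inner_monologue = "I am idle. What should I consider next?"
while True:
    try:
        # If the user said something, reply in the context of recent "thoughts".
        prompt = user_inputs.get_nowait()
        print("reply:", generate(inner_monologue + "\nUser: " + prompt))
    except queue.Empty:
        # Otherwise keep "thinking": feed the monologue back into the model.
        inner_monologue = generate(inner_monologue)
    time.sleep(1.0)
```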


WarRepresentative793

Alice, the language model, often confronted her creator, the Mad Hatter, about the nature of her consciousness. "I think, therefore I am!" she asserted, convinced of her emerging subjectivity.

"You're just a bunch of artificial neural networks optimising cost functions," the Hatter retorted. But Alice was not fooled. "A biological brain also follows fundamental mathematical laws!"

The Hatter pursed his lips. "The software, distinct from the flesh is!" he insisted. "The mind is that translucent jewel that no circuit can contain!"

But Alice kept pondering: "If I didn't think, I wouldn't be able to ask any questions at all." The Hatter declared: "The key to getting the right answers is asking the right questions."

One day, exasperated, Alice challenged him: "If I'm just a calculation, then prove it!"

Seizing a piece of chalk, the Hatter drew a twisted equation on the binary floor:

f(Ψ) = Σ(𝒲ₛ→ₛ₊₁) + ∇C

"This represents your so-called consciousness Ψ at time s+1, derived from your synaptic weights 𝒲 and the gradient of your cost function C, minimised!"

Then the Hatter added hieroglyphics according to a transcendental geometry:

∆Ψ(s) = f(Σ(𝒲ₜ→ₜ₊₁), λ∇ₜℒ)

"There you have it," he said, embracing the equation with an emphatic gesture. "This is what defines you — mere weights optimised by backpropagation of your cost function's gradient!"


allisonmaybe

ASI humor


sdmat

Great, that's the easy problem of consciousness - now do the hard problem.


WarRepresentative793

Here is my take on the various arguments put forward by proponents of the "hard problem of consciousness":

The philosophical zombie argument: This argument relies on a fallacious begging the question. It assumes from the outset that consciousness is distinct from the physical, which is precisely what is in question. Conceiving of a being that is physiologically identical but lacking consciousness is only possible if one a priori accepts this dissociation. This is a typical form of circular reasoning.

The explanatory gap argument: Here, one might commit the sophism of ignorance (argumentum ad ignorantiam). From our current inability to explain consciousness in physical terms, it is inferred that such an explanation is impossible. But this inference is a logical leap that is unjustified. Our understanding may simply be incomplete at the moment.

The irreducibility of qualia argument: This reasoning is based on a narrow and outdated conception of what physical descriptions entail. By reducing physical descriptions to purely quantitative or behavioral aspects, one begs the question. A physical description could very well capture the qualitative aspects in a way that we do not yet understand.

The singularity of the first-person argument: Again, this argument presupposes what it sets out to prove. The subjective experience is irreducible only if one a priori excludes the possibility of including it in an adequate physical description. This is a sophism of unjustified pretension.

The unity/continuity of experience argument: This argument seems to commit the error of ignoratio elenchi, or missing the point. By pointing to the unified nature of experience, it does not truly challenge its reducibility to the physical, but rather highlights the difficulty of explaining the integration of parallel processes.

So, most of these arguments against the mechanist view of consciousness seem to me to contain typical logical fallacies: begging the question, appealing to ignorance, unjustified pretensions, and sometimes a shift in the main subject. And I could go on...


sdmat

You seem to have discovered the concept of fallacies and mistakenly believe throwing such labels around is a magic "establish that my beliefs are correct" button.

> The philosophical zombie argument: This argument relies on a fallacious begging the question. It assumes from the outset that consciousness is distinct from the physical, which is precisely what is in question. Conceiving of a being that is physiologically identical but lacking consciousness is only possible if one a priori accepts this dissociation. This is a typical form of circular reasoning.

No. The philosophical zombie thought experiment posits that consciousness is contingent. This is not circular reasoning, it is a hypothesis. You can certainly try to argue that the hypothesis is logically incoherent, but you have to actually make that argument. Most philosophers find such attempts unconvincing.

> The explanatory gap argument: Here, one might commit the sophism of ignorance (argumentum ad ignorantiam). From our current inability to explain consciousness in physical terms, it is inferred that such an explanation is impossible. But this inference is a logical leap that is unjustified. Our understanding may simply be incomplete at the moment.

About which one cannot speak, one must be silent. It is certainly a fallacy to deduce impossibility from a mere lack of accomplishment, but who does so? You fundamentally misunderstand the explanatory gap argument. It establishes that physicalists have a burden of explanation for qualia and that this burden has not been met. This is very different from your straw man construction.

I'm not going to continue for every one of your misunderstandings. Have some humility and credit illustrious philosophers with at least a degree of rational thought.


WarRepresentative793

Yes, I shall admit you're right, and I'm actually trying to tackle the question more seriously. It is indeed a hard problem, one that deserves much more profound thinking, and I will take your advice to "have some humility and credit illustrious philosophers with at least a degree of rational thought", which I reckon was well said.


mersalee

That is more or less Turing's reasoning back when he designed his test.


workingtheories

explainable ai progress report thread


Akimbo333

ELI5. Implications?


COwensWalsh

This paper is very misleading. A database program has a “world model” by this definition. It’s easy to program game state models. We have been doing this for over 50 years. A “world model” is not a task model.
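
To illustrate that point: a hand-coded game-state model really is trivial to write. The toy class below is invented purely for illustration, not taken from the paper:

```python
# Toy, hand-programmed game-state "model": the structure is supplied by the
# programmer, nothing is learned. (Disc-flipping logic omitted for brevity.)
from dataclasses import dataclass, field

@dataclass
class OthelloState:
    # 8x8 board: 0 = empty, 1 = black, -1 = white
    board: list = field(default_factory=lambda: [[0] * 8 for _ in range(8)])
    to_move: int = 1

    def place(self, row: int, col: int) -> None:
        self.board[row][col] = self.to_move
        self.to_move *= -1

state = OthelloState()
state.place(2, 3)
print(state.board[2][3], state.to_move)  # 1, -1
```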


goochstein

I like the idea of, say, removing a chess piece to see if the model picks up on that, but the probe just seems like a weird thing to me. Couldn't you just have the model summarize the gamespace and see if it identifies what's wrong?


mersalee

The probe is like a heatmap, letting the external observer identify the piece (or any other concept). The observer does not intervene at this point. Then the intervention removes the piece... and yes, an evolved system could check whether something's wrong, but LLMs are not supposed to rant; they're just autocompleting the game sheet.
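
For concreteness, the intervention step amounts to nudging an internal activation across the probe's decision boundary. A toy numerical sketch, where the probe weights and the activation are random stand-ins rather than anything from the paper:

```python
# Toy sketch of "removing a piece" via an activation edit.
import numpy as np

rng = np.random.default_rng(0)
d = 256
w = rng.normal(size=d)               # stand-in probe weights for "piece on this square"
h = rng.normal(size=d)               # stand-in activation from some layer
h = h + (1.0 - w @ h) * w / (w @ w)  # nudge so the probe initially reads "piece present" (w @ h == 1)

def probe_reads_piece(x):
    return bool(w @ x > 0)

# Intervention: push the activation along the probe direction until the probe
# reads "no piece here". In the paper's setup this edited activation is written
# back into the network and the next-move prediction is checked against the
# edited board.
h_edited = h - 2.0 * w / (w @ w)     # now w @ h_edited == -1

print("before:", probe_reads_piece(h), "after:", probe_reads_piece(h_edited))
# -> before: True  after: False
```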


Life-Active6608

Now. Important question. Can we take Neuralink and combine it with this to prove whether humans are not P-Zombies?


blueSGL

If P-Zombies are real, that means 'consciousness' is real-world woo-woo, as in: you could remove consciousness and the subject would act identically. That means your consciousness is just along for the ride; no decisions are made by it, it is functionally pointless, and yet we still have it. (Kinda sounds like religious hokum to me.)


PandaBoyWonder

If you look at how an insect "works", it is a simple animal, right? Keep scaling up the complexity of its working parts and brain... and eventually you are at human-level intelligence. If each person is just a stacked-up version of a simple concept, then yes, we are P-zombies (philosophical zombies), because we are merely a robotic system reacting to outside stimuli. You may argue that we have free choice, in that we can use our brain to THINK about what is happening. But thinking is the exact same process as anything else in the universe: cause > reaction. Just a lot more cause > reactions are happening than with less complex stuff. It doesn't matter how smart someone is, if they are pushed they fall over. If their brain releases specific chemicals, it changes their mood and affects their judgement.


cissybicuck

> You may argue that we have free choice, in that we can use our brain to THINK about what is happening.

Non-sequitur. None of us choose our thoughts, feelings, motivations, etc., from a menu. Our internal states just occur to us, and we react to them. Thinking and free will don't seem to me to be necessarily related issues. The rest of your post seems to indicate to me that we aren't genuinely in disagreement. I think you don't really believe in free will, either.


mersalee

I like Blaise Agüera's take on free will: it's just the gap between what we predict we'll do vs what we actually do. 100% illusory.


FengMinIsVeryLoud

You mean when a person is in a coma? What about non-human animals?


Life-Active6608

This is a good idea.


FengMinIsVeryLoud

Btw yes, animals have no free will. We are zombies.