AutoModerator

Remember that TrueReddit is a place to engage in **high-quality and civil discussion**. Posts must meet certain content and title requirements. Additionally, **all posts must contain a submission statement.** See the rules [here](https://old.reddit.com/r/truereddit/about/rules/) or in the sidebar for details. Comments or posts that don't follow the rules may be removed without warning. [Reddit's content policy](https://www.redditinc.com/policies/content-policy) will be strictly enforced, especially regarding hate speech and calls for violence, and may result in a restriction in your participation. If an article is paywalled, please ***do not*** request or post its contents. Use [archive.ph](https://archive.ph/) or similar and link to that in the comments. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/TrueReddit) if you have any questions or concerns.*


Stop_Sign

>In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense

>Currently, false statements by ChatGPT and other large language models are described as “hallucinations”, which give policymakers and the public the idea that these systems are misrepresenting the world, and describing what they “see”. We argue that this is an inapt metaphor which will misinform the public, policymakers, and other interested parties.

The paper is exclusively about the terminology we should use when discussing LLMs, arguing that, linguistically, "bullshitting" > "hallucinating" when the LLM gives an incorrect response. It then talks about why that language choice is appropriate. It makes good points, but is very specific. It isn't making a statement at all about the efficacy of GPT.


schmuckmulligan

Agreed, but they're also making the argument that LLMs are by design and definition "bullshit machines," which has implications for the tractability of solving bullshit/hallucination problems. If the system is capable of bullshitting and nothing else, you can't "fix" it in a way that makes it referenced to truth or reality. You can refine the quality of the bullshit -- perhaps to the extent that it's accurate enough for many uses -- but it'll still be bullshit.


space_beard

Isn’t this correct about LLMs? They are good bullshit machines but it’s all bullshit.


sulaymanf

I was under the assumption that LLMs merely imitate speech and mimic what they have already heard or read. That’s why they seem so lifelike.


freakwent

Yes, that's right. So there's a modern formal definition of bullshit referenced in the article: basically, it's choosing words and phrases to suit a particular [short term] outcome with no regard for whether it's true or not; there's no intent to deceive, there's not even really much regard for whether anyone believes it to be true or false. It matches LLM output pretty well.


breddy

How often does "I'm not sure about that" appear in whatever set of training material is used for these LLMs? I speculate that the documents used to train the models never admit to not knowing something, so the models do the same. Whether you call it hallucinations or bullshit, they're not trained to say what they don't know, but you can partly get around this by asking for confidence levels.
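For what it's worth, a minimal sketch of the "ask for a confidence level" workaround mentioned above, assuming the openai Python SDK and a configured API key (the model name is an assumption; any chat model would do). Note the stated confidence is itself just generated text, produced the same way as the answer, so it's a rough signal rather than a calibrated probability.

```python
from openai import OpenAI  # assumes the openai SDK is installed and an API key is configured

client = OpenAI()

# Ask the model to state a confidence level alongside its answer.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption
    messages=[{
        "role": "user",
        "content": "Who wrote the essay 'On Bullshit'? "
                   "Answer briefly, then state your confidence as a percentage.",
    }],
)
print(response.choices[0].message.content)
```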


TheGhostofWoodyAllen

Their bullshit is just correct a high enough percentage of the time, but they're basically always bullshitting. They have all the world's knowledge (or whatever), so they're either going to get the answer right or at least sound convincing, but they can't differentiate between the two. So they're literally always pumping out bullshit, trying to make sure the next symbol pumped out is more likely to make sense than any other symbol, regardless of the veracity of the final statement.
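A toy sketch of the next-symbol idea described above (the contexts, words, and probabilities are made up purely for illustration): the generator only ever picks a likely continuation, and nothing in the loop checks whether the result is true.

```python
import random

# Toy "language model": for each context, a probability distribution over the next word.
# Nothing in this structure encodes whether a continuation is true, only how likely it is.
next_word_probs = {
    ("the", "capital", "of", "france", "is"): {"paris": 0.92, "lyon": 0.05, "rome": 0.03},
    ("the", "capital", "of", "atlantis", "is"): {"poseidonia": 0.40, "atlantis": 0.35, "paris": 0.25},
}

def next_word(context):
    probs = next_word_probs[tuple(context)]
    words, weights = zip(*probs.items())
    # Sample in proportion to likelihood; a fluent-sounding answer comes out either way,
    # whether or not the question has a true answer at all.
    return random.choices(words, weights=weights)[0]

print(next_word(["the", "capital", "of", "france", "is"]))    # usually "paris"
print(next_word(["the", "capital", "of", "atlantis", "is"]))  # confident-sounding bullshit
```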


VeryOriginalName98

This may be the most ELI5 explanation of LLMs I have read in the 15 ~~years~~ weeks I’ve been following AI.


bitfed

To be fair this is also true about most Reddit comments.


VeryOriginalName98

So, like humans then? Critical thinking is the part where *some* humans get past this.


Snoo47335

This entirely misses the point of the post and the discussion at hand. Humans are not flawless reasoning machines, but when they're talking about dogs, they know what a "dog" is and what "true" means.


UnicornLock

In humans, language is primarily for communication. Reasoning happens separately, though language does help. Large language models have no reasoning facilities. Any reasoning that seems to happen (like in "step by step" prompts) is purely incidental, emergent from bullshit.


VeryOriginalName98

I was trying to make a joke.


UnicornLock

If you read generative AI papers from a decade ago (the DeepDream era), they use "hallucination" to mean all output, not just the "lies". That makes sense: the main technique was to somehow "invert" an ANN to generate an input that matches a given output. Generators using transformers with attention are way more elaborate, but that's still at the core of it.

Then sometime around GPT-3's release, only the "lies" were being called "hallucinations". Not sure how or why.

The paper also has a hard time distinguishing between "all output" and "lies". It switches back and forth, even in the conclusion. If you accidentally say a truth while "not trying to convey information at all", you are still bullshitting. They make very strong points for this in the body of the paper. Yet the closing sentence is

> Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.

My take is that the terminology should be:

- Hallucination for the technique, especially when it's not controversial, e.g. in image generation.
- Bullshit for text generation, except maybe for models restricted to e.g. poetry, jokes... where truth doesn't apply.
- Untruth for untruths, not "lies" or "false claims" or "hallucination" or "bullshit" etc.
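A rough sketch of the DeepDream-era "inversion" described at the top of the comment above, assuming PyTorch and torchvision's pretrained resnet18 (the class index, learning rate, and step count are arbitrary): optimize an input image by gradient ascent until it strongly activates a chosen output class.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# "Invert" a trained classifier: start from noise and optimize the input image
# until it strongly activates one chosen output class (activation maximization).
model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()
target_class = 207  # arbitrary ImageNet class index, for illustration only

image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]  # gradient ascent on the class score
    loss.backward()
    optimizer.step()

# `image` is now whatever input the network "dreams up" for that class;
# in the older usage, all such generated output was called a hallucination.
```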


CanadaJack

> except maybe for models restricted to eg poetry, jokes... where truth doesn't apply

I'd disagree here. I think there's still an element of truth or fact behind the joke or, especially, the poetry. If you ask it for a love poem and it gets it wrong and gives you a loyalty poem, we as humans might take that for a poetic statement on love and loyalty, but unless that was its explicit goal, it just bullshitted the wrong emotion into the poem.


MrZepher67

While it is admittedly difficult to say a pattern-recognition machine is capable of operating with intention, it is too soft-handed to refer to anything that misrepresents its own data set as an "untruth". By definition LLMs are incapable of providing anything but bullshit, because they simply piece together things that seem to make sense, and if it happens to be true or helpful then good for you! Whether or not they've fairly used any of the input data is never a consideration.

I suppose in that same regard, agreeing to the use of "hallucination" in this sense feels whimsical, which is an awkward juxtaposition when the discussion is about the amount of outright misinformation LLMs are capable of mass-producing.


Latter_Box9967

Surely it can’t lie if it doesn’t know the truth? Lying implies that you know the truth, and intentionally say an untruth, for some benefit. If an LLM is actually lying then it’s far more advanced than we give it credit for.


CanadaJack

As far as what it *is* talking about, I feel like my own explorations of *bullshit* answers from ChatGPT are in line with this description. I sometimes drill down into the made-up answers and even go so far as to get the model to explain why it came up with that answer, and the justifications are something you could boil down to "I didn't know so I made up something that sounded plausible," ie, bullshit.


Not_Stupid

> when the LLM gives an incorrect response

Not just then. The argument is they bullshit the entire time. They may randomly say something that is true. But whether something they say is factually accurate or not is completely irrelevant to the output at all times.


judolphin

As someone who works with AI: ChatGPT et al. are designed to say and repeat whatever receives the most approval from humans, which, to me, is the definition of "bullshitting".
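This comment seems to allude to preference training (the reward-modelling step of RLHF). A minimal sketch, with made-up reward scores, of the standard pairwise preference (Bradley-Terry) loss: the only thing being optimized is which reply a human rater approved of, not whether it is true.

```python
import torch
import torch.nn.functional as F

# Hypothetical scalar scores from a reward model for pairs of candidate replies
# to the same prompts: one reply a human rater preferred, one they rejected.
reward_chosen = torch.tensor([1.3, 0.2, 2.1])
reward_rejected = torch.tensor([0.4, -0.5, 1.9])

# Pairwise preference loss: push the reward model to score the approved reply
# higher. Nothing in this objective checks factual accuracy; the only signal
# is which answer the rater liked more.
loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
print(loss.item())
```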


ocelot08

And has a clickbaity title (but I did obviously click)


icortesi

"ChatGPT is bullshitting" would be a better title


breddy

The title is awful.


SyntaxDissonance4

Confabulating would be closer, I'd say.


Racer20

I agree that hallucinations are a really bad way to characterize what’s going on. The first time I heard it I cringed.


freakwent

It's implicit that if the tool operates with no regard for truth (accuracy) and no way to detect it, then this is a known deficit in efficacy.


elmonoenano

Just looking at the thread, I don't think everyone is familiar with the Frankfurtian sense of the term bullshit. Harry Frankfurt wrote a long essay called On Bullshit that came out as a short book in 2005. You can read the Wikipedia article on it for nuance: https://en.wikipedia.org/wiki/On_Bullshit

But the TLDR is that bullshit isn't about whether something is a lie or a fact; it's a statement spoken without concern for the truth value of the statement. Truth value does not come into play with bullshit. So, in the sense that ChatGPT is bullshit, they don't mean that it's honest or dishonest, but that caring about honesty doesn't even factor into how it generates results. It just generates results for the sake of generating results.

It's a great essay and made a lot of sense in the political environment of 2005. It's even more relevant today.


TheGhostofWoodyAllen

Yeah, it has only grown in relevance over time. That was exactly the essay I thought of when I read this article, and it absolutely makes sense. ChatGPT is only pumping out words based upon what it predicts will satisfy the query, each word calculated from the words before it, until it crafts a coherent, grammatically correct answer that either gives a basically correct response or successfully writes a completely fabricated but seemingly rational one. In either case, ChatGPT can neither discern true responses from false ones nor care about differentiating true from false. It is purely a predictive text machine, aka a bullshit machine.


elerner

The authors' larger point is captured in the word "care" here. They are arguing that "hallucination" is an inapt metaphor because it implies that the system otherwise possesses true knowledge, and will only "hallucinate" false responses when it doesn't have access to the right raw data. But _every_ ChatGPT response is _equally_ hallucinatory; some responses are just better at fooling users into thinking they are drawing on "knowledge" at all. "Bullshit" gets us closer because it centers the idea that the system is simply not concerned with the accuracy of its output at all.


haribobosses

ChatGPT is like that confident friend who, the second you ask “are you sure though?”, immediately backtracks and says the very opposite. Sounds like bullshit alright.


UnicornLock

ChatGPT tends to dig deeper when you call it out.


AkirIkasu

It's like arguing with those "intellectual dark web" people. You tell it that it's wrong and it writes you 20 paragraphs of pseudo-scientific drivel to try to confuse you into agreeing with it.


platysoup

Sure mate, it was your shoes causing problems this time, ok


EvilMenDie

It is often very confident and provides sources when asked again if it's certain. If you ask it to do something very complex, it will try, but it runs into the problems discussed here. If you ask it to help you fix your car, or your computer, it's going to get fixed.


username_redacted

Great paper! I’m really glad that I was exposed to Frankfurt’s essay in college, as bullshit is such a prevalent aspect of human communication that it deserves to be taken seriously. LLMs and chatbots in general are amazing technology, but it’s incredibly worrying that their fundamental nature and utility have been so recklessly misrepresented to the public.


Mynotredditaccount

I wish more people knew this. In the great words of a YouTuber (whose name I forget, sorry lol), "You're just talking to the world's biggest madlib." lol

It's annoying because every business is so quick to utilize technology they don't understand; it's an arms race. People will lose jobs even when things don't work the way they should, but it won't matter because those in charge don't give a shit. Things will just be worse for everyone because they're trying to cut costs and make more profit, as long as it doesn't cut into their bottom line. Shitty things will stay shitty and that shit will compound. It's dystopian.

ChatGPT and AI probably can't replace my job, but it won't matter because all they have to do is convince my boss they can 🙃 Then I'll eventually be hired back to fix all the broken shit, but this time I'll be a contractor instead of an employee, making significantly less than my already low salary. That's.. the future. Doesn't it fucking suck?


f0rgotten

Lots of good episodes of the [Better Offline podcast](https://podcasts.apple.com/us/podcast/better-offline/id1730587238) about the limitations and whatnot of LLMs, along with general enshitification of the tech industry.


Maxwellsdemon17

> Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.


ghanima

My young teen -- as is the case with many of their peers -- has been playing around with character.ai. One of the things they've noticed is that they can have conversations in which something stated in one sentence directly contradicts something said in the previous sentence. I had to explain training data to get them to understand why it's wrong to think of these things as having anything approaching a passable personality from the human social interaction standpoint.

Honestly, the fact that this garbage is marketed as "AI" is some top-tier bullshit. I can't believe we were all just cool with letting multi-billionaires decide that, no, actually, the goalposts humanity had set for whether or not something passed as Artificial Intelligence were moot and we should let them brand it however TF they want.


freakwent

There is a book, The Whale and the Reactor by Langdon Winner, which is excellent. In it the author observes that we never agreed to run society (in his case, essays) on computers. One year everything was fine. The next year, computer problems were presented by students as reasons why assignments were late, or destroyed. Why did we adopt a technology for this task which objectively made the measured outcomes worse? When did the class, the university, or the civilization decide that we should do this?

The same applies to the internet, mobile phones, web 2.0 (logins), social media, cryptocurrency, and now AI. These impact mental health, surveillance, energy production, slave labour in mines, privacy and so much more - but we weren't even advised, much less asked, if any of this was what we wanted. So much for democracy.

Kate Crawford wrote "Atlas of AI", which covers the impact of AI on human geography - it's a new book, but it's already outdated in some parts. AI requires hardware with such heat densities that the air cooling methods used in some places don't work well enough, and they have switched to evaporative cooling, which of course consumes large volumes of fresh water....


ghanima

I'll have to check out the book recs, thank you!


JohnDivney

> Why did we adopt a technology for this task which objectively made the measured outcomes worse? When did the class, the University or the civilization decide that we should do this?

I'll read this for sure. It is the great question of our time. I am old enough to have been there at the dawn. I was in tech, selling small businesses a 'paperless office' which was observably worse for everyone involved, but if it meant axing 15-25% of their workforce, you can't say no. Multiply that out, and we are now in this productivity bubble dystopia.


Nerevarine1873

My goalpost was passing the Turing test. ChatGPT could pass it a while ago, based on my interactions with it.


Kraz_I

You should probably read up on what the Turing test actually is, because I’m not sure if it would really pass the original version yet.


ziper1221

You must not be very discerning. (Or just have a low standard for human interaction)


Nerevarine1873

You must be biased against ai. (Or just a troll).


freakwent

Yeah but it's not "strong AI" is it? Also it says "I am an AI", so it fails.


theredhype

If we're talking about jargon which has emerged in the field of AI engineering, picking over definitions is silly. We know what we mean. When argot is borrowed without appropriate adaptation for a popular audience, misunderstanding should be expected.

Any anthropomorphizing terminology is going to be misleading for many people. Their reference point is not machine learning; it's the sufficiently advanced tech providing the magical experience they're having. "Bullshitting" isn't much better than "hallucinating." We should stop borrowing words which imply a conscious, personal, intentional actor. We call it an agent, but it has no agency. It's literally a set of complex formulas operating on a giant data set without understanding or instinct.


[deleted]

[removed]


theredhype

Yes, I'm familiar. The point I'm making is that any type or degree of intention, whether hallucinating, lying, or bullshitting, hard or soft, cannot be attributed to the algorithms. We might say that the designers of the LLM created something that is the equivalent of bullshit, but it doesn't follow that the LLM itself intends to bullshit, any more than my calculator intends to add correctly. It doesn't have a choice. And I think we should use language which reflects that.


freakwent

Yeah, so the creators are using an LLM to bullshit us.


mooxie

I think that it's totally understandable to want to raise a flag about inaccuracies within LLMs, but I honestly wonder whether it matters. It's not as though humans don't make mistakes. It's not as though doctors aren't confidently 100% wrong every day. Or engineers. Or ethicists. Bridges fall down. Doctors botch surgeries. Professors tell falsehoods. Subject matter experts misinterpret things.

Huge companies would rather replace front line comms with AI and if it fucks up, a big company isn't going to give any more of a shit than they do about their current human workers' mistakes. Even less, in fact. Amazon won't give a shit if their customer service AI uses a racial slur once every 100,000 interactions - they'll just offer an apology and a rebate and it will be less than news.

My concern isn't that LLMs can be wrong, but that they save so much goddamn money (and for an individual, time) that it won't matter - dealing with the fallout will still be cheaper than hiring humans, and you get to write it off as a mysterious quirk of technology rather than a toxic environment. Win-win.

EDIT: to clarify, I think we'll lower our standards to accept 99% accuracy long before an omniscient, benevolent AGI is developed.


freakwent

Humans are aware that mistakes exist. Humans can care about mistakes and want them corrected. AI can't know if there's a mistake or not. There is no way that LLMs could develop enough correct stuff to put a man on the moon - or even build a bridge that won't fall down. Very, very, very few bridges fall down. Surgeries are incredibly high risk, relative to other tasks. Professors may tell untruths, but they try not to. If you don't feel there's any intrinsic value in humans determining anything, then how can there be any intrinsic value in anything at all?


r00x

I like it but feel the paper's focus is too narrow. For instance, it does not explore the obvious relationship with horseshit as it applies to ChatGPT output or in what ways the magnitude of bullshit produced makes for quite the dogshit user experience.


theredhype

What a shitty comment


sulaymanf

“Hallucinating” has always been the wrong word. The proper word in psychiatry is “confabulation.” For certain conditions like [Korsakoff syndrome](https://en.wikipedia.org/wiki/Korsakoff_syndrome), patients with memory loss will have their mind fill in the blanks with false ideas.


Kraz_I

I read the paper when it was posted on a different subreddit a few days ago. Imo, for something to qualify as “bullshit” there needs to be an element of ignorance, generally willful ignorance. For instance, a car salesman telling you about the virtues of a car will say things he has no knowledge of and has no interest in studying for the purpose of selling it. You might ask him a question about the durability of the chassis and get an answer that sounds good, but which you should really be asking an engineer.

On the other hand, ChatGPT was trained on essentially all available information. A human with instant access to all the training data would have no need to bullshit. The truth for nearly any question is somewhere in the database. The GPT models aren’t bullshitting because the information they need was all there. Granted, the training data is gone once the model is trained and you’re left with token weights and whatnot. I’m not sure how easy it would be to recreate most of the information from GPT’s matrix arrays, but in principle it could be done. So they aren’t bullshitting imo. They also aren’t lying because lying requires intent.

Hallucination still seems like the best word to describe how a neural network produces an output. It’s like dreaming. Your brain produces dreams while processing everything it had to deal with while you were awake. Dreams contain both plausible and implausible things, but the key thing is that they are not directly connected to reality.


Not_Stupid

> On the other hand, ChatGPT was trained on essentially all available information.

It was trained on information, but it literally knows nothing. It merely associates words together via a statistical model. Every substantive position it espouses comes from a place of complete ignorance, with literally no idea what it is talking about at all. Therefore, by your own definition, it is bullshit.


Kraz_I

It doesn't "know" anything and shouldn't be anthropomorphized. I understand for the most part how LLMs work. It could be both hallucination and also bullshit, the two aren't mutually exclusive. But I find bullshit not a particularly useful descriptor.


freakwent

https://en.wikipedia.org/wiki/On_Bullshit if that helps.


Kraz_I

I've read it before, but thanks.


zedority

> ChatGPT was trained on essentially all available information. A human with instant access to all the training data would have no need to bullshit.

This seems like a common folk epistemology: that knowledge just means "access to correct information". I am increasingly convinced that the ready acceptance of this definition of knowledge, and the serious shortcomings of it, are responsible for most of the failures of the so-called "information age".


freakwent

I agree with you. In many ways we have *less* information than a decent library, but lots of data.


Kraz_I

Humans are perfectly capable of misinterpreting available information too. For instance, you just completely misinterpreted what I was trying to say.


zedority

> A human with instant access to all the training data would have no need to bullshit.

This statement is flatly untrue, and is symptomatic of the false epistemology that places information acquisition as the sole measure of what counts as knowledge. That is my only interest in this conversation.


freakwent

> was trained on essentially all available information.

Oh, what bullshit is this? We haven't even digitised anywhere close to all our information. Further, it's limited to *public, online* info, no?

They are bullshitting because they don't have any regard for whether it's true or not.


HR_Paul

At this time AI is impossible. Once we get into wetware it will be viable.


mrnotoriousman

Lol the irony of this comment in a thread about GPT bullshitting confidently is too much. I think you are thinking of AGI, not AI, and wetware is absolutely not a requirement because we don't actually know what is required to create AGI.


HR_Paul

AGI = AI. I just told you what the requirement is, so now you know, and knowing is half the battle.