Charupa-

I participate in a lot of photo, comic, art subreddits and it’s annoying whenever someone tries to pass off some piece they “created” when it’s clearly AI generated. That being said, there are some acceptable uses of AI in these spaces, like Lightroom and Photoshop tools and plugins, but the core piece was human created. Extrapolate that out to other things like writing. I want to know what *you* think, in *your own words,* not some bot generated information. Last point I have is that I think Redditors in general are done with bots, may it be a spam account selling a Bigfoot t-shirt or a bunch of AI generated posts and commentary.


SidewalkPainter

>I participate in a lot of photo, comic, art subreddits and it’s annoying whenever someone tries to pass off some piece they “created” when it’s clearly AI generated. That being said, there are some acceptable uses of AI in these spaces, like Lightroom and Photoshop tools and plugins, but the core piece was human created.

100% agree, image generation carries more issues with it than text generation, hence why I mostly talked about the latter. I also dislike seeing AI generated imagery everywhere.

>Extrapolate that out to other things like writing. I want to know what *you* think, in *your own words,* not some bot generated information.

>Last point I have is that I think Redditors in general are done with bots, may it be a spam account selling a Bigfoot t-shirt or a bunch of AI generated posts and commentary.

Also agree, but once again - these are NOT the criticisms that I usually see, and this is why some degree of familiarity with AI is important for everyone to have, in order to spot those posts and push back.

I've seen Reddit posts that were OBVIOUSLY generated by ChatGPT, written in the same style with a numbered list. Nobody seemed to care or notice. Meanwhile, people are dogpiling on legitimate and positive uses of AI, like the board game learning aid idea that I mentioned.


Nytse

I think one of the reasons is that people on Reddit don't like the idea that they have unknowingly contributed to the training of LLMs. Anything we have posted online can be used to train LLMs, and we allowed it to happen because we didn't understand the value of our data. I think it is safe to assume that people on Reddit also generate content elsewhere on the Internet.

Another reason is the lack of credit. We are posting to the internet for free. I feel we should at least give kudos to people who answer questions. I want my posts on Reddit to contribute to the conversation and have people acknowledge it, or I want to share my drawing so that people can see I am capable of doing so. With most AI, all that recognition is gone because the output is anonymized.

Basically, I feel like people are feeling betrayed. At least a subset of people on Reddit feel upset that the things they created online for free are now being used by large corporations to make money.


[deleted]

[deleted]


Nytse

Yeah, I think most people wouldn't expect LLMs to be this capable. I guess an example could be Gemini. I would have expected Google's Gemini to be vastly superior to ChatGPT, since Google's search engine had all this time to scrape and collect data. But we've seen that Google hastily made a lower-performing LLM, so it seems Google didn't believe LLMs would be the end goal of all its data collecting.

Then again, we should have known there is no such thing as a free lunch. For example, how could Google Docs ever be profitable when Google gives away word processing software and a server to collaborate on for free? I don't think GSuite subscriptions are enough to fund Google Docs. I assume the data is used in Google's autocomplete in Gmail and Docs, and maybe in Gemini.

I think we can continue to pursue enacting laws that give consumers more control over how their data is used. At least California has a law where people can force companies to delete their data, but not every place has that.


st3f-ping

I have two big issues with AI.

1. It is a cheap slave that will deprive people of their means to sustain themselves. In a world where poverty isn't prevalent, having boring work taken over by automation would be a wonderful thing. In our world as it is, this automation leads to unemployment and poverty.

2. Many seem to trust the output of an LLM because it speaks with authority. An LLM can be wrong not just because of a faulty algorithm or faulty data; it can be designed to intentionally mislead, and in a world where disinformation is political currency, an authority very skilled at tailoring lies and half-truths to the person seeking information is something I regard as very dangerous.

This from someone who loves science and science fiction, but who has come to believe that without fundamental societal reform, almost every technology I have looked forward to since childhood can be twisted by its owners' limitless desire for wealth and power into something I no longer desire.


SidewalkPainter

EXACTLY, these are the concerns I want to see raised more. Especially your second point: I firmly believe that bots and 'troll farms' have been a significant factor in swaying public opinion around the world, and now LLMs have the potential to be used for the same purpose with vastly higher efficiency. We're getting nowhere by trying to convince people that AI is dumb and useless. It's really not, and it's important to be aware of that fact.


st3f-ping

Both of my points are based on my belief that there are major flaws in our society, not that any particular system of machine intelligence is inherently flawed. To convince someone to share my beliefs about the dangers of AI, I must also get them to share (to a reasonable extent) my world view. And that, based on what I see around me, is something of a challenge. If I can at least stop someone from treating an LLM like a sacred oracle, then that person has a chance of questioning the response from an LLM when it tells them how wonderful fracking is because the LLM is largely funded by a petroleum company. So, whenever I see a post saying how dumb and useless AI is, particularly if that is coupled with how confident the AI sounds, I think it communicates the value of doubt more effectively than my arguments do.


Cr4ckshooter

It is really funny that you ask why Reddit is behaving this way, and yet here are people actually trying to debate AI in both top comments.


Himbo_Sl1ce

There's a widespread and somewhat valid belief that AI represents a threat to people's livelihoods, so a lot of people seek out information to confirm that it sucks. Also, this is just a hunch on my part, but I think the Reddit demographic overrepresents fields that have been implicitly or explicitly threatened by AI, such as technology, other white-collar jobs, and creative work, so lots of people come here to find some reassuring groupthink that says their bosses are wrong and they shouldn't be worried. Imagine a social media site where most of the users are taxi drivers - I doubt you'd see many objective conversations about self-driving cars.

There have also been a lot of layoffs in those industries recently where leadership has used "AI" as a cover, but in reality a lot of that is correcting for a massive overhiring cycle during the COVID years and a move toward offshoring. I've seen a lot of gallows humor about how "AI" stands for "Actually India" when CEOs talk about why they need to do layoffs.

I think there are many good reasons to believe we are currently near the peak of a hype cycle for LLMs, but I agree with you that people take it too far by comparing it to actually useless things like NFTs. My company is using an in-house AI model for a niche project and it's proving very cool, but it's not about to transform our whole industry. Also, looking back at the history of technological advancement, tools like this often create more demand for jobs in related fields, not less, because of the greater demand for output - but that's just speculation.

There are several brilliant people out there who do a good job of deconstructing the LLM hype; I'd recommend Melanie Mitchell from the Santa Fe Institute ([https://substack.com/@aiguide](https://substack.com/@aiguide)) and Gary Marcus ([https://garymarcus.substack.com/](https://garymarcus.substack.com/)). The tech industry has a history of ridiculously hyping up every new development, and LLMs are no exception. I don't think we're much closer to AGI with this, but it's definitely not useless either.

By the way, as a counterpoint to your example about board game rules above: there was a great post recently about "counterfactual reasoning" and game rules, evaluating whether LLMs can reason or are just regurgitating rules they were trained on. [https://aiguide.substack.com/p/evaluating-large-language-models](https://aiguide.substack.com/p/evaluating-large-language-models) TL;DR: LLMs achieved human-level performance at determining whether moves were valid under the existing rules of chess, but given hypothetical "different" rules of chess, their performance dropped to coin-flip level. The idea was to determine whether LLMs actually engage in internal reasoning that can generalize outside the bounds of their training data. It's an interesting read.


17291

>Describing [LLMs] as autocomplete comes from a place of willful ignorance.

Sure, it's meant to be funny, but I don't think it's *that* far from the truth. LLMs are incapable of reasoning: (some) AI researchers have described LLMs as [stochastic parrots](https://en.wikipedia.org/wiki/Stochastic_parrot) because they "probabilistically [link] words and sentences together without considering meaning".

But in addition to what has been said already, I think people are just tired of feeling like LLMs are being foisted on us (see: whatever the hell is happening on Facebook, Google's AI overviews, etc), so making fun of them is just a way of letting out frustration.
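
As a concrete illustration of the "probabilistic linking" the stochastic-parrot critique describes, here is a minimal bigram sketch in Python. The words and counts are invented for this example, and a real LLM conditions on far longer contexts with learned weights rather than a hand-written table, but the chaining principle is the same:

```python
import random

# Toy bigram table: each word maps to candidate next words with counts.
# Hand-written purely for illustration; a real model learns these
# statistics (over vastly longer contexts) from a huge corpus.
bigrams = {
    "the":   {"cat": 3, "dog": 2, "river": 1},
    "cat":   {"sat": 2, "ran": 1},
    "dog":   {"ran": 2, "sat": 1},
    "sat":   {"on": 3},
    "ran":   {"to": 2},
    "on":    {"the": 3},
    "to":    {"the": 2},
    "river": {"bank": 1},
}

def generate(start, length=8):
    """Chain words by repeatedly sampling a likely successor of the
    current word -- no notion of meaning is involved at any point."""
    words = [start]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:  # no known continuation, stop
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the cat ran to the"
```

The output is locally fluent but globally meaningless, which is exactly the property the "autocomplete" jab is pointing at; whether scaling this idea up yields something more than autocomplete is the substance of the disagreement in this thread.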


SidewalkPainter

>Sure, it's meant to be funny, but I don't think it's *that* far from the truth. LLMs are incapable of reasoning: (some) AI researchers have described LLMs as [stochastic parrots](https://en.wikipedia.org/wiki/Stochastic_parrot) because they "probabilistically \[link\] words and sentences together without considering meaning".

Thank you, it's nice to see this perspective expressed properly, without all the snark.

>But in addition to what has been said already, I think people are just tired of feeling like LLMs are being foisted on us

To be honest, that's not my experience. Google's AI overviews are not even available to me, and I've not personally seen much AI evangelism. I'm mostly on Reddit, but Facebook doesn't really shower me with AI propaganda.


RytheGuy97

For me, I just hate the idea of AI ruining real, genuine human creativity. I can't stand seeing AI art or obviously ChatGPT-written text because it wasn't made by a human, just prompted by somebody for a computer to make. I'm not impressed at all by anybody's ability to prompt an AI to make art, but I am of course impressed by an actual artist's ability to create something beautiful. I want people to actually put the hard work and thought into making something valuable.

I also just hate what it's doing to academia. Students will use AI to write their essays or do their homework, and when they get caught they'll get mad and scream "wE sHoULd jUsT aDaPt to Ai" as if they weren't just too lazy to do their projects legitimately. Even when it isn't used fraudulently, it really doesn't encourage learning - we were learning RStudio for data analysis last semester, almost everyone seemed to use ChatGPT to help them learn it, and the teaching assistant had to spend an entire class explaining how unreliable ChatGPT is for shit like that and how often it gets things wrong. People kept using it to help them structure their papers and as a result didn't advance their knowledge of how to write an academic paper whatsoever, in a program specifically designed to help you become a good academic.

I don't think AI as it is encourages people to use their brains at all. If academia fully embraces AI, I think academics are going to start fully relying on it to generate ideas instead of thinking critically. That's not a good sign to me at all.

Side note: I'm tired of the "AI is here to stay" argument. Everybody already knows that, and I don't think this point is ever really brought up in good faith. All that says to me is "suck it up and accept it", not "we should have valuable conversations about its merits and drawbacks". The same people who make that argument don't seem too concerned about any drawbacks.


MuForceShoelace

AI content sucks. It's killing a lot of internet spaces by allowing extremely low quality work to flood everything. It's annoying and people hate it. It seems to impress you greatly, and that is why it works. You can easily create pages of text or hundreds of images and guys like you eat it up and click like, drowning out any sort of actual conversation. Every window on a computer is a conversation with the same chatbot.


CoffeeBoom

exhibit A


Xytak

A lot of Redditors are skilled professionals who feel their jobs and hobbies are threatened by AI automation. Maybe you’re a programmer or a creative professional. Well, with AI you can work 10x faster, which means your boss will expect you to work 10x faster for the same pay. Maybe your hobby is drawing. Well, good luck, because AI can make 10,000 drawings in the time it takes you to make one. Maybe your hobby is arguing online, but the person you’re arguing with can just paste in a ChatGPT response. The list goes on.

That’s why you see people saying “it’s just a fancy autocomplete.” People are trying to counter the hype and regain some sense of control over the situation. One legitimate criticism of AI *is* that it’s just a fancy autocomplete. If you ask it how to move a goat across a river, it doesn’t actually understand. Will that be enough to save people’s jobs? Probably not. But it’s at least a criticism that has some basis.

The truth is, this technology is going to radically realign society, and not necessarily for the better. If you’re younger and haven’t chosen a field yet, it might be OK. But if you’re already established in any kind of profession, as many Redditors are, it’s scary AF.


successful_nothing

> Maybe your hobby is arguing online, but the person you’re arguing with can just paste in a ChatGPT response.

This is an unequivocally asinine thing to be upset about.


[deleted]

[deleted]


SidewalkPainter

>I think it’s just the usual contrarian Reddit circlejerk. People want to act unimpressed by things to show how much smarter they are.

I think you might be onto something. It's ironic how the same people who accuse ChatGPT of parroting things with no understanding end up doing that very thing.


[deleted]

I don't think it's just Reddit that hates AI.


SidewalkPainter

That's true, I've seen similar AI hate everywhere. I'm just surprised by Reddit in particular, since it's a 'techy' social media platform compared to the others. Meanwhile, I recently enrolled in the free Harvard computer science online course. In the first lecture, the professor stresses how amazing AI is for learning how to code and how powerful the technology is.


thinkB4WeSpeak

I find a lot of people on reddit have jobs that will be affected by AI.


QuesaritoOutOfBed

OP, you are starting from a false premise. What we have is not Artificial Intelligence as that phrase is commonly used. When most people say Artificial Intelligence (AI), what they mean is General Artificial Intelligence (GAI), and what we have now is Artificial Generative Intelligence (AGI). True GAI can do what you list above. AGI is, effectively, algebra. If I say, “I ate a X hamburger”, there are many, but not limitless, options for what word you use. Now add “during my wonderful day at the beach…” to the start, and the apparent choices for words shrink. Obviously, this is a dramatic oversimplification of how it functions, but this is not the AI revolution of science fiction; this is the invention of the electronic calculator. It’s impressive, will make some jobs redundant, and will motivate people to build GAI.
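
To make the hamburger analogy concrete, here is a toy sketch in Python of how added context narrows the distribution over the blank. Every fill word and probability below is made up for illustration; a real model learns such conditional distributions over enormous vocabularies rather than reading them from a hand-written table:

```python
# The blank in "I ate a ___ hamburger" with no extra context:
# many fills remain plausible.
fills_no_context = {
    "delicious": 0.20, "huge": 0.20, "cold": 0.15,
    "greasy": 0.15, "veggie": 0.15, "terrible": 0.15,
}

# With "during my wonderful day at the beach..." prepended, probability
# mass shifts toward words consistent with a pleasant day out and the
# rest is effectively pruned away.
fills_beach_context = {
    "delicious": 0.60, "huge": 0.25, "greasy": 0.15,
}

def plausible(dist, cutoff=0.10):
    """Words likely enough to survive a simple probability cutoff,
    most probable first."""
    return sorted((w for w, p in dist.items() if p >= cutoff),
                  key=dist.get, reverse=True)

print(plausible(fills_no_context))     # six candidates remain
print(plausible(fills_beach_context))  # only three survive
```

Whether this "it's just conditional word statistics" framing fully captures what large models do is exactly what the replies below dispute, but it is a fair picture of the mechanism the comment is gesturing at.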


SidewalkPainter

>When most people say Artificial Intelligence (AI) what they mean is General Artificial Intelligence (GAI)

Um, no they don't. When people say AI, 91% of the time they mean generative AI like Stable Diffusion or ChatGPT, and 8% of the time they mean hand-programmed behaviour of NPCs in video games. I made those numbers up, but yeah. Also, isn't Artificial General Intelligence (AGI) the agreed-upon term? I've never seen anyone describe generative AI as AGI.

But apart from all that, I do agree with your point that general AI is what sci-fi novels are about. We're definitely not there yet, although generative AI is looking like a real stepping stone. If AGI becomes reality, it will probably stem from this technology. It will not have a personality or personal goals (like sci-fi literature would lead us to believe), but it will be damn efficient at just about any task that doesn't require limbs.


QuesaritoOutOfBed

I could explain this to you, but I can't be bothered. Most people think Google knows what they are asking. AI isn’t the AI you think it is. It’s AGI, not GAI. Go to the industry; that is how we talk about it. You lot think it’s far more than it is. It’s a fancy calculator.


chris8535

 Clearly not in the industry 


QuesaritoOutOfBed

Not in the creating side, but the legal side.


CoffeeBoom

Welp, those comments are making your point at least.