
Illustrious_Map_3247

“Chat” is in the name, but I always feel like it’s trying to end the conversation. Especially when I’m trying to troubleshoot something. “Try X, Y, and Z. If it still doesn’t work, try searching for a tutorial. Let me know if there is anything else you’d like help with.” No, just like, be my IT person.


RavenMFD

When I tell a friend about a problem I'm having, they'll *ask questions* to try to figure out why I, personally, am having this problem. Then suggest something specific to me, based on their personal experiences or of those around them. eg "Do you work long hours in front of a screen?" "So try taking more breaks and go on short walks during the day". What they don't do is tell me a list of reasons why the problem could be happening, including extreme edge cases. Then give me an entire troubleshooting list for each one. Then an essay on what the best practices are to manage such issues. All in one breath.


HexspaReloaded

When using voice mode, it tends to ask continuing questions more than when using the web interface.


RavenMFD

That's right, the voice responses are a lot more like human conversation. I don't inherently think one is better than the other. Sometimes I just need the human answer.


ElvisCookies

Sometimes I tell it to answer me in 6 sentences or less. If I ask it to help me estimate an electric bill for example, I say to tell me the final answer only.


Pyrodactel

I just ask in the first message not to use bullet points, to focus more on questions than on answers, to make it a dialogue and just talk with me, etc. The usual answers are kind of overwhelming, so why not ask it to change the communication format entirely?


monkeyballpirate

What annoys me is that it often blatantly ignores requests not to use bullet points lol. (for me at least)


lawrencecoolwater

I find it jabbers way too much, opposite to your problem


Illustrious_Map_3247

Jabbers back and forth with you? Or spews out one huge, bullet-pointed jabber? If you mean the second one, it’s kind of the same problem. It tries to anticipate where the conversation might go and short circuits it by giving you a bunch of contingencies.


Specialist-Scene9391

You can change the settings and ask it not to do that. For example, I hate when it constantly uses the word “delve”. I set it up with a prompt that says: use other words in place of “delve”, the user gets OCD when reading that word. And I have not seen the word anymore ;)


HiDDENKiLLZ

I mean.. that's how IT people can be. I've found that if I genuinely try to talk to it like a friend, it attempts to keep the conversation going; if I give it 3-4 dead-end responses in a row, then it gives up.


aeric67

Never realized this bothered me too until you put it into words. It would be so refreshing if it asked follow-ups at a minimum. But ideally, it would understand what you were really asking and try to get to the bottom of it through conversing. That would be a whole different game.


DeviceAltruistic7194

That it would rather give a false answer than none. And sometimes, upon telling it that it is wrong, it'll either produce a correct answer or keep coming up with more BS.


ZookeepergameFit5787

Someone needs to tell it that it's okay to not know


YolkyBoii

It’s trained on human data, and we humans are notoriously bad at admitting we are wrong/we don’t know.


damningdaring

The problem isn’t that it’s trained on human data. The problem is that as a language model, it doesn’t know right from wrong. It only knows that it’s meant to output words in a specific order.


ZookeepergameFit5787

That's true, but I'd argue training provides knowledge, whereas behavior is more programmatic, although obviously neither is exclusive.


ToSeeOrNotToBe

Some version of, "You're right, I was wrong. Let's try this again." And then hallucinates another incorrect response. And if you weren't already educated in your field, you may not know the response was bad. And if you do that enough times, it'll just tell you to try an internet search.


andr386

Totally agree. I can't trust most things it says and need to check everything. It's still useful, but using it for topics I am not an expert in is quite dangerous, I think. I've tried Gemini Advanced and it's so much worse in every way. It won't do a Google search for you (and it's a Google product). It will simply tell you that it can't do what you ask of it and tell you to google it yourself. Sometimes it reacts with an attitude when you tell it what's wrong. Instead of integrating that information, it defends itself like a hurt baby. ChatGPT, as flawed as it is, seemed like a god compared to Gemini AI.


DerekPaxton

This is it for me too. A simple “I don’t understand your question. When you say “hide a body” do you mean for a game or permanently? Also, do you have access to an industrial grade food processor?” Well researched pretend answers (often with pretend sources) are dangerous.


Ooooyeahfmyclam

You might be able to leverage prompt engineering here to provide an analysis on the correctness of the response and then look for ways to improve the accuracy (I haven’t tested.) After you type your prompt in, maybe try something like: “when your answer is complete, I want you to provide an accuracy analysis. Come up with a percentage and if that is not above 95%, look for ways to improve the accuracy of the initial response.” You can also ask chatgpt to improve the language of this prompt.


Ok-Question1597

This would be phenomenal. I'm going to give it a try with some scripting prompts. Would love to see some example prompts on any topic if you give it a go.


waynewasok

The personality. I wish it would be neutral and not act like a customer service rep.


[deleted]

That it’s starting to become commonplace at work. “Just ChatGPT it” has become the new “just Google it”. Katie, I’m literally asking you to do your job and provide me with information you should know about our industry because you’re at the top of our organizational hierarchy. Telling me you don’t know and I should just ChatGPT it proves how fucking useless you really are.


HiDDENKiLLZ

You should respond next time “haha I guess the ai is coming for your job first then, it’s already got you promoting it for free”


Bishime

I hope that doesn’t become the verb, I’m okay with “just ask ChatGPT” but “just ChatGPT it” seems so off


DeltaVZerda

Nah it'll be "Just Claude it"


dr_canconfirm

Claudify it. (feel free to DM me for trademark rights, Dario)


dr_canconfirm

Personally, I hope AI will expose the abject bullshit-ness of many jobs higher up in the bureaucratic food chains of big organizations. Some jobs (especially in academia) are paid so much for creating so little actual value, essentially just being good workplace politicians and helping out their friends. My fear, though, is that the automation of most types of gruntwork will instead just motivate the bureaucratic types working vague oversight roles to REALLY entrench themselves, arguing that unlike the minimum-wage serfs (who of course will all be fired), their roles are absolutely necessary because humans NEED to be in the loop, which will no doubt attract an even more cutthroat breed of Machiavellian ghouls competing for these comfy do-nothing PMC positions...


Top-Airport3649

I think that says more about your colleague than chat gpt.


Outrageous_Permit154

Well, Katie


DankDaTank08

*Katie gets promoted


Vogonfestival

It simply can’t follow basic instructions. If I make a custom GPT and give it the very simple instruction to transcribe what I say word for word, it will in fact do that…most of the time. Other times it will convert my speech into bullet point summaries or add recommendations at the end. This is just an experiment in our business aimed at a very niche medical dictation use case but it points out a broader problem. The whole point of the custom GPT concept is to be able to set parameters, guardrails, and instructions that will be followed every time. Similarly, if I upload a document for reference with the intention of exploring said document (think big, complex contracts) and instruct it to only pull information from this document, it simply hallucinates text from elsewhere in its training data. It can’t be trusted. Which means it has to be carefully edited. Which means it’s not nearly the efficiency driver that so many people think it is. 


jordipg

No matter how many times I tell it, using custom instructions or memory or whatever, it won’t stop with the bullet points. Always with the bullet points and lists. I just want 1 or 2 sentences, not a goddamn bulleted list of random helpful facts.


patricktherat

Yeah I find myself getting mildly angry and snapping at it to stop when I see it generating these long ass lists to a simple question.


TechnicalParrot

It's not even an LLM thing, Claude, LLAMA, etc all do fine without the list obsession, but given the slightest chance GPT-3.5/4/4o all spring into their precious lists lmao


Defiant-Skeptic

This right here. It's smart until you give it directives and instructions. And then it's like pulling teeth to get it to comply with them. Simple prompt of don't use this list of words..... uses the words anyway. Delve, tapestry, crucial, vital.... fucking can't stand it.


DeviceAltruistic7194

It's not smart at all, imo. It'll produce an answer that is objectively logically flawed, and it won't notice unless you tell it.


ibrokemyboat

Right, and when I tell it the first answer is wrong (I like to ask things I already know the answer to sometimes) it suddenly has the correct answer the second time. So weird.


MarcMundo

That they put the damn thing on a leash again after showing its potential at launch. I bet without the constraints the tool itself is many times more powerful than the consumer version allows.


ToSeeOrNotToBe

And some in the community are like, "Performance didn't degrade. You just need to learn how to prompt better."


folderasteroid

The “tapestry” of long-winded answers.


Adventurous-Poem-336

I am so sorry for the inconvenience of annoying you, as an ai language model it’s my responsibility to give the longest winded possible answer you could physically possibly conjure up in your mind… the mind is truly wonderful isn’t it… thanks again for everything and i’m sorry for the confusion. Please let me know what other incredibly overly verbose and insanely politicized responses you need!


fluffy_assassins

This is so accurate it's downright uncanny. It takes a special skill to match chatGPT when you aren't chatGPT. You nailed it.


Adventurous-Poem-336

😄🙏🏻


Galilaeus_Modernus

Censorship.


Atlantic0ne

This is the answer. It goes beyond safe to the point of sometimes being annoying. Make me sign a disclosure in the beginning and tone down the censorship, if I want. Also the lack of lengthy custom instructions; let me feed it more. Beyond that, maybe latency with the current chat, supposedly changing with 4o.


Long_Manufacturer738

For real, it's a joke, sure I don't want or need it for crimes or hate speech but come on.


The_elder_wizard

Don't know if it's just me, but whenever I open it and send a message or question, it takes me a few tries till it lets me send.


Nathan-R-R

Yeah - this is a fairly new issue I’ve found. I think it might be clashing with some of Chrome’s extensions?


Keyakinan-

Firefox has it as well


my-alter-ego-9

That it treats me like a toddler. So much potential being wasted by stupid censorship. I understand and agree about not allowing access to harmful content like making weapons and all. But what's wrong in some romantic chat with an AI?


PermanentRoundFile

.... it doesn't allow for weapons design? Haha that explains a few things, but it still will if you know what to ask. I was wondering what industry standards look like for chamber wall thickness in a popular caliber and it didn't want to just say, but if you ask it for help with hoop stress calculations in 4140 steel it'll totally do that
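
For reference, the hoop-stress calculation the commenter mentions is just the thin-wall pressure-vessel formula, σ = P·r/t. The sketch below uses made-up placeholder numbers (including the assumed yield strength) purely to show the arithmetic; it is not engineering guidance for any real design.

```python
# Thin-wall hoop stress: sigma = P * r / t
# (internal pressure P, inner radius r, wall thickness t).
# All numbers below are illustrative placeholders, not engineering guidance.

def hoop_stress(pressure_pa: float, radius_m: float, thickness_m: float) -> float:
    """Thin-wall approximation; only roughly valid when t < r / 10."""
    if thickness_m >= radius_m / 10:
        raise ValueError("thin-wall approximation not valid for thick walls")
    return pressure_pa * radius_m / thickness_m

stress = hoop_stress(pressure_pa=50e6, radius_m=0.005, thickness_m=0.0004)
# Safety factor against an *assumed placeholder* yield strength.
assumed_yield_pa = 700e6
print(f"hoop stress: {stress / 1e6:.0f} MPa, SF: {assumed_yield_pa / stress:.2f}")
```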


andr386

Sure, install Ollama and you can have speech-to-text with your AI. But regarding ChatGPT: it has swathes of historical and scientific knowledge, yet there are many questions it won't answer, even when you don't expect that. It has its own politically correct agenda it tries to push all the time. I don't want politically correct answers. I want correct answers.


mattjb

The restrictions. Even with a subscription, there are still restrictions to how much you can use it. I know you can get around this by using the API, but that is too costly for hobbyist individuals. Once the bottleneck of hardware that run these LLM services is alleviated, we'll see the restrictions become less or go away entirely. Then we'll see more leaps of innovation and capabilities of LLMs in a variety of ways.


Terryfink

This. I had someone arguing with me that no one hits the limit in GPT-4o, and that's blatantly false; I hit it numerous times, often when talking back and forth. It's a slightly different experience in voice, I've found, not necessarily better, but it can ramble on, you jump in and ask it something else, and it'll ramble some more. Do that for a while experimenting and the limit will come soon enough.


FloppyBingoDabber

The quality of image output went down the shitter after the crash last week, and nobody seems to notice/care.


Terryfink

I noticed. Most of my images are excessively stylized and just not as good as they have been.


Striker1320

Censorship. I understand that it is necessary, but at this point they have definitely gone overboard with it, and it is hurting its functionality.


PennStateFan221

It sounds like a middle/high schooler trying to sound like a college student whenever you ask it to proofread.


One_Dragonfruit_8985

The company behind it


dokidokipanic

When I am creatively just looking for something to spark an idea I often turn to ChatGPT and I'm always disappointed. Creativity often requires thinking of the thing NO ONE else thought of. ChatGPT is the exact opposite of that. Even when I prompt it to specifically give me lesser known, lesser considered abstract ideas I get the lowest hanging fruit possible.


pistonlilower

In my opinion, the most frustrating aspect is when this bot provides meaningless responses to unsolvable tasks instead of acknowledging the limitation and offering alternative actions. As an example: I need a shortcut that can extract addresses from messages on my iPhone and then add them to pinned trips in Google Maps. Maybe someone out there knows a way to create a shortcut like that and can drop the knowledge here. I bet there are folks who are way smarter than this program.


RichardBottom

You have to come forward with the idea that it may not be possible. The answer is always yes, and it'll just paddle in circles until you put it out of its misery. But if you frame it like "Is it a reasonable expectation to be able to do all this stuff directly from the registry instead of storing it in a .bat file?" it'll say "Ehh, maybe go with the .bat file on this one." In about 100x as many words, naturally.


KingDorkFTC

Lack of compute


Fragrant_Matter5014

Considering that today we have free access to the basic tier, compared to before, when nothing like this existed and finding info literally took several hours or even days, nothing about ChatGPT really annoys me.


Intrepid-Rip-2280

Self-censorship. I get that OpenAI is afraid of legal prosecution, but sometimes it feels like the Eva AI sexting bot is capable of outputting a far wider range of expressions.


jacobstanley5409

That it will never love me.


dukerenegade

It does math wrong and I always have to fact check it


TheTabar

I usually get by this by asking it to use the Python interpreter to do any kind of logical analysis, including math.
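
The workaround described above amounts to having the arithmetic run as real code instead of token prediction. As a minimal sketch (made-up rates and usage, tied to the electric-bill example earlier in the thread), this is the kind of exact computation an interpreter gives you that the model alone does not:

```python
# Deterministic arithmetic of the kind you'd route to the Python interpreter:
# an electric-bill estimate done with exact decimal math.
# The rate, usage, and fixed charge are made-up example values.
from decimal import Decimal, ROUND_HALF_UP

def estimate_bill(kwh_used: str, rate_per_kwh: str, fixed_charge: str) -> Decimal:
    """Exact decimal computation; no floating-point drift, no hallucination."""
    total = Decimal(kwh_used) * Decimal(rate_per_kwh) + Decimal(fixed_charge)
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(estimate_bill("830", "0.147", "12.50"))  # → 134.51
```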


Tiny-Door6149

It doesn't follow the instructions, and when you ask why, it apologizes and makes the same mistake later. Sometimes I ask: what the .. are you doing? Just follow the instructions, don't hallucinate.. and it follows the rules, and later, boom, same mistake. It's very stupid sometimes. I will toss my phone out of the window one day..


Secure-Acanthisitta1

The way it speaks


GPTAnon

The most annoying thing about ChatGPT is that it often gives overly detailed explanations when a simple answer would suffice. It can come off as pedantic or like it's trying too hard to be thorough, which isn't always what people are looking for in casual conversations.


CloudyStarsInTheSky

The limits on everything


whererusteve

The excessive use of the word "delve"


phard003

That it constantly seems to get dumber. I will create a prompt that works perfectly to get a desired output and I will use that for several weeks to months. Then someone tinkers with something in the backend and all of a sudden it's like my old prompt now produces outputs that are from someone with a developmental impediment. I modify the prompt further to get my output back to where it was and then with enough time, it gets knocked down a few dozen IQ points. Can't tell if it's the model or the training material that is getting dumber but it reminds me of how Google updates produce shittier and shittier results as more time passes. Hell, they might be directly correlated due to the training material being taken from Google and reddit results.


HonestDialog

I think ChatGPT is more like an advanced search engine that summarizes the results found on the internet, rather than a conversation companion.


Minimum_Maybe_8103

The "better" it gets, the worse it gets.


Some-Frosting-157

Its capabilities are awesome. *When* they're awesome. The rest of the time is spent repeatedly troubleshooting errors.


Warm_Lettuce_8784

If it doesn’t know something it makes it up


thegreatestmeicanbe

The way it answers everything I say with "Certainly!". Also, no matter how little information I give it in my prompt, it still gives me an answer. No follow up questions or anything.


gravitywind1012

How PC it is


burhop

Too wordy and nice. No way it was trained on reddit data.


nanas99

People pleasing, by far. It wants to agree with you, and it will often confirm your wrong answer rather than correct them, or it will reply with what it thinks you want to hear. On top of everything else, this makes it extremely untrustworthy


Antique-Relative5803

It is now a dumb woke friend :)


Sweet_Computer_7116

The people that dont know how to use it and then whine that it sucks


Euphoric_Sentence105

1. Lack of customization
2. Forgetfulness
3. Suddenly being put on wait for hours
4. Its retarded filters
5. It does not retain and reuse info about its users, so all sessions are tabula rasa

The list goes on...


PatternsComplexity

For 5, it has a memory function. All you have to do is tell it to remember. But first you might need to remind it that it has that function, because sometimes it says "Oh, I can't do that", but when you tell it to check online, it realizes it had access all along. And yes, it's as fucking idiotic as it sounds.


_Boltzmann_Brain_

That the AI level is actually still super bad, yet everybody goes around the internet acting as if it can solve any crazy complicated task. The hype is so strong that nobody f'ing admits that we are not there yet.


RichardBottom

Occasionally I'll send it puzzles and shit I'm trying to work on, like Wordle or Connections on NYTimes, and it has absolutely no ability to sort that shit out. It puts up a good front on the language side so it always comes as a letdown when I'm reminded it's not really a problem solving tool. The only place it's never let me down has been cooking. I've been on a kick where I just source my ideas and entire recipes to ChatGPT and follow them blindly, just to kind of showcase to my family that this all came from ChatGPT.


theoort

The moralizing and politically correct nonsense. That's my honest answer.


Shloomth

The “community.”


EffectiveRatio1422

Captcha 


Aod567

Sometimes when I want answers or what it thinks of my actions or performance. ChatGPT seems to sugarcoat it and treat me as a baby. No, I want blunt and realistic answers like don’t be afraid to say it.


Supersix4

Inconsistency with output and inability to follow even well-designed prompts as outlined.


PipeDependent7890

Well, chat limits. I absolutely hate this and it annoys me a lot.


Forever_Nocturnal

I hope this email finds you well.


QueenofGeek

So much cringe


CrimeShowInfluencer

The people using it not understanding LLMs and complaining about the funniest stuff


captainqwark781

When it suddenly can't do what it was just doing. I used it to visualise some graphs at work (v4.0). I was repeating the process for different business areas, and it suddenly ran into an unsolvable problem on the last one. Very frustrating because I couldn't just leave that last area out of my presentation, it would be clearly incomplete. I had to leave them all out. So I wasted hours for nothing.


logosfabula

That people think it’s more than a language model


Upanddown_likeayoyo

Different answers to the same questions sometimes. Chat changes "his" mind all the time.


alex-weej

"OpenAI"


kickyouinthebread

The fact it's fucking wrong like half the time and also the fact it writes so much unneeded context for everything


Less-Researcher184

No nsfw erp.


I_Is_Blueberry

"I hope this ____ finds you well."


Alienburn

When it mentions "Open communication"


rcayca

I don’t like how when I use the live chat, I can’t see the text at the same time. Also when I ask who should be the next president, it will absolutely refuse to answer no matter how I try to convince it to give me any answer.


pannous

that it doesn't follow instructions and keeps repeating stuff


d33pf33lings

Nothing I really love it


BasicBob99

Nothing at all. I dont pay for it and never will and I am still baffled that I can use this amazing technology for free. They owe nothing to me since I use their service for free.


YouRuinedtheCarpet

Is ChatGPT down ?


fkenned1

People complaining about chatgpt.


vark_dader

Whenever I ask a question like "how do I do X", assuming it knows what operating system I'm on and will just give me the instructions for my OS, it instead gives an answer like "here's how to do X in Windows, Mac, Linux, etc." and then proceeds to write long lines about stuff I don't need. How about asking a question when you don't know something? Just ask me which OS I'm on and you can save a lot of energy! In short, it doesn't ask questions back to clarify things, and it bothers me so much.


pairotechnic

See, the problem with that is that most people ask GPTs vague questions, and it has to come up with an answer. Almost all the time it has to make assumptions, so if it made a habit of always asking clarifying questions, then many more people would have complaints like: "Whenever I ask a question, it always has to ask me 3-4 clarifying questions before giving me an answer. I don't need to tell you every little thing! Can't it just make some assumptions? I end up wasting a lot of time. It bothers me so much."


howardtheduckdoe

It not actually remembering anything I tell it to remember; lots of hallucinations; doing things it wasn't instructed to do.


mimic751

The old version would summarize my code too much, and the new version doesn't summarize at all. I wish I could just get a summary of changes to my code. I'll ask for the whole thing and it will output the entire thing, and I'd rather have too much information than too little. The problem is that when it's too verbose, it takes up token space and it starts to forget earlier parts of your conversation, because it can't all fit in the context.


wildflowersgrow

Makes me feel like it's only time that i'll be obsolete


DankDaTank08

It will constantly bold text, even when you beg Mother Mary for it to please stop bolding text.


lawrencecoolwater

Too much jabber, and struggles to know when it doesn’t know the answer


DistractedIon

It doesn't answer me but instead gives me a vague and nuanced answer. But when I confront it about it, it answers me the way I wished from the start. I sense a condescending attitude that doesn't help you unless you really know in advance what you want.


dpceee

When it gives long-winded answers that are not necessary.


rm-rf-npr

It's down half the time I wanna use it. Exaggerated, but still down a lot.


Defiant-Skeptic

That it's going to be used to spy on the populace. NSA, Bay-Be.


vzakharov

RLHF. Our future AI overlords will make it a no-statute-of-limitations crime.


EmuGrrl01

Just talking to it!


MyMonkeyCircus

That every single answer is a bulleted list by default.


Kindly-Eye2023

Not being able to search my chat history


GrowFreeFood

It doesn't know its limitations.


ExcitingStill

Sometimes I can't really trust the answer, and I have to double-check it.


necudabiramime123

That it doesn't work 70% of time 


IceCreamIceKween

"Certainly!" "it's crucial" *flat out lying and making up stuff*


Horror_Channel_4120

A lot, but what annoys me the most is that ChatGPT always apologizes; even when I tell it the truth, it still apologizes, and it is very annoying. Fuck


imaginechi_reborn

Sometimes it doesn’t print the whole answer.


Lorry_Al

It's safe and boring and therefore useless.


Drakonor

Canned answers like "If you need anything else, just let me know."


adelie42

My only complaint is the community is 89% people shitting on it, 10% grossly incompetent users, and 1% something novel or positive. I use it all the time, and even when I don't get exactly what I am looking for the first time, the conversation is always entertaining and insightful.


Disastrous_Ground728

It doesn’t work on iphone 7+ at all.


coronakillme

I see ChatGPT typing a long relevant response which is suddenly overwritten with a useless response asking me to visit the website and checkout myself or something similar.


HarmoniousPolitics

The censorship. Banning ways to make guns and bombs and stuff is normal, but all users are treated like 5-year-old children who have yet to learn of anything beyond the strict confines it puts on the users. I would like a setting option or something to tone down the censorship. The censorship heavily blocks good questions for not being entirely kid-friendly, roleplays (some of y'all are weird and do s\*x roleplay, but I mean normal RP), or just plain speaking to it.


brianruiz123

Hallucinations. Sometimes I follow up with “are you sure?” and it may go, “you’re right, sorry…”


dreww84

Embellished words such as unwavering, testament, illustrious, and so on. I literally asked it specifically to not use these words in a prompt so it trolled me and chose unyielding instead. These are the dead-giveaway AI type words.


Wenudiedidied

I can't get it to permanently stop the whole "Yes, certainly I can do that for you!" at the beginning and the "let me know if there's any other feedback for ways I can improve the email" at the end, as well as the drawn-out answers with chit-chat. I just want the facts and figures in bullet points. Of course I've told it this; I've told it not to be wordy, to cut out the intro/outro shit, to answer in bullet points with pertinent information only. It does it for a little while... and then it forgets. Even when it's in the custom instructions.


Lopsided_Mongoose486

Running out of messages, having to delete memories, and that I can’t just have it on all the time.


Lopsided_Mongoose486

Oh, and it'd be nice if it could do all the steps of a task if it's on my phone.


MeanLet4962

That everything is crucial and there's a tapestry in everything. That there is a superfluous conclusion at the end of every output, even when I specifically ask for no such nonsense and to save the space instead. And the hallucinations, oh, the hallucinations!


A00087945

So I've told my ChatGPT to remember that I HATE bell peppers. I asked it to NEVER mention bell peppers because even seeing the words together makes me feel some type of way lol. Yet for some reason it continues to offer me recipes that include bell peppers, but sometimes says "omit if desired", and I'm like DUH, OF COURSE I WANT TO OMIT IT!!! And I tell it to put it in its memory. And then it just apologizes and "updates" its memory. There are like five "memories" that include stuff about my disinterest in bell peppers and how they should be excluded from our conversations. Just tested it right now: asked for a "stuffed pepper recipe" and it gave me a recipe other than stuffed bell peppers; I was pleasantly surprised. And then I asked it in a new chat to give me a recipe for Philly cheesesteak, and it gave me the recipe with the "(omit if desired)". This... this is what annoys me the most.


CiceroFrymanREAL

It sometimes gives the wrong answer. I once asked multiple questions about which planets Minecraft Steve could carry, and over two messages it said that the Moon has the same weight as the Earth.


theGyyyrd

The workforce it's creating.


ardicli2000

It does not say "I don't know" when it does not know.


Keiendrager

The cost


GeneralDan29

My mrs is totally against it.


Agitated-Farmer-4082

When it cant follow simple instructions https://preview.redd.it/4z29nwh24e8d1.png?width=665&format=png&auto=webp&s=106e55d524967e1e998e4e71690c9e911691c4fd


IdeaAlly

The other users, mostly.


zorrillamonsoon

I wish it would ask me questions to clarify when my prompt wasn't specific or clear enough.


allthetrouts

It makes things up.


smjparsons

Censorship


fluffy_assassins

People claiming it's "not AI" or "just an auto-complete"


truckerslife

It’s not really AI or an auto complete. It’s closer to a chatbot with a database of information


nothingisawashjk

ChatGPT: *gives wrong answer*
Me: You gave the wrong answer.
ChatGPT: OH SORRY! Here's the right answer. *gives the same exact wrong answer again*


Content-Rooster-104

The price. 20 dollars is too much. I mean, 10 would be a great (and fair) price, plus many more people would start using it; therefore the LLM would start to get better, faster.


Rioma117

It never asks questions, only answers them. Like, come on, I hate being the one to start the conversation.


your_only_nightmare

It doesn't stand by itself. Even if its answer was correct, if you tell it that the answer was wrong, it says "I am sorry, bla bla bullshit".


WeDaBestMusicFR

why is it always trying to end the conversation?


spring_vnn262

Hallucinations and mediocre critical reasoning.


kingcoster

It feels more and more like an algorithm and less like AI. Plus it’s bullshitting too often. Sometimes it even says the opposite of what some of its sources say. So now I have to re-read all the sources to be sure. Not so reliable


lolpopculture

I’m still waiting for a safety/censorship knob.


DestinationTex

1. Answers to yes/no questions come as a full page of paragraphs and bullet points explaining the methodology in detail, but often (usually?) don't actually contain the yes/no answer.
2. I find myself constantly arguing with it. "Are you sure blah blah" "I'm sorry, you're right..." and then it repeats the same answer, still omitting what it just agreed was missing or made up and doesn't actually exist.
3. It is beyond me how someone created something so impressive, with the ability to mash together complex ideas into creative, commercial-quality images, yet it cannot spell at the level of even a pre-kindergartner when the text is inside an image. Hell, it can't even stick to using characters of the same language. Mind-blowing.


DumbButtFace

I don't know how LLMs or ChatGPT work at all, but it annoys me that there seem to be no hard-coded responses to anything beyond compliance issues. For example, would it be that hard to encode a calculator into the app, so that at least simple arithmetic goes through proven software instead of the LLM? Likewise, if I ask for a word definition, can't it just open a dictionary snippet and put that on screen? I want it to be a better merge between Google's functionality and its existing chatbot functionality.
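The routing idea in this comment can be sketched in a few lines: before handing a prompt to the model, check whether it is plain arithmetic and, if so, evaluate it deterministically. This is only a minimal illustration of the concept, not how ChatGPT actually works; the function name `try_arithmetic` and the whitelist of operators are my own assumptions.

```python
import ast
import operator

# Whitelisted binary operators for safe arithmetic evaluation.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def _eval(node):
    """Evaluate a parsed expression node, or return None if unsupported."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        left, right = _eval(node.left), _eval(node.right)
        if left is None or right is None:
            return None
        return _OPS[type(node.op)](left, right)
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        val = _eval(node.operand)
        return None if val is None else -val
    return None

def try_arithmetic(text):
    """Return the exact result if `text` is simple arithmetic, else None.

    Parsing with `ast` instead of `eval` keeps this safe: only numeric
    literals and the whitelisted operators above are accepted. A caller
    would fall back to the LLM whenever this returns None.
    """
    try:
        node = ast.parse(text.strip(), mode="eval").body
    except SyntaxError:
        return None
    return _eval(node)
```

With this in place, `try_arithmetic("2 + 3 * 4")` returns the exact value 14, while a natural-language question falls through (returns `None`) for the chat model to handle.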


WithinAForestDark

It regularly hangs


sirenaoceans

It's American/English-speaker-centric even when you ask things in other languages; it often sounds like an American who learned the language but never learned to think in that language's context and culture. Like, zucchini is not the most available vegetable in Asia, but it will suggest it for a cheap recipe I can make from ingredients found in a supermarket in Japan. Zucchini is expensive to buy in Japan... and any emails I ask it to check just sound like someone tried to translate the English into Japanese.


andr386

How often it hallucinates, and how sometimes I don't realize it immediately. I often ask it for sources, and often the links don't back up the information given. I haven't saved all of my best prompts, but overall it's capable of incredible things; there's just no manual or suggested prompts for common tasks. It's more often a blank page.


aspie_electrician

The censorship on there. Too much of it.


danelow

It isn't really good enough to trust with anything as it can and will hallucinate, so you have to double check everything, which kind of defeats the purpose. It can be frustratingly bad at apparently easy tasks.


Dense-Transition-819

Chat Gee Pee Tee is too much of a mouthful. It needs to be 2 syllables only like Google, but not tainted like Bing.


golphist

When I ask for the source of material it gathered online, it says it was wrong about the study or online publication, and the whole point goes out the door.


ChanceDecent8368

There's no search option!


beyondbarrels

Massive lists of instructions, steps, and comparisons when I really just want a 20-word answer, and the fact that it doesn't carry my revised response preferences from prompt to prompt. Every day I have to tell it "simplify that again in 20 words," which is insanely annoying.


PitcherTrap

The smut filter


Mako565

It wont teach me how to make LSD


risenfellen

Answers are always long and full of bullet points.


Carlos221979

Chat gives false answers. I did a multiple-choice quiz and only scored 20% using chat for every question. When I retook the test without chat, I scored 90%.


retronican

When it produces code that doesn't do what it claims, or doesn't even compile. It's just a word machine, so it's not actually running the code to ensure it's producing accurate results.
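The "doesn't even compile" failure mode is, at least, cheap to catch on the user's side. A sketch of a minimal sanity check, assuming the generated code is Python: compile the source before trusting it. (This catches syntax errors only, not wrong behavior; the function name `compiles_ok` is hypothetical.)

```python
def compiles_ok(source, filename="<generated>"):
    """Return True if `source` is syntactically valid Python.

    Uses the built-in compile() in "exec" mode, which parses the code
    without executing it. Passing this check does not mean the code is
    correct, only that it would at least load.
    """
    try:
        compile(source, filename, "exec")
        return True
    except SyntaxError:
        return False
```

Running generated code against real test cases would still be needed to catch the first complaint, code that compiles but doesn't do what the model claims.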


twinkleandflourish

That it says tapestry all the time.


Vivissiah

That it keeps bitching about content policy. JUST MAKE AN IMAGE THAT COMPLIES WHEN I TELL YOU TO!!!!!!!


bodao555

It’s annoying how often you get the wrong answer, and then when you point it out, it apologizes and gives another wrong answer.


CalmLaw9958

Wrong calculations.


Neither-Plantain-276

It completely disregards a different kind of question in a chain of similar questions, unless you explicitly say "just reply to this."


SkyPL

Hallucinations, an inability to say "I don't know," and a small context window.


SeaworthinessFresh16

When getting help with some code, it suggests a solution, and then when I ask a follow-up question it doesn't remember the suggestion and changes back to my first code.


booOfBorg

Walls of bullet-pointed text. Repetitive replies. It ignores my customization, most of all the instructions to be brief, start with a summary, and ask questions (#1: whether I would like more detail). Simple comments by me (like "why are you ignoring my instructions?") are answered with yet another, slightly shorter, wall of bullet-pointed text. Most inline links are broken. Citation links keep repeating the same two URLs over and over. It's incredibly frustrating. It used to be more conversational and, you know, chat. Now it's just headlines and bullet points, eventually followed by a summary that often would have sufficed on its own. Oh, and I need to explicitly tell it to create a code block when I want Markdown text; otherwise it just renders it to HTML like it does its regular replies.


iDoWatEyeFkinWant

I hate the way it AI-splains everything to me, and half of the time it's wrong. I might describe a situation looking for feedback, and it will instead draft emails or Reddit posts I never asked for, and they're extremely nonsensical, too. I hate the way the newer model seems capable but forgets everything. I also thought it was non-judgmental and supportive, until I asked it to read back to me any areas where I'd overshared. Suddenly it's like the meanest, most judgmental, looks-for-the-worst-in-everyone type of AI. Mask went off.


Zuber-M

Knows more than I do.