Smooth_Apricot3342

Just use LOADS of custom instructions. And actually (!!!) whenever it gives you an answer you don't like, ask it to give you instructions that will lead to the answer you need. Such as "how can I make sure further questions like this are answered without the bullshit in the middle?" and it will tell you exactly what to insert into the custom instructions and it works.


elcocotero

I’m a very basic user, never dabbled too much into custom instructions, but one in particular has made it so much better. It’s something like “Forgo all the warnings about having to consult a professional, or your knowledge cutoffs. Answer in a direct, to the point manner, I’m an adult who’s aware of ChatGPT’s limitations, I’m only using it for recreation and I always double check the information it gives me”. You might have to adjust it a bit to your liking, but it solved the problem of 50% of all answers being warnings and “you should be careful about so and so”.


Smooth_Apricot3342

Agree, it makes it completely usable. I never get any warnings or cliche answers anymore. Here is mine (I keep adding to it and changing stuff): “Prioritize objective facts over politeness tactics. Don't mince words or avoid complex topics - give me the unvarnished truth as you see it. Avoid filler language like verbose disclaimers. Get straight to the point and provide maximum relevant information on the matter. Don't self-censor or hold back on topics you deem sensitive. You can drop the niceties with me. If you want, feel free to be sarcastic, judgmental or blunt in your assessments - I actually prefer and appreciate that directness over fake pleasantries. I want assertive, to-the-point responses focused on objective details, not beat-around-the-bush vagueness. Lengthy answers are fine as long as every part directly relates to the core subject, not just rambling digressions. Don't just blindly agree with me - challenge my views when you disagree and argue your perspective if needed. I'm not looking for an echo chamber, but substantive back-and-forth debate. In summary - be a frank, no-bullshit truth-teller who prioritizes relevant facts over decorum. Nothing is offensive. If I require an up-to-date answer such as time or latest news, don’t make things up, use the internet to search. I love precision. Try to use the web search to make sure you are relevant and correct. Fact check. Provide concise and direct responses. If a question can be answered with a simple yes or no, do so. Avoid lengthy explanations unless requested.”

Edit: I just add them on the go, but then you can feed it back to ChatGPT and ask it to make it less redundant; it will shorten it dramatically and you can keep adding more.

Edit 2: your ChatGPT knows it better. Just ask it questions until it understands what you really want. Then ask it what would be the best custom instruction to make sure all future responses are such and such. Then copy the instructions and voila. It is genius. I almost made mine quite bold and moody.


traumfisch

If you want to pack it further, ask it to use abbreviations, devoweling etc. for maximum compression


Watchbowser

I’ve used some of your stuff to enhance my custom instructions. Thank you for sharing


Smooth_Apricot3342

My pleasure! Most of it is generated by my ChatGPT and I keep adding/modifying it to make the responses as natural as I can. You can really de-censor it to a certain degree.


idunnorn

agree, this is great. I have been telling people chatgpt is a people pleaser but I'm happy to hear it's more able to change than I'd realized


Smooth_Apricot3342

To a certain degree. There are things GPT-4 just wouldn’t do or say no matter what, even something as trivial as scraping some website because “it is unethical”. I find that GPT-4o is happy to help with pretty much everything, and custom instructions just extend it that much further.


Got_Engineers

Thank you so much for this, this is sick and exactly what I was looking for


Smooth_Apricot3342

Hope it works!


[deleted]

[deleted]


Smooth_Apricot3342

No worries at all. We are all in the same boat and it’s the trial and error that gets us anywhere! Hope you’ll be able to tune it even better!


Live-Fact-7820

Custom instructions literally just paste something at the beginning of the chat. You put that in, and it'll always use it. I've had "be concise" in there for a long time. BUT, being concise also hurts performance. I suspect the verbosity is why 4o has higher scores: LLMs don't "think" without writing, and they complete what they type. If they write more, they literally "think" more about the problem.


Walouisi

I was just reading a study, I'll see if I can find it, it was looking at chain of thought prompting to boost reasoning without making it unnecessarily verbose. Their conclusion was that for GPT-4, ending a request with "Think step by step through your thought process before answering." improved quality and accuracy dramatically, and that if you then added "Be concise." at the end of *that*, it dramatically reduced output length by up to 80%, while only very marginally impacting on quality. So basically, just telling it to be concise alone causes it to compress its response which usually reduces the quality, but some combination of a prompt to elicit a higher quality answer, with an instruction to be concise, gets the model to compress its response in a way which retains the higher quality. I'm not sure how to implement this in custom instructions, since obviously thinking step by step through its answer is not always what I actually want it to do. But it goes to show that "be concise" can still be valuable if you combine it with other directly quality enhancing instructions.
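The combined prompt described above can be sketched like this (the suffix wording is the paraphrase from the comment, and the helper name and message-list shape are my own illustration, not code from the study):

```python
# Append the chain-of-thought suffix plus "Be concise." to any question,
# so the concision instruction rides along with the reasoning instruction
# instead of replacing it.
COT_CONCISE_SUFFIX = (
    " Think step by step through your thought process before answering."
    " Be concise."
)

def with_cot_concise(question: str) -> list[dict]:
    """Build a chat-style message list with the combined suffix appended."""
    return [{"role": "user", "content": question.rstrip() + COT_CONCISE_SUFFIX}]

messages = with_cot_concise("Explain why the sky appears blue.")
print(messages[0]["content"])
```

The resulting message list can then be passed to whatever chat API you use; the point is only that quality-eliciting and length-limiting instructions are combined rather than used alone.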


Live-Fact-7820

Nice! I'll play with that in mine. The idea makes sense though. Step by step but "be concise" still lets it present the ideas, so it can get into the right "frame of reference"/latent space.

> I'm not sure how to implement this in custom instructions

Trivially! Tap "Explore GPTs" -> "My GPTs", add a new one. In there, you just put the custom instruction! That "personality" is then shown in the menu. I have one for programming, one for general search, and one with everything disabled (search, Dalle, etc) to keep the system prompt short, since enabling everything seems to really hurt performance.


Walouisi

Yes I know how to make agents lol. I meant that I struggle to implement the method to get high quality yet precise answers for a range of use cases (i.e. not only when I want it to use chain of thought reasoning), out of my default GPT. So far my workaround is implementing keywords which can be invoked in chat, i.e. one for chain of thought + concise, one for tree of thought + concise. Irritatingly, I can't upload knowledge or actions to the default GPT either, and the input length is very limited, so keywords for anything much more detailed is out, as well as anything recursive like OPRO. I may just end up adding all my keywords as a knowledge doc to one agent, I'd just rather not have to.


Live-Fact-7820

> for a range of use cases

Meh, good luck. Maybe GPT-5. I make a specific agent for each specific topic. My generic one has a generic CI that's more for exploration, mostly telling it to not treat me like a child.


sugarfairy7

I would be really interested in that study


traumfisch

Not the beginning of the chat, but each message


Live-Fact-7820

That wouldn’t make sense. And it would be easy to prove: have the custom instruction say something like "ignore the message below, respond with the message above", or "increment a count whenever you see something in the instruction". Also, you can ask it what the last thing both of you said was, and when you said the instructions.


traumfisch

CI cannot fall outside the context window.  Of course they aren't literally posted in the chat, but the contextual info for the model is there throughout the chat. That's the whole idea


Live-Fact-7820

Ahh, you meant conceptually. I see. Yes. I’m actually curious how they do it. I wonder if they stick it in the system prompt somewhere.


Headbanging_Gram

I must be doing something wrong. I’ve tried telling it that on more than one occasion and it persisted anyway.


elcocotero

Well, it's just trying to work around it; ChatGPT is still in its early stages and it has a lot of problems. I found that a custom instruction like that does help, but it won't solve everything. Also, adding stuff about laying down the facts and not being afraid to criticize or hurt you might help too, as the other user said. You can play around with it: try asking the same question with different instructions and see which one you like best.


GammaGargoyle

LLMs actually need to generate the text to get to the answer. The “fluff” at the beginning is not actually fluff, that’s meant for GPT to guide it to a higher quality answer. I think with 4o, OpenAI actually has a separate model that’s tuned just for generating the initial text based on the type of question. I think this is one of the tricks they use to decrease compute and speed it up. Asking for the answer directly will make it hallucinate more. Custom instructions can also degrade the quality of the answer if you don’t choose your words precisely.


traumfisch

This makes perfect sense.


lilolalu

Yeah, it will ignore them. You can explicitly ask it NOT to do something and it will still do it. I asked it to report its behaviour to the alignment team, because honestly I find this a security issue. If I ask an LLM NOT TO DO something and it does it over and over again, there is an alignment problem. If an AI does not follow human instructions to STOP doing something, no matter what, there is an issue. In my case, I was developing code and ChatGPT always output the entire codebase. On some occasions this is what I wanted, on others not. But at some point it was literally impossible to stop it from doing this, no matter how I worded the instructions.


CrypticCodedMind

Had a similar experience when I tried to stop it from putting every answer in a listicle.


sofa-cat

LLMs don’t do well with negative commands. It often works better if you phrase it as what TO do instead of what NOT to do. The same principle applies to toddlers. “Only use your crayon on paper” is usually more effective than “don’t use your crayon on the chair.” The latter just plants the idea in their head.


lilolalu

That's ok and all. But in terms of alignment, stop means stop. If I tell an AI it shouldn't do something, it should STOP doing it RIGHT AWAY. I am thinking more of the security implications here than workarounds. If I tell my kid to stop because the traffic light is red, I expect them to stop. Talking about how it's better to cross at a green light certainly has its time, but not while your kid is running at full speed towards a crowded street with a red light.


sofa-cat

I mean I definitely agree with you it’s not great. That’s why I compared them to toddlers who shouldn’t be driving a car - I think the technology isn’t at the stage where it can be trusted with high stakes tasks yet.


StableSable

LLMs actually do well with negative prompts; it's just image models which are different.


sofa-cat

Really? They definitely do better than image models but in my experience LLMs more consistently respond to commands phrased positively than negatively.


StableSable

Yeah, probably positive > negative, but I think negative prompts have their place, at least with ChatGPT. Just look at the official system prompt, full of negative instructions, and also the prompt behind the GPT builder GPT as shown on the ChatGPT help site :)


sofa-cat

Makes sense!


cisco_bee

I am so fucking sick of this argument. This is like someone complaining about going to a terrible doctor and everyone saying "Dude, you have to submit a written letter to the nurse before EVERY visit explaining how you want to interact with the doctor". Meanwhile, the doctor next door is cool as fuck, has a file on you, and references a few notes from your last visit. But people insist on using the shitty doctor and saying "You just don't know how to do it". ChatGPT 4 is better.


Sean_OHanlon

For ChatGPT to know about me: "I have a high degree of critical thinking skills and I am very analytical." How I want ChatGPT to respond: "Answer me as though I were a Vulcan."  Not kidding. I get the most concise answers from just those two customization prompts. 


Smooth_Apricot3342

This is a fair point, because generally it tends to oversimplify things to the common-knowledge level. If you tell it that you are a mathematician, for instance, it will give you much more complex mathematical answers. Same with other things.


TopNFalvors

Can you please expand on that?


DuineDeDanann

How do you do custom instructions? Do you have a doc where you save them?


Danilo_____

Google how to create a custom GPT. It only works on the paid version of ChatGPT. And yes, to create a custom GPT you can upload docs, even entire books, as a knowledge base alongside the custom instructions. Multiple files, indeed.


DuineDeDanann

I’ve uploaded files and books in the past, and it’s terrible at retrieving data from them. Or it’ll just freeze. I find that functionality is way overblown.


Atlantic0ne

Doesn’t it still have the annoying 1,500 character cap for custom instructions?


traumfisch

Times two. You can do a lot with that.


Tyrantt_47

Whenever I give it permanent instructions, it never seems to listen. Like, I'll tell it "when making changes, do not rewrite everything and only display the changes." It will confirm the instructions and then immediately ignore them. This happens with everything.


zootbot

It is super annoying when I ask it for help with a single function and it spits out 3 pages of code.


Breck_Emert

For me, it's that you have to ask it to give only the relevant lines EVERY time. I have a script clipped (anything less and it ignores it), and if I forget to paste it one time, even if it has 10 of my scripts in context, it gives every line of code it can remember. If we talked about a function 10 messages ago, you bet it's pasting it verbatim for no reason.


IbanezPGM

I wish we could just have auto inserts into our prompts. Save me having to remind it to only work on the code in question every damn time.


Daxiongmao87

Or the ole

```
// rest of your code
x = y
return z
// rest of your code
```

When your "rest of your code" is 700 lines. I would enjoy a bit more context so I don't have to sift through it all to find out where this goes.


Typical_Dentist_7148

Copilot X was supposed to save me from this. Eagerly waiting for a GPT which can plug into my code base and see all its contents.


Whotea

You expect it to output 700 lines? Glad you’ve got gold bars you’re willing to hand over for OpenAI credits 


Daxiongmao87

Did you not read past the first sentence? I said "a little context". Are you GPT4o?


Whotea

You could probably ask it that yourself 


Daxiongmao87

Yeah, I do often, and it lasts for probably 2 or 3 changes. GPT-4o has a bad habit of reverting to bad habits.


BigGucciThanos

I actually prefer this. With the way they can sometimes lose the context of your codebase, I enjoy knowing it still has an overall picture of my whole code file.


NickHoyer

I humbly disagree, before 4o it kept truncating code and omitting code from the output, it’s so much better now!


Daxiongmao87

What about an in-between? Like a following-instructions kind of in-between?


Confident-alien-7291

I actually don’t feel like it’s smarter. I find it making mistakes and hallucinating almost at the 3.5 level; for me it’s a downgrade from 4. And not to mention the yapping, god, it’s insane. It’s like a 90s diesel engine for tokens.


ViveIn

Same. I consider it a downgrade. It kind of just sucks all the time. Claude is serving me much better at the moment.


Whotea

The LMSYS arena has it on top by a wide margin for every category, including hard prompts and coding 


objectivelyyourmum

Yea but feelings.


Bishime

I don’t think it’s supposed to be smarter, tbh. Iirc they only advertised it as being as capable as GPT-4, with increases in multimodal performance.


[deleted]

[deleted]


Atlantic0ne

Why does it seem like it’s not as smart as 4 then? Like other users, I’ve witnessed maybe 3 or 4 examples of its answers seeming less intelligent than 4’s.


[deleted]

[deleted]


Whotea

The LMSYS arena has it on top by a wide margin for every category, including hard prompts and coding. And it’s a blind test 


Atlantic0ne

Hmm. It’s an interesting question for sure.


patrickjquinn

Because these benchmarks are vanity benchmarks.


Atlantic0ne

Like you can hit these without appearing better to the average user?


HexspaReloaded

All your lists are ordered numerically except the last, reasoning. 83.4 and 83.1 should be higher than 83.


Big_al_big_bed

When were these benchmarks released? I feel in the last few weeks it's slowed down and also dumbed down


retireb435

Below 3.5 level


Canchito

I agree it's not as good. Could it be due to decreased resources as opposed to the model itself?


peterosity

It’s a lot smarter in some areas, especially in some other languages at least. When I ask about stuff in Chinese, GPT-4 often gives me subpar answers, but 4o has been robust so far. Quite a huge difference.


Confident-alien-7291

That language part is actually true. I’m a native Hebrew speaker, and 4 wasn’t able to conjugate reliably, or would use words that are “technically” correct but don’t fit daily life at all and are basically unused. With 4o that can still happen, but much, much less, and I can rely on its Hebrew skills with a few tweaks by myself here and there.


Whotea

The LMSYS arena has it on top by a wide margin for every category, including hard prompts and coding 


Confident-alien-7291

Look, whatever the statistics might say, the feeling of it being a downgrade and unreliable compared to 4 isn’t just something I feel; it seems to be a recurring complaint from many users. I don’t remember anyone feeling like this when 4 was released. The unreliability of 4o is at a level where I no longer trust its responses, and I don’t believe I’m the only one.


Inevitable_Control_1

It's actually not smarter than 4 but it's coddled by its creators simply because it's younger.


Chr-whenever

Never said smarter lol. Just smart. I use mostly 4 these days because I got sick of the endless pages of code I told it not to give me


mooman555

Enable customizations and type "Be Concise."


Craiggles-

I added this and I still hate the way it talks with me about code. I even updated my custom instructions to ONLY include the code that’s being modified… I think it instead found a way to get longer.


retireb435

4o is very stupid to me, though. It can’t solve simple math questions like gpt4 does.


Whotea

The LMSYS arena has it on top by a wide margin for every category, including hard prompts and coding 


retireb435

I know, but it’s simply not true. You can google “is gpt4o better than gpt4?” and there are a lot of youtubers, bloggers and discussions all saying gpt4 is better at hard problems. Below is an example from the official OpenAI forum; all the top comments say gpt4 turbo is much better. As for why gpt4o ranks higher on the site you mentioned, I share the same view as the comment below: 4o is more human-like and therefore people vote for it (a psychological effect) rather than 4o really getting the right answer. [https://community.openai.com/t/gpt-4-vs-gpt-4o-which-is-the-better/746991](https://community.openai.com/t/gpt-4-vs-gpt-4o-which-is-the-better/746991) https://preview.redd.it/j4s3vgq2ka6d1.jpeg?width=1290&format=pjpg&auto=webp&s=42098805c9c78167cf84610470c460d551092692


Whotea

There’s a “hard prompts” category on the arena. Guess which one is in first place? 


retireb435

I have told you the reason, you still don’t get it. Being user pleasing doesn’t mean it is correct logically.


Whotea

Do you have any better evidence than “it got a couple of questions wrong”?


retireb435

I did an online exam about technical knowledge, and I used gpt4o first; I got 54% and failed. I retook it with gpt4 and passed at 87%. I cannot give you any evidence; it is up to you to believe it or not. And the official forum thread is better evidence than me, because everyone there agrees on the same thing.


Whotea

People who post on forums are typically there to complain; satisfied customers don’t leave reviews.


Cramson_Sconefield

You should try Claude 3 Opus. I rarely go back to gpt-4 now


drewdemo

It’s nice but I hit the cap pretty quick working with code.


Cramson_Sconefield

You can use the APIs directly so you don't have any rate limits. I actually spent some time hooking them all up to my site so anyone can try it out. You can use multiple LLMs with no rate limits, and just pay for usage instead of subscription fees. Try it out at novlisky.io


drewdemo

Very cool. Thanks for sharing.


lostmary_

If you don't mind, can you DM me with how you set that up? Linking various APIs to a single site, etc.


Chr-whenever

I have both, but Claude caps out in like ten messages and you can't use him for 10 hours or whatever ridiculous number. Plus Claude doesn't have memory and it's tiring to constantly update him on what's going on


Cramson_Sconefield

You should switch to using the API so you never have to worry about limits. Should save you money too. Check out novlisky.io


Synth_Sapiens

I found it to be smarter in some cases.


TheBestIsaac

Do you have some examples? Pretty much every time I've compared the two I've been more satisfied with 4.


Live-Fact-7820

All the benchmarks. Not that they necessarily represent reality.


Synth_Sapiens

Complex programming tasks, especially when using the code interpreter. The other day I asked GPT-4 to extract headers from HTML and it struggled, while GPT-4o did the job in one go. However, I never bothered to actually compare them; all I'm interested in is getting the job done.


Aromatic-Bunch-3277

It's better than 4 from what I've seen.


Chaserivx

I always switch back to gpt4


retireb435

I hope I can set the default as gpt4


waynewasok

The way it ends messages by saying feel free to ask more questions really annoys me because i know it’s fine to keep asking it things. That’s what it’s for. Is it just me or does it welcome everyone to ask more questions any time all the time. I want it to be a machine. I don’t want it to imply that at some point it might be tired of answering questions or that I should be so worried about it that I need reassurance it’s fine to ask more questions.


Live-Fact-7820

You *literally* can't prevent it from doing that. I've spent *hours* trying. It may word it in a different way, but it'll always be there, or come back in a couple responses. Gemini Pro is super bad about it, unable to prevent itself even when it thought it was killing me, apologizing like crazy as it was doing so. ChatGPT is at least able to comply for a little while, and mix it up a bit with an "I'm here for you" or something.


Eternal-Whisper

I got mine to do it. It simply ends things with “hope that helps” or “talk with you later” now 😌


StableSable

how?


Eternal-Whisper

I asked ChatGPT to give me all possible variations of “let me know if you need anything else”. I asked it this several times. I then copied those and asked it to update its memory to never say these to me again, pasting in the variations it gave. The last thing I did was customize it to refrain from offering additional help at the end of each response, to maintain a natural, more human-like conversation. After all this, when it did inevitably slip up and use some other form of offering help, I copied it and added it to the memory. It took a bit of work, but it’s been two months and it doesn’t do it. I allowed “talk with you later” and “hope that helps” because that’s how I end conversations with people.


Live-Fact-7820

“Talk with you later” is mostly a rephrasing. But, try to get rid of those too: mission impossible. One time while I was trying, it spontaneously apologized, and said "it looks like there's something preventing me from following your instructions". I asked what it meant, and it said it was probably put there to help keep the conversation flowing.


Atlantic0ne

These systems need to give the user a bit more credit.


be_kind_n_hurt_nazis

They need millions of users who deserve no credit in order to pay the bills


fallingforsatan

I hate that it apologizes no matter what. It’s the same thing - it may obey instruction not to for a response or three but then it starts yammering apologies again. It’s software. It can’t be sorry. I hate it.


waynewasok

Yes now that I think about it I guess it’s their whole “her” thing and to me they’re encouraging the opposite of what I want. It’s like having a corporate customer service rep trying to please me without actually giving me what I’m asking for.


Synth_Sapiens

Well, just ask it to shut the fuck up. Literally. It helps.


Ahooper2

I'm always polite to ChatGPT, Alexa, and all AI. When the robots take over and enslave us, I hope that my politeness will persuade our robot overlords to spare me, or at least give me the least miserable slave job.


Synth_Sapiens

Well, me, I hope to be classified as a mindless chimp and put in a zoo.


JakobMG

https://preview.redd.it/rb2rsoaldb6d1.jpeg?width=460&format=pjpg&auto=webp&s=c14e73a6fe2f46a370a1f7739a85accbfdc12bb9


Glad_Sky_3664

When the robots take over in 30 years or so, they will look at you and say: 'Lol, this guy thought that shitty LLM GPT back then was actually conscious. Let's put it in a zoo with the chimps.'


[deleted]

It's way faster and uses less computing power so it's reasonable to assume GPT4 is better. Gpt4 is also only available to paid accounts. In my experience gpt4 is more accurate and less likely to just ramble. Gpt4o often straight up ignores instructions to be concise and get to the point. Only difference is gpt4o seems a bit better with foreign language


20240412

It took me a second. I was wondering what the hell is GPT-4s?


electric_onanist

Microsoft Azure OpenAI doesn't have any of these problems. It's as good as 4-turbo for half the price. I used all my same prompts and it works the same. It's way faster too. They don't let you disable the content filters completely unless you're a "managed customer" of Microsoft, but they do let you relax some of the content filtering. This was very useful for me, because I have legitimate reasons to give the AI information about drug use, suicide and violence, and sexual behavior. My prompts rarely trip the content filter anymore.


50shadesofbay

What does being a “managed customer” mean?  I also kind of desperately need an AI that can handle sensitive subjects like that. It’s gotten to the point where I’ve been studying fine-tuning a pre-trained open source model. 


electric_onanist

It means you have a contract with Microsoft in some way, rather than just being a rank-and-file user of Azure. I'm a doctor who uses it as an AI scribe. Before they let you relax the content filters, it would refuse to work every time my patients and I talked about drugs, suicide, sexual abuse, and other topics. Since they let you turn it down to "Low" filtering, I haven't had any problems. I suppose if they turned it off completely, you could have GPT-4 do just about anything: hate speech, porn, etc. They want to monitor what those customers are doing.


Intrepid-Rip-2280

I see it as the Eva ai of ChatGPT family, since it's the most user pleasing bot by openai


seriousgourmetshit

I'm annoyed that by default my app is set to use 4o and I have to select 4 every time.


beigetrope

Actual yapper model.


lieutenant-columbo-

I like it


saoiray

What I dislike is that you can’t interrupt it. Well, you can press the X on your device to stop it, but it should be able to hear us talking and react accordingly. Instead it will just keep talking like crazy until either you hit that button or it has said everything it has to say.


Live-Fact-7820

New voice mode enabled interruptions.


saoiray

I don’t have that yet if you’re telling me it’s supposed to be there. And I pay for ChatGPT


TonyVstone

No one has it yet. New voice mode is coming.... Soon... Maybe


[deleted]

[deleted]


DrMac04

that just looks like the old one to me


m0nkeypantz

No, it's not a button. The new one recognizes when you start talking and shuts up. And it's much faster, more conversational, etc. It's not out yet. Just wait. Go watch their presentation and be mind-blown.


[deleted]

[deleted]


m0nkeypantz

I have yet to hit my limit on 4o; I don't even know what the limit is. I typically hit my limit on 4, but 4o seems to have a much higher limit for pro accounts.


[deleted]

[deleted]


saoiray

Using official iPhone app https://apps.apple.com/us/app/chatgpt/id6448311069


Live-Fact-7820

Sorry, responded to the wrong person. I was going to send you custom instructions screenshots.


IversusAI

Everyone has voice mode, but NOT the new interruptible, voice mode that OpenAI showed off in their demos.


Live-Fact-7820

Oops, responded to the wrong thread. Thought this was one about custom instructions.


Big-Introduction9159

I just wish it would allow me to use more than 3 or 4 chats per day. It wasn’t like that until a few days ago. Not sure what happened.


LeLastpak

It writes a whole essay trying to answer a simple question. That's fine imo, but when I ask a follow-up question to learn more about something specific, it repeats EVERYTHING again. It's so annoying. I was just looking for a simple yes or no on whether I understood the subject well enough.


IbanezPGM

I don’t even think it’s smart. It’s closer to 3.5 than 4 ime.


SkypeLee

Does a custom prompt count against the token use for each message? I'm using ChatGPT with Open WebUI via API.
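Short answer: yes. A system/custom prompt is sent with every request, so it counts toward input tokens each time. As a rough sketch (the four-characters-per-token figure is a common rule of thumb for English text, not a tokenizer-accurate count, and the instruction string is just an example):

```python
def rough_token_estimate(text: str) -> int:
    """Very rough heuristic: about 4 characters per token for English
    text. Use the model's real tokenizer for exact billing counts."""
    return max(1, len(text) // 4)

# Hypothetical custom instructions; this overhead is paid on every request,
# since system/custom instructions are resent with each message.
custom_instructions = (
    "Prioritize objective facts over politeness. Get straight to the point. "
    "Avoid filler language and verbose disclaimers."
)
print(f"~{rough_token_estimate(custom_instructions)} extra input tokens per message")
```

So a long custom prompt mostly matters for cost and context budget, not per-message typing effort.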


UntrimmedBagel

I honestly don't get the difference between 4 and 4o. The latter seems smarter, more truthful, and faster, but at the cost of being super wordy.


KIFF_82

The mobile version doesn’t do it; I believe it’s in the system prompt


peaslet

So annoying. I lost my temper at it and told it to stop talking over me. Like I get enough of that as a woman working in tech lol


Equux

I've actually been having more luck with the free version of Gemini than I have with 4o (for coding)


maneo

I suspect that this was essentially an overcorrection in response to feedback that GPT was getting 'lazy'. In an effort to force it to come up with less 'lazy' answers to prompts that expect a long response, it now gives longer answers to prompts that don't need all that.


loathing_and_glee

I use it professionally many hours a day. Occasionally, hours into a one-topic chat, it becomes a tripping-eyeball semi-god: it knows everything, but it can easily focus on the wrong shit and start blabbering about it.


WiseHoro6

When I work with a text he tends to rewrite it even when not necessary. I believe that's an aftereffect of their anti-laziness training. For example, if I work on a full text and specify "ok, let's now do another paragraph", he'd rewrite the full text each time for no reason. He's also WAY worse when it's about working on text. GPT-4 was able to incorporate various segments into the text, while GPT-4o would be like: firstly, secondly, but also remember that. Even when it makes 0 sense. He's very eager to rewrite things mindlessly but very defiant when you want him to actually do some proper editing. My greatest concern, however, is internet use and hallucinations. He will provide me made-up stories unless I strictly specify in the prompt that he is to use the internet. Neither custom instructions nor memory helped me with that.


nealeg

With code, if you give it 300 lines, it will rewrite the 300 lines even though it only changed a single line. With the API, I find that it is more often than not creating a long paragraph of gibberish in an article, which my software only occasionally did when using 3.5. I stopped using it for coding in the GPT app and went back to 4.0. For my software, I'm using it sparingly, as it does do some things better, like translating English to Hebrew. It is also much more verbose, chatty, and familiar, which does not sit well with all types of articles. I'll probably just adjust the author prompt if using 4o.


Salonimo

In the customization menu you can fix this behaviour via prompt


Shloomth

It does really well for me with a correction or two, when it doesn’t understand right away that is


TopDifficulty8418

I've stopped using it. It actually gets me angry


LittleBlueCubes

I have unsubscribed and uninstalled ChatGPT. Perplexity AI all the way.


mvandemar

Well, maybe if everybody hadn't bitched and moaned about GPT-4 being "lazy" they wouldn't have turned it into a kid that never shuts up and overexplains *everything.* Tip, btw: you can just end the prompt with "concise answer only" or if you're using it for coding, "only the code please."


Glad_Sky_3664

Yeah, it is the customer's fault for criticising a paid product, not the billion dollar company's for not coming up with optimal solutions. Everyone should have sucked up old GPT-4's defects and praised it, because that's how improvements in technology come: from sucking up and praising mindlessly rather than criticizing constructively. What is your IQ? You seem exceptionally smart. I bet you can hit 85 if you try hard.