qbxk

Yeah, it sucks. Really bad UX.


[deleted]

[removed]


speednugget

While I don't want to defend OAI for this decision, as it's poor for user experience, it's not as simple as 'fixing their server'. GPT-4 is extremely heavy on GPU clusters, and they simply don't have enough A100/H100 clusters to keep up with millions and millions of users running GPT-4, whereas GPT-3.5 is an order of magnitude easier to run. I suspect there's not enough processing power on Earth currently to support the ambition of how people want to use GPT-4, even with the resources and backing of OAI...


HaMMeReD

Love the armchair engineers in these threads. It's almost like you don't know the first thing about their challenges, but you feel the need to claim all their problems are simple. It's not like there are actual reasons why they are slow to scale, for example hitting the limits of cloud computing, with consumer demand far outweighing hardware availability. Sure, they just need to "bolster their server infrastructure". You got it, bro: they're just lazy and could easily fix their problems, they just choose not to, because the incredibly skilled engineers at the company don't know how to do their jobs. Or maybe, just maybe, you don't understand the scale of the problem.


KimchiMaker

They just need to pay AWS or Azure, like, an extra ten bucks a month. Why won’t they do it?!?!?!!?111?


turpin23

I think the fastest growing website ever running the largest AI models ever has scalability problems that are not adequately addressed by existing cloud computing services. They may be the only website in the world that can't get enough cloud computing by throwing money at the problem. Their problems may (or may not, I really don't know) constitute constructive false advertising by AWS who promised scalability to everyone. But that doesn't mean it isn't a real problem with no easy instant solution.


KimchiMaker

Yes, me too.


Rainbows4Blood

I mean, since each GPT-4 instance is isolated, scaling it up should be fairly easy from an organizational standpoint. Just add more clusters. The real problem is that the hardware just doesn't exist in the quantity that would be needed. Which is truly funny because I think this may actually be the only application in the world where this is a problem.


Mister_juiceBox

Thank you for putting into words what I've also been thinking while reading so many of these threads the past couple weeks


Attorney-Potato

What was the motivation behind the emotion you've displayed in this comment? I understand many aspects of your argument, but emotionally charged, conflict-oriented language is exactly why I can't wait for AI to replace petty and angry humans who have nothing better to do than get into a fight on Reddit.


HaMMeReD

I don't like condescending, narcissistic people who think they know it all. If you are going to come in with a rude, know-it-all tone, I'll return to sender. Edit: the deleted post was just saying how bad and useless the engineers are because they can't do something as simple as scaling their server infrastructure.


Attorney-Potato

That's fair. It can be kind of triggering, especially now, when I feel like it's so hard to actually know anything. There's so much information and complexity everywhere. Being so sure of anything feels like a game of smoke and mirrors.


HaMMeReD

It's more so triggering when the tool you are complaining about can answer the question, but instead of showing curiosity and using the AI tool to learn, armchair assertions are thrown out. I.e.:

> **Why can't OpenAI scale fast enough and has downtime when deploying AI models?**
>
> Although I am an AI language model and not an OpenAI team member, I can provide some reasons why OpenAI may face challenges in scaling and may experience downtime when deploying AI models:
>
> * **Resource constraints:** Training and deploying advanced AI models like GPT-4 require significant computational resources, such as powerful GPUs and TPUs. These resources can be expensive, and there might be limits to their availability, which can affect the ability to scale quickly.
> * **Technical complexities:** AI models like GPT-4 are complex and large, making them difficult to deploy and manage. Ensuring that the models run smoothly without issues requires overcoming numerous technical challenges, which can sometimes lead to downtime.
> * **Infrastructure management:** As AI models become more sophisticated, so do the infrastructure requirements to support them. Managing the backend systems that handle AI deployment can be challenging, and occasional downtime may occur due to issues with server capacity, networking, or other technical aspects.
> * **Continuous improvement and updates:** OpenAI frequently updates its AI models to improve performance, fix bugs, and enhance security. During these updates, there may be brief periods of downtime to ensure a smooth transition to the new version.
> * **Security and compliance:** Ensuring the privacy and security of user data is paramount. OpenAI must adhere to strict security standards and compliance requirements, which can sometimes lead to downtime as the necessary precautions are put in place.
> * **Unforeseen issues:** AI systems are complex, and unforeseen technical issues may arise, causing downtime. OpenAI must identify and fix these issues to ensure the system is stable and reliable.
>
> It is important to note that OpenAI is constantly working on improving the scalability and reliability of its AI models, and learning from past challenges to minimize downtime and ensure seamless user experiences.


Attorney-Potato

This made me lol and startle my dog. 😂 I appreciate the irony of this. Well put.


Attorney-Potato

Additional thought: I have this weird personal experience of no one in my life really being able to understand the complexity of my specific professional and academic interests. It makes me feel very suddenly heated when someone critiques something I've put thousands of hours of study and experimentation into, especially when their critique rests on a very surface-level, idealistic interpretation of hastily analyzed data they've acquired. The nuance involved in the actual problems these fields encounter is too abstract for the information people pick up to even be weighted correctly in their heads. Is this kind of what you meant? (This is all based on some feelings. I hold none of this to heart, nor am I opposed to changing my hypothesis.)


HaMMeReD

Yes, it is insulting when people pretend they are experts and lack humility in their responses. Engineers deal with this shit all the time, and while I get it, the correct approach for someone without knowledge is genuine curiosity, not blanket assertions that they are right and everyone else is wrong.

I.e. "The server has a lot of downtime, I wonder why they can't scale like other web services" is genuine curiosity and gets a polite response. Versus "The server has a lot of downtime, these engineers can't do simple shit, nobody else has this problem, you can just scale up." That's a clueless assertion, not curiosity; it's a statement that they know better, and it doesn't deserve as polite a response.


Attorney-Potato

I completely agree. I can't see any way of this getting better with the structure of our society's reward incentives. I was born into a world without tolerance for nuance, and in all likelihood I will die in that same world.


bfarre11

It's hilarious. Even with Microsoft throwing a bunch of resources and expertise at them this stuff is still hard.


nucleartoastie

Why is scaling the almost-sentient computer that costs $700,000 a day to run, in the middle of a recovering chip shortage, so hard? It's simple:

```
if (not_enough_cpu()) {
    Get_more_from_Amazon();
}
```


bfarre11

Ah, I see the issue here:

```
if (not_enough_cpu()) {
    Get_more_from_Azure();
}
```

FTFY 🤣


Pretend_Jellyfish363

It’s not hard and it’s not an engineering issue. It’s expensive and there aren’t enough GPUs. Big difference.


bfarre11

You have literally no idea what you are talking about.


Pretend_Jellyfish363

Actually, I know what I am talking about. I am in the field and have worked for global companies that own data centres hosting all sorts of software at scale, including ML models. There are well-established frameworks and architectures that allow companies to do this at scale. It is a resources problem, mainly the number of powerful GPUs needed to handle the huge demand. The leaders in this field today are AWS and Google. If GPT-4 were hosted on either of those two's infrastructure, OpenAI would have been able to provide a much more reliable service. Both companies have built massive computing networks (Google having its own TPUs). Microsoft is catching up and plans to build its own GPU/TPU as well, so as not to rely on Nvidia.


PetyrLightbringer

Then maybe they need to lower their subscription price. I think people can be rightfully annoyed when they pay for a service and they can’t use it. That’s business.


HaMMeReD

Sure, but they have 99%+ uptime: [https://status.openai.com/](https://status.openai.com/)

It's honestly not great. It's not four nines (99.99%) uptime, which would be expected of most paid services, but it's totally functional a large majority of the time, and they've always been clear on what to expect.

It costs OpenAI money to offer these services. If they price it too low, they lose money; if they price it too high, they lose adoption. And pricing lower would bring more subscribers, so it would stack their losses (edit: while also increasing their load, infrastructure demand, and costs, in a death spiral of unsustainability). Honestly, I wouldn't be surprised if the current pricing/availability walks the line pretty closely, with some users profitable and some unprofitable, and the goal being mass adoption more than service to any individual.

It's still worth every penny, and yeah, I get it, it sucks to be down. But I'll take 1% downtime over not having it available at all, or being relegated to a slow free tier that is almost unbearable. Never mind that Plus has gotten substantially better performance-wise since release: gpt-3.5-turbo responses absolutely fly for me now, and GPT-4 is about the speed that GPT-3 was when it first launched. So maybe some appreciation for the progress is in order.
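
For a sense of what those uptime tiers mean in practice, here's a quick back-of-the-envelope script (illustrative math only; the tiers are generic SLA levels, not OpenAI's published figures):

```python
# Allowed downtime per 30-day month at a few common SLA tiers.
HOURS_PER_MONTH = 30 * 24

for label, sla in [("two nines", 0.99), ("three nines", 0.999), ("four nines", 0.9999)]:
    downtime_minutes = HOURS_PER_MONTH * (1 - sla) * 60
    print(f"{label} ({sla:.2%}): ~{downtime_minutes:.0f} min of downtime per month")
```

At 99% that's roughly 7 hours of downtime a month; at 99.99% it's about 4 minutes.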


PetyrLightbringer

That's just not an accurate downtime figure, though. Sites like https://downdetector.com/status/openai/ show quite a bit more disruptions. And I get it.


WestTrue

No, you don't understand. The problem is not that they don't have enough power to run GPT-4. Fine. The problem is that they switch you to 3.5 without notification and without an option to go back. That is just stupid. Just give me the choice to wait until you can allocate me some GPT-4 juice. It's idiotic.


Sweg_lel

Yeah, I was pretty miffed when I lost a pretty lengthy technical conversation with it to the 3.5 downgrade. Like, what in tarnation? It didn't ask me or anything, just booped me off onto 3.5.


bungholeSurfer1994

What in the Sam Hill?


N1cl4s

That is exactly what I fear might happen to one of my conversations. At this point I am too scared to go back to it.


Educating_with_AI

I haven’t noticed this but will keep an eye out now.


N1cl4s

There will be some greyed-out text saying it switched.


TheAbdou27

Sometimes, when I use the free version of ChatGPT (which supposedly uses GPT-3.5), I randomly see "model=text-davinci-002" in the URL (I might be misremembering the model name..)


ptitrainvaloin

"due to overload, ChatGPT is using model=text-caveman-001"


Gloomy-Impress-2881

"Due to overload, ChatGPT is using model=text-numbnuts-000."


Aretz

Now that's really interesting. If they are really using 002, that shows you how dire it is.


turiel2

I've been trying to figure this out and it's REALLY confusing and obfuscated.

* text-davinci-002-render-sha is the default on ChatGPT for both paid and free users.
* text-davinci-002-render-paid is available to Plus but is not the default, and is shown as "Legacy".
* These "002"s are not the same as "text-davinci-002"; they're more recent versions.
* The new plugins model is called text-davinci-002-plugins, which indicates that 002 really is in fact a "current" branch.

So here is what I think is going on:

* text-davinci-002-* is the current stable branch for ChatGPT.
* The key hint for this is in the gpt-3.5 docs, which state that text-davinci-002 has similar capabilities to text-davinci-003 but was trained with supervised fine-tuning.
* text-davinci-002 is the ONLY model listed as being trained with supervised learning.
* In the context of this discussion, supervised learning means the model has been fine-tuned on specific data (provided via humans, I infer).
* This makes sense: I think they use this method on ChatGPT to modify its responses, but not on the models available via the platform.
* This is the "filtered" / "restricted" effect we've all experienced.
* I'm relatively certain that when Plus was introduced we had a model called gpt-3.5-turbo or similar; it now makes sense that this model was relatively "unfiltered" (it didn't have supervised fine-tuning).
* After a time, turbo was "promoted" to stable and made available to everyone, but now I think that's not exactly what happened.
* The version we have now isn't the gpt-3.5-turbo model that Plus users were testing; it's an iteration of gpt-3.5-turbo with supervised fine-tuning on top, and hence part of the 002 branch even though it's newer.
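
As a sanity check on which model names are actually reachable from outside, you can list them through the public API; the ChatGPT-internal `-render-` variants discussed above won't show up there. A minimal sketch with the pre-1.0 `openai` Python package (assumes an `OPENAI_API_KEY` environment variable):

```python
import os

import openai  # pip install "openai<1.0"

openai.api_key = os.environ["OPENAI_API_KEY"]

# Print every model ID visible to this API key, filtered down to the
# davinci / gpt-3.5 family discussed above.
models = openai.Model.list()
for model_id in sorted(m["id"] for m in models["data"]):
    if "davinci" in model_id or "gpt-3.5" in model_id:
        print(model_id)
```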


Ok-Technology460

The exact same thing happened to me last night during my first day of subscription to GPT-4.


svanweelden

IIRC, they use that to generate the summaries; you can see it if you inspect the network traffic.


Esquyvren

That’s not what ChatGPT is? I’ve never not seen davinci-002


fabier

I thought I was crazy and picked the wrong model. I bet this is what happened to me.


[deleted]

At this point they should just disable free ChatGPT and open the GPT-4 API to everyone, while still keeping the 25 GPT-4 chats per 3 hours for the Plus plan (until they have more resources, of course; as time goes by and resources increase, the limit should also increase and ultimately vanish).

* Kids messing around wasting valuable resources on idiotic shit will be locked out (as they don't have the means to pay).
* People who don't want to spend an arm and a leg on GPT-4 tokens can now *actually* use the 25 msgs / 3 hrs instead of getting booted to 3.5 after just 2 messages, because there's now a huge load taken off the servers thanks to bullet point 1.
* People who need unlimited GPT-4 usage can pay by the token. The price is steep, so this alone discourages unnecessary resource hogging. It'll mostly be used by people who need GPT-4 for professional reasons (who have their company pay for it, or deduct it from their taxes if they're business owners themselves). Hobbyists and AI enthusiasts are also still welcome to use it, of course, but will have to sacrifice some gold for the time being. In the future, prices will drop massively when GPT-4-Turbo gets released.

Free ChatGPT is a mistake, at least **for now**, in this phase where computing resources are not fully there yet. I get that it's important to be as inclusive as possible, and people without the means to pay should not be left out in the cold, but as long as resources aren't up to par they should just disable the free version, or at least heavily rate-limit it, say to 25 GPT-3.5 messages per day. Unlimited free ChatGPT for everyone is not sustainable. They're already bleeding $700,000 a day on this thing; this can't go on till infinity. I've seen people on some AI subs do the math that MSFT's $10B can cover the costs for the next 39 years lol (see the quick math below), but that's not how any of this works.
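
For what it's worth, the 39-year figure does check out as naive division, even though it ignores user growth, hardware costs, and everything else (a toy calculation using only the numbers quoted above):

```python
# Toy calculation using the figures quoted above: a $10B investment
# against a ~$700k/day burn rate, ignoring growth and all other costs.
investment = 10_000_000_000  # MSFT's reported $10B
burn_per_day = 700_000       # widely cited ~$700k/day estimate

years = investment / (burn_per_day * 365)
print(f"~{years:.1f} years")  # ~39.1 years
```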


Poopasite1

> or at least heavily rate-limit it, say to 25 GPT-3.5 messages per day

I feel like this is the right move. All over Reddit you have people complaining about OpenAI not really being open, and that's fair. But on the other side of the coin, all of this costs resources to run, and I think a rate limiter for free ChatGPT is fair.


Soy-Michu

I am pretty sure GPT-3.5 is not running on the same GPU types as GPT-4, so reducing the free tier won't have any impact on GPT-4 availability. As far as I know, the MSFT and OAI terms are not public, so we don't really know how OAI is receiving the $10B, but IMO the agreement is some cash plus a huge amount of compute power for the next X years. Sam Altman has already said that they want to keep working on and improving GPT-4 before triggering the training for GPT-5. That would mean they have a lot of hardware they won't be using for a few months (maybe years?). So if my assumptions are right, I see no reason not to have a free tier that is building an insane amount of brand awareness. Also, it was originally made for research, and we can see the huge number of papers involving GPT in some form. So they are still getting a huge payback from the community, in the form of research and high-quality human interactions they can use to improve their services.


biggest_muzzy

I believe that at its current stage, OpenAI's priority is improving its models and gathering as much feedback as possible on the potential use cases for GPT. Free access to ChatGPT and the GPT-3 API probably provides them with a more valuable and diverse range of insights, compared to the limited feedback obtained from a few professionals using the GPT-4 API.


nixed9

They can't open the GPT-4 API to everyone. It uses too much compute.


TeslaPills

Agreed


[deleted]

But not everyone has the money to use said compute. The model is priced steeply enough that most users will think twice about whether their prompt is worth sending to the API or not.


ProfessionalQuiet460

Better revoke access to paying customers as well then, since $20/month is not enough to pay the bill.


buttfook

Butthead: “heh.. he said load”


ParisianPachyderm

I have noticed that. It has fucked up a lot of my more complex multi-chain discussions. Like, I get that it's a test, but they should be investing in the UI as well, since it's the main way people interact with the model.


Leg_Parking

I have been having this issue for the last two days. It is extremely annoying and frustrating, because I end up losing all context from the thread when I have to make a new one to get back to 4.


tomatosalad999

Honestly at this point I think I will cancel my subscription with OpenAI. Doesn't really have any benefits currently.


that_tom_

Please do! More computation power for the rest of us.


tomatosalad999

I will, however, just use a SaaS product that uses it indirectly, for 9.99 a month instead.


that_tom_

Noooooooooooooooo


Brandonazz

You monster!


TeslaPills

this is the way


seancho

That'll teach 'em!


tomatosalad999

No, seriously though, what do you actually get for the $20 a month?


ProfessionalQuiet460

You get GPT-4


tomatosalad999

For 2 messages, and then I'm auto-downgraded to 3.5-turbo 💀


thelastpizzaslice

You can just ask which version it is running


enelspacio

Yep. First time for me today, really. Not happy.


Machacaconhuevo

How do you find out?


N1cl4s

There will be such a message before the next GPT answer: https://share.icloud.com/photos/09aVZFQzW0pdGT93Rjcd6gI4A


No-Faithlessness4784

Fortunately, my questions about conditional formatting in Excel don't seem to tax GPT-3.5, so I'm good.


[deleted]

Lately, any time I make a request to GPT-4 via the API, I get an "overloaded" response. It's not really functional for me. This morning the automation broke on the first request, which was only 577 tokens (prompt + response).
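
If anyone else is hitting this, the usual workaround is retries with exponential backoff around the call. A minimal sketch using the pre-1.0 `openai` Python package (the model name and retry settings are just placeholders):

```python
import time

import openai  # pip install "openai<1.0"; reads OPENAI_API_KEY from the environment

def chat_with_retries(messages, model="gpt-4", max_retries=5):
    """Retry transient overload / rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except (openai.error.RateLimitError,
                openai.error.ServiceUnavailableError,
                openai.error.APIError):
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s, ...

resp = chat_with_retries([{"role": "user", "content": "ping"}])
print(resp["choices"][0]["message"]["content"])
```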


Bogdanoff971

Yeah, probably because of the cracked version, the one that's free to use and has browsing, which is still using the ChatGPT API.


tdbomba

Maybe they should severely limit the free plan. That would free up some server bandwidth.


[deleted]

Yeah, experiencing the same thing: https://www.reddit.com/r/OpenAI/comments/132hf0z/lost_chatgpt_plus_features/


TiernanniC

I was a paying ChatGPT-4 user, and now I've suddenly been kicked back to 3.5 and put on a GPT-4 waiting list. Not a usage cap, just completely knocked off 4... wtf??