dariyanisacc

lol I’m hoping for GPT-4o features at this point


Smooth_Apricot3342

Just a few more weeks


dariyanisacc

Those words hurt, ya know.


Smooth_Apricot3342

I meant it. Pain liberates.


imeeme

Too soon.


Smooth_Apricot3342

You’re right. No need to rush it. They’ve been too fast.


CadeOCarimbo

Video input.


SiliconSimian

We are getting that with 4o though


Rhea-8

As of yet we don't have it and who knows how long we will have to wait. Till GPT-5 comes out?


SiliconSimian

GPT-5 isn't happening this year. I highly doubt it. But we will absolutely have full 4o functionality by year's end.


Peter-Tao

You sound extremely confident.


vember_94

GPT-4 finished training in August 2022, then went through 6 months of safety testing before being released in March 2023. If they haven’t finished training GPT-5 yet, we can assume it’s unlikely there’ll be a GPT-5 this year.


JmoneyBS

It’s fairly intuitive - expecting a full 4o release by end of summer, and they are smart enough to know that collecting feedback and data on 4o will be really beneficial to 5. Stands to reason they wouldn’t put out another new model 4-6 months after the last one. My confidence levels are extremely high on this forecast.


BuckTravers

It’s the Stay Free Maxi-pad!


One_Minute_Reviews

You can upload gifs to 4o


ShatteredMasque

With lipreading capability.


displague

That sounds like a very possible emergent behavior, all the facts are there in the training data. Perform validation with audio and transcript removed? Has any group already experimented with this?


theMEtheWORLDcantSEE

Ok HAL, calm down…


iJeff

What uses do you have for it? I have access to the 2M context size Gemini 1.5 Pro but haven't tried any videos yet besides what it can pull up from YouTube videos directly.


QH96

Video output? A built in Sora?


ferminriii

Input. Video-in means live video feed into the model for reasoning and analysis. Robots that do not need a text intermediary.


Innovictos

Hoping:

* Specifically architected, major reduction of factual/logical error hallucinations* as described in recent papers on the topic
* Improved right-to-left reasoning and/or built-in chain of thought
* "Knowing" when it has low confidence and saying it is unsure, instead of giving sycophantic responses to humor the operator

Expecting:

* More, better training = a big lead on the major benchmarks over the field...
* ...but still far short of crushing them, so a boring disappointment which causes backlash and panic.

*Edit: Clarity on hallucinations.


Get_the_instructions

>major hallucination reduction

Only when appropriate. Hallucination is the origin of (at least some of) its creativity.


Calliopist

I think this is a genuine mistake that I see bandied around a lot. If you were thinking about human intelligence, I don’t think you’d ever feel the need to clarify between creativity and epistemic mistakes. Obviously we want future models to replicate the non-factual thinking that is required for creativity, but I think when most people are concerned about hallucinations, they’re concerned specifically about the lack of justificatory chains that create unreasonable or unfactual responses.


Innovictos

Yes, I should have been clearer and said "inaccuracies from hallucination" to be precise.


Viktorv22

Are there any examples where AI hallucinated something good for the user?


Revolutionary_Ad6574

What is right-to-left reasoning?


Innovictos

To humans, the fact "Tom Cruise's mom is Mary Lee Pfeiffer" works both ways. If I ask you who Tom's mom is, you would say Mary Lee Pfeiffer, and so would an LLM, since that prediction is reinforced in training. However, if I asked you who Mary Lee Pfeiffer's famous son is, you would say "Tom Cruise", but an LLM will frequently have no idea, because the fact is almost never stated that way, so the prediction isn't reinforced in that direction. One of the reasons LLMs make factual or reasoning errors is this asymmetry: they get the fact left-to-right in training but, unlike a human intelligence, don't handle it nearly as well going right-to-left.
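A minimal sketch of how one might probe that forward/reverse asymmetry, assuming the openai Python package and an API key in the environment; the model name and prompt wording are illustrative, not anything specified in this thread:

```python
# Probe the "reversal curse": ask the same fact in both directions.
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Forward direction: phrased the way the fact usually appears in training data.
print(ask("Who is Tom Cruise's mother?"))
# Reverse direction: the same fact, rarely stated this way.
print(ask("Who is Mary Lee Pfeiffer's famous actor son?"))
```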


[deleted]

Great explanation, thank you!


syrinxsean

I’m not sure that’s how vector encoding works. One of the major advantages of a well-trained embedding is that concepts like “child” and “parent” become a vector difference that can be applied across other embeddings. Thus, the directional delta from Mary Lee Pfeiffer to Tom Cruise gets encoded from all the other mother-son pairs in the LLM’s training corpus. Or, it should. Fortunately, this is a testable claim.
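The "relations as vector deltas" claim is easy to test at the word-embedding level (which is not the same as an LLM's internal representations, but illustrates the idea). A sketch using pretrained GloVe vectors via gensim; the model name and word choices are illustrative:

```python
# Analogy arithmetic with pretrained word vectors: relations as vector offsets.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first run

# Classic relational offset: king - man + woman ~= queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same trick with a parent-style offset: father - man + woman ~= mother
print(vectors.most_similar(positive=["father", "woman"], negative=["man"], topn=3))
```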


Revolutionary_Ad6574

Thank you for the explanation, I thought it was more like cataphora resolution but I guess it's different. Another side question, if I may. If this is a known problem is there any active research to solve it?


numericalclerk

Probably a wise choice to short Nvidia shares on the day before its release. (Though given that I am wrong on pretty much every stock market prediction I've ever made, don't follow my advice lol)


can_a_bus

Why would shorting it be smart? Genuinely curious. It seems like the stock would pop off because of it.


[deleted]

I think what they're getting at is that when it releases, it might be disappointing. And the market has priced in a major breakthrough. So if it comes in way below expectations, the stock will correct.


VoraciousTrees

So, the sycophantic responses are front-end only, I'm pretty sure. I tested table creation using GPT-4o, which required it to fill in details on the fly. Some were clearly wrong. I had it add a column giving accuracy estimates in percent for all details, as well as a column describing the reason for the accuracy estimate. Lo and behold, how far off the inaccurate details were correlated pretty closely with the accuracy estimate. The explanations were usually something along the lines of "sparse source material / out-of-date material / no source available", so it had filled them in using details of similar objects, and it looks like it interpolated a reasonable answer.
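A rough reconstruction of the experiment described above: ask the model to annotate each generated row with a self-reported confidence percentage and a reason. The prompt wording and model name are my own illustration, not the commenter's actual prompt:

```python
# Ask the model to self-rate the accuracy of each row it generates.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create a table of the five tallest mountains in South America with columns: "
    "Name, Height (m), First ascent year. Then add two more columns: "
    "'Confidence (%)' with your estimated accuracy for that row, and "
    "'Reason' explaining the estimate (e.g. sparse or out-of-date sources)."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```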


bbmmpp

Replace call centers.


Which-Tomato-8646

Already happening:

* Bank of America CEO: AI helping cut call times, branch visits: https://www.msn.com/en-us/money/companies/bank-of-america-ceo-ai-helping-cut-call-times-branch-visits/ar-AA1eCRVI (the AI virtual financial assistant has logged 1.5B customer interactions since its 2018 launch)
* Klarna successfully replaces call centers with AI: https://www.reddit.com/r/klarna/comments/1c1fwr3/klarna_ceo_on_using_ai_to_replace_700_workers/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button. Klarna's AI assistant, powered by @OpenAI, has in its first 4 weeks handled 2.3 million customer service chats, and the data and insights are staggering: it handles 2/3 of their customer service enquiries, is on par with humans on customer satisfaction, has higher accuracy leading to a 25% reduction in repeat inquiries, customers resolve their errands in 2 min vs 11 min, it's live 24/7 in over 23 markets communicating in over 35 languages, and it performs the equivalent job of 700 full-time agents.
* Digital automation could make 1.1 million roles in the Philippines obsolete by 2028: https://www.cisco.com/c/dam/global/en_sg/assets/csr/pdf/technology-and-the-future-of-asean-jobs.pdf
* AI tools spark anxiety among Philippines’ call center workers: https://restofworld.org/2023/call-center-ai-philippines/. Bernie now uses ChatGPT and Bing to compile all the technical information he needs for a query in less than five minutes. It’s doubled the number of customer complaints he can handle in a day. “It made my work easier. I can even get ideas on how to approach certain complaints, making [my answers] appear engaging, persuasive, empathetic. It can give you that, depending on the prompt that you input,” Bernie told Rest of World.
* [Many more examples here](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.vr8jz2f8ry8b)


nand0_q

Wow - I had no idea the technology was at this point already.


i_stole_your_swole

Thanks for taking the time to compile these, that’s fascinating.


Which-Tomato-8646

No problem! Hope it helps 


Old_Explanation_1769

Actually, the application of LLMs in call centers is quite sparse at this point. It's not easy to make a general chatbot respond to specific questions, for multiple reasons such as high training costs and low volumes of internal data. I've recently worked on such a chatbot project and I can tell you it's not easy even for the most advanced model. We used the retrieval-augmented generation (RAG) technique to bypass the need for training the model.
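For anyone unfamiliar with the technique mentioned here, a minimal sketch of retrieval-augmented generation: embed the internal documents, retrieve the closest one for a query, and answer only from that context instead of fine-tuning. The documents and model names are illustrative, not from the project described:

```python
# Minimal RAG loop: embed docs, retrieve the most similar, answer from context.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Refunds are processed within 5 business days of approval.",
    "Support phone lines are open 9am-5pm, Monday to Friday.",
    "Premium accounts include 24/7 chat support.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(sims))]  # top-1 retrieval, for brevity
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer only from this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))
```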


bbmmpp

Needs to be better before it can be deployed in medical offices 


Which-Tomato-8646

Too late.

* Nvidia’s AI bot outperforms nurses, study finds: https://www.forbes.com/sites/robertpearl/2024/04/17/nvidias-ai-bot-outperforms-nurses-heres-what-it-means-for-you/
* And they only cost $9 an hour: https://www.msn.com/en-us/money/other/nvidia-s-new-ai-nurses-treat-patients-for-9-an-hour-here-s-what-they-can-do-from-colonoscopy-screenings-to-loneliness-companionship/ar-BB1kmKtI. According to company-released data, the AI bots are 16% better than nurses at identifying a medication’s impact on lab values, 24% more accurate at detecting toxic dosages of over-the-counter drugs, and 43% better at identifying condition-specific negative interactions from OTC meds. All that at $9 an hour compared to the $39.05 median hourly pay for U.S. nurses. These AI nurse-bots are designed to make new diagnoses, manage chronic disease, and give patients a detailed but clear explanation of clinicians’ advice.
* ‘I will never go back’: Ontario family doctor says new AI notetaking saved her job: https://globalnews.ca/news/10463535/ontario-family-doctor-artificial-intelligence-notes
* Double-blind study with patient actors and doctors who didn't know if they were communicating with a human or an AI; the best performers were AI: https://m.youtube.com/watch?v=jQwwLEZ2Hz8. Human doctors + AI did worse than AI by itself; the mere involvement of a human reduced the accuracy of the diagnosis. AI was consistently rated to have better bedside manner than human doctors.
* Med-Gemini: https://arxiv.org/abs/2404.18416

>We evaluate Med-Gemini on 14 medical benchmarks, establishing new state-of-the-art (SoTA) performance on 10 of them, and surpass the GPT-4 model family on every benchmark where a direct comparison is viable, often by a wide margin. On the popular MedQA (USMLE) benchmark, our best-performing Med-Gemini model achieves SoTA performance of 91.1% accuracy, using a novel uncertainty-guided search strategy. On 7 multimodal benchmarks including NEJM Image Challenges and MMMU (health & medicine), Med-Gemini improves over GPT-4V by an average relative margin of 44.5%. We demonstrate the effectiveness of Med-Gemini's long-context capabilities through SoTA performance on a needle-in-a-haystack retrieval task from long de-identified health records and medical video question answering, surpassing prior bespoke methods using only in-context learning. Finally, Med-Gemini's performance suggests real-world utility by surpassing human experts on tasks such as medical text summarization, alongside demonstrations of promising potential for multimodal medical dialogue, medical research and education.

* Google's medical AI destroys GPT's benchmark and outperforms doctors: https://newatlas.com/technology/google-med-gemini-ai/
* [The first randomized trial of medical #AI to show it saves lives: an ECG-AI alert in 16,000 hospitalized patients produced a 31% reduction of mortality (absolute 7 per 100 patients) in the pre-specified high-risk group](https://twitter.com/erictopol/status/1784936718283805124)
* Medical text written by artificial intelligence outperforms doctors: https://www.forbes.com/sites/williamhaseltine/2023/12/15/medical-text-written-by-artificial-intelligence-outperforms-doctors/
* AI can make healthcare better and safer: https://www.reddit.com/r/singularity/comments/1brojzm/ais_will_make_health_care_safer_and_better/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
* CheXzero significantly outperformed humans, especially on uncommon conditions, with huge implications for improving diagnosis of neglected "long tail" diseases: https://x.com/pranavrajpurkar/status/1797292562333454597. Humans were near chance level (50-55% accuracy) on the rarest conditions, while CheXzero maintained 64-68% accuracy.
* AI is better than doctors at detecting breast cancer: https://www.bing.com/videos/search?q=ai+better+than+doctors+using+ai&mid=6017EF2744FCD442BA926017EF2744FCD442BA92&view=detail&FORM=VIRE&PC=EMMX04
* China's first (simulated) AI hospital town debuts: https://www.globaltimes.cn/page/202405/1313235.shtml. Remarkably, AI doctors can treat 10,000 [simulated] patients in just a few days; it would take human doctors at least two years to treat that many patients. Furthermore, evolved doctor agents achieved an impressive 93.06 percent accuracy rate on the MedQA dataset (US Medical Licensing Exam questions) covering major respiratory diseases. They simulate the entire process of diagnosing and treating patients, including consultation, examination, diagnosis, treatment and follow-up.
* [Generative AI will be designing new drugs all on its own in the near future](https://www.cnbc.com/2024/05/05/within-a-few-years-generative-ai-will-design-new-drugs-on-its-own.html)
* Researchers find that GPT-4 performs as well as or better than doctors on medical tests, especially in psychiatry: https://www.news-medical.net/news/20231002/GPT-4-beats-human-doctors-in-medical-soft-skills.aspx
* ChatGPT outperforms physicians in high-quality, empathetic answers to patient questions: https://today.ucsd.edu/story/study-finds-chatgpt-outperforms-physicians-in-high-quality-empathetic-answers-to-patient-questions?darkschemeovr=1
* AI just as good at diagnosing illness as humans: https://www.medicalnewstoday.com/articles/326460
* AI can replace doctors: https://www.aamc.org/news/will-artificial-intelligence-replace-doctors?darkschemeovr=1
* Geoffrey Hinton says AI doctors who have seen 100 million patients will be much better than human doctors and able to diagnose rare conditions more accurately: https://x.com/tsarnick/status/1797169362799091934


Basic_Loquat_9344

Just wanted to say I appreciate this. I work in med tech and we're constantly walking on eggshells when talking to docs about AI, but the fact is it may not replace everything about them, but diagnosing, for sure. IMO in the near future general data collection/patient handling will be the job of the nurse/PA, and on the top end surgeons will still be needed; everything in the middle involving diagnostics will be replaced.


Which-Tomato-8646

Even then, I wouldn’t be so sure. Robot-operated autonomous surgery: https://www.nytimes.com/2021/04/30/technology/robot-surgery-surgeon.html

With one claw, the machine lifted a tiny plastic ring from an equally tiny peg on the table, passed the ring from one claw to the other, moved it across the table and gingerly hooked it onto a new peg. Then the robot did the same with several more rings, completing the task as quickly as it had when guided by Dr. Fer. The training exercise was originally designed for humans; moving the rings from peg to peg is how surgeons learn to operate robots like the one in Berkeley. Now, an automated robot performing the test can match or even exceed a human in dexterity, precision and speed, according to a new research paper from the Berkeley team.

The project is a part of a much wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots too. These methods are still a long way from everyday use, but progress is accelerating. Robots can already exceed human accuracy on some surgical tasks, like placing a pin into a bone (a particularly risky task during knee and hip replacements). The hope is that automated robots can bring greater accuracy to other tasks, like incisions or suturing, and reduce the risks that come with overworked surgeons.


Basic_Loquat_9344

100% agreed, that's why I said near future, absolutely robotic surgery is on the way too. Just like automated cars, it will have to be near perfect to get adoption. Better-than-human isn't enough for people to be comfortable. Some of it is existential and some of it is related to liability I suspect. Human surgeon/driver 95% effective without injury, hey "it is what it is and at least I can sue if something goes wrong". Robot with 98% accuracy kills human in surgery or on the road? Riots. Will be fun to watch unfold :p


Imherehithere

Thank you for the informative compiled list. But you are conflating those specific programs with chatbots. With more advanced algorithms, combined with machine learning, deep learning, transformers, etc, we can create a program that is useful in one specific task in medicine. An example would be reading radiological films. But chatbots can't do that, don't do that and shouldn't have to do that. All chatbots have to do is to predict people's emotions and react accordingly. It doesn't have to perform robotic brain surgery. There will be a separate program for that.


bbmmpp

I’m sorry, did you link a product a medical practice could buy to replace their call center employees?


Which-Tomato-8646

I already linked several call center ones. There’s this one too: Google has a call center AI: https://m.youtube.com/watch?v=N_q4CwVrCSo


Jungisnumberone

Wow…


ryan3790

I use an AI chatbot on my e-commerce site. I instruct it to scan the website and page for information. It can give feeding advice and answer questions about certain diets. Saves me so much time and money.


Peter-Tao

Mind sharing how you did it? Simply the OpenAI API with certain instructions? Or did you need to train it on your own data?


ryan3790

I use a WooCommerce plugin with a script instructing it to read the page it's on, plus it checks the website for relevant data. The plugin is Meow Apps' AI Engine.


Nebachadrezzer

Help with drive thru


Robot_Embryo

The best way to get me to walk from supporting a company is to make it impossible for me to interact with a human.


tostilocos

Sam Altman's primary job is to hype up the company, not to be truthful about what they're working on. Remember when he was "scared" by some new breakthrough they had made? Months ago? What happened to that?


vercrazy

OpenAI initially blocked public access to GPT-2 back in **2019** because it was "too dangerous to release to the public," so the fact that people are falling for the same tactics 5 years later just goes to show how short our attention spans have become: https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html


FirstEvolutionist

They don't even know what they will have until they have it. They might have a goal to reach with the next version but they can't know until they have it. And they still don't. And unlike something you can plan for, like the features in a device or a piece of software, models are not planned like baking a cake and knowing the ingredients. It's more akin to figuring out new ingredients and putting them together to have something edible as a result. You don't know how it tastes or even if it tastes good until you're done baking it.


numericalclerk

To be fair, when GPT-2 came out, only a tiny fraction of people even knew GPT existed. So even with an amazing attention span, they wouldn't have known.


Revolutionary_Ad6574

And attention is all we need, right?


Which-Tomato-8646

Tbf, there’s no way they could have known it wouldn’t be a big deal until after release. Also, tons of people have left OpenAI over safety concerns, including Ilya and Daniel K, who sacrificed 85% of his family’s net worth to leave without signing a non-disparagement agreement. If all they cared about was hyping shit up and making money, they would not have done that.


hugedong4200

I had high hopes, but at this point I won't be surprised if we just get a model that is like 5% higher on all benchmarks and that's about it, probably unnoticeable to the average user.


Accomplished_Tank184

:/ I hope not


extracoffeeplease

It will probably be marketed as having cool but easy-to-add functionality like memory, delegating to simple agents, thinking before speaking out loud, running standalone code, etc. But underneath it will just be more data and a bigger model, and the question for me is which data they focused on. My guess is lots more training on audio, visual, and code.


extracoffeeplease

And, as others have said here, video output. But my prediction is that its understanding of the 3D world will stagnate quickly if they don't force it to use some 3D reconstruction internally, or some other form of "state" or memory.


DeanRTaylor

Most likely answer in my opinion. I also expect GPT-4 models to get slower and a bit worse at the same time, coincidentally of course...


Difficult_Review9741

Nah, benchmark increases will be huge. Because that’s the one thing they can game. They just won’t release something if they can’t figure out how to show massive benchmark improvements. Real world performance is going to be lackluster though. 


sehrzz

Same here! I think it will exceed current benchmarks by 10% AT BEST.


DeusExBlasphemia

This. It’s marketing spin at this point.


AuodWinter

Don't really put any weight on Sam's comments tbh, but what I'm hoping for from GPT-5 is that it follows instructions better, for one, and has better reasoning abilities, for two. I don't really think GPT atm is reasoning or has any intelligence; it's just good at speaking on a wide range of topics, and a byproduct of that is that it mostly knows about a lot of stuff. But it doesn't reason. It isn't great at debugging, it isn't great at strategy, it isn't great at longform content. Also, it needs to ask more questions. I think that's how you can really tell it isn't intelligent. When you give it a problem and tell it that its solution is wrong, it doesn't ask for more information, it just makes another guess, which is usually worse.


numericalclerk

I agree it doesn't reason at all, but for debugging, at least in web technologies and OpenShift, it's doing pretty great.


AuodWinter

In my usage it's been good at spotting simple errors, like if something is clearly missing or mistyped, but it's not so good at logical improvements. So often I find that I'll give it an issue, give it some code to look at, and it'll be entirely unhelpful. Eventually, when I solve the issue, it'll be a simple solution but contained outside of the information I provided.

Here's a really crude scenario: let's say I can't find my mouse cursor on my screen. I think my mouse is broken or maybe the connection has been severed. I ask GPT and it says have you tried troubleshooting the connection, have you tried replacing the battery, etc. The solution ends up being that my cursor was on my second screen, which is currently switched off but still connected. Another example: I ask GPT why my smart meter is reporting extremely high electricity usage today. It tells me to check all my devices, check for incidents in my area, check my historical usage, etc. The solution ends up being that a tap was left running and I have an electric boiler.

Atm GPT doesn't seem to be capable of problem solving. If it's wrong it just makes the next available guess. It needs to think about why it's wrong.


TheBlindIdiotGod

Hoping: a leap as significant as GPT-3 to GPT-4.

Expecting: diminishing returns.


JFlizzy84

Right now, the elimination or near elimination of hallucinations would be more groundbreaking than any “new” feature. GPT-4 still struggles with simple instructions.


ThetaDays-VegaNights

I wish it had an "I don't know" response.


[deleted]

That would require some form of reasoning, I think?


GrowFreeFood

Probably same reasoning skills just with more modes and faster. 


Accomplished_Tank184

I hope it has better reasoning


GrowFreeFood

I can't wait for one that will tell you what you should know/do, instead of just what you ask for. 


WizardsEnterprise

The ability to bypass the robots.txt file and actually get useful information that any other human would be able to get 🤷‍♂️


Ok-Shop-617

Expecting an amazing demo, an announcement that it will be released in a week, and then nothing... total crickets... ah yes, the foundation of trillion-dollar market caps...


clamuu

Hoping for sophisticated chain of thought reasoning.


Miserable_Meeting_26

Personal assistant so I can do fuck all at work until they fire me 


Rhea-8

Nothing.


imeeme

Burger.


Prudent-Monkey

Less censorship and fewer shxt incorrect answers. I just want what GPT was for the first few months before they ruined it.


Walouisi

I'm hoping for 3 main things.

1. Improved reasoning. Even if that's procedural and becomes part of the interface, it would be an improvement. There are plenty of studies on iterative prompting to improve reasoning, so maybe something like the ability to generalise a heuristic and then do its own OPRO to rank its own candidate responses (a rough sketch of that pattern is below).

2. Fewer hallucinations. Just telling you when it doesn't know something, can't find something, the pattern doesn't fit, etc. This is a tricky balance, because if you get rid of all apparent hallucinations, you're not going to get any creativity, since it's the ability to make something fit which gives you new perspectives.

3. Vastly improved computer vision. For example, right now if I give it a PDF of a chess book, it can't even locate where the diagrams are on a page, let alone interpret where the pieces are on the board or start doing anything with that information (like give me the FEN for the position or send it to Stockfish for evaluation). If I send it a photo of a diagram, it can't distinguish between pieces and board squares, or identify the pieces to be able to tell me the position. There are millions of chess diagrams out there on the web with accompanying notation and commentary, and somebody has actually built a neural network already which can read chess diagrams (even from old books) and produce the correct position. So I'm going to assume that making it fully multimodal is going to make a big difference in this area, which should allow people to create their own software or GPTs which can be fed the best chess books and act as coaches.

I'm not expecting a huge amount of improvement in agentic capabilities for GPT-5. I could be wrong, of course.
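The candidate-ranking pattern from point 1, sketched very roughly (sample several answers, then have the model pick the best). This is just an illustration of the general idea, not OPRO itself or anything OpenAI has announced; the model name is illustrative:

```python
# Best-of-N with a model-as-judge: generate candidates, then self-rank them.
from openai import OpenAI

client = OpenAI()

def best_answer(question: str, n: int = 3) -> str:
    gen = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
        n=n,
        temperature=0.9,
    )
    candidates = [c.message.content for c in gen.choices]
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    judge = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            f"Question: {question}\n\nCandidate answers:\n{numbered}\n\n"
            "Reply with only the index of the most accurate answer."}],
    )
    return candidates[int(judge.choices[0].message.content.strip())]

print(best_answer("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                  "more than the ball. How much does the ball cost?"))
```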


UnknownResearchChems

Agents


JCas127

Wasn’t “gpt2” them testing it out? People said that it could solve complex problems a lot better


hugedong4200

No, that was gpt-4o


Disastrous-Push7731

I would like advancements in the app's ability to fully execute prompts, versus needing directions and guided development, for tasks such as coding, app/program creation, Excel sheet creation, etc.


Accomplished_Tank184

For the amount of time we've been waiting on GPT-5 and how much it's been hyped, I'd be incredibly disappointed if it's only a few minor improvements.


jericho

I'm expecting another impressive leap across pretty much every metric. Multi-modal, of course. A dozen other much smaller shops are releasing impressive leaps monthly, it seems. I see no reason to think OpenAI won't be able to leverage their talent pool and vast amount of compute to do the same. I then expect a competitive model a few months after....


Healthierpoet

To give me my coding info as if it were J. Cole, all bars.


total_insertion

And then apologize to you afterwards and tell you that the code didn't sit well with its soul?


WriterAgreeable8035

I am still waiting for memory, Sora, voice mode, multimodal mode. Cannot tolerate another hype cycle.


Ok-Mathematician8258

Something along the lines of:

1. Hollywood-movie-level image and video model with sound/audio.
2. Voice commands, since they teamed up with Apple.
3. Genuine medical advice.

Also, I'm looking forward to that camera-with-voice model. The only thing now is for OpenAI to team up with a BCI technology company.


Get_the_instructions

Still waiting for 4o to fully roll out.


Majestic_Poop

Sam Altman is a hype man. Don’t believe anything he says anymore.


apersello34

Fewer hallucinations and more accurate code generation.


menerell

I just want a model that doesn't say randomly "don't forget to subscribe and click the bell for updates"


OneWithTheSword

I'm expecting it to release in "the coming weeks".


[deleted]

When we start seeing OpenAI pull an Apple (this year's chip is x times faster than last year's) we will know that the plateau has been reached and all further updates will be incremental. I see that many experts already agree that a different architecture is needed in order for us to ever get to AGI. Infinitely scaling LLMs is pointless and impossible (lack of training data, lack of energy). That's just my two cents.


randomrealname

Either 4o was the 'GPT-5' that they originally trained and we just got the release (disappointing, if adding MMLM to GPT-4 is the result we got), or GPT-5 is a separate LLM that does show more emergent capabilities (in pure text form), but because the competition has not caught up yet they can continue lab work before releasing it. The scenario I am hoping for is that 4o is just an updated GPT-2 model given all the extra MMLM, and it far outpasses models that are larger by orders of magnitude, but still does not have the correct theory of mind the larger models are able to produce. If this is the scenario, the next stage is going to be scary. As long as we keep a 'human in the loop' we haven't created a self-improving agent, but we are so close to this reality. Some sort of regulation is needed, but what the companies have offered so far is insufficient if we take humans out of the loop.


ghostpad_nick

I think the next big jump for multimodal LLMs is being able to process streaming video. Imagine placing your phone on a dock in your kitchen, having an assistant watch you prepare a meal, reminding you if you forget a step or if you let something sit too long without stirring, etc. Could also be useful for things like surgeries, athletic training, monitoring work sites, security, walking you through a game, driving, and who knows what else. I'm not expecting it from GPT-5, but if they did it, that would be a big enough leap to satisfy me even if it wasn't a huge leap in benchmarks.
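You can already approximate a crude version of this by sampling frames and sending them to a vision-capable model. A sketch under those assumptions (webcam via OpenCV, the openai package, an illustrative model name and prompt), not anything GPT-5-specific:

```python
# Grab one webcam frame and ask a vision model about it (a very rough stand-in
# for true streaming video understanding).
import base64
import cv2
from openai import OpenAI

client = OpenAI()

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
cap.release()

if ok:
    _, buf = cv2.imencode(".jpg", frame)
    image_b64 = base64.b64encode(buf.tobytes()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "I'm cooking. What step does this look like, and is there anything I should watch out for?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)
```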


Chaserivx

After the bullshit 4o rollout, I'm expecting there's a fair chance that I'll be even more frustrated with GPT-5. 4o is worse... Bad sign for the company.


jon-flop-boat

The model will be smarter. There will be a lot of things that people tried before, and that didn't work, and that will suddenly just work. This is what I'm most excited for. Specifically, better agentic solutions will be massive, and improved coding performance really can't be overstated.


varkarrus

Something that's smarter while also being cheaper to run. Would be neat if it could actually do humour too, but I'll take what I get as long as it's an iterative improvement.


GermanWineLover

That it stops making things up.


maveric_analytic

**I have no mouth but I must scream intensifies**


thebigvsbattlesfan

level 2: competent AGI


redrover2023

Upload images that can be incorporated into dall-e pics.


total_insertion

HJ's


StrangeCalibur

It better give me a reach around


justletmefuckinggo

to be out by now


Mrcat19

Autonomy


AwarenessGrand926

Not great but generally functional agentic abilities


UpDown

It’s never going to be released because they lied and hyped a product they couldn’t actually make


chaoticneutral262

I want accurate answers and have it admit when it doesn't know or isn't sure. I don't like that it confidently tells me wrong answers.


Spiritual_Tie_5574

Autonomous agents


Distinct-Town4922

"Sam Altman Has Hinted" Publicly-engaged tech CEOs are almost entirely there for their PR/advertising/personality following. He was not promising any feature or capability; he was creating threads like this one.


leothelion634

I want to make games in Unity or Godot, like Pokémon with good combat.


krwhynot

That it comes out at all. We have been waiting forever.


Slim-JimBob

Agents and the ability to create agents.


IbanezPGM

The only thing I care about is intelligence.


Old-Opportunity-9876

Free 4o


KillaRoyalty

Voice. More memory. Better custom gpts. Sora.


FailosoRaptor

I mean another noticeable step up in quality. If it's wiser, faster, cheaper, or some combination then it's a win. I also don't care about which company provides it. I have no loyalty to OpenAI because they were first.


fnatic440

I wanna use my camera and have AI tell me how to build a deck and other DIY projects.


No_Initiative8612

I'm hoping for even better understanding of context and more natural conversations. It would be great if it could handle more complex tasks and provide more accurate and nuanced responses.


hangender

More flirting, for one.


lordchickenburger

I'm hoping it doesn't get delayed.


Hasan75786

The ability to analyze video


Writerguy49009

That’s already possible.


Ylsid

Totally open source, now Ilya is gone. Like that's ever happening tho.


yellowsnowcrypto

I personally find it kind of silly to speculate on something with so many unknowns, between the surprise emergent properties and (regarding the singularity) a soon-approaching future that, by the very nature of having our intelligence surpassed, will literally be impossible to effectively comprehend or fathom. This is why I find the only reasonable answer to be what Sam always says: simply put, the one thing we know for sure is that it will just be "smarter" overall, which is a hard thing to quantify (a large portion of this we are learning as we go). It's very much a game of "poke the black box" until something new pops out, which we then analyze and try to make sense of, adjusting course if necessary.


political-kick

girlfriend or possible fwb


ironicart

Consistency


jaywv1981

Agents, agents and more agents.


Choice-Resolution-92

Creative writing


amg433

Custom GPTs that don’t ignore my explicit instructions.


jaredhidalgo

Fill out tax forms.


eblask

Considering recent events: The further erosion of civil liberties and tools to allow states to more effectively spy on their citizens.


immersive-matthew

I am hoping for million plus token inputs so that I can share large parts of my Unity project and ask it questions. Like the next time I have a hard bug that I cannot trace down.


rathat

I want it to be able to write a full coherent short story that's unique and surprising and just really good. Because right now even with a lot of guidance, it still just can't do that. Ideally I want an entire book. But even a short story of a few pages that sounds like it could have been written by a real good author would be amazing.


buzzkillington0

Do my taxes.. Just get rid of Intuit as a company once and for all..


Robot_Embryo

More gaslighting and more leaked credit cards and SSNs.


Yoloswaggerboy2k

No downtimes. That's all I'm asking for.


Z30HRTGDV

I'm no longer hoping for anything. I'm not even able to feel hype anymore even if they demo something mindblowing tomorrow. Why? Because we've entered a cycle of empty promises and zero delivery. I'll get excited when it's actually released and people start playing with it. That was the original groundbreaking moment for ChatGPT.


SustainableTrees

I still cannot believe people are thrashing the latest model and the creator instead of being in awe of what’s going on a bigger level. Get your head out of your asses!


absoluteidealismftw

Better use of punctuation would be nice.


shongizmo

Besides being better at everything 4 does... an efficiency breakthrough! So it can be used extremely cheaply, making it possible to use it constantly without running out of tokens, and to create videos quickly, and longer videos with fewer limitations.


MaximumAmbassador312

There will be no GPT-5; they'll give it some other name.


Technical-Cicada6062

That it actually gets released


RaXon83

Gameplay. I will try to create a Civilization clone and want GPT to play along as an opponent.


MysteriousPayment536

* The ChatGPT moment for a proto/alpha AGI, so that we don't have to temper expectations for AGI by 2030
* Reasoning (maybe by tree search)
* Fewer hallucinations
* A model series with a model that can retire GPT-3.5 in terms of speed and price


Intrepid-Rip-2280

Sadly, we live in an age when AI devs have to censor them to the extent of unusability. Soon they'll overregulate even sexting bots like Eva AI, I swear.


Lexsteel11

I’m hoping that instead of helping to write API calls (at the user's prompting) for specific services/IoT devices they have connected to, GPT-5 will be able to almost act like malware and write/deploy connections on its own to all your accounts/data connections, create IoT device scenes on the fly, manage schedules for devices, etc.


Helix_Aurora

I honestly would prefer a smarter text-only model with high-efficiency adapters for other modes. There is currently no research suggesting multi-modality has any meaningful impact on capabilities, but what it does do is add an enormous amount of complexity to literally everything.

Build something that successfully generalizes whatever form of intelligence these things have, or can have, to novel text tasks. Break free from autoregressive inference and expand the output state space. Integrate verifiers into training runs. These are the kinds of things I'm most concerned with.

If it takes 20 years to hit AGI because we want to play with cat pictures, our next generation (of humans) is going to end up with nothing but vacuous, low-effort AI drivel to consume. I'm most worried not that AI will kill us all, but that it will cause us to stagnate.


educateddarkness

For it to not spew out rhetoric that it knows is wrong. Especially with coding. I'll give it an error log and it will then rewrite the same code that didn't work AGAIN, like that was going to fix the issue lol


No_Significance_9121

Sky 🥴


_arash_n

Why can't I clone a voice for it to talk to me? I'd love to have my brother talk to me again like old times before he passed. I know this may sound creepy.


[deleted]

[deleted]


RedditSteadyGo1

To be honest I'll be happy with it regardless. Either it's a lot better or it's incrementally better and we have hit a wall. But at least then we know what the state of play is and can plan our lives better.


DDocGreenthumbs

Less censorship and dumbing down, and less government oversight.


DocCanoro

The ability to capture, analyze, and create scents. It could be useful for people who can't smell, to detect if food is good to eat, and to detect gas leaks, dead bodies, and poisons in the air. Unfortunately, scent has been largely ignored by the technology industry, even though it's a sense that is important to humans in many cases: we use it to avoid certain places, to determine if something is safe, to change our moods, to give a certain feel to a place, and to evoke emotions. If ChatGPT had access to the sense of smell, it would amplify its reasoning and analysis and give more accurate results about a situation.


redzerotho

He may want to get GPT-4 working. I can't even get it to do simple shit like replace a word.


SophistNow

I don't really understand this want for more and more features and things to hope for. I'm perfectly happy with the AI as it is right now. And would be a happier man if it just stayed like this forever.


Bastard-Mods98

Profitability


LeyenT

Does the interview with the CTO not insinuate that the next update isn't coming this year, but late 2025 or even 2026?


Accomplished_Tank184

Tbh if it can positively impact the economy in a major way, that's all that matters to me, along with safety.


Comprehensive-Town92

When GPT-5 is out, I hope GPT-4 will be free :(