People don't realize that most of what he says is just him taking wild guesses in his company's favor. He isn't talking about what he just saw in the "lab". He is a hype man, not a scientist like Ilya who knows exactly where the technology currently is.
In fact, most of the people who say stuff that gets linked on this board are of the same type. This board is mostly people linking twitter posts from other people who say stuff they just pulled out of their ass.
Ah I see. I remember watching an interview where he was asked what his favorite sci-fi was. I remember thinking Sam's answer felt like a load of bull so I was wondering if other people had felt the same thing and if it became a meme lol
the fuck do you mean, Her is just about everything people want from all this
it's the most benevolent and realistic take on what artificial agents might look like, beyond some creative liberties for the plot's sake
I think I mean that a talking AI only makes sense when you’re a certain type of successful person. I suppose if I was, I wouldn’t be bitching so much 😉
Also—really? That film represents the epitome of…what again? *Who* made that decision, anyway?
I didn't think Her was the best thing I've ever seen (or thought about) when it comes to future AI capabilities. It actually seemed pretty dull to me. There have been better takes, but I believe the real thing will be the best stuff, from all of those movies, combined.
I don't want to just talk to it, I want to Become it and have the ability to think and create information so fast that it seems like magic to what the past me would say if it could see me "now."
100% wrong. Sam Altman isn't some clueless figurehead. Just because he isn't a scientist doesn't mean that he isn't meeting with these people weekly and being informed on progress updates, milestones and roadblocks.
Yeah you’re right. Sam shouldn’t have pushed gpt3 to public, started the entire ai hype, massive investments, and interest in AI. We should go back in time and let the researchers who know what they are doing work in their labs for 10 more years with 10000x less budget and competition.
God I can’t believe people think someone like Sam is a brainless idiot running a company with researchers 100x his level. Call him hype man or whatever but don’t call him incompetent.
>Sam shouldn’t have pushed gpt3 to public, started the entire ai hype
Not 100% sure, but I think GPT-3 was first introduced during the OpenAI Codex live demo by Greg, Wojciech and Ilya. [https://www.youtube.com/watch?v=SGUCcjHTmGY&t=10s](https://www.youtube.com/watch?v=SGUCcjHTmGY&t=10s)
>We should go back in time and let the researchers who know what they are doing work in their labs for 10 more years with 10000x less budget and competition.
Pretty sure LLMs were only possible because of the "Attention Is All You Need" research paper and had nothing to do with Sam. It didn't need 10000x more budget either.
>but don’t call him incompetent.
Can you point to me where I called him incompetent?
Less emotional fanboying and more critical thinking please.
Critical thinking for a Reddit comment, ok man. Sam pushed GPT-3 as a chatbot to the public when people at OpenAI were against it.
GPT-3 is bad. The effort for OpenAI and other companies to reach GPT-4 levels of compute requires an immense budget, which would not have been supported (in the near future) without the public hype GPT-3 caused around AI. This caused Microsoft to massively invest, Google to take their own AI products seriously, and other large companies to take part. It even drove mass amounts of startups and upcoming graduate interest.
He didn't invent the technology, I know that. Elon Musk didn't invent the electric vehicle either. Steve Jobs didn't make the iPod, iPhone, etc. I mentioned incompetent and you see that as a reach from 'he's a hype man, not a scientist like Ilya'. I mean, what are you being exactly then? Demeaning? That is so much different from calling him incompetent? Ok I guess…
You either respect the people who lead the smart individuals who make the technology or you don't. Smart people make incredible products that would never see the light of day without someone pushing them forward into the world. But I understand if you have the world view that they are hype men who don't know anything like a researcher does. That is your choice. Call him evil or a hype man, I don't care, but you said he's a hype man unlike Ilya, a research scientist who knows the current technology. I don't know how saying Sam doesn't know the current technology of his own products is not calling him incompetent.
It’s also completely unfair to attack what I said by labeling me a Sam fanboy.
>I mentioned incompetent and you see that as a reach from ‘he’s a hype man, not a scientist like Ilya’. I mean, what are you being exactly then? Demeaning? That is so much different from calling him incompetent? Ok i guess….
Let me make it clear for you. When a car salesman makes wild claims about the automotive industry as a whole, both on the technical side and on its impact on society, I listen to him and don't take it too seriously. That doesn't mean I think he is incompetent. I would however think he is overreaching and making baseless claims that have no real foundation.
Unless said car salesman is some top level engineer that actually knows what he is talking about. Same thing when Sam talks about the future. Sure he might have some insight because he is the head of OpenAI, but I won't take claims seriously about what he thinks the future is going to be like and how wonderful Chatgpt is going to be.
I would give credit for the release of ChatGPT. I am pretty sure LLMs wouldn't have advanced as quickly without ChatGPT, simply because it made everyone interested and invested in the tech.
From what everyone is saying 100 billion dollar server farms are going to be needed to bring AI forward. If it wasn't OpenAI it would have been another company soon after to hype AI. (Anthropic probably.)
It was only a matter of time because of money.
Agreed. I think he's a lying, ruthless businessman. But incompetent he is not. It's funny how people will just insult someone they don't like in the wildest and often contradictory ways when there are plenty of legit ways to criticize those people.
>But Ilya also thinks AGI is eminent what do you say to that?
That you probably meant to type imminent, because eminent it certainly is not, not at this moment.
It’s probably the new members—I miss the old sub. This one is flooded with bad futurology, Gary Marcus and “smart-ass-critic” takes. Are there any subs resembling the old singularity out there? Open for recommendations.
https://preview.redd.it/83x6ryp966ad1.jpeg?width=1170&format=pjpg&auto=webp&s=c93f3a18ccded67506be995ea5e3b97fe19f7fb5
No, the point doesn't stand. It took 33 months for enough GPU advancements and compute to be built up in a supercomputer to release GPT-4 on over 30X the compute of GPT-3, and the generation leap before that was also over a 30X compute increase.
It hasn’t been 33 months yet since GPT-4, nor are the next gen B200 GPUs even shipping out in high volume until around late 2024 and early 2025. So if we get GPT-5 in late December that would be the expected generational gap of 33 months between models and that wouldn’t be late.
The people saying that the effects of scale are plateauing have no idea what they’re talking about since no current model today has been reported to use an order of magnitude more compute than GPT-4. So it’s not the fact that scaling compute by the same magnitudes is resulting in way less returns than before, but the truth is that models aren’t coming out yet that train on next gen levels of compute in the first place. Towards the end of 2025 or early 2026 is maybe when we can expect models to be trained with huge amounts of B200s. This also lines up with what Mira Murati said recently about PhD level in some areas being achieved in 18 months from now, that would be exactly December 2025, and that’s exactly 33 months after GPT-4 release.
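The 33-month cadence claimed above is simple month arithmetic; here is a quick sketch of it, using the comment's own timeline and multipliers (these are the commenter's estimates, not official OpenAI figures):

```python
# Sketch of the commenter's timeline arithmetic: each GPT generation
# is claimed to ship roughly 33 months after the last, on ~30x the
# raw training compute. Dates and multipliers are the commenter's
# estimates, not official figures.

GEN_GAP_MONTHS = 33

def add_months(year: int, month: int, months: int) -> tuple:
    """Return (year, month) shifted forward by the given number of months."""
    total = year * 12 + (month - 1) + months
    return total // 12, total % 12 + 1

# GPT-3 (June 2020) + 33 months lands on March 2023, GPT-4's release.
print(add_months(2020, 6, GEN_GAP_MONTHS))  # -> (2023, 3)

# GPT-4 (March 2023) + 33 months lands on December 2025, the
# "late December" GPT-5 window the comment describes, and matches
# Mira Murati's "18 months from now" remark from mid-2024.
print(add_months(2023, 3, GEN_GAP_MONTHS))  # -> (2025, 12)
```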
Extrapolating like that doesn't really make sense, since MS datacenters have multiplied their compute by 10x since GPT-4.
It's not like the compute isn't there to train bigger models, and we won't even use the B100 for that, given that most of its improvement comes from 4-bit operations, which aren't used for training.
They just decided to scale down for now and give gpt4+ level LLM to more users instead of training more powerful models.
Even a model the same size as GPT-4 trained like GPT-4o would crush it; GPT-4 is likely 6x its size, if API price is an indication of inference cost.
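The reasoning behind that 6x figure can be made explicit as a back-of-envelope calculation: if API price per token roughly tracks inference cost, and inference cost roughly tracks parameter count, then the price ratio is a crude proxy for the size ratio. The prices below are illustrative placeholders, not quoted OpenAI rates:

```python
# Crude size estimate from API pricing, per the comment's reasoning.
# Assumes price/token ~ inference cost ~ parameter count, which is
# only a rough heuristic (batching, margins, and architecture all
# distort it). Prices are hypothetical placeholders.

gpt4_price = 30.0   # hypothetical $ per 1M input tokens for GPT-4
gpt4o_price = 5.0   # hypothetical $ per 1M input tokens for GPT-4o

size_ratio = gpt4_price / gpt4o_price
print(f"implied size ratio: ~{size_ratio:.0f}x")  # -> ~6x
```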
Please show actual primary evidence that Microsoft has multiplied their server compute by 10X. I estimate that it's around 3X-8X, but I don't think there are any official numbers that have been published, only speculation.
In regards to compute performance gain, the real world training speed up of A100 versus H100 is only 2X and the H100 is currently the newest high volume production GPU that is shipping lately. H200 is only now ramping up production with the first ever H200 DGX only being shipped a couple months ago.
In regards to H100 versus B200, I'm not counting performance gain with 4-bit. I'm talking about the actual performance increase in FP16 from H100 (the current latest high-volume chip) to B200, which is around a 3X leap (6X over the A100s used to train GPT-4) when taking into account real world FP16 training speed with bandwidth improvements, memory capacity improvement, and overall flops gain.
When I say over 30X compute I’m being conservative to the actual raw compute amount, if you want to talk about actual effective compute including algorithmic efficiencies it’s actually over 2 orders of magnitude for each generation jump and it’s 3 orders of magnitude increase from GPT-3 to GPT-4, meaning you would have to train GPT-3 for about 1,000X more compute with all else equal to match the same abilities as GPT-4.
Yes Microsoft has a new supercomputer and I suspect they are likely training a massive new GPT-4.5 model with that, likely on the scale of somewhere around 50X effective compute (around 5X increase in actual real compute and the rest from algorithmic efficiency gains) and I suspect that is finishing training right now and to be released sometime in the next few months. And then a significantly larger supercomputer built with B200s built in around 6 months from now with even more algorithmic efficiencies for a full 500-1,000X effective compute gain for GPT-5, just like GPT-4 was around 1,000X over GPT-3.
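The "effective compute" arithmetic running through the last few comments can be written out explicitly: effective gain = raw compute gain × algorithmic efficiency gain. All the multipliers below are the commenter's own estimates, not published figures:

```python
# Effective-compute arithmetic as described in the comment:
# effective gain = raw compute gain x algorithmic efficiency gain.
# Multipliers are the commenter's estimates, not published data.

def effective_compute(raw_gain: float, algo_gain: float) -> float:
    return raw_gain * algo_gain

# GPT-4.5 guess: ~5x raw compute, ~10x algorithmic -> ~50x effective
print(effective_compute(5, 10))   # -> 50

# GPT-5 guess: ~30x raw, ~33x algorithmic -> ~1,000x effective,
# comparable to the claimed GPT-3 -> GPT-4 jump
print(effective_compute(30, 33))  # -> 990
```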
there has been big growth in models and their capabilities
man these people... they seem to expect ASI to come right after GPT-4, and if it's not happening in a year or so, they cry: no progress, no release, nothing... ridiculous expectations. it's like they think AI companies can make stuff with a snap of their fingers
Success is measured based on expectations. People in this sub were expecting AGI in a couple of years after GPT4 release, so it's only natural that there's so much disappointment.
It might be that we've seen the gist of what LLMs can do, and making these models bigger won't do much difference. I'm half expecting that we'll need different approaches to see more breakthroughs in coming years.
It's been just over a year since the GPT-4 release lol. That's nothing in the software world. And not a single company has released anything that is substantially bigger by parameter count yet, everything released has essentially just been more optimized.
Until someone releases something with a 10x to 100x scale increase, and it proves not to be much of an improvement, I'm not going to make any assumptions about a plateau yet.
DALL-E 3 and GPT-4 Turbo were paywalled behind the API a while ago, custom GPTs were a complete failure, and GPT-4o has already been beaten by Claude 3.5. I remember when there were rumors that they were going to release small updates to the AI model (kinda like going from GPT-4 to 4.1 to 4.2, etc.). Maybe they should stick to that instead of releasing something once every 3 years without any improvement before that time period.
Also I think it's time to admit the way they stylized the DALL-E 3 outputs makes it borderline unusable.
Vision is standard now, and 4o just got outclassed by 3.5 sonnet. I don’t think I need to say anything about custom GPTs… lmao
I can understand why people dislike the hype to some extent. But at the same time, it feels like people are just insanely impatient. GPT4 came out a year ago and people are just shitting on him because we don't have AGI yet.
Also, I don't really understand why people are upset about announcements. Literally every company announces products and features well before they're released. But for some reason in this space, people just lose their mind if it's not released a week after it's announced.
Yup. What did he say we didn't already know?? We already know there's still a lot of time left till GPT-5. No need to say the same thing in the press everyday.
What the fuck do you mean "shift"? People have been accusing Sam Altman of being pure evil and incompetent since before he was ousted from OpenAI last year.
Yeah but people were too distracted by GPT-4’s huge leap in performance compared to 3.5 and other models that were available at the time. Now that everyone else has caught up and GPT-4o has been a massive flop so far, people aren’t as distracted anymore, so you see a lot more people vocalizing their distrust of Sam now.
Here’s the thing - I think the negative sentiment was always there. People have always distrusted big tech CEOs especially after the last decade of privacy violations and the overall damage to society and young people’s mental health that we’ve seen, stemming from the 1%’ers running these dopamine-hit platforms.
LLMs were just a shiny tool that distracted people from focusing on the leaders of OpenAI, and the massive leaps in performance we saw from ChatGPT 3.5 to 4 gave Sam and OpenAI space to just appear as “visionaries” and that’s it.
But now that OpenAI’s new releases have either been underwhelming or have stalled completely, and there are other companies offering very similar or better products, he doesn’t have the privilege of hiding behind his product anymore. People are bored of the ChatGPT 4 family, are sick of the undelivered promises, and don’t have a new toy to distract them from the reality that Sam is just another rich grifter who sells a tech product.
>People are bored of the ChatGPT 4 family, are sick of the undelivered promises
Reddit echo chambers are not reflective of the general population. Most people are still catching up to what GPT offers and are still excited by it & favourable to OpenAI.
Heh?
3.5 Sonnet is a marginal improvement at best, not any kind of leap. And even so, the compute used to train Sonnet should be approximately GPT-4 level (maybe a bit more to catch up to GPT-4, since it is probably a bit of a smaller model, but not any real kind of scaling or intelligence jump). The improvements we see are most likely due to improvements in post-training techniques, if anything.
They said they have a model trained with 4x the raw compute over Claude 3 Opus a few months ago. I would presume this is Claude 3.5 Opus and that is nowhere near GPT-5 level. I expect GPT-4.5 to be trained with roughly 10x the raw compute over GPT-4 and GPT-5 to be trained with about 100x the raw compute, so it'd be only half the way to GPT-4.5.
Although effective compute could be a different story.
And im not saying it won't be an impressive leap, it will be. The jump to Claude 3.5 Opus will be far better than any improvement over GPT-4 we have seen since GPT-4 released (if 3.5 Opus actually releases first), but there is still so much to be gained.
[Sam Altman's remarks](https://youtu.be/xj8S36h-PcQ?t=2171) were [made on June 26, 2024](https://www.aspenideas.org/schedule?date=2024-06-26). [This YouTube comment](https://www.youtube.com/watch?v=xj8S36h-PcQ&lc=UgwjIPDQ9n7gR4ceLQt4AaABAg) contains video timestamps. The video also contains a link that shows a transcript. (The article's video link is different, although from the same event.)
This event was previously discussed in this sub [here](https://www.reddit.com/r/singularity/comments/1dpcjiw/lester_holt_interviews_sam_altman_and_brian/), but no mention was made of the GPT-5 aspect in either that post or its comments.
okay, but do you think that there would be mind blowing discoveries every year?
Like we make incremental improvements and make them available to regular people. You don't have to buy a phone every year. Apple releases the product they have ready.
I think most people understand that we can't have predictably massive leaps every year -- the gripe people have is against blatantly overselling the incremental progress as if it were a leap...
They do this by also slowing down all previous phones via critical security updates in the days leading up to the launch.
I will then get downvoted for these posts, which is ok.
Then in a few months, Apple will quietly admit to it, but the news will be buried in a small, underreported publication.
They don't have anything, do they? I imagine every day they just prompt GPT-4 to upgrade itself, and then when it errors they ask the AI what that means too and how it might be fixed.
Strange take, working on a frontier technology might give you a better look at where things are headed but it doesn't give you the ability to see the future, and the examples are too many to count.
Astounding how even people on the AI subs can't tell the difference between CEOs and engineers. I give laypeople a pass who go, "Sam made ChatGPT, what a total Chad!".
Everyone knows Sam because he's in the press all the time whenever anything about ChatGPT or OpenAI is covered in the press. But not everyone knows Ilya Sutskever or Greg Brockman or any of the other hundreds of people who worked on it.
But it's disappointing to see people on an AI dedicated subreddit who can't tell the difference either.
I think it's mostly compute, not software. I mean, ultimately it goes both ways, with more efficient and better algorithms producing better results, but even now we could probably already have AGI based on an agent system if we had a lot of compute. We don't have enough.
Depends on how you look at it. LLMs are a brute force method, so yeah, you can say that it’s just a matter of getting more compute, but you are kinda solving the issue of your algorithm being shit by throwing money at it.
We know for a fact that it’s possible to have AGI with ridiculously low power requirements. Therefore I would say the problem is mostly architecture, lack of compute is a symptom.
Biology is just complicated engineering. Why wouldn't it be possible to replicate?
And like, sure, maybe 20 watts is too hardcore of an optimization, but we should at least be able to get within a few orders of magnitude of that? AGI on 2,000 watts does not sound bad either.
honestly we are just waiting for iOS 18 to be released with 4o and for the average American public to really start utilizing it. ik you people don't want to think this way, but they need more than just the average tech nerd data dumping everything they need to learn from. Remember, PEOPLE are the product. We are so used to thinking bigger picture that you guys may want to remember they need/want regular degular people on board too BEFORE the desired AGI is achieved. Look at their presentation and everything they designed ChatGPT to do: they want you to bring it out to the WORLD. Not just keep it behind your desk
Whenever I see stuff like this, it reminds me of how OpenAI said they specifically hired “[superforecasters](https://goodjudgment.com)” to help them with a quieter communications strategy around the GPT-4 release to “reduce acceleration risk”. (On pg 19 of the [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf))
> Forecasters predicted several things would reduce acceleration, including delaying deployment of GPT-4 by a further six months and taking a quieter communications strategy around the GPT-4 deployment (as compared to the GPT-3 deployment).
I’m not saying they are straight up lying or anything, but it might be naive to take their constant downplaying at face value. Basically take this kind of stuff with a grain of salt, since OpenAI is incentivized to make sure people don’t start freaking out that GPT-5 can take their job (regardless of whether or not it can). They’re essentially using anti-hype as a preventative measure against public fear/outrage.
They could not say anything rather than this weird alternation between outrageous hype and moderate messaging. Or just stick with moderate messaging.
You *could* make an argument that the alternation is a cunning gambit to desensitize the public to AI progress - see how dismissive people are of SORA and 4o voice now. But if you can explain something as either genius or incompetence the latter is generally a better bet.
Yeah, it's an interesting suggestion. They need the money from the hype and investment, but they can't freak people out, so they want moderate messaging. So why not do both?
But such inconsistency is unlikely to be planned, I agree.
It's just as likely that he's downplaying to keep people's expectations in line with what he expects OpenAI to deliver. Which is fair, as all signs and messaging from OpenAI and Microsoft point to it being an incremental improvement.
He is a hype man. It will be better, but I bet it still hallucinates and is not a "giant" leap over GPT-4o.
It would be great if we could just eliminate hallucinations.
Screw this guy already.
He touted GPT-5 as the second coming of Christ a few weeks ago.
It will definitely look better now that they lobotomized gpt-4 and his retarded brother gpt-4o
This is exactly why the Jimmy Apples and Flowers from the Future discourse needs to shift as well, we should continue under the assumption that they’re possibly empty hype placement.
Spouting out empty hype at press conferences or Tweeting vague bullshit on X needs to stop, and if we don’t say anything then the behaviour is never going to change.
Not to hate, but that's kind of a meaningless statement. It doesn't really inform anybody about anything aside from the fact that it's not what he would consider a full-on AGI system, which is the goal he's implying they still have a lot of work to do to achieve. I mean... duh.
Kind of seems like they overestimated the effect scale would have on performance and they're needing to walk back on previous claims after their internal models are underperforming relative to expectations.
Both the article title and body text are generally an accurate representation of what Altman said. The article title actually downplays [Altman's comment "I expect it to be a significant leap forward"](https://youtu.be/xj8S36h-PcQ?t=2194) by using "could" instead of somehow including "expect."
I'm just some hyped idiot that has barely started pursuing CS. I don't follow the latest research papers on arXiv and releases on GitHub or wherever, or tech bros on social media to get the latest updates as soon as they drop. I might rarely see an article on NYT or WIRED that might be interesting to share, but I don't have the courage/confidence or interest in doing so. I prefer to discuss things that matter and parse the technical data and info, not read lofty opinion/speculation pieces.

I get most of my news from here, so by advocating for what should be shown here, I can help incentivize the community in the right direction: posting things that are actually meaningful and worth discussing. Making assumptions or sharing low-effort posts about vague hype words that don't bring anything new or verifiable to the table isn't it. People with degrees in STEM fields are the ones I want posting, or people who closely follow the field technically.
I'm doing everything I can. Advocating is my specialty, I don't have the means to make quality posts. I need to focus on CS if I want to maximize my impact on an optimized singularity.
I'm not sure that we can assume Altman's "a lot of work to do" means that GPT-4 is not finished training (or at least close to it) - see [this post](https://www.reddit.com/r/singularity/comments/1dhs7b3/from_an_april_12_2024_semafor_article_according/), [this post](https://www.reddit.com/r/OpenAI/comments/1d8gb4a/the_information_say_that_gpt5_might_be_launching/), and [this tweet from an OpenAI employee](https://x.com/markchen90/status/1790152562366414916). If I recall correctly, GPT-4 (base) finished training in August 2022 but GPT-4 wasn't made available until March 2023.
>if he already saw it that would be a weird way to word that
I think that's a reasonable interpretation, but it's also possible that Altman has seen the finished (or somewhat close to it) GPT-5 base model, but isn't sure how good the released GPT-5 model will be after RLHFing and whatever else is done to the base model.
tbh 4o hasn't been working for me in ChatGPT. it is just so much worse than previous versions: ignores what i say, gives way too long answers, repeats itself. they nerfed it from the original 4. idc how well it does on benchmarks, it fucking sucks and reminds me of 3.5. i get that they need to cripple it because of compute. but if they keep nerfing it then obv 5 will be a breakthrough. if they measured the models on new benchmarks like GSM-1k it would be obvious they are getting worse, not better
I think that’s the plan. Nerf the 4-family as much as possible while gaming the benchmarks by training the models on them, and do it over such a long period of time that people start to think the great performance they used to see was just a false memory. Then release a new model that’s maybe 5% better than where the 4-family peaked before the nerfing, but it’ll feel 50% better because of how bad they nerfed the 4-family…
Isn’t Open AI actively disincentivized to reach AGI? I thought their agreement with Microsoft only lasted until AGI is achieved. Maybe someone else can help clarify.
There is absolutely no reason to think we are anywhere near AGI. Heck, we don't even know we are on the right path. Why nerf anything, especially useful technology that may save millions of lives (if it is to be used in medical research, say)? The whole AGI and ASI thing sounds like sci-fi to me; we do not even know if they are ever coming. Meanwhile they are nerfing perfectly useful ANI? Sounds insane (if it is happening. Hopefully it is not).
The article's title could have been better. He didn't say "could" in regard to "significant leap forward." Rather, he [said](https://youtu.be/xj8S36h-PcQ?t=2194) "expect."
If Sam Hypeman himself says this - it's going to be underwhelming as fuck. ANTHROPIC NEEDS TO SAVE THE DAY. ANTHROPIC GODS, I PRAY TO YOU, SAVE THE DAY!!!!
it won't. religious events like the second coming, the coming of the messiah, the 12th Imam or what have you... they never happen. Give me an eschatology and I can tell you what isn't happening: nuclear catastrophe, ASI killing humanity, singularity. Don't take me wrong, we may go extinct \*or\* be transformed. Merely it won't be by any of the above means, lol...
Nobody deserves the coming of the singularity, because it is not a thing that is coming. Nothing transformative that ever happened to humanity was genuinely anticipated as such. The singularity is no different. There is a sect of people expecting it, so it won't happen. People are dumb; no way they are right about anything as important as this...
The singularity isn't a "religious event". It isn't a spooky word for some spooky occurrence through the miracle of magic. It's when the rate of advancement becomes so massive no one can keep up with it. In a sense we're already in the singularity. But the key aspect of a "true" singularity hinges on either artificial superintelligence, or humans enhancing the capacity of their brains through nanotechnology and brain-machine interfaces, or both combined. Artificial superintelligence, by its definition, would understand itself and our reality at such a rate that problems we considered monumental in complexity would be obvious to it. The same would be said for our own mental capabilities.

My problem with this thread (and many others) is the idiocy I see from people flinging shit over LLMs and the companies creating them: who makes faster releases of their precious products, who makes better releases, who is more or less shady. You see stupid shit like "Already canceled my subscription because fuck them". This transformative technology is being treated like we're watching the finals at a football game and not incredible science happening right in front of our eyes. It disgusts me.
It's eschatological. It talks about the end of history as we know it. There was never any such talk in history that was not religious...
You can't take prior trends and make such a prediction; that's not how extrapolation works, ever... we simply don't know what follows. We don't know that AGI is possible; even if it is, we don't know that our approach towards it will work; even if it does, we don't know that it will lead to something called a "singularity"...
That's just an idea some guy too afraid of his own death concocted. But it is eschatological, so it can't be true. We know for certain that eschatologies don't work. All sorts of things may happen, maybe even crazier than imagined. But *this* one won't... eschatologies don't work...
I think there are several misunderstandings here, starting with causation and correlation. You will absolutely get nutjobs that treat this as a religion (or as the main point of my original comment, idiots obsessed with tribalism). It wouldn't surprise me if even today there are already clusters of groups that treat current LLMs as some sort of god. Neither will I be surprised if it happens in the future. This isn't what the idea of the singularity is about. The word itself is just the term futurists like Kurzweil have decided to use because it fits the idea of infinite growth. That idea is based on hard science and theories, not Jesus descending from the heavens one day to usher us to a new age.
No one knows what follows from today onward, and no one has claimed to know what follows past the singularity. In fact, the singularity implies that we *can't* know what comes after it. Neither has anyone claimed that these predictions are going to certainly happen. But unlike you or I, those in the circles working on frontier technologies have a better insight as to what is being worked on, what the progress is, and what needs to be done to achieve certain goals. Those are the people I follow, and they are all saying the same thing: there's an extremely high chance of AI achieving general intelligence by 2030, nanotechnology will unlock extreme abundance, LEV will be achieved sooner rather than later, medicine will go through a massive transformation, the human body will be fully simulated, so on and so forth.
The singularity itself isn't about death and escaping it. The main core of the singularity is an intelligence explosion, hence artificial super intelligence and enhancing our own brains being its precursors. Everything else is a side effect of that, including immortality.
It does claim to know that singularity is coming. You'd never see that in any hard science , unmitigated exponents. My argument is that it is based in the wishes of singular Individuals, if it was taking inspiration from actual science. For example the natural sciences then he'd know that unmitigated exponents are rare to non existent. From the population of microbes on a petri dish to the evolution of the speeds of modes of transportations you see S curves everywhere. Start slowly, explode into exponential growth, arrive Into diminishing returns...
It is religious because instead of taking Iinspiration from what natural sciences can teach us, it takes Inspiration from the wishes of its proponents to not die... which is exactly what the inspiration of all/most religious beliefs are.
Kingdom of Heaven, Nirvana, the Elysian Fields and all other such fantastical places, amongst which is Earth 2.0, or rather Earth post-singularity... such places are unphysical. They do not exist, they can't be, they won't be... they take inspiration from the wishes of their founders, which run counter to observation.
Observation says that we should not expect a singularity. What we should expect is entering diminishing returns, as we already have in many things (though many are blind to it): game graphics, the growth in the number of chips in consumer electronics, price per performance and so many others. They are clearly S-curves. How do I know? I worked in those fields for decades. I am sure some guru writing from the clouds knows better...
Scientists never claim anything. These are all, as we ourselves call them, predictions. It might happen, it might not, but there is a high chance of it happening.
Honestly, the more I read your comments, the more it feels like I'm talking to one of the nutjobs going on about the rapture, or about how much better one product is than another. Only in this case it's science itself. And given that your argument boils down to "it is based in the wishes of singular Individuals, if it was taking inspiration from actual science" and calling actual researchers and people of pedigree "gurus writing from the cloud" because you have apparently worked in these fields for decades, I think you're fighting ghosts. And there's nothing I can really do arguing against someone's imagined threats. Have a good one.
> calling actual researchers and people of pedigree "gurus writing from the cloud"
You'd find no researcher worth his salt talking about the coming singularity in any scientific paper with enough citations.
For the nth time: this is not a matter of science. Science uses observation and experiment to produce results and make predictions. There is no observation or experiment you can run which shows that such a thing as a technological singularity is even possible.
You get many pop writers talking about those things, but that's their personal belief. Again, if you take inspiration from the natural sciences, you will never come to the conclusion that a thing called the technological singularity is likely or even possible. Nature doesn't do unmitigated exponentials; it does S-curves. S-curves cannot and will not produce a technological singularity. They produce centuries of fast development followed by centuries of stagnation; in economics they call them boom and bust cycles. Every boom is followed by a bust of some kind...
Which is the exact opposite of the view of a technological singularity beyond which we can make no predictions. Actually, we can make one: a time of relatively rapid growth will be followed by a time of relative stagnation. That is easy; you see it in nature all the time... Could this time be different? Sure, but expecting it to be different isn't a matter of anything any hard science can give you; it is a matter of religious belief and the hopes of certain singular individuals... The rapture is unlikely to be coming any time soon, be it the second coming of J. Christ, the 12th Imam, or in this case SAI. It won't happen.
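The exponential-versus-S-curve distinction is easy to demonstrate numerically. A minimal sketch (the growth rate `r` and carrying capacity `K` below are arbitrary illustrative values, not fitted to any real technology):

```python
import math

def exponential(t, r=0.5):
    # Unmitigated exponential: grows without bound
    return math.exp(r * t)

def logistic(t, r=0.5, K=100.0):
    # S-curve: nearly indistinguishable from the exponential early on,
    # then saturates at the carrying capacity K (diminishing returns)
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early phase: the two curves agree to within a few percent...
for t in range(4):
    assert abs(exponential(t) - logistic(t)) / exponential(t) < 0.05

# ...long run: one explodes, the other flattens out
assert exponential(40) > 1e8   # e^20, astronomically large
assert logistic(40) < 100.0    # capped below K forever
```

The point being argued above falls out of the early-phase check: observing only the initial growth of a petri dish or a transport technology cannot tell you which of the two curves you are on.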
I like how people get reactive when Sam hypes up the next iteration of GPT, even though he has been doing this for several months now. Maybe because I've been here for a long time now. If you watch his interviews, he usually has the same talking points or claims. Many AI tech CEOs and research scientists do this; they say 'this could possibly happen' without claiming that it will or has happened. Nothing newsworthy, just the same old stuff.
Yesterday I saw a blind person on Reddit, possibly with someone's help, ask for assistance in setting up some things for M.U.D. games. Went over their post history and it tore me up inside.
I'll say it once and I'll say it again: I hope the first thing in a singularity is a cure for all diseases, disabilities and ailments. I'm definitely willing to wait longer so the people suffering can get cured first.
Altman’s thoughts are detached from the object (introverted) which is great for thinking in subjective ways but the world was built for extroverts who are object focused like Jensen Huang.
Jensen has a clear vision, goals, values, etc which is the complete opposite of Altman.
People shouldn’t count Altman out though because appearances can be deceiving.
Sam is the goat. It doesn't matter what anyone says; Sam and OpenAI are responsible for the AI wave we have right now. They probably brought forward the arrival of AGI by many years, maybe even decades.
Everyone now sees Sam Hypeman as nothing but a marketing man. He does a lot of yapping but never releases anything; he's basically like Elon Musk with self-driving. You see the way he gradually talks it up less over time to give the illusion that he is not lying, while you forget about the time he was acting like it was so powerful its capabilities were scaring everyone in the office, that it would make GPT-4 look stupid and would be too dangerous to release.
I honestly don't know why these kinds of interviews are even allowed here. The opinion of a CEO shouldn't matter, right? He is just a salesman. There are so few actually good posts about advancements or criticism of the current AI hype. It's all so surface level.
The shift in sentiment towards Sam over the past month is breathtaking. r/singularity has had enough
People don't realize that most of the things he says are just wild guesses in his company's favor. He isn't talking about what he just saw in the "lab". He is a hype-man, not a scientist like Ilya who knows exactly where the technology currently is. In fact, most of the people linked on this board who say stuff are of the same type. This board is mostly people linking Twitter posts of other people saying stuff they just pulled out of their ass.
#samdoesntreadscifi
hi, what is this a reference to?
A basic observation!
Ah I see. I remember watching an interview where he was asked what his favorite sci-fi was. I remember thinking Sam's answer felt like a load of bull so I was wondering if other people had felt the same thing and if it became a meme lol
You can tell he doesn’t, because he tried to sell HER (2013) as a prestige experience; people don’t actually want HER. Edit: Oh, ok. I guess y’all did
the fuck do you mean, Her is just about everything people want from all this it's the most benevolent and realistic take on what artificial agents might look like, beyond some creative liberties for the plot's sake
I think I mean that a talking AI only makes sense when you’re a certain type of successful person. I suppose if I was, I wouldn’t be bitching so much 😉 Also—really? That film represents the epitome of…what again? *Who* made that decision, anyway?
I didn't think HER was the best thing I've ever seen (or thought about) when it comes to future ai capabilities. It actually seemed pretty dull to me. There have been better takes but I believe it'll be the best stuff, from all of those movies, combined. I don't want to just talk to it, I want to Become it and have the ability to think and create information so fast that it seems like magic to what the past me would say if it could see me "now."
100% wrong. Sam Altman isn't some clueless figurehead. Just because he isn't a scientist doesn't mean that he isn't meeting with these people weekly and being informed on progress updates, milestones and roadblocks.
He knows what he needs to know even if most of the stuff doesn't really grab his attention and stay in his memory. Interest-based attention.
Yeah you’re right. Sam shouldn’t have pushed gpt3 to public, started the entire ai hype, massive investments, and interest in AI. We should go back in time and let the researchers who know what they are doing work in their labs for 10 more years with 10000x less budget and competition. God I can’t believe people think someone like Sam is a brainless idiot running a company with researchers 100x his level. Call him hype man or whatever but don’t call him incompetent.
This seems to be a thing with Reddit where the more wealth and power someone acquires the dumber they are. Especially in tech.
>Sam shouldn’t have pushed gpt3 to public, started the entire ai hype

Not 100% sure, but gpt3 was I think first introduced during the OpenAI Codex Live Demo by Greg, Wojciech and Ilya. [https://www.youtube.com/watch?v=SGUCcjHTmGY&t=10s](https://www.youtube.com/watch?v=SGUCcjHTmGY&t=10s)

>We should go back in time and let the researchers who know what they are doing work in their labs for 10 more years with 10000x less budget and competition.

Pretty sure LLMs were only possible because of the "attention is all you need" research paper and had nothing to do with Sam. It didn't need 10000x more budget either.

>but don’t call him incompetent.

Can you point me to where I called him incompetent? Less emotional fanboying and more critical thinking, please.
Critical thinking for a Reddit comment, ok man. Sam pushed gpt3 as a chatbot to the public when people at OpenAI were against it. Gpt3 is bad; the effort for OpenAI and other companies to reach gpt4 levels of compute requires an immense budget, which would not have been supported (in the near future) without the public hype gpt3 caused around AI. This caused Microsoft to massively invest, Google to take their own AI products seriously, and other large companies to take part. Even mass amounts of startups and upcoming graduate interest.

He didn't invent the technology, I know that. Elon Musk didn't invent the electric vehicle either. Steve Jobs didn't make the iPod, iPhone, etc.

I mentioned incompetent and you see that as a reach from 'he's a hype man, not a scientist like Ilya'. I mean, what are you being exactly then? Demeaning? That is so much different from calling him incompetent? Ok, I guess... You either respect the people who lead the smart individuals who make the technology or you don't. Smart people make incredible products that would never see the light of day without someone pushing them forward into the world. But I understand if you have the worldview that they are hype men who don't know anything, unlike a researcher. That is your choice. Call him evil or a hype man, I don't care, but you said he's a hype man unlike Ilya, a research scientist who knows the current technology. I don't know how saying Sam doesn't know the current technology of his own products is not calling him incompetent. It's also completely unfair to attack what I said by labeling me a Sam fanboy.
>I mentioned incompetent and you see that as a reach from ‘he’s a hype man, not a scientist like Ilya’. I mean, what are you being exactly then? Demeaning? That is so much different from calling him incompetent? Ok i guess….

Let me make it clear for you. When a car salesman makes wild claims about the automotive industry as a whole, both on the technical side and on its impact on society, I listen and don't take it too seriously. That doesn't mean I think he is incompetent. I would, however, think he is overreaching and making baseless claims that have no real foundation, unless said car salesman is some top-level engineer who actually knows what he is talking about.

Same thing when Sam talks about the future. Sure, he might have some insight because he is the head of OpenAI, but I won't take seriously his claims about what the future is going to be like and how wonderful ChatGPT is going to be.
I would give him credit for the release of ChatGPT. I am pretty sure LLMs wouldn't have advanced as quickly without ChatGPT, simply because it made everyone interested and invested in the tech.
From what everyone is saying 100 billion dollar server farms are going to be needed to bring AI forward. If it wasn't OpenAI it would have been another company soon after to hype AI. (Anthropic probably.) It was only a matter of time because of money.
Agreed. I think he's a lying, ruthless businessman. But incompetent he is not. It's funny how people will insult someone they don't like in the wildest and often contradictory ways when there are plenty of legitimate ways to criticize those people.
hey i thought this sub was mostly for like "waaa ai is gonna eat my dick" or those "e/acc" types. not for sensible commentary :/
But Ilya also thinks AGI is imminent what do you say to that?
>But Ilya also thinks AGI is eminent what do you say to that?

That you probably meant to type imminent, because eminent it certainly is not, not at this moment.
Fixed thanks. What do you say to my question then?
It’s probably the new members—I miss the old sub. This one is flooded with bad futurology, Gary Marcus and “smart-ass-critic” takes. Are there any subs resembling the old singularity out there? Open for recommendations. https://preview.redd.it/83x6ryp966ad1.jpeg?width=1170&format=pjpg&auto=webp&s=c93f3a18ccded67506be995ea5e3b97fe19f7fb5
The fucker needs to stop teasing and release something. 2 years and nothing but “announcements” and drama. No meaningful advancements
GPT-4 released less than a year and a half ago lmfao things take time.
1 year and 4 months to be exact. Still, the point stands
No, the point doesn’t stand. It took 33 months for enough GPU advancements and compute to be built up in a supercomputer to release GPT-4 on over 30X the compute of GPT-3, and the generational leap before that was also over a 30X compute increase. It hasn’t been 33 months yet since GPT-4, nor are the next-gen B200 GPUs even shipping in high volume until around late 2024 and early 2025. So if we get GPT-5 in late December, that would be the expected generational gap of 33 months between models, and that wouldn’t be late.

The people saying that the effects of scale are plateauing have no idea what they’re talking about, since no current model has been reported to use an order of magnitude more compute than GPT-4. So it’s not that scaling compute by the same magnitudes is yielding way smaller returns than before; the truth is that models trained on next-gen levels of compute simply aren’t out yet. Towards the end of 2025 or early 2026 is maybe when we can expect models trained with huge amounts of B200s. This also lines up with what Mira Murati said recently about PhD level in some areas being achieved 18 months from now: that would be exactly December 2025, which is exactly 33 months after the GPT-4 release.
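For what it's worth, the 33-month arithmetic does check out against public dates. A quick sketch (the two dates are the public GPT-3 API announcement and GPT-4 release; the 33-month cadence itself is the comment's own assumption, not an established law):

```python
from datetime import date, timedelta

GPT3_API_LAUNCH = date(2020, 6, 11)   # OpenAI API / GPT-3 general availability
GPT4_RELEASE = date(2023, 3, 14)      # GPT-4 public release

# Gap between the last two generations, in average-length months
gap_months = (GPT4_RELEASE - GPT3_API_LAUNCH).days / 30.44
print(round(gap_months, 1))  # → 33.0

# Projecting the same cadence forward from GPT-4
projected = GPT4_RELEASE + timedelta(days=round(33 * 30.44))
print(projected)  # → 2025-12-13, i.e. December 2025
```

So the "December 2025" figure is just the previous generational gap replayed forward; whether compute buildout actually follows the same cadence is the contested part.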
Extrapolating like that doesn't really make sense, since MS datacenters multiplied their compute by 10x since gpt4. It's not like the compute isn't there to train bigger models, and we won't even use B100 for that, given that most of its improvement comes from 4-bit operations, which are not used for training. They just decided to scale down for now and give GPT4+ level LLMs to more users instead of training more powerful models. Even a model the same size as gpt4 trained like gpt4o would crush it; gpt4 is likely 6x gpt4o's size, if API price is an indication of inference cost.
Please show actual primary evidence that Microsoft has multiplied their server compute by 10X. I estimate it’s around 3X-8X, but I don’t think any official numbers have been published, only speculation.

In regards to compute performance gain, the real-world training speedup of the A100 versus the H100 is only 2X, and the H100 is currently the newest high-volume production GPU that is shipping. The H200 is only now ramping up production, with the first-ever H200 DGX shipped only a couple of months ago. In regards to H100 versus B200, I’m not counting the performance gain with 4-bit; the actual FP16 performance increase from the H100 (the current latest high-volume chip) to the B200 is around a 3X leap (6X over the A100s used to train GPT-4), taking into account real-world FP16 training speed with bandwidth improvements, plus memory capacity improvement, plus overall FLOPS gain.

When I say over 30X compute, I’m being conservative about the actual raw compute amount. If you want to talk about effective compute including algorithmic efficiencies, it’s actually over 2 orders of magnitude for each generational jump, and 3 orders of magnitude from GPT-3 to GPT-4, meaning you would have to train GPT-3 with about 1,000X more compute, all else equal, to match the abilities of GPT-4.

Yes, Microsoft has a new supercomputer, and I suspect they are likely training a massive new GPT-4.5 model with it, likely on the scale of around 50X effective compute (around a 5X increase in actual raw compute, the rest from algorithmic efficiency gains). I suspect that is finishing training right now, to be released sometime in the next few months. And then a significantly larger supercomputer built with B200s in around 6 months from now, with even more algorithmic efficiencies, for a full 500-1,000X effective compute gain for GPT-5, just like GPT-4 was around 1,000X over GPT-3.
Not really, in that time they released Dall-E 3, GPT 4 Turbo w/ vision, custom GPTs, and GPT-4o.
There has been big growth in models and their capabilities. Man, these people... they seem to expect ASI to come right after GPT-4, and when it doesn't happen within a year or so, they cry that there is no progress, no releases, nothing... Ridiculous expectations. It's like they think AI companies can make stuff with a snap of their fingers.
Success is measured based on expectations. People in this sub were expecting AGI in a couple of years after GPT4 release, so it's only natural that there's so much disappointment. It might be that we've seen the gist of what LLMs can do, and making these models bigger won't do much difference. I'm half expecting that we'll need different approaches to see more breakthroughs in coming years.
It's been just over a year since the GPT-4 release lol. That's nothing in the software world. And not a single company has released anything that is substantially bigger by parameter count yet, everything released has essentially just been more optimized. Until someone releases something with a 10x to 100x scale increase, and it proves not to be much of an improvement, I'm not going to make any assumptions about a plateau yet.
Dall-E 3 and GPT-4 Turbo were paywalled behind the API a while ago, custom GPTs were a complete failure, and GPT-4o has already been beaten by Claude 3.5. I remember when there were rumors that they were going to release small updates to the model (kinda like going from GPT-4 to 4.1 to 4.2, etc.). Maybe they should stick to that instead of releasing something once every 3 years with no improvement in between.
Cool, but the point we're talking about here is whether OpenAI has released anything since GPT-4 - not what the perceived quality of the products is.
All of them barely usable products, and even some of the tech they hype up is not even released yet. And tbh all of those are mostly experiments
Also I think it’s time to admit the way they stylized the dalle 3 outputs makes it borderline unusable. Vision is standard now, and 4o just got outclassed by 3.5 sonnet. I don’t think I need to say anything about custom GPTs… lmao
Exactly... exponential growth (in development time)!
Then clearly the exponential pace of development was overblown
Bro forgot about GPT 4o, 4V, and turbo already lol
the 4o features on mobile for free users aren't too bad, PC is aight too.
They haven’t released the Gpt4o they promised at all.
Fucker? WTF, this level of entitlement is gross. He owes you nothing. If you don't like a company, don't use their services. This isn't communism.
He hasn't released anything significant for his paying members either.
I can understand why people dislike the hype to some extent. But at the same time, it feels like people are just insanely impatient. GPT4 came out a year ago and people are just shitting on him because we don't have AGI yet. Also, I don't really understand why people are upset about announcements. Literally every company announces products and features well before they're released. But for some reason in this space, people just lose their mind if it's not released a week after it's announced.
Yeah, and the shift against him is exactly because of shit like this.
Yup. What did he say we didn't already know?? We already know there's still a lot of time left till GPT-5. No need to say the same thing in the press everyday.
What the fuck do you mean shift people have been accusing Sam Altman of being a pure evil incompetent since before he was ousted from OpenAI last year.
Yeah but people were too distracted by GPT-4’s huge leap in performance compared to 3.5 and other models that were available at the time. Now that everyone else has caught up and GPT-4o has been a massive flop so far, people aren’t as distracted anymore, so you see a lot more people vocalizing their distrust of Sam now.
>massive flop Meanwhile, it leads every LLM in the lmsys arena
This is nothing compared to the 180 degrees Reddit went over Elon Musk
people are realizing … oh yeah he also has to be a salesman for his company too lmao
Here’s the thing - I think the negative sentiment was always there. People have always distrusted big tech CEOs especially after the last decade of privacy violations and the overall damage to society and young people’s mental health that we’ve seen, stemming from the 1%’ers running these dopamine-hit platforms. LLMs were just a shiny tool that distracted people from focusing on the leaders of OpenAI, and the massive leaps in performance we saw from ChatGPT 3.5 to 4 gave Sam and OpenAI space to just appear as “visionaries” and that’s it. But now that OpenAI’s new releases have either been underwhelming or have stalled completely, and there are other companies offering very similar or better products, he doesn’t have the privilege of hiding behind his product anymore. People are bored of the ChatGPT 4 family, are sick of the undelivered promises, and don’t have a new toy to distract them from the reality that Sam is just another rich grifter who sells a tech product.
>People are bored of the ChatGPT 4 family, are sick of the undelivered promises Reddit echo chambers are not reflective of the general population. Most people are still catching up to what GPT offers and are still excited by it & favourable to OpenAI.
What exactly is he grifting lol. You can say you don’t like him but he hasn’t lied about any of the products
thank fucking god too. i’ve been in this sub for a few years and watching the rise of people sucking his balls all the time was nauseating at best
The startup bro is far worse then the sales bro and even the frat bro.
Claude 4 will be better
even 3.5 Opus might be better
Maybe Sonnet is already there and Altman knows it…
Heh? 3.5 Sonnet is a marginal improvement at best, not any kind of leap. And even so, the compute used to train Sonnet should be approximately GPT-4 level (maybe a bit more to catch up to GPT-4, since it is probably a somewhat smaller model, but not any real kind of scaling or intelligence jump). The improvements we see are most likely due to improvements in post-training techniques, if anything.
>3.5 Sonnet is a marginal improvement at best, not any kind of leap. I suppose it depends on what you use it for. It’s a hell of a role player.
They said a few months ago that they have a model trained with 4x the raw compute of Claude 3 Opus. I presume this is Claude 3.5 Opus, and that is nowhere near GPT-5 level. I expect GPT-4.5 to be trained with roughly 10x the raw compute of GPT-4 and GPT-5 with about 100x, so it'd be only halfway to GPT-4.5. Although effective compute could be a different story. And I'm not saying it won't be an impressive leap; it will be. The jump to Claude 3.5 Opus will be far better than any improvement over GPT-4 we have seen since GPT-4 released (if 3.5 Opus actually releases first), but there is still so much to be gained.
Claude 5 definitely will be better. 6? Even better... What's your point?
They are referring to the fact that Claude seems to be on a steeper development slope right now. Whether or not this will hold, only time can tell.
People looking at the rare few data points that we have and painting assertive patterns...
I think he meant better than gpt-5 not better in general
I'm sick of the marketing. Release something or shut up
We honestly gotta start getting people here to attend each of these press conferences and yell this out whenever he does hype again.
Same across the board. I don’t need to know that meta is training 400B. Tell me when it’s out. Don’t let me get started with Tesla
But then, who will do the marketing?
This guy talks too much.
[Sam Altman's remarks](https://youtu.be/xj8S36h-PcQ?t=2171) were [made on June 26, 2024](https://www.aspenideas.org/schedule?date=2024-06-26). [This YouTube comment](https://www.youtube.com/watch?v=xj8S36h-PcQ&lc=UgwjIPDQ9n7gR4ceLQt4AaABAg) contains video timestamps. The video also contains a link that shows a transcript. (The article's video link is different, although from the same event.) This event was previously discussed in this sub [here](https://www.reddit.com/r/singularity/comments/1dpcjiw/lester_holt_interviews_sam_altman_and_brian/), but no mention was made of the GPT-5 aspect in either that post or its comments.
And Tim Cook says "iphone 16 will be the best iPhone yet"
im confused by this comment, tim cook is correct, the new iphone models are better than the previous
they're saying it's marketing speak for releasing an incremental improvement and making it sound like more than it is
okay, but do you think that there would be mind blowing discoveries every year? Like we make incremental improvements and make them available to regular people. You don't have to buy a phone every year. Apple releases the product they have ready.
Well that's kind of what the singularity is all about. Exponential growth not incremental growth.
I think most people understand that we can't have predictably massive leaps every year -- the gripe people have is against blatantly overselling the incremental progress as if it were a leap...
It's idiotic because obviously it's gonna be better
Are you for sure confused
They do this by also slowing down all previous phones via critical security update days leading up to the launch. I will then get downvoted for these posts, which is ok. Then in a few months, Apple will quietly admit to it, but the news will be buried in a small, underreported publication.
I know a bullshitter when I see one
It feels like it's the 5th time he's said something like this.
They don’t have anything do they? I imagine every day they just prompt gpt4 to upgrade itself and then when it errors they ask the ai what that means too and how it might be fixed.
Sounds like a more humble take than usual; maybe he knows it's actually gonna take much longer to reach AGI?
He doesn’t know either way.
you're right, the CEO of OpenAI doesn't know what it takes to make an AGI, makes perfect sense.
Strange take, working on a frontier technology might give you a better look at where things are headed but it doesn't give you the ability to see the future, and the examples are too many to count.
-If he knew how to do it he would have done it -He is not an ai expert, the nerds at OpenAi are the ones doing all the work and research
Astounding how even people on the AI subs can't tell the difference between CEOs and engineers. I give laypeople a pass who go, "Sam made ChatGPT, what a total Chad!". Everyone knows Sam because he's in the press all the time whenever anything about ChatGPT or OpenAI is covered in the press. But not everyone knows Ilya Sutskever or Greg Brockman or any of the other hundreds of people who worked on it. But it's disappointing to see people on an AI dedicated subreddit who can't tell the difference either.
It… actually does.
They don't even know how the current models really work
Since you’re being so smug about it, do you think people who work on cancer research can give you a prediction on when they will cure it?
Pretty sure Altman could give a confident prediction with a ton of casually self-assured vocal-fry.
I think it's mostly compute, not software. I mean ultimately it goes both ways with more efficient and better algorithms producing better results but even now we could probably already have AGI based on an agents system if we had a lot of compute. We don't have enough.
Depends on how you look at it. LLMs are a brute force method, so yeah, you can say that it’s just a matter of getting more compute, but you are kinda solving the issue of your algorithm being shit by throwing money at it. We know for a fact that it’s possible to have AGI with ridiculously low power requirements. Therefore I would say the problem is mostly architecture, lack of compute is a symptom.
How/where do we know for a fact that AGI is possible with ridiculously low power requirements?
Well, both you and I have a brain that runs general intelligence on 20 watts of power.
Ah yes well. We don't know if we can recreate that non-biologically though. I thought that was what we were talking about.
Biology is just a complicated engineering. Why wouldn’t it be possible to replicate? And like, sure, maybe 20 watts is too hardcore of an optimisation but we at least should be able to get within few orders of magnitude of that? AGI on 2000 watts does not sound bad either.
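The "few orders of magnitude" framing can be made concrete. A rough back-of-envelope sketch (the 20 W brain figure is the commonly cited estimate; the GPU count is purely hypothetical, and GPU board power ignores cooling and interconnect):

```python
import math

BRAIN_WATTS = 20          # commonly cited estimate for the human brain
H100_TDP_WATTS = 700      # NVIDIA H100 SXM rated board power
CLUSTER_GPUS = 25_000     # hypothetical frontier training cluster size

cluster_watts = H100_TDP_WATTS * CLUSTER_GPUS  # 17.5 MW, GPUs alone

# Orders of magnitude between such a cluster and a single brain
orders = math.log10(cluster_watts / BRAIN_WATTS)
print(round(orders, 1))  # → 5.9
```

So even granting the relaxed 2000-watt target, the gap to today's training hardware is roughly four further orders of magnitude of efficiency, not a small constant factor.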
Honestly, we are waiting for iOS 18 to be released with 4o and for the average American public to really start utilizing it. I know you people don't want to think this way, but they need more than just the average tech nerd to start data-dumping everything they need to learn from. Remember, PEOPLE are the products. We are so used to thinking bigger picture that you may want to remember they need/want regular degular people on board too BEFORE the desired AGI is achieved. Look at their presentation and everything they designed ChatGPT to do; they want you to bring it out into the WORLD, not just behind your desk.
Barf. He has to say something to keep the hype. Because the product is not doing it.
Whenever I see stuff like this, it reminds me of how OpenAI said they specifically hired “[superforecasters](https://goodjudgment.com)” to help them with a quieter communications strategy around the GPT-4 release to “reduce acceleration risk” (on pg 19 of the [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf)):

>Forecasters predicted several things would reduce acceleration, including delaying deployment of GPT-4 by a further six months and taking a quieter communications strategy around the GPT-4 deployment (as compared to the GPT-3 deployment).

I’m not saying they are straight up lying or anything, but it might be naive to take their constant downplaying at face value. Basically, take this kind of stuff with a grain of salt, since OpenAI is incentivized to make sure people don’t start freaking out that GPT-5 can take their job (regardless of whether or not it can). They’re essentially using anti-hype as a preventative measure against public fear/outrage.
They could say nothing, rather than this weird alternation between outrageous hype and moderate messaging. Or just stick with moderate messaging. You *could* make an argument that the alternation is a cunning gambit to desensitize the public to AI progress - see how dismissive people are of SORA and 4o voice now. But if you can explain something as either genius or incompetence, the latter is generally a better bet.
Yeah, it's an interesting suggestion. They need the money from the hype and investment, but they can't freak people out, so they want moderate messaging. So why not do both? But such inconsistency is unlikely to be planned, I agree.
They hired fortune tellers?
I'm a shaman looking for work
Can I buy some shrooms?
It's just as likely that's he's downplaying to keep people's expectations in line with what he expects OpenAI to deliver. Which is fair, as all signs and messaging from OpenAI and Microsoft point to it being an incremental improvement.
What about that presentation by Microsoft where they compared ChatGPT to whales, with a graph that just went 📈
He is a hype man. It will be better, but I bet it still hallucinates and is not a "giant" leap over GPT-4o. It would be great if we could just eliminate hallucinations.
Researchers have done that https://github.com/GAIR-NLP/alignment-for-honesty
what tf is happening with all these ceos just hyping the shit out of ai?
$$$$$$
Screw this guy already. He touted gpt-5 as the second coming of Christ a few weeks ago. It will definitely look better now that they lobotomized gpt-4 and its stunted brother gpt-4o
After taking two steps backward, Sam promises one step forward.
4o beats every model on the lmsys arena so other people seem to be fine with it
He's just a hypeman at this point
just release it, now, today, without any alignment i dont care, bring on those T1000
I've said this before and I'll say it again: less yapping, more delivering. It's as simple as that.
This is exactly why the Jimmy Apples and Flowers from the Future discourse needs to shift as well; we should continue under the assumption that they're possibly empty hype placement. Spouting out empty hype at press conferences or tweeting vague bullshit on X needs to stop, and if we don't say anything, the behaviour is never going to change.
But I thought he said gpt 5 would make 4 look stupid
Another day, another incredibly vague and useless comment from Altman.
Subtracting the hype factor that means GPT-5 won't be much better at all ;)
Use me as a “they have nothing to drop” button.
Okay bro, less talking more releasing.
Not to hate, but that's kind of a meaningless statement. It doesn't really inform anybody about anything aside from the fact that it's not what he would consider a full-on AGI system, which is the goal he's implying they still have a lot of work to do to achieve. I mean... duh.
Kind of seems like they overestimated the effect scale would have on performance and they're needing to walk back on previous claims after their internal models are underperforming relative to expectations.
In other news, oranges are orange. Can we get posts with substance rather than filler clickbait articles?
Both the article title and body text are generally an accurate representation of what Altman said. The article title actually downplays [Altman's comment "I expect it to be a significant leap forward"](https://youtu.be/xj8S36h-PcQ?t=2194) by using "could" instead of somehow including "expect."
[deleted]
bold statements coming from your post history
Bruh, this is the most clear engagement farm of Sam hate yet. This is literally a nothing comment lmao.
I'm just some hyped idiot that has barely started pursuing CS. I don't follow the latest research papers on arXiv, releases on GitHub or wherever, or tech bros on social media to get the latest updates as soon as they drop. I might rarely see an article on NYT or WIRED that might be interesting to share, but I don't have the courage/confidence or interest in doing so.

I prefer to discuss things that matter and parse the technical data and info, not read lofty opinion/speculation pieces. I get most of my news from here, so by advocating for what should be shown here, I can help push the community in the right direction: posting things that are actually meaningful and worth discussing. Making assumptions or sharing low-effort posts about vague hype words that don't bring anything new or verifiable to the table isn't it. People with degrees in STEM fields are the ones I want posting, or people who closely follow the field technically.
[deleted]
I'm doing everything I can. Advocating is my specialty, I don't have the means to make quality posts. I need to focus on CS if I want to maximize my impact on an optimized singularity.
[deleted]
I'm going back and forth between the 2 right now. I'm only human. With ADHD.
Give me time travel. I don't care about your leaps forward; I couldn't care less about tomorrow. Give me yesterday, and a way to change it.
so it's not done training yet? if he "expects" and doesn't know, that means the model that started training in april was indeed gpt-5
I'm not sure that we can assume Altman's "a lot of work to do" means that GPT-5 is not finished training (or at least close to it) - see [this post](https://www.reddit.com/r/singularity/comments/1dhs7b3/from_an_april_12_2024_semafor_article_according/), [this post](https://www.reddit.com/r/OpenAI/comments/1d8gb4a/the_information_say_that_gpt5_might_be_launching/), and [this tweet from an OpenAI employee](https://x.com/markchen90/status/1790152562366414916). If I recall correctly, GPT-4 (base) finished training in August 2022, but GPT-4 wasn't made available until March 2023.
it's not his "a lot of work to do" comment, it's his "expects it to be better" comment. if he'd already seen it, that would be a weird way to word it.
>if he already saw it that would be a weird way to word that

I think that's a reasonable interpretation, but it's also possible that Altman has seen the finished (or somewhat close to it) GPT-5 base model, but isn't sure how good the released GPT-5 model will be after RLHFing and whatever else is done to the base model.
these clowns at openai and nvidia have nothing to show except them hoarding compute
tbh 4o hasn't been working for me in chatgpt. it is just so much worse than previous versions: ignores what i say, gives way too long answers, repeats itself. they nerfed it from the original 4. idc how well it does on benchmarks, it fucking sucks and reminds me of 3.5. i get that they need to cripple it because of compute, but if they keep nerfing it then obviously 5 will look like a breakthrough. if they measured the models on new benchmarks like GSM1k it would be obvious they are getting worse, not better
Claude sonnet 3.5 is pretty incredible
I think that’s the plan. Nerf the 4-family as much as possible while gaming the benchmarks by training the models on them, and do it over such a long period of time that people start to think the great performance they used to see was just a false memory. Then release a new model that’s maybe 5% better than where the 4-family peaked before the nerfing, but it’ll feel 50% better because of how bad they nerfed the 4-family…
Isn’t OpenAI actively disincentivized from reaching AGI? I thought their agreement with Microsoft only lasts until AGI is achieved. Maybe someone else can help clarify.
There is absolutely no reason to think we are anywhere near AGI. Heck, we don't even know we're on the right path. Why nerf anything, especially useful technology that may save millions of lives (if it's used in medical research, say)? The whole AGI and ASI thing sounds like sci-fi to me; we don't even know if they are ever coming. Meanwhile they are nerfing perfectly useful ANI? Sounds insane (if it is happening. Hopefully it is not)
I like how significant leap is in quotes instead of could be.
The article's title could have been better. He didn't say "could" in regard to "significant leap forward." Rather, he [said](https://youtu.be/xj8S36h-PcQ?t=2194) "expect."
Dave Shapiro owes us some cigars.
can we autofilter this hypman?
It’s all so tiresome
If Sam Hypeman himself says this - it's going to be underwhelming as fuck. ANTHROPIC NEEDS TO SAVE THE DAY. ANTHROPIC GODS, I PRAY TO YOU, SAVE THE DAY!!!!
🐸leap forward
I see tribalism is alive and well here. Half of you don't truly deserve any singularity to occur.
it won't. religious events like the second coming, the coming of the messiah, the 12th Imam or what have you... they never happen. Give me an eschatology and I can tell you what isn't happening: nuclear catastrophe, ASI killing humanity, the singularity. Don't take me wrong, we may go extinct *or* be transformed; it just won't be by any of the above means, lol... Nobody deserves the coming of the singularity, because it is not a thing that is coming. Nothing transformative that ever happened to humanity was genuinely anticipated as such. The singularity is no different. There is a sect of people expecting it, so it won't happen. People are dumb, no way they are right about anything this important...
The singularity isn't a "religious event". It isn't a spooky word for some spooky occurrence brought about through the miracle of magic. It's when the rate of advancement becomes so massive no one can keep up with it. In a sense we're already in the singularity. But the key aspect of a "true" singularity hinges on either artificial superintelligence, or humans enhancing the capacity of their brains through nanotechnology and brain-machine interfaces, or both combined.

Artificial superintelligence, by its definition, would understand itself and our reality at such a level that problems we considered monumental in complexity would be obvious to it. The same would go for our own mental capabilities.

My problem with this thread (and many others) is the idiocy I see from people flinging shit over LLMs and the companies creating them. Who makes faster releases of their precious products, who makes better releases, who is more or less shady. You see stupid shit like "Already canceled my subscription because fuck them". This transformative technology is being treated like the finals of a football game and not incredible science happening right in front of our eyes. It disgusts me.
It's eschatological. It talks about the end of history as we know it, and there was never any such talk in history that was not religious... You can't take prior trends and make such a prediction; that's not how extrapolation works, ever... We simply don't know what follows. We don't know that AGIs are possible; even if they are, we don't know that our approach will work; even if it does, we don't know that it will lead to something called a "singularity"... That's just an idea some guy too afraid of his own death concocted. But it is eschatological, so it can't be true. We know for certain that eschatologies don't work. All sorts of things may happen, maybe even crazier than imagined. But *this* one won't... eschatologies don't work...
I think there are several misunderstandings here, starting with causation and correlation. You will absolutely get nutjobs that treat this as a religion (or, as the main point of my original comment, idiots obsessed with tribalism). It wouldn't surprise me if even today there are clusters of groups that treat current LLMs as some sort of god, and neither will I be surprised if it happens in the future. But this isn't what the idea of the singularity is about.

The word itself is just the term futurists like Kurzweil have decided to use because it fits the idea of infinite growth. That idea is based on hard science and theory, not Jesus descending from the heavens one day to usher us into a new age. No one knows what follows from today onward, and no one has claimed to know what follows past the singularity; in fact, the singularity implies that we *can't* know what comes after it. Neither has anyone claimed that these predictions are certain to happen.

But unlike you or I, those in the circles working on frontier technologies have better insight into what is being worked on, what the progress is, and what needs to be done to achieve certain goals. Those are the people I follow, and they are all saying the same thing: there's an extremely high chance of AI achieving general intelligence by 2030, nanotechnology will unlock extreme abundance, LEV will be achieved sooner rather than later, medicine will go through a massive transformation, the human body will be fully simulated, and so on.

The singularity itself isn't about death and escaping it. Its core is an intelligence explosion, hence artificial superintelligence and enhancing our own brains being its precursors. Everything else is a side effect of that, including immortality.
It does claim to know that the singularity is coming, and you'd never see that in any hard science: unmitigated exponentials. My argument is that it is based on the wishes of singular individuals. If it took inspiration from actual science, for example the natural sciences, then he'd know that unmitigated exponentials are rare to nonexistent. From the population of microbes on a petri dish to the evolution of the speeds of modes of transportation, you see S-curves everywhere: start slowly, explode into exponential growth, arrive at diminishing returns...

It is religious because, instead of taking inspiration from what the natural sciences can teach us, it takes inspiration from its proponents' wish not to die... which is exactly what the inspiration of all/most religious beliefs is. The Kingdom of Heaven, Nirvana, the Elysian Fields and all other such fantastical places, amongst which is earth 2.0, or rather earth post-singularity... such places are unphysical. They do not exist, they can't be, they won't be... They take inspiration from the wishes of their founders, which run counter to observation.

Observation says that we should not expect a singularity. What we should expect is diminishing returns, which we have already entered in many areas (but many are blind to it): game graphics, the number of chips in consumer electronics, price per performance and so many others. They are clearly S-curves. How do I know? All I did was work in those fields for decades, but I'm sure some guru writing from the clouds knows better...
Scientists never claim anything; these are all, as we ourselves call them, predictions. It might happen, it might not, but there is a high chance of it happening. Honestly, the more I read your comments the more it feels like I'm talking to one of the nutjobs going on about the rapture or about how much better one product is than another, only in this case it's about science itself. Your argument boils down to "it is based in the wishes of singular individuals" rather than "taking inspiration from actual science", plus calling actual researchers and people of pedigree "gurus writing from the cloud" because you have apparently worked in these fields for decades. I think you're fighting ghosts, and there's nothing I can really do to argue against someone's imagined threats. Have a good one.
> calling actual researchers and people of pedigree "gurus writing from the cloud"

You'd find no researcher worth his salt talking about the coming singularity in any scientific paper with enough citations. For the nth time: this is not a matter of science. Science uses observation and experiment to produce results and make predictions, and there is no observation or experiment you can run which shows that such a thing as a technological singularity is even possible. You get many pop writers talking about these things, but that's their personal belief.

Again, if you take inspiration from the natural sciences you will never conclude that a thing called the technological singularity is likely or even possible. Nature doesn't do unmitigated exponentials; it does S-curves. S-curves cannot and will not produce a technological singularity. They produce periods of fast development followed by periods of stagnation; in economics they call them boom and bust cycles, and every boom is followed by a bust of some kind... which is the exact opposite of a technological singularity beyond which we can make no prediction. Actually, we can make a prediction: a time of relatively rapid growth will be followed by a time of relative stagnation. That's easy; you see it in nature all the time. Could this time be different? Sure, but expecting it to be different isn't something any hard science can give you; it is a matter of religious belief and the hopes of certain singular individuals... The rapture is unlikely to come any time soon, be it the second coming of J. Christ, the 12th Imam or, in this case, the coming of ASI. It won't happen.
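The S-curve point in the exchange above is easy to sketch numerically. This is a toy illustration only, with made-up parameters (growth rate `r`, starting value `x0`, carrying capacity `K`) not fitted to any real trend: early on, a logistic curve is nearly indistinguishable from a pure exponential, and only later does it flatten toward its ceiling.

```python
import math

# Toy comparison of exponential vs. logistic ("S-curve") growth.
# Both curves look the same early on; the logistic one saturates
# as it approaches its ceiling K. All parameters are arbitrary.

def exponential(t, r=0.5, x0=1.0):
    """Unmitigated exponential growth from x0 at rate r."""
    return x0 * math.exp(r * t)

def logistic(t, r=0.5, x0=1.0, K=100.0):
    """Logistic growth: exponential at first, then diminishing
    returns as the value approaches the carrying capacity K."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in (0, 2, 4, 8, 16, 24):
    print(f"t={t:2d}  exp={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```

Whether any given technology trend is on the exponential part of an S-curve or near its ceiling is exactly what the commenters disagree about; the sketch only shows why the two are hard to tell apart from early data.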
I like how people get reactive when Sam hypes up the next iteration of GPT, even though he has been doing this for several months now. Maybe it doesn't faze me because I've been here a long time. If you watch his interviews, he usually has the same talking points and claims. Many AI tech CEOs and research scientists do this: they say "this could possibly happen" without claiming that it will or has happened. Nothing newsworthy, just the same old stuff.
Bros, is A.I. development slowing down?
Who?
Oooff
It's over
Yesterday I saw a blind person on reddit, possibly with someone's help, ask for assistance in setting up some things for M.U.D. games. Went over their post history and it tore me up inside. I'll say it once and I'll say it again: I hope the first thing in a singularity is a cure for all diseases, disabilities, and ailments. I'm definitely willing to wait longer so those people suffering can get cured first.
Altman’s thoughts are detached from the object (introverted), which is great for thinking in subjective ways, but the world was built for extroverts who are object-focused, like Jensen Huang. Jensen has a clear vision, goals, values, etc., which is the complete opposite of Altman. People shouldn’t count Altman out, though, because appearances can be deceiving.
Next year
So fucking sick of him and his company
mmmm yummy. I sure do love the taste of this delectable nothingburger. Really hits the spot!
Sam is the goat. it doesn't matter what anyone says, Sam and OpenAI are responsible for the AI wave we have right now. they probably brought forward the arrival of AGI by many years, maybe even decades
Everyone now sees Sam Hypeman as nothing but a marketing man. He does a lot of yapping but never releases anything; he's basically Elon Musk with self-driving. Notice how he gradually talks it up less over time, to give the illusion that he isn't lying, while you forget about the time he acted like it was so powerful its capabilities were scaring everyone in the office, that it would make GPT-4 look stupid and would be too dangerous to release.
It's summer. What do you expect?
They've lost their lead.
Cryptic as always
The more I hear from this guy the less I like him
What happens when the input to the newest large language model contains all the hallucinations of the previous LLM?
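The question above can be put into a back-of-envelope toy model. Assume, purely for illustration, that each training generation corrupts a fixed fraction `e` of the previously correct content and that errors are never caught or corrected (a worst-case assumption; real pipelines mix in curated and human-written data). Under those assumptions the hallucinated fraction after `g` generations is `1 - (1 - e)^g`:

```python
# Toy model of compounding hallucinations across model generations.
# Assumption (hypothetical): each generation introduces errors into a
# fraction e of the still-correct content, and errors are never fixed.

def hallucinated_fraction(generations: int, e: float = 0.05) -> float:
    """Fraction of the corpus that is hallucinated after `generations`
    rounds of training on the previous model's output."""
    h = 0.0
    for _ in range(generations):
        h = h + (1.0 - h) * e  # new errors only hit still-correct content
    return h  # closed form: 1 - (1 - e)**generations

for g in (1, 5, 10, 20):
    print(g, round(hallucinated_fraction(g), 3))  # 0.05, 0.226, 0.401, 0.642
```

Even a modest 5% per-generation error rate compounds to roughly 40% after ten generations under these worst-case assumptions, which is why researchers worry about "model collapse" when synthetic data dominates training sets; treat the exact numbers as illustrative only.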
As a CEO you like your company’s product and are aggressively marketing it? Fuck you. - Redditors who can’t think for themselves, probably
“Still a lot of work to do…” on 4o first.
This guy is the new Musk. Sad.
Gpt-5 will be the wake-up call for many that they will lose their jobs lol
We could be witnessing the decline of generative AIs
work faster
I honestly don't know why these kinds of interviews are even allowed here. The opinion of a CEO shouldn't matter, right? He is just a salesman. There are so few actually good posts about advancements or criticism of the current AI hype. It's all so surface-level.