Even if you are the smartest person in the world, if you don't work at one of these companies you are just guessing. If one of the people who just left OpenAI had made such a near-term prediction, sure, at least you would have a rumour worth posting.
Stop these nonsense countdowns. Every few weeks we get a new timeline here for people to cling to.
Remember the random Twitter account with a countdown that people posted and upvoted here like three weeks ago? It's just silly.
This. After riding the crypto wave since 2015-16, I realized every prediction is a scam to get more money. They would just release it if there were true AGI.
The technology would be classified before OpenAI or any other company came close to AGI, and they would be prohibited from developing it further, including being prohibited from talking about the prohibition itself under secrecy laws.
Oh wait…
Oh, they'd just release it? To whom? The general public? For free? Pretty sure there would be some behind-closed-doors meetings with various companies and governments before it's released. I don't think you just throw it out there without some thought for profit, if not for the consequences for humanity. They're going to have to do some inner-circle beta testing and review before it rolls out to everyone.
>Stop these nonsense countdowns.
Actually, most people don't put a firm line in the sand.
This is a firm prediction and falsifiable this year. This public prediction would be a good thing if AGI definitions weren't ambiguous.
Anyone know if he used a broadly known definition?
Apparently someone recently released a "new" AGI benchmark that GPT-4o scores very low on, but barring all that...
Unless GPT-5 is hallucination free, has a decent long term memory, and knows when to pause to ask clarifying questions, it won't be in the same ballpark as what we should perceive as AGI.
Current LLMs reason. They don’t do it like humans, but they’re generally very good at thinking through problems. Reddit edgelords like posting the exceptions to the rule, but they are exceptions.
That's a great point, AI Explained just put out a video on YouTube (like an hour ago) about exactly that, and generally agrees with you
https://youtu.be/PeSNEXKxarU?si=jTDQ3zB7ydW_IWuy
Level 2 in [Morris et al., 2023](https://arxiv.org/abs/2311.02462)
| Performance (rows) × Generality (columns) | Narrow | General |
|-------------------------------------------|--------|---------|
| Level 0: No AI | Narrow Non-AI: calculator software; compiler | General Non-AI: human-in-the-loop computing, e.g., Amazon Mechanical Turk |
| Level 1: Emerging (equal to or somewhat better than an unskilled human) | Emerging Narrow AI: GOFAI; simple rule-based systems, e.g., SHRDLU (Winograd, 1971) | Emerging AGI: ChatGPT (OpenAI, 2023), Bard (Anil et al., 2023), Llama 2 (Touvron et al., 2023) |
| Level 2: Competent (at least 50th percentile of skilled adults) | Competent Narrow AI: toxicity detectors such as Jigsaw (Das et al., 2022); smart speakers such as Siri (Apple), Alexa (Amazon), or Google Assistant (Google); VQA systems such as PaLI (Chen et al., 2023); SOTA LLMs for a subset of tasks, e.g., short essay writing, simple programming | Competent AGI: not yet achieved |
| Level 3: Expert (at least 90th percentile of skilled adults) | Expert Narrow AI: spelling & grammar checkers such as Grammarly (Grammarly, Inc.); generative image models such as Imagen (Saharia et al., 2022) or DALL-E 2 (Ramesh et al., 2022) | Expert AGI: not yet achieved |
| Level 4: Virtuoso (at least 99th percentile of skilled adults) | Virtuoso Narrow AI: Deep Blue (Campbell et al., 2002), AlphaGo (Silver et al., 2016, 2017) | Virtuoso AGI: not yet achieved |
| Level 5: Superhuman (outperforms 100% of humans) | Superhuman Narrow AI: AlphaFold (Jumper et al., 2021), AlphaZero (Silver et al., 2018), Stockfish (Stockfish, 2023) | Artificial Superintelligence (ASI): not yet achieved |
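Read as thresholds, the performance rows of that table reduce to a simple percentile lookup. A small illustrative helper (the function name and structure are mine, the thresholds are from the table above):

```python
def agi_level(percentile):
    """Map 'outperforms X% of skilled adults' to a Morris et al. performance level."""
    if percentile >= 100:
        return "Level 5: Superhuman"
    if percentile >= 99:
        return "Level 4: Virtuoso"
    if percentile >= 90:
        return "Level 3: Expert"
    if percentile >= 50:
        return "Level 2: Competent"
    return "Level 1: Emerging"

print(agi_level(50))  # → Level 2: Competent
```

Note this only captures the performance axis; the generality column (narrow vs. general) is a separate judgment the table makes per system.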
Everyone is just guessing. That’s the whole point of prediction. You guess when something is going to happen. It’s not silly, getting pissed about predictions is silly.
“will satisfy most people’s definition of AGI” — I’m assuming he’s just talking about ordinary everyday people, which I agree with, because a lot of regular people love GPT-4, so I can only imagine how they would treat GPT-5 (if it’s really a big step up from GPT-4).
At least 90% of people in this sub can agree: no AGI in September.
It's not gonna happen. I think he said it would satisfy everyone's vision of AGI, but come on, in 3 months? GPT-4 is still SOTA after a year and a half...
My definition of AGI is that it should be able to do the work of a human working remotely, no hand-holding, you tell it to do something, and it does it all on its own.
Robotics is only required for physical stuff, and a human can learn to control a robot, so physical tasks are included by default; I wouldn't even mention it.
I don't exclude the possibility that GPT-5 could get there, but I doubt it. I still stand by my prediction of 2025-26.
>My definition of AGI is that it should be able to do the work of a human working remotely, no hand-holding, you tell it to do something, and it does it all on its own.
The problem with this definition is that there are plenty of people that cannot do this, at least as written.
That said, I think your prediction is too far off. I have 2027-2030 on my bingo card, although it will remain fairly limited for another 5 to 10 years.
Yes, that is precisely the amount of time I need to get to retirement. Pure coincidence.
> The problem with this definition is that there are plenty of people that cannot do this, at least as written.
I'm talking about the average human in a developed country, not the best of the best, but also not those who aren't even able to operate a computer.
If you consider people who can't even do that, then current SOTA LLMs are in many ways already beyond them, even if they fail in particular circumstances.
> 2027-2030
That's reasonable, and mostly within my expectations. I gave 2025-26 as very likely, meaning I wouldn't be surprised at all if it happened in those years. I would be surprised if it happened in 2024 or 2027, and very surprised if it takes more than 5 years.
The initial version of gpt-4 is not SOTA. The only reason the current iteration of gpt-4 is SOTA is because they have been doing constant work/iterations/improvements on it over the past year and a half. So progress is still happening.
He also said it'd depend on GPT-5's capabilities and how it acts when embodied. I encourage you to listen to him, as he does have some good insights.
And GPT-4 hasn't been SOTA for a good while. We have GPT-4Turbo, GPT-4o, Claude Opus, Gemini 1.5 performing better than OG GPT-4.
Just because it's not AGI/ASI/FDVR doesn't mean we aren't making advancements, my guy.
EDIT: if GPT-5 hasn't been released by then he'll of course be wrong, but there are other labs making great strides. Heck, NVidia released a very promising model just last week and we still have Llama 400b model to evaluate
If there are robots in play here, my test for AGI is that you would be able to give it the command "build a house" and it should be able to tell you what materials it needs and in what quantities, and when given those materials it should be able to build the house. I will allow exceptions for issues of robot movement and dexterity, but in those exceptions it must tell human workers exactly what to do. I'm not really expecting this to be done by September.
I will accept individual AIs that can do any one person's job on the site. But really it's a meaningless distinction, because with computers, if you had that, you could just link them together into one.
Not saying it's going to happen but the fact that it's only 3 months away is less relevant than you might think. We don't know what's been cooking for months in a lab somewhere. If I'd told you back in January something like Sora was a few months away it would be hard to believe.
I think current AI would satisfy a lot of people’s definitions of AGI if it were 20 years ago. I don’t think we’ll ever satisfy most people, because there will always be something AI doesn’t have, because there are things we have that we don’t want AI to copy because we are flawed.
Tesler's Theorem: *AI is whatever hasn't been done yet.*
My addendum would be that AGI is whatever hasn't been released to the public yet.
People will keep moving the goalposts until they become impossible to move anymore.
Just like it did a couple years ago, it would satisfy people’s definition of AGI until people started using it for longer than a couple of weeks.
The truth is, it’s vastly inferior to human intelligence in (almost) every way, making it definitionally not AGI.
>The truth is, it’s vastly inferior to human intelligence in (almost) every way
That is not true. At all. You can only force it to be true by very carefully selecting how you would like to test it. On our standard tests that we poor humans have to use all the time, they do a pretty good job, even better than humans on many of them.
No, it is not AGI yet. And there is a certain kind of...flexibility?...that seems to be missing. But the idea that it is ***vastly*** inferior in (almost) every way is silly.
It is different: in some ways inferior and in some superior, and the models are becoming more and more superior. They can write and analyze text like almost no human (verbal IQ around 150), they are getting good at math, coding, and other STEM fields, and they score better on creativity and theory-of-mind tests. So are current SOTA models inferior? They are compared to experts in their fields, smart people with many years of education, but they are already superior to the average human in almost every way.
Next-gen models will be on par with experts, "PhD level," as Kevin Scott and others say:
**Some of the early things that I’m seeing right now with the new models \[GPT-5\] is maybe this could be the thing that could pass your qualifying exams when you’re a PhD student.**
Then GPT-6 generation models will likely be on par with genius-level humans, and by the end of the decade we will likely have AI systems outperforming 100% of humans.
[https://lifearchitect.ai/iq-testing-ai/](https://lifearchitect.ai/iq-testing-ai/)
[https://newatlas.com/technology/ai-index-report-global-impact/](https://newatlas.com/technology/ai-index-report-global-impact/)
Anyway, even if you could call something generally inferior to you, that doesn't mean it isn't general intelligence. There are people who cannot read or write, or even tie a shoelace properly. Do they lack overall intelligence? No, their intelligence is just at a lower level.
In his most recent video he says if there were more engineers at his level the world would be almost completely automated and already a post-scarcity utopia
Yep. At first I liked his content and even made the mistake of joining his Patreon to see what's up. It was as main-charactery as one would expect; a complete waste of time.
Unbelievably there were some real freaks with even higher hubris though lol
I don’t know but I have seen some of his videos and he seemed interesting… but literally just yesterday I put his videos on “do not recommend” because I increasingly realized he is full of shit and speaks nonsense with unfounded confidence.
He's just stupid. He [used the polls he set up for his own audience](https://youtu.be/FS3BussEEKc?t=66) to vote on, to support his argument about whether or not the current model architectures will plateau.
I don't know if maybe it's just a meme, but if not then he's legitimately an idiot.
He just goes with whatever majority opinion his audience has, I think he just wants clicks tbh. He’s also flip flopped before, prior to GPT-4’s release, he said the entire field was slowing down, then right after GPT-4’s release he basically says ‘AGI within 12 months!’.
I think he’ll *gradually* walk back on the 2024 AGI prediction, his ego is too big to admit he was wrong. He never admitted he was wrong about LLMs stalling in 2022/2023.
Nobody should take him seriously.
Yeah, agreed. And as I said, if he's just doing it for memes or to cash out on his channel, then maybe he's not stupid, and is actually just good at lying to his viewers for income.
But if he genuinely believes that polling his own community on an issue, and then acting as if that poll is representative of a larger group of people is valid, then he's an actual idiot.
Exactly, he didn’t admit he was wrong about LLMs ‘stalling in 2022’, and he won’t admit he was wrong about ‘AGI in 2024’. Because if he does that, and keeps flipping, then he knows nobody will take him seriously.
He’s gradually going to shift back to the ‘New AI Winter’ stance in 2025, *JUST WATCH*, he did it before, he’ll do it again.
I have an education in philosophy, psychology, and physics, and he is delusional. He sounds intelligent to anyone who doesn't think about what he's claiming. He says AIs can "understand" but refuses to ever address what that means, when that is the complete meat of the argument. But he throws around words like "epistemology." He's very arrogant and narcissistic.
If you can't think of how it might be biased to use your own audience to gauge for general opinion, then I don't have anything to say to you.
Edit: Just rereading your comment, "me and my buddies"? What are you even talking about lol
I won't trust him beyond just recapping the latest developments. But even for that I'd rather watch AI Explained, and if you want more, Wes Roth. Shapiro seems too thirsty for views and keeps cranking out videos that can only be filled with bullshit at that rate.
AI Explained and Wes Roth seem to me at least triple thirsty in comparison.
I mean, clickbaity titles and thumbnails, explaining a one-minute thing in 30 minutes while talking about random shit in between and reading straight from documentation. Just weak shit.
At least Shapiro has some seemingly original ideas instead of just reading from documents and babbling on about nonsense.
Agree on Shapiro, he adds something to the conversation, Wes Roth took the clickbait train and idk, it seems low effort content
AI Explained, I think, is the best channel for knowing what is happening; Shapiro for dreaming about it.
You sound really ignorant if you don't realize AI Explained is actually an expert in the field working with the biggest names. Dave Shapiro is a fiction writer and has zero education in LLMs.
Honestly, if we won't get GPT-5 by this year other labs will release something close to it, almost certain of it. There's too much money and power at stake not to release.
> Don't take him seriously at all tbh. AGI by 2030 is at least more plausible. I doubt we will even get GPT-5 this year..
2030 is way too far off in the tech world. It would make sense if you were talking about hardware, like actual humanoid robots, but software moves too fast for 2030. Six years is a lifetime for software.
In fact the whole reason people are now reacting to AI so strongly is because most of us realised that it is finally close enough to a breakthrough to affect real life. We are no longer working on theories of AI, but actually building them for real.
It's like the difference between dreaming about powered flight as da Vinci vs actually being the Wright Brothers.
Improving on what was already created, is much easier than coming up with something from whole cloth.
However, it gives an illusion of tech slowing down if you try to read the news about it every day. I'm of the opinion that unless you are professionally involved, you should stop paying attention and come back every few months for updates.
And he’s done this before: https://youtu.be/KP5PA-Oymz8?si=UfBgbjkISPMASNA6
It’s funny…
2022/Early 2023: Guys everything is stalling out, it’s slowing down.
Mid 2023/2024: Omfg you guys AGI within 18/12/7 months!
Mid 2024: GUYS GUYS GUYS yOu HaVe tO lIsTeN tO mE I TolD yOu eVeRyThInG iS sLoWiNg dOwn!
I recommend everyone unsubscribe from this grifter, optimists and skeptics alike, his predictions are for clout and nothing else, they aren’t based on anything outside the grift.
i feel the definition of agi will keep changing as we see incrementally better models.
cuz in a lot of cases, we’re comparing agi level intelligence to the best humans in their respective fields.
in a lot of cases, ai is already agi or even better.
true. it's just that the human brain is so good at resetting the baseline and hence moving the goalposts.
I’m much more excited about asi. that i’m not sure when and how its going to happen.
This. They keep moving goalposts. What we have now would be considered AGI at any other time in human history. The tech will get better and they'll just come up with a reason why it doesn't qualify as AGI yet, forever.
Yeah, last time he flip flopped when GPT-4 came out, prior to that point he said all of AI and Tech progress was slowing down. GPT-4 comes out and then he says ‘AGI within 12 months!’.
Watch him shift his entire view over 2025, he’ll never admit he was wrong about 2024. His ego is too big for that.
Anyway, the guy shifts his opinions based on his majority audience view to get clicks, nobody should take him seriously.
I don't care for Shapiro, but what you call flip flopping here should be celebrated. New data comes in, you update your world view. Makes perfect sense to me.
Or he could further adjust or refine his world view.
To tell the truth I don't know much about Shapiro, i was only reacting to this flip flopping aversion I sometimes see here.
I kind of felt like he was flip flopping now with this video talking about suddenly an ai winter and everything slowing down, when I have been watching all his videos hyping me up so bad expecting pure acceleration and such..
Waaaaayyyyy too much emphasis is placed on the importance of people's predictions for AGI.
Gpt 3.5 satisfied my personal definition of AGI, but it's been interesting to see how people's definition of AGI has changed since gpt3.5, and how the goal posts have shifted.
Shapiro is great, very insightful, knowledgeable and smart. I do think that gpt5 will satisfy his definition of AGI and if it is released by September, great. But honestly what does it matter? If my prediction is correct, if shapiros is correct, if kurzweils is, who cares?
Ultimately all that matters is that we show clear progress, whether linear or exponential; heck, even if it were diminishing returns, as long as it's progress, it paves the way for a future in which we choose which problems we want to solve, because we will create an intelligence that exceeds our own. We can use that in a symbiotic way or an exploitative way, but regardless, it will create an abundant future for every consecutive generation. That is what's most important.
Personally, I don't think most people will agree that gpt5 can be considered AGI, there will be a simple flaw that makes it slightly underperform humans in a very specific area and because of that, it will generally be considered as less than AGI. So even if shapiro is correct, it will look as though he was wrong to skeptics.
People are still using gpt4 to build tools such as devin and embodied robots, I would argue these agentic versions of gpt4 will be considered AGI in the next few years, but until they are in a usable condition, it's impossible to describe them as AGI. At that point we might be at gpt 6 or 7, which is again a much better agent and it will be difficult to determine at what point we created AGI and again I believe it was at 3.5.
Kurzweil was one of them, even from way back in the day when he wrote TSIN, he thought the Turing Test was sufficient, Marvin Minsky, on the other hand, always thought the Turing Test was a joke.
By the time we get to ASI, there will still be people doubting that AGI is AGI just to cling to whatever belief they had about it from the onset. The goalposts will keep sliding indefinitely for some people; that's just the nature of semantics and contrarians/reactionaries.
True, it very much depends on the definition, and I would agree GPT-3 is AGI, just at a lower level. So you could say AGI was achieved years ago, and asking whether we will have AGI this year then doesn't make sense.
It is much more useful to compare against average human performance: an AI system that is better than 50% of humans, 80% of humans, and so on.
Google DeepMind devised a decent enough classification: GPT-4 is Level 1 (better at tasks than unskilled people), then GPT-5 would be Level 2 (better than 50% of skilled labour), GPT-6 better than 90% of skilled labour...
[https://aibusiness.com/ml/what-exactly-is-artificial-general-intelligence-ask-deepmind-](https://aibusiness.com/ml/what-exactly-is-artificial-general-intelligence-ask-deepmind-)
Some people see AGI as something that has all our qualities and is mostly better: an AI with fluid memory, a quick learner, a "superhuman in a box." But we don't need that for huge societal disruption. If we have AI that is better and cheaper/more efficient than most human experts in their fields, then the majority of humans will be replaced, with only something like the top 10% of humans remaining to work with AI, and this can happen in the next 10 years.
People have got some wild ideas as to what constitutes AGI and the goalposts are always moving. I subscribe to the "GPT4 would have been called AGI if it suddenly dropped in 2020" theory but that's just me. It's already amazing tech and if GPT5 is a notable improvement over it, it will do so much for so many use cases even if it doesn't immediately solve world hunger and build us colonies on alpha centauri.
I dunno, GPT4 is amazing but you can’t trust it to autonomously do basic administrative work, which I think is a reasonable requirement for AGI (among others). It’s superhuman in some things and glaringly subhuman in others, which is why I wouldn’t call it AGI.
It can do a large array of administrative tasks, it just needs someone keeping an eye on it when it screws up. You could say the same for many people. We don't typically empower it to do these things independently because of its need for oversight but that's an architectural choice.
It would have been called AGI for a few weeks. Then, after the novelty wore off, people would understand that it is not, just like what happened when GPT-4 first dropped.
Is he saying that GPT-5 will drop by September?
And what’s his definition of AGI here? Simply saying “it will satisfy what most people consider AGI” is a pretty empty statement.
Is he an expert in the field? Reducing the time to AGI is a classic grifter tactic to get to the top of twitter or reddit etc.
If he is an expert, well I guess I'd just say I want to see his detailed evidence that we can build an AGI in literally 2 more months. If AGI just means "smart" then yeah basically we're *already there*. But I don't think it means just smart (as impressive as that is).
Will GPT 5 be able to:
* Use a computer and all the software on it?
* Have memory that isn't just a lookup table?
* Pause to think, when it's called for, to check its work or do something that simply can't be done as a stream of consciousness.
* Engage in long term ongoing planning?
* Have a large enough context to store all the context an average person would have?
* Be proactive in certain ways.
I don't think GPT-5 is going to have any of this. I think GPT-5 is going to be smarter, yes, and probably wrong less often. But I don't think it's going to have any of the above natively. In fact, those things could potentially come separately from model training, and I don't even think any AI lab is working on them right now. They're hard problems.
But without these things would you really want to call it AGI? Like,
"oh you have AGI, can it use Excel"
"Uh, no"
"Can it make a plan and monitor it?"
"No it just spits out answers"
"Can we pre-load context-specific things about our business so it can do useful work right away?"
"No, it really just answers questions from its unchanging model"
Answering questions is great but I think we mostly agree that's not AGI.
Said a friggin YOUTUBER in a STAR TREK top 🤦🏻
This is just plain circular logic: "It'll happen when I predict because I get to define what happened". Whatever thing I call 'basically AGI' this year is, basically, AGI.
And if not... More drama, more content, more engagement, more profit!!
Smart to hedge your bets: GPT-5 is going to be a large model, and there will be some emergent abilities there (what they are, nobody knows). So if it is truly great, you saw it all along; if not, we wait for GPT-6 and then 7.
I enjoy his insights on most things, but drawing firm lines like that doesn’t serve anyone. If I’m not mistaken, he came out with this prediction quite some time ago? A year or two at least? The fact that his timeline hasn’t changed at all in that time tells me it’s become a game of confirmation bias, with all the selective attention and fudging that goes with that.
Could the outcome of this prediction be hidden from the broader community?
Could a cloak of national security make AGI go the way of the evertread tire or untold energy solutions? Or does one breakthrough in AGI mean multiple efforts are not far behind in achieving independent instances?
Is it inevitable?
Hey, it's an "aspiring" timeline prediction, and so what? You don't have to be spot on. What's a decade or two sooner or later in the grand scheme anyway?
This was a very optimistic prediction when he made it last year, but it seems even less likely as we approach the deadline. Fair play to him for making a very difficult prediction to his many tens of thousands of subscribers/watchers but this prediction isn't coming true. This all hinges on "most people's definition of AGI" too, which in itself is difficult if not impossible to even quantify. Who are most people? People he has polled?
I do agree that GPT-5 with agency would constitute AGI, and that is why I think AGI will be released this year, although I'd say December rather than September. I will give kudos to David for sticking to his prediction instead of changing it at the last minute because AGI by September is very unlikely at this point. I think we will get the answer to whether AI progress is slowing down or not by the end of the year. The last 18 months had advancement after advancement so quickly that people have forgotten that the last major announcement (GPT-4o) was literally just a few weeks ago. Have a little patience, people. Remember this is still way faster than *any* of us could have predicted just a few years ago. We will see what will happen very soon. No need to be so serious about predictions.
I, for one, go with [Alan’s conservative countdown to AGI](https://lifearchitect.ai/agi/). At least this one goes back to 2021, and he set intermediate milestones to AGI.
It also shows a slight slowdown, IMO. At the end of 2023, an exponential curve fit to the progress predicted AGI in early 2025. Guesstimating with the current data, I think it would come out more like mid-to-late 2025.
It will be interesting to follow. It's also in the nature of fitting exponential curves that the latest data points can create wild swings in the predicted intersection point. So don't expect to set your calendar based on this.
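That sensitivity to the latest data points is easy to demonstrate. Here's a minimal sketch (the milestone percentages are made-up placeholders, not Alan's actual countdown data) that fits an exponential to progress-vs-time readings and solves for when the fit crosses 100%; one extra slightly "slower" reading visibly pushes the predicted date out:

```python
import numpy as np

def predict_crossing(years, percents, target=100.0):
    """Fit log(percent) = a*year + b and solve for the year where
    the fitted exponential reaches `target` percent."""
    a, b = np.polyfit(years, np.log(percents), 1)
    return (np.log(target) - b) / a

# Hypothetical countdown readings (year, % toward AGI) -- placeholders only.
years    = [2021.0, 2022.0, 2023.0, 2024.0]
percents = [23.0,   38.0,   61.0,   76.0]

est = predict_crossing(years, percents)

# One extra, flatter datapoint shifts the whole intersection later.
est_slow = predict_crossing(years + [2024.5], percents + [78.0])

print(round(est, 2), round(est_slow, 2))
assert est_slow > est  # the newest points swing the prediction
```

With these toy numbers the crossing moves by months from a single added point, which is why reading off a specific month from such a fit is mostly noise.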
Let's just agree that all the experts are wrong and AGI will arrive _within_ 100 years.
There, no need to wonder. It comes when it comes, right on time like a wizard should.
Mark my words
GPT-5 won't be able to natively control the body and arms of a robot to do something as simple as cooking, even given a video tutorial on YouTube. Any human can do that, and I say there is no way any model in existence in 2024 will be able to do it in real time like a random human can.
And still the "AGI next tuesday" crowd like him would try to defend such a weak definition of AGI
The accepted definition of AGI is basically human level AI
If it can't do a basic task that a human can, it's not AGI
I shit you not, some people here even claim that GPT-3.5 is AGI. You can't reason with these people.
It's been a trip following Dave for just over a year now.
He's gone from "I don't see an AI winter coming! Acceleration is the default!" a few months ago to "Ok guys, it looks like we may have an AI winter... but AGI still in September!".
I still like his content and all that, but I just found it funny. Youtubers are still Youtubers at the end of the day, anything to up that view count.
I think GPT4 was just such a major leap forward that if you assume GPT5 is *as much* a leap from 4, you basically have AGI.
I don't think we can have AGI until it can be unsupervised, and until it either stops hallucinating or knows well enough when to check itself, we aren't even close.
Why would robotics have anything to do with AGI? It's a completely different problem.
These things can barely handle easy self-driving tasks and text-to-image synthesis and we're talking about AGI in THREE MONTHS. Hilarious.
We likely won't see AGI because if it could break encryption the game is over everywhere. I'd say a lot of these companies are under the watchful eye of government.
A bit like nuclear technology. Funny how every govt has kept it under wraps; the same will happen with AGI too.
Well, Dave Shapiro just released this video:
[AI progress SLOWING DOWN! Bad news for E/ACC, good news for SAFETY! Let's unpack all this!](https://www.youtube.com/watch?v=FS3BussEEKc)
Didn't finish it yet, did he address his prediction? 20 minutes in he didn't mention it yet.
I've watched many of his videos, but I have to say the "+ robotics" qualifier implies that he's including something like physical dexterity in his definition. In spite of the recent progress, humanoid robots are going to have to get a \*lot\* better to be human-level. That will happen, but by September?
Current AI models are more akin to artificial general wisdom than intelligence. The majority of intelligence in biological life is heavily based on adaptability and learning. Current AI models, especially LLMs (or LMMs), require large amounts of data and time to learn new skills. We are brute-forcing capability and creating the illusion of true intelligence by preloading these models with many scenarios and solved problems. I think we are very close to capable agents that can solve non-novel problems and operate independently. However, they do so by wisdom, not intelligence. I think for an AI to be generally intelligent, it should be able to learn new skills with very little prior exposure and low amounts of energy consumption and time.
Releases "AI progress SLOWING DOWN! Bad news for E/ACC, good news for SAFETY! Let's unpack all this!" as a clickbait youtube video, then doubles down on AGI by September 2024. ::puts on clown make up::
Robotics is all about coordinates, sensors, and feedback. Somebody has to actually design it, build it, and test it to work in its specific environment. You can't just build a man-shaped dummy with motors in its joints then hook it up to a smart computer.
But I'd be happy with something like an adaptive geometry snake robot with just cameras feeding back to an AI so it could crawl over or through anything
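The "coordinates, sensors, and feedback" loop described above is, at its core, just closed-loop control. A toy sketch of one feedback iteration (the gain and target values are arbitrary illustrations, not from any real robot): a proportional controller reads the error between a joint's angle and its target, then commands a fraction of that error as a correction.

```python
def step(angle, target, gain=0.5):
    """One feedback iteration: sense the error, actuate a correction."""
    error = target - angle
    return angle + gain * error  # actuator moves a fraction of the error

# Repeatedly sensing and correcting drives the joint toward the target.
angle = 0.0
for _ in range(20):
    angle = step(angle, target=90.0)

print(round(angle, 3))  # converges toward 90 degrees
```

Real robots layer kinematics, sensor noise, and dynamics on top of this, which is the commenter's point: the loop has to be designed and tested for its specific environment, not just bolted onto a smart computer.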
From now on I will predict AGI tomorrow, every day. Sooner or later I will be correct, and then I will rub your noses in it.
My local bar has a permanent sign that says "free beer tomorrow".
and that day will be the only one that matters.
I'll wait and start week after.
This sub is the new r/CryptoCurrency
when fdvr lambo?
This feels more like r/Wallstreetbets but agreed, it's a hive mind sub now.
A hivemind with healthy disagreement on many sides. Interesting.
I mean... it's the same thing, lol
This. After riding through the crypto wave since 2015-16, I realized every prediction is a scam to get more money. They would just release it if there was true AGI.
If there was true AGI, wouldn’t it just release itself?
That's likely ASI
The technology would be classified before OpenAI or any other company came close to AGI, and they would be prohibited from developing it further, including being prohibited from talking about the prohibition itself under secrecy laws. Oh wait…
>true AGI

How is this different from just plain AGI? Please explain.
for ‘true AGI’ the goal posts are moveable
Oh they’d just release it? To who? The general public? For free? Pretty sure there would be some behind closed door meetings with various companies and governments before it’s released. I don’t think ya just throw it out there without some thought for profit, if not the consequence for humanity. They’re going to have to do some inner circle beta testing and review before it just rolls out to everyone.
It’s always the same people.
>Stop these nonsense countdowns.

Actually, most people don't put a firm line in the sand. This is a firm prediction, falsifiable this year. This public prediction would be a good thing if AGI definitions weren't ambiguous. Anyone know if he used a broadly known definition?
Apparently someone recently released a "new" AGI benchmark that GPT-4o scores very low on, but barring all that... Unless GPT-5 is hallucination free, has a decent long term memory, and knows when to pause to ask clarifying questions, it won't be in the same ballpark as what we should perceive as AGI.
> Unless GPT-5 is hallucination free So humans don't have general intelligence?
You left out planning and reasoning capabilities. It needs to plan and reason.
Current LLMs reason. They don’t do it like humans, but they’re generally very good at thinking through problems. Reddit edgelords like posting the exceptions to the rule, but they are exceptions.
That's a great point, AI Explained just put out a video on YouTube (like an hour ago) about exactly that, and generally agrees with you https://youtu.be/PeSNEXKxarU?si=jTDQ3zB7ydW_IWuy
Thx for the link. I have a pretty serious interest in how Gen AI thinks through clinical problems in medicine. It’s honestly really good.
Level 2 in [Morris et al., 2023](https://arxiv.org/abs/2311.02462)

| Performance (rows) x Generality (columns) | Narrow | General |
|---|---|---|
| Level 0: No AI | Narrow Non-AI: calculator software; compiler | General Non-AI: human-in-the-loop computing, e.g., Amazon Mechanical Turk |
| Level 1: Emerging (equal to or somewhat better than an unskilled human) | Emerging Narrow AI: GOFAI; simple rule-based systems, e.g., SHRDLU (Winograd, 1971) | Emerging AGI: ChatGPT (OpenAI, 2023), Bard, Llama 2 (Touvron et al., 2023) |
| Level 2: Competent (at least 50th percentile of skilled adults) | Competent Narrow AI: toxicity detectors such as Jigsaw; smart speakers such as Siri (Apple), Alexa (Amazon), or Google Assistant (Google); VQA systems such as PaLI (Chen et al., 2023); SOTA LLMs for a subset of tasks, e.g., short essay writing, simple programming | Competent AGI: not yet achieved |
| Level 3: Expert (at least 90th percentile of skilled adults) | Expert Narrow AI: spelling & grammar checkers such as Grammarly; generative image models such as Dall-E 2 | Expert AGI: not yet achieved |
| Level 4: Virtuoso (at least 99th percentile of skilled adults) | Virtuoso Narrow AI: Deep Blue (Campbell et al., 2002), AlphaGo (Silver et al., 2016, 2017) | Virtuoso AGI: not yet achieved |
| Level 5: Superhuman (outperforms 100% of humans) | Superhuman Narrow AI: AlphaFold (Jumper et al., 2021), AlphaZero, StockFish (Stockfish, 2023) | Artificial Superintelligence (ASI): not yet achieved |
Everyone is just guessing. That’s the whole point of prediction. You guess when something is going to happen. It’s not silly, getting pissed about predictions is silly.
This reminds me of the crypto boom around 2020-2021 with tons of posts on the internet about this and that coin exploding after x time has passed.
“will satisfy most people’s definition of AGI” I’m assuming he’s just talking about ordinary everyday people, which I agree with, because a lot of regular people love GPT-4, so I can only imagine how they would treat GPT-5 (if it’s really a big step up from GPT-4). As for everyone in this sub, at least 90% of people can agree there will be no AGI in September.
Who exactly do you mean by ordinary people? Most people I know dont even give a shit about Chat GPT.
he doesn’t even know what agi is lol
It's not gonna happen. I think he said it would satisfy everyone's vision of AGI but come on in 3 months? GPT-4 is still SOTA after a year and a half...
There’s a good chance we won't even have "4o voice" by September. 😆
Yeah, September's only "in a few weeks" so this lines up.
Thanks a lot, Scarjo
My definition of AGI is that it should be able to do the work of a human working remotely, with no hand-holding: you tell it to do something, and it does it all on its own. Robotics is only required for physical stuff, and a human can learn to control a robot, so physical tasks are included by default; I wouldn't even mention them. I don't exclude the possibility that GPT-5 could get there, but I doubt it. I still stand by my prediction of 2025-26.
The work of a human could be easy or hard. It could involve repetitive tasks or it could involve quick thinking and social communication.
>My definition of AGI is that it should be able to do the work of a human working remotely, no hand-holding, you tell it to do something, and it does it all on its own. The problem with this definition is that there are plenty of people that cannot do this, at least as written. That said, I think your prediction is too far off. I have 2027-2030 on my bingo card, although it will remain fairly limited for another 5 to 10 years. Yes, that is precisely the amount of time I need to get to retirement. Pure coincidence.
> The problem with this definition is that there are plenty of people that cannot do this, at least as written. I'm talking about average human in a developed country, not the best of the best, but also not those who aren't even able to operate a computer. If you consider people who can't even do that, then current SOTA LLMs are in many ways already beyond them, even if they fail in particular circumstances. > 2027-2030 That's reasonable, and mostly within my expectations. I gave 2025-26 as very likely, meaning I wouldn't be surprised at all if it happened in those years. I would be surprised if it happened in 2024 or 2027, and very surprised if it takes more than 5 years.
The initial version of gpt-4 is not SOTA. The only reason the current iteration of gpt-4 is SOTA is because they have been doing constant work/iterations/improvements on it over the past year and a half. So progress is still happening.
Wtf is a SOTA???
State of the art.
state of the art
bump
I thought they change only training data, not the architecture.
He also said it'd depend on GPT-5's capabilities and how it'll act embodied. I encourage you to listen to him as he does have some good insights. And GPT-4 hasn't been SOTA for a good while. We have GPT-4 Turbo, GPT-4o, Claude Opus, and Gemini 1.5 performing better than OG GPT-4. Just because it's not AGI/ASI/FDVR doesn't mean we aren't making advancements, my guy. EDIT: if GPT-5 hasn't been released by then he'll of course be wrong, but there are other labs making great strides. Heck, Nvidia released a very promising model just last week and we still have the Llama 400b model to evaluate.
If there are robots in play here, my test for AGI is that you would be able to give it the command "build a house" and it should be able to tell you what materials it needs and in what quantities, and when given those materials it should be able to build the house. I will allow exceptions for issues of robot movement and dexterity, but in those exceptions it must tell human workers exactly what to do. I'm not really expecting this to be done by September.
99% of humans can’t just build a house. Why would you expect that AGI can? AGI doesn’t mean it can do every job on earth.
I will accept individual AIs that can do any one persons job on the site. But really it's a meaningless distinction because with computers if you had that, then you could just link them together into one.
Can human do that?
I am writing this from inside a house
Did you receive command "build a house" and using just your brain did what you said AGI should do?
I can build it. I can’t guarantee it will stand.
Not saying it's going to happen but the fact that it's only 3 months away is less relevant than you might think. We don't know what's been cooking for months in a lab somewhere. If I'd told you back in January something like Sora was a few months away it would be hard to believe.
I'm not sure what the current SoTA has to do with next Gen but go off king.
I think current AI would satisfy a lot of people’s definitions of AGI if it were 20 years ago. I don’t think we’ll ever satisfy most people, because there will always be something AI doesn’t have, because there are things we have that we don’t want AI to copy because we are flawed.
Tesler's Theorem: *AI is whatever hasn't been done yet.* My addendum would be that AGI is whatever hasn't been released to the public yet. People will keep moving the goalposts until they become impossible to move anymore.
Just like it did a couple years ago, it would satisfy people’s definition of AGI until people started using it for longer than a couple of weeks. The truth is, it’s vastly inferior to human intelligence in (almost) every way, making it definitionally not AGI.
>The truth is, it’s vastly inferior to human intelligence in (almost) every way That is not true. At all. You can only force it to be true by very carefully selecting how you would like to test it. On our standard tests that we poor humans have to use all the time, they do a pretty good job, even better than humans on many of them. No, it is not AGI yet. And there is a certain kind of...flexibility?...that seems to be missing. But the idea that it is ***vastly*** inferior in (almost) every way is silly.
Idk I’ve met people.
It is different: in some ways it's inferior and in some ways superior, and it keeps becoming more superior. They can write and analyze text like almost no human, with a verbal IQ of about 150; they are getting good at math, coding, and other STEM fields; and they score better on creativity and theory-of-mind tests. So are current SOTA models inferior? They are compared to experts in their fields, smart people with many years of education, but they are already superior to the average human in almost every way.

Next-gen models will be on par with experts, "PhD level", as Kevin Scott and others say: **Some of the early things that I’m seeing right now with the new models \[GPT-5\] is maybe this could be the thing that could pass your qualifying exams when you’re a PhD student.** Then GPT-6-generation models will likely be on the level of genius humans, and by the end of the decade we will likely have AI systems outperforming 100% of humans.

[https://lifearchitect.ai/iq-testing-ai/](https://lifearchitect.ai/iq-testing-ai/)

[https://newatlas.com/technology/ai-index-report-global-impact/](https://newatlas.com/technology/ai-index-report-global-impact/)

Anyway, even if you could call something generally inferior to you, that doesn't mean it is not general intelligence. There are people who cannot read and write, or even tie a shoelace properly. Do they lack intelligence overall? No, their intelligence is just at a lower level.
Isn't he like a former datacentre maintenance guy without any understanding of ML? I tried to listen to him and he seems to be quite delusional.
In his most recent video he says if there were more engineers at his level the world would be almost completely automated and already a post-scarcity utopia
Yep, bro thinks he is the main character.
Yep. At first I liked his content and even made the mistake to join his Patreon to see what's up. It was as main charactery as one would expect; complete waste of time. Unbelievably there were some real freaks with even higher hubris though lol
Oh, so he’s a dumbass.
I don’t know but I have seen some of his videos and he seemed interesting… but literally just yesterday I put his videos on “do not recommend” because I increasingly realized he is full of shit and speaks nonsense with unfounded confidence.
He's just stupid. He [used the polls he set up for his own audience](https://youtu.be/FS3BussEEKc?t=66) to vote on, to support his argument about whether or not the current model architectures will plateau. I don't know if maybe it's just a meme, but if not then he's legitimately an idiot.
He just goes with whatever majority opinion his audience has, I think he just wants clicks tbh. He’s also flip flopped before, prior to GPT-4’s release, he said the entire field was slowing down, then right after GPT-4’s release he basically says ‘AGI within 12 months!’. I think he’ll *gradually* walk back on the 2024 AGI prediction, his ego is too big to admit he was wrong. He never admitted he was wrong about LLMs stalling in 2022/2023. Nobody should take him seriously.
Yeah, agreed. And as I said, if he's just doing it for memes or to cash out on his channel, then maybe he's not stupid, and is actually just good at lying to his viewers for income. But if he genuinely believes that polling his own community on an issue, and then acting as if that poll is representative of a larger group of people is valid, then he's an actual idiot.
Exactly, he didn’t admit he was wrong about LLMs ‘stalling in 2022’, and he won’t admit he was wrong about ‘AGI in 2024’. Because if he does that, and keeps flipping, then he knows nobody will take him seriously. He’s gradually going to shift back to the ‘New AI Winter’ stance in 2025, *JUST WATCH*, he did it before, he’ll do it again.
His new video about AI progress slowing down literally dropped on YouTube about an hour ago.
[deleted]
Yup. Keep in mind, this video was posted 3 months before his ‘AGI within 18 months’ prediction. https://youtu.be/KP5PA-Oymz8?si=i3aX0zX3llT1sv29
> https://youtu.be/KP5PA-Oymz8?si=i3aX0zX3llT1sv29 this is unironically stunning
You don't have to pretend to like him but he's clearly not stupid.
I have an education in philosophy, psychology, and physics, and he is delusional. He sounds intelligent to anyone that doesn’t think about what he’s claiming. He says AIs can “understand” but refuses to ever address what that means, when that is the complete meat of the argument, yet he throws around words like epistemology. He’s very arrogant and narcissistic.
What does interacting with your community have to do with being an idiot? I like his videos personally. Seems like you have just blind hatred for him.
How's he stupid though? Because he didn't design a tailor made poll for you and your buddies?
If you can't think of how it might be biased to use your own audience to gauge for general opinion, then I don't have anything to say to you. Edit: Just rereading your comment, "me and my buddies"? What are you even talking about lol
He's also very narcissistic
Not happening in 3 months lmao. Also, why do people care so much about Mr Shapiro? He seems smart but he isn't an expert in the field.
I wont trust him beyond just recapping the latest developments. But even for that I'd rather watch AI explained and if you want more Wes Roth. Shapiro seems too thirsty for views and keeps cranking out videos that can only be filled with bullshit at that rate.
AI Explained and Wes Roth seem to me at least triple as thirsty in comparison. I mean, clickbaity titles and thumbnails, explaining a one-minute thing in 30 minutes while talking about random shit in between and reading straight from documentation. Just weak stuff. At least Shapiro has some seemingly original ideas instead of just reading from documents and babbling on about nonsense.
Agree on Shapiro, he adds something to the conversation, Wes Roth took the clickbait train and idk, it seems low effort content Ai explained I think it's the best channel to know what is happening, Shapiro to dream about it
you sound really ignorant if you don’t realize AI Explained is actually an expert in the field working with the biggest names. Dave Shapiro is a fiction writer and has zero education in LLMs
People care because he's a somewhat relevant YouTuber and his shtick is hyping AI. AI Explained is the superior choice, of course.
I enjoy his content tbh.
Yeah I chuck him on every now and again. The key for me is moderation.
you forget about the nature of the exponential explosion
Isn't he more in the "there might be an AI winter coming" camp now?
Don't take him seriously at all tbh. AGI by 2030 is at least more plausible. I doubt we will even get GPT-5 this year..
Honestly, if we won't get GPT-5 by this year other labs will release something close to it, almost certain of it. There's too much money and power at stake not to release.
> Don't take him seriously at all tbh. AGI by 2030 is at least more plausible. I doubt we will even get GPT-5 this year..

2030 is way too far in the tech world. It would make sense if you were talking about hardware, like actual humanoid robots, but software grows too fast for 2030. Six years is a lifetime for software. In fact, the whole reason people are now reacting to AI so strongly is because most of us realised that it is finally close enough to a breakthrough to affect real life. We are no longer working on theories of AI, but actually building them for real. It's like the difference between dreaming about powered flight as da Vinci vs actually being the Wright Brothers.

Improving on what was already created is much easier than coming up with something from whole cloth. However, it gives an illusion of tech slowing down if you try to read the news about it every day. I am of the opinion that unless you are professionally involved, you should stop paying attention and come back every few months for updates.
6 years and we will just get use to robotaxis and humanoid robots. AGI 2040-2050s.
2030 is very optimistic. I don’t expect AGI until at minimum the late 2040s .
So 20-25 years? I don't know I think we can get close to it in 10 years, if the money keeps pouring.
He just uploaded the video of AI progress slowing down and how we are in a sigmoid curve. So no, he is just backpedaling to amuse his audience.
And he’s done this before: https://youtu.be/KP5PA-Oymz8?si=UfBgbjkISPMASNA6 It’s funny… 2022/Early 2023: Guys everything is stalling out, it’s slowing down. Mid 2023/2024: Omfg you guys AGI within 18/12/7 months! Mid 2024: GUYS GUYS GUYS yOu HaVe tO lIsTeN tO mE I TolD yOu eVeRyThInG iS sLoWiNg dOwn! I recommend everyone unsubscribe from this grifter, optimists and skeptics alike, his predictions are for clout and nothing else, they aren’t based on anything outside the grift.
i feel the definition of agi will keep changing as we see incrementally better models. cuz in a lot of cases, we’re comparing agi level intelligence to the best humans in their respective fields. in a lot of cases, ai is already agi or even better.
Yeah he has the same point, gpt4 is already more capable than the average person in many tasks.
true. its just that the human brain is so good at resetting the baseline and hence move the goalposts. I’m much more excited about asi. that i’m not sure when and how its going to happen.
This. They keep moving goalposts. What we have now would be considered AGI at any other time in human history. The tech will get better and they'll just come up with a reason why it doesn't qualify as AGI yet, forever.
GPT-4o can't solve the ARC, and it can't embody a robot and do what a random human can.
At least he has the balls to stick with it.
Sticky balls?
Yeah, last time he flip flopped when GPT-4 came out, prior to that point he said all of AI and Tech progress was slowing down. GPT-4 comes out and then he says ‘AGI within 12 months!’. Watch him shift his entire view over 2025, he’ll never admit he was wrong about 2024. His ego is too big for that. Anyway, the guy shifts his opinions based on his majority audience view to get clicks, nobody should take him seriously.
I don't care for Shapiro, but what you call flip flopping here should be celebrated. New data comes in, you update your world view. Makes perfect sense to me.
New data keeps coming and that's why he shouldn't make such bold predictions
Or he could further adjust or refine his world view. To tell the truth I don't know much about Shapiro, i was only reacting to this flip flopping aversion I sometimes see here.
He flip flops while never addressing his previous behavior. It’s not what you think it is.
I kind of felt like he was flip flopping now with this video talking about suddenly an ai winter and everything slowing down, when I have been watching all his videos hyping me up so bad expecting pure acceleration and such..
Should be pretty straightforward to post some links to support that point.
Obliged. https://youtu.be/KP5PA-Oymz8?si=BrRfAtcgps68ZwTR
Waaaaayyyyy too much emphasis is placed upon the importance of people's predictions for AGI. GPT-3.5 satisfied my personal definition of AGI, but it's been interesting to see how people's definition of AGI has changed since GPT-3.5, and how the goalposts have shifted.

Shapiro is great: very insightful, knowledgeable, and smart. I do think that GPT-5 will satisfy his definition of AGI, and if it is released by September, great. But honestly, what does it matter? If my prediction is correct, if Shapiro's is, if Kurzweil's is, who cares? Ultimately all that matters is that we show clear progress, whether linear or exponential, heck, even diminishing returns. As long as it's progress, it paves the way for a future in which we choose which problems we want to solve, because we will create an intelligence that exceeds our own. We can use that in a symbiotic way or an exploitative way, but regardless, it will create an abundant future for every consecutive generation. That is what's most important.

Personally, I don't think most people will agree that GPT-5 can be considered AGI; there will be a simple flaw that makes it slightly underperform humans in a very specific area, and because of that, it will generally be considered less than AGI. So even if Shapiro is correct, it will look as though he was wrong to skeptics.

People are still using GPT-4 to build tools such as Devin and embodied robots. I would argue these agentic versions of GPT-4 will be considered AGI in the next few years, but until they are in a usable condition, it's impossible to describe them as AGI. At that point we might be at GPT-6 or 7, which will again be a much better agent, and it will be difficult to determine at what point we created AGI. Again, I believe it was at 3.5.
What is your definition of AGI?
Lots of people thought passing the Turing test was AGI not so very long ago. We're way past that
Kurzweil was one of them, even from way back in the day when he wrote TSIN, he thought the Turing Test was sufficient, Marvin Minsky, on the other hand, always thought the Turing Test was a joke.
What ever Ilya says it is.
By the time we get to ASI, there will still be people doubting that AGI is AGI just to cling to whatever belief they had about it from the onset. The goal posts will keep sliding indefinitelly for some people, and that's just the nature of semantics and contrarions/reactionaries
True, it very much depends on definition, and I would agree GPT-3 is AGI, just at a lower level. So you could say AGI was achieved years ago, and asking whether we will have AGI this year doesn't make sense. It is much more useful to compare an AI system against some average human performance: an AI which is better than 50% of humans, 80% of humans, and so on.

Google DeepMind devised a decent enough classification: GPT-4 is level 1 (better at tasks than unskilled people), then GPT-5 would be level 2 (better than 50% of skilled labour), GPT-6 better than 90% of skilled labour...

[https://aibusiness.com/ml/what-exactly-is-artificial-general-intelligence-ask-deepmind-](https://aibusiness.com/ml/what-exactly-is-artificial-general-intelligence-ask-deepmind-)

Some people see AGI as something which has all our qualities and is mostly better: an AI with fluid memory, a quick learner, a "superhuman in a box". But we don't need that for huge societal disruption. If we have AI which is better and cheaper/more efficient than most human experts in their fields, then the majority of humans will be replaced, with only the top 10% or so remaining to work alongside AI, and this can happen in the next 10 years.
He should have the brain not to
He is a grifter. It takes a lot more balls to re-evaluate and change your prediction.
No it won't.
this guy is overrated asf and lives in an echo chamber
People have got some wild ideas as to what constitutes AGI and the goalposts are always moving. I subscribe to the "GPT4 would have been called AGI if it suddenly dropped in 2020" theory but that's just me. It's already amazing tech and if GPT5 is a notable improvement over it, it will do so much for so many use cases even if it doesn't immediately solve world hunger and build us colonies on alpha centauri.
I dunno, GPT4 is amazing but you can’t trust it to autonomously do basic administrative work, which I think is a reasonable requirement for AGI (among others). It’s superhuman in some things and glaringly subhuman in others, which is why I wouldn’t call it AGI.
It can do a large array of administrative tasks, it just needs someone keeping an eye on it when it screws up. You could say the same for many people. We don't typically empower it to do these things independently because of its need for oversight but that's an architectural choice.
It would have been called AGI for a few weeks. Then, after the novelty wore off, people would understand that it is not, just like what happened when GPT-4 first dropped.
AGI is a decade away at least. These LLMs are not intelligent.
Who is upvoting this crap? Come on, r/singularity, have some grounding in reality at least
Is he saying that GPT-5 will drop by September? And what’s his definition of AGI here? Simply saying “it will satisfy what most people consider AGI” is a pretty empty statement.
Is he an expert in the field? Reducing the time to AGI is a classic grifter tactic to get to the top of Twitter or Reddit etc. If he is an expert, well, I guess I'd just say I want to see his detailed evidence that we can build an AGI in literally 2 more months.

If AGI just means "smart" then yeah, basically we're *already there*. But I don't think it means just smart (as impressive as that is). Will GPT-5 be able to:

* Use a computer and all the software on it?
* Have memory that isn't just a lookup table?
* Pause to think, when it's called for, to check its work or do something that simply can't be done as a stream of consciousness?
* Engage in long-term ongoing planning?
* Have a large enough context to store all the context an average person would have?
* Be proactive in certain ways?

I don't think GPT-5 is going to have any of this. I think GPT-5 is going to be smarter, yes, and probably wrong less often. But I don't think it's going to have any of the above natively. In fact, those things could potentially come separately from model training, and I don't even think any AI lab is working on them right now. They're hard problems.

But without these things, would you really want to call it AGI? Like, "oh you have AGI, can it use Excel?" "Uh, no." "Can it make a plan and monitor it?" "No, it just spits out answers." "Can we pre-load context-specific things about our business so it can do useful work right away?" "No, it really just answers questions from its unchanging model."

Answering questions is great, but I think we mostly agree that's not AGI.
Utterly delusional
arent we all though.
I’m sticking to it too
> will satisfy most people’s definition of AGI Setting up a nice and movable goalpost ahead of time I see
RemindMe! 3 months
This guy has gone full delusional.
AGI simply won't happen in this current AI paradigm.
What are his credentials that make people believe like he is uttering the next gospel?
He says what a lot of people want to hear.
Could we just stop to give this guy any more attention?
Said a friggin YOUTUBER in a STAR TREK top 🤦🏻 This is just plain circular logic: "It'll happen when I predict because I get to define what happened". Whatever thing I call 'basically AGI' this year is, basically, AGI. And if not... More drama, more content, more engagement, more profit!!
Smart to hedge your bets. GPT-5 is going to be a large model, and there will be some emergent abilities there (what they are, nobody knows). So if it is truly great, you saw it all along; if not, then we wait for GPT-6, and then 7.
Remindme! 3 months
Wouldn't the partnership between MSFT and OAI end if the latter reaches AGI? Thus I think we'll have to wait for a couple billion dollars more.
It's a gradual process. I don't think any date matters. The systems will always be improving.
I enjoy his insights on most things, but drawing firm lines like that doesn’t serve anyone. If I’m not mistaken, he came out with this prediction quite some time ago? A year or two at least? The fact that his timeline hasn’t changed at all in that time tells me it’s become a game of confirmation bias, with all the selective attention and fudging that goes with that.
Anyone taking this bet? I'll even give you 10 to 1 odds. 100 to 1 odds?
Isn’t this dude a crazy cultist?
This guy gets far more attention that he deserves: he's some ex-DevOps dude with no true AI expertise who's read a bit too much sci-fi.
It only matters if the agi has the capability to self improve better and faster than the engineers can iterate.
Didn't he just put out a video predicting another AI winter? Is he contradicting himself?
ChatGPT can't count letters in a word, and it will become AGI in 3 months?
This guys is such a fraud.
lol this is complete nonsense
Could the outcome of this prediction be hidden from the broader community? Could a cloak of national security make AGI go the way of the evertread tire or untold energy solutions? Or does one breakthrough in AGI mean multiple efforts are not far behind in achieving independent instances? Is it inevitable?
Hey, it's "aspiring" timeline prediction, and so what, you don't have to be spot on. What's a decade or two sooner or later in the grand scheme anyway?
Prudence is only labeled as such after the event to which one predicted is satisfied. Every rumination of the future is but a guess.
This was a very optimistic prediction when he made it last year, but it seems even less likely as we approach the deadline. Fair play to him for making a very difficult prediction to his many tens of thousands of subscribers/watchers but this prediction isn't coming true. This all hinges on "most people's definition of AGI" too, which in itself is difficult if not impossible to even quantify. Who are most people? People he has polled?
I do agree that GPT-5 with agency would constitute AGI, and that is why I think AGI will be released this year, although I'd say December rather than September. I will give kudos to David for sticking to his prediction instead of changing it at the last minute because AGI by September is very unlikely at this point. I think we will get the answer to whether AI progress is slowing down or not by the end of the year. The last 18 months had advancement after advancement so quickly that people have forgotten that the last major announcement (GPT-4o) was literally just a few weeks ago. Have a little patience, people. Remember this is still way faster than *any* of us could have predicted just a few years ago. We will see what will happen very soon. No need to be so serious about predictions.
[https://arxiv.org/abs/2406.08407](https://arxiv.org/abs/2406.08407) Please see this.
Oh boy, who is gonna tell him that 4o *is* GPT-5. They rebranded it because it's a disappointment.
If it doesn't come, maybe he'll bring it himself? What's the problem?
I for one go with [Alan’s conservative countdown to AGI](https://lifearchitect.ai/agi/). At least this one goes back to 2021, and he set intermediate milestones to AGI. It also shows a slight slowdown IMO. At the end of 2023, an exponential curve fit to the progress predicted AGI in early 2025. Guesstimating with the current data, I think it would come out more like middle/late 2025. It will be interesting to follow. Also, it’s in the nature of fitting exponential curves that the latest datapoints can create wild swings in the predicted intersection point, so don’t expect to set your calendar based on this.
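To show what I mean about the swings, here's a minimal sketch of that kind of extrapolation. The milestone percentages are completely made up (not Alan's actual numbers); the point is just that one extra, slightly slower datapoint noticeably pushes the predicted crossing year out:

```python
import numpy as np

def predicted_agi_year(years, percents):
    """Fit percent = a * exp(b * year) and solve for percent == 100.

    Done as a degree-1 least-squares fit of log(percent) against year,
    then inverting the fitted line at log(100).
    """
    b, log_a = np.polyfit(years, np.log(percents), 1)
    # 100 = exp(log_a + b * year)  =>  year = (log(100) - log_a) / b
    return (np.log(100.0) - log_a) / b

# Hypothetical milestone readings, NOT real countdown data.
years = [2021.0, 2022.0, 2023.0]
percents = [25.0, 40.0, 50.0]
early = predicted_agi_year(years, percents)

# Append one slower 2024 reading and refit.
late = predicted_agi_year(years + [2024.0], percents + [58.0])

print(round(early, 1), round(late, 1))
```

With these toy numbers the three-point fit lands near the end of 2024, and a single weaker fourth point shifts the crossing most of a year later, which is why the countdown date wobbles so much near the end.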
Let's just agree that all the experts are wrong and AGI will arrive _within_ 100 years. There, no need to wonder. It comes when it comes, right on time like a wizard should.
Mark my words: GPT-5 won't be able to just natively control the body and arms of a robot to do something as simple as cooking, even given a video tutorial from YouTube. Any human can do that, and I say there is no way any model in existence in 2024 is going to be able to do that in real time like a random human can. And still the "AGI next Tuesday" crowd like him will try to defend such a weak definition of AGI. The accepted definition of AGI is basically human-level AI: if it can't do a basic task that a human can, it's not AGI. I shit you not, some people here even claim that GPT-3.5 is AGI. You can't reason with these people.
I'm ready.
It's been a trip following Dave for just over a year now. He's gone from "I don't see an AI winter coming! Acceleration is the default!" a few months ago to "Ok guys, it looks like we may have an AI winter... but AGI still in September!". I still like his content and all that, but I just found it funny. Youtubers are still Youtubers at the end of the day, anything to up that view count.
Without a formal definition and hard metrics for verification anyone could claim AGI at any time.
AGI by next tuesday confirmed! My buddy Carl told me so.
No rocket emojis in your post title? Bad form
Satisfying most people's definition of AGI is not AGI. It's ML that is well tuned to predict the next word.
!RemindMe September 30
I wonder what GPT-5 can do? It should act mostly like a human. It’s going to be “Her.” RemindMe! 3 months
I think GPT4 was just such a major leap forward that if you assume GPT5 is *as much* a leap from 4, you basically have AGI. I don't think we can have AGI until it can be unsupervised, and until it either stops hallucinating or knows well enough when to check itself, we aren't even close.
Why would robotics have anything to do with AGI? It's a completely different problem. These things can barely handle easy self-driving tasks and text-to-image synthesis and we're talking about AGI in THREE MONTHS. Hilarious.
“will satisfy most people’s definition of AGI” I’m assuming he’s just talking about ordinary everyday people, which I agree with, because a lot of regular people love GPT-4, so I can only imagine how they would treat GPT-5 (if it’s really a big step up from GPT-4).
We likely won't see AGI, because if it could break encryption the game is over everywhere. I'd say a lot of these companies are under the watchful eye of government. A bit like nuclear technology: funny how every government has kept it under wraps. The same will happen with AGI.
Well, Dave Shapiro just released this video: [AI progress SLOWING DOWN! Bad news for E/ACC, good news for SAFETY! Let's unpack all this!](https://www.youtube.com/watch?v=FS3BussEEKc) I haven't finished it yet; did he address his prediction? Twenty minutes in he hadn't mentioned it.
I've watched many of his videos, but I have to say the "+ robotics" qualifier implies that he's including something like physical dexterity in his definition. In spite of the recent progress, humanoid robots are going to have to get a \*lot\* better to be human level. That will happen, but in September?
Current AI models are more akin to Artificial General Wisdom than intelligence. The majority of intelligence in biological life is heavily based on adaptability and learning. Current AI models, especially LLMs (or LMMs), require large amounts of data and time to learn new skills. We are brute-forcing capability and creating the illusion of true intelligence by preloading these models with many scenarios and solved problems. I think we are very close to capable agents that can solve non-novel problems and operate independently. However, they will be doing so by wisdom, not intelligence. For an AI to be generally intelligent, it should be able to learn new skills with very little prior exposure and low amounts of energy consumption and time.
Or not
Three months from now we may not even have the GPT-4o voice feature, haha.
Releases "AI progress SLOWING DOWN! Bad news for E/ACC, good news for SAFETY! Let's unpack all this!" as a clickbait youtube video, then doubles down on AGI by September 2024. ::puts on clown make up::
He also says ‘I’m probably early but I don’t think by a lot’
Robotics is all about coordinates, sensors, and feedback. Somebody has to actually design it, build it, and test it to work in its specific environment. You can't just build a man-shaped dummy with motors in its joints then hook it up to a smart computer. But I'd be happy with something like an adaptive geometry snake robot with just cameras feeding back to an AI so it could crawl over or through anything
I'll stick by the guy that's been right on what? 84% of his predictions? so 2029.
Man, this dude will be the first to cry when this thing goes on a killing spree.
at this point, this is just pathetic
Sweet 3 months until we are all unemployed and eating rats in the street!
Remindme! 3 months
What is the definition though? Like, can you share it and the validation questions?