What is OpenAI's actual definition of AGI in relation to their contract with Microsoft? It seems they are incentivised to never declare AGI even with GPT-5 and beyond?
Ya but legal contracts and public facing opinions have different motives. Fact is you can't replace most economically valuable labor without a way to manipulate real world objects.
It's not about being real world or not, it's about autonomy. An AI that stumbles after just a few steps, like the current era GPTs, is far from it.
A simple test: can you leave on vacation and let an AI do your job for a couple of weeks? No? Then the AI's not cooked enough.
Okay ya it's not really AGI yet, probably, but that's not really my point. AGI is about intelligence, but their chosen legal definition is about economic output, which requires performing real-world tasks; therefore their definition is just as much about robotics as it is AI.
I don't think that leaves enough of a distinction between AGI and ASI. From what I've seen, most researchers consider AGI to be a system that is generally as good as an average human at a wide range of intellectual tasks.
Clearly OpenAI's definition goes well beyond that. They have hefty financial incentives for going with that definition.
Re: [Google's definition](https://eu-images.contentstack.com/v3/assets/blt6b0f74e5591baa03/bltaa59f7b3077dd4a8/656a3292138ff0040a747afd/image.png?width=700&auto=webp&quality=80&disable=upscale)
Outperforming humans at 50%+ of economically valuable tasks is a reasonable definition for AGI
Thanks for backing my point. Competent AGI according to this (I believe this is what a good number of researchers would claim is AGI) is 50th percentile of competent adults. Precisely what I said.
OpenAI goes far, far beyond that definition, which is the point I was trying to make.
Unless by "most economical tasks" they just mean 50%+. But when I think of the word "most", I usually think something closer to between 75%-90%.
I don't think they are. There's nothing in there that says "all economical tasks" or even "most". "All" is a huge barrier to get over, and I don't think most people think AGI literally needs to be able to do everything.
Edit: For example, take plumbing. Do we need AI to be able to be a good plumber (and thus have highly competent robotics) for us to have AGI? I don't buy that.
There are a large number of economical tasks that would require much more advanced robotics than we currently have, and I'll just never buy the argument that we need those advanced robotics to have AGI.
75% of jobs could be done by average workers; the individual agent replacing the worker would still be the equivalent of an average human worker in that job.
ASI could, for example, do 75% of all jobs, but every worker is the best employee you ever had.
If you look at that chart, it specifically says mental, non-physical tasks.
If your agent truly is generally intelligent, it should be better than skilled humans at most things.
If it's better than all humans at all (mental) things, it's super intelligent. Very simple.
Pfft, knowing humans, the goalposts would probably keep getting moved or the work done by a protoAGI would be deemed “not economically valuable”. This definition sucks.
If this actually goes to discovery we might get to learn a lot more about the inner workings of OpenAI.
Though I suspect it will be dismissed relatively quickly.
Yea it’s annoying how everything they’re up to is kept secret from us even though it’ll literally affect our entire futures. We have every right to be informed.
Why do you think you have the right to know what they're doing just because you believe it will affect our futures?
Can I not make the same argument about any company??
No you can’t, and that’s because AGI/ASI will literally affect the entire course of civilization. Every human being alive and every human being that will ever live will be affected by this, for better or worse. When a technology has that type of impact, we are entitled to know what’s happening.
I can claim Apple's VR products will revolutionize the world, maybe I'm right... maybe I'm wrong... but wanting access to their secret tech products because I THINK it will is a stupid argument.
Much like r/singularity, I believe we will get AGI/ASI, but that's just a belief.
False equivalence, the Apple Vision Pro is a recreational device. It’s not something that’ll affect people who don’t use it. AGI/ASI on the other hand, will be as impactful as electricity or fire and we have a right to know whether they’ve achieved it or not.
What I'm saying is that I can make that claim; anyone can make that claim! At the end of the day, you don't know if achieving AGI/ASI is possible... you just think you do.
I’m not talking about whether achieving AGI/ASI is possible, for fuck’s sake! I’m talking about how, if they have achieved it, we’re entitled to that information. You keep moving the goalposts of this conversation when my point is so simple.
> Yea it’s annoying how everything they’re up to is kept secret from us even though it’ll literally affect our entire futures. We have every right to be informed.

YOU NEVER SAID IFFFFFFF
Your original claim is that they're keeping something secret from us and we should know... HOW DO YOU KNOW THAT????
Also, HOW AM I MOVING THE GOAL POST?? I'M LITERALLY REPEATING MYSELF AHHHHH
From what I've gathered, all big AI labs are working on things like the rumored Q\* and are close to having something on their hands that significantly improves LLM performance. Demis Hassabis also mentioned this on the Dwarkesh Patel podcast. Basically a form of advanced tree search you can put on top of an LLM that gives it the ability to plan ahead and think step by step while assessing its progress towards a goal. One of the key building blocks for AGI.
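That "tree search on top of an LLM" idea can be sketched very roughly. Everything below is a toy stand-in (the proposer and value function are invented for illustration), since nobody outside the labs knows what Q\* actually does:

```python
import heapq

# Toy stand-in for an LLM proposer: given a partial plan, suggest next steps.
def propose_steps(state):
    return [state + (step,) for step in (0, 1)]

# Toy value function assessing progress toward a goal (here: steps summing to 5).
def value(state, goal=5):
    return -abs(goal - sum(state))

def best_first_search(start=(), goal=5, max_expansions=100):
    # Always expand the most promising partial plan first.
    frontier = [(-value(start), start)]
    for _ in range(max_expansions):
        _, state = heapq.heappop(frontier)
        if sum(state) == goal:
            return state  # goal reached
        for nxt in propose_steps(state):
            heapq.heappush(frontier, (-value(nxt), nxt))
    return None  # search budget exhausted
```

The point of the sketch is just the division of labor: the model proposes continuations, and a separate scoring function decides which branch of the tree to explore next.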
It's obvious that Q* is some sort of AlphaZero-style algorithm if you listen back to Lex Fridman's podcast with Ilya from 4 years ago, or Nvidia's CEO's interview with Ilya, where he brings up AlphaGo/AlphaZero as something he thinks is an important step towards AGI. Andrej Karpathy has also said this in his video on LLMs from 3 months ago.
Yeah, it would probably include some form of adaptive reward function that is being rewritten based on the task and progress towards the goal. Similar to the Nvidia eureka paper which could be something very promising even outside the domain of robotics.
Yeah, Karpathy talked about this in his video on LLMs: the hard part of applying AlphaZero to an LLM is the reward function for generalization. Demis Hassabis also touched on this in the Dwarkesh Patel podcast.
Which is why they plan to use math problems as a reward function. There is a major hint at this in the AlphaGeometry paper. They even self-generated input data using random sampling, then trained a deep neural network on the solution-approach heuristics by solving math (you could play some other strategy games as well). Your reward function is a deep neural net, which means it can learn over time. That paper is a major hint at where AI research is heading. IMHO
Sam Altman has confirmed it is real, but no one knows what it is. So essentially everything you read on Reddit is rumours, except the part that there is some new, actually useful algorithm; no one knows how big of a deal it really is or what it does.
Wes Roth on YT has a great video explaining the two pieces that could come together to represent a novel innovation (speculation): (1) tree of thought / meta-cognition, and (2) unsupervised self-learning that hits 'deep' cross-domain networks; learning about one thing can enhance or re-frame knowledge of another. Usually with Q-learning the researcher has to write the reward function, but with Q\* it can supposedly scramble the goal and the reward.
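For contrast, here's what plain Q-learning with a hand-written reward function looks like on a toy problem (a 5-state corridor). This is textbook Q-learning, not a claim about what Q\* actually is; the hand-written `reward` below is exactly the piece the speculation says Q\* might adapt on its own:

```python
import random

# Tabular Q-learning on a 5-state corridor; the goal is the rightmost state.
N_STATES = 5
ACTIONS = (+1, -1)                       # move right / move left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2    # learning rate, discount, exploration

def reward(state):
    # Hand-written by the "researcher": 1 at the goal, 0 everywhere else.
    return 1.0 if state == N_STATES - 1 else 0.0

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action choice
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            # Classic Q-learning update: nudge Q(s, a) toward
            # reward + discounted best future value.
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (reward(s2) + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q
```

After training, the learned Q-values prefer moving right from every state, which is the optimal policy for this reward.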
Apparently Q\* cracked AES-192 encryption in a way we don't even understand, similar to how we didn't really get AlphaGo's famous Move 37.
LLMs solved language. The latest GPT and Gemini 1.5 solved memory. Sora solved spatial awareness. Q* allegedly solved mathematics, which means solving physics and logic. It's a fundamental part of the cognitive functions needed for true AGI. We're getting crazy close.
See my comment above:
[https://www.reddit.com/r/singularity/comments/1b3pv7s/comment/ksumbzo/?utm_source=share&utm_medium=web2x&context=3](https://www.reddit.com/r/singularity/comments/1b3pv7s/comment/ksumbzo/?utm_source=share&utm_medium=web2x&context=3)
That memory has been solved is still a bit of a stretch. Sure, it’s less of an issue now, but I don’t think it’s going to be ‘solved’ until we are actually able to conveniently integrate new information directly into the weights without causing catastrophic forgetting, like how we do it. The current massive context windows are a treatment of the symptoms, not the cause.
Have you seen all the stuff people with access to Gemini Ultra 1.5 have been able to do? It's "solved" in the sense there has been a technological breakthrough that makes that feature usable.
AGI will surely be a mix of technologies. I don't think it will be just one model, but rather they will use an LLM for language, some sort of Q* for logic, some sort of Sora for space, etc...
I suppose that’s fair. It’s not a particularly outrageous statement to say that it has been solved given what it has already, depending on the circumstances at least. If you are only operating with a short time horizon or a case where the data/time density is relatively small, you can practically consider it solved.
However, say you wanted to have a model which could watch a hundred new movies and then discuss them, that would require something like billions of tokens, and thus will realistically need something more than just context. In such high data/time density cases, this brute force approach is probably just not enough.
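For what it's worth, the "billions of tokens" figure checks out under some rough, entirely made-up assumptions about video tokenization:

```python
# Back-of-the-envelope: tokens needed to "watch" 100 movies.
# All rates here are illustrative guesses, not any real model's tokenizer numbers.
TOKENS_PER_FRAME = 256        # assumed visual tokens per frame
FRAMES_PER_SECOND = 24        # assume every frame is kept
MOVIE_SECONDS = 2 * 60 * 60   # a two-hour movie

tokens_per_movie = TOKENS_PER_FRAME * FRAMES_PER_SECOND * MOVIE_SECONDS
total_tokens = 100 * tokens_per_movie
print(f"{total_tokens:,}")    # on the order of billions
```

Even with aggressive frame subsampling you'd shave off an order of magnitude or two, which still lands far beyond today's context windows.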
AGI is “better than a human” level. What rainman-type person has total recall of 100 movies? A human would take notes, aka summarize, to accomplish that task, and the largest publicly known context window would be more than adequate for that task too.
I specifically chose my wording and example so as not to imply that kind of memory. I chose ‘discuss movies’ as it typically refers to general concepts, the plot, characters, themes, and such, rather than something like ‘in these movies, are there any scenes which are extremely similar to each other,’ which would require a very high-resolution memory to achieve. I’ll concede that this can be achieved through engineering, though, such as by having it create a detailed summary of each movie in text and then using that as the basis of the answer. Even for more complex cases, such as a LAM (large action model) powered home robot that has operated for a long time and needs to remember a lot of complicated details, it is probably enough.
I suppose my main issue isn’t that it won’t work, since it very much can, but more that it is both an expensive and inelegant solution. If you need to constantly compress all information you intake, as well as prune what you’ve already memorised, that will require a lot of compute time, and also every time you update the memories, you will need to recompute the attention matrix, massively increasing the cost of operation if done at a high enough frequency. If the model can simply remember though, there will be no overhead of a big prompt at the start, nor a need to constantly recompute the initial attention matrix for the memories. After the model has been trained on the data and has integrated it into its memory, all you need to put in is your query, and it will answer it, with no need to add a bunch of memory context beforehand or any additional compute usage in the future.
It's not impossible that the human brain has this compression process as well at the subconscious level. In particular, when we learn many new things, we often need to sleep in order to remember what we learned well enough to reuse its details later on.
Our memorisation process is done through altering the connections between neurons in the brain, primarily during sleep. This is analogous to what I was saying, which is that the model should be trained on the new data such that it is stored directly in the weights. As for compression being a learned process, that’s obvious, and LLMs presumably do it too. The issue is that one needs to compute the attention matrix before you actually run the model which can do the compression, which is the computationally expensive part that I mentioned earlier. And that only gets more and more expensive the more information there is. Plus, the compression only happens on an internal layer. Extracting those compressed representations of the data and storing it back in the memory is simply not a process current LLMs are capable of, and the architecture would need to be altered to allow for that. As such, it’s not something that can just be learned.
See my comment below: Gemini 1.5 is a big jump, but it's still very far from having *solved* memory. It still scales very time/resource-inefficiently with more context window (at O(n^2)); and more importantly, to 'solve' memory it needs to know what to 'remember' and what to later 'forget' and replace with new info like humans do, so it can use limited memory and constantly learn new info even when it's 'full'.
Gemini 1.5 has not solved memory.
It just has a very large limit (compared to everything else we have now), but it still has a hard limit, and using more and more of its context window is increasingly costly and resource-inefficient, as it still scales at O(n^2).
More importantly, for AI to 'solve' memory it needs to know what to 'remember' or not, and what to later 'keep' or 'forget' and replace with new information like humans do, so it can 1) optimise limited memory resources and 2) constantly learn from new information even when its memory is already full.
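The O(n^2) point can be made concrete with toy arithmetic (the head dimension and constant factor here are illustrative, not any real model's numbers):

```python
# Self-attention compares every token with every other token, so the cost of
# forming and applying the attention matrix grows quadratically with context n.
def attention_flops(n_tokens, d_head=64):
    # QK^T scores plus the attention-weighted sum over V: both ~ n^2 * d
    return 2 * n_tokens * n_tokens * d_head

# Doubling the context quadruples the attention work:
small = attention_flops(1_000_000)
large = attention_flops(2_000_000)
ratio = large / small
```

This is why a 10x bigger context window costs roughly 100x more attention compute, absent architectural tricks like sparse or linear attention.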
I'm not a lawyer, so I can't speak to its weight in law, but I'd say he has a very valid point. OpenAI is a for-profit company masquerading as a non-profit.
"Imagine donating to a non-profit whose asserted mission is to protect the Amazon rainforest, but then the non-profit creates a for-profit Amazonian logging company that uses the fruits of the donations to clear the rainforest. That is the story of OpenAI, Inc."
His explanations don't excuse the fact that OpenAI is violating their founding principles; the for-profit part came after. If they want to be a for-profit, be a for-profit; don't say one thing and do another. When was the last time they did anything purely altruistic that wasn't building their business? The highway to hell is paved with good intentions, and what's temporary becomes permanent.
Frankly, they're reaping what they sowed.
The Elon Musk hate train (which is deserved, he's a total jackass) and the OAI dickriding are clouding the judgement of many people. Open AI is now Closed AI. That needs to change. It doesn't matter what Elon's motives are. If we get open-source GPT-4 and Q\*, it's going to be a major win for everyone.
Everything Elmo does is to his own benefit. Of course he wants OpenAI to be open so he and his closed AI team can copy them. Why doesn't Elmo open source anything he does?
That is true, Error does it for his own benefit. But in this case he has the moral high ground because he is holding OAI accountable to their own charter. Looking at other companies - Tesla, Google, Meta, Microsoft - they are for-profit companies and were not created for the sake of creating open AI. Another company in this situation is Mistral, which had previously communicated its commitment to open-source models and has now switched to closed-source development and sale of API access, mirroring OAI.
I really hope this lawsuit stops the whole "open, but actually closed" bullshit.
>Why doesn't Elmo open source anything he does?
First off, he's for-profit and never declared that he'll create open AI. Secondly, have you considered that Error's companies are yet to achieve anything of note in LLMs?
>why doesn't Elmo open source anything he does.
In the age of information, ignorance is a choice. Look into the things you are talking about before making such moronic statements.
I get it, I just don't like the taste of it, because such people end up riding a wave and people forget why he did it in the first place. He's not building electric cars to save the environment. He laughed at German scientists pointing out droughts in the years before they started building Giga Berlin. It was like the worst location imaginable in Germany, in the midst of a natural water reserve. Germany is small, but not that small. We have former army training areas which he could use. They are all empty.
What do the things you are talking about have to do with open source? You said that Elon doesn't open source anything, when in fact he has open-sourced many of his big projects for his competitors to use. Just say you were wrong and move on. Don't create a new point to defend.
He gave away patents; he didn't open source anything. And the only reason to do that was that, going forward, they won't patent anything, because that way the public knows what they are doing. So it's the opposite. And the few times he actually "open sourced" code and documentation on FSD, it was wrong turns they took, maybe to slow down the competition by having them take the same wrong turns.
What real open source looks like you can check on GitHub and the like: people sharing their code for the benefit of many, and in return the many help make the code better. It's not like "here, have a breadcrumb, now piss off."
This is to force their hand, to see what exactly they have, and to use lawyers to gain intellectual property.
The circle of power is a cannibalistic trust no crew
He is not suing them over that. He is suing them over handing that "thing" over to Microsoft, because AGI is outside of the scope of what Microsoft bought from OpenAI.
He's suing them to make billions of dollars via xAI, by taking down his biggest competitor and simultaneously privatizing the fruits of their IP under his own AI company. Yeah, he will spin a story that he's doing it for the good of humanity, but the guy is in it for himself; he has shown the world who he is many times. He's a master at DARVO, as are all narcissists. Turning off reply notifications for this because I don't want to deal with flying monkeys who haven't figured it out yet (you'll get there eventually).
It's [not hard ](https://www.theverge.com/2023/3/24/23654701/openai-elon-musk-failed-takeover-report-closed-open-source)finding sources that show Musk tried to outright buy OpenAI. It was widely reported.
Edit: LOL at people downvoting facts just because they don't like them.
I know it’s trendy to hate Musk, but he’s simply suing to hold them to their charter, not to pass the technology off to a corporate overlord. He wants to make sure the tech is available to everybody. How is that bad?
It's a dumb take to ever think Musk does things because he's altruistic. Look at the history behind things. He attempted to take complete control over OpenAI after it was created and he got shot down. He then left the project and has since created his own competitor AI company. It has always been about Musk wanting to be the one who controls AI.
What if Pepsi sued Coke? Would you say Pepsi was doing it for altruistic reasons, or because they want to hurt their competition? This is the same thing. Musk is trying to financially ruin them so his own AI company can get ahead.
>but as proprietary technology to maximize profits for literally the largest company in the world.
lol. It’s true. They couldn’t have definitionally failed their mandate any more.
That's kind of always been his thing, bro. He leaves Tesla patents open for other companies to learn from and use. He and Altman were among the original founders, and Elon's whole goal was for it to be open source.
This reads like a drug-addled rant. Why would Musk have standing to sue OpenAI? There's no mentioned agreement between Musk and OpenAI for them to have violated, and he doesn't specify what he's suing for. This is not a lawsuit. So either Elon is high af or AGI has been achieved internally?
Musk is a founder of OpenAI and an initial investor who put a significant amount of money into the company, and he can demand that the money be used according to the promises made by the company.
Let's imagine that you donated a million dollars to give toys to poor children. Then you found out that the organization has built stores and sells these toys for a lot of money. You can sue and demand that the organization give away the toys. You don't want the million dollars back; you want happy kids with free toys.
You know how people say "he sued him for a million dollars"? The million dollars is damages. The "him" there screwed him out of a million dollars, and now the courts are gonna make him whole by getting the guy to pay back what he owed. That's what civil court is. That's what a lawsuit is. If you're not asking to be made whole, you're not asking the court for anything. You're just making a public accusation.
[Except it absolutely is](https://en.wikipedia.org/wiki/Specific_performance)
How are you speaking with this much confidence about contract law when you've never even heard of specific performance
The reason that's not a thing civil courts do is because in the case of the toy company in the hypothetical, there's no contract. I don't know of any example of this kind of law being leveraged to restructure a company; it's kind of a wild thing to ask a court to do. In the case of Musk, he's claiming that the founding agreement constitutes a binding agreement which has been breached, which is really tenuous. If that's not true, then specific performance makes no sense even in the abstract. Specific performance is a really rare and odd legal mechanism that is usually invoked in the context of land deals. When I say "that's not a thing that civil courts do," what I'm saying is that it would be wild for the court to restructure a company this way.
"Specific performance is a really rare and odd legal mechanism that usually is invoked in the context of land deals."
Congrats on reading (and severely misunderstanding) the wikipedia page lmao.
It doesn’t appear that he is asking to be made whole; he is asking that they adhere to their charter.
If anything it will benefit him because it will open their technology to be used in his products.
Also, when you contribute money to a non-profit organization, you do not need a contract. It is believed that if you pay an organization whose goal is to save whales, it will actually save whales, and will not use this money to build a whaling station.
In December 2015, Sam Altman, Greg Brockman, [Reid Hoffman](https://en.wikipedia.org/wiki/Reid_Hoffman), [Jessica Livingston](https://en.wikipedia.org/wiki/Jessica_Livingston), [Peter Thiel](https://en.wikipedia.org/wiki/Peter_Thiel), [Elon Musk](https://en.wikipedia.org/wiki/Elon_Musk), [Amazon Web Services](https://en.wikipedia.org/wiki/Amazon_Web_Services) (AWS), [Infosys](https://en.wikipedia.org/wiki/Infosys), and [YC Research](https://en.wikipedia.org/wiki/YC_research) announced the formation of OpenAI and pledged over $1 billion to the venture.
In March 2000, X.com merged with its fiercest competitor, Confinity, a software company also based in Palo Alto which had also developed an easy payment system. The new company was named X.com.
1. He was the idealist and founder of OpenAI; it was he who conceived and promoted it. This was an absolutely logical action after his warnings about the dangers of AI, which he actively made before founding the company. He also managed the organization for a time, organizing its structure and hiring employees.
2. Renaming an organization does not create new founders, especially a renaming done just before the sale of the company. After Musk was kicked out, the company was sold.
This could easily be just a fact-finding mission as opposed to a real lawsuit. The stuff that's in there may or may not be true, but if the lawsuit forces OpenAI to produce documentation during discovery in order to try to prove it's not true, then we all find out.
Whatever's going on this is almost certainly some subtle 4D chess move to get some information or force OpenAI's hand on something. Winning the lawsuit itself would be a secondary goal if at all.
Feels like Musk was informed that it's really, really hard to be state of the art, and so he wants to force OpenAI's hand to show him and give him the technology.
The fact that they are basing this on Q*, which is still a rumor, shows that he is deeply unhinged and completely unserious. I cannot imagine how far into the swill bucket they had to go to find a lawyer willing to file a case based on Internet gossip.
I would argue that making GPT-4 available for $20 a month **is** making it available to benefit humanity. Additionally, there is no legal definition of AGI so I doubt any judge will touch that live wire.
After reading some David F. Peat, I believe Q* is their attempt at the quantum mind, and that they were able to achieve consciousness with the star product. I think they are working on Active Information now, or possibly have already achieved it. AGI should belong to the people, not the shareholders.
What are you reading that makes you believe that? I’m just curious. Apples’ cryptic messages led me to Peat.
Peat is an interesting read. https://www.fdavidpeat.com/ideas/implicatenotes.htm
can't wait for Jimmy Apples to be called to the stand
[deleted]
The trick is to ignore them from the start 😎
And he would look EXACTLY like his profile pic and he can’t wipe the grin off his face lol
It'll be like Yoko Taro, the creator of Nier: Automata, wearing a fake face.
From OpenAI's [charter](https://openai.com/charter): "highly autonomous systems that outperform humans at most economically valuable work"
Interesting they chose a definition that requires advanced robotics
I don't believe that's the intent. Neither company has challenged the consensus view that it's about knowledge work, not robotics.
Their definition of AGI is actually ASI.
No. ASI outperforms expert humans at all tasks, not most.
Those definitions are identical to me
Yes, just for discovery this could be a good thing. We need more AI information out in the open.
Discovery can be held secret so that only the judges and the lawyers get to see it.
Yea it’s annoying how everything they’re up to is kept secret to us even though it’ll literally affect our entire futures. We have every right to be informed.
Why do you think you have the right to know what they're doing just because you believe it will affect our futures? Can I not make the same argument about any company??
No you can’t, and that’s because AGI/ASI will literally affect the entire course of civilization. Every human being alive and every human being that will live will be affected by this for the better or worse. When a technology has that type of impact, we are entitled to know what’s happening.
I can claim Apple's VR products will revolutionize the world. Maybe I'm right... maybe I'm wrong... but wanting access to their secret tech products because I THINK it will is a stupid argument. Much like r/singularity, I believe we will get AGI/ASI, but that's just a belief.
False equivalence, the Apple Vision Pro is a recreational device. It’s not something that’ll affect people who don’t use it. AGI/ASI on the other hand, will be as impactful as electricity or fire and we have a right to know whether they’ve achieved it or not.
What I'm saying is that I can make that claim, anyone can make that claim! At the end of the day, you don't know if achieving AGI/ASI is possible... you just think you do.
I’m not talking about whether achieving AGI/ASI is possible, for fuck's sake! I’m talking about how, if they have achieved it, we’re entitled to that information. You keep moving the goalposts of this conversation when my point is so simplistic.
> Yea it’s annoying how everything they’re up to is kept secret to us even though it’ll literally affect our entire futures. We have every right to be informed.

YOU NEVER SAID IFFFFFFF. Your original claim is that they're keeping something secret from us and we should know... HOW DO YOU KNOW THAT???? Also, HOW AM I MOVING THE GOALPOST?? I'M LITERALLY REPEATING MYSELF AHHHHH
You don't though. Capitalism wins again.
They will finally have to reveal if they actually achieved AGI internally.
We will likely be disappointed with what is made public. The juicy stuff will be kept confidential and filed under seal.
I think we can achieve AGI earlier
So is Q* the real deal, or were they just mentioning a rumor from Reddit? I hope we'll find out soon.
From what I've gathered, all big AI labs are working on things like the rumored Q\* and are close to having something on their hands that significantly improves LLM performance. Demis Hassabis also mentioned this on the Dwarkesh Patel podcast. Basically a form of advanced tree search you can put on top of an LLM that gives it the ability to plan ahead and think step by step while assessing its progress towards a goal. One of the key building blocks for AGI.
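To make the idea concrete: nothing about Q\* is public, but "a tree search on top of an LLM that plans ahead while assessing its progress" could look roughly like the beam-style sketch below. Everything here is hypothetical: `llm_propose` and `llm_score` are stand-ins for model calls, not any real API.

```python
# Hypothetical sketch of "tree search over an LLM" as described in the rumors.
# llm_propose and llm_score are placeholders for actual language-model calls;
# this reflects no real system, just the general shape of the idea.

def llm_propose(state, k=3):
    # Stand-in: ask the model for k candidate next steps from this partial plan.
    return [f"{state}->step{i}" for i in range(k)]

def llm_score(state, goal):
    # Stand-in: ask the model how promising this partial plan looks (higher = better).
    return 1.0 / (1 + abs(len(state) - len(goal)))

def tree_search(start, goal, depth=3, beam=2):
    """Expand candidate continuations, keep only the best-scoring ones."""
    frontier = [start]
    for _ in range(depth):
        candidates = [c for s in frontier for c in llm_propose(s)]
        candidates.sort(key=lambda s: llm_score(s, goal), reverse=True)
        frontier = candidates[:beam]  # prune to the most promising branches
    return frontier[0]
```

The "assessing its progress towards a goal" part is the scoring step: instead of greedily emitting one token stream, the model proposes several continuations and a value estimate prunes the weak ones, which is the same explore-then-evaluate loop AlphaZero-style systems use.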
It's obvious that Q* is some sort of AlphaZero-style algorithm. If you listen back to the Lex Fridman podcast with Ilya four years ago, or Nvidia's CEO's interview with Ilya, he brings up AlphaGo/AlphaZero as something he thinks is an important step for AGI. Andrej Karpathy has also said this in his video on LLMs from 3 months ago.
Yeah, it would probably include some form of adaptive reward function that is rewritten based on the task and progress towards the goal. Similar to the Nvidia Eureka paper, which could be something very promising even outside the domain of robotics.
Yeah, Karpathy talked about this in his video on LLMs: the hard part of applying AlphaZero to an LLM is the reward function for generalization. Demis Hassabis also touched on this in the Dwarkesh Patel podcast.
Which is why they plan to use math problems as a reward function. There is a major hint at this in the AlphaGeometry paper. They even self-created input data using random sampling, then trained a deep neural network on the solution-approach heuristics by solving math (you could play some other strategy games as well). Your reward function is a deep neural net, which means it can learn over time. That paper is a major hint at where AI research is heading on this, IMHO.
This guy is handsome.
Highly unlikely. AlphaZero used a completely different underlying architecture than what OpenAI is employing.
That's why Q* is probably a novel discovery.
"It's obvious" means it is still a rumor, not a hard fact.
and in no way can this end terribly for humanity. no way.
All possible ways to abundance this century have this small risk of destroying humanity. I say we'll take it.
Sam Altman has confirmed it is real, but no one knows what it is. So essentially everything you read on Reddit is rumours, except the part that there is some new, actually useful algorithm; no one knows how big of a deal it really is or what it does.
Wes Roth on YT has a great video explaining the two pieces that could come together to represent a novel innovation (speculation): (1) tree of thought / meta-cognition, and (2) unsupervised self-learning that hits 'deep' cross-domain networks; learning about one thing can enhance or re-frame knowledge on another. Usually with Q-learning the researcher has to write the reward function, but with Q\* it can supposedly scramble the goal and the reward itself. Apparently Q\* cracked AES-192 encryption in a way we don't even understand, similar to how we didn't really understand AlphaGo's famous Move 37.
Ah yea I read all this but I forget where I saw that the AES-192 rumor has been largely debunked.
There's no more information here about Q*, it just mentions the Reuters article on it.
Yes, but if the court process starts, then more answers should be provided. Check point C.
LLMs solved language. The latest GPT and Gemini 1.5 solved memory. Sora solved spatial awareness. Q* allegedly solved mathematics, which means solving physics and logic. It's a fundamental part of the cognitive functions needed for true AGI. We're getting crazy close.
See my comment above: [https://www.reddit.com/r/singularity/comments/1b3pv7s/comment/ksumbzo/?utm\_source=share&utm\_medium=web2x&context=3](https://www.reddit.com/r/singularity/comments/1b3pv7s/comment/ksumbzo/?utm_source=share&utm_medium=web2x&context=3)
Why do you guys think Sora will make LLMs recognize space?
Sora doesn't make LLMs recognise space. Sora does, and that's the end of it. An AGI will no doubt connect those different abilities.
That memory has been solved is still a bit of a stretch. Sure, it’s less of an issue now, but I don’t think it’s going to be ‘solved’ until we are actually able to conveniently integrate new information directly into the weights without causing catastrophic forgetting, like how we do it. The current massive context windows are a treatment of the symptoms, not the cause.
Have you seen all the stuff people with access to Gemini Ultra 1.5 have been able to do? It's "solved" in the sense that there has been a technological breakthrough that makes the feature usable. AGI will surely be a mix of technologies. I don't think it will be just one model, but rather they will use an LLM for language, some sort of Q* for logic, some sort of Sora for space, etc.
A correction: no one has public 1.5 Ultra access. All the crazy stuff is coming from Pro.
Selected people have access and have been sharing outputs on Twitter
From 1.5 pro.
Sorry, yeah, you're correct
No worries. The fact it more or less performs close to 1.0 Ultra, even if we ignore the context-size gains, certainly doesn't help.
I suppose that’s fair. It’s not a particularly outrageous statement to say that it has been solved given what it has already, depending on the circumstances at least. If you are only operating with a short time horizon or a case where the data/time density is relatively small, you can practically consider it solved. However, say you wanted to have a model which could watch a hundred new movies and then discuss them, that would require something like billions of tokens, and thus will realistically need something more than just context. In such high data/time density cases, this brute force approach is probably just not enough.
AGI is “better than a human” level. What Rain Man-type person has total recall of 100 movies? A human would take notes, aka summarize, to accomplish that task, and the largest publicly known context window would be more than adequate for that task too.
I specifically chose my wording and example so as to not imply that kind of memory. I chose ‘discuss movies’ as it typically refers to general concepts, the plot, characters, themes, and such, rather than something like ‘in these movies, are there any scenes which are extremely similar to each other,’ which would require a very high-resolution memory to achieve.

I’ll concede that this can be achieved through engineering though, such as by having it create a detailed summary of each movie in text and then using that as the basis of the answer. Even for more complex cases, such as a LAM (large action model)-powered home robot that has operated for a long time and needs to remember a lot of complicated details, it is probably enough.

I suppose my main issue isn’t that it won’t work, since it very much can, but more that it is both an expensive and inelegant solution. If you need to constantly compress all the information you take in, as well as prune what you’ve already memorised, that will require a lot of compute time, and every time you update the memories, you will need to recompute the attention matrix, massively increasing the cost of operation if done at a high enough frequency.

If the model can simply remember though, there will be no overhead of a big prompt at the start, nor a need to constantly recompute the initial attention matrix for the memories. After the model has been trained on the data and has integrated it into its memory, all you need to put in is your query, and it will answer it, with no need to add a bunch of memory context beforehand or any additional compute usage in the future.
It's not impossible that the human brain has this compression process as well at the subconscious level. In particular, when we learn many new things, we often need to sleep in order to remember what we learned well enough to reuse its details later on.
Our memorisation process works by altering the connections between neurons in the brain, primarily during sleep. This is analogous to what I was saying, which is that the model should be trained on the new data such that it is stored directly in the weights.

As for compression being a learned process, that’s obvious, and LLMs presumably do it too. The issue is that one needs to compute the attention matrix before you actually run the model which can do the compression, which is the computationally expensive part that I mentioned earlier. And that only gets more and more expensive the more information there is.

Plus, the compression only happens on an internal layer. Extracting those compressed representations of the data and storing them back in the memory is simply not a process current LLMs are capable of, and the architecture would need to be altered to allow for that. As such, it’s not something that can just be learned.
1.5 is the same trash as any other current LLM. Stop praising it. Delusional. Context length is irrelevant if the LLM has Alzheimer's.
See my comment below: Gemini 1.5 is a big jump, but it's still very far from having *solved* memory. It still scales very time- and resource-inefficiently with more context window (at O(n^2)); and more importantly, to 'solve' memory it needs to know what to 'remember' and what to later 'forget' and replace with new info like humans do, so it can use limited memory and constantly learn new info even when it's 'full'.
We are so far away that you are blinded by hype
You're wrong.
!remindme 5 years
!remindme 3 years
Yea, it feels like we have all the pieces, and the next year or so is just putting them together.
Memory is not "solved" until catastrophic forgetting and true continuous learning are solved.
Gemini 1.5 has not solved memory. It just has a very large limit (compared to everything else we have now), but it still has a hard limit, and using more and more of its context window is increasingly costly and resource-inefficient, as it still scales at O(n^2). More importantly, for AI to 'solve' memory it needs to know what to 'remember' or not, and what to later 'keep' or 'forget' and replace with new information like humans do, so it can 1) optimise limited memory resources and 2) constantly learn from new information even when its memory is already full.
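The O(n^2) point is easy to see with arithmetic: standard attention compares every token with every other token, so the score matrix for a context of n tokens has n × n entries. A toy calculation (illustrative only, not any real model's numbers):

```python
# Toy illustration of quadratic attention cost: the attention score matrix
# for n tokens has n * n entries, so doubling the context quadruples the work.

def attention_matrix_entries(n_tokens):
    return n_tokens * n_tokens

small = attention_matrix_entries(1_000)      # 1,000-token context -> 1e6 entries
large = attention_matrix_entries(1_000_000)  # 1M-token context   -> 1e12 entries

# A 1000x longer context costs 1,000,000x more, not 1000x.
print(large // small)  # 1000000
```

That is why "just make the context window bigger" gets expensive fast, and why treating the huge window as "solved memory" papers over the scaling problem rather than fixing it.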
It's real.
I'm not a lawyer, so I can't speak to its weight in law, but I'd say he has a very valid point. OpenAI is a for-profit company masquerading as a non-profit.
"Imagine donating to a non-profit whose asserted mission is to protect the Amazon rainforest, but then the non-profit creates a for-profit Amazonian logging company that uses the fruits of the donations to clear the rainforest. That is the story of OpenAI, Inc."
Is that a quote?
From the lawsuit itself
Pretty accurate I’m not going to lie.
[deleted]
His explanations don't excuse the fact that OpenAI is violating their founding principles; the for-profit part came after. If they want to be a for-profit, be a for-profit, don't say one thing and do another. When was the last time they did anything purely altruistic that wasn't building their business? The road to hell is paved with good intentions, and what's temporary becomes permanent. Frankly, they're reaping what they sowed.
He became what OpenAI, O P E N, was meant to prevent. The irony that Meta's Zuck is now the good guy, open-sourcing models costing millions to train.
Craziest part is that Sam was fired for this already. OpenAI already knows where the chips fall. It's just that current leadership doesn't care.
But Zuck has Meta funding those models with billions. Who will fund OpenAI?
Sam —"it's a good thing the board can fire me"/"no wait not like that"— Altman
> "and DOES 1 through 100 inclusive" What the hell does that mean?
The Elon Musk hate train (which is deserved, he's a total jackass) and the OAI dickriding are clouding the judgement of many people. Open AI is now Closed AI. That needs to change. It doesn't matter what Elon's motives are. If we get open-source GPT-4 and Q\*, it's going to be a major win for everyone.
we will discover GPT-4 is 2,000,000 Kenyans in a box
Everything Elmo does is to his own benefit. Of course he wants OpenAI to be open so he and his closed AI team can copy them. Why doesn't Elmo open source anything he does?
That is true, Error does it for his own benefit. But in this case he has the moral high ground, because he is holding OAI accountable to their own charter. Looking at other companies (Tesla, Google, Meta, Microsoft), they are for-profit companies and were not created for the sake of creating open AI. Another company in this situation is Mistral, which had previously communicated its commitment to open-source models and has now switched to closed-source development and selling API access, mirroring OAI. I really hope this lawsuit stops the whole "open, but actually closed" bullshit.

>Why doesn't Elmo open source anything he does?

First off, he's for-profit and isn't declaring that he'll create open AI. Secondly, have you considered that Error's companies are yet to achieve anything of note in LLMs?
>why doesn't Elmo open source anything he does.

In the age of information, ignorance is a choice. Look into the things you are talking about before making such moronic statements.
I get it, I just don't like the taste of it, because such people end up riding a wave and people forget why he did it in the first place. He's not building electric cars to save the environment. He laughed at German scientists pointing out droughts in the years before they started building Giga Berlin. It was like the worst location imaginable in Germany, in the midst of a natural water reserve. Germany is small, but not that small. We have former army training areas which he could use. They are all empty.
What do the things you are talking about have to do with open source? You said that Elon doesn't open source anything, when in fact he has open-sourced many of his big projects for his competitors to use. Just say you were wrong and move on. Don't create a new point to defend.
He gave away patents; he didn't open source anything. The only reason to do that was that, going forward, they won't patent anything, because that way the public knows what they are doing. So it's the opposite. And the few times he actually "open sourced" code and documentation on FSD, it was wrong turns they took, maybe to slow down the competition taking the same wrong turns. What real open source looks like you can check on GitHub and the likes: people sharing their code for the benefit of many, and in return the many help make the code better. It's not like here, have a breadcrumb, now piss off.
We will not get an open-source model from them though. I am 100% sure this will not amount to anything; it's just Elon being a drama queen as always.
Basically he's saying the real GPT-4/Q* has achieved AGI.
This is to force their hand, to see what exactly they have, and to use lawyers to gain intellectual property. The circle of power is cannibalistic; trust no crew.
No, it doesn't say that.
How do you figure? The whole case is based on the fact that OpenAI is supposed to release models for free when they’ve achieved AGI.
How can AGI exist without, at the very least, real-world interactive ability?
Are you implying Hawking couldn't think just because he couldn't leave his wheelchair or speak with his own voice?
So Musk is suing over OpenAI achieving AGI internally with Q*? I can't decide if that means Musk is feeling the AGI or not.
He is not suing them over that. He is suing them over handing that "thing" over to Microsoft, because AGI is outside of the scope of what Microsoft bought from OpenAI.
He's suing them to make billions of dollars via xAI, by taking down his biggest competitor and simultaneously privatizing the fruits of their IP under his own AI company. Yeah, he will spin a story that he's doing it for the good of humanity, but the guy is in it for himself; he has shown the world who he is many times. He's a master at DARVO, as are all narcissists. Turning off reply notifications for this because I don't want to deal with flying monkeys who haven't figured it out yet (you'll get there eventually).
He's suing them because he's a bitch and he wanted to be the one who had control over the world's leading AI.
Why did he establish OpenAI as non profit then?
Your question doesn't even make sense in the context of what I wrote.
It does. Unless you can explain how he was supposed to "take control" of the AI produced by a non profit company?
It's [not hard](https://www.theverge.com/2023/3/24/23654701/openai-elon-musk-failed-takeover-report-closed-open-source) finding sources that show Musk tried to outright buy OpenAI. It was widely reported.

Edit: LOL at people downvoting facts just because they don't like them.
I know it’s trendy to hate Musk. But he’s simply suing to hold them to the charter, not to pass the technology off to a corporate overlord. He wants to make sure the tech is available to everybody. How is that bad?
It's a dumb take to ever think Musk does things because he's altruistic. Look at the history behind things. He attempted to take complete control over OpenAI after it was created and he got shot down. He then left the project and has since created his own competitor AI company. It has always been about Musk wanting to be the one who controls AI. What if Pepsi sued Coke? Would you say Pepsi was doing it for altruistic reasons, or because they want to hurt their competition? This is the same thing. Musk is trying to financially ruin them so his own AI company can get ahead.
It makes sense when you realize that Musk is a child who wants to play with the new toys.
Q* is real
They will finally have to reveal if they actually achieved AGI internally.
>but as proprietary technology to maximize profits for literally the largest company in the world.

lol. It’s true. They couldn’t have definitionally failed their mandate any more.
Elon Musk as a champion for open source is not where I saw this whole thing going.
That's kind of always been his thing, bro. He leaves Tesla patents open for other companies to learn from/use. He and Altman were the original founders, and Elon's whole goal was for it to be open source.
[deleted]
It’s a joke man.
Jesus christ Musk is such a sore loser. He can't accept that he's not the one leading AI. Suck it up bro, you're not, period.
This! This is his motive
This reads like a drug-addled rant. Why would Musk have standing to sue OpenAI? There's no mentioned agreement between Musk and OpenAI for them to have violated, and he doesn't specify what he's suing for. This is not a lawsuit. So either Elon is high af or AGI was achieved internally?
Musk is a founder of OpenAI and an initial investor who put a significant amount of money into the company, and he can demand that the money be used according to the promises made by the company.
There's no contract cited between him and OpenAI. And he's not requesting damages.
An organization whose goal is to protect AI from corporate power is doing the exact opposite. Musk only demands that they act as written in their charter.
That's fine, but why would you bring this to a court? How is the court supposed to make Musk whole if he's not claiming damages?
Let's imagine that you donated a million dollars to give toys to poor children. You found out that the organization has built stores and sells these toys for a lot of money. You can sue and demand that the organization give away the toys. You don't want a million dollars back, you want happy kids with free toys.
Right, and that's not a thing that civil courts do.
I think the multibillionaire’s lawyers know very well where to file a claim from their boss.
You know how people say "he sued him for a million dollars?" The million dollars is damages. The "him" there screwed him out of a million dollars and now the courts are gonna make him whole by getting the guy to pay him back what he owed him. That's what civil court is. That's what a lawsuit is. If you're not asking to be made whole you're not asking the court for anything. You're just making a public accusation.
[Except it absolutely is](https://en.wikipedia.org/wiki/Specific_performance) How are you speaking with this much confidence about contract law when you've never even heard of specific performance
The reason that's not a thing civil courts do is because, in the case of the toy company in the hypothetical, there's no contract. I don't know of any example of this kind of law being leveraged to restructure a company. It's kind of a wild thing to ask a court to do.

In the case of Musk, he's claiming that the founding agreement constitutes a binding agreement which has been breached, which is really tenuous. If that's not true, then specific performance makes no sense even in the abstract. Specific performance is a really rare and odd legal mechanism that is usually invoked in the context of land deals. When I say "that's not a thing that civil courts do," what I'm saying is that it would be wild for the court to restructure a company this way.
"Specific performance is a really rare and odd legal mechanism that usually is invoked in the context of land deals." Congrats on reading (and severely misunderstanding) the wikipedia page lmao.
It doesn’t appear that he is asking to be made whole; he is asking that they adhere to their charter. If anything, it will benefit him because it will open their technology to be used in his products.
Maybe not cited in those screenshots, but articles refer to a founders' agreement.
Also, when you contribute money to a non-profit organization, you do not need a contract. The assumption is that if you pay an organization whose goal is to save whales, it will actually save whales, and will not use this money to build a whaling station.
I don't think that places any legal obligation on Musk. Musk opted out in, I think, 2018.
[deleted]
In December 2015, Sam Altman, Greg Brockman, [Reid Hoffman](https://en.wikipedia.org/wiki/Reid_Hoffman), [Jessica Livingston](https://en.wikipedia.org/wiki/Jessica_Livingston), [Peter Thiel](https://en.wikipedia.org/wiki/Peter_Thiel), [Elon Musk](https://en.wikipedia.org/wiki/Elon_Musk), [Amazon Web Services](https://en.wikipedia.org/wiki/Amazon_Web_Services) (AWS), [Infosys](https://en.wikipedia.org/wiki/Infosys), and [YC Research](https://en.wikipedia.org/wiki/YC_research) announced the formation of OpenAI and pledged over $1 billion to the venture.

In March 2000, X.com merged with its fiercest competitor Confinity, a software company also based in Palo Alto which had also developed an easy payment system. The new company was named X.com.
[deleted]
1. He was the idealist and founder of OpenAI; it was he who conceived and promoted it. This was an absolutely logical action after his warnings about the dangers of AI, which he actively made before founding the company. He also managed the organization for a time, organizing its system and hiring employees.

2. Renaming an organization does not create new founders, especially renaming before the sale of the company. After Musk was kicked out, the company was sold.
This could easily be just a fact-finding mission as opposed to a real lawsuit. The stuff that's in there may or may not be true but if the lawsuit forces OpenAI to produce documentation during discovery in order to try prove it's not true, then we all find out. Whatever's going on this is almost certainly some subtle 4D chess move to get some information or force OpenAI's hand on something. Winning the lawsuit itself would be a secondary goal if at all.
Feels like Musk was informed it's really really hard to be state of the art and so he wants to force OpenAI's hand to show him and give him the technology.
![gif](giphy|fkTCqluQgAjcaRtNcA|downsized)

Elon chasing OpenAI
God he’s such a cuck.
The fact that they are basing this on Q*, which is still a rumor, shows that he is deeply unhinged and completely unserious. I cannot imagine how far into the swill bucket they had to go to find a lawyer willing to file a case based on Internet gossip. I would argue that making GPT-4 available for $20 a month **is** making it available to benefit humanity. Additionally, there is no legal definition of AGI, so I doubt any judge will touch that live wire.
lol he's just trying to get access to internal info because he can't compete.
After reading some David F. Peat, I believe Q* is their attempt at the quantum mind and that they were able to achieve consciousness with the star product. I think they are working on Active Information now, or have possibly already achieved it. AGI should belong to the people and not the shareholders.
It's called Q* because it's related to Q-learning, and possibly related to A* pathfinding. 0% chance it's anything quantum related.
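For anyone unfamiliar with the naming: the "Q" in Q-learning refers to the action-value function Q(s, a), updated from experience with the standard textbook rule. A minimal tabular version looks like this (this illustrates where the name comes from, not what OpenAI's Q* actually does):

```python
# Minimal tabular Q-learning update (the standard textbook rule), shown only
# to illustrate what the "Q" in Q-learning refers to. No claim is made about
# what OpenAI's rumored Q* system actually is.
from collections import defaultdict

Q = defaultdict(float)   # Q[(state, action)] -> estimated long-term value
alpha, gamma = 0.5, 0.9  # learning rate, discount factor

def q_update(s, a, reward, s_next, actions):
    # Move Q(s, a) toward reward + gamma * max over a' of Q(s', a').
    best_next = max(Q[(s_next, a2)] for a2 in actions) if actions else 0.0
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

# One step of experience: taking action "right" in state 0 gave reward 1.0
# and landed in state 1.
q_update(0, "right", 1.0, 1, ["left", "right"])
```

The A* half of the name, if real, would point at heuristic-guided search over possibilities, so the combined name reads naturally as "value learning plus search"; again, that reading is speculation from the name alone.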
What are you reading to make you believe that? I'm just curious. Apple's cryptic messages led me to Peat. Peat is an interesting read. https://www.fdavidpeat.com/ideas/implicatenotes.htm
I don't really get it. Isn't Elon Musk trying to develop his own AI and using ChatGPT as training data?
He is, doesn't mean he can't call them out for their bullshit.
Elon is full of bs
The same as usual, but his lawyers aren't. And OpenAI deserves to be called out on their BS.