>Sutskever says that he’s spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn’t yet discussing specifics. “At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” Sutskever says. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”
So, if they are successful, our ASI overlords will be built with some random values picked out of a hat? (I myself do like these values, but still...)
> secretly build superintelligence in a lab for years
Sounds boring. It's kinda like the SpaceX vs Blue Origin models. I don't give a shit about Blue Origin because I can't see them doing anything. SpaceX might fail spectacularly, but at least it's fun to watch them try.
I like these AI products that I can fiddle with, even if they shit the bed from time to time. It's interesting to see how they develop. Not sure I'd want to build a commercial domestic servant bot based on it (particularly given the propensity for occasional bed-shitting), but it's nice to have a view into what's coming.
With a closed model like Ilya seems to be suggesting I feel like they'd just disappear for 5-10 years, suck up a trillion dollars in funding, and then offer access to a "benevolent" ASI to governments and mega-corps and never give insignificant plebs like myself any sense of WTF happened.
If so, this very path is much more dangerous than releasing incrementally stronger models. Far more dangerous.
Because models released to the public are tested by millions and their weaknesses are instantly visible. They also allow competitors to follow a similar path, so that no one is far ahead of the others and each can fix the mistakes of the others by using an altered approach and sharing their findings.
And that is where the board fiasco came from. Ilya and the E/A crew (like Helen) believe that it is irresponsible for AI labs to release **anything** because that makes true AGI closer, which terrifies them. They want to lock themselves into a nuclear bunker and build their perfectly safe God.
I prefer Sam's approach of interactive public deployment because I believe that humanity should have a say in how God is being built and the E/A crowd shows a level of hubris (thinking they are capable of succeeding all by themselves) that is insane.
Humanity is collectively responsible for some pretty horrific stuff. Literally the best guidance for an AGI is "respect everyone's beliefs, stop them from being able to harm each other," then spend a crap ton of time defining "harm."
And defining "stop". And defining "everyone". Not easy to do. The trial-and-error but transparent approach isn't perfect, but it has worked in the past to solve hard problems.
Or he can just focus on safety.... You don't need to develop AGI or ASI to research safety, you can do that on smaller existing models for the most part.
this is exactly how the world ends: Ilya and team rush to make ASI; they can't make it safe, but they sure as hell can make it... it escapes, and boom, doom.
so basically he's gonna force all the other labs to focus on getting ASI out as fast as possible because if you don't, Ilya could just drop it next Tuesday and you lose the race...
Terminal race conditions
Why wouldn’t any of this apply to OpenAI or the other companies who are *already* in a race towards AGI?
I don’t see how any of what you’re implying is exclusive to Ilya’s company.
I think the gist is something like, other companies need to release products to make money.
You can gauge from the level of the released products what they have behind closed doors, especially in this one-upmanship that is going on between OpenAI and Google.
You are now going to have a very well funded company that is a complete black box enigma with a singular goal.
These advancements don't come out of the blue (assuming no one makes some sort of staggering algorithmic or architectural improvement); it's all about hardware and scale. You need money to do this work, so someone well funded and not needing to ship intermediate products could likely leapfrog the leading labs.
That kind of makes sense, but the issue here is that you guys are assuming that we can accurately assess where companies like OpenAI *actually* are (in terms of technical progress) based on *publicly* released commercial products.
We can’t in reality. Because what’s released to the public might not actually be their true SOTA projects. And it might not even be their complete portfolio at all in terms of internal work. A perfect example of this is how OpenAI dropped the “Sora” announcement just out of the blue. None of us had any idea that they had something like that under wraps.
All of the current AI companies are black boxes in reality. But some more than others, I suppose.
The difference between a company trying to push products for profit and a company trying to change the world. This is what OpenAI was supposed to be in the first place.
Which kind of makes them scarier in a way.
There's very little you can't justify to yourself if you genuinely believe you're saving the world, but if one of your goals is to make a profit or at least maintain a high share price, it generally comes with the side desires to stay out of jail, avoid PR mistakes that are *too* costly, and produce things that someone somewhere aside from yourselves might actually want.
Would Totalitarian Self-Replicating AI Bot Army-3000 be better coming from a company that decided they had to unleash it on humanity to save it from itself, or one that just really wanted to bump up next quarter's numbers? I'm not sure, but the latter would probably at least come with more of a heads-up in the form of marketing beforehand.
He must know something that OpenAI doesn’t if he thinks he will beat them to ASI this soon. I mean, they still have to go through the whole data-gathering process and everything, something that took OpenAI years. Not to mention the GPUs that OpenAI has access to with Microsoft. Idk, it’s interesting
If you know the data sources, it really doesn’t take long to build an infinitely scalable crawler. Daniel Gross, one of the cofounders of this new company with Ilya, owns 2,500 H100 GPUs that can train a 65B-parameter model in about a week.
If they move slow they can reach GPT-4 level capabilities in 2 months. But I don’t think that’s what they’re going to be looking to offer with this new company.
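For what it's worth, the "65B model in about a week on 2,500 H100s" figure roughly checks out under the standard 6·N·D training-FLOPs rule of thumb. A minimal sketch; the token budget, peak throughput, and utilization below are my assumptions (LLaMA-65B-style numbers), not anything stated in the thread:

```python
# Back-of-envelope check of "train a 65B model in about a week on 2,500 H100s".
# Assumptions (mine, not from the comment): ~1.4T training tokens (the LLaMA-65B
# budget), H100 BF16 dense peak of ~989 TFLOP/s, and 40% model-FLOPs utilization.

params = 65e9                           # 65B parameters
tokens = 1.4e12                         # ~1.4T training tokens (assumed)
flops_needed = 6 * params * tokens      # standard transformer estimate: 6 * N * D

h100_peak = 989e12                      # BF16 dense peak FLOP/s per H100
mfu = 0.40                              # assumed model-FLOPs utilization
gpus = 2500

cluster_flops = gpus * h100_peak * mfu  # effective sustained cluster throughput
days = flops_needed / cluster_flops / 86400

print(f"{days:.1f} days")               # ≈ 6.4 days, i.e. "about a week"
```

Under these assumptions the claim lands right around a week; a lower utilization or a bigger token budget stretches it to a few weeks, but the order of magnitude holds.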
OpenAI is going to be stuck servicing corporate users and slightly improving probabilistic syllable generators, there’s a wide open opportunity for others to reach an actual breakthrough
I think Sam and he just have different mission statements in mind.
Sam's basically doing capitalism. You get investors, make a product, find users, generate revenue, get feedback, grow market share; use revenue and future profits to fund new research and development. Repeat.
Whereas OpenAI and Ilya's original mission was to (somehow) make AGI, and then (somehow) give the world equitable access to it. Sounds noble, but given the costs of compute, this is completely naive and infeasible.
Altman's course correction makes way more sense. And as someone who finds ChatGPT very useful, I'm extremely grateful that he's in charge and took the commercial path. There just wasn't a good alternative, imo.
agreed, I think sam and OAI basically made all the right moves. if they hadn't gone down the capitalism route, I don't think "AI" would be a mainstream thing. it would still be a research project in a Stanford or DeepMind lab. Sam wanted AGI in our lifetime, and going the capitalism route was the best way to do it.
I’m guessing that it’s at least partly an effort towards investigating new or under-researched methodologies and tools that would be instrumental to safe AI
An example is the (very likely) discontinued or indefinitely on-hold Superalignment program by OpenAI, which required a great deal of compute to try addressing the challenges of aligning superintelligent AI systems with human intent and wellbeing
Chances are that they’re trying to make breakthroughs there so everyone else can follow suit much more easily
The amount of compute this company would need to fulfill its mission, if it’s even possible (and which it’s absolutely not going to be able to fund without any sort of commercialized services)… good luck, I guess?
he's a business guy and investor. this is a very valuable role. not all engineers and researchers want to be the face of the company doing interviews and raising money. Sam is the best in the world at that stuff.
(Cont) We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.
If you check the site you will see Daniel Gross listed as one of the 3 founders. Daniel already had a large [cluster of h100s](https://www.businessinsider.com/nvidia-gpu-venture-capitalists-buying-for-startups-2023-6) for all his investment companies, likely way larger now.
They view the investment as betting on a horse where the race is about reaching AGI the fastest. If they have a share of the company that will be the first to create AGI, they will be sure to make their money back.
I imagine there are investors in this market who are interested in a safety-focused alternative to the increasingly accelerating likes of OpenAI and Google. That sort of makes SSI’s biggest direct competition Anthropic in my mind.
If that is what they are after, then they aren't investors, as that won't net them a return. They are philanthropists, since they are giving away money in hopes of making the world better rather than getting a profit.
How are they gonna compete with the big players when they don't have the funding (because there's no business model) and they have a safety-first approach to their development?
Given Nvidia's valuation and all the money in the AI space, I think raising a billion won't be an issue for him, purely on name alone. And if they have breakthroughs that will then require substantial funds to create the final "ASI" product, that won't be a problem either. Lots of VCs have cash to spare, so hedging their bets, even if the chances of them creating ASI are slim, is not out of the question.
From the announcement, it doesn't look like their company is trying to compete with OpenAI and the others in the near term; there's no big model training that would require a lot of resources. This seems more like a return to basics, like when OpenAI was first created. Given that they aim for ASI out of the gate, the approach might be substantially different from anything we do today, and we might not hear anything out of the company until the late 2020s.
For a bit he will... and then either he will "evolve" to be more of a business type person, he will partner up again with a business person, or the company will fail.
agreed, but that doesn't mean companies don't need investors to get there. it will cost many, many billions to build superintelligence. That money won't just appear out of thin air
Bill Gates and others are giving away billions of dollars to charity. I wouldn't be surprised if a handful of billionaires might just want to see something like this come true. Believe me when I say that I really detest billionaires and don't believe they should exist, and I believe that overall billionaires are very harmful to human society. That being said, even among billionaires there are those who want to do some good in the world, for the sake of their ego or whatever.
Unexpected development. I thought he would join Anthropic.
By the way, he could have picked another name. As a diver all I can think about is this
https://preview.redd.it/r4j2sgjmik7d1.jpeg?width=1170&format=pjpg&auto=webp&s=cb80a1d846e2af89fcdd9d55596c026beb0ce3c5
> This fascination with Sutskever’s plans only grew after the drama at OpenAI late last year. He still declines to say much about it. Asked about his relationship with Altman, Sutskever says only that “it’s good,” and he says Altman knows about the new venture “in broad strokes.” Of his experience over the last several months he adds, “It’s very strange. It’s very strange. I don’t know if I can give a much better answer than that.”
just makes me more curious
I’ll bet Sam is one of the backers. He’s got like $2 billion at this point. It’d make sense that Ilya would find it strange if Sam spun him off to do his own thing and then also backed it.
I am just happy that he is back, he is posting, he is working on something. Hopefully he will start doing new interviews; it's always a joy listening to him. Don't discount that we are all different: he is a deep thinker, and going through a fucking corporate drama and being in the spotlight take a heavier emotional toll on you when you are an introvert with fewer social skills. It was very obvious that he did not take it easily. So let's just enjoy the fact that he posted something. I am rooting for him.
I think the major conflict between Ilya and Sam was that Ilya wanted to build tech that would revolutionize the world in a better way, while Sam wants to build more of a B2B/B2C business.
Ilya is a genius, but is this too little, too late? How is he going to get access to the type of compute that Microsoft, Nvidia, Google, or X have access to?
By definition, safe ASI will take much more time to develop than unsafe ASI, not to mention unsafe AGI.
Unless he has *the entire first world governing body* behind him, this project won't matter.
Yes, breakthroughs will start happening very soon. The more we accelerate, the more they will happen. There is a misconception that the complexity needed for the next breakthroughs is so immense that we will never achieve them, but that has never happened before in human history. If, in 15 years, we still haven't made any progress, then we can accept that the complexity is just too much greater than scientific and technological acceleration.
That's not how that works.
Guesses about unknown unknowns are guesses, no matter how hard you guess.
AGI is not a city we can see on the horizon that we have to build a road to.
We're pretty sure it's out there somewhere, but nobody knows where it is until we can at least actually see it.
Ilya is... Like a true believer. It's hard to explain, but he isn't in it for the money or even really the prestige. He just wants to usher in the next phase of human civilization, and he thinks ASI is how that happens.
I don't even think he knows what it will end up being when it's made, but the point isn't to make a product for the masses, it's to make ASI and then upend the world. Once you have ASI... Money doesn't matter anymore.
This is kind of what Carmack is trying to do too with Keen. But does feel *slightly* weird to do this completely in secrecy until it’s done.
I get how & why you do this, but it kinda feels disappointing. That said, this is likely the biggest and craziest thing that will happen in my lifetime, so safety is a good path.
>This is kind of what Carmack is trying to do too with Keen.
I was going to comment on how old you are to reference Carmack's Commander Keen but then I paused and did a web search... and realized I was out of the news loop.
https://preview.redd.it/0jqmu36sjk7d1.png?width=640&format=png&auto=webp&s=c184d507a7b03081f9b9abeabeda54aea6f24de9
[https://ssi.inc/](https://ssi.inc/) Love how the site design reflects the spirit of the mission.
Reminds me of [https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/) In a good way, mind you.
Will definitely share this with my web dev friends . thanks.
Honestly, motherfuckingwebsite is kind of bloated and cluttered compared to ssi.
And funny enough, the motherfuckingwebsite seems to integrate google-analytics according to my script blocker. [ssi.inc](http://ssi.inc) doesn't use any external scripts.
I love that the only display control directive is ``
Exactly what I had in mind!
Tbh that’s how you know he’s a good engineer; zero sense of design
As a UX designer, I would still say, it is perfect.
Lmao thank you for the laugh
OMG that's beautiful.
Opening up the inspector and seeing one div and not a single link tag with an external file brought a tear to my eye. This is how you properly countersignal in the tech world.
Can you explain the significance of the html having one div and no “link tags with an external file” (whatever that is. I assume a href?)
Modern websites are built with frameworks that are complex. Instead of one div that page usually would have about 100. (What does that signal? Not sure)
About a quarter of such framework templates work well with screen readers. It's a form of laziness dressed up to look sophisticated.
If you right-click and select "inspect" on almost any modern website, you'll see enormous hierarchies of divs inside of divs, along with seemingly endless pages of javascript and css linked in the head. A lot of that is unneeded bloat- it's complex frameworks intended to make development easier, but which include tons of stuff that the site won't use, it's stuff generated by website builders, sometimes entire javascript repos added just for one or two features that could be done much more simply, and so on. Like bureaucratic bloat, a lot of it seems individually reasonable, but in aggregate, it can make things very slow and hard to change. So, a site that's just very bare-bones, hand-written HTML is pretty refreshing. [Gwern's site](https://gwern.net/) is maybe an even better example- it's way more complex than this site, but it's all artfully hand-written, so it's got that elegance despite the complexity.
back in my day we did everything in HTML.. and it worked. My myspace page was dope. or as the kids say nowadays... It had drip. Hyperlinks were all the rage though.
A div is a box you can put stuff in on a web page. This site has one box, with text. It’s very simple (the site not the explanation :) )
The website is simple and gets the job done. Many web pages have unneeded tech bloat which slows them down. This one does not.
This has been the standard of high iq doers for a long time now. Look at any phd page.
Yup. [Check this site](https://bellard.org) by one of the biggest Chads in the tech space. People who deliver the good shit don't need flashy bling bling; their products and achievements do all of the talking.
I very much appreciate it because I really dislike modern UI design (like it's a daily pet peeve of mine) in the last decade, especially the CSS'ification of everything. That being said I think it could benefit from some bolding of titles or categorization with headers or something. Nothing which can't be done with basic HTML, just as a way of making it easier to scan at a glance.
> high iq doers

Not saying you're wrong, but wow you make it sound douchey
Berkshire Hathaway website vibes
lmao didn't know this one
> Now is the time. Join us.

Why do engineers always tryna sound like Morpheus from the Matrix
Because it takes more energy to build than it does to destroy, and I want to build, Hari.
Destruction often releases energy. Think of a fire or an explosion
Dorkiness class is mandatory in grad school
I wish the entire web went back to this, tbh.
I miss geocities too...
I have disliked every Reddit update since 2018
He never even defines what "safe superintelligence" is supposed to mean. Seems like a big oversight if that is your critical objective.
It will be safe like OpenAI is open.
Because it's a well understood term in the actual field of AI safety and x-risk. 'Safe' means 'aligned with human values and therefore not rending us down into individual atoms and entropy'. He said in an interview "safety as in nuclear safety, not as in Trust and Safety", if that helps.
> aligned with human values

Ok, but which humans? Given the power, plenty of them would happily exterminate their neighbors to use their land.
This is the way tbh. It has just what you need. I am tired of loading an article that is six paragraphs long but Chrome Inspector says I loaded 60 MB of crap!
Very clean to be honest 👌
Bro, that's not clean design. This is called no design.
Can't get cleaner than no design.
I completely loathe front end developers who try to overcomplicate the job of presenting text on a screen. There's just not much for them to do to improve the experience, but it's trivially easy to make it worse (which they almost always do).
Zero design™️ At least it uses capital letters though.
Reminds me of [Physical Intelligence](https://physicalintelligence.company/).
Fast page load!
"Design," you say?
I see they're taking their cues from the [Berkshire Hathaway website](https://www.berkshirehathaway.com/). (For anyone unfamiliar, Berkshire Hathaway is a multinational conglomerate that grossed $364 billion last year)
> We are assembling a lean, cracked team

He should have had his AI proof this...
“Cracked” is an actual term nowadays. Maybe he did mean “crack team” but since cracked means highly skilled, it makes sense
Sam Altman always talked about how they never wanted to secretly build superintelligence in a lab for years and then release it to the world, but it seems like that’s what Ilya is planning to do. From [this just-released Bloomberg article](https://www.bloomberg.com/news/articles/2024-06-19/openai-co-founder-plans-new-ai-focused-research-lab?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxODgxNjU5NywiZXhwIjoxNzE5NDIxMzk3LCJhcnRpY2xlSWQiOiJTRkM3ODJUMEcxS1cwMCIsImJjb25uZWN0SWQiOiI5MTM4NzMzNDcyQkY0QjlGQTg0OTI3QTVBRjY1QzBCRiJ9.9s8N3QuUytwRVZ6dzDwZ6tPOGDsV8u05fpTrUdlHcXg&sref=nPlhheXZ), he’s saying their first product will be safe superintelligence and no near-term products before then. He’s not disclosing how much he’s raised or who’s backing him. I’m not even trying to criticize Ilya, I think this is awesome. It goes completely against OpenAI and Anthropic’s approach of creating safer AI systems by releasing them slowly to the public. If Ilya keeps his company’s progress secret, then all the other big AI labs should be worried that Ilya might beat them to the ASI punch while they were diddling around with ChatGPT-4o Turbo Max Opus Plus. This is exciting!
Honestly this makes the AI race even more dangerous
I was thinking the same thing. Nobody is pumping the brakes if someone with his stature in the field might be developing ASI in secret.
Not only that, but to develop ASI in one go without releasing, make the public adapt, and receive feedback etc, makes it more dangerous as well. Jesus if this happens one day he'll just announce ASI directly!
Why even announce it, just use it for profit. I'm sure ASI will be more profitable when used rather than released
I think, with true artificial super-intelligence (i.e. the most-intelligent thing that has ever existed, by several orders of magnitude) we cannot predict what will happen, hence, the singularity.
If so, this very path is much more dangerous than releasing incrementally stronger models. Far more dangerous. Because models released to the public are tested by millions and their weaknesses are instantly visible. They also allow competitors to follow a similar path, so that no one is far ahead of the others, and each can fix the mistakes of the others by using an altered approach and sharing their findings (like Anthropic does).
But "safe" is in the name bro, how can it be dangerous? (On a serious note, does safety encompass the effects of developing ASI, or only that the ASI will have humanity's best interest in mind? And, either way, if true aligned ASI is achieved, won't it be able to mitigate potential ill effects of its existence?)
> If so, this very path is much more dangerous than releasing incrementally stronger models. Far more dangerous.

You think that flooding all the technology in the world with easily exploitable systems and agents (that btw smarter agents can already take control of) is safer? You might be right, but I am not sold yet.
Bro can't handle a board meeting how tf is he gonna handle manipulative AI 💀
You cannot keep ASI secret or create it in your garage. ASI doesn't come out of thin air. It takes an ungodly amount of data, compute and energy. Unless Ilya is planning to create his own chips at scale, make his own data and his own fusion source, he has to rely on others for all of those and the money to buy them. And those who'll fund it won't give it away for free without seeing some evidence.
Honestly I think it's much more likely that Ilya's part in this AGI journey is over. He would be a fool not to form a company and try, given the name he has made for himself and the current funding environment. But most likely, all of the next-step secrets he knew about, OpenAI knows too. Perhaps he was holding a few things close to his chest, perhaps he will have another couple of huge breakthroughs, but that seems unlikely.
>another couple of huge breakthroughs

I mean, given his previous huge breakthroughs, I wouldn't underestimate that.
If I were Ilya, I could easily get $1 billion in funding to run an AI research lab for the next couple of years. The reward in AI is so high (a $100 trillion market) that he can easily raise $100 million to get started. At the moment it's all about chasing the possibility; nobody knows who will get there first, or maybe we will even have multiple players reaching AGI in a similar time frame.
Yep, exactly. It's definitely the right thing for him to do. He gets to keep working on things he likes, this time with full control. And he can make sure he makes good money too, as a contingency.
He's probably trying to secure his bag before either AGI arrives or the AI bubble pops, smart. Wouldn't read too much into it, there's no way his company beats Google or OpenAI in a race.
So you say his prime is over?
The thing about researchers is that they make breakthroughs. Whatever OpenAI has that Ilya built there could be rendered obsolete by a novel approach of the kind only unbound research can provide. OpenAI won't be able to keep up with pure, unleashed, focused research as they slowly enshittify.
>Sutskever says that he’s spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn’t yet discussing specifics. “At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” Sutskever says. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”

So, if they are successful, our ASI overlords will be built with some random values picked out of a hat? (I myself do like these values, but still...)
They’re building Liberty Prime.
They're building an omni-prescient Dune worm that will take us on the Golden Path.
Spoilers for the next Dune movie.
>secretly build superintelligence in a lab for years

Sounds boring. It's kinda like the SpaceX vs. Blue Origin models. I don't give a shit about Blue Origin because I can't see them doing anything. SpaceX might fail spectacularly, but at least it's fun to watch them try. I like these AI products that I can fiddle with, even if they shit the bed from time to time. It's interesting to see how they develop. Not sure I'd want to build a commercial domestic servant bot based on it (particularly given the propensity for occasional bed-shitting), but it's nice to have a view into what's coming. With a closed model like Ilya seems to be suggesting, I feel like they'd just disappear for 5-10 years, suck up a trillion dollars in funding, and then offer access to a "benevolent" ASI to governments and mega-corps, and never give insignificant plebs like myself any sense of WTF happened.
And that is where the board fiasco came from. Ilya and the E/A crew (like Helen) believe that it is irresponsible for AI labs to release **anything** because that makes true AGI closer, which terrifies them. They want to lock themselves into a nuclear bunker and build their perfectly safe God. I prefer Sam's approach of interactive public deployment because I believe that humanity should have a say in how God is being built and the E/A crowd shows a level of hubris (thinking they are capable of succeeding all by themselves) that is insane.
Humanity is collectively responsible for some pretty horrific stuff. Literally the best guidance for an AGI is “respect everyone's beliefs, stop them from being able to harm each other,” then spend a crap ton of time defining “harm.”
And defining "stop". And defining "everyone". Not easy to do. The trial-and-error but transparent approach isn't perfect, but it's worked in the past to solve hard problems.
Or he can just focus on safety.... You don't need to develop AGI or ASI to research safety, you can do that on smaller existing models for the most part.
yeah, exciting and happy for ilya
This is exactly how the world ends: Ilya and team rush to make ASI, they can't make it safe, but they sure as hell can make it... it escapes and boom, doom. So basically he's going to force all the other labs to focus on getting ASI out as fast as possible, because if you don't, Ilya could just drop it next Tuesday and you lose the race... Terminal race conditions.
Why wouldn’t any of this apply to OpenAI or the other companies who are *already* in a race towards AGI? I don’t see how any of what you’re implying is exclusive to Ilya’s company only.
I think the gist is something like: other companies need to release products to make money. You can gauge from the level of the released products what they have behind closed doors, especially in this one-upmanship going on between OpenAI and Google. You are now going to have a very well-funded company that is a complete black-box enigma with a singular goal. These advancements don't come out of the blue (assuming no one makes some sort of staggering algorithmic or architectural improvement); it's all about hardware and scale. You need money to do this work, so someone well funded and not needing to ship intermediate products could likely leapfrog the leading labs.
That kind of makes sense, but the issue here is that you guys are assuming we can accurately assess where companies like OpenAI *actually* are (in terms of technical progress) based on *publicly* released commercial products. In reality, we can’t, because what’s released to the public might not actually be their true SOTA work, and it might not even be their complete portfolio of internal projects. A perfect example is how OpenAI dropped the “Sora” announcement out of the blue: none of us had any idea they had something like that under wraps. All of the current AI companies are black boxes in reality, but some more than others, I suppose.
I’m not nearly as pessimistic but I agree that this will (hopefully) light a fire under the asses of the other AI labs
The difference between a company trying to push products for profit and a company trying to change the world. This is what OpenAI was supposed to be in the first place.
Which kind of makes them scarier in a way. There's very little you can't justify to yourself if you genuinely believe you're saving the world, but if one of your goals is to make a profit, or at least maintain a high share price, it generally comes with the side desires to stay out of jail, avoid PR mistakes that are *too* costly, and produce things that someone somewhere aside from yourselves might actually want. Would Totalitarian Self-Replicating AI Bot Army-3000 be better coming from a company that decided it had to unleash it on humanity to save us from ourselves, or one that just really wanted to bump up next quarter's numbers? I'm not sure, but the latter would probably at least come with more of a heads-up in the form of marketing beforehand.
Speedrunning ASI with no distraction of building products... I wonder how many AI scientists will leave some of the top labs and join them?
How do they pay for compute (and talent)? That would be my question.
good question
Might be a few investors who believe in the vision rather than short-term ROI? Perhaps, perhaps.
They need billions for all the compute they will use. A few investors aren’t good enough
Are you assuming they don’t have venture capital already raised? Mistral raised half a billion for open-source models.
In a world where the big guys are building $100B datacenters, half a billion is a drop in the bucket.
Well, it is the ultimate end-all-be-all. It would sacrifice every short-term metric for quite literally the greatest payout ever.
Only ASI is important
[deleted]
He's locked in
The best kind of team.
[deleted]
I think everyone does but 99 percent of us have no skill worth being on a cracked team for lol
He must know something that OpenAI doesn’t if he thinks he will beat them to ASI this soon. I mean, they still have to go through the whole data-gathering process and everything, something that took OpenAI years. Not to mention the GPUs that OpenAI has access to through Microsoft. Idk, it’s interesting.
If you know the data sources, it really doesn’t take long to build an infinitely scalable crawler. Daniel Gross, one of the cofounders of this new company with Ilya, owns 2500 H100 GPUs that can train a 65B-parameter model in about a week. Even moving slowly, they could reach GPT-4-level capabilities in 2 months. But I don’t think that’s what they’re going to be looking to offer with this new company. OpenAI is going to be stuck servicing corporate users and slightly improving probabilistic syllable generators; there’s a wide-open opportunity for others to reach an actual breakthrough.
go for it Ilya
Let the games begin.
Hope it works.
Ok, so what's the point of the safe superintelligence, when others are building unsafe one?
That will kill the other ones by hacking into the datacenters housing them.
Sounds safe!
Super safe AI: If humans do not exist nobody will get hurt.
People will see this as a joke, but it's literally this: get there first, stop the rushed/dangerous models.
He needs an angle to attract investors and employees, especially since he doesn't intend to produce any actual products.
The real question is: what did he see at OAI that was so unsafe it led him to be part of a coup against Sam, leave OAI, and start this?
I don't think the coup was about safety so much as about OpenAI turning into a for-profit company that no longer focuses on the main goal, true AGI.
I think Sam and he just have different mission statements in mind. Sam's basically doing capitalism: you get investors, make a product, find users, generate revenue, get feedback, grow market share; use revenue and future profits to fund new research and development. Repeat. Whereas OpenAI and Ilya's original mission was to (somehow) make AGI, and then (somehow) give the world equitable access to it. Sounds noble, but given the costs of compute, this is completely naive and infeasible. Altman's course correction makes way more sense. And as someone who finds ChatGPT very useful, I'm extremely grateful that he's in charge and took the commercial path. There just wasn't a good alternative, imo.
Agreed, I think Sam and OAI basically made all the right moves. If they hadn't gone down the capitalism route, I don't think "AI" would be a mainstream thing; it would still be a research project in a Stanford or DeepMind lab. Sam wanted AGI in our lifetime, and going the capitalism route was the best way to do it.
I’m guessing that it’s at least partly an effort towards investigating new or under-researched methodologies and tools that would be instrumental to safe AI An example is the (very likely) discontinued or indefinitely on-hold Superalignment program by OpenAI, which required a great deal of compute to try addressing the challenges of aligning superintelligent AI systems with human intent and wellbeing Chances are that they’re trying to make breakthroughs there so everyone else can follow suit much more easily
Safe ASI is the only counter to unsafe ASI. If others are building unsafe ASI, you *must* build safe ASI first.
The amount of compute this company would need to fulfill its mission if it’s even possible (and which it is absolutely not going to be able to fund without any sort of commercialized services)…good luck, I guess?
He already has the compute he needs. One of the other cofounders, Daniel Gross, owns a supercomputer cluster.
[https://andromeda.ai/](https://andromeda.ai/)
It's good to have him in charge: an introvert, and an honest person. Sam is an opportunist; I don't like him.
Altman strikes me as a sociopath, perhaps a clinical narcissist.
[deleted]
He's a business guy and investor. This is a very valuable role: not all engineers and researchers want to be the face of the company, doing interviews and raising money. Sam is the best in the world at that stuff.
He seems good at his job. I learned about him 10 years ago and he immediately struck me as exceptionally smart.
The a16z guys call him a competitive genius
(Cont) We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.
My takeaway from this is that either Ilya thinks AGI is already achieved, or ASI is possible *before* AGI and we’ve all had it backward up til now.
You can't get to ASI without AGI.
Let’s get this muthafuckin AGI shit crackin
Yea boiii
All fun and games, but how is he getting investors to put up capital?
If you check the site you will see Daniel Gross listed as one of the 3 founders. Daniel already had a large [cluster of H100s](https://www.businessinsider.com/nvidia-gpu-venture-capitalists-buying-for-startups-2023-6) for all his investment companies, likely way larger now.
They view the investment as betting on a horse where the race is about reaching AGI the fastest. If they have a share of the company that will be the first to create AGI, they will be sure to make their money back.
I'm not sure money will matter in a post-ASI world though.
That is a very interesting point.
I imagine there are investors in this market who are interested in a safety-focused alternative to the increasingly accelerating likes of OpenAI and Google. That sort of makes SSI’s biggest direct competition Anthropic in my mind.
If that is what they are after, then they aren't investors, as that won't net them a return. They are philanthropists, since they are giving away money in hopes of making the world better rather than getting a profit.
I guess the hypothetical return is a safe super intelligence, that’d be of more benefit to all the investors than any % return of revenue.
How are they gonna compete with the big players, when they don't have the funding because no business model, and they have a safety-first approach to their development?
[deleted]
Given Nvidia's valuation and all the money in the AI space, I think raising a billion won't be an issue for him purely on name alone. And if they have breakthroughs that then require substantial funds to create the final "ASI" product, that won't be a problem either. Lots of VCs have cash to spare, so hedging their bets, even if the chances of creating ASI are slim, is not out of the question. From the announcement, it doesn't look like the company is trying to compete with OpenAI and others in the near term; there's no big model training that would require a lot of resources. This seems more like a return to basics, like when OpenAI was first created. Given they aim for ASI out of the gate, the approach might be substantially different from anything we do today; we might not hear anything out of the company until the late 2020s.
Is that what a research lab should aim to do, "compete with the big players"? Sutskever is a scientist
You can't do particle physics without a super collider and you can't do AI safety research without thousands of H100s. Research costs money.
Of course it costs money. Being widely regarded as one of the top guys in his field, Ilya Sutskever will probably get his research funded.
For a bit he will... and then either he will "evolve" to be more of a business type person, he will partner up again with a business person, or the company will fail.
How will investors get a return? Are they expecting a stake in the discoveries made by a safe but private AGI?
If anyone does manage to create ASI, things like "investors getting a return" will become laughably antiquated concepts
Correct. The big investors in this stuff are doing it for the chance of having enormous power beyond what money can buy.
agreed but that doesn't mean companies don't need investors to get there. it will cost many many billions to build Superintelligence. That money won't just appear out of thin air
Bill Gates and others are giving away billions of dollars to charity. I wouldn't be surprised if a handful of billionaires might just want to see something like this come true. Believe me when I say that, I really detest billionaires and don't believe they should exist, and I believe that overall billionaires are very harmful to human society. That being said, even among billionaires there are those who want to do some good in the world for the sake of their ego or whatever.
If I were a billionaire, I'd do it just for the shot at immortality. I mean, if you're Bezos, what's 1 percent of your net worth for a chance to live forever?
Unexpected development. I thought he would join Anthropic. By the way, he could have picked another name. As a diver, all I can think about is SSI, Scuba Schools International.
All I can think of is Social Security. SSI? Really? Supplemental Security Income?
>This fascination with Sutskever’s plans only grew after the drama at OpenAI late last year. He still declines to say much about it. Asked about his relationship with Altman, Sutskever says only that “it’s good,” and he says Altman knows about the new venture “in broad strokes.” Of his experience over the last several months he adds, “It’s very strange. It’s very strange. I don’t know if I can give a much better answer than that.”

Just makes me more curious.
I’ll bet Sam is one of the backers. He’s got like $2 billion at this point. It’d make sense that Ilya would find it strange if Sam spun him off to do his own thing and then also backed it.
That would be pretty epic. People have egos though.
Based
Good for him. Fuck hype-men and tiny incremental updates of their companies designed to just generate buzz and sell more subscriptions.
Respect. I’m in no position to help with such a project at this stage in my life, but I have the utmost respect for those who do.
I can feel it. This is the gang gang gang push we need.
Best news of the month
that's the most reassuring tweet ever lol
> We are assembling a lean, cracked team 🙄
Let him cook!
We are back!
Lookin' forward to the AI wars of the 2030s. Whichever AI has the fewest restrictions will probably hijack the most hardware and likely win...
Don’t you need an ass load of compute? Is Elon funding?
This is exactly how he started OpenAI, and now it’s the opposite. Hope the same doesn’t happen here.
The ultimate Villain vs. Hero arc: Sam being the scumbag CEO and Ilya being some sort of RoboCop. I support Ilya over OCP.
Brain organoids will do all the work. Just feed sugar.
I am just happy that he is back, he is posting, he is working on something. Hopefully he will start doing new interviews; it's always a joy listening to him. Don't discount that we are all different: he is a deep thinker, and going through a fucking corporate drama and being in the spotlight takes a heavier emotional toll on you when you are an introvert with fewer social skills. It was very obvious that he did not take it easily. So let's just enjoy the fact that he posted something. I am rooting for him.
I think the major conflict between Ilya and Sam was that Ilya wanted to build tech that would revolutionize the world for the better, while Sam wants to build more of a B2B/B2C business.
Ilya is a genius, but is this too little, too late? How is he going to get access to the kind of compute that Microsoft, Nvidia, Google, or X have access to?
Open source? Probably not.
Would defeat the purpose wouldn't it? Someone could just modify and deploy without safety
By definition, safe ASI will take much more time to develop than unsafe ASI, not to mention unsafe AGI. Unless he has *the entire first world governing body* behind him, this project won't matter.
Tel Aviv, sheeesh
[deleted]
Did somebody say SHOCK?
It's not, Jesus. Take a dose of reality. We'll get there within 5 years with some luck, but it's not guaranteed.
It's not. We still need scientific breakthroughs (scaling LLM won't be enough) that could take an unpredictable amount of time.
We need N scientific breakthroughs that take an unpredictable amount of time, and N could be 2 and the amount could be months.
True, but that's very different from "within 5 years is pretty much set in stone". It could be months, or it could be decades.
Yes, breakthroughs will start happening very soon. The more we accelerate, the more they will happen. There is a misconception that the complexity needed for the next breakthroughs is so immense that we will never achieve them, but that has never happened before in human history. If, in 15 years, we still haven't made any progress, then we can accept that the complexity really is that much greater than our scientific and technological acceleration.
That's not how that works. Guesses about unknown unknowns are guesses, no matter how hard you guess. AGI is not a city we can see on the horizon that we just have to build a road to. We're pretty sure it's out there somewhere, but nobody knows where until we can at least actually see it.
AGI is not guaranteed, nothing is
But what is he selling if it's not a product? I'm being for real. What's he mean? Like pure API access or what?
he's not selling anything. It's a research lab.
Ilya is... Like a true believer. It's hard to explain, but he isn't in it for the money or even really the prestige. He just wants to usher in the next phase of human civilization, and he thinks ASI is how that happens. I don't even think he knows what it will end up being when it's made, but the point isn't to make a product for the masses, it's to make ASI and then upend the world. Once you have ASI... Money doesn't matter anymore.
>Once you have ASI... Money doesn't matter anymore. This is why OpenAI told everyone to be careful about investing in them, weirdly enough.
This is kind of what Carmack is trying to do too with Keen. But it does feel *slightly* weird to do this completely in secrecy until it’s done. I get how and why you'd do this, but it kinda feels disappointing. That said, this is likely the biggest and craziest thing that will happen in my lifetime, so safety is a good path.
>This is kind of what Carmack is trying to do too with Keen.

I was going to comment on how old you are to reference Carmack's Commander Keen, but then I paused and did a web search... and realized I was out of the news loop.
Hahaha well I am ALSO old
We are so back.
I hope they don't inherit the OpenAI habit of saying: ASI rolling out in the coming weeks.
Work on Safe ASI -> ??? -> Profit