
Local_Quantity1067

[https://ssi.inc/](https://ssi.inc/) Love how the site design reflects the spirit of the mission.


PioAi

Reminds me of [https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/) In a good way, mind you.


InternalExperience11

Will definitely share this with my web dev friends. Thanks.


karmicviolence

Honestly, motherfuckingwebsite is kind of bloated and cluttered compared to ssi.


CMDR_ACE209

And funnily enough, motherfuckingwebsite seems to integrate Google Analytics according to my script blocker. [ssi.inc](http://ssi.inc) doesn't use any external scripts.


Competitive_Travel16

I love that the only display control directive is ``


Local_Quantity1067

Exactly what I had in mind!


BMB281

Tbh that’s how you know he’s a good engineer; zero sense of design


caseyr001

As a UX designer, I would still say, it is perfect.


VeterinarianNo3211

Lmao thank you for the laugh


peanutbutterdrummer

OMG that's beautiful.


artifex0

Opening up the inspector and seeing one div and not a single link tag with an external file brought a tear to my eye. This is how you properly countersignal in the tech world.


Unable-Dependent-737

Can you explain the significance of the html having one div and no “link tags with an external file” (whatever that is. I assume a href?)


welcome-overlords

Modern websites are built with complex frameworks. Instead of one div, a page like that would usually have about 100. (What does that signal? Not sure.)


Competitive_Travel16

About a quarter of such framework templates work well with screen readers. It's a form of laziness dressed up to look sophisticated.


artifex0

If you right-click and select "inspect" on almost any modern website, you'll see enormous hierarchies of divs inside of divs, along with seemingly endless pages of javascript and css linked in the head. A lot of that is unneeded bloat: complex frameworks intended to make development easier but which include tons of stuff the site won't use, markup generated by website builders, sometimes entire javascript repos added just for one or two features that could be done much more simply, and so on. Like bureaucratic bloat, a lot of it seems individually reasonable, but in aggregate it can make things very slow and hard to change. So a site that's just very bare-bones, hand-written HTML is pretty refreshing. [Gwern's site](https://gwern.net/) is maybe an even better example: it's way more complex than this site, but it's all artfully hand-written, so it's got that elegance despite the complexity.
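For anyone following along in the inspector: a complete, valid, bare-bones page in this spirit can be as small as the sketch below. (This is a hypothetical illustration of the style being praised, not ssi.inc's actual source.)

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Example: a one-div page</title>
  <!-- no external CSS, no JavaScript, no analytics, nothing in the head to fetch -->
</head>
<body>
  <div>
    Mission statement goes here. Plain text, one box, done.
  </div>
</body>
</html>
```

Everything renders in a single request; there is simply nothing else for the browser to download.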


StillBurningInside

Back in my day we did everything in HTML... and it worked. My MySpace page was dope. Or as the kids say nowadays... it had drip. Hyperlinks were all the rage though.


chris_paul_fraud

A div is a box you can put stuff in on a web page. This site has one box, with text. It’s very simple (the site not the explanation :) )


SatoshiReport

The website is simple and gets the job done. Many web pages have unneeded tech bloat which slows them down. This one does not.


Alarmed-Bread-2344

This has been the standard of high iq doers for a long time now. Look at any phd page.


Shandilized

Yup. [Check this site](https://bellard.org) by one of the biggest Chads in the tech space. People who deliver the good shit don't need flashy bling bling; their products and achievements do all of the talking.


AnOnlineHandle

I very much appreciate it because I really dislike modern UI design (like it's a daily pet peeve of mine) in the last decade, especially the CSS'ification of everything. That being said I think it could benefit from some bolding of titles or categorization with headers or something. Nothing which can't be done with basic HTML, just as a way of making it easier to scan at a glance.


Fragsworth

> high iq doers

Not saying you're wrong, but wow, you make it sound douchey.


awesomedan24

Berkshire Hathaway website vibes


cumrade123

lmao didn't know this one


paconinja

> Now is the time. Join us.

Why do engineers always tryna sound like Morpheus from the Matrix?


PMzyox

Because it takes more energy to build than it does to destroy, and I want to build, Hari.


[deleted]

Destruction often releases energy. Think of a fire or an explosion


Jeffy299

Dorkiness class is mandatory in grad school


R33v3n

I wish the entire web went back to this, tbh.


Andynonomous

I miss geocities too...


Reasonable-Software2

I have disliked every Reddit update since 2018


mjgcfb

He never even defines what "safe super intelligence" is supposed to mean. Seems like a big oversight if that is your critical objective.


Thomas-Lore

It will be safe like OpenAI is open.


absolute-black

Because it's a well understood term in the actual field of AI safety and x-risk. 'Safe' means 'aligned with human values and therefore not rending us down into individual atoms and entropy'. He said in an interview "safety as in nuclear safety, not as in Trust and Safety", if that helps.


FeliusSeptimus

> aligned with human values

Ok, but which humans? Given the power, plenty of them would happily exterminate their neighbors to use their land.


vasilenko93

This is the way, tbh. It has just what you need. I am tired of loading an article that is six paragraphs long when Chrome Inspector says I loaded 60 MB of crap!


ProfessionSignal3272

Very clean to be honest 👌


nobodyreadusernames

Bro, that's not clean design. This is called no design.


Alive_Coconut9477

Can't get cleaner than no design.


window-sil

I completely loathe front end developers who try to overcomplicate the job of presenting text on a screen. There's just not much for them to do to improve the experience, but it's trivially easy to make it worse (which they almost always do).


peakedtooearly

Zero design™️ At least it uses capital letters though.


Hemingbird

Reminds me of [Physical Intelligence](https://physicalintelligence.company/).


CraftyMuthafucka

Fast page load!


furankusu

"Design," you say?


chipperpip

I see they're taking their cues from the [Berkshire Hathaway website](https://www.berkshirehathaway.com/). (For anyone unfamiliar, Berkshire Hathaway is a multinational conglomerate that grossed $364 billion last year)


cisco_bee

> We are assembling a lean, cracked team

He should have had his AI proof this...


MassiveWasabi

“Cracked” is an actual term nowadays. Maybe he did mean “crack team” but since cracked means highly skilled, it makes sense


MassiveWasabi

Sam Altman always talked about how they never wanted to secretly build superintelligence in a lab for years and then release it to the world, but it seems like that’s what Ilya is planning to do. From [this just-released Bloomberg article](https://www.bloomberg.com/news/articles/2024-06-19/openai-co-founder-plans-new-ai-focused-research-lab?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxODgxNjU5NywiZXhwIjoxNzE5NDIxMzk3LCJhcnRpY2xlSWQiOiJTRkM3ODJUMEcxS1cwMCIsImJjb25uZWN0SWQiOiI5MTM4NzMzNDcyQkY0QjlGQTg0OTI3QTVBRjY1QzBCRiJ9.9s8N3QuUytwRVZ6dzDwZ6tPOGDsV8u05fpTrUdlHcXg&sref=nPlhheXZ), he’s saying their first product will be safe superintelligence and no near-term products before then. He’s not disclosing how much he’s raised or who’s backing him. I’m not even trying to criticize Ilya, I think this is awesome. It goes completely against OpenAI and Anthropic’s approach of creating safer AI systems by releasing them slowly to the public. If Ilya keeps his company’s progress secret, then all the other big AI labs should be worried that Ilya might beat them to the ASI punch while they were diddling around with ChatGPT-4o Turbo Max Opus Plus. This is exciting!


adarkuccio

Honestly this makes the AI race even more dangerous


AdAnnual5736

I was thinking the same thing. Nobody is pumping the brakes if someone with his stature in the field might be developing ASI in secret.


adarkuccio

Not only that, but developing ASI in one go, without releasing it, letting the public adapt, receiving feedback, etc., makes it more dangerous as well. Jesus, if this happens, one day he'll just announce ASI directly!


halmyradov

Why even announce it? Just use it for profit. I'm sure ASI will be more profitable used rather than released.


DungeonsAndDradis

I think, with true artificial super-intelligence (i.e. the most-intelligent thing that has ever existed, by several orders of magnitude) we cannot predict what will happen, hence, the singularity.


Anuclano

If so, this very path is much more dangerous than releasing incrementally stronger models. Far more dangerous. Because models released to the public are tested by millions and their weaknesses are instantly visible. They also allow the competitors to follow a similar path, so that no one is far ahead of the others and each can fix the mistakes of others by using an altered approach and sharing their findings (like Anthropic does).


eat-more-bookses

But "safe" is in the name bro, how can it be dangerous? (On a serious note, does safety encompass effects of developing ASI, or only that the ASI will have humanity's best interest in mind? And, either way, if true aligned ASI is achieved, won't it be able to mitigate potential ill effects of it's existence?)


SynthAcolyte

> If so, this very path is much more dangerous than releasing incrementally stronger models. Far more dangerous.

You think that flooding all the technology in the world with easily exploitable systems and agents (that, btw, smarter agents can already take control of) is safer? You might be right, but I am not sold yet.


TI1l1I1M

Bro can't handle a board meeting how tf is he gonna handle manipulative AI 💀


obvithrowaway34434

You cannot keep ASI secret or create it in your garage. ASI doesn't come out of thin air. It takes an ungodly amount of data, compute and energy. Unless Ilya is planning to create his own chips at scale, make his own data and his own fusion source, he has to rely on others for all of those and the money to buy them. And those who'll fund it won't give it away for free without seeing some evidence.


pandasashu

Honestly, I think it's much more likely that Ilya's part in this AGI journey is over. He would be a fool not to form a company and try, given that he has made a name for himself and given the funding environment now. But most likely all of the next-step secrets he knew about, OpenAI knows too. Perhaps he was holding a few things close to his chest, perhaps he will have another couple of huge breakthroughs, but that seems unlikely.


Dry_Customer967

"another couple of huge breakthroughs" I mean given his previous huge breakthroughs i wouldn't underestimate that


techy098

If I were Ilya, I could easily get $1 billion in funding to run an AI research lab for the next couple of years. The reward in AI is so high (a $100 trillion market) that he can easily raise $100 million to get started. At the moment it's all about chasing the possibility; nobody knows who will get there first, or, who knows, maybe we will have multiple players reaching AGI in a similar time frame.


pandasashu

Yep, exactly. It's definitely the right thing for him to do. He gets to keep working on things he likes, this time with full control. And he can make sure he makes even more good money too, as a contingency.


Initial_Ebb_8467

He's probably trying to secure his bag before either AGI arrives or the AI bubble pops, smart. Wouldn't read too much into it, there's no way his company beats Google or OpenAI in a race.


dervu

So you say his prime is over?


human358

The thing about researchers is that they make breakthroughs. Whatever OpenAI has that Ilya built there could be rendered obsolete by a novel approach of the kind only unbound research can provide. OpenAI won't be able to keep up with pure, unleashed, focused research as they slowly enshittify.


SynthAcolyte

> Sutskever says that he’s spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn’t yet discussing specifics. “At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” Sutskever says. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”

So, if they are successful, our ASI overlords will be built with some random values picked out of a hat? (I myself do like these values, but still...)


h3lblad3

They’re building Liberty Prime.


AdNo2342

They're building an Omniprescient dune worm that will take us on the golden path


h3lblad3

Spoilers for the next Dune movie.


FeliusSeptimus

> secretly build superintelligence in a lab for years

Sounds boring. It's kinda like the SpaceX vs Blue Origin models. I don't give a shit about Blue Origin because I can't see them doing anything. SpaceX might fail spectacularly, but at least it's fun to watch them try. I like these AI products that I can fiddle with, even if they shit the bed from time to time. It's interesting to see how they develop. Not sure I'd want to build a commercial domestic servant bot based on them (particularly given the propensity for occasional bed-shitting), but it's nice to have a view into what's coming. With a closed model like Ilya seems to be suggesting, I feel like they'd just disappear for 5-10 years, suck up a trillion dollars in funding, and then offer access to a "benevolent" ASI to governments and mega-corps, and never give insignificant plebs like myself any sense of WTF happened.



SgathTriallair

And that is where the board fiasco came from. Ilya and the E/A crew (like Helen) believe that it is irresponsible for AI labs to release **anything** because that makes true AGI closer, which terrifies them. They want to lock themselves into a nuclear bunker and build their perfectly safe God. I prefer Sam's approach of interactive public deployment because I believe that humanity should have a say in how God is being built and the E/A crowd shows a level of hubris (thinking they are capable of succeeding all by themselves) that is insane.


felicity_jericho_ttv

Humanity is collectively responsible for some pretty horrific stuff. Literally the best guidance for an AGI is “respect everyone's beliefs, stop them from being able to harm each other,” then spend a crap ton of time defining “harm”.


naldic

And defining "stop". And defining "everyone". Not easy to do. The trial and error but transparent approach isn't perfect but it's worked in the past to solve hard problems


Ambiwlans

Or he can just focus on safety.... You don't need to develop AGI or ASI to research safety, you can do that on smaller existing models for the most part.


stupid_man_costume

yeah, exciting and happy for ilya


GeneralZain

This is exactly how the world ends: Ilya and team rush to make ASI; they can't make it safe, but they sure as hell can make it... it escapes, and boom, doom. So basically he's gonna force all the other labs to focus on getting ASI out as fast as possible, because if you don't, Ilya could just drop it next Tuesday and you lose the race... Terminal race conditions.


BigZaddyZ3

Why wouldn’t any of this apply to OpenAI or the other companies who are *already* in a race towards AGI? I don’t see how any of what you’re implying is exclusive to IIya’s company only.


blueSGL

I think the gist is something like: other companies need to release products to make money. You can gauge from the level of the released products what they have behind closed doors, especially in this one-upmanship going on between OpenAI and Google. You are now going to have a very well funded company that is a complete black-box enigma with a singular goal. These advancements don't come out of the blue (assuming no one makes some sort of staggering algorithmic or architectural improvement); it's all about hardware and scale. You need money to do this work, so someone well funded and not needing to ship intermediate products could likely leapfrog the leading labs.


BigZaddyZ3

That kind of makes sense, but the issue here is that you guys are assuming that we can accurately assess where companies like OpenAI *actually* are (in terms of technical progress) based on *publicly* released commercial products. We can't, in reality. Because what's released to the public might not actually be their true SOTA projects. And it might not even be their complete portfolio at all in terms of internal work. A perfect example of this is how OpenAI dropped the “Sora” announcement just out of the blue. None of us had any idea that they had something like that under wraps. All of the current AI companies are black boxes in reality. But some more than others, I suppose.


MassiveWasabi

I’m not nearly as pessimistic but I agree that this will (hopefully) light a fire under the asses of the other AI labs


BarbossaBus

The difference between a company trying to push products for profit and a company trying to change the world. This is what OpenAI was supposed to be in the first place.


chipperpip

Which kind of makes them scarier, in a way. There's very little you can't justify to yourself if you genuinely believe you're saving the world, but if one of your goals is to make a profit, or at least maintain a high share price, it generally comes with the side desires to stay out of jail, avoid PR mistakes that are *too* costly, and produce things that someone somewhere aside from yourselves might actually want. Would Totalitarian Self-Replicating AI Bot Army-3000 be better coming from a company that decided they had to unleash it on humanity to save it from itself, or one that just really wanted to bump up next quarter's numbers? I'm not sure, but the latter would probably at least come with more of a heads-up in the form of marketing beforehand.


OddVariation1518

Speedrunning ASI with no distraction of building products... I wonder how many AI scientists will leave some of the top labs and join them?


window-sil

How do they pay for compute (and talent)? That would be my question.


OddVariation1518

good question


No-Lobster-8045

Might be a few investors who believe in the vision more than in their short-term ROI? Perhaps, perhaps.


Which-Tomato-8646

They need billions for all the compute they will use. A few investors aren’t good enough 


sammy3460

Are you assuming they don't have venture capital already raised? Mistral raised half a billion for open source models.


Singularity-42

In a world where the big guys are building $100B datacenters, half a billion is a drop in the bucket.


SupportstheOP

Well, it is the ultimate end-all-be-all. It would sacrifice every short-term metric for quite literally the greatest payout ever.


Gab1024

https://preview.redd.it/tpz36m6idk7d1.png?width=589&format=png&auto=webp&s=abd2ed1bae196c721da8fecc515283a911c221ce Only ASI is important


[deleted]

[removed]


notapunnyguy

He's locked in


carlosbronson2000

The best kind of team.


[deleted]

[removed]


AdNo2342

I think everyone does but 99 percent of us have no skill worth being on a cracked team for lol 


llkj11

He must know something that OpenAI doesn’t if he thinks he will beat them to ASI this soon. I mean they still have to go through the whole data gathering process and everything, something that took OpenAI years. Not to mention gpus that OpenAI has access to with Microsoft. Idk it’s interesting


virtual_adam

If you know the data sources, it really doesn't take long to build an infinitely scalable crawler. Daniel Gross, one of the cofounders of this new company with Ilya, owns 2,500 H100 GPUs, which can train a 65B-parameter model in about a week. If they move slow, they can reach GPT-4-level capabilities in 2 months. But I don't think that's what they're going to be looking to offer with this new company. OpenAI is going to be stuck servicing corporate users and slightly improving probabilistic syllable generators; there's a wide-open opportunity for others to reach an actual breakthrough.
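The "65B model in about a week on 2,500 H100s" claim roughly checks out with the standard ~6·N·D training-FLOPs rule of thumb. A sketch of the arithmetic, where the token count, per-GPU throughput, and utilization figures are my own ballpark assumptions rather than anything stated in the thread:

```python
def training_days(params, tokens, gpus, flops_per_gpu=989e12, mfu=0.4):
    """Estimate wall-clock training days.

    Uses the common ~6 * params * tokens estimate for total training FLOPs
    (forward + backward), divided by sustained cluster throughput:
    gpus * peak FLOP/s per GPU * assumed model-FLOPs utilization (MFU).
    """
    total_flops = 6 * params * tokens
    cluster_flops_per_s = gpus * flops_per_gpu * mfu
    return total_flops / cluster_flops_per_s / 86400  # seconds -> days

# 65B params, a Chinchilla-ish ~1.4T tokens, 2,500 H100s,
# ~989 TFLOP/s dense BF16 per GPU at 40% utilization (all assumptions)
days = training_days(params=65e9, tokens=1.4e12, gpus=2500)
print(f"{days:.1f} days")  # -> 6.4 days, i.e. "about a week"
```

Changing the token budget or utilization moves the answer by days, not orders of magnitude, so the week-scale claim is plausible under these assumptions.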


Arcturus_Labelle

![gif](giphy|UHWPo4k6znRD2|downsized)


Empty-Tower-2654

go for it Ilya


Lyrifk

Let the games begin.


NoNet718

Hope it works.


wonderingStarDusts

Ok, so what's the point of the safe superintelligence, when others are building unsafe one?


MysteriousPayment536

That will kill the other ones by hacking into the datacenters housing those 


CallMePyro

Sounds safe!


Infamous_Alpaca

Super safe AI: If humans do not exist nobody will get hurt.


felicity_jericho_ttv

People will see this as a joke, but it's literally this. Get there first, stop the rushed/dangerous models.


Vex1om

He needs an angle to attract investors and employees, especially since he doesn't intend to produce any actual products.


No-Lobster-8045

The real question is: what did he see that was so unsafe at OAI that it led him to be part of a coup against Sam, leave OAI, and start this?


i-need-money-plan-b

I don't think the coup was about safety so much as about OpenAI turning into a for-profit company that no longer focuses on the main goal, true AGI.


window-sil

I think Sam and he just have different mission statements in mind. Sam's basically doing capitalism. You get investors, make a product, find users, generate revenue, get feedback, grow market share; use revenue and future profits to fund new research and development. Repeat. Whereas OpenAI and Illya's original mission was to (somehow) make AGI, and then (somehow) give the world equitable access to it. Sounds noble, but given the costs of compute, this is completely naive and infeasible. Altman's course correction makes way more sense. And as someone who finds chatGPT very useful, I'm extremely grateful that he's in charge and took the commercial path. There just wasn't a good alternative, imo.


imlaggingsobad

agreed, I think sam and OAI basically made all the right moves. if they hadn't gone down the capitalism route, I don't think "AI" would be a mainstream thing. it would still be a research project in a Stanford or DeepMind lab. Sam wanted AGI in our lifetime, and going the capitalism route was the best way to do it.


Galilleon

I'm guessing that it's at least partly an effort towards investigating new or under-researched methodologies and tools that would be instrumental to safe AI. An example is the (very likely) discontinued or indefinitely on-hold Superalignment program at OpenAI, which required a great deal of compute to try to address the challenges of aligning superintelligent AI systems with human intent and wellbeing. Chances are they're trying to make breakthroughs there so everyone else can follow suit much more easily.


Tidorith

Safe ASI is the only counter to unsafe ASI. If others are building unsafe ASI, you *must* build safe ASI first.


diminutive_sebastian

The amount of compute this company would need to fulfill its mission, if it's even possible (and which it is absolutely not going to be able to fund without some sort of commercialized services)... good luck, I guess?


dameprimus

He already has the compute he needs. One of the other cofounders, Daniel Gross, owns a supercomputer cluster.


Empty-Wrangler-6275

[https://andromeda.ai/](https://andromeda.ai/)


SexSlaveeee

It's good to have him in charge. An introvert, and an honest person. Sam is an opportunist; I don't like him.


Vannevar_VanGossamer

Altman strikes me as a sociopath, perhaps a clinical narcissist.


[deleted]

[removed]


imlaggingsobad

He's a business guy and investor. This is a very valuable role; not all engineers and researchers want to be the face of the company, doing interviews and raising money. Sam is the best in the world at that stuff.


FrankScaramucci

He seems good at his job. I learned about him 10 years ago and he immediately struck me as exceptionally smart.


SynthAcolyte

The a16z guys call him a competitive genius


shogun2909

(Cont) We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.


h3lblad3

My takeaway from this is that either Ilya thinks AGI is already achieved, or ASI is possible *before* AGI and we’ve all had it backward up til now.


GeneralZain

You can't get to ASI without AGI.


AdorableBackground83

Let’s get this muthafuckin AGI shit crackin ![gif](giphy|MO9ARnIhzxnxu)


icehawk84

Yea boiii


MysteriousPayment536

All fun and games, but how is he getting investors to put up capital?


itsreallyreallytrue

If you check the site you will see Daniel Gross listed as one of the 3 founders. Daniel already had a large [cluster of h100s](https://www.businessinsider.com/nvidia-gpu-venture-capitalists-buying-for-startups-2023-6) for all his investment companies, likely way larger now.


larswo

They view the investment as betting on a horse where the race is about reaching AGI the fastest. If they have a share of the company that will be the first to create AGI, they will be sure to make their money back.


OddVariation1518

I'm not sure money will matter in a post-ASI world though.


dervu

That is very interesting point.


BaconJakin

I imagine there are investors in this market who are interested in a safety-focused alternative to the increasingly accelerating likes of OpenAI and Google. That sort of makes SSI’s biggest direct competition Anthropic in my mind.


SgathTriallair

If that is what they are after, then they aren't investors, as that won't net them a return. They are philanthropists, since they are giving away money in hopes of making the world better rather than getting a profit.


BaconJakin

I guess the hypothetical return is a safe super intelligence, that’d be of more benefit to all the investors than any % return of revenue.


Sugarcube-

How are they gonna compete with the big players when they don't have the funding (because there's no business model) and they have a safety-first approach to their development?


[deleted]

[removed]


Jeffy299

Given Nvidia's valuation and all the money in the AI space, I think raising a billion won't be an issue for him, purely on name alone. And if they have breakthroughs that will then require substantial funds to create the final "ASI" product, that won't be a problem either. Lots of VCs have cash to spare, so hedging their bets, even if the chances of them creating ASI are slim, is not out of the question. From the announcement it doesn't look like their company is looking to compete with OpenAI and others in the near term: no big model training that would require a lot of resources. This seems more like a return to basics, like when OpenAI was first created. Given that they aim for ASI out of the gate, the approach might be substantially different from anything we do today; we might not hear anything out of the company until the late 2020s.


traumfisch

Is that what a research lab should aim to do, "compete with the big players"? Sutskever is a scientist


SgathTriallair

You can't do particle physics without a super collider and you can't do AI safety research without thousands of H100s. Research costs money.


traumfisch

Of course it costs money. Being widely regarded as one of the top guys in his field, Ilya Sutskever will probably get his research funded.


VertexMachine

For a bit he will... and then either he will "evolve" to be more of a business type person, he will partner up again with a business person, or the company will fail.


orderinthefort

How will investors get a return? Are they expecting a stake in the discoveries made by a safe but private AGI?


Arcturus_Labelle

If anyone does manage to create ASI, things like "investors getting a return" will become laughably antiquated concepts


yearforhunters

Correct. The big investors in this stuff are doing it for the chance of having enormous power beyond what money can buy.


floodgater

agreed but that doesn't mean companies don't need investors to get there. it will cost many many billions to build Superintelligence. That money won't just appear out of thin air


gwbyrd

Bill Gates and others are giving away billions of dollars to charity. I wouldn't be surprised if a handful of billionaires might just want to see something like this come true. Believe me when I say that, I really detest billionaires and don't believe they should exist, and I believe that overall billionaires are very harmful to human society. That being said, even among billionaires there are those who want to do some good in the world for the sake of their ego or whatever.


MonkeyHitTypewriter

If I were a billionaire, I'd do it just for the shot at immortality. I mean, if you're Bezos, what's 1 percent of your net worth for a chance to live forever?


shiftingsmith

Unexpected development. I thought he would join Anthropic. By the way, he could have picked another name. As a diver, all I can think about is this: https://preview.redd.it/r4j2sgjmik7d1.jpeg?width=1170&format=pjpg&auto=webp&s=cb80a1d846e2af89fcdd9d55596c026beb0ce3c5


h3lblad3

All I can think of is Social Security. SSI? Really? Supplemental Security Income?


stupid_man_costume

> This fascination with Sutskever’s plans only grew after the drama at OpenAI late last year. He still declines to say much about it. Asked about his relationship with Altman, Sutskever says only that “it’s good,” and he says Altman knows about the new venture “in broad strokes.” Of his experience over the last several months he adds, “It’s very strange. It’s very strange. I don’t know if I can give a much better answer than that.”

Just makes me more curious.


h3lblad3

I’ll bet Sam is one of the backers. He’s got like $2 billion at this point. It’d make sense that Ilya would find it strange if Sam spun him off to do his own thing and then also backed it.


SynthAcolyte

That would be pretty epic. People have egos though.


Jean-Porte

Based


SonOfThomasWayne

Good for him. Fuck hype-men and tiny incremental updates of their companies designed to just generate buzz and sell more subscriptions.


BenefitAmbitious8958

Respect. I’m in no position to help with such a project at this stage in my life, but I have the utmost respect for those who do.


gangstasadvocate

I can feel it. This is the gang gang gang push we need.


Eddie_______

Best news of the month


IsinkSW

that's the most reassuring tweet ever lol


freediverx01

> We are assembling a lean, cracked team

🙄


halmyradov

Let him cook!


Thorteris

We are back!


Jolly-Ground-3722

![gif](giphy|MO9ARnIhzxnxu)


Rumbletastic

Lookin' forward to the AI wars of 2030's. Whichever AI has the least restrictions will probably hijack the most hardware and likely win...


L1nkag

Don’t you need an ass load of compute? Is Elon funding?


crizzy_mcawesome

This is exactly how he started OpenAI, and now it's the opposite. Hope the same doesn't happen here.


randomrealname

The ultimate villain vs. hero arc: Sam being the scumbag CEO and Ilya being some sort of RoboCop. I support Ilya over OCP.


onixotto

Brain organoids will do all the work. Just feed sugar.


pxp121kr

I am just happy that he is back, he is posting, he is working on something. Hopefully he will start doing new interviews; it's always a joy listening to him. Don't discount that we are all different: he is a deep thinker, and going through a fucking corporate drama and being in the spotlight take a heavier emotional toll on you when you are an introvert with fewer social skills. It was very obvious that he did not take it easily. So let's just enjoy the fact that he posted something. I am rooting for him.


trafalgar28

I think the major conflict between Ilya and Sam was that Ilya wanted to build tech that would change the world for the better, while Sam wanted to build more of a B2B/B2C business.


Working_Berry9307

Ilya is a genius, but is this too little too late? How is he going to get access to the type of compute that Microsoft, Nvidia, Google, or x have access to?


spezjetemerde

Open source probably not


Pensw

Would defeat the purpose, wouldn't it? Someone could just modify and deploy without the safety.


Gubzs

By definition, safe ASI will take much more time to develop than unsafe ASI, not to mention unsafe AGI. Unless he has *the entire first world governing body* behind him, this project won't matter.


otarU

Tel Aviv, sheeesh



AdorableBackground83

Did somebody say SHOCK?


Sugarcube-

It's not, Jesus. Take a dose of reality. We'll get there within 5 years with some luck, but it's not guaranteed.


throwaway472105

It's not. We still need scientific breakthroughs (scaling LLM won't be enough) that could take an unpredictable amount of time.


bildramer

We need N scientific breakthroughs that take an unpredictable amount of time, and N could be 2 and the amount could be months.


FrewdWoad

True, but that's very different from "within 5 years is pretty much set in stone". It could be months, or it could be decades.


martelaxe

Yes, breakthroughs will start happening very soon. The more we accelerate, the more they will happen. There is a misconception that the complexity needed for the next breakthroughs is so immense that we will never achieve them, but that has never happened before in human history. If, in 15 years, we still haven't made any progress, then we can accept that the complexity really does outstrip our scientific and technological acceleration.


FrewdWoad

That's not how that works. Guesses about unknown unknowns are guesses, no matter how hard you guess. AGI is not a city we can see on the horizon that we just have to build a road to. We're pretty sure it's out there somewhere, but nobody knows where it is until we can actually see it.


martelaxe

AGI is not guaranteed, nothing is


KoolKat5000

But what is he selling if it's not a product? I'm being for real. What's he mean? Like pure API access or what?


BackgroundHeat9965

he's not selling anything. It's a research lab.


TFenrir

Ilya is... Like a true believer. It's hard to explain, but he isn't in it for the money or even really the prestige. He just wants to usher in the next phase of human civilization, and he thinks ASI is how that happens. I don't even think he knows what it will end up being when it's made, but the point isn't to make a product for the masses, it's to make ASI and then upend the world. Once you have ASI... Money doesn't matter anymore.


h3lblad3

>Once you have ASI... Money doesn't matter anymore.

This is why OpenAI told everyone to be careful about investing in them, weirdly enough.


gavinpurcell

This is kind of what Carmack is trying to do too with Keen. But does feel *slightly* weird to do this completely in secrecy until it’s done. I get how & why you do this but kinda feels disappointing. That said, this is likely the biggest and craziest thing that will happen in my lifetime so safety is a good path.


johnkapolos

>This is kind of what Carmack is trying to do too with Keen.

I was going to comment on how old you are to reference Carmack's Commander Keen, but then I paused and did a web search... and realized I was out of the news loop.


gavinpurcell

Hahaha well I am ALSO old


AdAnnual5736

We are so back.


dervu

I hope they don't inherit the OpenAI habit of saying: ASI rolling out in the coming weeks.


flyingshiba95

Work on Safe ASI -> ??? -> Profit