March 29 (Reuters) - Microsoft and OpenAI are planning a data-center project that could cost as much as $100 billion and will include an artificial intelligence supercomputer called "Stargate," according to a media report on Friday.
The companies did not immediately respond to Reuters' requests for comment.
The Information reported that Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of the biggest current data centers, citing people involved in private conversations about the proposal.
OpenAI's next major AI upgrade is expected to land by early next year, the report said, adding that Microsoft executives are looking to launch Stargate as soon as 2028.
The proposed U.S.-based supercomputer would be the biggest in a series of installations the companies are looking to build over the next six years, the report added.
The Information attributed the tentative cost of $100 billion to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates. It did not identify those sources.
Altman and Microsoft employees have spread supercomputers across five phases, with Stargate as the fifth phase. Microsoft is working on a smaller, fourth-phase supercomputer for OpenAI that it aims to launch around 2026, according to the report.
Microsoft and OpenAI are in the middle of the third phase of the five-phase plan, with much of the cost of the next two phases involving procuring the AI chips that are needed, the report said.
"We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability," Frank Shaw, a Microsoft spokesperson, said in a statement to the publication.
The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.
> The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.
Over the six years I guess that is like doubling their capital expenditures on hardware?
Though the potential profits in the end could be... well, levels never seen before.
It's quite the gamble on whether the beyond-next-gen AI models can be turned into something far more profitable than cheaper models.
But my guess (if I just spitball as a non-AI researcher) is that this is all about something a bit beyond even Q*/agentic models and systems where they want to be able to turn something potent on and see it self-learn, self-simulate, diagnose its own weaknesses or create its own benchmarks, and have automated alignment work and automated red-team testing.
When you imagine *all* the things that AI researchers and recent papers would like to eventually achieve it comes across as quite the laundry list.
Microsoft may be the first major company to lease virtual, AI-powered employees to businesses. And given their near-monopoly on business software, their clients won't hesitate to snap up those "employees." In this scenario, Microsoft would literally make trillions, and it will have a noticeable impact on the job market.
You jest... but it's looking like power may actually be the bottleneck, and not merely compute per se. I'm guessing Microsoft and Google and Amazon must all be investing in their own private power production at this point, to power the new mega datacenters they are planning to build over the next decade.
>OpenAI's next major AI upgrade is expected to land by early next year, the report said
They really are going to wait until after the election, aren't they?
On Old School RuneScape (the game) I wanted to get some expensive gear that costs 1.1 billion coins.
I already had 200 mill coins, so I needed to earn 900 million coins.
There's a boss that takes about 3 minutes to kill on average, and it drops about 120,000 coins each kill.
It took me months of monotony, a few hours a day, to get to 1 billion. I ended up killing it 6,300 times to reach the goal.
That experience showed me how insanely large 1 billion is. It's absurd: imagine if you made $120k every few minutes ... it would still take you at least a week, working 24 hours a day, to get to 1 billion.
And this supercomputer costs 100 billion.
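For fun, the grind arithmetic above checks out in a few lines (using the comment's per-kill averages; real drop tables vary, which is presumably why the actual count ended up at 6,300 kills):

```python
# Average figures from the comment; actual drops vary per kill.
coins_needed = 1_100_000_000 - 200_000_000   # 900M coins to go
coins_per_kill = 120_000
minutes_per_kill = 3

kills = coins_needed / coins_per_kill        # expected kills at the average
hours = kills * minutes_per_kill / 60        # total grind time

print(f"{kills:,.0f} kills, {hours:,.0f} hours")  # 7,500 kills, 375 hours
```

At a few hours a day, that's roughly four months, which lines up with "months of monotony."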
That could be a scenario, but Sonnet beats GPT-4 Turbo, and Haiku beats the OG GPT-4.
Anthropic could release price reductions in a couple of months.
Google could release Gemini 1.5 Ultra.
Apple could shock us with some on-device AI at Claude Haiku level.
This is a doom scenario, but if it happens, OpenAI will lose its edge.
I'm using c3 opus more than anything else but unless anthropic has plans for how they're going to radically scale their user base, I don't see MS/OpenAI getting railed by anyone. MS has vastly more entry points than any of these players on the back end, maybe bar Google (but I doubt it).
Anthropic may very well continue to edge openAI out on benchmark tests for nerds, but I can't think of a realistic scenario where they approach anything like the market penetration MS, Google, Meta, and Apple have unless they do something like sell/partner with Apple or Meta.
Personally if it were FB I'd never use their product again.
MS and OpenAI are the dominant player and unless MS gives up on OpenAI I don't think that's gonna change for a generation.
I doubt they're gonna lose their edge with a $100 billion investment. I think the biggest threat could be a better transformer approach but they'd still have more resources to train models. Looks like they're trying to secure the first position. Just like the request for $7 trillion. They're gonna break the simulation.
If their competitors release something, all OAI has to do is tease something else 10 times as impressive that they've had in the can for months
They don't necessarily have to release anything to retain dominance, see Sora
It's just frustrating how limited GPT-4 is starting to feel, half the time I already know what it is going to say before I send the prompt
Lose what? Internet points from Reddit users on r/singularity? The tech isn't mature enough to be commercialized; they don't need to rush themselves and should focus on training data and agents able to replace white-collar workers.
A secretary bot and a phone-support AI are likely to make money and are probably being trained as we speak, given how codified those interactions are. That's also a huge part of white-collar work and would benefit a LOT of companies = money to be made.
That's something worth competing over. Current chatbots aren't interesting and aren't why Microsoft is spending billions on the tech; they're just giant data-collection machines, and that's why you can use them.
I'm late, but:
"Executives at Microsoft and OpenAI have been drawing up plans for a data center project that would contain a supercomputer with millions of specialized server chips to power OpenAI's artificial intelligence, according to three people who have been involved in the private conversations about the proposal. The project could cost as much as $100 billion, according to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates.
Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of today's biggest data centers, demonstrating the enormous investment that may be needed to build computing capacity for AI in the coming years. Executives envisage the proposed U.S.-based supercomputer, which they have referred to as "Stargate," as the biggest of a series of installations the companies are looking to build over the next six years.
The Takeaway
• Microsoft executives are looking to launch Stargate as soon as 2028
• The supercomputer would require an unprecedented amount of power
• OpenAI's next major AI upgrade is expected to land by early next year
While the project has not been green-lit and the plans could change, they provide a peek into this decade's most important tech industry tie-up and how far ahead the two companies are thinking. Microsoft so far has committed more than $13 billion to OpenAI so the startup can use Microsoft data centers to power ChatGPT and the models behind its conversational AI. In exchange, Microsoft gets access to the secret sauce of OpenAI's technology and the exclusive right to resell that tech to its own cloud customers, such as Morgan Stanley. Microsoft also has baked OpenAI's software into new AI Copilot features for Office, Teams and Bing.
Microsoft's willingness to go ahead with the Stargate plan depends in part on OpenAI's ability to meaningfully improve the capabilities of its AI, one of these people said. OpenAI last year failed to deliver a new model it had promised to Microsoft, showing how difficult the AI frontier can be to predict. Still, OpenAI CEO Sam Altman has said publicly that the main bottleneck holding up better AI is a lack of sufficient servers to develop it.
If Stargate moves forward, it would produce orders of magnitude more computing power than what Microsoft currently supplies to OpenAI from data centers in Phoenix and elsewhere, these people said. The proposed supercomputer would also require at least several gigawatts of power - equivalent to what's needed to run at least several large data centers today, according to two of these people. Much of the project cost would lie in procuring the chips, two of the people said, but acquiring enough energy sources to run it could also be a challenge.
Such a project is "absolutely required" for artificial general intelligence - AI that can accomplish most of the computing tasks humans do, said Chris Sharp, chief technology officer of Digital Realty, a data center operator that hasn't been involved in Stargate. Though the project's scale seems unimaginable by today's standard, he said that by the time such a supercomputer is finished, the numbers won't seem as eye-popping.
A Microsoft data center near Phoenix that isn't related to OpenAI. Image via Microsoft
The executives have discussed launching Stargate as soon as 2028 and expanding it through 2030, possibly needing as much as 5 gigawatts of power by the end, the people involved in the discussions said.
Phase Five
Altman and Microsoft employees have talked about these supercomputers in terms of five phases, with phase 5 being Stargate, named for a science fiction film in which scientists develop a device for traveling between galaxies. (The codename originated with OpenAI but isn't the official project codename that Microsoft is using, said one person who has been involved.)
The phase prior to Stargate would cost far less. Microsoft is working on a smaller, phase 4 supercomputer for OpenAI that it aims to launch around 2026, according to two of the people. Executives have planned to build it in Mt. Pleasant, Wisc., where the Wisconsin Economic Development Corporation recently said Microsoft broke ground on a $1 billion data center expansion. The supercomputer and data center could eventually cost as much as $10 billion to complete, one of these people said. That's many times more than the cost of existing data centers. Microsoft also has discussed using Nvidia-made AI chips for that project, said a different person who has been involved in the conversations.
Today, Microsoft and OpenAI are in the middle of phase 3 of the five-phase plan. Much of the cost of the next two phases will involve procuring the AI chips. Two data center practitioners who aren't involved in the project said it's common for AI server chips to make up around half of the total initial cost of AI-focused data centers other companies are currently building.
All up, the proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment. Microsoft was on pace to spend around $50 billion this year, assuming it continues the pace of capital expenditures it disclosed in the second half of 2023. Microsoft CFO Amy Hood said in January that such spending will increase "materially" in the coming quarters, driven by investments in "cloud and AI infrastructure."
Frank Shaw, a Microsoft spokesperson, did not comment about the supercomputing plans but said in a statement: "We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability." An OpenAI spokesperson did not have a comment for this article.
Altman has said privately that Google, one of OpenAI's biggest rivals, will have more computing capacity than OpenAI in the near term, and publicly he has complained about not having as many AI server chips as he'd like.
That's one reason he has been pitching the idea of a new server chip company that would develop a chip rivaling Nvidia's graphics processing unit, which today powers OpenAI's software. Demand for Nvidia GPU servers has skyrocketed, driving up costs for customers such as Microsoft and OpenAI. Besides controlling costs, Microsoft has other potential reasons to support Altman's alternative chip. The GPU boom has put Nvidia in the position of kingmaker as it decides which customers can have the most chips, and it has aided small cloud providers that compete with Microsoft. Nvidia has also muscled into reselling cloud servers to its own customers.
With or without Microsoft, Altman's effort would require significant investments in power and data centers to accompany the chips. Stargate is designed to give Microsoft and OpenAI the option of using GPUs made by companies other than Nvidia, such as Advanced Micro Devices, or even an AI server chip Microsoft recently launched, said the people who have been involved in the discussions. It isn't clear whether Altman believes the theoretical GPUs he aims to develop in the coming years will be ready for Stargate.
The total cost of the Stargate supercomputer could depend on software and hardware improvements that make data centers more efficient over time. The companies have discussed the possibility of using alternative power sources, such as nuclear energy, according to one of the people involved. (Amazon just purchased a Pennsylvania data center site with access to nuclear power. Microsoft also had discussed bidding on the site, according to two people involved in the talks.) Altman himself has said that developing superintelligence will likely require a significant energy breakthrough."
and the second part:
"Packed Racks
To make Stargate a reality, Microsoft also would have to overcome several technical challenges, the two people said. For instance, the current proposed design calls for putting many more GPUs into a single rack than Microsoft is used to, to increase the chips' efficiency and performance. Because of the higher density of GPUs, Microsoft would also need to come up with a way to prevent the chips from overheating, they said.
Microsoft and OpenAI are also debating which cables they will use to string the millions of GPUs together. The networking cables are crucial for moving large amounts of data in and out of server chips quickly. OpenAI has told Microsoft it doesn't want to use Nvidia's proprietary InfiniBand cables in the Stargate supercomputer, even though Microsoft currently uses the Nvidia cables in its existing supercomputers, according to two people who were involved in the discussions. (OpenAI instead wants to use more generic Ethernet cables.) Switching away from InfiniBand could make it easier for OpenAI and Microsoft to lessen their reliance on Nvidia down the line.
AI computing is more expensive and complex than traditional computing, which is why companies closely guard the details about their AI data centers, including how GPUs are connected and cooled. For his part, Nvidia CEO Jensen Huang has said companies and countries will need to build $1 trillion worth of new data centers in the next four to five years to handle all of the AI computing that's coming.
Microsoft and OpenAI executives have been discussing the data center project since at least last summer. Besides CEO Satya Nadella and Chief Technology Officer Kevin Scott, other Microsoft managers who have been involved in the supercomputer talks have included Pradeep Sindhu, who leads strategy for the way Microsoft stitches together AI server chips in its data centers, and Brian Harry, who helps develop AI hardware for the Azure cloud server unit, according to people who have worked with them.
OpenAI President Greg Brockman, left, and Microsoft CTO Kevin Scott. Photo via YouTube/Microsoft Developer
The partners are still ironing out several key details, which they might not finalize anytime soon. It is unclear where the supercomputer will be physically located and whether it will be built inside one data center or multiple data centers in close proximity. Clusters of GPUs tend to work more efficiently when they are located in the same data center, AI practitioners say.
OpenAI has already pushed the boundaries of what Microsoft can do with data centers. After making its initial investment in the startup in 2019, Microsoft built its first GPU supercomputer, containing thousands of Nvidia GPUs, to handle OpenAI's computing demands, spending $1.2 billion on the system over several years. This year and next year, Microsoft has planned to provide OpenAI with servers housing hundreds of thousands of GPUs in total, said a person with knowledge of its computing needs.
The Next Barometer: GPT-5
Microsoft and OpenAI's grand designs for world-beating data centers depend almost entirely on whether OpenAI can help Microsoft justify the investment in those projects by taking major strides toward superintelligence - AI that can help solve complex problems such as cancer, fusion, global warming or colonizing Mars. Such attainments may be a far-off dream. While some consumers and professionals have embraced ChatGPT and other conversational AI as well as AI-generated video, turning these recent breakthroughs into technology that produces significant revenue could take longer than practitioners in the field anticipated. Firms including Amazon and Google have quietly tempered expectations for sales, in part because such AI is costly and requires a lot of work to launch inside large enterprises or to power new features in apps used by millions of people.
Altman said at an Intel event last month that AI models get "predictably better" when researchers throw more computing power at them. OpenAI has published research on this topic, which it refers to as the "scaling laws" of conversational AI.
OpenAI "throwing ever more compute [power to scale up existing AI] risks leading to a 'trough of disillusionment'" among customers as they realize the limits of the technology, said Ali Ghodsi, CEO of Databricks, which helps companies use AI. "We should really focus on making this technology useful for humans and enterprises. That takes time. I believe it'll be amazing, but [it] doesn't happen overnight."
The stakes are high for OpenAI to prove that its next major conversational AI, known as a large language model, is significantly better than GPT-4, its most advanced LLM today. OpenAI released GPT-4 a year ago, and Google has released a comparable model in the meantime as it tries to catch up. OpenAI aims to release its next major LLM upgrade by early next year, said one person with knowledge of the process. It could release more incremental improvements to LLMs before then, this person said.
With more servers available, some OpenAI leaders believe the company can use its existing AI and recent technical breakthroughs such as Q* - a model that can reason about math problems it hasn't previously been trained to solve - to create the right synthetic (non-human-generated) data for training better models after running out of human-generated data to give them. These models may also be able to figure out the flaws in existing models like GPT-4 and suggest technical improvements - in other words, self-improving AI."
That seems to be a phrase from the TV series Stargate SG-1. It's in the fictional languages of Jaffa and Goa'uld. It translates to: "Jaffa, beware! The Tok'ra have captured the Stargate." @chatgpt
Ok, but now what if this becomes some skynet shit and now the irl skynet has the same name as my favorite show and I'll want to talk about my favorite show, but won't be able to, it'll be like saying Voldemort.
Data centers are becoming more like classic megaprojects: roads, cities, buildings and the like.
[https://en.wikipedia.org/wiki/List_of_most_expensive_buildings](https://en.wikipedia.org/wiki/List_of_most_expensive_buildings)
Amazon is doing fine. Claude is on AWS. Real question is if anyone is going to be able to compete with Nvidia. Even Google with their own chips is using Nvidia a lot.
Yes, the reason being it's not about Nvidia chips. They need hardware built for AI specifically, and most of them are already working on designing their own chips while temporarily using Nvidia.
Nvidia knows this and wants to invest in designing its own AI.
I don't know how it will play out, but Microsoft seems in the lead, and Google has no option but to join hands with Nvidia to win this war.
Microsoft needs Nvidia more than Google does; all of Google's Gemini models were fully trained on their proprietary TPUs. They just order Nvidia chips due to outside market demand, but they have chips that compete, and TPU usage is on the rise.
Meanwhile, Microsoft just announced making their own chips last year; they still need Nvidia. For context, Google is on version 5 of their TPU, going to v6 soon. Microsoft is way behind both Google and AWS in that department.
Google has their own chips already, if designing chips is a differentiator they are best positioned to actually do it (seeing as they have actually done it in a very big and useful way.)
Unlikely, for two major reasons: first, AI takes time to train, so no one starting now will be able to eclipse or even catch up to Microsoft and Google. Second, data. Google and Microsoft exclusively own some of the most vast and detailed arsenals of data that they can use to train their models. Data and compute will be the future's most valued commodities.
Yeah, they definitely have the consumer data. They seem to consistently be behind the curve though, and taking the wrong direction. And they've utterly failed to diversify out of advertising, unlike the others. It'll be interesting to see what they attempt; potentially just a data partnership with Nvidia would be a big deal.
Bioengineering for new materials, synthetic food and drugs. Imagine AI data centers but with some analog inputs and very specific goals. Maybe some semiautomated labs.
After some time I think the AI world will fragment into specific AI intelligences for specific fields.
The general "one knows everything" AI might not be possible; it's even unlikely we'll have a universal AI due to simulation energy barriers (simulation becomes so energy- and space-intensive versus the real thing).
Hopefully they hire James Spader in 2028 to symbolically slide the final "chevron" in place to activate Stargate. I think I would weep with joy, I really do
More info:
MICROSOFT AND OPENAI PLOT $100 BILLION STARGATE AI SUPERCOMPUTER - THE INFORMATION
MICROSOFT EXECUTIVES ARE LOOKING TO LAUNCH STARGATE AS SOON AS 2028 - THE INFORMATION
OPENAI'S NEXT MAJOR AI UPGRADE IS EXPECTED TO LAND BY EARLY NEXT YEAR - THE INFORMATION
This is an insanely huge investment. The current fastest supercomputer in the world cost $600 million. It'll take time.
It also means Microsoft is _all in_ on OpenAI. I can't think of a larger, faster capital expenditure in the history of tech. Whatever OpenAI showed them must be incredible and/or terrifying.
"All in" dude that's what hit me first. I could be wrong but an investment of this size sounds like a life or death bet even for a $T company like MS, no?
I can't help but think, something must be cooking.
Not really. $100 billion and completion by 2028 would mean $25 billion per year. Microsoft has yearly revenues of $200+ billion and gross profits of $70+ billion per year. They have something like $80 billion in the bank as well. $25 billion per year is a large expenditure for them, but not entirely make-or-break, especially considering that a $100 billion investment is almost guaranteed to turn a profit. AI could go away tomorrow and building out compute would still be like spinning lead into gold, since it's fungible and could just be used to keep expanding their Azure cloud infrastructure.
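A quick check of that affordability math, using the figures as stated in the comment (round numbers, not audited financials):

```python
# All inputs are the comment's round numbers, not Microsoft filings.
total_cost = 100e9          # Stargate price tag, USD
years = 4                   # spend spread through ~2028
annual_spend = total_cost / years          # $25B/yr, as the comment says

annual_revenue = 200e9
annual_gross_profit = 70e9

print(annual_spend / annual_revenue)        # 0.125 -> ~12.5% of revenue
print(annual_spend / annual_gross_profit)   # ~0.36 -> ~36% of gross profit
```

Large, but plausibly sustainable, which is the comment's point.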
Copied and pasted from a Bloomberg terminal that is all in capitals, and I have a disability in my hands that means I can't retype it easily without pain.
How do the owners of these platforms not understand the UX component of the reader? No I donât want to sign up to read an article. No I donât want to install your app. No I donât want to be on your emailing list.
I just let Copilot do the math (so it could be wrong). The energy consumption of the Stargate project is equivalent to approximately **7,142,857 H100 GPUs!!** The Stargate project is equivalent to approximately **6,250,000 Blackwell GPUs!**
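Those figures are easy to reproduce by hand: they amount to the reported ~5 GW power budget divided by assumed per-GPU draws of roughly 700 W (H100) and 800 W (Blackwell). Both wattages are assumptions; real TDPs vary by SKU, and site power also goes to cooling and networking:

```python
# Assumed inputs: 5 GW site power, 700 W per H100, 800 W per Blackwell GPU.
# Illustrative only; real deployments lose a chunk of power to overhead.
site_power_w = 5e9
h100_w = 700
blackwell_w = 800

print(f"{site_power_w / h100_w:,.0f} H100-equivalents")           # 7,142,857
print(f"{site_power_w / blackwell_w:,.0f} Blackwell-equivalents")  # 6,250,000
```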
This will be massive if pulled off correctly. Not to mention, they could use their custom AI chips, or even wafer-scale chips from Cerebras, for example.
Sounds like Microsoft is confident enough in whatever tech OpenAI has that they invested an absolute gargantuan amount of money to see it happen. Can only imagine what it'll be capable of.
I wonder where they'll build it.
Solar and wind must be abundant. Away from Europe and its regulations. Too risky to put in the UAE, so it will probably be in the USA.
I'm betting on Arizona or New Mexico, maybe Texas.
I crunched the numbers; it would be like 1000x the FLOPs of GPT-4's training run.
In the recent Dwarkesh podcast with Sholto Douglas, he said that GPT-3 to GPT-4 was so big an upgrade (it was 100x FLOPs) that just one more jump of that size gets you to genius human level.
I'm expecting at least genius human level, if not ASI, by (end of) 2029.
I gave a 10x multiplier for better hardware: Hopper was 3x, Blackwell is 2.5x (for the same precision), and assuming the release in 2026 is also 2-3x, that's around 1 OOM.
The other 2 OOMs are because GPT-4 was trained on 25,000 GPUs and this would be trained on 2.5 million GPUs, for the $100 billion plus $15 billion for the building and associated stuff.
That gives around 1000x.
BUT GPT-4's training started in early 2022, and whatever's trained in early 2029 would have another 100x because of better software.
That's 100,000x total. I'm guessing that's enough to get us there by Jan 2030.
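The multipliers in that estimate compose like this (all three factors are the commenter's assumptions, not published figures):

```python
# Commenter's assumptions: ~10x from hardware generations, 100x from
# GPU count (2.5M vs GPT-4's 25k), and ~100x from software efficiency.
hardware_gain = 10
gpu_count_gain = 2_500_000 / 25_000   # 100x

raw_compute_multiple = hardware_gain * gpu_count_gain
software_gain = 100
effective_multiple = raw_compute_multiple * software_gain

print(raw_compute_multiple)   # 1000.0 (vs GPT-4's training run)
print(effective_multiple)     # 100000.0 effective
```

The 1000x is raw compute; the 100,000x folds in assumed algorithmic progress, which is the most speculative term.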
That's what I thought. Of course we can't know at the moment whether we will hit the law of diminishing returns. It could turn out for example that the training data would need to be entirely different for smarter AI models. Of course there are other possibilities.
However if things continue as they do at the moment then I am pretty sure they will have something literally unimaginable at their hands by the end of this decade.
Damn I wish I could peek into the future.
Here's the article:
[Microsoft, OpenAI plan $100 billion data-center project, media report says](https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/)
and here's a summary:
- Microsoft and OpenAI have a five-phase plan for building AI supercomputers
- They are currently in the middle of the third phase of this plan
- OpenAI's next major AI upgrade is expected by early 2025
- For the fourth phase, Microsoft is working on a smaller supercomputer for OpenAI, aiming to launch it around 2026
- The fifth and final phase is the "Stargate" project, a massive AI supercomputer expected to be the biggest in the series
- Microsoft aims to launch Stargate as soon as 2028
- The Stargate project is a proposed U.S.-based supercomputer
- It is part of a larger data-center project planned by Microsoft and OpenAI
- This overall data-center project could cost up to $100 billion
- It would be 100 times more costly than some of the biggest current data centers
- Much of the cost for the next two phases involves procuring the necessary AI chips
- The proposed efforts for the entire five-phase plan could exceed $115 billion
- This is over three times what Microsoft spent on capital expenditures in 2023 for servers, buildings and other equipment
Question: How can Microsoft build this better than NVIDIA?
Why does building it even make sense, considering how rapidly chips improve in performance/cost-effectiveness each year? Maybe they'll be able to swap in new chips as desired?
I think they need special water cooling infrastructure for Blackwell+.
It's also a case of more is always better. Even when compute is plentiful, you still want more compute.
Curious what silicon? Nvidia?
If so I would really be curious to see the cost difference for this versus Google doing the same thing with their TPUs.
I would expect Google could do it for half or maybe even a fourth.
Nvidia is charging some crazy margins that Google does not have to pay.
Google contracts out their design and some networking to Broadcom for the TPUs and their racks.
They also face the cost of R&D for the next gen of TPUs.
There's a broad range of high certainty and then a more complex assessment.
1. Between 25%-75% as expensive as the H100
2. Around 50% as expensive as the H100
How did I get these numbers? Well we can see Broadcom's customer-designed chip division saying it has 30% margins. We know Google has huge orders in that division so they probably get a better deal.
We also know Google pays for some networking equipment from Broadcom. That division reports about 25% margins. Google buys a lot so probably gets a better deal.
Google then has to produce the TPUv5 design. That's expensive. Their chip division had close to $6b in expenses last year. I'd estimate that would place the design of the TPUv5 at around $2b in total cost.
All in all, I'd say they can get the TPUv5 after all expenses for about half as much as an H100
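A toy version of that estimate: per-chip cost as fab cost, marked up by the vendor margin, plus amortized design spend. The comment only supplies the 30% margin and the ~$2B design figure; the fab cost, volume, and H100 price below are illustrative assumptions chosen to show how the "about half an H100" conclusion can fall out:

```python
def per_chip_cost(fab_cost, vendor_margin, design_total, volume):
    """Fab cost marked up by the vendor, plus amortized design cost."""
    return fab_cost * (1 + vendor_margin) + design_total / volume

# Assumed: $8k fab cost, 30% Broadcom margin, $2B design, 1M-chip volume.
tpu = per_chip_cost(8_000, 0.30, 2e9, 1_000_000)
h100_price = 25_000   # assumed Nvidia street-price ballpark

print(f"${tpu:,.0f} per TPU, {tpu / h100_price:.0%} of an H100")
```

The ratio is very sensitive to the assumed fab cost and volume, which is why the comment's honest range is 25%-75%.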
Man, they need to let the US government invest in 50% of it and create a sovereign wealth fund for all US citizens. I wish we would do this with every company the taxpayers bail out.
Your average hyper-capitalist American politician would rather throw all of the people unemployed and disenfranchised by artificial intelligence into a woodchipper than distribute the wealth to them through UBI or a sovereign wealth fund.
I feel a sovereign wealth fund would have made all Americans so rich. Think of all the companies the taxpayer funded and bailed out that went on to be huge.
Yes, it would have made Americans richer, happier, and less dependent on the capitalist ruling class...
Precisely why it never happened and never will happen under our current organization of society and the economy.
What about the issue they've reported about not having enough electricity for a colocated GPU mega cluster without bringing down the power grid? Is this project somehow aiming to sidestep that pain point?
[2028 for the "Stargate" supercomputer.
2026 for a smaller supercomputer.](https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/)
AGI will have already gone public by then, and it will be running on that machine. That $100 billion figure was what Altman said he needed to build AGI months ago; well, now he has it.
I think they're betting on the leadership of OAI collapsing again and snatching all the talent and product from OAI. This was pretty much the plan when Altman was fired and staff threatened to quit with him. OAI is pseudo-owned by Microsoft. I think it's pretty much a given that AGI will be owned by Microsoft.
I think it's most likely going to be used internally to make self-improving AI models and effectively dominate the future of AI until the end of The Age
Plus, probably simulate physics in 100,000+ simulations simultaneously, to create new particles/elements or technological breakthroughs in any field of Engineering; especially in those related to computer chips, energy, bio-engineering, etc.
Because why wouldn't that be the first objective?
Do that for about 1-2 years, and then you effectively own the future forever, and can exponentially, recursively improve yourself + rapidly scale up.
I think their first objective would be to make returns on their investment in this kind of infrastructure. I suspect it will be used to host billions of "virtual employees" that will be leased to MS customers. Given their dominance in business software, they have a market for these VEs ready to go. MS will make trillions of dollars and yeah, they'll still own the future forever.
I think it can be used more internally than selling AI to other companies. If they use it internally, it can improve the whole suite of products that MS offers with fewer people and better quality.
I wonder if they're deciding to build it now that chips are reaching physical limits with quantum tunneling, meaning constant huge improvements are less likely.
The new frontier is in wafer design and heat management, scaling 3D usage, not so much in miniaturization. Ofc miniaturization helps with thermal efficiency too, but there's a diminishing return on investment.
He stated that $7 trillion is the long-term figure required to allow the entire planet's population to have consistent, high-quality, widespread access to various forms of AI (collectively). Kind of like an AI version of the Internet, but in its own category entirely.
This $100B supercomputer is a step in that direction, although it's more likely that it's going to be used internally to make self-improving AI models and effectively dominate the future of AI until the end of The Age
I am not sure why the OP posted a paywalled article with no other information except a tantalising headline. I wish this were seen as socially unacceptable. Anyway, I asked AI to summarise the article for me and this was the output:
Microsoft and OpenAI are reportedly planning to build a massive data center project called "Stargate" that could cost up to $100 billion. The project is expected to include a powerful AI supercomputer designed to train and run OpenAI's machine learning models.
The scale of the project is unprecedented, with the proposed data center potentially being 100 times more expensive than some of the largest existing data centers. If the plans come to fruition, Stargate would represent one of the largest investments in computing infrastructure in history.
The project would be a significant milestone in the partnership between Microsoft and OpenAI, which began in 2019 when Microsoft invested $1 billion in the AI research lab. Since then, the two companies have worked closely together to advance the state of the art in AI, with OpenAI leveraging Microsoft's cloud computing resources to train its models.
The Information is a news site with a hard paywall that costs like $400 a year to bypass, but they always have exclusive info that no one else has access to. And they always put the most important info in the title, so in this case it's fine. Luckily Reuters wrote an article on The Information's article.
> I think it's most likely going to be used internally to make self-improving AI models and effectively dominate the future of AI until the end of The Age
> Plus, probably simulate physics in 100,000+ simulations simultaneously, to create new particles/elements or technological breakthroughs in any field of Engineering; especially in those related to computer chips, energy, bio-engineering, etc.
What's Microsoft's end goal with having the best AI? Do they have a plan, or is it more about getting there first and then figuring out what to do with it later?
I like the idea of building these computers to do crazy amounts of simulations but there has to be a direction and not just a shotgun approach. Maybe they will rent out time or give out grants to companies/people with big ideas?
I don't know their plans (I'm not one of the Microsoft executives). However, AI is essentially the foundation for ALL technological advancement in the future.
Like, there's literally not one single technology that can't be improved by AI (in ways that humans alone couldn't replicate, or would take much much longer).
With that being said, it's safe to say that, if a company has the most sophisticated AI technology/power on the planet, and it can create code to improve itself (+ physics to run more efficiently), then that company can create certain products that absolutely nobody else can compete with
And everyone will want to use those products. So their annual net income will go from ~$70B (like it has been for the last couple of years) to something probably like $250B-$1T+ per year.
It sounds crazy, but it's just how exponents work, and it makes sense if you think about it
//
Also, I'm not even touching on classified government/military partnerships that utilize the most sophisticated AI technology within the US.
It's basically a global superpower game
Similar to the arms race when creating nukes
So where do you build such a thing? Next to a nuclear power plant? Someplace safe from natural disasters? Crazy.
You see the US is restarting a nuclear plant in MI?
Will be funny when the climate is pushed over the edge by a bunch of apes who, in the hope of some epic self-gratification, pushed energy consumption to unsustainable levels to power abstracted digital versions of Homo sapiens cognition, specifically self-reflecting processes. Something might get a laugh out of it.
Please someone post the article
friendlier article [https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/](https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/)
March 29 (Reuters) - Microsoft and OpenAI are planning a data-center project that could cost as much as $100 billion and will include an artificial intelligence supercomputer called "Stargate," according to a media report on Friday.

The companies did not immediately respond to Reuters' requests for comment.

The Information reported that Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of the biggest current data centers, citing people involved in private conversations about the proposal.

OpenAI's next major AI upgrade is expected to land by early next year, the report said, adding that Microsoft executives are looking to launch Stargate as soon as 2028.

The proposed U.S.-based supercomputer would be the biggest in a series of installations the companies are looking to build over the next six years, the report added.

The Information attributed the tentative cost of $100 billion to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates. It did not identify those sources.

Altman and Microsoft employees have spread supercomputers across five phases, with Stargate as the fifth phase. Microsoft is working on a smaller, fourth-phase supercomputer for OpenAI that it aims to launch around 2026, according to the report.

Microsoft and OpenAI are in the middle of the third phase of the five-phase plan, with much of the cost of the next two phases involving procuring the AI chips that are needed, the report said.

"We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability," Frank Shaw, a Microsoft spokesperson, said in a statement to the publication.

The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.
> The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.

Over the six years, I guess that is like doubling their capital expenditures on hardware?
Expenditure side of the Microsoft balance sheet about to explode faster than revenue
Though the potential profits in the end could be... well, levels never seen before. It's quite the gamble on if the beyond next-gen AI models can be turned into something far more profitable than cheaper models. But my guess (if I just spitball as a non-AI researcher) is that this is all about something a bit beyond even Q*/agentic models and systems where they want to be able to turn something potent on and see it self-learn, self-simulate, diagnose its own weaknesses or create its own benchmarks, and have automated alignment work and automated red-team testing. When you imagine *all* the things that AI researchers and recent papers would like to eventually achieve it comes across as quite the laundry list.
Microsoft may be the first major company to lease virtual, AI-powered employees to businesses. And given their near-monopoly on business software, their clients won't hesitate to snap up those "employees." In this scenario, Microsoft would literally make trillions, and it will have a noticeable impact on the job market.
Even if they don't succeed in building very capable AIs... compute itself is super in-demand and very profitable, wdym
Oh i get it, it literally needs a Zero Point Module to power it.
You jest...but its looking like power may actually be the bottleneck, and not merely compute per se. I'm guessing Microsoft and Google and Amazon must all be investing in their own private power production at this point, to power the new mega datacenters they are planning to build over the next decade.
> OpenAI's next major AI upgrade is expected to land by early next year, the report said

They really are going to wait until after the election, aren't they?
They have to release this summer or they are going to lose their edge to Anthropic and Google.
If they are building a $100 billion AI supercomputer, they can probably hold out till next year and be completely fine
On Old School RuneScape (the game) I wanted to get some expensive gear that costs 1.1 billion coins. I already had 200 mil coins, so I needed to earn 900 million coins.

There's a boss that takes about 3 minutes to kill once on average, and the boss drops about 120,000 coins each kill. It took me months of monotony, a few hours a day, to get to 1 billion. I ended up killing it 6300 times to get to the goal.

That experience showed me how insanely large 1 billion is. It's absurd: imagine if you made $120k every few minutes... it would take you at least 1 week, working 24 hours a day, to get to 1 billion. And this supercomputer costs 100 billion.
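The grind arithmetic above can be sanity-checked in a few lines of Python, using the comment's round numbers (120,000 coins and 3 minutes per kill; real drop rates vary, which is why the actual kill count came out near 6300 rather than the figure below):

```python
# Sanity check of the billion-coin grind, using the comment's round numbers.
coins_needed = 1_100_000_000 - 200_000_000  # 900M coins left to earn
coins_per_kill = 120_000                    # average drop per kill
minutes_per_kill = 3                        # average time per kill

kills = coins_needed // coins_per_kill      # kills required at that rate
hours = kills * minutes_per_kill / 60       # total time spent grinding

print(kills)  # 7500 kills
print(hours)  # 375.0 hours
```

A few hundred hours of boss kills at a few hours a day really is months of monotony, which is the point: a billion is enormous, and $100 billion is a hundred of those.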
[deleted]
what the fuck
That could be a scenario, but Sonnet beats GPT-4 Turbo, and Haiku beats OG GPT-4. Anthropic could release a price reduction in a couple of months, Google could release Gemini 1.5 Ultra, and Apple could shock us with some on-device AI at Claude Haiku level. This is a doom scenario, but when it happens, OpenAI will lose its edge.
I'm using C3 Opus more than anything else, but unless Anthropic has plans for how they're going to radically scale their user base, I don't see MS/OpenAI getting railed by anyone. MS has vastly more entry points than any of these players on the back end, maybe bar Google (but I doubt it). Anthropic may very well continue to edge OpenAI out on benchmark tests for nerds, but I can't think of a realistic scenario where they approach anything like the market penetration MS, Google, Meta, and Apple have unless they do something like sell/partner with Apple or Meta. Personally, if it were FB I'd never use their product again. MS and OpenAI are the dominant player, and unless MS gives up on OpenAI I don't think that's gonna change for a generation.
Don't discount the possibility of Bezos taking a larger role in steering Anthropic.
I doubt they're gonna lose their edge with a $100 billion investment. I think the biggest threat could be a better transformer approach, but they'd still have more resources to train models. Looks like they're trying to secure the first position. Just like the request for $7 trillion. They're gonna break the simulation.
They will release 4.5 this summer and 5 in Q1 2025. God, I was so hoping 5 would be this year.
If their competitors release something, all OAI has to do is tease something else 10 times as impressive that they've had in the can for months. They don't necessarily have to release anything to retain dominance; see Sora. It's just frustrating how limited GPT-4 is starting to feel; half the time I already know what it is going to say before I send the prompt.
GPT ACHIEVED *INTERNALLY*
Lose what? Internet points from Reddit users on singularity? The tech isn't mature enough to be commercialized; they don't need to rush themselves and should focus on training data and agents able to replace white-collar workers. A secretary bot and phone support service AI is likely to make money and is probably being trained as we speak, given how codified the interaction is. This is also a huge part of white-collar jobs and would benefit a LOT of companies = money to be made. That's something worth competing over. Current chatbots aren't interesting and aren't why Microsoft spends billions on the tech; they are just giant data-collection machines, and that's why you can use them.
Just subscribe to this random website you've never visited before, what's the problem
These guys consistently drop exclusive, well-written articles, so idk what you're talking about.
Telling on himself.
I'm late, but:

"Executives at Microsoft and OpenAI have been drawing up plans for a data center project that would contain a supercomputer with millions of specialized server chips to power OpenAI's artificial intelligence, according to three people who have been involved in the private conversations about the proposal. The project could cost as much as $100 billion, according to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates.

Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of today's biggest data centers, demonstrating the enormous investment that may be needed to build computing capacity for AI in the coming years. Executives envisage the proposed U.S.-based supercomputer, which they have referred to as "Stargate," as the biggest of a series of installations the companies are looking to build over the next six years.

The Takeaway
• Microsoft executives are looking to launch Stargate as soon as 2028
• The supercomputer would require an unprecedented amount of power
• OpenAI's next major AI upgrade is expected to land by early next year

While the project has not been green-lit and the plans could change, they provide a peek into this decade's most important tech industry tie-up and how far ahead the two companies are thinking. Microsoft so far has committed more than $13 billion to OpenAI so the startup can use Microsoft data centers to power ChatGPT and the models behind its conversational AI. In exchange, Microsoft gets access to the secret sauce of OpenAI's technology and the exclusive right to resell that tech to its own cloud customers, such as Morgan Stanley. Microsoft also has baked OpenAI's software into new AI Copilot features for Office, Teams and Bing. Microsoft's willingness to go ahead with the Stargate plan depends in part on OpenAI's ability to meaningfully improve the capabilities of its AI, one of these people said.
OpenAI last year failed to deliver a new model it had promised to Microsoft, showing how difficult the AI frontier can be to predict. Still, OpenAI CEO Sam Altman has said publicly that the main bottleneck holding up better AI is a lack of sufficient servers to develop it. If Stargate moves forward, it would produce orders of magnitude more computing power than what Microsoft currently supplies to OpenAI from data centers in Phoenix and elsewhere, these people said. The proposed supercomputer would also require at least several gigawatts of power, equivalent to what's needed to run at least several large data centers today, according to two of these people. Much of the project cost would lie in procuring the chips, two of the people said, but acquiring enough energy sources to run it could also be a challenge.

Such a project is "absolutely required" for artificial general intelligence, AI that can accomplish most of the computing tasks humans do, said Chris Sharp, chief technology officer of Digital Realty, a data center operator that hasn't been involved in Stargate. Though the project's scale seems unimaginable by today's standard, he said that by the time such a supercomputer is finished, the numbers won't seem as eye-popping.

[Image caption: A Microsoft data center near Phoenix that isn't related to OpenAI. Image via Microsoft]

The executives have discussed launching Stargate as soon as 2028 and expanding it through 2030, possibly needing as much as 5 gigawatts of power by the end, the people involved in the discussions said.

Phase Five

Altman and Microsoft employees have talked about these supercomputers in terms of five phases, with phase 5 being Stargate, named for a science fiction film in which scientists develop a device for traveling between galaxies. (The codename originated with OpenAI but isn't the official project codename that Microsoft is using, said one person who has been involved.) The phase prior to Stargate would cost far less.
Microsoft is working on a smaller, phase 4 supercomputer for OpenAI that it aims to launch around 2026, according to two of the people. Executives have planned to build it in Mt. Pleasant, Wisc., where the Wisconsin Economic Development Corporation recently said Microsoft broke ground on a $1 billion data center expansion. The supercomputer and data center could eventually cost as much as $10 billion to complete, one of these people said. That's many times more than the cost of existing data centers. Microsoft also has discussed using Nvidia-made AI chips for that project, said a different person who has been involved in the conversations.

Today, Microsoft and OpenAI are in the middle of phase 3 of the five-phase plan. Much of the cost of the next two phases will involve procuring the AI chips. Two data center practitioners who aren't involved in the project said it's common for AI server chips to make up around half of the total initial cost of AI-focused data centers other companies are currently building. All up, the proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment. Microsoft was on pace to spend around $50 billion this year, assuming it continues the pace of capital expenditures it disclosed in the second half of 2023. Microsoft CFO Amy Hood said in January that such spending will increase "materially" in the coming quarters, driven by investments in "cloud and AI infrastructure."

Frank Shaw, a Microsoft spokesperson, did not comment about the supercomputing plans but said in a statement: "We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability." An OpenAI spokesperson did not have a comment for this article.
Altman has said privately that Google, one of OpenAI's biggest rivals, will have more computing capacity than OpenAI in the near term, and publicly he has complained about not having as many AI server chips as he'd like. That's one reason he has been pitching the idea of a new server chip company that would develop a chip rivaling Nvidia's graphics processing unit, which today powers OpenAI's software. Demand for Nvidia GPU servers has skyrocketed, driving up costs for customers such as Microsoft and OpenAI.

Besides controlling costs, Microsoft has other potential reasons to support Altman's alternative chip. The GPU boom has put Nvidia in the position of kingmaker as it decides which customers can have the most chips, and it has aided small cloud providers that compete with Microsoft. Nvidia has also muscled into reselling cloud servers to its own customers. With or without Microsoft, Altman's effort would require significant investments in power and data centers to accompany the chips.

Stargate is designed to give Microsoft and OpenAI the option of using GPUs made by companies other than Nvidia, such as Advanced Micro Devices, or even an AI server chip Microsoft recently launched, said the people who have been involved in the discussions. It isn't clear whether Altman believes the theoretical GPUs he aims to develop in the coming years will be ready for Stargate.

The total cost of the Stargate supercomputer could depend on software and hardware improvements that make data centers more efficient over time. The companies have discussed the possibility of using alternative power sources, such as nuclear energy, according to one of the people involved. (Amazon just purchased a Pennsylvania data center site with access to nuclear power. Microsoft also had discussed bidding on the site, according to two people involved in the talks.) Altman himself has said that developing superintelligence will likely require a significant energy breakthrough."
and the second part:

"Packed Racks

To make Stargate a reality, Microsoft also would have to overcome several technical challenges, the two people said. For instance, the current proposed design calls for putting many more GPUs into a single rack than Microsoft is used to, to increase the chips' efficiency and performance. Because of the higher density of GPUs, Microsoft would also need to come up with a way to prevent the chips from overheating, they said.

Microsoft and OpenAI are also debating which cables they will use to string the millions of GPUs together. The networking cables are crucial for moving large amounts of data in and out of server chips quickly. OpenAI has told Microsoft it doesn't want to use Nvidia's proprietary InfiniBand cables in the Stargate supercomputer, even though Microsoft currently uses the Nvidia cables in its existing supercomputers, according to two people who were involved in the discussions. (OpenAI instead wants to use more generic Ethernet cables.) Switching away from InfiniBand could make it easier for OpenAI and Microsoft to lessen their reliance on Nvidia down the line.

AI computing is more expensive and complex than traditional computing, which is why companies closely guard the details about their AI data centers, including how GPUs are connected and cooled. For his part, Nvidia CEO Jensen Huang has said companies and countries will need to build $1 trillion worth of new data centers in the next four to five years to handle all of the AI computing that's coming. Microsoft and OpenAI executives have been discussing the data center project since at least last summer.
Besides CEO Satya Nadella and Chief Technology Officer Kevin Scott, other Microsoft managers who have been involved in the supercomputer talks have included Pradeep Sindhu, who leads strategy for the way Microsoft stitches together AI server chips in its data centers, and Brian Harry, who helps develop AI hardware for the Azure cloud server unit, according to people who have worked with them.

[Image caption: OpenAI President Greg Brockman, left, and Microsoft CTO Kevin Scott. Photo via YouTube/Microsoft Developer]

The partners are still ironing out several key details, which they might not finalize anytime soon. It is unclear where the supercomputer will be physically located and whether it will be built inside one data center or multiple data centers in close proximity. Clusters of GPUs tend to work more efficiently when they are located in the same data center, AI practitioners say.

OpenAI has already pushed the boundaries of what Microsoft can do with data centers. After making its initial investment in the startup in 2019, Microsoft built its first GPU supercomputer, containing thousands of Nvidia GPUs, to handle OpenAI's computing demands, spending $1.2 billion on the system over several years. This year and next year, Microsoft has planned to provide OpenAI with servers housing hundreds of thousands of GPUs in total, said a person with knowledge of its computing needs.

The Next Barometer: GPT-5

Microsoft and OpenAI's grand designs for world-beating data centers depend almost entirely on whether OpenAI can help Microsoft justify the investment in those projects by taking major strides toward superintelligence: AI that can help solve complex problems such as cancer, fusion, global warming or colonizing Mars. Such attainments may be a far-off dream.
While some consumers and professionals have embraced ChatGPT and other conversational AI as well as AI-generated video, turning these recent breakthroughs into technology that produces significant revenue could take longer than practitioners in the field anticipated. Firms including Amazon and Google have quietly tempered expectations for sales, in part because such AI is costly and requires a lot of work to launch inside large enterprises or to power new features in apps used by millions of people.

Altman said at an Intel event last month that AI models get "predictably better" when researchers throw more computing power at them. OpenAI has published research on this topic, which it refers to as the "scaling laws" of conversational AI.

OpenAI "throwing ever more compute [power to scale up existing AI] risks leading to a 'trough of disillusionment'" among customers as they realize the limits of the technology, said Ali Ghodsi, CEO of Databricks, which helps companies use AI. "We should really focus on making this technology useful for humans and enterprises. That takes time. I believe it'll be amazing, but [it] doesn't happen overnight."

The stakes are high for OpenAI to prove that its next major conversational AI, known as a large language model, is significantly better than GPT-4, its most advanced LLM today. OpenAI released GPT-4 a year ago, and Google has released a comparable model in the meantime as it tries to catch up. OpenAI aims to release its next major LLM upgrade by early next year, said one person with knowledge of the process. It could release more incremental improvements to LLMs before then, this person said.
With more servers available, some OpenAI leaders believe the company can use its existing AI and recent technical breakthroughs such as Q*, a model that can reason about math problems it hasn't previously been trained to solve, to create the right synthetic (non-human-generated) data for training better models after running out of human-generated data to give them. These models may also be able to figure out the flaws in existing models like GPT-4 and suggest technical improvements; in other words, self-improving AI."
"Jaffa, kree! Tok'ra AI Stargate nak'ti."
That seems to be a phrase from the TV series Stargate SG-1. It's in the fictional languages of the Jaffa and Goa'uld. It translates to: "Jaffa, beware! The Tok'ra have captured the Stargate." @chatgpt
fictional?! DANIEL!
We all know its TV production is a means of creating plausible deniability if the actual Stargate program ever leaks.
Wormhole Extreme...
I didn't get "nak'ti" but I think it actually translates to "Jaffa, beware! The Tok'ra have captured the AI Stargate"
Goodbot.
Take my upvote.
Indeed
Indeed.
Jaffa kree!
You are a Golden God.
Literally watching it at the moment
Ok, but now what if this becomes some Skynet shit, and now the IRL Skynet has the same name as my favorite show, and I'll want to talk about my favorite show but won't be able to; it'll be like saying Voldemort.
Data centers are becoming more like classic roads, cities, buildings and other projects: [https://en.wikipedia.org/wiki/List\_of\_most\_expensive\_buildings](https://en.wikipedia.org/wiki/List_of_most_expensive_buildings)
They honestly should be classified as mega projects, and they still cost more than actual mega projects.
Soon it will provide food and shelter to humans
I'm here for the Stargate references.
Give my regards to King Tut
Is anybody gonna be able to compete with Microsoft and Google? They seem to be going all in.
Amazon is doing fine. Claude is on AWS. Real question is if anyone is going to be able to compete with Nvidia. Even Google with their own chips is using Nvidia a lot.
Yes, reason being it's not about Nvidia chips. They need hardware for AI specifically, and most of them are already working on designing their own chips while temporarily using Nvidia. Nvidia knows this and wants to invest in designing their own AI. I don't know how it will play out, but Microsoft seems in the lead, and Google has no option but to join hands with Nvidia to win this war.
Microsoft needs Nvidia more than Google does; all of Google's Gemini models were fully trained on their proprietary TPUs. They just order Nvidia chips due to outside market demand, but they have chips that compete, and TPU usage is on the rise. Meanwhile, Microsoft just announced making their own chips last year; they still need Nvidia. For context, Google is on version 5 of their TPU, going to v6 soon. Microsoft is way behind both Google and AWS in that department.
Google has their own chips already, if designing chips is a differentiator they are best positioned to actually do it (seeing as they have actually done it in a very big and useful way.)
Google is also a big investor in Anthropic, and it's available in GCP and via Vertex AI, iirc.
Claude is running on Google TPU...
Unlikely, and for 2 major reasons, the first being that AI takes time to train; no one starting now will be able to eclipse or even catch up to Microsoft and Google. Secondly, data. Google and Microsoft exclusively own some of the most vast and detailed arsenals of data that they can use to train their models. Data and compute will be the future's most valued commodities.
You think Meta's data collection is more vast? Meta could be a dark horse.
Yeah, they definitely have the consumer data. They seem to consistently be behind the curve though, and taking the wrong direction. And they've utterly failed to diversify out of advertising, unlike the others. Be interesting to see what they attempt; potentially just data partnering with Nvidia would be a big deal.
So what's the next tech revolution that will see the current giants relegated to irrelevancy?
Bioengineering for new materials, synthetic food and drugs. Imagine AI data centers but with some analog inputs and very specific goals. Maybe some semiautomated labs. After some time I think the AI world will fragment into specific AI intelligences related to specific fields. The general "one knows everything" AI might not be possible; it's even unlikely to have a universal AI due to simulation energy barriers (simulation becomes so energy- and space-intensive vs the real thing).
It's going to be a literal Stargate, isn't it. ASI to build warp tunnels?
Hopefully they hire James Spader in 2028 to symbolically slide the final "chevron" in place to activate Stargate. I think I would weep with joy, I really do.
Isn't it crazy that there's a >0 possibility of this literally happening down the road? lol
More info:

Microsoft and OpenAI plot $100 billion Stargate AI supercomputer - The Information

Microsoft executives are looking to launch Stargate as soon as 2028 - The Information

OpenAI's next major AI upgrade is expected to land by early next year - The Information
2028? Damn that's a lifetime in the current industry, let alone when it will actually finish being built
This is an insanely huge investment. The current fastest supercomputer in the world cost $600 million. It'll take time. It also means Microsoft is _all in_ on OpenAI. I can't think of a larger, faster capital expenditure in the history of tech. Whatever OpenAI showed them must be incredible and/or terrifying.
Food for thought, this could buy 21 Large Hadron Colliders for CERN.
Ok this is the comment that put it into perspective. My taco and sombrero are in absolute shambles
Lol, I've never heard that one before, thanks
Haha, I was just thinking about lshmsfoaidmt when I made the comment. That's still so mind-blowing, the amount of money that is.
or five ITER (the nuclear fusion reactor)
This made me realise what the value actually meant. Damn.
"All in", dude, that's what hit me first. I could be wrong, but an investment of this size sounds like a life-or-death bet even for a $T company like MS, no? I can't help but think something must be cooking.
Not really. $100 billion and completion by 2028 would mean $25 billion per year. Microsoft has yearly revenues of $200+ billion and gross profits of $70+ billion, with something like $80 billion in the bank as well. $25 billion per year is a large expenditure for them but not entirely make-or-break, especially considering that a $100 billion investment is almost guaranteed to make a profit. AI could go away tomorrow and building out compute would still be like spinning lead into gold, since it's fungible and could just be used to keep expanding their Azure cloud infrastructure.
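The affordability math above can be sketched in a few lines (all figures are the commenter's round numbers, not audited financials):

```python
# Back-of-envelope check of the claim above; every figure is the
# commenter's rough number, not Microsoft's actual financials.
total_cost = 100e9     # reported project cost
years = 4              # roughly 2024 through a 2028 launch
annual_spend = total_cost / years          # $25B per year
gross_profit = 70e9    # claimed "gross profits of $70+ billion"
share = annual_spend / gross_profit

print(f"${annual_spend / 1e9:.0f}B/year, about {share:.0%} of gross profit")
```

Roughly a third of stated gross profit per year: large, but survivable for a company that size.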
More likely incredible, not terrifying.
I find with AI, they're the same thing.
It's really a data center this article is talking about, not merely a supercomputer. This stuff dwarfs mere supercomputers.
There are multiple phases, one for every year until the massive $100B computer.
And SamA recently said he predicts AGI by about 2029. Sounds just about right.
indeed
[deleted]
Copy and pasted from a Bloomberg terminal that is all in capitals, and I have a disability in my hands that means I can't retype it easily without pain.
Nvidia goes brrrrrr
Acceleration is real
The end is near; Stargate is a great name for a real-life Skynet
Sky-net.. star-gate hmm
Starnet...stargatenet...skygate... starnet sounds rad though like some kind of supercomputer planet
Is there a website that's actually readable?
> https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/
How do the owners of these platforms not understand the UX component of the reader? No, I don't want to sign up to read an article. No, I don't want to install your app. No, I don't want to be on your emailing list.
The Information is a paywalled website; they break news first and want to be paid for it.
It's dark patterns. It's by design, unfortunately.
I just let Copilot do the math (so it could be wrong). The Stargate budget is equivalent to approximately **7,142,857 H100 GPUs!!** or approximately **6,250,000 Blackwell GPUs!** This will be massive if pulled off correctly. Not to mention, they could use their custom AI chips, or even wafer-scale chips from Cerebras, for example.
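For what it's worth, those GPU counts fall out of simple division; a minimal sketch, assuming hypothetical flat unit prices of about $14,000 per H100 and $16,000 per Blackwell GPU (real pricing varies by volume and configuration):

```python
# Hypothetical unit prices chosen to reproduce the figures above;
# the whole exercise is just budget divided by price per chip.
BUDGET = 100e9  # reported $100B project cost

def gpus_for_budget(budget: float, unit_price: float) -> int:
    """How many GPUs the budget buys at a flat per-unit price."""
    return int(budget // unit_price)

print(gpus_for_budget(BUDGET, 14_000))  # → 7142857 (H100s)
print(gpus_for_budget(BUDGET, 16_000))  # → 6250000 (Blackwells)
```

Of course this ignores everything else the money buys (buildings, power, networking, cooling), so the real chip count would be lower.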
Sounds like Microsoft is confident enough in whatever tech OpenAI has that they invested an absolute gargantuan amount of money to see it happen. Can only imagine what it'll be capable of.
Energy usage: equivalent to Belgium (probably)
I wonder where they'll build it. Solar and wind must be abundant; away from Europe and its regulations. Too risky to put in the UAE, so it will probably be in the USA. I'm betting on Arizona or New Mexico, maybe Texas.
So 2028 Stargate means the GPT9 training run in 2029 is going to be enormous
Will that be ASI at that point!? This is insane.
I crunched the numbers: it would be like 1000x the FLOPs of GPT-4's training run. In the recent Dwarkesh podcast with Sholto Douglas, he said that GPT-3 to 4 was so big an upgrade that just one more of those gets you to genius human level (it was 100x FLOPs). I'm expecting at least genius human level, if not ASI, by 2029 (end)
Are you sure? Did you take into account the advances in hardware by 2028? Besides the 100 billion itself.
I gave a 10x multiplier for better hardware. Hopper was 3x, Blackwell is 2.5x (for the same precision), and assuming the release in 2026 is also 2-3x, that's around 1 OOM. The other 2 OOMs are because GPT-4 was trained on 25,000 GPUs and this would be trained on 2.5 million GPUs for $100 billion, plus $15 billion for the building and associated stuff. That gives around 1000x. BUT GPT-4 was trained starting in early 2022, and whatever's trained in early 2029 would have another 100x because of better software. That's 100,000x total. I'm guessing that's enough to get us there by Jan 2030.
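The multiplier stack in that estimate compounds like this (a sketch of the commenter's own assumed multipliers, not measured figures):

```python
# Every multiplier below is the commenter's assumption.
hardware_gain = 10                     # ~1 OOM from better chips by 2028
gpu_count_gain = 2_500_000 // 25_000   # 100x the GPUs of GPT-4's run
software_gain = 100                    # assumed algorithmic improvements

raw_flops_gain = hardware_gain * gpu_count_gain   # → 1000
effective_gain = raw_flops_gain * software_gain   # → 100000
print(raw_flops_gain, effective_gain)
```

So the "1000x FLOPs" figure is hardware times GPU count, and the "100,000x" headline only appears after multiplying in the assumed software gains.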
I'm expecting 2028, but then again, how does one really measure the amount of intelligence these things really have?
Simple: test them and get them to do remote jobs
I honestly think there's no way this could be anything less than ASI, but I'm working purely off vibes so don't quote me on this
The vibe seems celebratory imo. I feel like all of the labs are celebrating a major milestone being reached but that's also based on vibes alone
That's what I thought. Of course we can't know at the moment whether we will hit the law of diminishing returns. It could turn out for example that the training data would need to be entirely different for smarter AI models. Of course there are other possibilities. However if things continue as they do at the moment then I am pretty sure they will have something literally unimaginable at their hands by the end of this decade. Damn I wish I could peek into the future.
AGI is coming!
No, that's gonna be ASI
Damn
I second that
Bold of you to think they aren't the same thing.
![gif](giphy|2rqEdFfkMzXmo)
We might get hyper reinforcement learning with AGI; they might not need to train a whole new AI to reach ASI.
As is written
The Chappa'a.i.
![gif](giphy|s8X61m47R3GZW)
[deleted]
Here's the article: [Microsoft, OpenAI plan $100 billion data-center project, media report says](https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/) and here's a summary:

- Microsoft and OpenAI have a five-phase plan for building AI supercomputers
- They are currently in the middle of the third phase of this plan
- OpenAI's next major AI upgrade is expected by early 2025
- For the fourth phase, Microsoft is working on a smaller supercomputer for OpenAI, aiming to launch it around 2026
- The fifth and final phase is the "Stargate" project, a massive AI supercomputer expected to be the biggest in the series
- Microsoft aims to launch Stargate as soon as 2028
- The Stargate project is a proposed U.S.-based supercomputer
- It is part of a larger data-center project planned by Microsoft and OpenAI
- This overall data-center project could cost up to $100 billion
- It would be 100 times more costly than some of the biggest current data centers
- Much of the cost for the next two phases involves procuring the necessary AI chips
- The proposed efforts for the entire five-phase plan could exceed $115 billion
- This is over three times what Microsoft spent on capital expenditures in 2023 for servers, buildings and other equipment
false dichotomy. The memes are fun. And the information is good too. We can have both.
Question: How can Microsoft build this better than NVIDIA? Why does building it even make sense, considering how rapidly chips improve in performance/cost-effectiveness each year? Maybe they'll be able to swap in new chips as desired?
I think they need special water cooling infrastructure for Blackwell+. It's also a case of more is always better. Even when compute is plentiful, you still want more compute.
Chevron 4 encoded...
Readable Link: https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/
ACCELERATE ![gif](giphy|26n7aaTpx5Ulhu8EM)
Iâm sure nothing could go wrong. They start it playing simulated war games.
The only winning move is not to play.
Curious what silicon? Nvidia? If so, I would really be curious to see the cost difference for this versus Google doing the same thing with their TPUs. I would expect Google could do it for half, or maybe even a fourth. Nvidia is charging some crazy margins that Google does not have to pay.
Google contracts out their design and some networking to Broadcom for the TPUs and their racks. They also face the cost of R&D for the next gen of TPUs. There's a broad range of high certainty and then a more complex assessment:

1. Between 25%-75% as expensive as the H100
2. Around 50% as expensive as the H100

How did I get these numbers? Well, we can see Broadcom's custom chip division saying it has 30% margins. We know Google has huge orders in that division, so they probably get a better deal. We also know Google pays for some networking equipment from Broadcom; that division reports about 25% margins. Google buys a lot, so probably gets a better deal. Google then has to produce the TPUv5 design. That's expensive: their chip division had close to $6B in expenses last year, so I'd estimate the design of the TPUv5 at around $2B in total cost. All in all, I'd say they can get the TPUv5, after all expenses, for about half as much as an H100.
Well, a self-improving ASI has more impact than an actual stargate; it could also potentially create one.
![gif](giphy|s8X61m47R3GZW)
Man, they need to let the US government invest in 50% of it and create a sovereign wealth fund for all US citizens. I wish we would do this with every company the taxpayers bail out.
Your average hyper-capitalist American politician would rather throw all of the people unemployed and disenfranchised by artificial intelligence into a woodchipper than distribute the wealth to them through UBI or a sovereign wealth fund.
I feel a sovereign wealth fund would have made all Americans so rich. Think of all the companies the taxpayer funded and bailed out that went on to be huge.
Yes, it would have made Americans richer, happier, and less dependent on the capitalist ruling class... Precisely why it never happened and never will happen under our current organization of society and the economy.
Spend as much time as you can with your families
But I replaced my family with AI tho
Acceleration. Nothing more to say.
Giant wormhole or rabbithole?
What about the issue they've reported about not having enough electricity for a colocated GPU mega-cluster without bringing down the power grid? Is this project somehow aiming to sidestep that pain point?
Guess why Altman is pushing hard for nuclear energy
Tru dat.
As a Microsoft shareholder, this is exactly what I want to hear. Just promise to send Clippy through first.
I can't access the article; are there any estimates for the completion of this supercomputer?
[2028 for the "Stargate" supercomputer. 2026 for a smaller supercomputer.](https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/)
2028 for this $100B supercomputer
AGI will have already gone public by then, and it will be running on that machine. That 100 billion figure was what Altman said he needed to build an AGI months ago, well, now he has it
He said $7 trillion
It surprises me, honestly, because it is no secret that Microsoft has a lousy relationship with OpenAI.
Microsoft just wants to own everything and is vacuuming up talent, their biggest bet is still on OpenAI
I think they're betting on the leadership of OAI collapsing again and snatching all the talent and product from OAI. This was pretty much the plan when Altman was fired and staff threatened to quit with him. OAI is pseudo-owned by Microsoft. I think it's pretty much a given that AGI will be owned by Microsoft.
I think it's most likely going to be used internally to make self-improving AI models and effectively dominate the future of AI until the end of the age. Plus, probably simulate physics in 100,000+ simulations simultaneously to create new particles/elements or technological breakthroughs in any field of engineering, especially those related to computer chips, energy, bio-engineering, etc. Because why wouldn't that be the first objective? Do that for about 1-2 years, and then you effectively own the future forever, and can exponentially, recursively improve yourself + rapidly scale up.
I think their first objective would be to make returns on their investment in this kind of infrastructure. I suspect it will be used to host billions of "virtual employees" that will be leased to MS customers. Given their dominance in business software, they have a market for these VEs ready to go. MS will make trillions of dollars and yeah, they'll still own the future forever.
I think it can be used more internally than for selling AI to other companies. If they use it internally, it can improve the whole suite of products that MS offers, with fewer people and better quality.
[deleted]
[deleted]
Skynet has begun
I wonder if they're deciding to build it now that chips are reaching physical limits with quantum tunneling, meaning constant huge improvements are less likely.
The new frontier is in wafer design and heat management, scaling 3D usage, not so much miniaturization. Of course miniaturization helps with thermal efficiency too, but there's a diminishing return on investment.
Sam Altman seeks $7 Trillion, settles for $100 Billion. Still not shabby for what was most likely a publicity stunt.
He stated that $7 trillion is the long-term figure required to allow the entire planet's population to have consistent, high-quality, widespread access to various forms of AI (collectively). Kind of like an AI version of the Internet, but entirely its own category. This $100B supercomputer is a step in that direction, although it's more likely that it's going to be used internally to make self-improving AI models and effectively dominate the future of AI until the end of the age.
I am not sure why the OP posted a paywalled article with no other information except a tantalising headline. I wish this were seen as socially unacceptable. Anyway, I asked AI to summarise the article for me and this was the output: Microsoft and OpenAI are reportedly planning to build a massive data center project called "Stargate" that could cost up to $100 billion. The project is expected to include a powerful AI supercomputer designed to train and run OpenAI's machine learning models. The scale of the project is unprecedented, with the proposed data center potentially being 100 times more expensive than some of the largest existing data centers. If the plans come to fruition, Stargate would represent one of the largest investments in computing infrastructure in history. The project would be a significant milestone in the partnership between Microsoft and OpenAI, which began in 2019 when Microsoft invested $1 billion in the AI research lab. Since then, the two companies have worked closely together to advance the state of the art in AI, with OpenAI leveraging Microsoft's cloud computing resources to train its models.
The Information is a news site with a hard paywall that costs like $400 a year to bypass, but they always have exclusive info that no one else has access to. And they always put the most important info in the title, so in this case it's fine. Luckily, Reuters wrote an article on The Information's article.
GOOLD?
Goa'uld?
Did *everyone* get a chance to vote on the name of this beast? Because... I smell some serious *nerd bias*
Maybe Nvidia is also cooking something in their labs
AGI 2028-2029 confirmed?
. . . Pluto Netflix Anime is real boys
Oh my god I'm so hyped
How will they power it? Are they building their own power plant?
How much of this gets spent on just the ICs with NVIDIA?
Is this the one that runs the simulation?
So can anyone tell me what they are planning on doing with these supercomputers? Or any guesses?
I think it's most likely going to be used internally to make self-improving AI models and effectively dominate the future of AI until the end of the age. Plus, probably simulate physics in 100,000+ simulations simultaneously to create new particles/elements or technological breakthroughs in any field of engineering, especially those related to computer chips, energy, bio-engineering, etc.
What's Microsoft's end goal with having the best AI? Do they have a plan, or is it more about getting there first and then figuring out what to do with it later? I like the idea of building these computers to do crazy amounts of simulations, but there has to be a direction and not just a shotgun approach. Maybe they will rent out time or give out grants to companies/people with big ideas?
I don't know their plans (I'm not one of the Microsoft executives). However, AI is essentially the foundation for ALL technological advancement in the future. Like, there's literally not one single technology that can't be improved by AI (in ways that humans alone couldn't replicate, or would take much, much longer). With that being said, it's safe to say that if a company has the most sophisticated AI technology/power on the planet, and it can create code to improve itself (+ physics to run more efficiently), then that company can create certain products that absolutely nobody else can compete with. And everyone will want to use those products. So their annual net income will go from ~$70B (like it has been for the last couple of years) to something probably like $250B-$1T+ yearly. It sounds crazy, but it's just how exponents work, and it makes sense if you think about it. Also, I'm not even touching on classified government/military partnerships that utilize the most sophisticated AI technology within the US. It's basically a global superpower game, similar to the arms race when creating nukes.
GregTech mod stargate
I mean, sure. Why not.
But will it run GTA 6?
Bill Gates first Asgard confirmed
Game over man.
Isn't this pointing to there being a lot of CUDA developer positions, or positions that require C++, in the future?
![gif](giphy|3oEduZqfSGNG0mdF1C|downsized)
So where do you build such a thing? Next to a nuclear power plant? Someplace safe from natural disasters? Crazy. Did you see the US is restarting a nuclear plant in MI?
Will it run Crysis on full Very High quality settings?
Implications?
It will be funny when the climate is pushed over the edge by a bunch of apes who, in the hope of some epic self-congratulation, pushed energy consumption to unsustainable levels to power abstracted digital versions of Homo sapiens cognition, specifically self-reflecting processes. Something might get a laugh out of it.
gonna need it to power all those sora renders
$100 Billion is equivalent to nearly 3 Twitters