OpenAI has signed a massive 7-year, $38 billion deal with Amazon Web Services (AWS) to bolster its computing power. This partnership will provide OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models, such as ChatGPT and Sora.
Scaling the development of frontier AI requires massive compute resources, according to OpenAI CEO Sam Altman. The deal with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI capabilities to a wider audience.
Amazon plans to deploy hundreds of thousands of chips, including Nvidia's GB200 and GB300 AI accelerators, in data center clusters built to serve ChatGPT responses, generate AI video, and train future models. News of the deal sent Amazon shares soaring, while shares of long-time OpenAI partner Microsoft dipped briefly after the announcement.
OpenAI needs enormous computing power to serve its generative AI models to millions of users, and amid ongoing chip shortages, securing reliable capacity has been difficult. The company is also reportedly developing its own AI accelerator chips to ease its dependence on outside suppliers.
This massive spending commitment, part of compute agreements that reportedly total over $1 trillion across providers, raises questions about the sustainability of OpenAI's business model. The company is laying the groundwork for an initial public offering that could value it at up to $1 trillion, but whether that valuation makes sense while the company burns through cash is another matter.
As AI development continues to advance rapidly, some observers warn of a potential bubble. Against that backdrop, OpenAI's ambitious plan to add 1 gigawatt of compute every week has drawn scrutiny from experts: with capital expenditures of over $40 billion per gigawatt, the cost of sustaining that pace is staggering.
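A back-of-envelope calculation, using only the figures quoted in this article, shows why that build-out target draws scrutiny. This is a rough sketch, not a reported estimate; the annualization assumes the weekly pace and per-gigawatt cost hold steady.

```python
# Rough implied spending from the figures in this article (illustrative only).

GW_PER_WEEK = 1            # stated goal: add 1 gigawatt of compute per week
CAPEX_PER_GW = 40e9        # "over $40 billion per gigawatt"
WEEKS_PER_YEAR = 52

# Implied annual capital expenditure at that pace
annual_capex = GW_PER_WEEK * CAPEX_PER_GW * WEEKS_PER_YEAR
print(f"Implied annual capex: ${annual_capex / 1e12:.2f} trillion")

# For comparison, the AWS deal spread evenly over its term
aws_total, aws_years = 38e9, 7
print(f"AWS deal per year: ${aws_total / aws_years / 1e9:.1f} billion")
```

Even the $38 billion AWS agreement, averaging roughly $5.4 billion per year, covers only a small fraction of the more than $2 trillion per year that the stated build-out pace would imply.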
The deal marks a significant shift in OpenAI's relationship with Microsoft, which is no longer its exclusive computing partner. By securing this massive AWS deal, OpenAI further solidifies its independence from its wealthy benefactor and takes a step towards becoming a standalone entity in the AI landscape.