Move over Nvidia: OpenAI just plugged into 6 gigawatts of AMD power.

The AI arms race just kicked into an even higher gear. On October 6, 2025, AMD and OpenAI announced a multi-year, multi-generation partnership to deploy a massive 6 gigawatts of AMD Instinct GPUs — a move that could reshape the competitive landscape of high-performance computing and generative AI infrastructure.

According to the announcement, the first 1-gigawatt deployment of AMD’s upcoming Instinct™ MI450 Series GPUs will roll out in the second half of 2026. That’s only the beginning of a long-term collaboration designed to power OpenAI’s expanding compute needs while propelling AMD’s data-center business into new territory.

“We are thrilled to partner with OpenAI to deliver AI compute at massive scale,” said Dr. Lisa Su, AMD’s CEO. “This partnership brings the best of AMD and OpenAI together to create a true win-win.”

“Building the future of AI requires deep collaboration across every layer of the stack,” added Greg Brockman, OpenAI’s president. “Working alongside AMD will allow us to scale to deliver AI tools that benefit people everywhere.”

Why this deal matters

The 6-gigawatt figure isn’t just impressive — it’s unprecedented. It reflects both the explosive demand for AI compute and AMD’s growing credibility as a challenger to NVIDIA’s dominance.

Under the deal, AMD becomes a core strategic compute partner for OpenAI. The two companies will collaborate not only on the MI450 generation but also on future GPU architectures and rack-scale AI solutions. AMD has also issued OpenAI a warrant for up to 160 million shares of AMD common stock, which vests as GPU deployments and other milestones are achieved, a structure that ties the success of both companies tightly together.

Financially, AMD projects the partnership will generate tens of billions of dollars in revenue and be accretive to non-GAAP earnings per share, according to AMD CFO Jean Hu.


A look back: How it compares to past AI infrastructure deals

This announcement echoes and arguably surpasses earlier strategic moves in the AI hardware space:

  • Microsoft and OpenAI (2019–2023): Microsoft’s multi-billion-dollar investment in OpenAI included exclusive Azure cloud integration and huge NVIDIA GPU orders. That partnership essentially set the modern AI boom in motion.
  • NVIDIA and CoreWeave (2023): NVIDIA’s partnership with CoreWeave provided another early example of vertically integrated AI compute scaling. But even that deal, worth billions, operated on a far smaller energy footprint.
  • AWS and Anthropic (2024): Amazon’s investment in Anthropic to use Trainium and Inferentia chips was a big step toward in-house AI compute, but the hardware scale was still measured in megawatts, not gigawatts.

The AMD-OpenAI deal is different in both scale and structure. Instead of a single-generation purchase, it’s a multi-year collaboration where hardware, software, and even equity incentives evolve together. This level of long-term alignment could signal a new era of co-engineered AI infrastructure — one where model developers and chipmakers design entire systems side-by-side.


What’s next?

Starting in 2026, OpenAI’s MI450-powered clusters will begin ramping up, followed by future AMD Instinct generations. The companies aim to push the boundaries of performance and efficiency, potentially redefining how large language models and multimodal systems are trained.

If successful, this partnership won’t just fuel OpenAI’s next breakthroughs — it could cement AMD’s place as a cornerstone of global AI infrastructure, and prove that the GPU market has room for more than one giant.


Bottom line:
The AMD–OpenAI partnership isn’t just another supply deal — it’s a massive bet on the future of AI compute. With 6 gigawatts of power, it may very well be one of the defining collaborations of the AI decade.
