AMD talks 1.2 million GPU AI supercomputer to compete with Nvidia — 30X more GPUs than world's fastest supercomputer


Demand for more computing power in the data center is growing at a staggering pace, and AMD has revealed that it has had serious inquiries to build single AI clusters packing a whopping 1.2 million GPUs or more.

AMD's admission comes from a lengthy discussion The Next Platform had with Forrest Norrod, AMD's EVP and GM of the Datacenter Solutions Group, about the future of AMD in the data center. One of the most eye-opening responses was about the biggest AI training cluster that someone is seriously considering.

When asked if the company has fielded inquiries for clusters as large as 1.2 million GPUs, Forrest replied that the assessment was virtually spot on.

Morgan: What’s the biggest AI training cluster that somebody is serious about – you don’t have to name names. Has somebody come to you and said, with MI500, I need 1.2 million GPUs or whatever?

Forrest Norrod: It’s in that range? Yes.

Morgan: You can’t just say “it’s in that range.” What’s the biggest actual number?

Forrest Norrod: I am dead serious, it is in that range.

Morgan: For one machine?

Forrest Norrod: Yes, I’m talking about one machine.

Morgan: It boggles the mind a little bit, you know?

1.2 million GPUs is an absurd number (mind-boggling, as Forrest quips later in the interview). AI training clusters are typically built with a few thousand GPUs connected via a high-speed interconnect spanning a handful of server racks. By contrast, an AI cluster with 1.2 million GPUs seems virtually impossible.

We can only imagine the pitfalls someone will need to overcome to build an AI cluster with over a million GPUs, but latency, power, and the inevitability of hardware failures are a few factors that immediately come to mind.

AI workloads are extremely sensitive to latency, particularly tail latency and outliers, wherein certain data transfers take much longer than others and disrupt the workload. Additionally, today's supercomputers already have to mitigate GPU and other hardware failures that, at their scale, occur every few hours. Those issues would become far more pronounced at 30X the size of today's largest known clusters. And that's before we even touch on the nuclear power plant-sized power delivery such an audacious goal would require.
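To put the failure problem in perspective, here's a back-of-the-envelope sketch. The per-GPU mean time between failures is an illustrative assumption, not a vendor figure:

```python
# Rough failure-rate estimate for a 1.2-million-GPU cluster.
# ASSUMPTION: 500,000-hour mean time between failures per GPU (illustrative only).
GPU_COUNT = 1_200_000
MTBF_HOURS_PER_GPU = 500_000

# Treating failures as independent, the expected failure rate scales linearly
# with the number of GPUs in the machine.
failures_per_hour = GPU_COUNT / MTBF_HOURS_PER_GPU
print(f"Expected GPU failures per hour: {failures_per_hour:.1f}")        # 2.4
print(f"One failure roughly every {60 / failures_per_hour:.0f} minutes")  # ~25
```

In other words, even with generously reliable hardware, a million-GPU machine would lose a GPU every half hour or so, meaning checkpointing and fault-tolerant scheduling stop being optional extras and become the core of the design.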

Even the most powerful supercomputers in the world don't scale to millions of GPUs. For instance, the fastest operational supercomputer right now, Frontier, "only" has 37,888 GPUs.
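The headline math is easy to check, and the power bill nearly writes itself. In the sketch below, the 700 W per-GPU draw and the 1.5x facility overhead are assumptions for illustration, not published figures:

```python
# Sanity-checking the "30X Frontier" claim and the resulting power footprint.
CLUSTER_GPUS = 1_200_000
FRONTIER_GPUS = 37_888     # Frontier's GPU count, per the article
WATTS_PER_GPU = 700        # ASSUMPTION: training-class accelerator power draw
FACILITY_OVERHEAD = 1.5    # ASSUMPTION: cooling/networking/host multiplier

print(f"Scale vs. Frontier: {CLUSTER_GPUS / FRONTIER_GPUS:.1f}x")         # ~31.7x
gpu_power_mw = CLUSTER_GPUS * WATTS_PER_GPU / 1e6
print(f"GPU power alone: {gpu_power_mw:.0f} MW")                          # 840 MW
print(f"With overhead: {gpu_power_mw * FACILITY_OVERHEAD / 1e3:.2f} GW")  # ~1.26 GW
```

Under those assumptions the machine lands well north of a gigawatt, roughly the output of a single nuclear reactor, so the power-plant comparison above is not hyperbole.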

The goal of million-GPU clusters speaks to the seriousness of the AI race that is shaping the 2020s. If it is in the realm of possibility, someone will try to build it in pursuit of greater AI processing power. Forrest didn't say which organization is considering a system of this scale, but he did mention that "very sober people" are contemplating spending tens to hundreds of billions of dollars on AI training clusters (which is why million-GPU clusters are being considered at all).


Comments from the forums

  • A Stoner

    And even with that, they would not have the processing power of an insect at their disposal. We still have no factual intelligence from all the AI spending that has happened. No AI knows anything at all as of yet.

    Reply

  • JRStern

    Well, Musk was just out there raising money for 300,000 GPUs; we're talking billions or trillions before they're all installed and usable, not to mention gigawatts of power to run them. OTOH this is crazy stuff, IMHO, and perhaps Elon isn't hip to the news that much smaller LLMs are now being seen as workable, so maybe nobody will need a single training system with more than 300 or perhaps 3,000 GPUs to do a retrain within 24 hours. And maybe whole-hog retrains won't be as necessary anymore, either.

    So AMD is just trolling, is what this comes down to, unlikely to actually build it out.

    Reply

  • Pierce2623

    The record Dynex set recently was only a quantum record, and the record they beat wasn’t even real quantum computing. The record they beat only involved 896 GPUs.

    Reply

  • jeremyj_83

    It literally said "For instance, the fastest operational supercomputer right now, Frontier, "only" has 37,888 GPUs." in the article. Frontier has 1.1 exaFLOPs of computing power just so you know.

    Reply

  • DS426

    Usually business is all about ROI and profit but... really, c'mon, someone show me the math on how investments like this pay off without losing money?? We're also talking about cooling, electric bills, sysadmins, and so on, so... wtf is so magical about a (relatively?) well-trained and advanced LLM or such that justifies this cost?

    Seriously, not being a hater just to hate but again being on the business side of things in IT, I need to see some math.

    On another note, at least some folks are seeing the value in not paying ridiculous cash just to have "the best" (nVidia), whereas AMD can quite probably provide a better return on investment. Kind of that age-old name-brand vs. generic argument.

    Still mind-blown over here. How many supercomputers have more than 1.2 million CPUs? I know this doesn't account for core counts, but holy smokes, we're clearly not talking apples to apples here!! Pretty sure a mini power plant literally needs to sit beside a datacenter/supercomputing facility like this.

    Reply

  • oofdragon

    I honestly don't get it. OK, so someone like Elon is considering 300 thousand GPUs like Blackwell, spending on the order of billions just to buy them, and then you have the electric bill and maintenance as well every month. In what way can he possibly make a profit out of this situation?

    Reply

  • abufrejoval

    Nice to see you reading TNP: it's really one of the best sites out there and on my daily reading list.

    And so are the vultures next door :-) (the register)

    Reply

  • ThomasKinsley

    Not to get all cynical, but this sounds like a bit of a stretch to me. The reporter gave the random number 1.2 million and the AMD staff member responded with, "It’s in that range? Yes." A range needs more than one number. Are we talking 700,000? 1 million? 1.4 million? There's no way to know.

    Reply

  • kjfatl

    If Musk is serious about the 300,000 GPUs, it makes perfect sense that the design would support an upgrade path where compute modules could be replaced with future modules with 2X or 4X the capacity.
    The most obvious use for such a machine is constant updates to self-driving vehicle software. Daily or even by-the-minute updates are needed for this to be seamless. This is little different from what Google or Garmin does with maps. When 'interesting' data is seen by vehicles, it would be sent to the compute farm for processing. Real-time data from a landslide just before the driver ran off the side of the road would qualify as 'interesting'. Preventing the crash in the next landslide would be the goal.

    This sort of system is large enough to justify custom compute silicon supporting a limited set of models. This alone might cut the hardware requirements by a factor of 4. Moving to Intel 14A or the equivalent from TSMC or Samsung might give another factor of 8 toward density. Advanced packaging techniques might double it again. Combining all of these could provide a machine with the same footprint and power envelope of today's supercomputer with 30,000 GPUs.

    Reply
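Compounding the density factors in the comment above is simple arithmetic; here's a quick sketch (all three factors are the comment's speculation, not confirmed figures):

```python
# Multiplying out kjfatl's speculative density factors (none are confirmed).
custom_silicon = 4  # claimed hardware cut from model-specific compute silicon
process_node = 8    # claimed density gain from Intel 14A or a TSMC/Samsung equivalent
packaging = 2       # claimed gain from advanced packaging

combined = custom_silicon * process_node * packaging
print(f"Combined density factor: {combined}x")                        # 64x
print(f"1.2M GPUs / {combined} = {1_200_000 // combined:,} modules")  # 18,750
```

At a combined 64x, the 1.2-million-GPU target shrinks to under 20,000 module-equivalents, in the same ballpark as the ~30,000-GPU footprint the comment describes.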

  • shawman123

    How much power would a million GPUs consume? It seems off the charts if all of them are fully utilized!!!

    Reply
