Pictured is "power hungry" Sam Altman, just in case you don't immediately recognize him. Back in late September, Anissa Gardizy, a reporter for The Information, wrote an article in which she said, in virtually her first line, "Artificial intelligence is hungry for power at a scale that defies belief." I am presenting the entire column below, since you'd otherwise have to pay to read it.
Those bent on developing "Artificial Intelligence" are really asking us, as human beings, to sacrifice virtually everything to which we are now committed in order to bring profit-making success to a venture that proposes to eliminate massive numbers of human jobs, daring the fates, in the form of Global Warming, as it does so. Massive amounts of money are involved, and to the extent that our money represents "opportunity" (and it does), what Altman and his acolytes suggest is that we forgo all other dreams and hopes and build a system that will replace us, functionally, and that will expose the World of Nature to a massively increased likelihood that water and land resources will be depleted in the effort to construct an alternative to human intelligence.
Power hungry? Maybe that's one way to put it.
Or Power Mad (as in "insane").
oooOOOooo
By Anissa Gardizy
Welcome to the first edition of The Information’s newsletter on AI infrastructure. In the coming months, we’ll cover the data centers, chips, networking and energy that power AI.
Artificial intelligence is hungry for power at a scale that defies belief.
Last week, OpenAI and Nvidia said they would work together to develop 10 gigawatts of data center capacity over an unspecified period. Inside OpenAI, Sam Altman floated an even more staggering number: 250 GW of compute in total by 2033, roughly one-third of the peak power consumption in the entire U.S.!
Let that sink in for a minute. A large data center used to mean 10 to 50 megawatts of power. Now, developers are pitching single campuses in the multigigawatt range—on par with the energy draw of entire cities—all to power clusters of AI chips.
Or think of it this way: A typical nuclear power plant generates around 1 GW of power. Altman’s target would mean the equivalent of 250 plants just to support his own company’s AI. And based on today’s cost to build a 1 GW facility (around $50 billion), 250 of them implies a cost of $12.5 trillion.
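As a quick sanity check of the figures above, here is a minimal back-of-envelope sketch in Python. The $50 billion-per-gigawatt cost and the 1 GW nuclear-plant yardstick come from the column itself; the U.S. peak-demand figure of roughly 745 GW is an assumption added for illustration, not something the column states.

# Back-of-envelope check of the column's figures.
target_gw = 250              # Altman's floated 2033 compute target, in gigawatts
us_peak_demand_gw = 745      # approximate U.S. peak demand (assumption, not from the column)
cost_per_gw_usd = 50e9       # ~$50 billion to build a 1 GW facility (from the column)
nuclear_plant_gw = 1         # typical nuclear plant output (from the column)

share_of_us_peak = target_gw / us_peak_demand_gw    # ~0.34, i.e. roughly one-third
equivalent_plants = target_gw / nuclear_plant_gw    # 250 plants
total_cost_usd = target_gw * cost_per_gw_usd        # $12.5 trillion

print(f"Share of U.S. peak demand: {share_of_us_peak:.0%}")
print(f"Equivalent 1 GW nuclear plants: {equivalent_plants:.0f}")
print(f"Implied build-out cost: ${total_cost_usd / 1e12:.1f} trillion")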
“We are in a compute competition against better-resourced companies,” Altman wrote to his team last week, likely referring to Google and Meta Platforms, which also have discussed or planned large, multigigawatt expansions. (xAI CEO Elon Musk also knows a thing or two about raising incredible amounts of capital.)
“We must maintain our lead,” Altman said.
OpenAI expects to exit 2025 with about 2.4 GW of computing capacity powered by Nvidia chips, said a person with knowledge of the plan, up from 230 MW at the start of 2024.
Ambition is one thing. Reality is another, and it’s hard to see how the ChatGPT maker would leap from today’s level to hundreds of gigawatts within the next eight years. Obviously, that figure is aspirational.
Then again, OpenAI’s fast-rising server needs surprised even Nvidia executives, said people on both sides of the relationship.
Before the events of last week, OpenAI had contracted to have around 8 GW by 2028, almost entirely consisting of servers with Nvidia graphics processing units. That’s already a staggering jump, and OpenAI is planning to pay hundreds of billions of dollars in cash to the cloud providers who develop the sites.
To put it into perspective, Microsoft’s entire Azure cloud business operated at about 5 GW at the end of 2023—and that was to serve all of its customers, not just AI. (Azure is No. 2 after Amazon’s cloud business.)
Bigger Is Still Better
Data center developers tell me most of OpenAI’s top competitors are asking for single campuses in the 8 to 10 GW range, an order of magnitude bigger than anything the industry has ever attempted to build.
A year and a half ago, OpenAI’s plan with Microsoft to build a single Stargate supercomputer costing $100 billion seemed like science fiction. Barring a seismic macroeconomic change, these types of projects now seem like a real possibility.
The rationale behind them is simple: Altman and his rivals believe that the bigger the GPU cluster, the stronger the AI model they can produce. Our team has been at the forefront of reporting on some of the limitations of this scaling law, as evidenced by the smaller step-up in quality between GPT-5 and GPT-4 than between GPT-4 and GPT-3.
Nevertheless, Nvidia’s fast pace of GPU improvements has strengthened the belief of Altman and his ilk that training runs conducted with Blackwell chip clusters this year and with Rubin chips next year will crack open significant gains, according to people who work for these leaders.
In the early days of the AI boom, it was hard to develop clusters of a few thousand GPUs. Now firms are stringing together 250,000, and they want to connect millions in the future.
That desire runs into a pretty important constraint: electricity. Companies are already trying to overcome that hurdle in unconventional ways, by building their own power plants instead of waiting for utilities to provide grid power, or by putting facilities in remote areas where energy is easier to secure.
Still, the gap between company announcements and the reality on the ground is enormous. Utilities by nature are conservative when it comes to adding new power generation. They won’t race to build new plants if there’s a risk of ending up with too much capacity—no matter who is asking.
‘Activating the Full Industrial Base’
OpenAI’s largest cluster under development, in Abilene, Texas, currently uses grid power and natural gas turbines. But other projects it has announced in Texas will use a combination of natural gas, wind and solar.
Milam County, where OpenAI is planning one of its next facilities, recently approved a 5 GW solar cell plant, for instance. And gas is expected to be the biggest source of power for the planned sites, according to a person with knowledge of the plans.
To accomplish its goals, OpenAI and its partners will need the makers of gas and wind turbines to greatly expand their supply chains. That’s not an easy task, given that it involves some risk-taking on the part of the suppliers. Perhaps Nvidia’s commitment to funding OpenAI’s data centers while maintaining control of the GPUs will make those conversations easier.
Altman told his team that obtaining boatloads of servers “means activating the full industrial base of the world—energy, manufacturing, logistics, labor, supply chain—everything upstream that will make large-scale compute possible.”
There are other bottlenecks, such as getting enough chipmaking machines from ASML and getting enough manufacturing capacity from Taiwan Semiconductor Manufacturing Co., which produces Nvidia’s GPUs. Negotiating for that new capacity will fall to Nvidia.
Predicting the future is notoriously difficult, but a lot of things will need to go right for OpenAI and its peers to get all the servers they want. In the meantime, they will keep making a lot of headlines in their quest to turn the endeavor into a *self-fulfilling prophecy* (emphasis added).
https://www.theinformation.com/articles/sam-altman-wants-250-gigawatts-power-possible