Imagine thinking you’re just chatting with a helpful bot about homework, strategy slides, or perhaps travel plans, only to find yourself helping to build an empire.
Within two months of its launch, ChatGPT reached an estimated 100 million users. It has since grown to hundreds of millions of regular users worldwide, becoming one of the fastest‑adopted consumer apps in history. For many people, that’s where the story ends: a clever tool, a runaway success, another Silicon Valley fairy tale.
For journalist Karen Hao, it’s only the prologue to something far more unsettling.
I recently spoke with Hao about her book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, which traces how a research lab turned commercial juggernaut helped create a new kind of power: corporate empires built on data, energy, and labor, all wrapped in the language of saving humanity.
Hao studied mechanical engineering, worked briefly in Silicon Valley, and quickly realized that tech culture wasn’t where she wanted to spend her life. Writing had always been a passion, and within a year she pivoted into journalism.
That technical background never went away. At MIT Technology Review, she spent years covering AI research and its social impacts, eventually publishing a four‑part series on “AI colonialism,” examining how AI systems built in wealthy countries rely on resources and people from poorer ones, often repeating familiar patterns of extraction and inequality.
She was already toying with the idea of a book on AI and colonialism when ChatGPT arrived. The LLM didn’t just dominate headlines; it turned the whole industry toward ever‑larger, more energy‑hungry models and unleashed a tidal wave of hype. Suddenly everyone was an “AI expert,” and, Hao felt, the quality of information available to the public sharply deteriorated. This was the moment the book crystallized. She would tell the story of OpenAI’s rise, how we arrived at ChatGPT – and set it within a much older story: the story of empire.
AI as a new imperial frontier
Why “empire”? For Hao, the parallel is not a metaphor of convenience but a structural comparison. Historic empires seized land, minerals, and human labor from colonies to enrich a small elite at the center. Today’s AI giants, she argues, operate in eerily similar ways: hoovering up data, exploiting poorly paid workers to label and moderate it, and siting energy‑hungry data centers in places that bear the environmental costs – all while concentrating wealth and power in a handful of companies and countries.
Hao’s reporting tracks data annotators in countries like Kenya, paid just a few dollars an hour to sift through toxic content so that users of systems like ChatGPT are spared it. It follows communities in Chile and elsewhere whose water and energy systems are strained by giant data centers built to fuel the AI boom.
Hao’s point isn’t that AI is uniquely evil but that the way we have chosen to build it – who pays the costs and who reaps the benefits – looks like a high‑tech update of a 19th‑century imperial model.
One of the book’s most striking arguments is how avoidable this trajectory is. Many forecasts now suggest data centers could more than double their share of U.S. electricity use by 2030, with AI‑related buildout accounting for 30 to 40% of all new demand this decade.
Those numbers are not inevitable laws of nature; they’re the result of a particular bet: that bigger is always better, especially in AI. Train on more data, run larger systems, accept the higher costs. Hao likens AI development to choosing a path through a forest. One route bulldozes straight through, clear‑cutting trees and wildlife. Another path winds around, leaving the forest largely intact. Both get you to the other side, but Silicon Valley has convinced much of the world that only the clear‑cutting route can possibly deliver the magical benefits we’ve been promised.
Her counterargument is straightforward: useful AI systems can be built with smaller, more efficient models, locally controlled infrastructure, and tighter regulation of training data. The idea that massive energy use and extraction are the “price of progress” is itself part of the empire’s narrative.
AGI as a religion
If empire is the structure, AGI (artificial general intelligence) is the theology.
Hao interviewed hundreds of employees across top AI companies, particularly at OpenAI. Many genuinely believe they are inching toward systems that match or surpass human intelligence. Keep in mind that they see internal demos and prototypes the public never does. Add life inside a tight Silicon Valley bubble, where optimism is constantly reinforced, and skepticism can begin to seem irrational. People describe themselves as “AGI believers” or “AGI‑pilled.” They know they’re working inside a myth, and they lean into it.
For the rest of us, the sales pitch is framed differently but is no less powerful. Hao likens AGI to the Mirror of Erised in Harry Potter, the enchanted mirror that reflects your deepest desire. Look into the future that AI leaders describe, and you might see the end of poverty, a cure for cancer, endless economic growth, or personalized education for every child. Whatever you long for, AGI conveniently promises it.
When so much appears to be on offer – if only we keep up the supply of money, data, and electricity – it becomes very hard to say no. That mix of insider devotion and public yearning gives the AI empire its ideological fuel.
The strange case of OpenAI’s governance
The empire isn’t just about lofty dreams; it’s wired into corporate plumbing. OpenAI’s unusual governance is a perfect example.
Hao begins with a talent problem. Early on, Sam Altman and his co‑founders needed to attract elite researchers, the kind who would otherwise work at Google. Rather than trying to outbid Google on salaries, OpenAI competed on mission, launching as a nonprofit dedicated to building safe AI “for the benefit of all humanity.” It was a powerful signal to idealistic researchers that they’d be doing something bigger than optimizing ad clicks.
Once the team was assembled, with stars like Ilya Sutskever, the bottleneck shifted. Massive supercomputers and gigantic training runs require staggering amounts of cash, so OpenAI created a for‑profit arm, nested under the nonprofit, to raise billions from investors like Microsoft and, later, SoftBank. That’s when the contradictions started to bite. Early employees believed they’d joined a charity; later hires thought they were working at a hot startup. Disagreements over mission versus revenue escalated until they erupted in November 2023, when Altman was abruptly fired – and just as abruptly reinstated after a staff revolt and investor pressure.
The drama didn’t end there, of course. Under regulatory scrutiny, OpenAI has since embarked on an intricate recapitalization, transforming its commercial arm into a public benefit corporation that can raise money more easily while remaining under the formal control of the nonprofit, with the attorneys general of Delaware and California acting as key watchdogs. On paper, it’s a compromise between mission and market. In practice, Hao argues, it shows just how far the company has traveled from its original identity as a humble research nonprofit – and how difficult it is to restrain an empire once it begins to expand.
It’s tempting to frame all of this as yet another tale of a tech “genius” CEO. Silicon Valley thrives on these stories. Hao resists reducing the story to psychology, but she does identify a vicious cycle at work. To even attempt to “build the future” for billions of people, a certain level of ego is necessary. Success amplifies that. Power insulates leaders from criticism, friction disappears, and dissent becomes ever easier to dismiss. In fact, the more critics push back, the more such leaders seem to double down.
Still, Empire of AI is not a character study. Its real target is the system that allows a handful of companies – and the people who run them – to shape global infrastructure, information flows, and political choices.
Empire versus democracy
This is where the metaphor becomes most unsettling. Historic empires did not merely extract resources; they also ruled. According to Hao, today’s AI empires pose a quiet yet profound threat to democracy. They control the models that generate information, the platforms that distribute it, and the analytics that decide who sees which political message. Their data centers reshape local economies and ecosystems without meaningful consent from affected communities. Their lobbying muscle bends regulation toward their interests, sometimes even pushing for laws that block local governments from regulating AI at all.
Meanwhile, the public is told that AI is inevitable and that any attempt to slow or redirect it risks economic decline or geopolitical defeat. That’s not a level playing field. It is a colonial bargain, repackaged.
For Hao, the conclusion is stark: democracy and empire cannot thrive together. When a small cluster of firms and founders essentially governs the digital infrastructure of the whole world, meaningful self‑government increasingly becomes an illusion.
This is not a call to smash machines or retreat to a pre‑digital age. Hao is explicit that the goal is not to eliminate AI firms but to stop them from becoming – or remaining – empires.
That could mean antitrust action, as happened with earlier oil and telecom giants. It could mean stronger labor protections for data workers, democratic control over where data centers are built and how they’re powered, and tighter global rules around data use and surveillance. It may also require public and civic alternatives: open models, public compute, and regional AI projects controlled by the communities they serve rather than by Silicon Valley boards.
Above all, it requires puncturing the myth that there is only one road through the forest.
The next time you hear someone promise that AGI will solve climate change, cure disease, and usher in universal prosperity, pause and, as Hao’s work suggests, ask yourself: Whose future is being promised here, and whose forest is being cut down to build it?
© IE Insights.