Dec. 10, 2025

Who Should Control AI? The Illusion of Sovereignty


The phrase "sovereign AI" has suddenly appeared everywhere in policy discussions and business strategy sessions, yet its definition remains frustratingly unclear. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.

 

As it turns out, this definitional vagueness generates enormous profits. NVIDIA's CEO described sovereign AI as representing billions in new revenue opportunities, while consulting firms estimate the market could reach $1.5 trillion.

 

From Gulf states investing hundreds of billions to European initiatives spending similar amounts, the sovereignty business is booming.

 

This conceptual challenge goes beyond mere marketing. Most frameworks assume sovereignty operates under principles established after the Thirty Years' War: complete control within geographical boundaries.

 

But artificial intelligence doesn't respect national borders.

 

Genuine technological independence would demand dominance across the entire development pipeline: semiconductors, computing facilities, algorithmic models, user interfaces, and information systems.

 

But the reality is that a single company ends up dominating chip production, another monopolizes the manufacturing equipment, and even breakthrough Chinese models depend on restricted American components.

 

Currently, nations, technology companies, end users, and platform workers each wield meaningful but incomplete influence.

 

France welcomes Silicon Valley executives to presidential dinners while relying on American semiconductors and Middle Eastern financing. Germany operates localized versions of American AI services through domestic intermediaries, running on foreign cloud platforms.

 

All that, while remaining within U.S. legal reach!

 

But through all of these sovereignty negotiations, the voices of ordinary people are conspicuously absent. Algorithmic systems increasingly determine job prospects, financial access, and legal outcomes without our informed consent or any meaningful ability to challenge their decisions.

 

Rather than asking which institution should possess ultimate authority over artificial intelligence, we might question whether concentrated control serves anyone's interests beyond those doing the concentrating.

 

Key Topics:
• Who Should Control AI? The Illusion of Sovereignty (00:00)
• The Westphalian Trap (03:15)
• Sovereignty at the Technical Level (07:15)
• The Corporate-State Dance (15:50)
• The Missing Sovereign: The Individual (25:45)
• Beyond False Choices (24:15)

 

More info, transcripts, and references can be found at ethical.fm

 

When Bloomberg reported in October 2025 that "Europe wants sovereign AI but can't agree on what it means," the observation revealed more than continental disunity; it exposed an ambiguity at the heart of one of the decade's defining technological debates. From Saudi Arabia's $100 billion Project HUMAIN to the EU's €200 billion InvestAI initiative to South Korea's $75 billion full-stack sovereignty gambit to every major tech company suddenly marketing "sovereign AI solutions," the term has abruptly appeared across policy circles and corporate boardrooms. Yet ask what sovereign AI actually means and you'll get radically different answers.

This ambiguity isn't accidental; it's profitable. NVIDIA CEO Jensen Huang explicitly described sovereign AI as a "multibillion dollar vertical market" in 2024, up from zero the previous year. The company projected $10 billion in government revenue from sovereign AI investments in 2024 alone. Accenture's 2025 report sized the sovereign AI market at $1.5 trillion, with 61% of surveyed leaders now more likely to seek "sovereign technology solutions" due to geopolitical risks. When Huang tours Europe dining with presidents and princes (a June 2025 Élysée Palace dinner with Macron, meetings with the King of Denmark and Germany's chancellor), he's not just selling chips. He's selling a vision that casts data centers as a matter of national destiny.

But the term sovereign AI serves too many masters to mean anything specific. For governments, sovereignty justifies industrial policy and public investment. For corporations, sovereign AI opens lucrative public sector contracts. Tech giants use the phrase to sell the same infrastructure at premium prices, just with sovereign branding. Consulting firms such as Accenture use it to justify billion-dollar implementations that orchestrate the dance between global providers and national clients. This vagueness ensures that everyone can buy sovereignty from the very powers they seek independence from, purchasing what looks like autonomy while being tied into ever deeper dependencies. As Accenture's 2025 report claims, only one-third of workloads actually need a sovereign approach; and, of course, they're the ones who will help you make that dream come true.

The Westphalian Trap

Most discussions of sovereign AI rest on the modern notion of political authority established at the Peace of Westphalia in 1648, after the Thirty Years' War had killed roughly one fifth of the population of the Holy Roman Empire. Sovereignty, in this tradition, is defined as "supreme authority within a territory"; the state is the political institution in which sovereignty is embedded, and an assemblage of such states forms a sovereign state system. The Westphalian settlement ended the medieval confusion of overlapping jurisdictions in which popes, emperors, and feudal lords all claimed power over the same populations.

The turn towards Westphalian sovereignty assumes that states are the natural units of sovereignty, that authority must be territorially bounded, and that whoever controls the territory controls the technology within it. This assumption runs so deep that when Audrey Herblin-Stoop, Vice President of France's Mistral AI, told Bloomberg in October 2025 that she was "less enthusiastic for the term 'sovereignty' itself," defining it instead as "independence, control over data and technology, and the crucial element of choice," she was working within the same territorial framework. France's AI sovereignty means French territorial control, even if the infrastructure, chips, and software all come from elsewhere.

Westphalian sovereignty isn't the only option available. Decades before Westphalia cemented state sovereignty as the dominant model, the political philosopher Johannes Althusius developed a radically different theory of sovereignty in his 1603 work Politica. Althusius argued that sovereignty was not indivisible and absolute, located in a single monarch or state, but nested and federal: a series of vertically arranged associations, from families to guilds to cities to provinces, each retaining sovereignty within its own sphere. These entities were interlocking but detachable, forming larger units by consent while maintaining their own authority. Sovereignty resided in the base units, in the people organized into communities, not in some supreme power above them.

Contrast this with Jean Bodin's theory from his 1576 Six Books of the Commonwealth, which heavily influenced the Westphalian settlement. Bodin insisted sovereignty must be absolute, indivisible, and perpetual, located in a single sovereign power (the monarch, the state) that recognizes no equal. For Bodin, divided sovereignty was a contradiction in terms. You either possess supreme authority or you don't. The Westphalian system adopted this view, creating a world of territorially bounded states, each claiming absolute authority within its borders.

Modern discussions around sovereign AI seem to adopt the Bodinian framework: we ask which entity (state, corporation, international body) should possess supreme authority over AI. We assume someone must be sovereign, that sovereignty cannot be divided or nested, that control must be exclusive and absolute. But what if the reality of AI systems better resembles Althusius's nested federalism than Bodin's territorial absolutism? What if AI sovereignty, rather than being indivisible, is actually distributed across multiple layers and actors, none of whom possess supreme authority but all of whom exercise real power within their domains?

Sovereignty at the Technical Level

Consider how a nation might achieve true AI sovereignty in the Bodinian sense: supreme and exclusive authority over AI systems. The state would need control at every layer of the tech stack, the series of dependencies that make AI systems function. This isn't a metaphor; it's the technical reality that makes claims of AI sovereignty so often illusory.

At the bottom sits the hardware layer. Training advanced AI models requires specialized chips, primarily GPUs and increasingly specialized AI accelerators. NVIDIA controls roughly 86% of the AI chip market. When Germany announced "OpenAI for Germany" in September 2025, running OpenAI models on Microsoft Azure via SAP's Delos Cloud, it was claiming sovereignty while depending entirely on NVIDIA's chips. The chips themselves are manufactured on extreme ultraviolet lithography machines from ASML (the only company in the world that makes them), fabricated by TSMC in Taiwan or Samsung in South Korea, and subject to U.S. export controls that can cut off supply with a single policy change. You cannot train frontier AI models without these chips. There is no sovereignty at the hardware layer for any nation except potentially the United States, and even that depends on ASML and TSMC.

Above the hardware sits the infrastructure layer: data centers, power systems, cooling, networking. This is where France's €30-50 billion partnership with the UAE to build Europe's largest AI data center becomes relevant. You can own the physical infrastructure, the buildings and power systems, but you're still running on someone else's chips, someone else's software, someone else's cloud management systems. The UAE's Stargate project with OpenAI, NVIDIA, and Oracle, a $20 billion investment in AI infrastructure, physically hosts American supercomputers in Abu Dhabi. That's not sovereignty; that's being the landlord for someone else's technology.

The model layer is where things get more interesting and more complex. This is where the AI model itself lives: the architecture, the training data, the learned weights, the algorithms that generate responses. Training a frontier AI model from scratch currently costs between $100 million and $1 billion, requires thousands of specialized GPUs running for months, demands enormous datasets, and needs rare expertise. China's DeepSeek proved in January 2025 that you could train a competitive model more efficiently than Western companies claimed possible, achieving strong performance at a fraction of the reported cost. The news wiped $589 billion off NVIDIA's market capitalization in a single day, the largest one-day loss in stock market history, because it suggested that the moat around AI model development might be narrower than investors believed.

But DeepSeek's technical achievement came with profound constraints. The models refuse queries about Tiananmen Square, Uyghur treatment, or Xi Jinping's policies. When asked about Taiwan, DeepSeek responds with CCP talking points. This is what sovereignty at the model layer actually looks like when achieved: technical independence purchased through ideological control, a model that serves state interests rather than user interests. When examined through the two dimensions that the Stanford Encyclopedia of Philosophy identifies as characteristic of sovereignty (internal and external), DeepSeek represents internal sovereignty: China exercises supreme authority over AI development within its borders. Yet China's external sovereignty remains constrained; Chinese AI development still depends on smuggled or stockpiled NVIDIA chips subject to U.S. export controls. The encyclopedia notes that external sovereignty establishes "constitutional independence, a state's freedom from outside influence upon its basic prerogatives." U.S. chip controls demonstrate that China lacks this constitutional independence in AI. China has model sovereignty, and what that means in practice is that Chinese citizens get an AI that won't discuss their own history honestly, while the country remains dependent on American hardware.

For most nations, training frontier models remains practically impossible. South Korea's Ha Jung-woo, appointed as the nation's first Presidential Chief AI Adviser in June 2025, defined sovereignty as "liberation from technological dependency," achieving AI systems that "deeply understand and align with nation's own language, culture, laws, and social values." Yet South Korea, despite Samsung semiconductors, a strong domestic industry, and massive investment, trains models on NVIDIA GPUs and develops data centers with AWS. Even a country with all of these advantages cannot escape dependency on foreign chips and infrastructure. Ha's doctrine acknowledges the reality that sovereignty isn't isolation but "not allowing technological dependency to threaten national interest." But defining sovereignty as choosing which dependencies to accept already abandons the Bodinian ideal of absolute, supreme authority.

Above the model layer sits the application layer: the APIs, interfaces, and services that let people actually use AI. This is where most of what gets called "sovereign AI" actually operates. When Germany announces OpenAI for Germany, it means German government agencies can access OpenAI's models through German intermediaries, with data processed in German data centers. But these are still OpenAI's models running on Microsoft's cloud infrastructure, and all three companies remain subject to U.S. laws, including the CLOUD Act, which allows the American government access to data regardless of where it is physically stored. Sovereignty at the application layer is symbolic when every layer beneath it remains foreign-controlled.

Finally, there's the data layer, which cuts across all the others: the training datasets that teach models, the user data that fine-tunes them, the interaction logs that improve them. This is where control becomes even more distributed. AI models learn from data scraped from the internet (much of it American platforms), licensed from publishers (often American companies), generated by users globally, and refined through feedback from human labelers (frequently workers in the Global South earning poverty wages). Who is sovereign over this data? The platforms that host it? The users who create it? The companies that collect it? The laborers who label it? The nations whose citizens generate it?

The technical reality is that AI sovereignty doesn't exist at any single layer. Even if a state or company could legally claim sovereignty over AI, it would still face what Friedrich Hayek called the knowledge problem: no central authority can ever possess the dispersed, local knowledge needed to govern complex systems well. In AI, that knowledge, and the control that comes with it, is distributed, contested, and asymmetric. NVIDIA controls chips. TSMC and Samsung control fabrication. The U.S. controls export policy. Cloud providers control infrastructure. Model developers control architectures and weights. Application providers control interfaces. Users generate data. Workers label it. States regulate it (sometimes). Every actor in this chain exercises real power, but none possesses supreme authority. This is Althusian federalism arrived at by market dynamics rather than design: overlapping spheres of authority, nested dependencies, with no single sovereign at the top.

The Corporate-State Dance

What makes the sovereignty discourse so slippery is that both states and corporations benefit from the ambiguity while neither can achieve actual supremacy. Consider France's approach. President Macron hosted Jensen Huang at the Élysée Palace in June 2025 to announce NVIDIA partnerships with French AI companies. France simultaneously invested €2.2 billion in its national AI strategy, partnered with the UAE on that massive data center project, and saw Mistral (its national AI champion) receive over €1 billion in funding in 2024. Is this sovereignty through independence, or sovereignty through strategic dependency on Emirati capital and American chips?

The answer is both and neither. France gains something real: physical infrastructure on European soil, a domestic AI company that can deploy models, regulatory leverage over how AI operates within French borders, and political cover for massive public investment in technology. But France doesn't gain sovereignty in any meaningful technical sense. The chips are American. The cloud infrastructure relies on American software. The funding comes partly from Gulf states with their own interests. The technical expertise is globally distributed. France has negotiated a better position within a system of dependencies, not escaped those dependencies.

Germany's approach reveals the same dynamic. SAP CEO Christian Klein described it as the partnership model: "We will not achieve digital sovereignty by isolating ourselves, but by bringing the best technologies to Europe together with strong partners, while retaining control over our data." The OpenAI for Germany initiative means German government workers use OpenAI's models, but through German intermediaries, with data stored in German data centers. The messaging is "built in Germany, for Germany." The reality is a U.S. company (OpenAI) running on U.S. cloud (Azure) via a German subsidiary (SAP), with all three companies subject to U.S. jurisdiction. The sovereignty claim is a legal and political construction, not a technical one.

Meanwhile, corporations like NVIDIA, OpenAI, Microsoft, and Oracle sell sovereignty to multiple clients simultaneously, each receiving "sovereign" solutions that depend on the same underlying infrastructure. These companies don't control territory, but they control the technological stack that states depend on. They can't make laws, but they shape what's technically possible and economically viable. They can't directly coerce, but they can decide who gets access to chips, models, and platforms.

The power dynamics play out through lobbying and regulatory capture. In 2024, 648 companies lobbied the federal government on AI, up 41% from 458 in 2023. OpenAI's spending increased nearly sevenfold to $1.76 million. The eight largest tech and AI companies combined spent $36 million in just the first half of 2025, roughly $320,000 for every day Congress was in session. When California attempted state-level AI safety requirements with SB 1047, OpenAI joined Meta, Google, Amazon, and Microsoft in a lobbying campaign that convinced Governor Newsom to veto the bill in September 2024.

This creates a system where states claim territorial sovereignty while depending on corporate infrastructure, and corporations claim to merely provide services while shaping the legal and technical environment in which they operate. Neither is truly sovereign; both exercise real power. The result resembles Althusius's nested federalism more than Bodin's absolute sovereignty, except it happened accidentally through market dynamics and political negotiation rather than through conscious constitutional design. States regulate within their territories. Corporations control across territories. Users generate value. Workers enable training. Each actor has a sphere of power, none has supremacy, and the system lurches forward through constant negotiation and conflict.

The Missing Sovereign

Throughout all this maneuvering between states and corporations, one actor barely appears: the individual. In classical political thought, legitimate authority comes from the consent of the governed; in AI, almost no one affected by these systems has consented to their rules or objectives. Popular sovereignty has almost no presence in AI sovereignty debates. We argue about which state controls AI or which corporation provides it, but we rarely ask whether individuals have any say in systems that increasingly govern their lives.

Consider what individual sovereignty over AI would actually require. At minimum, you would need meaningful choice about which AI systems to use, genuine understanding of how they work, ability to exit systems that harm you, and recourse when automated decisions affect your life. You would need AI systems that serve your interests rather than extracting value from your data and attention. You would need some voice in how these systems develop and deploy.

The current reality offers none of this. AI systems make decisions about employment, creditworthiness, insurance eligibility, criminal justice, healthcare, and education with minimal transparency or accountability. You typically cannot know why an automated system denied your loan application or flagged your resume. You often cannot opt out of algorithmic decision-making without opting out of essential services entirely. When AI systems make mistakes, you have limited recourse. When they optimize for advertiser revenue or platform engagement rather than user wellbeing, you have no remedy.

Philosophers would call this a loss of autonomy: people live under systems whose decisions shape their lives without their understanding or meaningful say. It also fits the older republican fear of domination, where freedom means not living at the mercy of another's arbitrary power.

The medieval analogy helps clarify the dynamic. Peasants in feudal Europe weren't sovereign because survival required participating in systems beyond their control. They needed lords for military protection, the church for salvation, guilds for market access. When those powers claimed to serve peasant interests ("we protect you," "we save your soul," "we guarantee quality"), the claims weren't entirely false, but they obscured the fundamental power relationship. Peasants were subjects whose welfare depended on rulers' benevolence, not sovereigns with authority over their own lives.

Today's AI users occupy a similar position. We depend on systems we don't control, built by companies and governments that claim to serve our interests while pursuing their own goals. When Google says its AI helps you find information, when Facebook says its algorithms connect you with friends, when banks say automated systems make fair lending decisions, these claims aren't entirely false. But they obscure that these systems primarily optimize for corporate profit or state control, with user welfare as a secondary consideration at best.

Beyond False Choices

The question that opened this essay, what "sovereign AI" actually means, has an uncomfortable answer: it means whatever serves the interests of whoever is selling it. For NVIDIA, it means selling chips to everyone. For governments, it means justifying massive public spending. For corporations, it means premium prices for rebranded services. For consultants, it means lucrative implementation contracts. The ambiguity isn't confusion; it's the business model.

But the deeper problem isn't the term's vagueness. It's that we're asking the wrong question. By framing the debate as "who should control AI" (states or corporations), we're replaying the Bodinian assumption that sovereignty must be absolute and located in a single power. We're having territorial arguments about borderless technology, applying Westphalian frameworks to systems that inherently transcend jurisdiction.

The Althusian insight, developed before Westphalia crystallized state sovereignty as the dominant model, offers a different approach. What if we stopped trying to locate supreme authority over AI in a single sovereign and instead recognized that AI systems necessarily involve nested, overlapping spheres of authority? States can regulate deployment within their territories. Corporations can build and operate systems. Users can generate and withhold data. Workers can refuse to label. Civil society can set norms. Each actor has real power, but none has or should have absolute authority.

This doesn't mean accepting the current mess, where power is distributed by accident and market dynamics rather than design. It means consciously building systems where authority is contestable, where base units (individuals and communities) retain meaningful power, where larger entities (corporations and states) exercise authority by consent rather than imposition. This is close to what later thinkers would call subsidiarity: handle decisions at the lowest level competent to make them, and let larger structures step in only when necessary. None of this means every person has to become an AI expert; it means the default architecture of AI governance should build individual choice, exit, and recourse into the system, rather than assuming a single sovereign will decide for everyone. It means treating the stack not as layers to be monopolized but as nested federations where power at each layer remains accountable to those below.

The technical reality of AI systems already resembles Althusian federalism more than Bodinian sovereignty. Dependencies cascade through layers. Multiple actors exercise real but limited power. No single entity controls the whole system. The question isn't whether to accept this distributed reality, but how to structure it so it serves human flourishing rather than extracting value for concentrated powers.

Until we stop asking how to achieve AI sovereignty (in the singular, absolute sense) and start asking what governance structures protect human agency, constrain concentrated power, and distribute authority accountably, we'll keep buying independence packaged as dependency. We'll keep letting states and corporations negotiate over control while individuals remain subjects rather than sovereigns. We'll keep applying 17th-century territorial concepts to 21st-century technological realities, mistaking the map of sovereignty for the terrain of actual power.

The choice isn't between state sovereignty and corporate sovereignty over AI. The choice is between accepting an absolute sovereign (whether state or corporate) or building nested, federated structures where authority remains contestable and accountable all the way down. If sovereignty means anything worth keeping in the age of AI, it should mean that individuals and communities retain real control over the technologies that govern their lives, and that no higher power can treat them as raw material. We've been asking who should control AI. The better question is whether anyone should have absolute control at all.