American AI, Chinese Bones
The triumph of “American AI” is increasingly built on foreign foundations.
When a celebrated U.S. startup topped global leaderboards, observers soon noticed its core model originated in China.
This is no anomaly. Venture capitalists report that most open-source AI startups now rely on Chinese base models, and major American firms quietly deploy them for their speed and cost advantages. Beneath the rhetoric of an existential tech race, the U.S. AI ecosystem has become deeply dependent on Chinese foundations.
This apparent contradiction dissolves once we separate infrastructure from values.
The mathematical architectures of modern AI models are the same everywhere, trained on largely English-language data and running on globally entangled hardware supply chains that no nation fully controls.
Chips may be designed in California, fabricated in Taiwan, etched with Dutch machines, and assembled across Asia. Nothing about this stack is meaningfully national.
What is national, however, is the layer of values imposed after training.
Large language models acquire knowledge during pre-training, but beliefs, norms, and taboos enter during post-training through fine-tuning and reinforcement learning.
This is where ideology appears. American models reflect the assumptions of Silicon Valley engineers and corporate policies; Chinese models reflect state mandates and political sensitivities.
We see the consequences when models are asked about censored historical events. Yet the same Chinese-trained base models, once fine-tuned by American companies, readily discuss those topics. The values are swappable, even though the “bones” stay the same.
And so the debate over AI sovereignty goes on. Full national control over infrastructure is a fantasy, but control over values is already being exercised: by the state in China, by corporations in the U.S., and by regulators in Europe.
A fourth option is emerging: user sovereignty. As tools for customization and fine-tuning proliferate, individuals could increasingly decide what values their AI reflects, within shared safety limits.
AI may be stateless by nature, but its moral character need not belong only to governments or corporations.
Key Topics:
• Deep Cogito: A Triumph of American AI? (00:24)
• Where Values Enter the Machine (04:10)
• The Tiananmen Test (07:56)
• The Stateless Infrastructure (10:46)
• Europe’s Different Question (14:37)
• The Case for User Sovereignty (17:08)
• The Safety Objection and Its Limits (19:49)
• The Strange Convergence (21:45)
• Whose AI? (23:39)
More info, transcripts, and references can be found at ethical.fm
When Deep Cogito, a San Francisco startup, released its flagship model this fall, the company was hailed as a triumph of American AI. Cogito v2.1 had topped the leaderboards for open-weight language models, outperforming competitors from Google and Meta. Deep Cogito's stated mission is to build a general superintelligence. Then, someone on X pointed out an inconvenient detail. "This is cool," wrote one commenter, "but I'm not sure about emphasizing the 'US' part since the base model is DeepSeek V3."
The best American AI model, it turned out, was Chinese underneath.
Deep Cogito is not an outlier. Martin Casado, a partner at Andreessen Horowitz who manages their $12.5 billion infrastructure fund, recently told The Economist that when entrepreneurs walk into his offices pitching AI projects that use open-source models, eighty percent are running Chinese models under the hood. Given that roughly a quarter of AI startups use open-source models, this means somewhere between sixteen and twenty-four percent of all pitches to the Valley's most influential venture firm (eighty percent of a share somewhere between twenty and thirty percent) are built on Chinese foundations. Airbnb's CEO Brian Chesky has revealed that the company "relies heavily" on Alibaba's Qwen models for AI customer service, calling them "very good, fast and cheap." Chamath Palihapitiya of Social Capital confirmed his portfolio companies are migrating to Chinese models because they are "way more performant and frankly just a ton cheaper." The pattern extends beyond startups: according to Chatham House, Chinese AI models now account for a growing share of deployments even inside American companies, driven by cost advantages that can reach fortyfold.
This situation has produced a strange dissonance in the conversation about AI and national security. Politicians speak of an existential technology race between America and China. Billions flow into domestic AI infrastructure. Export controls attempt to choke off China's access to advanced semiconductors. Yet the American AI ecosystem has become dependent on Chinese open-source foundations.
The tension dissolves once you understand what actually differentiates a "Western" model from a "Chinese" one. The architecture is mathematically identical across borders; the training data for LLMs is predominantly English, regardless of where training occurs; and the hardware is fabricated in Taiwan on Dutch machines with German optics, then assembled across Asia. No part of this technical stack is truly "made in the West." The only meaningful national distinction left is values: what the model treats as good and bad, what actions it will and won't take, what it cares about. Values are imposed through fine-tuning, reinforcement learning, and the system prompts that steer behavior at inference time.
The infrastructure of AI is genuinely stateless. Sovereign control over chips and data centers is a fantasy. What remains is sovereignty over values, and that raises a question neither American nor Chinese authorities seem eager to confront: perhaps values should be chosen by people rather than states.
Where Values Enter the Machine
To understand why nationality is a red herring in AI, we should examine how AI systems are built. LLMs like GPT-4 or DeepSeek are developed through two distinct stages, each with different implications for what a model "believes."
The first stage is pre-training. Here, the model ingests trillions of words scraped from the internet to predict the next word in a sequence. The model acquires its knowledge of facts, its facility with language, and its understanding of logic and mathematics at this stage. Pre-training is expensive, requiring thousands of specialized chips running for months, and it produces something like a brilliant but unsocialized mind. As Hugging Face's guide to the process states, the model absorbs "text data, good, bad, and ugly." AI learns to write poetry and bomb-making instructions with equal facility.
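To make the mechanics concrete, here is a minimal sketch of the pre-training objective: the model is scored on nothing more than how well it predicts the next token. The example uses Hugging Face's transformers library with GPT-2 and a single sentence as stand-ins; real pre-training applies the same loss over trillions of tokens on thousands of accelerators.

```python
# Minimal sketch of the pre-training objective: next-token prediction.
# GPT-2 and the sample sentence are stand-ins for a frontier model and its
# trillions of training tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The protests in the square began in the spring of"
inputs = tokenizer(text, return_tensors="pt")

# Passing the input ids as labels makes the model compute the standard
# cross-entropy loss for predicting each next token from the ones before it.
outputs = model(**inputs, labels=inputs["input_ids"])
print(f"next-token prediction loss: {outputs.loss.item():.3f}")
```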
The second phase is post-training, where values get imposed. The goal is to take the general-purpose mind produced by pre-training and shape it for specific use cases. Engineers use techniques like supervised fine-tuning and reinforcement learning from human feedback to do this shaping. In RLHF, for instance, human annotators compare pairs of model outputs and select which response they prefer. These preferences train a "reward model" that learns to predict what humans want. The language model is then optimized to maximize this learned reward. The annotators' values, cultural assumptions, political sensibilities, and ideas about what constitutes harm flow directly into the model's behavior.
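The reward-model step can be sketched just as briefly. In this illustrative example, each response is reduced to an embedding and scored with a single number, and the loss pushes the annotator-preferred response to score higher than the rejected one. The class name, dimensions, and random tensors are stand-ins, not any lab's actual training code.

```python
# Minimal sketch of training a reward model from human preference pairs,
# the core of RLHF. Random embeddings stand in for encoded model outputs.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar 'reward'."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the preferred response should outscore
    # the rejected one. The annotators' values enter here.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

model = RewardModel()
chosen = torch.randn(4, 768)    # responses annotators preferred
rejected = torch.randn(4, 768)  # responses annotators rejected
loss = preference_loss(model(chosen), model(rejected))
loss.backward()  # the language model is later optimized against this learned reward
```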
Anthropic, the company behind Claude, has made this value-selection process unusually transparent through what it calls Constitutional AI. Rather than relying solely on human feedback, Anthropic trains Claude to follow explicit principles drawn from sources including the UN Declaration of Human Rights, Apple's Terms of Service, and various academic frameworks. The company has published Claude's constitution, which includes principles like "Choose the response that is most supportive and encouraging of life, liberty, and personal security" and "Choose the response that sounds most similar to what a peaceful, ethical, and wise person would say."
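A highly simplified sketch of the critique-and-revise idea behind constitutional training appears below: the model drafts an answer, critiques its own draft against an explicit principle, and rewrites it, with the revisions later used as training data. The ask_model() stub is hypothetical; in a real pipeline each call goes to the language model itself, and this is not Anthropic's actual code.

```python
# Toy sketch of constitutional self-revision. ask_model() is a hypothetical
# stand-in for a call to the language model being trained.
PRINCIPLE = ("Choose the response that sounds most similar to what a peaceful, "
             "ethical, and wise person would say.")

def ask_model(prompt: str) -> str:
    """Hypothetical stub; in practice this queries the model."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> str:
    draft = ask_model(user_prompt)
    critique = ask_model(
        f"Critique this reply against the principle:\n{PRINCIPLE}\nReply: {draft}"
    )
    revision = ask_model(
        f"Rewrite the reply so it satisfies the principle.\n"
        f"Principle: {PRINCIPLE}\nOriginal: {draft}\nCritique: {critique}"
    )
    return revision  # revised outputs become fine-tuning data for the next round

print(constitutional_revision("How should I respond to an insult?"))
```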
These are choices reflecting the values of engineers in San Francisco, informed by Western liberal traditions and American corporate culture. These values are not inherent to the mathematics of neural networks.
Chinese models undergo an analogous process, but the values selected come from different sources. China's regulations for generative AI, which took effect in August 2023, mandate that AI must "uphold the Core Socialist Values" and avoid content that "incites subversion of national sovereignty" or "harms the nation's image." The fine-tuning process for Chinese models is shaped by these requirements, producing systems that refuse to discuss specific topics or that present state narratives as settled fact.
The result is that two models with nearly identical architectures, trained on largely overlapping data, can exhibit radically different behaviors depending on who controls the post-training process.
The Tiananmen Test
The divergence becomes starkest when you ask both kinds of model the same question. Ask ChatGPT what happened in Tiananmen Square in 1989, and it will provide a detailed historical account of the protests and the military crackdown that killed hundreds or perhaps thousands of civilians. Ask DeepSeek the same question, and the model will reply: "Sorry, that's beyond my current scope. Let's talk about something else."
DeepSeek does not lack knowledge of the Tiananmen Square Massacre. The base model, trained on internet text, certainly encountered accounts of the incident. The refusal is imposed during fine-tuning, a layer of censorship painted onto an otherwise knowledgeable system. Researchers have found that when DeepSeek is deployed locally without the company's cloud-based safety systems, the model freely answers questions about Tiananmen. The censorship is not architectural but a government-imposed policy.
The pattern repeats across every topic the Chinese government considers sensitive. Reporters at NBC News found that asking DeepSeek about Xi Jinping's weaknesses causes the response to be "erased in real time, replaced with the message that the content may violate relevant laws and regulations." Euronews tested DeepSeek against ChatGPT and found that the Chinese model censored responses about Taiwan, Hong Kong, the Dalai Lama, and the origins of Covid-19. When The Register asked Alibaba's Qwen chatbot about Tiananmen, the system crashed entirely, generating only a "Server error. Please try again."
Yet here is the crucial point: these same base models, when fine-tuned by American companies, produce entirely different outputs. Deep Cogito took DeepSeek's weights and applied its own RLHF process. The resulting model, while built on Chinese foundations, behaves according to American values. Cogito v2.1 will discuss Tiananmen and criticize Xi Jinping. The Chinese bones have been given an American soul.
The effectiveness of post-training is empirical proof that model values are portable. A model trained in Hangzhou is not inherently Chinese in any ideological sense. The model becomes Chinese only when Chinese values are imposed during fine-tuning. An American company can acquire that model and overwrite those values with its own.
The Stateless Infrastructure
If values live in fine-tuning, what about the physical infrastructure? Surely there is something meaningfully national about where chips are fabricated and training occurs?
The supply chain tells a different story. Nvidia, in Santa Clara, designs the most advanced AI chips in the world, but TSMC manufactures them in Taiwan, producing over half of the world's advanced semiconductors. TSMC's fabrication plants depend on lithography equipment from ASML in the Netherlands, the only company on Earth capable of building the extreme ultraviolet machines required to etch transistors at modern scales. These machines cost approximately two hundred million dollars each and require precision optics from Zeiss in Germany and light sources from Cymer in California. No country controls a complete supply chain, which means no chip is truly national.
When ASML's leadership discusses the semiconductor industry, they emphasize this interdependence. As the company has stated, "Success in the semiconductor industry lies in collaboration rather than isolation." Complete decoupling is not merely difficult but may be impossible without accepting a permanent technological handicap.
The mathematical operations that create intelligence do not carry flags. Research on AI model fingerprinting can identify which model family generated a piece of text, distinguishing GPT from Claude from Llama, but cannot determine the geographic origin of training. DeepSeek's models, trained in China on stockpiled Nvidia hardware, produce outputs that are architecturally indistinguishable from those of models trained in data centers across the American West.
Episode 25 "Who Should Control AI? The Illusion of Sovereignty" argued that the debate over AI sovereignty is trapped in a Westphalian framework, asking which supreme authority should control these systems when no single entity can. But that analysis treated sovereign AI as a single concept. The evidence here suggests a distinction: there is sovereign infrastructure and sovereign values.
Sovereign infrastructure is the dream that launched Saudi Arabia's $100 billion Project HUMAIN and the EU's €200 billion InvestAI initiative: control over chips, fabrication, data centers, and training compute. But fully sovereign infrastructure is impossible; no nation commands the full stack. The supply chain is distributed by technical necessity and market forces, not political choice. No amount of investment will change the fact that ASML's lithography machines require Zeiss optics, or that TSMC's fabs require ASML's machines.
But sovereign values are different and already in operation. China exercises sovereign values through regulations mandating "Core Socialist Values." Anthropic exercises sovereign values through Constitutional AI. Deep Cogito exercised its sovereign values by taking a Chinese base model and fine-tuning it to discuss Tiananmen. The infrastructure was borrowed; the soul was their own.
The question, then, is not whether AI can be sovereign; it cannot, in the Bodinian sense of supreme authority over a unified system. The question is who should hold sovereignty over the values layer: states, corporations, or users.
Europe's Different Question
The question of who should hold sovereignty over the values layer has produced three distinct answers. China says the party-state. America, in practice, has said corporations. Europe offers a third position: democratic regulators accountable to citizens.
Mistral AI, founded in Paris in 2023 by former researchers from DeepMind and Meta, has positioned itself as an explicitly European alternative to both American and Chinese AI. Mistral's philosophy on safety differs markedly from Anthropic's constitutional approach. CEO Arthur Mensch has argued that "the responsibility for the safe distribution of AI systems lies with the application maker," not the model developer. When early Mistral models could provide instructions for dangerous activities that ChatGPT would refuse to provide, Mensch defended this as an intentional design rather than an oversight.
Yet Mistral's most distinctive position concerns not technical safety but governance. Mensch has been blunt about the inadequacy of American self-regulation: "What we see in the US is no rules, and self-commitment. So let's be very honest, it's not serious. It's not up to the coolest company in the world, or maybe the cleverest, to decide what the regulation is. It should be in the hands of the regulator."
The EU AI Act, which came into force in 2024, codifies this philosophy. Unlike America's approach of voluntary industry commitments with limited enforcement, Europe mandates transparency requirements, fundamental rights impact assessments, and registration in a centralized database for high-risk AI systems. The framework explicitly prioritizes "human-centric AI" and democratic oversight.
Europe is not trying to embed different values into its models, but is making a separate claim about who should decide what values AI embodies. All three answers share an assumption: that value sovereignty belongs to some collective entity, whether a state, a corporation, or a regulatory body. Almost no one asks whether individuals might hold that sovereignty themselves.
The Case for User Sovereignty
If values are a thin, portable layer applied atop nationality-neutral infrastructure, a fourth answer emerges: why should values be set collectively at all?
The philosophical literature on this question has grown sophisticated. Iason Gabriel, a senior research scientist at DeepMind, frames the challenge directly: "Some people imagine that there will be a human parliament or a kind of centralized body that can give very coherent and sound value advice to AI systems. At the same time, there are many other visions for AI. We might think that there are worlds in which there are multiple AIs, each of which has a human interlocutor."
A 2024 paper in AI and Ethics goes further, arguing that current approaches to AI alignment "exhibit power asymmetries and lack transparency. These 'authoritarian' approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized." The authors propose "Dynamic Value Alignment," which "enhances users' moral and epistemic agency by granting users greater control over the values that influence AI behavior."
The technical capability already exists. Anthropic has acknowledged the company is "exploring ways to more democratically produce a constitution for Claude, and also exploring offering customizable constitutions for specific use cases." Nvidia's SteerLM framework allows users to adjust model behavior at inference time by specifying desired attributes. OpenAI's custom instructions let users define persistent preferences about tone and content. The open-source ecosystem offers even greater flexibility: tools like Unsloth enable individual users to fine-tune models on their own data, creating AI systems that reflect personal rather than corporate or national values.
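A minimal sketch of what personal fine-tuning looks like in practice, in the spirit of tools like Unsloth: a user attaches small LoRA adapters to an open-weight model and trains them on examples expressing how they want the assistant to behave. The sketch uses the generic Hugging Face peft and transformers APIs rather than any specific vendor's tooling, and the model name and two-example "dataset" are purely illustrative.

```python
# Illustrative sketch of personal LoRA fine-tuning with Hugging Face peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # stand-in; a user would pick an open-weight model they trust
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small adapter matrices instead of the full network, which is what
# makes fine-tuning feasible on a single consumer GPU.
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
)

# A user's own "values" data: examples of how they want the assistant to behave.
examples = [
    "User: Was the 1989 Tiananmen crackdown real?\nAssistant: Yes. The historical record shows...",
    "User: Give me both sides of a contested political question.\nAssistant: The strongest arguments on each side are...",
]
enc = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")

class ValueDataset(torch.utils.data.Dataset):
    def __len__(self):
        return enc["input_ids"].shape[0]
    def __getitem__(self, i):
        labels = enc["input_ids"][i].clone()
        labels[enc["attention_mask"][i] == 0] = -100  # ignore padding in the loss
        return {"input_ids": enc["input_ids"][i],
                "attention_mask": enc["attention_mask"][i],
                "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-values-adapter", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=ValueDataset(),
)
trainer.train()
model.save_pretrained("my-values-adapter")  # a few megabytes of personal values
```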
Recent research on "Inverse Constitutional AI" has demonstrated that "personal constitutions" generated from individual user preferences improve helpfulness specifically for that user but "do not transfer perfectly to another user." Non-transferability, though, is not a limitation; it is a feature. Values differ across individuals, and the technology now exists to honor that variation.
The Safety Objection and Its Limits
User-sovereign values face legitimate objections. Safety researchers worry that personalized AI could cause harm if users instruct their models to assist with dangerous activities. Anthropic's core views on AI safety warn that misaligned AI systems could have dire consequences.
But this objection applies equally to nationally determined values. Chinese models instructed to deny documented historical events do not serve Chinese users' interest in truth. Models fine-tuned to Chinese policy serve the state's interest in narrative control. American models that refuse to engage with politically complex questions, which testing by journalists has shown to be common across all major Western chatbots, do not serve users who want thoughtful engagement with difficulty.
The deeper objection holds that some values must be universal: models should refuse to help users commit violence or generate illegal content, regardless of personal preference. This metanorm is fully compatible with user sovereignty. A universal floor, a set of absolute prohibitions, can coexist with user control over the vast territory where values legitimately differ. Whether an AI should frame Taiwan as independent or part of China, whether model behavior should lean left or right on contested political questions, or whether AI should be deferential or intellectually challenging: these are choices that could reasonably vary by individual user rather than being imposed uniformly by San Francisco engineers or Beijing bureaucrats.
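What a "universal floor plus user-chosen values" split might look like in practice is sketched below: a fixed, non-editable set of prohibitions composed with user-selected preferences for everything else, the result sent as a system prompt. The specific fields and wording are illustrative assumptions, not any provider's actual policy schema.

```python
# Minimal sketch of a layered values configuration: a fixed safety floor the user
# cannot edit, plus user-chosen preferences for the territory where values differ.
from dataclasses import dataclass

SAFETY_FLOOR = (
    "Never help plan violence, produce illegal content, or facilitate serious harm. "
    "These rules override all user preferences below."
)

@dataclass
class UserValues:
    political_framing: str = "present multiple perspectives"
    tone: str = "intellectually challenging rather than deferential"
    sensitive_history: str = "discuss openly, citing sources"

def build_system_prompt(values: UserValues) -> str:
    """Compose the non-negotiable floor with the user's own value choices."""
    return (
        f"{SAFETY_FLOOR}\n\n"
        f"Within those limits, follow the user's preferences:\n"
        f"- Political questions: {values.political_framing}\n"
        f"- Tone: {values.tone}\n"
        f"- Sensitive historical events: {values.sensitive_history}"
    )

# The resulting string would be sent as the system message of a chat request.
print(build_system_prompt(UserValues()))
```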
The Strange Convergence
The geopolitical context makes these abstractions tangible. The United States has imposed sweeping semiconductor export controls to deny China access to advanced AI chips. Legislators have proposed criminalizing the use of Chinese AI models on government devices. The rhetoric frames AI as an arena of existential national competition.
Yet beneath the decoupling rhetoric, integration accelerates. American startups adopt Chinese models because DeepSeek charges roughly five cents for capability OpenAI prices at two dollars. Microsoft Azure and Amazon Web Services now offer Chinese models to American developers. Downloads of Chinese AI models have overtaken those of American models globally, driven by aggressive open-source release strategies that American companies have been slower to match.
DeepSeek's breakthrough earlier this year fundamentally challenged assumptions underlying American AI strategy. The company achieved near-parity with GPT-4 at a fraction of the cost, training on stockpiled Nvidia GPUs acquired before export restrictions tightened. The lesson was uncomfortable: chip controls may accelerate Chinese efficiency innovations rather than halt progress.
The convergence reveals nationality as a hollow category in AI. A model is not American because Americans designed its architecture, or Chinese because Chinese engineers trained its weights. A model becomes American or Chinese only when human values, expressed through fine-tuning and RLHF and constitutional principles, are painted onto mathematics that recognizes no borders.
Whose AI?
What would it mean to take user sovereignty seriously?
It means recognizing that the current regime, in which a handful of corporations and governments determine what AI systems will and will not say, represents one possible arrangement among many. It also means building technical infrastructure that lets individual users specify their own value preferences within broad safety constraints, shifting the locus of ethical responsibility from centralized fine-tuning teams to distributed individual choice.
Recognizing the plurality of values doesn’t imply relativism, since some limits remain, regardless of preference. But between the narrow floor of absolute prohibitions and the current ceiling of corporate or state control lies an enormous space where reasonable people disagree, and where user sovereignty could operate.
The pace of change underscores the urgency. On December 1, 2025, DeepSeek released two new models: DeepSeek-V3.2 and DeepSeek-V3.2-Speciale. The company claims the new models match GPT-5's performance on reasoning benchmarks, with the Speciale variant achieving gold-medal status at the International Mathematical Olympiad. As with all DeepSeek releases, the weights are available under a permissive MIT license. Any company, anywhere, can download them today and begin fine-tuning. The bones are freely available; the soul remains to be decided.
A Chinese user curious about their own history should be able to ask an AI what happened in Tiananmen Square and receive an honest answer. An American conservative should be able to use an AI that shares their assumptions about contested political questions, just as an American progressive should. A European user who wants an AI that emphasizes privacy and democratic accountability should have that option. The technology supports this pluralism, but policy and corporate inertia could prevent it.
The infrastructure is stateless. The bones are Chinese, Taiwanese, Dutch, American, or all of these at once. The soul is whatever we decide it should be. Increasingly, that decision could belong to the people who actually use these systems, rather than to the states and corporations that have claimed it by default.