In recent years, artificial intelligence (AI) has become so woven into daily life that it can feel as pervasive and unremarkable as electricity. Companies are using it to fast-track processes, navigate cities, and filter information, while people are increasingly relying on it in their own lives. But unlike electricity, where the infrastructure is at least theoretically subject to public oversight, the systems powering AI are controlled by a small number of actors. Understanding who those actors are, and whether that concentration of power can or should change, is one of the most important techno-political questions of the moment.
This article argues that the emerging landscape of AI power is defined not by a straightforward contest between concentration and decentralization, but by a condition of structured fragmentation: one in which corporate dominance, state ambitions, and decentralized alternatives coexist without any of them singlehandedly producing meaningful accountability. Decentralization, often presented as a democratic corrective to concentrated AI power, risks dissolving the very structures of responsibility it claims to redistribute. The result is a governance vacuum that neither markets, states, nor distributed networks are currently equipped to fill.
The Infrastructure Nobody Talks About
To understand AI power, it helps to think in layers. At the bottom is compute: the raw processing power needed to train and run AI models. Above that is data: the vast troves of information models learn from. And at the top is governance: who gets to make decisions about how AI is built and deployed.
At the compute layer, the disparities are stark. Only 33 countries host any public cloud AI infrastructure at all, and only 24 have the kind of processing capacity needed to train the most advanced models (Hawkins, Lehdonvirta, & Wu, 2025, 13). For the rest of the world, access to AI compute means depending on infrastructure owned and operated by foreign corporations. At the chip level, the picture is even more concentrated: NVIDIA controls somewhere between 80 and 95 percent of the market for AI accelerator chips (Hawkins et al., 2025, 7), which means that almost every country on earth is practically dependent on American semiconductor supply chains (Hawkins et al., 2025, 12). Scholars have begun calling this “compute sovereignty”: the idea that controlling your own AI infrastructure is becoming as strategically significant as controlling traditional levers of economic and political power, such as oil or military hardware (Hawkins et al., 2025, 1).
The data layer tells a similar story. A handful of corporations—Google, Apple, Meta, Amazon, and Microsoft (GAFAM)—function as what one researcher calls “data-opolies”: entities that exploit economies of scale, network effects, and the sheer volume, variety, and velocity of personal data to maintain dominance across multiple markets (Calzada, 2025, 8). The term “data extractivism” captures what this looks like in practice: the collection and commodification of personal data, typically without meaningful user consent or understanding (Calzada, 2025, 7). This matters for AI directly. When the data used to train models is controlled by a few large platforms, those models inevitably reflect the experiences, languages, and priorities embedded in that data’s provenance, leaving vast portions of humanity, particularly those from historically marginalized communities, underrepresented or misrepresented.
At the governance layer, decision-making power rests with a small number of AI companies. OpenAI, Google DeepMind, Anthropic, and Meta make decisions about model capabilities, safety guardrails, and deployment conditions that affect billions of people (Singh et al., 2024, 2). The governance crises that have erupted within these organizations, including internal revolts at OpenAI, intellectual property lawsuits, and questions about who these companies are ultimately accountable to, are symptoms of a structural problem that runs deeper than any single company’s internal politics.
Decentralization as a Response
An emerging counterweight to all this concentration is decentralization. A growing ecosystem of technologists and researchers argues that AI can and should be rebuilt on distributed foundations: open-source models, blockchain-based data governance, and community-run infrastructure. The appeal is intuitive. If no single company owns the model, and no single server farm hosts it, then no single actor can dictate who accesses it or on what terms.
Several conditions are enabling this shift. The cost of running an AI model has dropped roughly a thousandfold over the past three years, a trend that has been labeled “LLMflation” (Hu, Rong, & Tay, 2025, 3). Open-source models like Meta’s LLaMA and DeepSeek’s R1 have made powerful AI available outside of proprietary ecosystems. Edge computing is bringing AI processing onto local devices rather than distant data centers. And Decentralized Physical Infrastructure Networks (DePIN) are experimenting with a new infrastructure model that rewards individuals with tokens for contributing their own compute and storage capacity (Hu et al., 2025, 3–4).
On the data side, advocates point to blockchain-based tools, data cooperatives, and Decentralized Autonomous Organizations (DAOs), which are member-governed systems that use blockchain-based smart contracts to make collective decisions, as mechanisms for shifting control from corporations to communities, moving, as one framing has it, from “data as a commodified product to data as a shared and democratically governed asset” (Calzada, 2025, 11).
Where Decentralization Falls Short
Yet decentralization introduces structural risks of its own, which are often overlooked: concentrated governance power, fragile market dynamics, and technical limitations. DAOs have repeatedly suffered from low participation and “whale dominance,” where a small number of wealthy participants end up controlling supposedly democratic governance systems (Hu et al., 2025, 11). High-profile cryptocurrency collapses like FTX and TerraUSD, driven by loss of consumer trust, exposed how fragile crypto systems can be and how decentralized systems can re-centralize power rapidly and catastrophically (Calzada, 2025, 15). Secure computation across genuinely distributed data remains technically difficult, with existing encryption approaches imposing computational overhead that makes real-time applications such as healthcare diagnostics and financial fraud detection impractical (Singh et al., 2024, 5).
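The whale-dominance dynamic follows directly from token-weighted voting, the most common DAO governance rule. The toy sketch below (all names and balances are hypothetical, not drawn from any real DAO) shows how a single large holder can outvote an entire community of smaller members:

```python
# Toy illustration of token-weighted ("one token, one vote") DAO governance.
# All participant names and token balances are hypothetical.

balances = {"whale": 600_000}                      # one large token holder
balances.update({f"member_{i}": 1_000 for i in range(100)})  # 100 small holders

def proposal_passes(votes, balances):
    """Tally token weight for and against a proposal; majority of weight wins."""
    weight_for = sum(balances[h] for h, v in votes.items() if v == "for")
    weight_against = sum(balances[h] for h, v in votes.items() if v == "against")
    return weight_for > weight_against

# Every small holder votes against; the whale alone votes for.
votes = {h: "against" for h in balances if h != "whale"}
votes["whale"] = "for"

print(proposal_passes(votes, balances))  # → True: 600,000 outweighs 100 × 1,000
```

Under this rule, formal participation rates are irrelevant: the outcome is decided by token distribution, which is exactly the re-concentration of power the cited studies describe.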
There is also a safety dimension worth considering. The field of AI safety is already struggling to keep pace with centralized development. Decentralization could make this harder. Once an open-source model is released, it can be modified freely by anyone, including for purposes that bear no resemblance to the original developers’ intentions. Researchers warn of “sleeper agents”: malicious or misaligned models that are quietly embedded into decentralized networks and operate undetected until triggered (Hu et al., 2025, 5). Unlike a model deployed by a company that can be audited, updated, or taken offline, a decentralized model has no clear owner to hold accountable. This is the traceability problem: when something goes wrong, whether biased outputs, harmful recommendations, or manipulated information, there may be no chain of responsibility to follow (Singh et al., 2024, 11–12). Decentralization, in this sense, does not just shift power away from corporations. It can dissolve accountability structures.
Regulation faces its own challenges. Decentralized AI systems can migrate across jurisdictions, hold cryptocurrency wallets autonomously, and operate in ways that no single government can easily reach (Hu et al., 2025, 6–7). The case of Tornado Cash, a blockchain protocol that obscures transaction histories to enhance user privacy, is illustrative: even after a co-founder was imprisoned, the protocol continued operating entirely unchanged, exposing the limits of territorial law applied to stateless technology (Hu et al., 2025, 9).
The Hidden Economic Question
Underlying all of this is the financial reality that many of the major AI labs are currently operating at significant losses, sustained by investor capital rather than revenue. This is not a permanent state of affairs. At some point, the economics of AI development will have to be resolved. And the most likely resolution is a shift toward tiered subscription models, where access to the most capable AI is reserved for those who can afford it.
This would not be a neutral outcome. Premium AI access could become a proxy for broader social and economic advantage, shaping who gets high-quality medical information, legal guidance, educational support, and professional tools, and who is left with degraded, potentially ad-supported, alternatives. Stratified access to technology has historically tracked existing inequalities rather than correcting them. And if decentralized alternatives fail to scale into reliably capable systems before this transition happens, the people priced out of premium access will have nowhere else to turn. The open, decentralized internet was supposed to democratize information; instead, it arguably produced a mix of openness and new concentrations of power. There is reason to worry that decentralized AI will follow a similar trajectory.
A Messier Landscape, Not a Better One
What emerges from all of this is not a tidy power shift but what might be called structured fragmentation. States can assert jurisdiction over data centers but depend on foreign chips. Corporations dominate compute and data but face mounting regulatory pressure. Decentralized communities offer an alternative architecture but risk reproducing the same inequalities they set out to challenge.
The critical insight is that decentralization does not, by itself, produce accountability. It merely redistributes power. As AI becomes more deeply embedded in the infrastructure of daily life, including shaping access to medical information, education, and political participation, the question of who controls it ceases to be a matter for engineers and policymakers alone. The fundamental problem is therefore not concentration or decentralization, but the absence of governance frameworks capable of operating across both, at the scale of the technology itself.
Addressing this demands moving beyond the concentration-versus-decentralization binary altogether, toward hybrid models of governance that can hold distributed systems to public standards, through mechanisms such as algorithmic auditing requirements, multi-stakeholder oversight bodies, and interoperable regulatory frameworks, without recreating the monopolistic structures that made decentralization attractive in the first place. What exactly such hybrid models look like remains an open question, but it is the right one to be asking.
References
Calzada, I. (2025). Decentralizing Power? Data Sovereignty in the Age of AI and Web3 (SSRN Scholarly Paper 5081504). Social Science Research Network. https://doi.org/10.2139/ssrn.5081504
Hawkins, Z. J., Lehdonvirta, V., & Wu, B. (2025). AI Compute Sovereignty: Infrastructure Control Across Territories, Cloud Providers, and Accelerators (SSRN Scholarly Paper 5312977). Social Science Research Network. https://doi.org/10.2139/ssrn.5312977
Hu, B., Rong, H., & Tay, J. (2025). Is Decentralized Artificial Intelligence Governable? Towards Machine Sovereignty and Human Symbiosis (SSRN Scholarly Paper 5110089). Social Science Research Network. https://doi.org/10.2139/ssrn.5110089
Singh, A., Chari, P., Lu, C., Gupta, G., Chopra, A., Blanc, J., Klinghoffer, T., Tiwary, K., & Raskar, R. (2024). A Perspective on Decentralizing AI.