Algorithmic Leviathan
AI, Power, and the Future of Democratic Governance
In 1651, amid the chaos of the English Civil War, Thomas Hobbes published Leviathan, a foundational work of political philosophy. Hobbes argued that in a pre‑political “state of nature,” human life would be dominated by fear, insecurity, and conflict: “solitary, poor, nasty, brutish, and short.” To escape this condition, individuals would rationally enter into a social contract, surrendering their natural freedoms to a powerful, centralized sovereign in exchange for peace, security, and order. Hobbes called this sovereign the Leviathan: an “artificial person” composed of many individuals, endowed with absolute authority to prevent society from descending back into chaos.
Nearly four centuries later, Hobbes’s metaphor has regained striking relevance. As artificial intelligence (AI) systems increasingly shape governance, policing, welfare, and public administration, scholars and policymakers are warning of the rise of an “algorithmic Leviathan”: AI‑driven systems that centralize decision‑making, promise efficiency and social order, and often operate opaquely, beyond meaningful public contestation. Like Hobbes’s sovereign, these systems claim legitimacy through their capacity to prevent harm before it occurs, pre‑emptively managing risk rather than reacting to wrongdoing after the fact.
Across the world, and especially in Europe, data‑driven governance tools are reshaping how states exercise power: from algorithmic welfare surveillance in the Netherlands, to predictive policing in the United Kingdom, to large‑scale data analytics used by German law enforcement. These systems are often framed as neutral, objective, and technologically inevitable. Yet in practice, they raise deep questions about liberty, accountability, and the future of democratic governance.
Centralized Power and Pre‑emptive Order
Hobbes’s Leviathan was designed to solve a specific problem: how to create order in a world of constant insecurity. In the state of nature, unlimited individual freedom produced unlimited conflict. The social contract resolved this by concentrating power in a single sovereign capable of enforcing peace. This sovereign, whether monarch or assembly, was absolute and indivisible. Once authority was transferred, subjects had no right to challenge its decisions.
Two features of Hobbes’s logic are especially relevant today. First, centralization: power must be unified to be effective. Second, pre‑emption: the sovereign’s role is not merely to punish wrongdoing after the fact, but to deter and prevent it through overwhelming authority and, if necessary, surveillance. Hobbes even described the state as an “artificial animal” or machine, an automaton designed to compel compliance and eliminate the frictions of dissent.
Crucially, Hobbes’s Leviathan operates beyond ordinary politics. Subjects trade voice and contestability for security. This “contest‑free” model, order without accountability, has always been the most controversial aspect of Hobbes’s thought, and it is precisely where the analogy to AI governance becomes most unsettling.
The Algorithmic Leviathan: AI as a New Form of Sovereign Power
Fast‑forward to the 21st century. Advanced algorithms now allow governments to monitor, predict, and shape behavior at unprecedented scale. Scholars have increasingly described this as the emergence of an algorithmic Leviathan: large‑scale AI systems that centralize knowledge, automate decisions, and enable pre‑emptive intervention across society.
Several features of contemporary algorithmic governance echo Hobbes’s sovereign:
Centralized, Integrated Power.
AI systems aggregate vast quantities of data into unified platforms. Tools such as Palantir’s Gotham, used by police forces in several European countries, fuse criminal records, administrative data, and surveillance feeds to generate profiles and predictions. In Hobbesian terms, these systems function as an “artificial brain” of the state—embodying the combined perception and judgment of thousands of officials and sensors.
Pre‑emptive Control.
Like Hobbes’s sovereign, algorithmic systems aim to neutralize threats before they materialize. Predictive policing tools forecast crime hotspots or flag individuals deemed “high risk.” Welfare algorithms identify potential fraud before any human suspicion arises. Governance shifts from reacting to acts, to intervening based on probabilities—a logic of prevention that mirrors Hobbes’s quest to eliminate conflict before it erupts.
Opacity and the Displacement of Politics.
Where Hobbes placed the sovereign above debate, modern AI systems often operate as black boxes. Decisions once made through transparent political or judicial processes become embedded in code, shielded by technical complexity or trade secrecy. As critics note, algorithmic governance promises efficiency “in exchange for compliance,” sidelining democratic contestation and due process.
The Illusion of Neutrality.
AI systems are frequently marketed as objective and unbiased, standing above human error. Yet their outputs reflect historical data and institutional choices, often reproducing or amplifying existing inequalities. The Leviathan’s claim to impartial order reappears, this time wrapped in the language of data and mathematics.
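The mechanism behind this amplification can be made concrete. The following is a minimal, hypothetical sketch (invented numbers, not drawn from any real system): two neighbourhoods have identical true incident rates, but one was historically patrolled more heavily, so more of its incidents entered the records. A naive risk score trained on those records then rates the heavily patrolled area as roughly twice as “risky,” inviting still more patrols — the feedback loop critics describe.

```python
import random

random.seed(0)

# Hypothetical scenario: both areas share the SAME true incident rate,
# but area "B" was historically patrolled twice as heavily, so a larger
# share of its incidents was detected and recorded.
TRUE_RATE = 0.10                  # identical underlying rate in both areas
PATROL = {"A": 0.3, "B": 0.6}     # historical probability an incident is recorded
POP = 10_000                      # residents simulated per area

def recorded_rate(area: str) -> float:
    """Fraction of residents with a recorded incident in the training data."""
    hits = sum(
        1 for _ in range(POP)
        if random.random() < TRUE_RATE      # incident actually occurs
        and random.random() < PATROL[area]  # ...and happens to be recorded
    )
    return hits / POP

# A naive "risk score" learned from the records simply reflects recorded
# frequency — and so inherits the patrolling bias, not the true rates.
risk = {area: recorded_rate(area) for area in PATROL}
print(risk)  # B scores roughly twice A, despite identical true rates
```

Nothing here is specific to policing: any predictor trained on records shaped by unequal past enforcement will reproduce that inequality as an apparently objective score.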
Together, these features position algorithmic systems as digital heirs to Hobbes’s Leviathan: centralized, preventative, and largely insulated from challenge, operating under the promise of security and stability.
Europe’s Encounter with Algorithmic Power: Four Case Studies
Europe’s strong legal and human‑rights traditions have made it a critical battleground for testing the limits of algorithmic governance.
The Netherlands: SyRI and Welfare Surveillance.
The Dutch SyRI system aggregated data from multiple government sources to predict welfare fraud risk. Citizens flagged by its secret algorithm faced investigation without knowing why. In 2020, a Dutch court struck SyRI down, ruling that its opacity and intrusive data use violated the right to privacy under the European Convention on Human Rights. The judgment emphasized that algorithmic governance without transparency or contestability is incompatible with democratic society.
United Kingdom: Predictive Policing.
By the mid‑2020s, most UK police forces were using predictive policing tools. Amnesty International’s Automated Racism report found that these systems relied on biased historical data, disproportionately targeting minority communities and undermining the presumption of innocence. Individuals and neighborhoods were treated as risks, not rights‑bearing citizens, guilty by statistical association.
Germany: Palantir and Constitutional Limits.
Several German states adopted Palantir’s Gotham software for police data analysis. In 2023, Germany’s Federal Constitutional Court ruled that broad, pre‑emptive data mining without concrete suspicion violated the constitutional right to informational self‑determination. The decision reaffirmed that even advanced technology cannot justify unchecked surveillance.
The EU AI Act: A Partial Brake.
At the EU level, the AI Act represents the world’s most ambitious attempt to regulate AI in line with fundamental rights. It bans certain forms of individual predictive policing and tightly regulates high‑risk public‑sector AI. Yet it leaves loopholes, particularly for location‑based policing and national security exemptions, prompting concern that algorithmic Leviathans may persist under new legal guises.
From Reactive to Pre‑emptive Governance: What Is at Stake
These cases reflect a deeper transformation: a shift from reactive to pre‑emptive governance. AI systems encourage action before harm occurs, reshaping core legal principles.
The presumption of innocence is weakened when suspicion is assigned probabilistically.
Due process erodes when individuals cannot understand or challenge algorithmic decisions.
Privacy is strained by the data‑hungry logic of prediction.
Democratic accountability thins as decisions migrate from public institutions to technical systems.
The question is no longer whether AI will shape governance, but under what conditions it will do so.
The Temptation, and the Danger, of the AI Leviathan
In times of crisis, whether disinformation, cybercrime, terrorism, pandemics, or climate emergencies, the appeal of a powerful, centralized, AI‑enabled authority grows. One can imagine an “emergency AI sovereign” managing society in real time, justified by necessity and speed.
Yet history cautions against contest‑free power. An opaque AI Leviathan risks becoming a digital despot: enforcing decisions without explanation, entrenching inequality, and suppressing dissent under the banner of efficiency. China’s social credit experiments are often cited as a warning of where such logic can lead.
Towards a “Hobbes‑Plus‑Liberal” Model
The challenge, then, is to reconcile Hobbes’s insight, that order matters, with liberal democracy’s insistence that power must be constrained. A “Hobbes‑plus‑liberal” model of AI governance would preserve coordination and foresight, while embedding AI within democratic institutions.
Key elements include democratic authorization, transparency and contestability, strict limits on fully automated decisions affecting rights, proportional data use, and robust human oversight. The EU AI Act, GDPR, and constitutional court rulings represent early attempts to build these guardrails.
***
Hobbes’s Leviathan was born from fear of chaos. Today’s algorithmic Leviathans arise from fear of complexity, speed, and uncertainty. Europe’s experience shows both the promise and peril of AI‑driven governance: greater efficiency and foresight on one hand, profound risks to liberty and accountability on the other.
The task ahead is not to abolish the Leviathan, but to constitutionalize it: to ensure that AI remains a tool of human governance, not a sovereign above it. Whether societies succeed will determine whether the future belongs to unchallengeable digital rulers, or to a renewed social contract in which AI serves democracy rather than replaces it.

