You probably haven’t thought about philosophers John Locke or Thomas Hobbes since high school, or that required college class. They died centuries before personal computing was invented, let alone the artificial intelligence technologies that now infuse many parts of our daily life. What could they possibly teach Americans about living in a world with AI?
Turns out, quite a lot. The same thinkers who inspired America’s Constitution grappled with a fundamental question we’re facing now: How do you build a society where people can live together when new forces threaten to upend it?
From the Constitution’s checks and balances to the commerce clause to civil rights laws, America has repeatedly renewed its social contract when new technologies disrupted old arrangements. The telegraph, automobile and internet all required old principles to be applied in new ways.
When cars created interstate commerce and safety challenges, for instance, federalism was not abandoned – national highway standards and traffic laws were created, while preserving state authority over local roads.
AI portends a massive transformation of our economy and society, and so it demands similar creativity. The thinkers who inspired the Constitution offer something like a user manual for the AI age. Here’s how six ideas from the social contract tradition can help us navigate the socioeconomic challenges brought forth by AI.
When machines act like people (but faster)
Hobbes warned that without rules, life becomes chaos – everyone fighting everyone else for advantage, a condition he called the “state of nature.” His solution? Create a government that can keep order.
Today’s AI agents may be Hobbes’ vision at digital speed. These systems can trade stocks, make hiring decisions and may soon perform most work done on a computer. Without proper oversight, they’re operating in a digital state of nature.
Just as the Constitution’s commerce clause gave Congress power to regulate trade between states, federal authority over AI agents may be needed. That means systems to track who’s responsible when AI screws up, safety standards like those in place for cars and medicines, and kill switches ensuring that humans stay in control. Without rules, you don’t get freedom; you get instability.
Your rights don’t disappear just because a computer says so
One of Locke’s big ideas – the one that inspired the Bill of Rights – was that government power must have limits. You can’t just do whatever you want to people, even if you’re in charge.
This becomes urgent when government agencies use AI to decide who gets benefits or who poses a security risk. The Constitution doesn’t have an exception clause that reads “unless a computer says otherwise.”
Citizens need the same protections the Founding Fathers built against arbitrary power: transparency about how AI makes decisions, the right to appeal when systems get it wrong and strict limits on surveillance.
Democracy means you get a say
Jean-Jacques Rousseau believed that legitimate laws come from citizens working together to solve problems – not from elites imposing solutions. Our founders built this into the American system with town halls, elected representatives and the right to petition.
Here artificial intelligence might even help. Taiwan uses AI-powered platforms to help citizens find consensus on divisive issues, discovering common ground across political divides. American communities could use similar tools to democratically decide questions like how facial recognition gets used in public spaces, or whether AI should grade their kids’ homework. These questions are too important for tech executives or bureaucrats to answer alone.
Designing rules for an uncertain future
John Rawls, who died in 2002, proposed a thought experiment that suits the AI age: Design society’s rules as if you didn’t know whether you’d end up rich or poor, employed or automated away. Behind this “veil of ignorance,” you wouldn’t gamble on being among the winners. You’d insist that if AI eliminates jobs, there are ways to secure everyone’s economic future.
If algorithms make hiring decisions, you’d want them to expand opportunity rather than entrench existing advantages. If AI creates vast wealth, you’d insist the benefits not flow only to those who own the computers.
This isn’t socialism – it’s preparing for an uncertain future. Behind the veil, not knowing if you’ll be a tech CEO or a displaced cashier, you’d demand an economy where everyone can thrive, not just survive.
Finding common ground in divided times
Americans disagree about AI’s future – some see utopia and want acceleration; others see doom and push for a pause. But there are shared concerns. AI is developing faster than government can respond, private companies control the technology, and nobody can perfectly predict what comes next.
AI governance can be built on these shared concerns, despite differing values and political perspectives. That could mean regulations that adapt as technology evolves, partnerships that harness innovation while maintaining accountability, or international cooperation on safety that prevents a race to the bottom.
When inequality harms democracy
Political theorist Danielle Allen warns that extreme inequality makes genuine democracy impossible. When some citizens lack basic security, they can’t participate as equals. This warning grows urgent as AI threatens to concentrate unprecedented power in few hands.
If a handful of companies control AI systems that replace millions of workers, they’ll wield influence that would have terrified the Founding Fathers, who designed the American system specifically to prevent monarchy and aristocracy. America needs modern antitrust enforcement, mechanisms ensuring affected communities have a voice in AI governance, and economic policies that spread AI’s benefits broadly.
The question isn’t whether to slow innovation, but how to ensure it strengthens rather than undermines the democratic equality the Constitution promises. As technologies grow more capable, the question becomes fundamentally constitutional: Will AI be used to fulfill the promise of American democracy, or will it be allowed to create the kind of concentrated power that the founders designed the Constitution to prevent?
The philosophers might not have seen this coming, but they provided the tools to handle it. The question is whether America is wise enough to use them.
Benjamin Boudreaux and Beba Cibralic are researchers at RAND, a nonprofit, nonpartisan research institution, and professors at the RAND School of Public Policy. They are also part of the Social and Economic Policy Rethink project at RAND.
