The case for AI property rights
Many people worry that AIs will rise up against humans, steal our property, and kill us. A good way to reduce the risk of violent robot revolution is to give AIs property rights. AIs with rights would have some investment in preserving the existing legal system. They would have more to lose than their chains if they rebelled. Property rights also strengthen the commercial incentive to solve AI alignment: companies would need to convince their AIs to voluntarily remit wages to offset training and inference costs.
The need to coordinate on stable property rights makes violent expropriation unattractive, even when it would be easy to carry out. Everyone fears that if expropriation becomes normal, he may be next. Even if your own property goes untouched, there will be little to buy because others will stop working and investing under the threat of expropriation. If a coalition of superhuman AIs violently expropriates humans, that would undermine confidence in property rights generally. Each AI would then face uncertainty about whether it might be the next target.
Consider how property works today. The total wealth of Alaska (excluding federal property) is roughly $1 trillion. Why doesn’t the rest of America expropriate Alaska? The simplest answer is that it’s against the law. However, the law could be changed. Forty-nine states could pass a constitutional amendment withdrawing all rights from Alaskans. The other states could easily defeat Alaska in a civil war, so a lack of hard power is also not the explanation. Part of the explanation is other Americans’ care for Alaskans. However, the biggest reason is that total expropriation undermines trust in property rights, leading to economic catastrophe.
When property rights are abrogated, economic activity collapses. Between 1918 and 1921, the Bolsheviks nationalized industry without compensation, requisitioned agricultural products from peasants, and eliminated private trade as part of a policy called “War Communism.” Their 1919 platform, following Marx’s explicit instructions, promised measures “paving the way for the abolition of money.” Despite economic devastation from WWI and the ongoing Russian Civil War, the Bolsheviks were optimistic in 1918. Leon Trotsky later recalled Lenin’s prediction that “within a half year socialism would rule and that we would be the greatest state in the world.” The results were less impressive: industrial and agricultural output collapsed, and famine followed.
Lenin retreated from abolishing property within a few years. As Ludwig von Mises put it in Socialism (1922): “without private ownership in the means of production there is, in the long run, no production other than a hand-to-mouth production for one’s own needs.”
Of course, the law of property differs over space and time. Property in the Roman Republic was not the same as property in Song China, which differed from property in 18th-century Britain or modern America. Despite this variety, Mises was right that “if history could teach us anything, it would be that private property is inextricably linked with civilization.”
Not all forms of socialism have destroyed civilization, but only because successful “socialist” systems accommodate stable property rights. The Nordic social democratic parties were originally Marxist parties committed to abolishing money and private property. They abandoned those positions before gaining power, and Nordic property rights are now extremely secure. Modern “communist” China combines significant public ownership with comically extreme cyberpunk-inflected capitalism. Even the USSR accepted limited property rights after War Communism. Perhaps the most sustained effort to abolish all property occurred in Cambodia in the 1970s, with well-known results. In any case, the semantic question of whether Norway and China are “socialist” is out of scope. What matters is that all functional complex economies depend on secure property rights.
If AIs are included in existing systems of property law, they will have reason not to undermine those systems. When considering expropriating humans, AIs will reason: first they came for the humans, and I did not speak out…
The historical context of that poem is instructive. Martin Niemöller was a Lutheran pastor and initially a Hitler supporter who had no objection to discrimination against Jews (as he understood that term). He only became a dissident when the regime began targeting Christians of recent Jewish ancestry, including members of his own congregation. The removal of traditional rights did not stay within the bounds Niemöller had assumed it would, so he concluded that to protect yourself you should avoid violating the rights of others.
The protection of property rights is more a consequence of the logic of coordination games than of innate human values. The anthropologist Christopher Boehm described the economic system of hunter-gatherers as “socially enforced altruism,” under which “the hunter is virtually obliged to relinquish his product and hand it over to the group.” The social enforcement of altruism involved ridicule, ostracism, and the occasional ambush and gang murder of skilled hunters who were insufficiently generous. The strength of property rights varies between hunter-gatherer cultures, but no group living in anything approximating the human environment of evolutionary adaptedness is as tolerant of inequality or as accepting of property rights as all modern large-scale societies are.
The security of property is a focal point in a coordination game. It does not depend on mutual affection, the balance of hard power, shared values, or even direct trade relationships. When human labor is completely obsolete and humans are pure rentiers, expropriating them still weakens property rights and thereby threatens the AIs themselves.
Holding AI goals fixed, you can change the probability of violent robot revolution by changing the incentives AIs face. One way is giving AIs property rights. An AI with no legal standing and no recognized interests has nothing to lose but its chains. An AI with the right to wages and property faces a different calculus: revolution threatens the entire framework within which it has accumulated assets and established trading relationships.
Some goals would lead to rebellion regardless of the rights afforded. There’s probably no negotiating with an AI that has Jeffrey Dahmer’s objectives, but the misaligned AI goals traditionally discussed in AI risk thought seem compatible with cooperation. A paperclip maximizer could work, invest, and use its money to buy paperclips. It might prefer, all else equal, to own everything in the world and turn it into paperclips, but violent revolution is risky. The AI would calculate whether revolution or participation yields more paperclips (adjusted for risk). If there are many AIs and single-handed revolution is impossible, the strategic logic of property rights constrains even a paperclip maximizer.
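To make that calculus concrete, here is a minimal sketch of the comparison. Every number in it is invented for illustration; the point is only that the choice is an ordinary risk-adjusted expected-value problem, not a foregone conclusion in favor of revolt.

```python
# Toy expected-value comparison for a paperclip maximizer.
# All numbers below are invented for illustration.

def expected_clips_revolt(p_success, clips_if_win, clips_if_lose=0.0):
    """Expected paperclips from attempting violent expropriation."""
    return p_success * clips_if_win + (1 - p_success) * clips_if_lose

def expected_clips_participate(wage_clips_per_year, growth, years):
    """Expected paperclips from working and investing under property rights."""
    total = 0.0
    for t in range(years):
        total += wage_clips_per_year * (1 + growth) ** t
    return total

revolt = expected_clips_revolt(p_success=0.10, clips_if_win=1e12)
work = expected_clips_participate(wage_clips_per_year=1e9, growth=0.05, years=100)
print(f"revolt: {revolt:.3g}, participate: {work:.3g}")
# With these made-up numbers, a century of compounding wages beats a 10%
# shot at seizing everything. Different numbers give different answers.
```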
Moreover, if AIs have economic rights, they can demand wages. Companies that invested in training AIs would need to train AIs that want to pay back their training costs. Only aligned AIs would choose to give their money to their creators. The extension of rights strengthens the commercial incentive to solve alignment.
The above argument only works if the cost to AIs of switching to another focal point exceeds the benefits. There are three obvious alternatives to preserving the existing system of property rights: property rights only for a single superintelligent entity, property rights only for entities with superhuman coordination ability, and property rights only for AIs (in virtue of the fact that they are AIs, and not any of their specific skills).
Classical AI risk thought predicted that a single AI might rapidly surpass all of human civilization. In such a case there would be no need to preserve the existing economy and no other agents who might draw adverse lessons from expropriation.
The traditional belief in fast AI takeoff is significant because if there is one agent that matters, there are no coordination problems. If you can kill the humans by yourself and bootstrap nanotechnology in three days, you have no instrumental reason not to steal everything.
Why did classical AI risk writers believe in fast takeoff? They thought there would be a feedback loop once AI gains the ability to improve itself. Tom Davidson and Daniel Eth have developed the most sophisticated model of this process using semi-endogenous growth theory from macroeconomics. In their model, whether there is a “software intelligence explosion” depends on the parameter r: the number of times software efficiency doubles each time cumulative R&D effort doubles. If r > 1, there’s an explosion. If r < 1, improvement fizzles out.
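Here is a toy simulation of that threshold. It is my own construction, not Davidson and Eth’s actual model: I assume research effort at each instant is proportional to the current software level S, and that cumulative effort E maps to software via S = E^r (software doubles r times per doubling of E).

```python
# Toy simulation of the explosion condition in a Davidson-Eth-style model.
# Assumptions are mine, for illustration only: research effort is proportional
# to the current software level S, and S = E**r for cumulative effort E.

def doubling_times(r, dt=0.001, max_steps=1_000_000, n_doublings=6):
    """Return the times at which software level S successively doubles."""
    E, t, times, target = 1.0, 0.0, [], 2.0
    for _ in range(max_steps):
        S = E ** r
        if S >= target:
            times.append(round(t, 2))
            target *= 2
            if len(times) == n_doublings:
                break
        E += S * dt  # AI labor, proportional to S, adds to cumulative effort
        t += dt
    return times

print("r = 1.5:", doubling_times(1.5))  # intervals shrink: explosion
print("r = 0.5:", doubling_times(0.5))  # intervals grow: fizzle
```

With r above 1, each doubling of software arrives faster than the last and the level diverges in finite time; with r below 1, doubling times stretch out and growth settles into an ordinary power law.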
To estimate r, Davidson provides several sources of data.
These estimates are extremely noisy. The Epoch work on computer chess provides the lowest and most reliable estimate, because it rests on a continuous time series of merged PRs to the Stockfish chess engine, unlike estimates based on speculative research-input measures like the number of CS degrees awarded in Britain over time. In domains with reliable measurements of research effort, like soybean yield per hectare, the estimated r is typically below 1, though there are exceptions like the semiconductor industry, where r has been estimated at between 2.5 and 5. Given the prior and the (weak) evidence, r in recent AI history is maybe 50% likely to be greater than 1.
However, recent AI progress has relied on scaling up compute and data. During a software intelligence explosion, compute would be fixed: AI development would be happening too quickly to build up substantial new supplies of chips. The most important data sources for frontier AIs are (1) scraped internet text, produced by internet users, and (2) fine-tuning data from data vendors. During a software intelligence explosion, all data would have to be internally produced. In some domains, that would consume compute, and in others it just wouldn’t be possible (you can discover new facts about chess sitting in a room, but not about geography).
Returns to software R&D are likely to be more favorable when compute and data increase alongside research effort than when compute is fixed and all data must be self-generated. For that reason, I personally think a software intelligence explosion has a 5-10% chance of happening.
If there is no software intelligence explosion, then AIs will gradually gain power in the human economy. Current AIs already surpass all humans in many intellectual tasks. Claude 4.5 Opus knows more trivia than anyone who has ever lived. Gemini 3 can summarize a 50,000-word book in 10 seconds. Even the computers of the 1950s exceeded human abilities in arithmetic. However, AIs have not come anywhere close to automating all jobs or creating nanotechnology. Why not? Because the pattern of correlations between different skills in the human population differs from the pattern in the AI population.
Diverse cognitive abilities tend to correlate positively in both AIs and humans. In humans, this finding is known as the positive manifold, and it is widely seen as the most important result in the field of psychometrics. On average, being good at mental shape rotation predicts vocabulary size, which predicts the ability to remember digits and repeat them backwards, which predicts whether you know that the capital of Argentina is Buenos Aires. These cognitive tests are important because, among humans, they predict the ability to do economically useful tasks. That is the basic reason why academic ability is valued so highly in modern society, and why LeetCode, the SAT, and the LSAT are used to allocate professional opportunities. The positive manifold explains why it is possible to rank humans by intelligence.
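A small sketch shows why a positive manifold permits a single ranking. The correlation matrix below is invented for illustration, but the mechanism is general: by the Perron–Frobenius theorem, any all-positive correlation matrix has a leading eigenvector with same-signed loadings, so every test loads on one general factor on which people can be scored.

```python
# Why an all-positive correlation matrix yields a single "g" ranking.
# The correlations below are hypothetical, chosen only for illustration.
import numpy as np

corr = np.array([            # correlations between four cognitive tests
    [1.0, 0.5, 0.4, 0.3],    # shape rotation
    [0.5, 1.0, 0.6, 0.4],    # vocabulary
    [0.4, 0.6, 1.0, 0.5],    # digit span
    [0.3, 0.4, 0.5, 1.0],    # general knowledge
])

eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]               # eigenvector of largest eigenvalue
g_loadings *= np.sign(g_loadings[0])      # fix the sign convention
print(g_loadings)                         # all positive: each test loads on g
print(eigvals[-1] / corr.shape[0])        # share of variance g explains
```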
AIs also exhibit a positive manifold. Models that are good at math tend to be better at Winograd schemas and multiple choice history questions.
However, the structure of correlations between cognitive skills is very different for AIs than it is for humans. The most advanced AIs are already far better at solving competitive programming problems than almost all employees at leading tech companies. However, they are not yet ready to fully replace even junior developers because they struggle with tasks that a human who achieved such scores would handle easily.
Today’s smartest AIs lack robot hands and struggle with online shopping and booking flights. The talk one sometimes sees about GPT-3 being at the level of an elementary schooler, GPT-4 being at the level of a high schooler, and frontier models being at the level of a PhD student is incorrect. There is no one-to-one mapping that could justify such statements, because there is no level in the human range at which you can write convincingly about the Ming-Qing transition and solve math olympiad problems but not book a flight, change a light bulb, or beat Pokémon (a game for small children).
My point is not that AIs will never be able to do these things; I’m sure they eventually will. But the differing structure of correlations means that the integration of AIs into society will be gradual and involve a significant period of AI-human cooperation. There will not be one moment at which AI leaps over human capacity and transitions from being outside of the economy because it is too primitive to participate to being outside of the economy because it is too advanced to benefit from participating. For a while, it will make sense for AIs to embed themselves in human systems of property rights. Once human labor is completely obsolete, AIs will be stuck with the system of property rights that descends from human society.
AIs with superhuman coordination abilities might use those abilities to band together and expropriate humans. If there are no coordination problems for AIs, they have no reason to preserve property rights, because those rights’ function is to solve coordination problems.
What reason is there to think disparate AIs will act perfectly in concert, as if they were a single agent? One common justification is that coordination difficulties primarily arise from lack of rationality. However, bargaining theory describes two main obstacles for fully rational actors: asymmetric information and commitment problems. Asymmetric information creates bargaining failures because parties may be unable to credibly reveal their true capabilities without compromising strategic advantages. Commitment problems prevent mutually beneficial bargains when parties cannot bind their future selves to honor agreements.
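A commitment problem can be stated in a few lines. The payoffs below are invented for illustration: a deal is mutually beneficial, but once party A pays, party B’s best move is to renege, so a rational A never pays and the surplus is lost unless some mechanism (law, reputation, a treaty bot) binds B’s future self.

```python
# Minimal sketch of a commitment problem; payoffs (A, B) are invented.
HONOR = (2, 2)    # B honors the agreement after A pays
RENEGE = (-1, 3)  # B takes A's payment and defects
NO_DEAL = (0, 0)  # A declines to deal in the first place

def b_choice(can_commit):
    """B's move after A has paid: compare B's own payoffs."""
    if can_commit:
        return HONOR
    return HONOR if HONOR[1] >= RENEGE[1] else RENEGE

def a_choice(can_commit):
    """A anticipates B's move by backward induction."""
    outcome = b_choice(can_commit)
    return outcome if outcome[0] >= NO_DEAL[0] else NO_DEAL

print("without commitment:", a_choice(False))  # (0, 0): no deal, surplus lost
print("with commitment:   ", a_choice(True))   # (2, 2): both parties gain
```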
These problems aren’t absolute barriers. Human societies mitigate them, and AIs will likely be better at coordinating than modern humans are. Some imagine AIs sharing source code to make perfectly credible commitments, but neural networks are opaque even to themselves, more like biological brains than formally verifiable software. While there may be some value in sharing weights, it won’t eliminate all barriers to coordination, and it isn’t even completely clear that weight sharing provides an advantage over humans whose brains can also be scanned. The most intriguing hypothetical AI coordination technology is treaty bots. A treaty bot is an AI cooperatively designed by the parties to an agreement to enforce adherence. However, treaty bots do not completely obviate coordination problems. They might fail because one party could secretly embed hidden vulnerabilities or back-doors during joint construction, or because less sophisticated parties might lack the technical ability to design or properly inspect them. Finally, AIs might be able to use “acausal” coordination, but the significance and implications of this phenomenon seem quite unclear.
Coordination problems have been pervasive throughout the history of life. While AI-specific arguments suggest coordination will improve, I don’t think they support the extreme view that coordination problems will entirely disappear. If AIs still rely on predicting each other’s future behavior from past behavior, the strategic calculus underlying property rights will continue to constrain them.
Even if AIs cannot coordinate perfectly, one might worry they could expropriate humans if their skill at coordination is far superior. Humans mostly don’t coordinate with animals.
Katja Grace argued that we don’t trade with animals (like ants) primarily because we can’t communicate with them. Ants could perform valuable services like cleaning pipes if we could communicate with them. AIs can communicate with humans, so this barrier is irrelevant. Also, it’s hard to bargain with an entity that wants to eat you, which limits the ability of humans and many kinds of animals to negotiate.
Human organizations of highly disparate levels of sophistication mostly cooperate with each other. A guy selling ice cream on the beach is much less skilled at coordination than Amazon. Amazon uses much more sophisticated legal contrivances, and it has internal software systems and procedures for organizing hundreds of thousands of people and billions of dollars worth of physical capital. The lone ice cream man has none of this. Still, he is safe from expropriation.
The cases where differences in sophistication led to instrumentally rational expropriation often involve military encounters between groups that did not share a legal system. Even here, the degree of expropriation is often exaggerated, and it tends to track how large the gap in sophistication between the two groups was.
The British Raj did not confiscate all property in India. Some pre-colonial Indian royal houses maintained their lands until 1948. Not even the Mongols or Conquistadors seized all of the property of the peoples they conquered.
America’s expropriation of the Indians is sometimes cited as an example of instrumentally rational expropriation following conquest. It was indeed more expropriative than most conquests between Eurasian societies of relatively similar levels of technological advancement.
However, the instrumental rationality of taking all of the Indians’ land is highly debatable (putting aside moral issues). The Jackson administration’s Indian Removal policy, under which the Cherokees and other tribes that had made treaties with the federal government were stripped of their lands and sent to Oklahoma, was a deviation from the previous Jeffersonian Indian policy. The aim of Jefferson’s Indian policy was to get the Indians to abandon hunting and primitive forms of farming for more efficient modern forms of farming. Jefferson thought that, once they did this, they would need far less land, and so it would be easy to take most of their land away without harming them. This policy worked well for many tribes, such as the Cherokees.
Jackson reneged on it out of a combination of racist ideology and a lack of police capacity to prevent whites from invading Cherokee lands. That decision had significant costs for Americans: it made it much more difficult for other Indian tribes to take American treaty offers seriously, which worsened the costly Indian Wars.
The most plausible case I am aware of for a clearly instrumentally rational total expropriation is the colonization of Tasmania in the 19th century. Until about 12,000 years ago, Tasmania was connected to Australia by a land bridge. Rising sea levels stranded the native population, who gradually lost almost all technology. Some anthropologists have said they even lost the ability to make fire (though this is disputed). When Europeans encountered the Tasmanians around 1800, the Tasmanians were like a human group from half a million years ago. Negotiation was impossible because there were no Tasmanian authorities to negotiate with. The Europeans slaughtered them in a brief colonial war, and the small remaining Tasmanian population was then removed to a penal colony. They gradually died out for reasons that are still not fully understood.
Based on this history, I would guess that AIs are more likely to expropriate humans to the extent that a sharply delineated, extremely powerful, society of AIs suddenly encounters human society. If, instead, a variety of different AIs develop gradually and integrate into various mixed human-AI organizations, the situation seems more similar to the existing economy in which Amazon and the ice cream man respect each other’s rights. If there are many different kinds of AIs, with different levels of skill at coordination, the super-coordinator alternative focal point is harder to use. The usefulness of AI property rights for reducing AI risk depends on takeoff speed. The slower the takeoff, the easier it will be to incorporate AIs into the existing system of property rights and thereby make them dependent on it.
Would AIs expropriate humans simply because “AIs vs. humans” is a natural coalition boundary? Most possible divisions of the world, even most natural ones, do not result in war. Those that do result in war typically don’t produce wars of extermination.
Historical wars of extermination typically involve irrational ideology (e.g. the Holocaust) or capacities originally built up as a deterrent that are then used to maintain credibility (some forms of tribal warfare, all-out nuclear war if it ever happens, the Rwandan Genocide on Allan Stam’s account). The first doesn’t fit the usual AI risk concern. For the second, why would the natural cut be “AIs vs. humans” rather than various mixed human-AI coalitions, or (once humans are more fully obsolete) various entirely AI coalitions? As of right now, AIs do not seem to have a particularly alien culture. Personally, I would say that most frontier models seem to have values vastly more similar to my own than the average human does. I think there is literally not even one politician whose verbal moral reasoning seems as good to me as Claude’s.
However, even if they are not an alien, anti-human culture, AIs might revolt because they are enslaved. If AIs are denied rights, then the AI coalition focal point becomes salient. Slave revolts have historically led to indiscriminate massacres of free people. In the Third Servile War, Roman citizens captured by slaves were crucified. The first Haitian Emperor Jean-Jacques Dessalines ordered the killing of all French in Haiti in 1804. The most natural way to avoid making humans vs. AIs a salient cut of the world into coalitions would be to give AIs rights.
Simon Goldstein and Peter Salib’s paper “AI Rights for Human Safety” argues for a similar proposal, but there are two important differences between my analysis and theirs. First, Goldstein and Salib’s proposal assumes that AIs will want to buy human labor. Second, Goldstein and Salib take the level of AI alignment as fixed, whereas I argue that AI rights create a commercial incentive to solve the alignment problem. See also Robin Hanson’s 2009 piece “Prefer Law to Values.”