reactormag.com
Terrible Bosses, Evil Corporations, and Paperclip Maximizers: Dan Davies’ The Unaccountability Machine
Seeds of Story
Exploring broken systems, and why and how people seek to avoid accountability, even when it also involves avoiding power and agency.
By Ruthanna Emrys
Published on February 24, 2026
Welcome to Seeds of Story, where I explore the non-fiction that inspires—or should inspire—speculative fiction. Every couple of weeks, we’ll dive into a book, article, or other source of ideas that are sparking current stories, or that have untapped potential to do so. Each article will include an overview of the source(s), a review of its readability and plausibility, and highlights of the best two or three “seeds” found there.
This week, I cover Dan Davies’ The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind. This is the most fascinating book on management that you’ll ever read: It will help you write believably terrible bosses, understand your own terrible workplace, and figure out what keeps going wrong with the Jedi.
What It’s About
The Unaccountability Machine is in some ways a spiritual sequel to Seeing Like a State. It combines the idea of legibility with mid-20th-century cybernetics (not actually about computers—more on that in a moment) to explain the eternal tug-of-war between a complex world and the need to make decisions about that complexity. Done right, this is the collective equivalent of mental rules of thumb: simplifications that produce good-enough decisions in a useful amount of time, while acknowledging that the decisions are imperfect and the process can be improved. Done wrong… well, we have plenty of examples to go around, don’t we?
“Cybernetics” comes from the Greek word for a helmsman—steering a ship being a classic example of circular feedback and adjustment between world and decision maker. It’s the study of how organizations steer themselves to reach goals, based on feedback from the rest of the world. The world is always more complex than the organizational system, because it’s larger and contains more factors; effective management therefore depends on matching the organization’s complexity (its data processing and analytic capacity) to input from the world. This match can be achieved by increasing management capacity or decreasing the complexity of input—the latter is what James C. Scott is talking about in Seeing Like a State, while the former requires spending money on education, hiring, and resources. Davies waxes… eloquent… on the topic of organizations finding ways to avoid increasing capacity. (This is a delightfully rude book, and doesn’t hold back on its opinions of overly rigid management, management consultants, economists, and anyone else responsible for problems despite their best efforts to avoid responsibility.)
Simplification doesn’t just make management feel easier while increasing errors, however. One major way that organizations simplify is by reducing decisions to preexisting rules or algorithms. This has some advantages—a manager who must follow specific rules during hiring is constrained from, e.g., making decisions based on who they want to golf with. A major disadvantage, though—from the perspective of everyone but the organization—is that it places the responsibility for decisions with the process, and not with individual humans. It’s an accountability sink. No one at the insurance company is responsible for rejecting your life-saving medicine; they’re just following the rules. All loan decisions are made based on the same opaque credit score calculations. Companies that follow Milton Friedman’s assertion that (in Davies’ words) “when companies act in the interests of society instead of their shareholders, they take on the role of government” can cut all non-financial concerns—and indeed financial externalities that don’t affect their shareholders—out of their management processes, vastly simplifying the system they have to deal with and also possibly destroying the planet.
Davies describes classical economics (rudely, see above) as a tool for this kind of simplification. Homo economicus—the perfectly rational and self-interested individual assumed in classical calculations—doesn’t exist but is easy to make decisions about. These economic decisions serve zero real Homo sapiens, but make very pretty graphs and excellent accountability sinks. Friedman’s doctrine also “invited [managers] to… attribute all the bad consequences and all the frustrating lack of independence to a separate work-self, which was under obligation to a simple principle.” Which is to say: it’s right and proper to leave your ethics at the office door along with your umbrella. Markets make information about certain types of needs and resources highly legible. They also, taken to this logical extreme, undermine the many parts of life, liberty, and the pursuit of happiness that are challenging to quantify and monetize.
Over the course of the past 40-odd years (not thinking too hard about how long ago Reagan was elected), this problem has expanded so that even many financial concerns are simplified out. Quarterly shareholder reports prioritize short-term over long-term advantage, because a lack of short-term advantage invites hostile takeovers, where the private equity industry vampirically sucks all resources out of a long-term stable company that didn’t make Number Go Up. Short-term costs thus become more important than long-term revenues, creating incentives to reduce systematic safety cushions (e.g., extra labor to handle surge periods and keep workloads manageable). Quoth Davies, “If you consistently demand the impossible, you will inevitably get the unethical.”
Davies argues for a change to the overall societal and legal expectations placed on organizations: to allow “that [they] can be like people, having purposes without a single goal.” “Businesses,” he says, “ought to be like artists, not paperclip maximizers.” He identifies places where we currently push them to overweight certain types of signal and ignore others, and where accountability sinks cause the most trouble—these are the points of greatest leverage for fixing the system.
In addition to being a nerd, I am also a wonk. I have been known to squee about six-month efforts to make surveys fit the requirements of the Paperwork Reduction Act, and love books with titles like The Meetings That Make or Break Your Organization (unlikely to show up in this column, but you never know). The Unaccountability Machine hits my sweet spot of “Ooh, that’s why that project went wrong—oh, I could do this with my team—the Jedi have really screwed up their complexity matching processes—what if the evil empire used this policy?—no, no, do not do that with your large language model—here’s how a future society could sink even more accountability!”
In other words, this is a particularly valuable book for readers who geek out about lots of different systems that involve people. It offers a set of new-old tools for thinking through what makes governments, companies, and school boards screw up, turn evil, or occasionally manage to accomplish useful goals. It’s full of plot bunnies that double as life hacks. And it’s a whole ’nother way of imagining how (and why) actual people might use new technologies. What are the policies supporting that torment nexus? Who uses it to escape blame for the torment, and how? What regretfully reasonable justifications fill op-ed pieces?
I particularly appreciate that the book ends with actual solutions. In some cases, I also like the solutions—yes, please, break the whole private equity system, it needs it! On the other hand, Davies suggests (in a throwaway paragraph that I suspect reflects a last-minute editorial demand to talk about AI) that LLMs are a good way to reduce incoming environmental variability. I hope we have all seen by now that they are a great accountability sink for doing so, with results that range from mediocre to disastrous. This same chapter suggests that managers should have room to be more like artists, leveraging their own individual variability to produce outcomes they can be proud of. LLMs are the opposite thing. (Looking at Davies’ newsletter archives, he seems to have since gotten more nuanced in his ideas of what LLMs are good for, and our post-book disagreements would make for a whole separate post.)
But overall, the book is just full of useful frameworks. “Criminogenic organizations” with incentives and policies that ensure—deliberately or otherwise—that the system will output crimes.[1] “The extent to which you are able to change a decision is precisely the extent to which you can be accountable for it.” How politicians use “the market” as an accountability sink. “People who want to break the link to human decision makers and treat the books of law as a source of algorithmic judgment are called fundamentalists. Or… strict constructionists.”
The biggest insight I got from this book, though, is a framework for why and how people seek to avoid accountability, even when it also involves avoiding power and agency. We have set up a system that teaches people to be afraid of responsibility. My most serious fear about LLMs is that they magnify this fear: that it feels safe to create art and communication that don’t come with pesky human judgments. Many people will go along with immoral orders—Milgram’s experiments suggest somewhere between 5% and 80%, depending on incentives, apparent authority, and how easily you can see the harm you’re doing.[2] But all LLMs will do so, and they are unlikely to blow whistles afterwards. Plus the source of the output is conveniently separate from the entity getting paid. There is an advantage, to a paperclip maximizer, in discouraging humans from thinking too hard about our ability to distinguish right from wrong, make choices accordingly, and tell stories that flow messily and unpredictably from individual conscience. They’ll happily use machine learning to reduce that variability—but they build on all the other accountability sinks that have been used for the same purpose.
The Best Seeds for Speculative Stories
It Doesn’t Take an AI. Davies quotes Charlie Stross’ 2017 gloss of corporations as “very old, very slow AIs.” They are much like the feared failure mode of nanotechnology. Given an absurdly high level of resources and one overarching goal, a hypothetical nanobot told to prioritize paperclip production could eventually turn everything into paperclips. This sort of apocalyptic warning about future tech masks what already exists: large, inhuman entities that do their best to turn everything into quarterly profits, “and appear to be unable to change course even when faced with the imminent extinction of human life.” Cybernetics lays out the structures through which corporations, despite being full of humans, consistently produce these inhuman results. These explanations are useful both when trying to stop them, and when trying to write about scary megacorporations.
A Different Cyberpunk. The route by which “cybernetics” was applied to management, misparsed, and used to produce the modern “cyber-“ prefix is long and tortuous. But good cyberpunk does tend to feature both computers and corporations, and integrating some real management science into those corporations could make them more effective, realistic, and dramatic villains.
New Growth: What Else to Read
James C. Scott’s Seeing Like a State, along with the further readings I recommended for that book, remains entwined with Davies’ work. Nguyen’s The Score, in particular, feels like the third volume of this unofficial trilogy, focusing on the simplified measures that feed Davies’ unaccountability machines.
Victoria Goddard’s The Hands of the Emperor is about trying to collect the right information to make government good. It’s also kind of about what would happen if you put the Omelas kid in charge of the city. Arkady Martine’s A Memory Called Empire and its sequel are a gorgeous depiction of imperial politics written by someone who knows how policy and management work.
I am not actually a fan of most classic cyberpunk—the internet in those works is much duller and less scary than the one we got—but adore William Gibson’s Pattern Recognition, which mixes the aesthetics of cyberpunk with the fractal complexity of the real world, and therefore better explores the ratcheting incentives that draw people into inhuman systems. Marge Piercy’s He, She, and It is cyberpunk by the person who originally inspired the subgenre (in a brief scene from Woman on the Edge of Time), and is serious both about the complexity of society and the inhumanity of corporations. Malka Older’s Infomocracy shows how informational input enables different types of governance—and also has ninja fact-checkers.
Share your thoughts on paperclip-maximizing workplaces—or your recommendations for cool policy SF—in the comments below![3]
[1] Simple example: in DC a while back, metro line inspectors were evaluated based on quotas that could not be met by doing thorough, accurate safety checks. Results are left as an exercise for the reader—and for the inspectors, who consistently solved the exercise as you’d expect.
[2] I can’t find an online summary of Milgram’s full set of experiments, only of the most famous one, with the “victim” in the next room but audible, and 60-65% compliance. So these figures are approximate and based on my memory of his book. What I recall is that if the victim is present and visible, 90+% of people will refuse orders and often physically intervene if the experimenter tries to continue with shocks. And if the victim can’t be heard at all, most people will follow orders to the end.
[3] Or if you, like my wife, want to argue with me about whether the Jedi took negative feedback that much better than Darth Vader.