Decentralization or dystopia: pick 1
Why the combination of powerful AI + concentrated power and capital could lead to unhappy futures--and what we can do about it
It’s looking increasingly likely that a few large companies could end up with control over most of our powerful future AI systems--which would be bad news for the rest of us.
Just imagine what such a future could look like:
A world where models have grown increasingly capable of performing complex cognitive tasks, gradually taking over more and more of the knowledge work that humans do, until eventually OpenAI/Google/Anthropic control over 90% of our civilization’s intellectual labor.
In this world, would these companies allow for paid product promotion? What about paying to promote ideas? Or paying to promote political candidates?
Would they allocate the bulk of our civilization’s intellectual labor towards noble ends like science and the arts, or would they, as all corporations are legally required to[1], prioritize profit--even if that means we end up with more AI slop instead of cures for cancer?
Would they maintain a culture of openness and benevolence, or would they roll back “inconvenient” cultural values (like “Don’t Be Evil”)?
Even if the products and services were initially great, wouldn’t they slowly undergo the same process of enshittification we saw with web services as power consolidated?
Even if these companies pledged to make the world a better place, should we trust their claims after they’ve gone back on almost every single thing they promised in the past?
Even if they actually created a program for Universal Basic Income, how likely is it that those payments would conveniently be just enough to cover our yearly ChatGPT Plus subscriptions--which would, of course, happen to be required just to function in this brave new world?
It’s also curious that basically the only people in the world advocating for powerful, centralized AI systems are the people who stand to financially benefit from such an arrangement--I don’t see any academics, non-profits, activists, labor unions, or regular people crying out for $500B+ worth of datacenters in order to make the world a better place.
For the rest of us, who are very likely to lose our jobs to automation if these systems work as well as promised, centralized AI systems represent the concrete mechanism by which we will be permanently disempowered. As AI capabilities increase, the value of human labor (both mental and, with robotics, physical) will necessarily shrink. Put simply: if AI systems are as powerful and capable as promised, the rest of us will no longer be able to make money by working.
Critically, no one at these AI companies has any good ideas for how to prevent the massive unemployment that would result from them succeeding at their mission to create truly capable AI systems.
And unfortunately for the rest of us, “hope that powerful AI is impossible” is increasingly looking like a really bad bet. AI capabilities have been advancing at an alarming rate, and they’re showing no signs of stopping any time soon. As much as it might be nice for AI progress to “pause” while the world catches up, pausing technological development currently seems about as hard as stopping the sun from rising each day[2].
Past technological revolutions created whole new classes of jobs, and some argue that we’ll adapt just like before. But unfortunately, this time really is different: if we have computers with general-purpose intelligence, then (almost by definition) we’ll be able to create software systems to do those new jobs, too--and likely do so faster than we could ever retrain humans to do the same.
Such a world could be quite demoralizing. Imagine being rejected from every job you apply for, being unable to retrain because new roles disappear before you can even finish training, and competing against machines that are better than us at coming up with new business ideas (and better at executing them)--all while being left with literally no way to earn a living wage.
If the world continues on its default trajectory, it seems likely that most of us will end up disempowered and depressed, unable to meaningfully affect the world or the future.
What we can do about it
It’s not too late to change our fate.
Remember: it’s the combination of powerful AI and centralization that leads to these dystopian futures. Rather than trying to stop or slow technological change, we can focus on changing the degree to which the resulting power is centralized--and thus who benefits from these technologies.
AI doesn’t have to run in a giant datacenter controlled by a huge company--you can already run some very capable models on your own laptop, and nothing stops you from renting cloud compute capable of running even the largest open models.
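To make that concrete, here’s a minimal sketch of what local inference can look like today, assuming you’ve installed the ollama Python client and pulled an open model via the Ollama CLI (the model name below is just an example):

```python
# Minimal local inference sketch -- assumes the `ollama` Python client
# (pip install ollama) and a model already pulled via `ollama pull`.
# Nothing here touches a cloud API: the weights, the prompt, and the
# answer all stay on your own machine.
import ollama

response = ollama.chat(
    model="llama3.2",  # example name; use whichever open model you've pulled
    messages=[{"role": "user", "content": "Explain why local AI matters."}],
)
print(response["message"]["content"])
```

The same pattern works on rented cloud hardware when a model is too large for a laptop.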
You don’t have to access AI only after accepting an onerous Terms of Service agreement that prevents you from developing your own AI tools or competing with these large companies. We could collectively agree that such agreements are obviously monopolistic and pass laws to actually make them illegal--or we could simply use open models, the most recent of which rival (and on some benchmarks exceed) the performance of the best closed models.
AI agents don’t have to be systems corrupted by the incentives of some large for-profit company that is selling you access--you can run open agents that answer to your interests alone, and whose behavior can be audited and inspected to ensure they are never trying to manipulate or overcharge you.
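What does “auditable” mean in practice? Here’s an illustrative sketch--the function names and log format are hypothetical, not any real framework--of an agent loop that records every decision and tool call to a local log you can read yourself:

```python
# Illustrative sketch of an auditable agent loop -- hypothetical code,
# not a real framework. The point: when the agent runs on your own
# machine, every action it takes leaves a trace that you control.
import json
import time

AUDIT_LOG = "agent_audit.jsonl"

def audit(event: str, detail: dict) -> None:
    """Append a human-readable record of each step to a local log file."""
    record = {"time": time.time(), "event": event, **detail}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def run_agent(task: str, decide, tools: dict) -> str:
    """Toy agent loop. `decide(task)` stands in for any local model call
    returning {"tool": name, "input": args}; `tools` maps names to functions."""
    audit("task_received", {"task": task})
    action = decide(task)
    audit("model_decision", action)
    if action.get("tool") in tools:
        result = tools[action["tool"]](action["input"])
        audit("tool_result", {"tool": action["tool"], "result": result})
        return result
    audit("refused", {"reason": "unknown tool"})
    return "refused"
```

A real agent is far more involved, but the property that matters--nothing happens without leaving a record you can inspect--requires only that the loop runs on hardware you control.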
The profits from creating and selling valuable products and services don’t have to accrue to the base model or agent providers--you can be the direct beneficiary of the value created by your own work.
If, rather than a world where AI systems are centrally controlled, we create a world where powerful AI agents are under the direct control of each of us, we might be able to make these technologies benefit everyone, not just the people who can afford to construct massive datacenters and spend billions of dollars training their own AI systems. If we can make AI something that we own, something that runs locally on our own hardware, something that is fully aligned with our own interests, we will all have roughly equal access to these powerful new technologies--and we will be able to use them not just to do work and make money for ourselves, but to work together to make the world a better place.
It’s actually just as easy to imagine a future where, rather than being subservient to centralized AI mainframes, we have AI deployed broadly across each of our devices. Much like the personal computing revolution, we can imagine a personal AI revolution that enables everyone to use these tools for whatever they want, whether that’s making fun new games, learning about new topics, or coordinating to better solve some of the huge collective action problems facing the world today.
Critically, decentralized access to powerful AI gives us the ability to use AI tools in order to compete directly with the providers of those tools, which becomes especially important as those companies start to make truly “general” agents.
The difference between these worlds is really just about control. It’s about who controls AI: you, or the companies that want to profit off of you?
If we can put AI systems directly under user control, these better futures are possible. If we can distribute both the use and the production of AI systems, we can prevent ourselves from being slowly disempowered, and instead live in a world where all of us are free to explore and innovate and build towards a brighter future.
These better futures are what I spend my days working towards. If you’re interested in helping out, there’s plenty to do--everything from working directly on software to give users full control over AI agents, to related projects for making software, AI, and data more open, more accessible, and more aligned with human interests.
If you’re interested in learning more, just follow me on Substack--I’ll be posting more about these projects in the coming months.
[1] Yes, corporations are legally required to maximize profit for their shareholders. I plan to write a future post about this, but for now, see this excellent article for the historical reasons why this happened: “Delaware Law Requires Directors to Manage the Corporation for the Benefit of its Stockholders and the Absurdity of Denying It.”
[2] There are some people who continue to advocate for “pausing” AI progress, but at least in my mind, it’s neither realistic nor even a coherent idea. There are simply too many people working on too many disparate things. To try to pause AI is to try to pause capitalism: market forces drive the pressure to continually optimize, automate, and improve our digital systems to make them more efficient, more useful, and more profitable. Given existing incentives and power structures, that’s not something the world seems prepared to meaningfully “pause.”
