We must build an open AI ecosystem
Why building free software is critical for avoiding a dystopian future with powerful AI.
We’re heading toward a world where AI agents manage large parts of our personal and business lives. They will make important decisions on our behalf and manage our knowledge.
One of the core problems of our time is the increasing centralization of power: as we get increasingly powerful AI agents (and we will), it will really start to matter who controls these AI systems. If we’re not careful, we may allow a small number of for-profit companies to control the power of AI agents and our access to those agents.
AI companies do not care about you
At its core, it’s a question of power. When AI agents are owned and controlled by a huge company, that company will eventually choose profits over you.
Imagine you’re having a medical emergency and you ask your agent which doctor you should go to. One doctor is not very good and barely certified, but they paid OpenAI $1M to advertise their services to you. Will you trust OpenAI to reject the $1M?
We cannot meaningfully change these agents, we cannot peek inside their brains to see how they made their decisions, and, critically, we cannot ensure that they are working toward our goals.
OpenAI is literally building this right now. They’ve said as much, and the client code was recently updated to include references to ad-related features.
When there is a huge power asymmetry--like there is between an individual consumer and a $100 billion company--guess which goals end up getting prioritized?
AI companies will profit by exploiting us
When VCs and companies are pouring hundreds of billions of dollars into something, you have to ask yourself: are they all crazy, or do they actually see a plausible path to being able to exploit the world so much in the future that they’ll be able to not only recoup that money, but make even more?
The definition of investment is to spend money now in order to make a profit later. You have to wonder: how will AI companies make so much profit that they can justify spending upwards of a trillion dollars?
The Stargate Project alone is estimated at about $500 billion. If we assume that represents half of the total spend over the next decade (about $1 trillion), and investors want to double their money, AI companies would need to bring in roughly $2 trillion--about as much as everyone in Australia spends in an entire year (Australia, the 15th-largest country by GDP, has a GDP of about $2 trillion). That’s an astonishing amount of money.
It’s simply not possible to make that money by taking a small slice of the total spend--these companies are not looking to make a few percent, like an infrastructure or utility company (power companies, for example, have around a 10% margin). These companies are going to try to make a 90% margin--they want to sell you something for $10 that costs them $1. So when you ask an AI agent to do something in the future, it will probably cost you $10 for work that cost OpenAI $1 to do. That’s literally their plan: make great margins, and expand to do a huge fraction of the world’s knowledge work. Now you see why VCs and companies are so excited.
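As a back-of-envelope sketch, here is the arithmetic above in code. All figures are the essay’s rough assumptions (Stargate as half the decade’s spend, a 2x return target, 10% vs. 90% margins), not measured data:

```python
# Back-of-envelope check of the figures in the text above.
# All numbers are the essay's rough assumptions, not measured data.

stargate_cost = 500e9            # estimated Stargate Project cost (USD)
total_spend = 2 * stargate_cost  # assume Stargate is half the decade's spend
target_multiple = 2              # investors hope to double their money

required_revenue = total_spend * target_multiple
print(f"Required revenue: ${required_revenue / 1e12:.1f} trillion")  # about $2 trillion

# Margin comparison: utility-style vs. software-style pricing
# for work that costs the provider $1 to perform.
cost_to_serve = 1.00
utility_price = cost_to_serve / (1 - 0.10)   # ~10% margin
software_price = cost_to_serve / (1 - 0.90)  # ~90% margin
print(f"Utility-style price:  ${utility_price:.2f}")   # $1.11
print(f"Software-style price: ${software_price:.2f}")  # $10.00
```

The margin formula `price = cost / (1 - margin)` is what turns a $1 cost into the $10 price mentioned above: a 90% margin means only 10% of the price covers cost.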
But it doesn’t have to be this way.
The future of AI must be open
If we can make free software and open alternatives to these closed AI systems, then these closed AI model providers won’t have any real leverage or pricing power. If people are free to switch to their own open models at any time, or to run agents on whatever provider they want--including even their own local hardware--then these large companies will be forced to play the (appropriate) role of infrastructure. If they really do want to make “intelligence too cheap to meter,” then we want them making money like utilities (10% margins)--not like software businesses (90% margins).
Thankfully, open models are already a surprisingly good alternative--the latest open-source GLM5.1 model has higher benchmark scores than even the best available closed models (e.g., Opus 4.6 and GPT 5.4). Other models like Kimi K2.5, Qwen, and DeepSeek are close to the best closed models on many benchmarks and tasks, despite being trained for a tiny fraction of the cost of the closed models. And open models have been improving at a rate that exceeds even the closed models (which are themselves improving quite rapidly). Unlike a year or two ago, when Meta’s Llama models were pretty much the only game in town, there is now great competition at the model layer to provide the best open-weight models, with companies like Moonshot AI, DeepSeek AI, and Reflection all hard at work to create the next best, most useful open models and make them freely available to the world.
These open models are already having a powerful effect on the market. In just a short time, they’ve turned LLM inference from a place where companies had hoped to make healthy margins to a place where even many of the closed AI companies are effectively forced to sell their models at a loss to compete.
Because of the presence of open models, there’s always the option of just buying your own hardware (or renting from a variety of cloud providers all in cutthroat competition with one another) and running an open model, which puts strong downward pressure on the price of AI inference.
But the future of AI is about more than just open-weight models and inference: we need viable open alternatives for the entire stack, all the way from pre-training to the final systems for AI agents, and for usefully coordinating those agents.
This is why I’m focused on free software rather than focusing too narrowly on, say, AI models or AI agents. Certainly, AI agents and their supporting infrastructure seem like one of the next most critical places where we need compelling open alternatives (which is why we’ve built Mngr, and are planning to make its source available in the future). But it’s more important that we create an entire, thriving ecosystem of open, free software for the entire lifecycle of training and deploying AI, lest some part of that chain become captured by a small number of for-profit companies.
One of the nice things about this effort is that it is gradual: it’s not necessarily a binary outcome where either 100% of AI systems are closed or 100% are open. Rather, each little library we make, each task we make more transparent, and each new open source project contributes to making it easier to access AI technologies freely.
Collectively, our efforts towards making open and free software eventually add up to shift the overall balance from closed to open.
What an open AI ecosystem could do for us
If we can keep free and open AI software components competitive with closed alternatives across the stack or, better yet, make them unarguably better (which I think is possible!), then we invite far better futures for all of us.
Rather than a world where these AI systems are black boxes, we can have one where anyone can open them up and understand how they work (and improve them!).
Rather than a world where we pay too much for access to the critical tools and technologies of the future, we can have a competitive market that provides useful AI tools efficiently and cheaply.
Rather than a world where money and power continually accumulate to a small number of the largest companies, we can have a world where anyone is free to run AI agents to build their own ideas and bring their own visions to life, and even make a living doing so.
Instead of a world with a small number of huge data centers that pose national security and ecological risks, we can distribute AI compute across systems owned and controlled by individuals, which also promotes a more stable geopolitical climate.
Instead of a world with a monoculture of models, we can have a diversity of local and personal models better suited to the individuals and communities that use them.
So let’s ruthlessly copy, duplicate, and commodify these closed AI systems. Let’s share data and collaborate on building open systems that benefit everyone in the world, rather than creating a brittle world where our creativity and productivity are constrained to what is safe and profitable for the largest companies.
Sure, in these worlds, some of the initial investors might not make the huge returns they were hoping for. But the rest of us will be far better off, and it’s not our responsibility to guarantee returns on such risky investments.
We can create an explosion of diverse ways of thinking, creating, and building with software. We can create a world where our future AI tools are directly under our own personal control, and where we can spend time working on the things that we care about--without worrying about losing access to critical infrastructure because some company finds our work displeasing or unprofitable.
We can build good futures with AI.
To do so, we need to make AI software part of the public commons, and not let it be something locked behind a private paywall.
So go forth, and make (AI-related) software free.
