Author: Daniel Barabander
Compiled by: TechFlow
TechFlow Dive: Three years ago, Cursor was a VS Code fork running on the OpenAI API. Today, it has released a self-developed model that beats Claude Opus 4.6 on key benchmarks at one-tenth the price.
Starting from this case, this article systematically answers one of the most important strategic questions on the internet: when should an API be opened, and when should it be closed? The conclusion is a warning to everyone building platforms.
The full text is as follows:
Co-authored with Elijah Fox (@PossibltyResult).
In early March, Cursor released Composer 2—a proprietary coding model built on an open-source foundation model that outperforms Claude Opus 4.6 on key benchmarks at one-tenth the cost. Three years ago, Cursor was a VS Code fork that ran entirely on the OpenAI API.
Cursor's journey from a dependent customer to a true competitor is a microcosm of one of the most important strategic questions on the internet: When should a company open up its capabilities through APIs, and when should it remain closed?
We've developed a framework to answer this question, and it hinges on two questions. First: will opening your API erode your competitive advantage? If so: can you find a competitive advantage somewhere else?
Whenever a company opens up its intellectual property through an API, it risks having its competitive advantage eroded through demand aggregation. Simply put: competitors can use that intellectual property to bootstrap their own products, and once they have aggregated enough demand, they can vertically integrate and cut off the API. Netflix did exactly this: it first licensed film and television content, then, once it had a user base large enough to amortize the huge fixed costs, produced its own series, such as "House of Cards."
The truly dangerous scenario is when the API's outputs can be used directly as inputs, compounding quality improvements in competing products. This is a double whammy: competitors can use the API both to bootstrap and aggregate demand and to directly improve their own production process. This is precisely what is happening in AI. While OpenAI and Anthropic contractually prohibit API customers from using model outputs to train competing models, they cannot prevent companies like Cursor from leveraging frontier models to bootstrap the workflows that collect proprietary product data and improve their own models over time.
This appears to be exactly what happened with Composer 2. Cursor used foundation models like Claude and GPT to aggregate enough demand to reach roughly $2 billion in annualized revenue, then leveraged the open-source foundation model Kimi K2.5—together with continued pre-training and reinforcement learning on data from its own IDE—to build a state-of-the-art coding model.
When such output-as-input dynamics exist, API providers have only two options: shut down the API to stop the bleeding, or stay open and find complementary assets that can serve as a moat.
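The two-question framework above is effectively a small decision tree. A minimal Python sketch (the function name, arguments, and return labels are our own illustrative choices, not terms from the article):

```python
# Illustrative sketch of the article's two-question framework for API
# providers. The argument and return-value names are our own labels.

def api_strategy(erodes_advantage: bool, has_alternative_moat: bool) -> str:
    """Recommend an API posture.

    erodes_advantage: would open access let competitors aggregate your
        demand (or, worse, feed your outputs back in as inputs)?
    has_alternative_moat: is there a complementary asset (Lindy effect,
        network effects, economies of scale) that survives openness?
    """
    if not erodes_advantage:
        # Openness costs little; keep the ecosystem benefits.
        return "stay open"
    if has_alternative_moat:
        # The Morpho path: stay open, lean on the complementary moat.
        return "stay open with a complementary moat"
    # The Twitter path: close up and stop the bleeding.
    return "restrict access"


# The article's cases, mapped onto the framework:
print(api_strategy(True, False))  # Twitter's proprietary social graph was exposed
print(api_strategy(True, True))   # Morpho: forced open, moat built elsewhere
```

This is only a compact restatement of the prose, but it makes the ordering explicit: the second question (alternative moats) only matters once the first (erosion risk) is answered yes.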
Twitter is the prime example of the first path. It was initially known for a generous, freely accessible API—at its peak, developers could pull 500,000 tweets per month for free. But Twitter shut down most of its interfaces because the API exposed its moat: its proprietary social graph. Today the API is effectively closed: access is severely throttled, expensive at any meaningful scale, and structured so that building serious products requires tightly controlled B2B integrations.
The second path is to keep the API open and supplement it with another source of power. No industry understands this better than crypto—crypto APIs are open by construction, so the only way to survive is to find a moat elsewhere.
The lending protocol Morpho provides a representative example. It originated by integrating the open APIs of Aave and Compound and building an optimizer product on top of them. It then used the outputs of these protocols—their aggregated liquidity—as input to bootstrap its own platform. Thus, Cursor and Morpho follow strikingly similar paths in leveraging APIs to build competing products.
However, the truly interesting development is what Morpho did next. Since Morpho is itself an open API, it needed a moat to compensate for the lack of switching costs. So it leaned in, making the protocol as aggregatable as possible and building its moat through other means—the Lindy effect, and the network effects generated by deep liquidity from many lenders and borrowers.

Following this framework, we can make a prediction: over time, companies that develop foundation models will likely choose the first path, gradually restricting API access to their most advanced models.
To believe in the second path, you must assume that models like Opus and GPT have sources of power strong enough that they can remain open—letting competing models use their outputs as inputs—without third parties defecting. That means model companies would be betting on other moats: the Lindy effect (if they believe users won't want to rebuild trust in new models), developer network effects (if they believe users will build an ecosystem that depends heavily on their API's openness), or economies of scale (if they believe maximizing API calls will let them amortize the fixed costs of training frontier models).
However, current evidence points in the opposite direction. "Model of the month" dynamics remain strong, with users readily migrating to the best available model—a trend we saw again in the recent surge in Claude usage after the release of Opus 4.5. At the model layer, there are no clear signs of developer network effects: interoperability between APIs is increasing, not decreasing, and the surrounding tool ecosystem actively resists lock-in, deliberately making it easy to switch vendors. Nor are training-phase economies of scale sufficient as a moat any longer, since distillation lets competitors train comparable models at far lower cost. Without alternative sources of power, foundation-model companies are likely to reserve only limited open access for enthusiasts and focus instead on B2B deployments with strict usage controls and monitoring. Increasingly, the winning move will be to refuse to play the game at all.
This is a worrying outcome, because the current explosion of consumer AI products is built on top of these model providers. It also opens the door to reverse positioning: if leading labs become increasingly restrictive with access, there is value to be captured by backing competitors with weaker moats but a strong commitment to staying open.
Thanks to @systematicls (@openforage) and @AlexanderLong (@Pluralis) for their thoughtful feedback on this article.