An internal memo from OpenAI lays out five strategic priorities, including the new Spud model, the Frontier enterprise platform, and the DeployCo deployment engine.


On April 13, OpenAI's Chief Revenue Officer, Denise Dresser, sent a four-page internal memo to all employees, which The Verge obtained and published in full. Dresser emphasized that the market has shifted from a straightforward contest of model capability to an ecosystem battle over workflow integration and agent platforms. The following is a summary and translation by Dongqu.


As we begin the second quarter, I want to start with where our focus should always be: our customers.

I have been speaking with leaders of major corporations, the most influential startups, and key venture capital firms. The message is clear: people are excited about the products we are building, and they want a deeper understanding of our roadmap so they can plan with confidence and stay ahead of the market.

Enterprise AI is entering a more mature stage. While raw technical capability is important, it is no longer sufficient on its own. Customers need adaptability: how AI can be integrated into their workflows, knowledge bases, controls, and daily operations, and how it can be effectively deployed, trusted, and continuously improved over time. They need a system they can trust and build upon.

We are building that system: the best models for work, an agent platform, deep integration with business context, and the ability to deploy and improve at scale. Our customers are validating this direction in the clearest way possible: multi-year, multi-product, nine-figure deals are increasing, and existing customers are expanding as they standardize on our capabilities across more business units.

I am incredibly proud of the team's performance. We have earned trust through the depth, quality, and dedication of our work. The opportunity ahead is enormous, and our biggest constraint right now is not demand but capacity. That is why talent remains a top priority in the second quarter. We will continue to recruit deliberately, maintain a high bar, and build an exceptional team that meets customer expectations and inspires one another.

We have everything we need to expand our lead right now. We have the computing power, the products, and the customer pull. Now is the time for us to fully commit and clearly and confidently demonstrate that OpenAI is the platform that enterprises should trust most when building, deploying, and scaling AI.

Here are five customer-centric priorities that I want us to focus on:

1. Winning the Model Layer for Work

Businesses buy business outcomes. They pay for models that help employees write faster, analyze better, code more efficiently, support customers more effectively, and make higher-quality decisions. They pay for increased revenue per employee, shorter cycle times, lower support costs, and better execution.

Spud is a major step toward the next generation of intelligence for work. Early customer feedback has been overwhelmingly positive. Spud is not only our smartest model to date; it also excels at everything that matters for high-value professional work: stronger reasoning, better understanding of intent and dependencies, stronger execution, and more reliable output in production environments.

Better models lift the entire stack. Spud will significantly strengthen all of our core products, expanding the workflows we can cover and giving customers another reason to build on our platform. This is our iterative deployment strategy in practice: push the frontier, ship it in real products, learn from real-world use, and fold those lessons back into better systems on the road to super apps.

Our computing power advantage enables us to deliver continuous leaps in capabilities. Customers have already experienced this in real-world products: higher token limits, lower latency, and more reliable execution in complex workflows.

Every advancement in computing power allows us to train stronger models, serve more needs, and reduce the cost per unit of intelligence. This is the enduring business leverage.

2. Winning the Agent Platform Layer

The market has shifted from "prompts" to "agents." This shift presents a huge opportunity for us.

Customers need systems that can reason, use tools, operate across workflows, and function reliably in real business environments. This means orchestration, control, observability, security, integration, and governance.

Frontier is how we own the platform layer. We need to position Frontier as the default platform for enterprise agents: the core intelligence layer that enterprises use to build, deploy, manage, and scale systems.

This is where our strengths can create a synergistic effect. Frontier directly links model intelligence to agent performance. As the model improves, the platform becomes more valuable. As the platform is embedded, switching costs also increase. As customers run more workflows through this system, OpenAI will become harder to replace and occupy a more central position in how work is performed.

This is how we move from being a product vendor to being operational infrastructure.

3. Expanding the Market Through Amazon

Our partnership with Microsoft is foundational to our success. But it has also limited our ability to meet customers where they are, and for many enterprises that is Amazon Bedrock.

Since we announced the partnership at the end of February, customer demand for this offering has been phenomenal. We are working hard to build it into a scalable distribution channel.

The Amazon Stateful Runtime Environment is critical because it upgrades the product surface while expanding access. By enabling memory, context, and continuity across interactions, we move from stateless model access to systems that can operate reliably over time and across complex business processes.

This will expand our market in three ways:

  1. Reduce adoption friction for AWS-native customers.

  2. Strengthen our position among regulated and security-sensitive buyers, as it operates within their AWS environments and existing governance models.

  3. Deepen our platform integration, extending from model access to a production execution environment for long-running, multi-step agents.

4. Selling the Full AI-Native Technology Stack

Customers need a platform, not point solutions. That is exactly what we offer:

  • ChatGPT for Work is the gateway to knowledge work.

  • Codex is the system for software and agent development.

  • The API is the intelligence engine embedded in customer products and workflows.

  • Frontier is the agent platform.

  • Amazon Runtime extends our reach into production-grade stateful execution.

This breadth is a significant strategic advantage because our clients start from diverse points in the market. Some begin with employees, some with developers, some with internal systems, and some with external products. Our task is to meet them at whatever point they enter the market and then expand from there across the entire technology stack.

This is the flywheel we should build around: better models lead to more use, more use leads to deeper integration, deeper integration leads to multi-product adoption, and multi-product adoption makes us hard to replace.

We should stop thinking like a company with separate product lines and start thinking like a platform company with multiple entry points and one integrated enterprise solution.

5. Mastering Deployment

The biggest bottleneck in enterprise AI is no longer whether the technology works, but whether companies can deploy it successfully at scale.

DeployCo gives us a way to translate product demand into repeatable enterprise transformation. It will serve as our deployment engine, helping companies prove value faster, reduce risk, and scale adoption across the organization.

This can be a force multiplier for everything we're building. It helps customers move faster, refines our feedback loops, and reveals repeatable deployment patterns. It improves product, sales, and customer success simultaneously. And, together with our Frontier Alliance partners, it provides us with a solid path to scale execution across the market.

The companies that win in enterprise AI will not only have the best models, but also the ability to deploy those models in real workflows and real organizations, generating real, measurable value. We should be among the world's best in this regard.

A Note on the Competitive Landscape

The level of competition in this market is unprecedented. I believe that is ultimately a good thing: it means the opportunity is enormous. There is no doubt it will sometimes bring noise, volatility, and distraction. But competition motivates us, makes us better, and, most importantly, benefits our customers. On that point, as you have heard me say many times, the priority is time spent with customers. When we listen to their problems and ambitions, and focus on how to invest in and help them, everything else quiets down and becomes clear.

That being said, there are a few things worth remembering, especially regarding Anthropic:

  • Their narrative is built on fear, restriction, and the idea that "a select few should control AI." Our positive message will ultimately prevail: build robust systems, implement the right safeguards, expand access, and help people achieve more.

  • Their strategic miscalculation on computing power is now showing up in their products. Customers feel it through rate limiting, poor availability, and inconsistent performance. We saw the exponential growth curve of compute much earlier, moved faster, and now hold a genuine structural advantage.

  • Their focus on coding gave them an early foothold. But you do not want to be a single-product company in a platform war. As AI expands from developers to every team, workflow, and industry, that narrow focus can become a real liability.

  • Their publicly stated revenue run rate is inflated. Their accounting makes revenue appear larger than it is, including by grossing up revenue shares with Amazon and Google. Our analysis suggests this overstates their run rate by approximately $8 billion (against the claimed $30 billion). We report Microsoft's revenue share using the net method, which is more in line with the standards we are required to follow as a public company.
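The gross-versus-net arithmetic in the last point can be made concrete with a short sketch. The figures mirror the memo's claims (a $30 billion gross run rate with roughly $8 billion of grossed-up partner revenue share); the function names and the implied $22 billion net figure are illustrative assumptions, not disclosed or audited numbers.

```python
# Illustrative sketch of gross vs. net revenue recognition, using the
# memo's figures. The split is an assumption for illustration only.

def gross_run_rate(customer_billings: float) -> float:
    """Gross method: report the full amount billed to customers,
    including the portion passed through to cloud partners."""
    return customer_billings

def net_run_rate(customer_billings: float, partner_share: float) -> float:
    """Net method: report only the revenue retained after deducting
    the partner revenue share."""
    return customer_billings - partner_share

claimed = 30e9        # publicly claimed annualized run rate (gross, per the memo)
partner_share = 8e9   # revenue share the memo says was grossed up into that figure

print(f"Gross method: ${gross_run_rate(claimed) / 1e9:.0f}B")   # $30B
print(f"Net method:   ${net_run_rate(claimed, partner_share) / 1e9:.0f}B")  # $22B
```

Under these assumptions, the same business reports a $30 billion run rate on a gross basis but $22 billion net, which is the gap the memo attributes to the competitor's accounting.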

Let's Go Build

Finally, one of the best things about this job is working with like-minded people. I'm proud of this company and our team. It's an honor to work alongside all of you at this moment, at the heart of the future. Let's stay focused, operate as one team, hold ourselves to the highest standard, and row in the same direction.

This market belongs to us; let us act accordingly.
