One of the fastest-growing products ever, Cursor reached $100M ARR just 20 months after launch. In the two years that followed, it exceeded $300 million in ARR and continued to lead the transformation of how engineers and product teams build software. As of early 2025, Cursor has over 360,000 paying users.
Michael Truell is the co-founder and CEO of Anysphere, Cursor's parent company. He founded Anysphere with three MIT classmates and launched Cursor within three months. Truell rarely gives podcast interviews; before this, he had appeared only on the Lex Fridman Podcast. In this conversation, he shares his predictions for the "post-code era", the counterintuitive lessons he learned building Cursor, and his views on how the engineering profession will evolve.
This content comes from Lenny's Podcast. The following is a translation of the full text.
- Cursor's goal is to create a completely new way of programming: in the future, people will work with something closer to pseudocode written in plain English. They will retain strong control over every detail of the software and will be able to modify and iterate on it extremely quickly.
- "Taste" will become more and more valuable: The core of "taste" is to have a clear understanding of "what should be built".
- The best users of AI are conservative in how they use the technology: they are very good at limiting the tasks they give the AI to smaller, more specific scopes.
- The core of Cursor's interview process is a two-day assessment: the projects are simulated, but they let candidates produce real work in two days. It is not only a test of "do you want to work with this person"; it is also very important for attracting candidates. Often the only thing that draws people to an early-stage company is a team they feel is worth working with.
The main problem with chatbot programming is the lack of precision
Lenny: We've talked before about what's going to happen in the post-code era. How do you see Cursor's future development? How will technology move from traditional code to other forms?
Michael Truell: The goal of Cursor is to create a whole new way of programming, a different way of building software, where you describe your intent to the computer in the most concise way possible and define how the software should work and how it should look.
As today’s technology continues to mature, we believe we can create a whole new way of building software that is more advanced, more efficient, and easier to use than today’s. This process will be very different from the way software is written today.
I'd like to contrast this with several mainstream views about the future form of software, some of which are popular right now and which we don't quite agree with.
One view is that software construction in the future will remain very similar to today, relying primarily on editing text in formal programming languages such as TypeScript, Go, C, and Rust. Another view is that you only need to type instructions into a chatbot, let it build the software for you, and then ask it to modify things whenever you want. This style of chatbot is like having a conversation with your engineering department.
We think both of these visions are problematic.
The main problem with chatbot-style programming is its lack of precision. If you want people to have full control over the look and functionality of their software, you need to give them more precise ways to specify the changes they want, rather than just saying "change this part of my app" to a bot in a chat box and having it rework the whole thing.
On the other hand, the worldview that nothing will change is also wrong, because technology will only get more powerful. In the "post-code" world we envision, software logic will be expressed more like English.
You can imagine it existing in a more standardized form, moving toward something like pseudocode. You can write the logic of the software, edit it at a high level, and navigate it easily. It won't be millions of lines of incomprehensible code; instead it will be clearer, easier to understand and locate. We are working to evolve complex symbols and code structures into a form that is easier for humans to read and edit.
In the post-code era, taste will become more and more valuable
Lenny: That's very profound, and I want to make sure everyone understands your point. The shift you imagine is that people no longer see code and no longer need to think in terms of JavaScript or Python. Instead, code is replaced by a more abstract form of expression, closer to pseudocode written in English sentences.
Michael Truell: We think it will eventually get to that stage. We believe getting there requires the participation and push of today's professional engineers. In the future, people will still be in the driver's seat.
People have strong control over every detail of the software and will not give up this control easily. People also have the ability to make changes and iterate extremely quickly. The future will not rely on engineering that happens in the background, slowly, and takes weeks to complete.
Lenny: This brings up a question. For current engineers, or those who are considering becoming engineers, designers, or product managers, what skills do you think will become increasingly valuable in the "post-code era"?
Michael Truell: I think taste will become more and more valuable. When people talk about taste in the software field, they tend to think of visual effects, smooth animations, color matching, UI, UX, and so on.
Visuals matter a lot for a product. But as I mentioned, I think the other important half lies in the product's logic and how it works.
We have many tools to design visual effects, but code is still the best form of expression for software execution logic. You can use Figma to show the effect, or just sketch it out in your notes. But the logic can only be clearly presented when you have a real working prototype.
In the future, engineers will become more and more like "logic designers". They need to express their intent precisely, shifting from the behind-the-scenes "how to implement" to the high-level "what to build" and "what it should be", which means that "taste" will become more important in software development.
We are not there yet in software engineering. There are plenty of funny, thought-provoking jokes circulating on the Internet about people over-relying on AI to build software and ending up with obvious defects and broken functionality.
But I believe that software engineers in the future will not have to focus as much on detail control as they do now. We will gradually shift from being rigorous and meticulous to paying more attention to "taste".
Lenny: This reminds me of vibe coding. Is this similar to what you described as a way of programming where you don't have to think too much about the details and just let things flow naturally?
Michael Truell: I think the two are related. What people are talking about nowadays as vibe coding seems to me to describe a controversial creative mode of generating large amounts of code without really understanding its details. This model will bring many problems. Without understanding the low-level details, you’ll quickly find yourself creating something that becomes too large and difficult to modify.
What we are actually interested in is how people can perfectly control all the details without fully understanding the underlying code. This solution is closely related to vibe coding.
We currently lack the ability to let "taste" truly guide software construction. One problem with vibe coding or similar modes is that while you can create something, a lot of it is clumsy decisions made by the AI, and you don't have full control over it.
Lenny: You mentioned "taste". What does it mean specifically?
Michael Truell: Having a clear idea of what should be built. It is also becoming easier and easier to translate that idea, this is the software you want to create, this is how it looks, this is how it behaves, into something real.
That's unlike today: when you and your team have a product idea, you still need a translation layer, which takes a lot of effort and labor to convert the idea into a form that computers can understand and execute.
"Taste" has little to do with UI. Perhaps the word "taste" is not quite appropriate, but its core is to have the correct understanding of "what should be built".
Cursor was born from exploring a problem
Lenny: I want to go back to the origin of Cursor. Many listeners may not know how it was born. You are building one of the fastest growing products in history. Cursor is profoundly changing the way people build products and even changing the entire industry. How did it all start? What are some memorable moments from your early development?
Michael Truell: Cursor was born from our exploration of a problem, and to a large extent from our thinking about how AI would get better over the next ten years. There were two key moments.
The first was when I first used the beta version of Copilot. It was the first time we had come into contact with a practical AI product that actually helped, rather than a flashy demo. Copilot was one of the most useful development tools we had adopted up to that point, and we were really excited about it.
The other was that companies such as OpenAI released a series of papers on model scaling. Those studies showed that, even without disruptive new ideas, AI capabilities would keep getting stronger simply by scaling up model size and training data. By the end of 2021 and the beginning of 2022, we were confident about the prospects for AI products; the technology was destined to mature.
When we looked around, we found that although many people were talking about how to build models, few people were going deep into a specific field of knowledge work and thinking about how that field would progress as AI technology advanced.
This got us thinking: As this technology matures, how will these specific areas change in the future? What will the final work look like? How will the tools we use to get work done evolve? What level of model needs to be achieved to support these changes in work forms? Once model scaling and pre-training cannot be further improved, how can we continue to push the boundaries of technological capabilities?
The mistake we made initially was that we chose a less competitive, more boring field. No one pays attention to such boring areas.
At that time, everyone thought programming was hot and interesting, but we felt that many people were already doing it.
For the first four months, we were actually working on a completely different project: automating and augmenting mechanical engineering, building tools for mechanical engineers.
But we ran into a problem right away. My co-founders and I are not mechanical engineers. Although we have friends in the field, we were extremely unfamiliar with it; we were like blind men touching an elephant. For example, how could we actually apply existing models to mechanical engineering? We concluded at the time that we would have to develop our own model from scratch. That was tricky, because there was little public data online about 3D models of the various tools and parts, or about the steps to build them, and it was equally difficult to obtain that data from the sources that had it.
But eventually we came to our senses and realized that mechanical engineering wasn’t something we were very interested in and it wasn’t something we wanted to dedicate our lives to.
Looking back at the field of programming, quite some time had passed and no significant changes had occurred. The people working in this space seemed out of step with the way we were thinking; they lacked sufficient ambition and vision about where the future was headed and how AI would reshape everything. Those insights led us onto the path of building Cursor.
Lenny: I really like the advice to chase seemingly boring industries where there is less competition, and sometimes that actually works. But I prefer another approach: boldly pursuing the hottest, most popular fields, such as AI programming and application development, and it turns out that works too.
You felt that existing tools lacked sufficient ambition or potential and that more could be done. I think that's a very valuable insight. Even if it seems too late in an area, even when products like GitHub Copilot already exist, if you find that existing solutions are not ambitious enough, don't meet your standards, or are flawed in their approach, there are still huge opportunities. Is that right?
Michael Truell: Absolutely agree. If we want to achieve breakthrough progress, we need to find something specific to do. The fascinating thing about AI is that there is still a huge unknown space in many areas, including AI programming.
The ceilings in many areas are very high. Looking ahead, even the best tools in any field will still leave a lot of work to be done over the next few years. Having that much open space and that high a ceiling is quite unique in software, at least in AI.
Cursor emphasizes dogfooding from the beginning
Lenny: Let's go back to the issue of IDE (Integrated Development Environment). You have several different routes that other companies are also trying.
One is to build an IDE for engineers and integrate AI capabilities into it. Another is the fully autonomous AI Agent model, like products such as Devin. A third approach is to focus on building a model that is very good at coding and strive to create the best coding model.
What made you decide that an IDE was the best route?
Michael Truell: People who are focused on developing models from the beginning, or trying to automate programming end to end, are trying to build something very different than we are.
We're more focused on making sure people have control over all decisions in the tools they build. In contrast, they are more likely to envision a future in which AI performs the entire process, or even makes all decisions.
So on the one hand, our choice was partly interest-driven. On the other hand, we always try to keep a very realistic view of the current state of the technology. We are extremely excited about AI's potential over the coming decades, but people sometimes tend to anthropomorphize these models: because they see AI perform well in one area, they assume that if it is smarter than humans there, it will necessarily excel everywhere else.
But there are huge problems with these models. Our product development has emphasized "Dogfooding" from the beginning. We ourselves use Cursor extensively every day and never want to release any features that are useless to ourselves.
We are the end users of the products ourselves, which gives us a realistic understanding of the current state of the art. We believe it is critical to keep people in the “driver’s seat” and that AI cannot do everything.
For personal reasons, we also want to give users this control. This allows us to be more than just a model company and move away from the end-to-end product development mindset that takes control away from people.
The reason we chose to build an IDE instead of a plugin for an existing programming environment is that we firmly believe programming will flow through these models, and that the way programming is done will change dramatically over the next few years. The extensibility of existing programming environments is so limited that, if you believe the UI and the patterns of programming will change disruptively, you need full control over the entire application.
Lenny: I know you are currently focused on the IDE. Maybe that's your bias, and it's what you think the future direction is. But I'm curious: do you think a large part of your work will eventually be done for you by AI engineers in tools like Slack? Will that approach be incorporated into Cursor one day?
Michael Truell: I think the ideal is that you can switch between these things very easily. Sometimes, you may want to let the AI execute independently for a period of time; sometimes you may want to pull out the results of the AI's work and collaborate with it efficiently. Sometimes it may be possible to let it run autonomously again.
I think you need a unified environment where these backend and frontend forms can all perform well. For background execution, it is particularly suitable for programming tasks that require very little explanation to accurately specify requirements and determine the correct standards. Bug fixing is a good example, but it's definitely not all there is to programming.
The nature of IDEs will change radically over time. We chose to build our own editor precisely so it can keep improving. That evolution includes the ability to take over tasks from different interfaces, such as Slack and issue trackers; the glass screen you stare at every day will also change tremendously. For now, we think of the IDE as the place where software gets built.
The most successful users of AI are conservative in their use of the technology
Lenny: I think one thing people don't fully realize when they talk about Agents and these AI engineers is that we're going to become, in effect, engineering managers overseeing a lot of workers who aren't that smart yet, and we're going to have to spend a lot of time reviewing, approving, and specifying requirements. What do you think about that? Is there any way to simplify the process?
Because it sounds like this is really not easy. Anyone who has managed a large team will understand that: "These subordinates always come to me repeatedly with work of varying quality. It's so torturous."
Michael Truell: Yeah, maybe eventually we'll have to have one-on-one conversations with all of these Agents.
We have observed that the people who are most successful with AI are also the most conservative in how they apply it. I do think the most successful users right now rely heavily on features like our Next Edit Prediction: during regular programming, the AI intelligently predicts what action you'll perform next. They are also very good at scoping the tasks they give the AI to be smaller and more specific.
Considering the time you spend reviewing code, there are two main modes of collaborating with an Agent. One is to spend a lot of time up front explaining the task in detail, let the AI work independently, and then review the AI's results; once the review is done, the task is done.
Alternatively, you can break the task down into smaller pieces. Assign only a small part each time, let the AI complete it, and then review it; then give further instructions, let the AI continue to complete it, and then review it again. It's like implementing something like auto-completion throughout the process.
However, we often observe that the users who make the best use of these tools still prefer to break tasks down and keep them manageable.
Lenny: That's rare. I want to go back to when you first built Cursor. When did you realize it was ready? When did you feel like it was time to release it and see what happens?
Michael Truell: When we first started working on Cursor, we were worried it would take too long to develop and too long to release. The initial version of Cursor was completely hand-built by us from scratch. Today we use VS Code as our foundation, much like many browsers use Chromium as their core.
But it wasn't like that at the beginning; we developed a prototype of Cursor from scratch, which involved a lot of work. We had to build many of the features a modern code editor requires ourselves, such as support for multiple programming languages, code navigation, and error tracking. We also needed a built-in terminal and the ability to connect to a remote server to view and execute code.
We developed at lightning speed, building our own editor completely from scratch and developing AI components at the same time. After about five weeks, we were fully committed to our own editor, completely throwing away our previous editor and jumping into the new tool. Once we felt it had reached a certain level of practicality, we gave it to other people to try out for a very short beta test.
It took Cursor only about three months from writing the first line of code to officially releasing it to the public. Our goal is to get the product into the hands of users as quickly as possible and iterate quickly based on public feedback.
To our surprise, we thought the tool would only attract a few hundred users for a long time, but from the beginning we had a large influx of users and a lot of feedback. The initial user feedback was extremely valuable and it was this feedback that prompted us to decide to abandon building a version from scratch and instead develop based on VS Code. Since then, we have been continuously optimizing the product in a public environment.
Launched the product in three months and achieved $100 million in ARR in one year
Lenny: I appreciate your modesty about your achievements. As far as I know, you increased ARR from 0 to $100 million in about one to one and a half years, which is definitely a historic achievement.
What do you think are the key elements to success? You just mentioned that using your own products is one of them. But it's incredible that you were able to launch a product in three months. What is the secret behind this?
Michael Truell: The first version, the one finished at the three-month mark, was not perfect. So we have always had a constant sense of urgency and always feel there is a lot we could do better.
Our ultimate goal is to truly create a new formalization of programming that can automate a lot of the coding work we know today. No matter how much progress Cursor has made so far, we feel that we are still far from that ultimate goal and there is always a lot to be done.
Many times, we don’t dwell too much on the initial release results, but rather focus on the continued evolution of the product and are committed to continuously improving and perfecting this tool.
Lenny: Was there a turning point after those three months where everything started to take off?
Michael Truell: To be honest, the growth felt quite slow at the beginning, perhaps because we were a little impatient. But the overall growth rate has continued to surprise us.
I think the most surprising thing is that this growth has actually remained a steady exponential trend, continuing to grow every month, although new version releases or other factors sometimes accelerate this process.
Of course, exponential growth feels pretty slow at the start when the base is really low, so it didn't feel like a takeoff in the beginning.
Lenny: This sounds like a case of “build it and they will come.” You just created a tool that you like, and once it was released, everyone liked it and spread it by word of mouth.
Michael Truell: Yeah, that's pretty much it. Our team put most of its energy into the product itself and did not get distracted by other things. Of course, we also spent time on a lot of other important work, like building the team and rotating user-support duties.
However, we simply set aside some of the routine work that many startups pour energy into in the early stages, especially sales and marketing.
We focus all our energy on polishing the product, first creating a product that our team likes, and then iterating based on feedback from some core users. This may sound simple, but it is actually not easy to do well.
There are many directions to explore, many different product routes. One of the challenges is staying focused and strategically choosing and prioritizing the key features to build.
Another challenge is that the space we're in represents a completely new model of product development: we sit somewhere between a traditional software company and a foundation model company.
We are developing products for millions of users, which requires us to achieve excellence in product development. But another important dimension of product quality lies in continuously deepening scientific research and model development, and continuously optimizing the model itself in key scenarios. Balancing these two aspects is always challenging.
The most counterintuitive thing is that we didn't expect to develop our own models
Lenny: What’s the most counterintuitive thing you’ve found so far in building Cursor and building AI products?
Michael Truell: The most counterintuitive thing for me was that we never expected to develop our own models at the beginning. When we first entered this field, there were already companies that had focused on model training from day one. Once we calculated the cost and resources required to train something like GPT-4, it was clear that wasn't something we were capable of doing.
There were already many excellent models on the market, so why bother copying what others had already done? Especially pre-training, which requires teaching a neural network that starts out knowing nothing to learn the entire Internet. So we had no intention of going down that path at first. It was clear to us from the outset that there were many things existing models could do that weren't being done, simply because the right tools hadn't been built. And yet we still ended up investing a lot of energy in model development.
Because every "magic moment" you experience when using Cursor comes from our custom models in some way. This process is gradual. We initially tried training our own model on a use case where none of the popular base models were a good fit, and it was a success. We then extended this idea to another use case, which also worked well, and we kept moving forward.
When developing this kind of model, a key point is to pick your target precisely and not reinvent the wheel. We don't touch areas where the top foundation models already do very well; instead we focus on their shortcomings and think about how to make up for them.
Lenny: A lot of people are surprised to hear that you have your own models. Because when people talk about Cursor and other products in this space, they often call them "GPT shells", thinking that they are just tools built on models like ChatGPT or Sonnet. But you mentioned that you actually have your own model. Can you talk about the technology stack behind this?
Michael Truell: We do use the mainstream foundation models in a variety of scenarios.
We rely more on self-developed models to deliver key parts of the Cursor experience, for example use cases that the foundation models can't handle because of cost or speed. Autocompletion is one example.
This may be difficult to understand for people who don't write code. Writing code is a unique job. Sometimes, what you will do in the next 5, 10, 20 minutes or even half an hour can be predicted by observing your current operations.
By comparison, in ordinary writing, many people are familiar with Gmail's autocomplete and the suggestions that appear when composing text messages, emails, and other text. But those features are of limited usefulness, because it is often hard to infer what you are going to write next purely from what has already been written.
But when writing code, when you modify a part of the code base, you often need to modify other parts of the code base at the same time, and the content that needs to be modified is very obvious.
One of the core features of Cursor is this enhanced auto-completion. It can predict the series of actions you will perform next across multiple files and different locations within the files.
For the model to perform well in this scenario, it must be fast enough to return a completion within about 300 milliseconds. Cost is also an important factor: every keystroke triggers thousands of inferences, and we constantly update our prediction of your next action.
This also involves a very special technical challenge: we need the model to not only be able to complete the next token like processing a normal text sequence, but also to be good at completing a series of diffs (code changes), that is, based on the modifications that have occurred in the code base, predict the additions, deletions, and modifications that may occur next.
We trained a model specifically for this task and it works very well. This part is developed entirely by us and never calls any foundation model. We don't label or brand this technology, but it is the core of Cursor.
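To make the shape of this concrete, here is a minimal, hypothetical sketch of what a next-edit prediction call with a latency budget might look like. None of these types or function names come from Cursor; they are invented purely to illustrate the idea of predicting follow-up diffs across files and dropping predictions that miss the latency budget.

```typescript
// Illustrative sketch only: hypothetical types and names, not Cursor's actual API.
// It models the problem described above: given the edits a user has just made,
// predict the next edits (diffs), possibly in other files, within a latency budget.

interface FileEdit {
  filePath: string;    // file the edit applies to
  startLine: number;   // first line replaced (1-indexed)
  endLine: number;     // last line replaced, inclusive
  replacement: string; // new text for that range ("" means pure deletion)
}

interface EditPredictionRequest {
  recentEdits: FileEdit[]; // diffs the user has already made in this session
  cursorFile: string;      // file currently being edited
  cursorLine: number;      // where the caret is right now
}

interface EditPredictionResponse {
  predictedEdits: FileEdit[]; // proposed follow-up changes
}

// Stand-in for a call to a small, specialized edit-prediction model.
async function predictNextEdits(req: EditPredictionRequest): Promise<EditPredictionResponse> {
  // Toy placeholder so the sketch runs: suggest a trivial follow-up edit near the caret.
  return {
    predictedEdits: [
      { filePath: req.cursorFile, startLine: req.cursorLine, endLine: req.cursorLine, replacement: "// TODO" },
    ],
  };
}

// Enforce the kind of latency budget mentioned above (~300 ms):
// if the model is too slow, show nothing rather than interrupt the user's typing.
async function predictWithinBudget(req: EditPredictionRequest, budgetMs = 300): Promise<FileEdit[]> {
  const timeout = new Promise<null>((resolve) => setTimeout(() => resolve(null), budgetMs));
  const result = await Promise.race([predictNextEdits(req), timeout]);
  return result ? result.predictedEdits : [];
}

// Example: ask for a prediction right after the user edits src/user.ts.
predictWithinBudget({
  recentEdits: [
    { filePath: "src/user.ts", startLine: 10, endLine: 12, replacement: "function getUserName(id: string): string {" },
  ],
  cursorFile: "src/user.ts",
  cursorLine: 13,
}).then((edits) => console.log(edits));
```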
Another scenario where we use our own models is to enhance the performance of large models like Sonnet, Gemini, or GPT, especially on the input and output sides.
On input, our models search the entire codebase and identify the relevant parts that need to be shown to these large models. You can think of it as a "mini Google Search" specifically for searching relevant content in code repositories.
At the output stage, we process the modification suggestions from these large models and use our specially trained models to fill in and refine the details. For example, the high-level logical design is done by the more advanced models, which spend some tokens to set the overall direction; other smaller, more specialized, extremely fast models, combined with some inference-optimization techniques, convert those high-level suggestions into complete, executable code changes.
This approach has greatly improved the quality of results on these specialized tasks and greatly accelerated response time. For us, speed is also a key measure of the product.
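To illustrate the shape of that pipeline, here is a minimal, hypothetical sketch; every function in it is an invented stand-in rather than anything from Cursor's internals. It shows the two stages described above: a retrieval step that picks out the relevant slice of the codebase for the large model's input, and a small, fast "apply" model that turns the large model's high-level suggestion into concrete per-file edits.

```typescript
// Illustrative sketch only: hypothetical stand-ins, not Cursor's real components.

interface CodeChunk {
  filePath: string;
  content: string;
  score: number; // relevance to the current task
}

// Stage 1 (input side): a "mini search engine" over the repository.
// In practice this might be embedding- or keyword-based retrieval; here it is a stub.
async function retrieveRelevantCode(task: string, topK: number): Promise<CodeChunk[]> {
  return []; // placeholder
}

// A large frontier model proposes the change at a high level,
// spending a few tokens on the overall direction rather than full code.
async function proposeChange(task: string, context: CodeChunk[]): Promise<string> {
  return `High-level plan for: ${task}`; // placeholder
}

// Stage 2 (output side): a small, fast specialized model expands the rough plan
// into the complete new contents of one file.
async function applyChange(plan: string, file: CodeChunk): Promise<string> {
  return file.content; // placeholder: would return the rewritten file
}

// Wire the stages together: retrieve context, get a plan, apply it per file.
async function handleTask(task: string): Promise<Map<string, string>> {
  const context = await retrieveRelevantCode(task, 20); // what the big model needs to see
  const plan = await proposeChange(task, context);      // expensive model: direction only
  const edits = new Map<string, string>();
  for (const file of context) {
    edits.set(file.filePath, await applyChange(plan, file)); // cheap model per file
  }
  return edits;
}

// Example usage with a made-up task description.
handleTask("rename getUser to fetchUser everywhere").then((edits) => console.log(edits.size));
```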
Lenny: That’s very interesting. I recently interviewed Kevin Weil, CPO of OpenAI, on a podcast, and he called this an Ensemble of models.
They also take advantage of what each model does best. Using a cheaper model can be very cost-effective. Are the models you train yourself developed based on open source models such as LLaMA?
Michael Truell: We are very pragmatic about this and don't want to reinvent the wheel. So we start from the best pre-trained models on the market, which are usually open source, and we sometimes work with large models whose weights are not publicly available. We care less about being able to directly read or query, row by row, the weight matrices that determine the output, and more about the ability to train and post-train the model.
The ceiling of AI products is like that of personal computers and search engines in the last century
Lenny: Many AI entrepreneurs and investors are thinking about one question: where are the moats and the defensibility in AI? Custom models seem like one way to build a moat. How do you develop long-term defensibility when competitors are constantly launching new products and trying to take your business?
Michael Truell: I think there are some traditional ways to build user inertia and moats. But at the end of the day, we have to keep working to build the best product possible. I firmly believe that the ceiling of AI is very high, and no matter what barriers you build, they may be surpassed at any time.
This market is somewhat different from the traditional software market or enterprise market in the past. An example of a similar market to this one is search engines in the late 1990s and early 2000s. The other is the development of personal computers and minicomputers from the 1970s, 1980s to the 1990s.
The ceiling for AI products is very high, and products iterate quickly. You can keep getting huge returns from the incremental value of every smart-person hour and every dollar of R&D spent. That state can last a long time. You'll never be short of new features to build.
Especially in the search field, adding distribution channels can also help improve the product because you can continue to iterate algorithms and models based on user data and feedback. I believe these dynamics also exist in our industry.
For us, that may be a somewhat sobering reality. But it is an exciting one for the world. Many leading products will emerge in the future, and there are far too many meaningful features still waiting to be created.
We are still a long way from our vision of five to ten years from now, and what we need to do is keep the innovation engine running at high speed.
Lenny: It sounds more like building a consumer-style moat. Continue to provide the best products so that users are willing to use them all the time, rather than forcing employees to use them by binding contracts with the entire company system like Salesforce.
Michael Truell: Yes. I think the point is that if you're in a field where there's soon to be not much valuable to do, then that's not a very good thing. But if in this field, large amounts of capital investment and the efforts of outstanding talents can continuously produce value, then you will be able to enjoy the scale effect in R&D, deepen the technology in the right direction, and build barriers.
This does have a certain consumer-oriented trend. But at the heart of it all is building the best product.
Lenny: Do you think this will be a "winner takes all" market in the future, or will many differentiated products emerge?
Michael Truell: I think the market is huge. You mentioned the IDE situation earlier. Some people who study this field look back at the IDE market over the past decade and ask, "Who makes money building editors?" In the past, everyone had their own custom setup. Only one company achieved commercial success by building a good editor, and its scale was quite limited.
Some people have therefore concluded that the future will be similar. But I think what this view misses is that in the 2010s, the potential for building editors for programmers was limited.
The company that makes money from editors focuses on simplifying codebase navigation, checking errors, and building good debugging tools. Those capabilities are valuable, but I think there is a much bigger opportunity in building tools for programmers and, more broadly, for knowledge workers across a wide range of fields.
The real challenge we face is how to automate a lot of tedious transactional and knowledge work. This will lead to more reliable and efficient productivity gains in all areas of knowledge work.
I think the market we are in is very large, far larger than people used to think of the developer tools market. In the future, various solutions will emerge, and an industry leader may also emerge. It could be us, but it remains to be seen. This company will create a universal tool that can help build most of the world's software, and it will be a large-scale enterprise with epoch-making influence. But there are also products that focus on specific market segments or specific parts of the software development life cycle.
Eventually, programming itself may move away from traditional formal programming languages and toward higher levels, and these higher-level tools will become the main items that users buy and use. I believe that there will be a dominant player in AI programming, and it will grow into an extremely large business.
Lenny: It's interesting that Microsoft was at the center of this change, with great products and strong distribution channels. You said Copilot made you realize the huge potential in this area, but it doesn't seem to have completely won the market; it even seems to be lagging. What do you think the reason is?
Michael Truell: I think there are specific historical and structural reasons why Copilot has not fully lived up to expectations.
From a structural perspective, Microsoft's Copilot project was a huge inspiration for us. They do a lot of great things, and we use a lot of Microsoft products ourselves.
But I think this market is not so friendly to mature companies. Markets that are friendly to them tend to be those with little room for innovation, that can be commercialized quickly, and that can make profits by bundling products.
In such markets, the ROI difference between products isn't that big, so it makes less sense to buy standalone innovative solutions, and bundled products are more attractive.
Another type of market that is particularly favorable to mature businesses is one in which users are highly dependent on your tool from the beginning and the switching costs are very high.
But in our field, users can easily try different tools and choose which product is more suitable for them based on their own judgment. This situation is less favorable to large companies and more friendly to those with the most innovative products.
As for the specific historical reasons, as far as I know, most of the team members who built the first version of Copilot later moved on to other companies and other things. And it is genuinely hard to coordinate all the relevant departments and people to work on one product together.
Senior engineers expect too little from AI, entry-level engineers expect too much
Lenny: If you could sit next to every new Cursor user who is using it for the first time and whisper a few words of advice in their ear to help them use Cursor better, what would you say?
Michael Truell: I think there is a problem we need to solve at the product level right now.
Many of the users who successfully use Cursor today have a certain "taste" for the model's capabilities. These users understand how far Cursor can take a task and how much instruction they need to provide. They understand the model's strengths, its limitations, what it can do, and what it cannot.
In our existing products, we have not done a good job of educating users in this area, and we have not even provided clear usage guidelines.
To cultivate this "taste", I have two suggestions.
First, don't give the model the entire task at once and then either be disappointed with the output or accept it as-is. Instead, chop the task into small chunks. You can spend the same total amount of time to get the final result, but do it in steps: specify a small task, get a partial result, and repeat, rather than trying to write one long, exhaustive instruction. That approach can easily lead to disaster.
Second, it's best to try it on a side project first, rather than using it directly for serious work. I would encourage developers who are used to their existing workflow to allow themselves more failures and to try to push the model to its limits.
They can make the most of AI in a relatively safe environment, such as a side project. Often we find that people haven't given the AI a fair chance and underestimate its capabilities.
By taking the approach of decomposing tasks and proactively exploring the boundaries of the model, we can try to break through in a safe environment. You may be surprised to find that in some scenarios, AI does not make mistakes as you expect.
Lenny: My understanding is that you need to develop an intuition to understand the boundaries of the model's capabilities and how far it can push an idea, rather than just following your instructions. And every time a new model is released, like GPT-4, do you need to rebuild this intuition?
Michael Truell: That’s right. In the past few years, this feeling may not be as strong as when people first came into contact with the large model. But this is indeed a pain point that we hope to better solve for users in the future and reduce their burden. Each model has slightly different quirks and personalities, though.
Lenny: People have been discussing whether tools like Cursor are more helpful for entry-level engineers or for advanced engineers? Do they make senior engineers ten times more productive, or do they make junior engineers more like senior engineers? Which type of people do you think would benefit most from using Cursor right now?
Michael Truell: I think both types of engineers can benefit greatly, and it's hard to say which type will benefit more.
They fall into different anti-patterns. Junior engineers sometimes become overly reliant on AI and let it do everything. But we aren't yet at the point where AI can be used end to end on professional tools, in collaboration with dozens or hundreds of people, on a long-lived, maintained codebase.
As for senior engineers, while this isn't true for everyone, adoption of these tools inside companies is often held up by some of the most senior people, for example certain developer experience teams, since they are the ones responsible for building the tools that make other engineers in the organization more productive.
We’re also seeing some very cutting-edge attempts, with some senior engineers at the forefront embracing and leveraging the technology as much as possible. On average, senior engineers tend to underestimate how AI can help them and tend to stick to their existing workflows.
It is difficult to determine which group of people will benefit more. I think both types of engineers will encounter their own "anti-patterns", but both can gain significant benefits from using these tools.
Lenny: That makes total sense; it's like two ends of a spectrum, one with expectations that are too high and one with expectations that are not high enough. It's like Goldilocks and the three bears.
The core of Cursor recruitment is a two-day assessment
Lenny: What do you wish you knew before starting Cursor? If you could go back to the beginning of Cursor and give Michael some advice, what would you tell him?
Michael Truell: The difficulty is that many valuable lessons are implicit and hard to put into words. Unfortunately, there are some lessons you really have to fail at yourself to learn, or else learn from the best people in the field.
We have a deep understanding of this in recruitment. We are actually extremely patient in recruiting. It is critical for us to have a world-class team of engineers and researchers working together to hone Cursor, both for personal vision and company strategy.
Because we need to build a lot of new things, we hope to find talents who have both curiosity and experimental spirit. We also look for people who are pragmatic, appropriately prudent, and outspoken. As the company and business continue to expand, there will be more and more noise, and it is particularly important to keep a clear head.
Apart from products, finding the right people to join the company is probably our biggest focus, and that’s why we didn’t expand the team for a long time. A lot of people say that hiring too quickly is a problem, but I think we hired too slowly at the beginning and we could have done better.
The recruiting approach we ended up using that worked very well for us was to focus on finding what we considered to be world-class talent, sometimes spending several years to recruit them.
We gained valuable experience in many areas, such as how to judge ideal candidates, who will really fit in with the team, what the standard of excellence is, and how to communicate with and inspire interest in people who are not actively looking for a job. It took us quite a while to learn how to do this.

The four co-founders of Anysphere, all born after 2000: Aman Sanger, Arvid Lunnemark, Sualeh Asif, and Michael Truell
Lenny: What experience can you share with companies that are hiring now? What did you miss and what did you learn?
Michael Truell: In the beginning, we were too inclined to look for people who fit the profile of a prestigious school background, especially young people who had achieved excellent results in a well-known academic environment.
We were lucky to find some really great people early on. They were already very senior in their careers, and yet they were willing to build this with us.
When we first started recruiting, we put too much weight on enthusiasm and raw talent and not enough on experience. While we hired many wonderful, talented young people, that is not the same as having a seasoned lineup that can step onto center stage right away.
We have also upgraded the interview process. We have a set of specially tailored interview questions. The core part is to let the candidate come to the company for two days and work with us on a two-day assessment project. This approach is very effective and we are constantly optimizing it.
We are also constantly learning about candidates’ interests, offering attractive conditions, and starting conversations and introducing them to job opportunities before they have any intention of applying for a job.
Lenny: Do you have any favorite interview questions?
Michael Truell: I thought this two-day work trial wouldn't work for a lot of people, but it has had surprising longevity. It lets candidates participate from start to finish, just like completing a real project.
These projects are standardized, but they let you see real work output within two days. And they don't take up much of the team's time: you can spread what would have been a half-day or full-day onsite interview across the two days, giving candidates enough time to complete the project. That makes it easier to scale.
The two-day project is also a test of "whether you are willing to work with this person." After all, you will be staying for two days and will have to eat several meals together during that time.
We didn't initially expect this assessment to stick, but it is now a valuable part of our recruiting process. It is also very important for attracting candidates, especially in the early days of a company, when the product is not yet widely used and the quality is not yet mature. Often the only thing that can attract people to join is a team that they feel is special and worth working with.
The two days give candidates a chance to get to know us and can even convince them that they want to join. The effect of this assessment has exceeded our expectations. It isn't strictly an interview question; it's more of a forward-looking interview format.
Lenny: The ultimate interview question is, you give them a task, like building a feature in our code base, working with the team to code it and release it, is that right?
Michael Truell: Almost. We don't use the IP, nor do we fold the project results directly into the product line; it is a simulated project. It is usually a real, two-day mini-assignment in our codebase where they complete the work end to end on their own, though there are also collaborative sessions.
We are a company that places great emphasis on offline collaboration, and in almost all cases, they will sit in the office and work with us on the project.
Lenny: You mentioned that this interview method has been continuing. How big is your team?
Michael Truell: There are about 60 of us.
Lenny: Considering your influence, that's really small. I thought it would be much bigger. I'm guessing engineers make up most of the team.
Michael Truell: One of our most important tasks next is to build a larger and better team to continuously optimize our products and improve the quality of customer service. So we don't intend to stay this small for long.
Part of the reason we have a small team is that a very high proportion of us are engineers, researchers, and designers. Many software companies tend to have more than 100 people by the time they have about 40 engineers, because there is a lot of operational work and those companies usually depend heavily on sales from the beginning, which takes a lot of manpower.
We have taken an extremely lean, product-centric approach from the very beginning. Today we serve customers across a broad range of markets and keep expanding our offering on that basis. But there is still a lot of work ahead of us.
Lenny: The field of AI is undergoing tremendous changes. There are new things every day and many newsletters telling you what is happening in AI every day. How do you stay focused when you manage a hot, core company? How can you help your team focus on their work and delve deeply into the product without being distracted by these endless new things?
Michael Truell: I think recruiting is key. The key is whether you can recruit people with the right attitude. I believe we are doing a good job in this regard and perhaps we can do better.
This is also something we discuss more within the company. It is important to recruit people with the right personality. They should be less concerned with external recognition and more focused on building great products, delivering high-quality work, and generally keeping a cool head and not having wild mood swings.
Recruiting well helps us cope with many challenges, and that is a consensus across the company. Any organization needs processes, hierarchy, and systems, but much of what any organizational tool is meant to achieve can be accomplished by hiring people with the corresponding qualities in the first place.
One example is that we don't have many processes in engineering and yet we function well. I think we need to add some processes. But because the company is small, as long as we recruit truly outstanding talents, we don’t need to set up too many processes.
The first is to recruit calm and collected people. The second is to communicate fully. The third is to set a good example.
We have been focused on AI since 2021 and 2022, and we have watched major shifts in technologies and ideas. If you go back to late 2021 or early 2022, GPT-3 existed, but InstructGPT didn't, and DALL-E and Stable Diffusion hadn't appeared yet.
Then all of these image technologies came out, InstructGPT came out, GPT-4 came out, and all these new models, different technologies, modalities, and video-related work arrived. Only a very small number of them had any impact on our business. We've built up a certain immunity, a sense of which developments actually matter to us.
Even though there is a lot of discussion, only a few things really matter. This has also been reflected in the field of AI over the past decade, with a large number of papers on deep learning and AI published in academia. But what’s amazing is that many of the advances in AI stem from some very simple, elegant, and enduring ideas. However, the vast majority of ideas proposed have neither stood the test of time nor made a significant impact. The current dynamics are somewhat similar to the development of deep learning as a field.
The demand for engineers will only grow
Lenny: What misunderstandings still exist about where AI is headed and how it will change the world? Or something that is not yet fully understood?
Michael Truell: People still have some overly extreme views, either that everything is going to happen very quickly or that it's all hype and exaggeration.
We are in the midst of a technological shift that will be more significant than the Internet and more important than anything we have seen since the advent of the computer. But this transformation will take time; it will be a decades-long process, and many different groups will play important roles in driving it forward.
To achieve a future in which computers can do more and more of our work, we need to solve all of these independent technical challenges and continue to overcome them.
Some of these are scientific challenges, such as making models understand different types of data, becoming faster, cheaper, smarter, more adaptable to the modalities we care about, and taking action in the real world.
There are also some questions about human-computer collaboration, thinking about how people should see, control, and interact with these technologies on computers. I think it's going to take decades and there's a lot of exciting work to be done.
I think one type of team will be particularly important, and not to toot our own horn, but it's companies focused on automating a specific area of knowledge work: building the underlying technology for that area, integrating best-of-breed third-party technology, and sometimes doing independent research and development to create the corresponding product experience.
The importance of such teams lies not only in their value to users, but also in the key role they will play in driving technological progress as they scale. The most successful teams will be able to build large companies, and I look forward to seeing more similar companies emerge in other fields.
Lenny: I know you’re hiring people who are interested in, “I want to work here and build this kind of product.” What kind of people are you looking for now? What specific people or positions are you recruiting? Which positions do you most want to fill as soon as possible? If anyone is interested in this, what information should they know?
Michael Truell: There is so much our team needs to accomplish, and a lot of it hasn't been done yet. We have a lot of work to do, so if it looks like we aren't recruiting for a specific position, why not reach out anyway? Perhaps you can bring us new ideas and make us aware of gaps we haven't noticed.
I think the two most important things this year are creating the best products in this field and growth. We are in the stage of grabbing market share, and currently almost everyone in the world either does not use our similar tools or is using other people's products that are developing more slowly. Driving Cursor's growth is an important goal.
We are always hiring great engineers, designers and researchers, and we are also looking for other talents.
Lenny: There is a narrative that AI will replace engineers and write all the code. But that's in stark contrast to reality, where everyone is still hiring engineers like crazy, including the companies developing foundation models. Do you think there will be a turning point where the demand for engineers starts to slow?
I know this is a big question: do you think the demand for engineers will keep growing across companies? Or do you think at some stage a large number of Cursor instances will simply be run to do the development work?
Michael Truell: We always believe that this is a long and complicated process. It will not be achieved in one step, and we will not directly achieve the state where you only need to give instructions and AI can completely replace the engineering department.
We very much want to promote a smooth evolution in the way programming is done, so that humans always take the lead. Even in its final state, it is still crucial to give people control over everything, and professionals are needed to decide what the software should look like. So I think engineers are indispensable and they will be able to do more.
The demand for software is endless. That's nothing new, but think about how costly and labor-intensive it is to build something that seems simple and easy to specify. At least to a layperson, these things shouldn't be hard to accomplish, yet they are genuinely hard to do well. If you can cut development cost and manpower by another order of magnitude, you can unlock many more new possibilities on computers and build countless new tools.
I have a deep understanding of this. I once worked at a biotech company, developing internal tools for them. The tools available on the market at that time provided extremely poor experience and were completely unable to meet the company's needs. The demand for the internal tools I could build was huge, far exceeding my personal development capabilities.
The physical capabilities of computers are already powerful enough that we should be able to build just about anything we want on them, but in reality there are too many obstacles. The demand for software far exceeds our current capacity to build it; developing a simple piece of productivity software can cost as much as making a blockbuster movie. So for a long time to come, the demand for engineers will only grow.
Lenny: Is there anything else we didn’t mention that you’d like to add? Any pearls of wisdom you’d like to leave for the audience?
Michael Truell: We're always thinking about how to build a team that can both create new things and continuously improve the product. If we're going to succeed, the IDE has to change a lot; its future shape will have to be very different.
If you look at the companies





