How to Become a 10x Engineer in the AI Era

This article explores the concept of the “10x engineer”: an individual whose productivity far surpasses that of the average engineer. It begins by examining the potential of AI tools to improve engineering efficiency, then draws on the author’s own experience to show how AI roughly doubled or tripled his productivity on a real project.

**Introduction**
The term “10x engineer” refers to individuals whose productivity far exceeds that of the average engineer. The concept traces back to research from the 1960s, and in particular to Frederick P. Brooks’s observations in The Mythical Man-Month: in software development, some programmers are dramatically more productive than others, sometimes by as much as a factor of ten. This observation was widely cited and gradually evolved into the “10x engineer” concept we know today.

Today, a growing number of AI tools claim to improve engineers’ productivity tenfold, including products from well-known AI companies such as Anthropic, Cursor, Trae, and Devin. My favorite professor, Andrew Ng, has also published a long post exploring how AI makes “10x” possible. Ng believes that as more and more jobs are supported by artificial intelligence, there will be more “10x engineers,” “10x marketers,” “10x recruiters,” “10x financial analysts,” and other “10x professionals.”

https://x.com/AndrewYNg/status/1887919658201960807?s=20

If we count from the release of ChatGPT (November 30, 2022), the generative-AI wave has been going on for three years, and Cursor has been out for two. Has AI really made us ten times more efficient? If not yet, how can we get closer to becoming “10x engineers”?

**A real case**
I’d like to start with my own experience and how much efficiency AI has actually bought me. Before discussing improvements, though, we need a way to measure efficiency. A first attempt: the less time spent on the same task, the higher the efficiency. This is insufficient, because it captures “time” but not “quality.” For software engineering, I would define it as follows: the less time spent implementing the same task with minimal code and bug-free results, the higher the efficiency. This adds two requirements, no “garbage code” and no “problematic code”; without them, there is no meaningful “efficiency improvement” to speak of.

The development discussed here specifically means large-scale software engineering projects undertaken by teams, not individual projects or toy projects. For example, consider an open-source project I participate in, chaterm/Chaterm (“Open source AI terminal and SSH Client for EC2, Database and Kubernetes”), an AI terminal tool with a codebase of around 80,000 lines. The complexity of the existing project is crucial when evaluating efficiency: adding 1,000 lines of code to a project with 1 million lines is significantly harder than adding 1,000 lines to a project with 100,000 lines.

I recently developed a new feature: support for MCP. Completing it took approximately 10 commits, the two most important of which together contain about 5,000 lines of valid code updates. The feature has been live for three weeks with no bugs found, so I will unilaterally declare these 5,000 lines acceptable (if you’re interested, you can review them in the repository for me).

Without AI assistance, how long would such a feature have taken me? I estimate about a month. The arithmetic is simple: at an effective hand-written output of 200–300 lines per day, 5,000 lines take roughly 20 working days, about a month. With AI assistance, it took a little over a week, less than two. So AI improved my efficiency by about two to three times; I’m still a long way from being a “10x engineer.” To reach ten times, this feature should have taken only 2–3 days. Is that possible? After reviewing the results, I believe the answer is yes.

Below, I will explain how to get there. It breaks into several parts: first, choose a tool you’re comfortable with; second, adjust your programming paradigm; third, learn some AI coding techniques; and finally, through long-term deliberate practice, consolidate these improvements into lasting capability.

**Choose your favorite AI tool**
As the saying goes, “a craftsman who wants to do good work must first sharpen his tools,” and the first step toward becoming a 10x engineer is choosing the right AI tools. AI coding products today are truly abundant and diverse; the existing products on the market fall into four main categories:

- IDEs: Cursor, Windsurf, Kiro, Trae, CodeBuddy, Lingma
- CLIs: Claude Code, OpenAI Codex
- IDE plugins: GitHub Copilot, Cline, Augment
- Web applications: Lovable, Replit
When it comes to choosing tools, the best one is the one that suits your needs. Each provider has its own unique features; the best approach is to try them all and see which one feels most comfortable to use. Don’t be swayed by online influencers or media outlets constantly touting Claude Code or Codex. It’s like someone telling you that Vim is highly efficient: you have to experience it for yourself to know. If you’re not comfortable “coding” in a terminal, CLI-based products are not an option for you. If you’re not a professional developer, web-based products might be your best choice. In my experience, there isn’t that much difference in AI capability between products, so just pick the one you’re most comfortable using.

Personally, I prefer Cursor, mainly for one reason: Cursor’s IDE makes it easy for me to review code, and other products don’t do this as well. Recently, I discovered that ByteDance’s Trae is also quite good, and it’s cheaper than Cursor, so I plan to switch and try it out next month.

**Conduct code reviews**
When discussing how AI can improve efficiency, if I had to choose one principle, it would definitely be “do thorough code review.” Just as assisted driving comes with “keep your hands on the wheel and your eyes on the road,” when coding with AI, try not to completely relinquish your final say on the code.

I know that more and more people no longer review the code when using AI coding. Their usual approach is: give the AI instructions -> the AI writes the code -> run the project -> check whether the functionality works -> then start battling with the AI: “I want this, not that,” “I want this implementation, not that one,” … “Are you an idiot?” If you’re an AI coding practitioner, you’ve probably experienced these frustrating moments; it’s an almost inevitable, painful stage for engineers trying to tame AI.

I’ve observed that many of those recommending this style are developers without a professional background, primarily individual developers. People on Twitter who “build in public” are especially enthusiastic about it, usually accompanied by their “achievements”: starting from scratch with AI coding, launching a product in a very short time, and reaching some MRR (Monthly Recurring Revenue). However, most people aren’t doing independent development; they work as engineers in companies. Aren’t both just writing code? What’s the fundamental difference? The difference is huge. If you’re developing independently, you’ll spend at least 50% of your time “selling the product” and the rest “making the product”; the quality of the product determines at most half of your success, perhaps less. But if you work as an engineer in a company, you’ll spend the vast majority of your time “making the product,” and engineering in companies is usually done in teams, which makes code review even more crucial.

Besides general code quality, reviews of AI-written code should pay special attention to one point: code reusability. AI is currently not very good at this; when it encounters a problem, it often writes a new function on the fly instead of reusing functions or similar logic that already exist in the codebase. Left unchecked, the project becomes increasingly bloated and eventually a mountain of code.
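To make the reusability point concrete, here is a hypothetical sketch of the kind of duplication a review should catch. The file and function names are invented for illustration; they are not from the Chaterm codebase:

```typescript
// utils/format.ts -- a helper that already exists in the project
export function formatBytes(bytes: number): string {
  const units = ["B", "KB", "MB", "GB"];
  let value = bytes;
  let i = 0;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i += 1;
  }
  return `${value.toFixed(1)} ${units[i]}`;
}

// transfer.ts -- what an AI patch often adds: a near-duplicate helper
// under a new name, with subtly different behavior (here it stops at KB).
export function humanFileSize(size: number): string {
  if (size < 1024) return `${size} B`;
  return `${(size / 1024).toFixed(2)} KB`;
}
// Review fix: delete humanFileSize and reuse formatBytes from utils/format.
```

A reviewer who knows the codebase spots this in seconds; an AI left to its own devices rarely does.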
**Spec-Driven Programming**

When we first started learning AI coding, the typical approach was “give the AI instructions -> the AI writes the code -> run the project -> check whether the functionality works,” repeated until the desired effect was achieved. The industry has a specific term for this programming method: Vibe Coding. After several months of practice, I’ve found that Vibe Coding isn’t the best approach to AI coding.

So what is the right approach? Here I’ll introduce another AI coding paradigm that has been gaining popularity in the community recently: Spec programming, short for Spec-Driven Development (SDD), where “spec” abbreviates “specification.” Its core idea is to make the specification the driving force of development. If you’ve taken a software engineering course in college, you’ll recognize this immediately: a typical course project (usually some xxx management system) requires submitting not only code but also four documents: a requirements analysis, a high-level design, a detailed design, and test documentation.

There’s an open-source project on GitHub, github/spec-kit (“Toolkit to help you get started with Spec-Driven Development”), built specifically to help people develop with SDD. Amazon’s Kiro, on the other hand, builds Spec in as a mode: in Spec mode, it generates three documents (requirements, design, and tasks) for you before writing any code. You might think I’m here to sell you Kiro? Wrong! Spec programming is just a concept that can be applied with any development tool, just as you can do object-oriented programming in C.

Let’s revisit Spec programming and ask why we need the three documents first.
I think the most fitting explanation is this: plan first, then build. Documentation serves as a guide; through it, we can know in advance whether what we are about to do will meet expectations. If not, we correct course beforehand; if so, we begin execution. Whether the documents themselves persist is actually not important; keeping documentation up to date is best done retrospectively and consistently, rather than in advance and piecemeal.

Below I will demonstrate how to do Spec programming in Cursor. I recently expanded Chaterm’s AI functionality from a single-task mode to a multi-task mode (meaning multiple tasks can run simultaneously). After the refactoring, a bug appeared: when sending back a result, the program could not route it to the tab that initiated the request.

Select plan mode and describe the problem to the AI; it responds with an analysis and a proposed solution. The analysis looks correct, but if you click “build” at this point, you’re doomed: you’ll be trapped in the same cycle of “review code -> reject solution -> recalibrate with the AI.” At this stage, think of the AI’s output as a “requirements document” that does not yet offer an “implementation plan.” Remember what AI excels at: making statements that sound plausible but aren’t, and this still happens frequently even after years of model iteration. So we must judge it not only by what it says but by what it does. We therefore demand an implementation plan from the AI: provide the key details of each implementation step.

After the AI provides the implementation details, don’t rush to build. Review the details now; it is much easier than reviewing after the code has been modified. Review two things in particular. First, the data structures and algorithms: if these two are sound, the program should be fine. Second, code reusability: check whether the AI has written a pile of new functions that already exist.

In this case, the AI’s implementation plan was largely complete, but it overlooked one detail: resource cleanup. Every time a message was sent, it stored a message mapping relationship and only destroyed it when the program terminated. If a user initiates 500 interactions, 500 mapping entries are retained, which is clearly unreasonable. So I gave the following instruction (following a syllogistic structure: what to do, how to do it, why do it this way): mapping relationships need to be cleaned up earlier; clean them up immediately after each message is sent back; otherwise, the mapping relationships will accumulate.

Everything ready, click build, and it succeeds on the first try. Fixing such a bug takes only 10 minutes. If you don’t work this way, even a couple more rounds of battling the AI will make the time balloon, and working this way the quality of the code is guaranteed.

Reading this, are you wondering what the AI is actually doing, and why it feels like you’re doing so much yourself? You confirm the AI’s solutions, correct its mistakes, and finally review its code. That’s exactly right: the bottleneck for the “10x engineer” isn’t so much the AI itself as the person using it. In my view, AI is an engineer of average ability but extremely high comprehension; as long as you tell it the correct way to do something, it can implement it immediately.
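As a minimal sketch of the fix described above (the identifiers here are hypothetical, not Chaterm’s actual code), the point is the lifecycle: register the mapping when a request starts, and delete it as soon as the result is delivered, instead of letting entries accumulate until the program exits:

```typescript
// Maps a request ID to the tab that initiated it. Without cleanup,
// one entry per interaction would accumulate for the process lifetime.
const pendingRequests = new Map<string, string>(); // requestId -> tabId

declare function deliverToTab(tabId: string, result: unknown): void;

function startTask(requestId: string, tabId: string): void {
  pendingRequests.set(requestId, tabId);
}

function sendResult(requestId: string, result: unknown): void {
  const tabId = pendingRequests.get(requestId);
  if (tabId === undefined) return; // tab already closed or handled
  deliverToTab(tabId, result);
  // Clean up immediately after the result is sent back, rather than
  // only at program termination -- otherwise 500 interactions would
  // leave 500 stale entries behind.
  pendingRequests.delete(requestId);
}
```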
**User Rules & Project Rules**

Almost all mainstream programming tools now support custom rules (User Rules and Project Rules), a severely underrated feature. By setting explicit coding standards, architectural constraints, naming conventions, and so on in your project, you can make the AI automatically follow these conventions when generating code. For example, you can specify particular design patterns, coding styles, or your team’s best practices. This significantly improves the quality of AI-generated code and reduces the workload of later review and modification. Used well, these rules are like handing the AI a “coding guide” that helps it better understand your project’s requirements.

User rules are typically given the highest priority, for example by being placed at the very beginning or end of the system prompt, to ensure the AI follows user instructions as closely as possible. So when the AI’s responses don’t meet your expectations, constraining it with user rules is the best remedy. I added a set of restrictions of this kind to my own rules.
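As a hedged example of what such rules might look like (the wording below is illustrative; the author’s actual rule file is not reproduced here):

```
- Before writing a new helper, search the codebase for an existing
  function with the same purpose and reuse it.
- Do not introduce new dependencies without asking first.
- Follow the project's existing naming conventions and directory layout.
- When fixing a bug, explain the root cause before proposing code.
- Keep changes minimal; do not refactor unrelated code in the same edit.
```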
**MCP**

MCP (Model Context Protocol) is a powerful protocol introduced by Anthropic that allows AI to interact directly with external tools, databases, and APIs. With MCP, you can let the AI access project documents, query databases, call third-party services, and more. There is a lot about MCP worth exploring; many people think it’s useless, but used effectively it can automate workflows to a great degree. I will write a separate article later on how to use MCP well; here, I’ll give just one very practical example.

When using AI coding, I often encounter a situation where the AI expands my cognitive boundaries: it solves a problem I didn’t previously understand. Many people consider their work finished once they accept the AI’s code, but this is precisely a great opportunity to deepen understanding and learn something new. “Learning by doing” is a perfect expression of Wang Yangming’s “unity of knowledge and action.” With AI as a teacher, we can have it break down and explain each line of code. And understanding isn’t the end; true masters are always good at summarizing. We may not be masters, and summarizing is tedious, but AI is a great helper here too: via MCP, the AI can summarize the questions and write them into your own knowledge base. You can use Notion, Yuque, or Obsidian; I use our self-developed Confluence MCP server to connect to the company’s knowledge base. From time to time, I take out these notes to review them and learn something new.
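As a sketch of how such a notes tool can be wired up: with the official MCP TypeScript SDK, exposing a “save note” tool takes only a few lines. The tool name and the knowledge-base write are placeholders; the author’s Confluence server is self-developed and its details are not public:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "knowledge-base", version: "0.1.0" });

// A tool the AI can call to persist a summary of what it just explained.
server.tool(
  "save_note",
  { title: z.string(), content: z.string() },
  async ({ title, content }) => {
    await writeToKnowledgeBase(title, content); // Notion/Yuque/Obsidian/Confluence, etc.
    return { content: [{ type: "text" as const, text: `Saved note: ${title}` }] };
  }
);

// Placeholder: swap in whatever API your knowledge base exposes.
async function writeToKnowledgeBase(title: string, content: string): Promise<void> {
  void title;
  void content;
}

await server.connect(new StdioServerTransport());
```

Once registered in your editor’s MCP configuration, the model can call save_note whenever you ask it to summarize what it just taught you.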
**Counterintuitive advice**

Many people believe that in the AI era, engineers should focus on “how to better use AI tools,” such as prompt engineering. Prompt engineering is important, but not fundamental; what truly sets you apart is still your fundamental skills as a software engineer. The following seemingly counterintuitive suggestions are key to becoming a “10x engineer”:

Gain a deeper understanding of the language you use than ever before. This goes beyond simply “knowing how to use it,” down into the language’s type system, runtime mechanisms, memory model, and concurrency model. The reason is simple: when AI produces a piece of code, you need to be able to judge whether it’s correct and efficient, and whether it has fallen into a trap in the underlying mechanics. AI is prone to writing seemingly correct code, and you can easily be led astray if you’re not careful. And every time you correct it, it will only say: “You’re absolutely right!”
Understand the frameworks you use more deeply than ever before. Knowing how a framework works, its lifecycle, data flow, and performance bottlenecks is crucial for guiding AI to write code that follows the framework’s best practices. Without a deep understanding of the framework, you can only accept seemingly working code written by the AI. I’ve hit many pitfalls in this area.
Master data structures and algorithms. Even though AI can write code, data structures and algorithms remain the ultimate underlying logic of software engineering. You need to know which structures suit which scenarios and what their complexity is. This lets you judge whether the AI has provided a good solution, rather than a mountain of cluttered code built from if-else statements.
Thoroughly understand design patterns. Design patterns are essentially distilled experience in “how to write maintainable code.” When requirements change or features are added, a well-structured pattern with clearly defined responsibilities lets you make minimal code changes instead of dismantling and rebuilding the entire system. As I mentioned earlier, AI is very prone to writing garbage if-else code; it can write code, but it won’t make architectural decisions for you or consider your project’s future scalability. Design patterns are precisely the core of this structural thinking (see the sketch after this paragraph). The role of the future engineer becomes: you think about the structure, and AI handles the details.
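As one concrete illustration of this structural thinking, here is a minimal sketch (invented names, not from any real project) of replacing a growing if-else chain with a strategy-style registry, a shape that keeps AI-generated additions contained:

```typescript
// Before: every new command means editing this growing chain, and
// AI patches tend to keep appending branches to it.
function runCommandNaive(kind: string, payload: string): string {
  if (kind === "ssh") return `connect ${payload}`;
  if (kind === "db") return `query ${payload}`;
  // ...one more `if` per feature...
  throw new Error(`unknown command: ${kind}`);
}

// After: a registry with a single, well-defined extension point.
// Adding a command is a new entry, not a change to existing logic.
type CommandHandler = (payload: string) => string;

const handlers: Record<string, CommandHandler> = {
  ssh: (payload) => `connect ${payload}`,
  db: (payload) => `query ${payload}`,
};

function runCommand(kind: string, payload: string): string {
  const handler = handlers[kind];
  if (handler === undefined) throw new Error(`unknown command: ${kind}`);
  return handler(payload);
}
```

When you ask the AI to add a feature, pointing it at the registry (“add a new entry to handlers”) produces far cleaner diffs than letting it extend the if-else chain.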
All of this serves one purpose: reviewing AI code better and guiding code writing better. The ultimate goal is singular: to become an architect. The future development workflow will be one person directing a group of agents.

**Cost**
Finally, I’d like to talk about costs. Over the past month, I’ve spent about $60–70 on Cursor, still below the $200 tier set by both Cursor and Claude. This might be one reason I haven’t become a “10x engineer”: if the usage isn’t ten times greater, the efficiency naturally can’t be ten times greater either. “Usage” here doesn’t mean asking random questions to burn through credits, but involving AI in real engineering tasks as much as possible: writing code, troubleshooting, revising designs, all carefully reviewed and corrected by you. The essence of high usage is a high-frequency feedback loop between you and the AI: the more you use it, the more it helps you increase efficiency. I know some very skilled engineers hit the $200 usage mark long ago; at the time, I didn’t appreciate how fully they were utilizing AI. Try it yourself: even if you skip code review and judge the reliability of AI-generated code only by whether it runs, reaching $200 of usage in a month is extremely difficult. When my usage reaches $200 a month, I will write another article about the experience.

**Outlook**
Becoming a “10x engineer” is no easy feat, with or without AI. What the emergence of AI does offer is a possibility: a tool that amplifies your abilities. The key is that you must first become an excellent engineer before you can use AI to become a “10x engineer”; then, through deliberate practice, you can approach that goal ever more closely. If you work on large-scale software engineering projects, it’s unlikely you’ll be able to hand the reins entirely to AI within the next three to five years. In the meantime, what we can do is maximize AI’s strengths and use them to improve our own capabilities to meet the future demands on engineers. Those who embrace the times will never be abandoned by them.

Originally published at https://chaterm.ai
GitHub: https://github.com/chaterm/Chaterm