# Systems, Stories, and Skills: A 2025 Year in Review

Source: Dev.to

After a four-year hiatus, 2025 was the year I finally returned to consistent blogging. It was a year defined by transition, both for the industry and for my own career. In late 2024, I stepped into the Enablement and Platform Engineering space, marking my first role outside of a security organization since graduating college. It has been a fascinating vantage point from which to watch the Generative AI landscape shift from simple chat interfaces to autonomous agents and standardized tools and skills.

## Beyond the Hype: The Rise of Utility

When ChatGPT was initially released, I was not impressed. Every week there was another news article about an LLM hallucinating, and without the ability to interact with external data sources or drive change in external systems, I saw limited value.

In 2025, the industry shifted to standardized tool calling via the Model Context Protocol (MCP) and moved beyond single-turn chat responses and smart code completion. Agentic loops that could verify implementations by running bash commands or controlling a browser demonstrated the power of coding agents. Rather than chasing an "all-knowing" LLM, the focus shifted to usefulness and the ability to use tools (via MCP). Claude Code emerged early this year as a practical coding agent, with a number of challengers following as the space matured.

## From Experimentation to Execution

These agents helped me get back into personal projects after a hiatus much like the blogging one above. They greatly reduced the cost of experimentation and let me move quickly past errors that previously might have sent me down a rabbit hole. One of these projects was Verifi, which helps developers manage their certificates across programming language ecosystems. Rather than getting bogged down in syntax, I found myself co-developing a well-defined plan with the agent to drive toward the outcomes and non-functional requirements I cared about. Building Verifi made it clear to me that as agent adoption grows, the ability to articulate and evolve a plan will matter more than the mechanics of implementation.

## The New Frontier: Context Engineering

Early on, I assumed that giving an agent access to more MCP servers would make it more capable. But as I added too many tools, performance degraded, reinforcing that agent failures are often the result of system design rather than model limitations. While Prompt Engineering worked well enough for single-turn prompts, this tension pushed Context Engineering to the forefront in 2025: work focused on ensuring the context window is not overwhelmed by tool definitions, system prompts, and custom instructions layered on top of the conversation itself.

This evolution raised my expectations for what an agent could achieve and changed how I evaluated agent quality. Rather than judging agents by the best-case output of a single interaction, I started evaluating whether their behavior remained understandable and predictable across a longer workflow.

Various strategies emerged to help manage context, including optimizing MCP tool definitions, custom and sub-agents, and progressive disclosure. The Agent Skills standard (originally created by Anthropic) helped ensure consistency across coding agents so the right context could be pulled in at the right time using progressive disclosure. I created some example skills, including one around dependency management. As I outline in that post, an agent can now load the dependency-management skill only when a package file changes, rather than carrying every instruction upfront.
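To make progressive disclosure concrete, here is a minimal sketch of what such a skill could look like. The file contents, frontmatter wording, and instructions below are illustrative rather than the exact contents of my published skill; the idea is that the agent keeps only the short `description` in context and pulls in the full body when it becomes relevant, such as when a package manifest changes.

```markdown
---
name: dependency-management
description: >
  Guidance for adding, updating, and auditing dependencies. Load when a
  package manifest (e.g. package.json, pyproject.toml, go.mod) is created
  or modified.
---

# Dependency Management

1. Identify which manifest changed and which ecosystem it belongs to.
2. Prefer pinned, known-good versions for any new direct dependencies.
3. Run the ecosystem's audit tooling and summarize any findings.
4. Regenerate the lockfile and confirm the build passes before committing.
```

Because only the name and description sit in the agent's default context, many skills like this can coexist without crowding the context window; the detailed instructions are loaded on demand.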
## Looking Ahead: Skill Management in 2026

Much like this year's discussions around MCP and how to manage the tools its servers provide at scale, I suspect we will see a parallel conversation around managing skills as their adoption grows next year. To cap off the year, I used some downtime between the holidays to prototype a potential solution. If skills become as prevalent as I expect, we'll need tooling to manage them. I built skset (short for "Skill Sets") to explore what that category might look like, starting with the basic problem of skills scattered across different agent directories.

In 2026, it will be interesting to see whether a standardized marketplace for sharing skills publicly emerges beyond the vendor-specific ecosystems we see today. We are already seeing numerous discussions on how coding agents will fundamentally reshape software engineering and how that shift will impact individual engineers across their entire career journeys. While this year brought massive improvements to base model performance, those capabilities are just the foundation. As we head into next year, the "downstream" impact on areas like automated code review, social coding, and quality governance at scale will become clearer.

For me, 2025 was the definitive turning point for AI. As I wrote in *Cloudy With a Chance of Context*, this era feels remarkably similar to the shift I experienced with cloud in 2018, when cloud computing moved from emerging technology to general best practice while challenging governance teams to keep up. 2026 will be the year we see whether these autonomous workflows can truly meet the high bar of production-grade engineering.

Based on what I've observed building skset and working with context management, the challenge won't be any single capability but orchestrating skills, sub-agents, MCP servers, and everything else together effectively. Teams will need to solve discovery without overwhelming context budgets, configure logical groupings per project, and understand how these features interact rather than simply layering them on. The organizations that treat agent capabilities as a system to be composed, rather than features to be accumulated, will be the ones that unlock real productivity without sacrificing quality.