Parsing the Styles of DevAI: Tools, Processes, and Goals Vary By Category
By Adam Ferrari
Adam Ferrari is a Jellyfish Advisor, board member and former SVP of Engineering at Starburst, EVP of Engineering at Salsify and CTO at Endeca. This article originally appeared on Adam's Substack, Engineering Together.
2025 was a year of amazing progress in AI-enabled software development. Adoption of DevAI tools surged coming into the year, and now, in 2026, usage of these tools is approaching universal. DevAI has expanded beyond coding assistance and across the SDLC, from project planning, to debugging, code review, testing, documentation, and more.
And the impact of AI has now been extensively studied, and is demonstrably positive. For example, the 2025 DORA Report found strong improvements in reported impact from AI across areas like delivery throughput, time allocated to valuable work, and product performance. It’s notable that these outcomes showed clear gains compared to the 2024 DORA report, demonstrating that the industry is leveling up its ability to use DevAI approaches effectively. Similarly, a joint study between Jellyfish and OpenAI looking at GitHub data from hundreds of software teams found clear improvements in PR throughput and cycle time with greater DevAI adoption, with little discernible impact on software quality.
Positive outlook about the impact of DevAI isn’t just happening among leaders looking at high-level productivity stats. There’s also a noticeable evolution in developer sentiment about the usefulness of AI. For example, the 2025 DORA report studied developer trust in AI output, and found a nuanced relationship where most developers express a healthy degree of mistrust of AI output, but still find that it is trustworthy enough to be useful if managed carefully. This feels like a maturation of perspectives about AI compared to a year ago when DevAI was more polarizing, with many developers at that time feeling that AI generated code was simply too hit-or-miss to be useful.
It’s clear the software industry has made tons of very exciting progress on the practical application and impact of DevAI. That said, it’s safe to say that we’re nowhere near done evolving. Most engineering leaders I talk to agree that, even if their teams crushed their 2025 AI adoption and impact goals, we are only a small part of the way into the DevAI journey.
This raises the question: what new DevAI goals should engineering teams be setting for 2026? A year ago, the rally cries were fairly consistent and universal: encourage experimentation, drive adoption, capture best practices for your organization, invest in enablement, seek and act on feedback from developers, and start to measure outcomes. Teams across the industry took these goals to heart, and advancement followed.
But now, in early 2026, the next round of goals may be a little less clear to many teams. After all, “getting going” usually tends to be easier to goal than “the next round of incremental improvements.” Also, there’s just more variety in where organizations are in their AI journeys at this point. Some are at the vanguard, making very advanced use of AI across their SDLC. Some are still early, and trying to see some positive impact from AI. Most are somewhere in the middle, with clear engagement from the team and positive results, but with the knowledge that they are not on the cutting edge and have plenty of room to do more.
As I’ve talked to engineering leaders about how to frame their 2026 goals, one useful tool that has emerged is a taxonomy of “styles” of DevAI usage. These styles characterize the types of tools you are using and how you are employing them. And they have knock-on implications for what types of outcomes are possible.
Three Emerging Styles
Recent years have seen the rapid emergence of a variety of tools for developing software with AI, each with its own distinct user experience and interaction model. This diversity in the tools space made it clear that there are very different “styles” of AI software development. Working in a VS Code derivative with AI assistance clearly feels different than using a chat or CLI agent to work on a larger task. And of course, all along there’s been a clear distinction between interactive tools and agents that run in the background.
These very obvious differences in tool modalities give us some clear hints about the styles of overall AI usage in development that are emerging. By also considering the types of supporting tools and approaches that are emerging, such as Spec Driven Development frameworks, Agent Orchestration systems, etc., we start to see a clearer map of the big categories.
In my current view of the space, I see three general “styles” emerging:
- AI Assistants
- AI Agent Coworkers
- Autonomous Agent Fleets
These are all independent – i.e., an individual organization could productively employ all of these styles concurrently, using each for different types of problems as appropriate. But the styles also represent a maturity ladder, with Style 1 representing the most basic and common usage, and Style 3 representing the most aggressive usage, currently happening mostly at the experimental stage.
AI Assistants
AI Assistants are perhaps the most obvious and universal style of DevAI usage. In this style, the human engineer works on software in their familiar workflow, and ideally in familiar tools, but those tools are enhanced with AI capabilities, such as code suggestions or in-tool chat to initiate larger tasks. Interactive tools like GitHub Copilot, Cursor, and the in-IDE AI extensions for environments such as JetBrains are indicative of this style.
Unsurprisingly, the AI Assistant style was the first to achieve widespread usage, since it is so non-invasive to adopt. The human engineer is very much in the driver’s seat, working linearly on development tasks as always, following their selected workflow and established team processes. They’ve just been “supercharged” at various points in the process.
It’s worth noting that this style is fully compatible with additional AI tools that fit into the human-centric SDLC. For example, code review agents fit neatly into existing SDLCs, providing additional PR review feedback, with final approval most commonly given by a human engineer.
Because of the ease of adoption and the fairly universal usage of AI Assistants, it’s tempting to look at this style of AI usage as more basic and less important than agent-based approaches. But it’s worth calling out that this style of AI usage absolutely drives significant productivity improvements. Studies of agent adoption show that so far it’s a small slice of AI usage compared to assistants, and meanwhile broad studies of AI impact show it has driven improvements in delivery throughput and cycle times in the range of 20-40%. Even using coarse and conservative estimates of DevAI tool costs, that impact represents a pretty significant ROI compared to what hiring 20-40% more people would have cost. Moreover, studies are also showing that these productivity impacts are not coming at the cost of quality or maintainability, which remains at consistent levels as AI usage increases.
General consensus holds that software development work will be hybrid. Humans will remain central to the work of delivering innovative and high quality software. The mix of work may evolve, but human engineers will remain the core beating heart of high performing software teams. Given this, and given the demonstrable impact of AI Assistants, it’s sound general wisdom that you should have a solid AI Assistants plan and approach in your team. In 2026, that’s essentially table stakes.
AI Agent Coworkers
While AI Assistants are increasingly proven and standard best practice, much of the chatter around DevAI is around agents. After all, a human working linearly can only go so fast, even if we speed up various tasks in their workflow. To achieve greater productivity gains – levels like 100%+ velocity improvements and beyond – we need to work differently, handing off multiple larger concurrent tasks to AI agents as we work.
The AI Agent Coworker style of DevAI is the logical incremental step towards exactly this idea of handing off larger concurrent tasks to AI. This model of development is exemplified by a workflow where a human engineer is working on a larger project and explicitly hands off work such as subtasks or user stories to AI Agents.
While Claude Code can be used in IDEs as an AI Assistant, its more widely used CLI interface is perhaps the most common tool indicative of this AI Agent Coworkers style. A typical workflow for engineers working this way is to have multiple terminal windows open, each with a Claude Code session running on a distinct task. These tasks can be code development, but they can also be other steps in the workflow, such as creating an implementation plan (for example, using Claude’s planning mode) or developing and debugging tests.
It’s common for engineers to have one or two background tasks running, but the number can run as high as four to six active sessions. Beyond this, you run into some basic limits of managing the human elements of the work. It’s still common for all or most of the code to go through human code review (typically in addition to AI review). This places some basic limits on work concurrency, not just because there is finite human reviewer capacity for any code area, but also because there are limits to the context switching an individual can do reasonably efficiently. At a certain point, context switching just gets too hard.
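The cap on concurrent sessions can be sketched as a simple orchestration pattern. In the Python sketch below, `run_agent` and the task names are hypothetical stand-ins for real agent CLI sessions; the point is just the bounded worker pool that keeps the number of live sessions at a humanly manageable level.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one agent session (e.g., a CLI agent working a
# single handed-off task); a real implementation would shell out to a tool.
def run_agent(task: str) -> str:
    return f"{task}: done"

# Example subtasks an engineer might hand off concurrently (made up).
tasks = [
    "draft implementation plan",
    "implement pagination",
    "write integration tests",
    "update API docs",
]

# Cap concurrency at a reviewable number of sessions, per the text above.
MAX_SESSIONS = 4
with ThreadPoolExecutor(max_workers=MAX_SESSIONS) as pool:
    results = list(pool.map(run_agent, tasks))

for result in results:
    print(result)
```

Raising `MAX_SESSIONS` increases theoretical throughput but, as the text notes, the real constraint quickly becomes human review capacity and context switching, not machine capacity.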
My informal sense is that relatively few teams are working this way, at least at any scale. I would estimate that the percentage of teams doing this style effectively is around 10% or less, but the practice is growing quickly, so that number will soon be much higher. Also anecdotally, early indications are that teams that work this way do indeed reap outsized benefits compared to peers who are just using AI assistants. I’ve seen a number of teams achieving 2x output or more in terms of PR and ticket throughput. This seems to line up with the high end of results seen in a joint study between Jellyfish and OpenAI, where teams at the highest level of AI adoption saw 113% PR throughput increases. My guess is that these results come from teams where it’s not just the amount of adoption that matters, but the style as well, with the most likely approach being this AI Agent Coworkers model.
This style of work benefits from greater changes to your development workflow. Working this way benefits from techniques such as Context Engineering – developing technical documentation and guidelines for agents to reference, as well as tool integrations to access additional context of value, such as ticketing systems, git, observability systems, etc., – and Spec Driven Development, where development work is driven by machine readable specifications of what to build and how to test that the work is done.
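To make the Spec Driven Development idea slightly more concrete, here is a minimal Python sketch of what a machine-readable task spec might contain. The field names, the example task, and the `is_done` check are all illustrative assumptions, not the schema of any particular framework.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """A minimal machine-readable spec: what to build and how to verify it."""
    task_id: str
    goal: str  # what to build, in plain language
    context_files: list[str] = field(default_factory=list)  # docs for the agent
    acceptance_tests: list[str] = field(default_factory=list)  # commands that must pass

def is_done(spec: TaskSpec, passed: set[str]) -> bool:
    """Work is 'done' only when every acceptance test has passed."""
    return all(test in passed for test in spec.acceptance_tests)

# Illustrative spec; the task, files, and commands are made up.
spec = TaskSpec(
    task_id="PROJ-142",
    goal="Add pagination to the /orders endpoint",
    context_files=["docs/api-guidelines.md"],
    acceptance_tests=["pytest tests/test_orders.py", "ruff check ."],
)

print(is_done(spec, {"pytest tests/test_orders.py"}))  # partial pass is not done
print(is_done(spec, set(spec.acceptance_tests)))       # all tests pass -> done
```

The key property is that "done" is defined by the spec itself, so an agent (or an orchestrator running many agents) can check completion mechanically instead of relying on a human judgment call at every step.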
These are bigger changes to how we work, and we are early in seeing documentation of best practices and the development of packaged frameworks to support this model. So it’s unsurprising that this more aggressive form of AI development has been slower to catch on. But the growing body of evidence around the outsized benefits of this approach, coupled with the general urgency that the industry is pursuing advancement in this area, leads me to believe that we will see this model quickly graduate from the early adopters to the majority of organizations in 2026.
Autonomous Agent Fleets
Engineers being able to run four or six parallel agents in the background is exciting, and I don’t think we’ve even seen the limits of the productivity gains that will come from that approach. But it’s hard not to wonder whether AI could do more, and do it more autonomously. How do we scale to individuals (or perhaps small teams) managing dozens of agents, working tirelessly and largely autonomously around the clock, with far fewer human bottlenecks?
A couple of years ago this idea might have been laughed off as far-future science fiction, but with the rapid advancement of model capabilities, it no longer seems so far-fetched. For example, in a recent talk, CMU professor Andy Pavlo noted that a year ago AI models couldn’t solve the assignments in his database class, but at this point they can correctly solve most of the problems. With the numerous anecdotes of that nature we regularly hear, it’s hard not to wonder what DevAI tools can do if we take off the training wheels.
What will this more purely agentic future look like, where fleets of agents work on our codebases autonomously, taking high level roadmap guidance, and translating it into execution? I think any rational assessment would call this topic a research problem more than something that’s being effectively and repeatably done in practice, but it is getting real, meaningful, and interesting attention.
The GasTown framework developed recently by Steve Yegge is probably among the most interesting developments in this area. GasTown goes beyond more basic agentic development workflows in a couple of ways. First, it attempts to define a sufficiently complete set of agent types to run a complete and continuous development workflow, from an overarching Mayor agent that oversees the entire system down to subagents for all of the ongoing work involved in advancing software delivery. Second, the framework focuses on the true underlying platform needs of these agents. For example, work tracking is based on Yegge’s specialized Beads issue tracker, which has features like issue IDs that allow for easy concurrent management across many agents, and a rich dependency representation that allows for a truly correct definition of “ready” for any task.
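That dependency-based definition of “ready” can be illustrated in a few lines. Beads has its own, much richer data model; the Python sketch below, with made-up task names, just shows the core idea that a task is ready to be picked up only when all of its dependencies are complete, which is what makes it safe for many agents to claim work concurrently.

```python
# Each task maps to the set of tasks it blocks on (names are made up).
deps = {
    "deploy feature": {"implement feature", "write tests"},
    "write tests": {"implement feature"},
    "implement feature": set(),
}
done = {"implement feature"}

def ready(task: str) -> bool:
    """A task is ready only if it isn't done and all its dependencies are."""
    return task not in done and deps[task] <= done

print([t for t in deps if ready(t)])  # only "write tests" is ready
```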
On the more incremental end of the spectrum, we’ve seen a ton of interest and usage around the Ralph Wiggum pattern introduced by Geoffrey Huntley and picked up in various implementations. The idea of the “ralph loop” is simple: run agents in an unbounded background loop until a given list of tasks is complete. By ensuring that tasks are clearly defined and reasonably sized, and that completion conditions are well specified, you can run LLM session after session in the background, effectively chipping away at a task set, trying various approaches and refinements along the way without needing human intervention. As with other agent orchestration approaches, state management is the key enabler, including making sure that the agents have tools to access critical development systems like git, CI, etc.
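A minimal version of the loop itself can be sketched as follows. Here `run_agent_session` is a hypothetical stub for a single LLM coding session, and the boolean completion flags stand in for whatever concrete conditions (tests passing, issue closed) a real setup would check; this is just the shape of the pattern, not any particular implementation.

```python
# Completion flags for each task; a real loop would derive these from
# concrete conditions like CI passing or an issue being closed.
tasks = {
    "add retry logic to the API client": False,
    "fix flaky test in test_sync.py": False,
}

def run_agent_session(task: str) -> bool:
    """Hypothetical stub for one agent session; returns True on success.
    A real session might fail and simply be retried on the next pass."""
    return True

MAX_PASSES = 50  # guardrail so the "unbounded" loop cannot spin forever
for _ in range(MAX_PASSES):
    pending = [t for t, finished in tasks.items() if not finished]
    if not pending:
        break  # every completion condition holds; the loop is done
    for task in pending:
        tasks[task] = run_agent_session(task)

print(all(tasks.values()))
```

The pattern works precisely because a failed session costs nothing but another pass around the loop; the quality of the task definitions and completion checks, not the loop itself, is what determines whether it converges.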
It will be interesting to see how this general style of work plays out in practice. What types of gains are possible in these more autonomous approaches? How will questions of human engagement be reconciled, e.g., where are my opportunities to inspect progress and adjust course? What types of “hooks” in the process will be needed to effectively engage hybrid teams that can involve humans in the process at the right moments, unlocking all of the benefits that come from that?
It may seem futuristic at this point, but running actively managed background agents probably seemed futuristic to many a year ago. The DevAI space is moving at such incredible speed that we need to both implement techniques that are practical and effective today, and stay attuned to emerging approaches. Future objects in the mirror are closer than they appear!
2026 Goals
Understanding where you are in terms of adopting each style is among the best starting points for shaping your 2026 DevAI Goals. For example, consider two general team profiles – “Behind on DevAI” and “Average on DevAI.” Each of these types of teams can think about their goals in relation to adoption and maturity in the various DevAI styles.
Behind on DevAI
Our hypothetical “Behind on DevAI” team isn’t yet effective at using AI Assistants. Adoption may be hit or miss across the team, and outcomes are lacking. For starters, I would tell this team not to feel bad. Many teams are in this boat for a variety of reasons. Perhaps they are a more established organization and didn’t have a natural change agent to drive the transformation. Perhaps they’ve just been underwater on other things – working through a challenging backlog of historical technical debt, figuring out technical integration after an M&A event, whatever. The silver lining I would share with this team is that, given all the advancement in best practices, they don’t have to start from scratch and figure it all out on their own like most teams did a year ago.
You can take practical steps such as establishing a stated AI policy, selecting a small number of tools that have the best sentiment with the team and standardizing on those, doubling down on enablement, and sharing best practices. Unlike a year ago, there’s now tons of good information online to fuel enablement.
And beyond AI Assistants, the next stage of AI Agent Coworkers is clearly coming, so plan for that. Establish a pilot team to test that approach out in your environment, and plan to scale the effort as you see success.
Boiling it down, if I were in the “Behind on DevAI” category, my goals would essentially be:
- Attain success with AI Assistants, and
- Begin experimentation with AI Agent Coworkers.
(It goes without saying that you’d have to make these a lot more concrete and SMART, but those would be the big general ideas.)
Average on DevAI
For teams that are Average on DevAI, you’re already likely seeing success with AI Assistants. There might be room for improvement on that front, for example standardizing tools to manage spend and concentrate on best practices for the org.
But the big swing is likely to start building on momentum around AI Agent Coworkers. If you’re not already experimenting with this style, now is the time. And if you are working this way, I would be looking to ensure effectiveness and scale the practice. This likely includes investing in experimentation and adoption of practices like Context Engineering and Spec Driven Development (at least on some level – bringing more structure and machine manageability to the development workflow).
Put simply, for the “Average on DevAI” team, I would suggest that the main goal be to get most of the team delivering regularly and effectively with AI Agent Coworkers, and doing so across a more AI native development workflow including planning, development, code review, testing, debugging, documentation, and ongoing Context Engineering, all utilizing AI agents to the extent possible. This is the way to move beyond the standard 20-40% productivity stratum into the 2x+ zone that a small minority of cutting edge teams are seeing today, but where many more organizations will be by the end of the year.
In terms of the more advanced Autonomous Agent Fleets, it feels appropriate to also have a goal of monitoring that space closely, and initiating experiments as promising directions emerge. The teams that took this more aggressive posture with respect to AI Agent Coworkers are reaping the benefits today. Now (or at least soon) is the chance to consider getting on that leading wave for 2026.
So, summarizing our high level goals for the Average on DevAI team, the big areas would be:
- Get most of the team great at leveraging AI Agent Coworkers
- Monitor and possibly experiment with Autonomous Agent Fleets
An Exciting Year Ahead!
Looking back, the feeling of entering 2025 was one of great potential but also great unknowns. Anything seemed possible, and it wasn’t entirely clear that DevAI would really get to an established practice with reliable outcomes. Even if you were confident that would happen, it was less than clear how it would happen.
2026 feels different. There’s perhaps an equal feeling of potential and excitement, and perhaps less uncertainty about what it will look like at a high level. We can see effective strategies for harnessing agents quickly getting real, and it feels like best practices will certainly become established and widely disseminated.
In a way, despite the road ahead being perhaps a little less hazy than a year ago, to me it feels no less stressful. It’s a little like the difference between doubling revenue at $1M compared to $100M. When you’re small, you can be scrappy and tactical to achieve growth. But at larger scale, the growth needs to be systematic. On the one hand, there’s no cheating it. But on the other hand, you have the scaled systems, processes, and resources to do big stuff. It’s unclear which one is actually harder, but it’s safe to say that they’re pretty different.
The same feels true now. Getting the initial gains on AI was uncertain and a bit stressful given the hyped expectations. But in the end it was achievable with sensible investments like making experimenting with and using DevAI tools a priority, encouraging adoption, and sharing best practices across the team.
The next level will probably require more invasive changes to how we work. But it’s clear that figuring out the next phase is getting massive amounts of attention across the industry, and early results are very encouraging. Setting ambitious goals for the team in this area is the smart bet.
Adam Ferrari is a Jellyfish Advisor, board member and former SVP of Engineering at Starburst, EVP of Engineering at Salsify and CTO at Endeca. He has nearly three decades of experience serving in the engineering function across all levels and holds a PhD in Computer Science from the University of Virginia. Adam shares his thoughts regularly on his popular engineering blog, Engineering Together.