3 AI Trends Reshaping Software Engineering in 2026
Over the last few weeks, we’ve seen companies and thought leaders – Jellyfish included – make their predictions about what’s coming in 2026.
Jellyfish Advisor Adam Ferrari and Director of Engineering Luke Stevens recently connected to discuss three of the biggest trends that impacted software engineering in 2025 and are set to reshape the industry in 2026.
This is a lightly edited transcript of their conversation.
Jellyfish: Last year, we heard a lot about how the industry was trying to figure out what inning we're in when it comes to AI adoption and standardizing best practices. When you look back at the last 12 months, how far do you think the industry has come? What inning are we heading into in 2026?
Adam: I think we made a surprising amount of progress in 2025. When we entered the year, it felt quite chaotic. People were really just getting going with adoption, and it was unclear how far we were going to get in terms of actually showing outcomes and establishing best practices.
The outcomes are actually pretty impressive. You know, we’re seeing lots of reports, studies like the DORA report and analyses we did with OpenAI, that are showing 20–30% improvements in productivity. DORA has done surveys strictly around AI for the last two years, and the first year saw some negative results around throughput and how much time is dedicated to interesting work like innovation versus cleanup or maintenance. They found that those metrics improved significantly this year.
So it's clear that the industry isn't just adopting and experimenting; it's purposefully improving. The results are getting better.
Luke: In baseball terms, I would call this the third or fourth inning, when every batter starts seeing the pitcher for the second time. During the first half of the year, most engineering organizations were swinging and missing. Very few teams were benefiting from AI, but the majority were at least taking a swing and trying something. Now it's the second attempt – organizations are taking what they learned, making adjustments, and getting closer.
I don't think we're in the eighth inning. I actually think some teams are going to need to take three or four whacks at this, and there's still more to do in the next couple of years. Teams are realizing that they need to measure the impact of AI. If they didn't already have metrics coming in, not only are they not equipped to transform their org, they don't even know what their starting position was. It's impossible to know whether they're better or worse than they were a year ago. We're seeing measurement as a forcing function; folks who started trying things before they even knew how to quantify an outcome are now having to catch up on that.
Adam: It's interesting. When I look at the most successful teams, I think they'll have something reasonably stable and mature established over the next year. You could call that the late innings of the first incarnation of this. Some teams are going to be wildly ahead.
I don't think I've ever seen a concept in technology under this much pressure: boards and leadership desperately want to see their teams execute on AI. They don't want to fall behind or get disrupted by competitors; they want to stay competitive and efficient. What I'm anticipating is the emergence of some hyper-successful teams that are literally getting 2x throughput. We should be able to call that the first cut of a very solid and mature workflow.
Jellyfish: In 2024, the AI industry took huge steps forward each time a new model was released. But in 2025, we started to see growing questions around whether new model releases would keep consistently changing the game. Has this changed the way engineering organizations approach new releases? Can we no longer count on these massive improvements?
Luke: Cal Newport, a professor of computer science at Georgetown University, has written a lot about this, and he says there's plenty of evidence to suggest that this is normal. You can't keep adding to large language models forever – there are going to be plateaus. That doesn't mean we won't find other ways to do things, but there probably is a ceiling.
We need to remember that LLMs don't get smarter the way that people get smarter. The learning curve is different, and process maturity needs to occur to make the best use of them. But I don't actually know if I need the model to get better. Even if what I have today never got any smarter, that would probably be fine. I need two years to hook it up to my test writing, my code review, and to change the way I architect my software and my development process. Getting the best use out of what I have today will take time.
Adam: I totally agree. I think people have moderated their expectations on the model advancements. I suspect there are significant advancements still to be made on model architecture, but the focus has transitioned to learning and adapting to the amazing capabilities they already offer. That includes feeding it the right context, prompting appropriately, and making sure it has suitable documentation to work with. There are also features and integrations that will allow us to make smarter use of the tools. I think those are going to be the difference-makers this year.
Luke: There's a ton of innovation in the productization of AI's core capabilities, too. Startups are building tools that only write tests or review code, making them more ergonomic and applying them to specific tasks.
We still don't know the full extent to which AI will change the fundamental nature of human-computer interaction. Every web app might have an AI feature, but how many of those are actually useful? I don't need a robot to read me what's on the page – I can do that myself. So when it comes to how we interact with machines, there's still a long way to go.
Jellyfish: The DORA report presented the idea of AI being a mirror for your engineering organization. Was that a groundbreaking insight for organizations in 2025? Do you think that's always going to be the case, or will we reach a point where it can get past organizational messiness and still deliver value?
Adam: I don't think the finding came as a surprise, but it did challenge some of the irrational exuberance around the coding capability. Everyone was so fixated on the amazing capabilities that we needed that wake-up call, and the DORA report did a great job of articulating it.
Is it always going to be that way? Basic stuff is still changing: we don't yet understand the right shape for teams or roles, or the best practices. One of the insights from DORA was that we have to work in small units, but how do you do that? Tickets and PRs have a certain granularity around what humans want to do – we're going to have to change how we manage these things to allow AI to work effectively.
It's always going to be a queuing theory problem. A network of roles and activities delivers software, and that means you're always going to have bottlenecks, whether they're human bottlenecks or AI bottlenecks. While AI exacerbates that, it can also help us to solve some of the issues. We need to work out what the future delivery graph looks like for engineering organizations: which parts are AI, and how do we work with it?
Luke: I agree. It's like you've got an assembly line for making a car that has ten machines. Even if you make one of those machines ten times faster, you're probably not going to make more cars because the machine after that can't keep pace. But if you're learning how to make machines ten times faster, and you can do it to all the other machines on the assembly line, now you've got something. The DORA report tells us that the fundamentals still apply, and we're learning how to scale a system one piece at a time.
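Luke's assembly-line analogy is the classic bottleneck result from queuing theory: a pipeline's steady-state throughput is capped by its slowest stage, so speeding up any other stage changes nothing. A minimal sketch, with hypothetical stage names and durations chosen purely for illustration, makes the arithmetic concrete:

```python
# Toy model of a delivery pipeline: throughput is limited by the
# slowest stage. Stage names and hours are hypothetical examples.

def throughput(stage_hours):
    """Items (cars, PRs, releases) completed per hour by a serial
    pipeline whose rate is set by its slowest stage."""
    return 1.0 / max(stage_hours)

stages = {"plan": 2.0, "code": 4.0, "review": 3.0, "test": 3.0, "deploy": 1.0}

base = throughput(stages.values())  # bottleneck is "code" at 4.0 hours

# Speed up only "review" by 10x: the bottleneck is untouched,
# so throughput does not change.
faster_review = dict(stages, review=stages["review"] / 10)
after_one = throughput(faster_review.values())  # equal to base

# Speed up every stage by 10x: throughput scales by 10x.
faster_all = {name: hours / 10 for name, hours in stages.items()}
after_all = throughput(faster_all.values())

print(f"base={base}, one stage faster={after_one}, all faster={after_all}")
```

The same reasoning is why making code generation dramatically faster, in isolation, rarely moves delivery metrics: review, testing, and deployment become the new `max`.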
Six months ago, I would have said that we were maybe a year or two away from creating code that no human reads. Now I actually think we're further out. We're not going to get there in 2026; somebody will, but I don't think that the industry is going to see a big shift in that direction.
Adam: Broadly speaking, I think that's true. But there's also recognition that some stuff is pretty straightforward and non-controversial. If you have a well-documented design and plan for what's going in, really good observability capabilities when you deploy it, and rollback if there's an issue, you could get flow through the system and have humans working on the most important, innovative, and creative stuff. But we're a long way from just writing a spec and never looking at the code. Humans still need to understand how it's working and make sure that it's going in the right direction.
For more on how AI is reshaping software engineering, check out the Jellyfish blog. To start measuring AI impact at your organization, request a product demo.