Keeping Up With AI: The Painful New Mandate for Software Engineers

A new era of software engineering is emerging, with artificial intelligence (AI) at the forefront. By 2028, it is expected that 90% of enterprise software engineers will use AI code assistants, up from less than 14% in early 2024.
Today, developers' use of AI-based tools, such as AI code assistants and AI testing tools, is largely limited to coding and testing activities. However, Gartner predicts a future where AI will be integral and native to most software engineering tasks, changing both the nature of developers' work and their way of working.
Software engineering leaders are keen to push the use of AI to unleash productivity improvements, going beyond coding and testing. The goal is to enable developers to focus on meaningful tasks that require critical thinking, human ingenuity and empathy, such as solving problems without proven design patterns, experimenting with new user interaction modes or spending more time speaking with users.
AI-native software engineering practices are an emerging set of practices and principles optimized for using AI-based tools to develop and deliver software systems and applications. They will arise as development teams redesign their workflows to take full advantage of these tools. For example, developers use AI agents within their IDEs that proactively refactor code, write unit tests, create documentation and recommend code reviewers when merging code into source control repositories. The long-term goal is to automate the end-to-end software development workflow entirely.
How Devs Are Shifting from AI-Augmented to Fully AI-Native Software Development
With the advent of AI-based developer tools, software engineering has become AI-augmented, though the fundamental methods and Software Development Life Cycle (SDLC) remain essentially unchanged. AI has been integrated into various tasks within the traditional SDLC, offering incremental improvements in lead time, cycle time and quality.
AI-based tools will help developers go beyond handling drudgery. These tools will be ideation partners, boosting human creativity and supporting product teams in generating new ideas. This applies to every role on the product team: developers, product owners, quality engineers, site reliability engineers and user experience (UX) designers.
As an example, multimodal capabilities in AI-based development tools will be critical to boosting human creativity. Instead of staring at a blank canvas, a product manager or UX designer can use AI design tools to convert text prompts, screenshots or paper sketches into images and visual prototypes. Visual content creation platforms that support these workflows include Codia, Figma, Motiff, Recraft, Uizard and Visily. Teams can then iterate on the design and, once it is finalized, UX designers can convert it to HTML and CSS.
Software engineers and developers will need to hone their skills. Organizations will not be able to successfully adopt AI-native practices until their workforce can effectively use them. For example, the ability to clearly articulate requirements in precise and unambiguous natural language becomes a critical skill for making the best use of AI-based development tools.
The journey toward AI-native software engineering will be characterized by a growing proportion of autonomous and semi-autonomous tasks throughout the SDLC.
Tasks like code writing, where developers traditionally rely on rule-based, boilerplate code generation, will naturally evolve to include AI code assistants. Eventually, AI will autonomously generate the code with developer guidance and oversight. Developers will begin to perform all day-to-day tasks with AI-assisted development tools and, eventually, they'll learn to offload tasks to AI and orchestrate teams of agents.
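The shift from AI-assisted work to orchestrating teams of agents can be sketched as a minimal loop: the developer delegates tasks to agents, then reviews each draft before accepting it. The agent skills, task names and review callback below are hypothetical illustrations, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    status: str = "pending"  # pending -> drafted -> approved

@dataclass
class Agent:
    """A hypothetical AI agent that produces a draft result for a task."""
    skill: str

    def run(self, task: Task) -> str:
        task.status = "drafted"
        return f"{self.skill} draft for {task.name}"

def orchestrate(tasks, agents, review):
    """Developer-as-conductor: dispatch tasks to agents, gate on human review."""
    results = []
    for task, agent in zip(tasks, agents):
        draft = agent.run(task)
        if review(draft):            # oversight stays with the developer
            task.status = "approved"
            results.append(draft)
    return results

tasks = [Task("refactor billing module"), Task("write unit tests")]
agents = [Agent("refactoring"), Agent("test-generation")]
done = orchestrate(tasks, agents, review=lambda draft: True)
```

The point of the sketch is the shape of the role: the developer supplies the task decomposition and the acceptance criteria, while the agents supply the drafts.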
The Evolving Role of Developers as Orchestrators of AI-Driven Workflows
In AI-native software engineering, a developer's role is more akin to an orchestra conductor than a musician. A conductor's primary job is to direct the musicians so that they understand the composition and can play it together. Much like a conductor must deeply understand music, a developer in the conductor role must deeply understand software engineering.
AI-native engineering practices can enable better and faster decision-making in upstream development workflows. For example, product teams can use AI-enabled analysis to make data-driven product roadmap decisions that validate or refute subjective human choices. Then, teams can expedite creating prototypes to speed up feasibility studies and chart an informed path forward. Other upstream use cases include product design, user adoption analytics and feature prioritization.
Additionally, scaling developer capacity can deliver critical business value. Most business-critical software delivery workflows rely on humans in the loop (HITL) for error correction, positive reinforcement and implementing policy guardrails. The industry is seeing early signals of HITL agentic workflows, with previews from GitHub, Lovable, Replit and StackBlitz showing users going from expressing an intent to a working app deployed in the cloud. However, keeping humans in the loop for the most mundane tasks consumes precious developer cycles. Advancements in agentic capabilities could enable autonomous improvement loops, freeing developer time and achieving service levels previously unattainable due to human capacity constraints.
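One way to reclaim those cycles is to route only risky or business-critical changes to a human and let the mundane ones flow through autonomously. The sketch below illustrates that triage; the `criticality` and `risk_score` fields and the 0.7 threshold are illustrative assumptions, not a recommended policy.

```python
def needs_human(change: dict) -> bool:
    """Escalate only business-critical or risky changes.
    Threshold is illustrative, not prescriptive."""
    return change["criticality"] == "high" or change["risk_score"] >= 0.7

def process(changes, approve):
    """Auto-apply mundane changes; keep humans in the loop for the rest."""
    applied, escalated = [], []
    for change in changes:
        if needs_human(change):
            escalated.append(change)     # human gate
            if approve(change):
                applied.append(change)
        else:
            applied.append(change)       # autonomous path
    return applied, escalated

changes = [
    {"id": 1, "criticality": "low", "risk_score": 0.1},
    {"id": 2, "criticality": "high", "risk_score": 0.9},
]
applied, escalated = process(changes, approve=lambda c: False)
```

With a policy like this, human attention is spent only where the blast radius justifies it, which is the capacity gain the paragraph above describes.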
Practical Steps for Engineering Leaders To Implement AI-Native Approaches
AI-native software engineering will emerge as organizations explore these possibilities and identify and share what is effective and what is not. Leaders must determine the applicability of AI-native practices to the SDLC based on three factors: business criticality, risk and complexity. Leaders must also consider a phased approach for transitioning to AI-native methods:
- Assessment and Planning: Evaluate current workflows and identify areas where AI can add the most value. Consider low-risk, high-impact tasks that can be automated.
- Pilot and Experimentation: Implement AI tools in a controlled environment to test their effectiveness. Use feedback to refine processes and gradually expand AI usage.
- Integration and Scaling: Once AI tools have proven their value, integrate them into broader workflows and scale their use across teams, ensuring seamless collaboration between AI and human developers.
- Continuous Improvement: Establish autonomous improvement loops where AI tools learn and adapt over time, enhancing efficiency and reducing human intervention in routine tasks.
The path to AI-native software engineering will be marked by a continuously increasing percentage of autonomous and semi-autonomous tasks across the SDLC. Table 1 summarizes how we see this evolution taking place.
Like all innovations, AI-native software engineering will require software engineering leaders to mitigate new risks and tackle new challenges. Addressing these challenges involves balancing automation with human oversight, continuous skill development, and robust security measures.
These challenges include a failure to shift developers' mindset from implementing proven solutions to creating innovative ones. In the same vein, failing to recognize software engineering as fundamentally a team sport deprives the team of the healthy debate, discussion and inquiry that drive creativity and innovation. Software engineering leaders must also be wary of the potential degradation of the critical skill sets engineers need to grow into more senior roles on their teams.
Then, there are the security risks. Over-reliance on AI outputs without verification can lead to significant business risks, including reputational damage. Additionally, AI tools expand security threat surfaces, requiring comprehensive risk assessments. Lastly, multi-agent workflows can compound hallucination risks and model overreach, demanding careful monitoring and context management.
Mitigating security and compliance risks associated with AI-native practices requires a three-pronged approach focused on the three layers of AI-enabled applications:
- The first risk pertains to an expanded software supply chain that now includes GenAI models and their ecosystem of tools, protocols and frameworks. Examples include the Model Context Protocol (MCP) launched by Anthropic and the recent Agent2Agent (A2A) protocol introduced by Google. These technologies significantly expand the attack surface of the software development toolchain and present risks such as those highlighted in the OWASP Top 10 for LLM Applications 2025.
- The second risk pertains to first-party, custom code. AI-native software engineering results in an unprecedented increase in the volume of code generated, making it difficult to keep pace with security requirements. This calls for automated application security testing practices such as static and dynamic application security testing, penetration testing, fuzz testing, software composition analysis and secrets scanning. Software engineering teams should follow the guidelines in NIST SP 800-218 and SP 800-204D to protect against these risks.
- The third risk pertains to intellectual property (IP) violation. This can take two forms: protecting one’s own IP by making sure AI tools don’t use an organization’s source code and data to train their models, and ensuring that users of AI tools don’t violate third-party IP. In both cases, this calls for governing AI tools using centralized management approaches such as platform engineering, which can implement governance policies at scale without impeding developer experience.
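To make the second prong concrete, one of the automated practices listed, secrets scanning, can be sketched as a simple pattern match over source lines. Production scanners use far larger rule sets and entropy checks; the two patterns below are illustrative only.

```python
import re

# Illustrative patterns only; real scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan(source: str):
    """Return (rule_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

code = 'api_key = "abcdefghij1234567890XYZ"\nprint("hello")\n'
findings = scan(code)
```

A check like this typically runs in the CI pipeline so that AI-generated code is screened at the same gate as human-written code.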
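The third prong, centralized governance through platform engineering, can amount to a single policy check enforced by the internal developer platform. The tool names and policy fields below are hypothetical; the point is that approval and training-data terms live in one place rather than in each team's configuration.

```python
# Hypothetical central policy: which AI tools are approved, and on what terms.
APPROVED_TOOLS = {
    "assistant-a": {"trains_on_customer_code": False},
    "assistant-b": {"trains_on_customer_code": True},
}

def tool_allowed(tool: str) -> bool:
    """Permit a tool only if it is approved and contractually barred
    from training on the organization's source code."""
    policy = APPROVED_TOOLS.get(tool)
    return policy is not None and not policy["trains_on_customer_code"]
```

Because the platform applies the check on every request, the policy scales across teams without each developer having to reason about IP terms.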
Software engineering leaders preparing for an AI-native era should rethink developer workflows to benefit from improved productivity and enhanced creativity. In doing so, they should prioritize low-risk and high-value use cases for agent-based tools by assessing their ability to automate repetitive work and keep humans in the loop for oversight, verification, and explainability.
Finally, they should explore ways to benefit from autonomous improvement loops. They can segment tasks based on business criticality, risk threshold, and task complexity and use self-improvement loops to deliver business value without increasing risks.
As software engineering transitions into an AI-native era, leaders must embrace the transformative potential of AI-based tools while carefully managing associated risks. They can unlock productivity and creativity by rethinking workflows, prioritizing strategic use cases, and leveraging autonomous improvement loops, ensuring sustainable and innovative development practices that balance automation with human oversight.