Seniors are losing depth. Juniors are learning from codebases where 41% was written by a tool. The pipeline that builds judgement is being squeezed from both ends.

5,340 executives from four countries report zero productivity impact from AI¹. 41% of all code is now AI-generated. Code duplication is up 8x in two years. Developers copy-paste more than they refactor or reuse; refactoring has collapsed by 60%. The codebase is getting bigger and worse at the same time.

Every 25% increase in AI adoption drops system stability by 7.2%². After Copilot adoption in open-source projects, experienced developers review 6.5% more code while their own output drops 19%. Seniors are spending their time checking AI output instead of building. 23.5% more incidents per pull request³. Code churn (code rewritten within two weeks of being written) has doubled. The speed went into writing code that has to be written again.

Developer trust in AI dropped from 43% to 29%. Usage rose to 84%⁴. The people using the tools don't believe the tools produce correct output. They use them anyway.

Before AI, Stripe estimated developers spent 42% of their time on technical debt: $85 billion a year. Forrester predicts 75% of tech leaders will face moderate to severe tech debt by end of 2026. An MIT professor called AI "a brand new credit card that lets us accumulate technical debt in ways we were never able to before." That credit card now writes 41% of the code.

Gentoo and NetBSD banned AI-generated code outright. cURL shut down its bug bounty last month. The infrastructure the internet runs on is rejecting what the rest of the industry is shipping.

IBM just tripled entry-level hires after realizing that cutting juniors kills the pipeline that produces seniors. But the juniors entering now are learning from codebases where 41% was written by a tool, refactoring is at historic lows, and the seniors mentoring them are buried in review load from the same tools.

The debt is compounding. The skill to pay it back is thinning. Judgement comes from practice: knowing what to ask for, when output is wrong, what to throw away.

---
1. NBER, this month.
2. Google's DORA report, 39,000 professionals.
3. Cortex's 2026 Engineering Benchmark.
4. Stack Overflow, 2025.
Impact of Code Generators on Developer Skills
Summary
Code generators, including AI-powered tools, automate parts of software development by creating code from prompts or templates. They can speed up delivery, but relying on them too heavily can leave gaps in foundational skills, degrade code quality, and build up technical debt over time.
- Prioritize skill development: Encourage developers to use code generators for support, but ensure they also practice debugging, code review, and understanding core logic to build lasting expertise.
- Balance speed and oversight: Pair fast code generation with regular quality checks, refactoring, and human review to prevent issues from accumulating in the codebase.
- Use AI as a learning companion: Treat AI tools as aids for learning and understanding concepts rather than replacements for hands-on problem solving and critical thinking.
-
AI makes developers faster. But what happens when that speed comes at the cost of actually understanding what you're building?

When researchers at Anthropic tested 52 professional developers learning an unfamiliar Python library, the AI-assisted group scored 17% lower on conceptual understanding, code reading, and debugging, across all experience levels. There was also no significant difference in task completion time.

🔴 The biggest skill gap was in debugging. The control group hit a median of 3 errors during the task versus just 1 for the AI group. Working through those errors is what made the concepts stick.

🔴 Not all AI usage was equal. Developers who asked conceptual questions scored 65-86% on the skills quiz. Those who just delegated code generation? 24-39%.

🔴 The AI users felt it, too. Several described themselves as feeling "lazy" and wished they'd engaged more deeply with the material.

To be clear, the finding isn't "don't use AI." It's that delegation and learning are fundamentally different activities, and most developers are defaulting to delegation. If you want to get the best of speed AND learning, consider these ideas:

1️⃣ Separate performance tasks from learning tasks. When your team already knows the domain, let AI accelerate delivery. When they're onboarding to something new, encourage AI for explanations and conceptual questions.

2️⃣ Stop optimizing away all friction. Debugging isn't all wasted time; it's where understanding forms. That investment comes in handy when you're trying to debug a P0 in production or explain logic to business leaders.

3️⃣ Coach high-signal interaction patterns. "Explain how this concurrency model works" produces very different outcomes than "write the function for me."

We obsess over how fast AI helps developers ship, but we should think slightly longer term about the impact of that speed, and what it means for long-term learning and retention. Full research breakdown in this week's RDEL (link in comments).

How is your team balancing AI speed with skill development?
-
A technical lead recently told me, "I don't have tasks for entry-level engineers on my team. AI coding assistants are doing a better job, and I can skip the mentoring efforts."

That hit hard, and it's a growing sentiment in the industry.

AI coding assistants are changing the landscape. They handle everything from code completion and debugging to generating entire code blocks from natural language prompts. Developers using these tools report finishing tasks up to 55% faster.

But there's a catch. The entry barrier to becoming an individual contributor has just gotten higher. Fewer companies are willing to invest in entry-level programmers, and traditional growth paths are being disrupted. And if juniors rely too heavily on AI, they risk missing out on foundational skills: deep debugging, core logic comprehension, and hands-on experience. This can result in "hollow" expertise that hinders long-term growth.

Yet this isn't just a threat; it's a massive opportunity. Junior developers who treat AI tools as learning companions, not crutches, can actually accelerate their careers. By pairing AI's power with critical thinking, rigorous practice, and strong fundamentals, juniors can cultivate skills that AI can't replicate.

The key is intentional adaptation:
- Treat AI as your pair programmer, not your replacement.
- Prioritize human-centric skills like creativity, communication, and critical thinking.
- Sharpen your abilities in debugging, code review, and prompt engineering.

The future of software development isn't AI vs. humans; it's humans who know how to work with AI.

What's your take? Are you seeing this shift on your team?
-
AI-assisted coding is creating a quiet capability gap. New research from Anthropic shows a sharp trade-off most leaders are (probably) missing.

Yes, AI tools speed up coding. No, they do not build engineers.

In a controlled study, developers using AI finished tasks faster but scored 17 points lower on comprehension. Debugging suffered the most. That matters, because debugging is the skill you need when AI-generated code fails in production.

This connects to a second signal. Junior hiring is collapsing, while AI-written code is increasing defect rates. The result is predictable: more velocity, weaker judgment, more escaped defects.

GitHub Copilot data already hinted at this. Output goes up. Bugs go up too. The missing variable is human oversight capacity, especially at the junior and mid levels.

The risk is not AI replacing developers. The risk is organizations training a generation that cannot supervise AI.

I have pulled together the full research, metrics, and implications in a comprehensive report. It covers:
→ Why speed gains differ between familiar work and learning
→ How interaction patterns with AI predict skill loss or retention
→ Why cutting junior hiring creates a multi-year capability hole
→ What engineering leaders should measure instead of raw velocity

If you are leading engineering, platform, or AI adoption, this is not theoretical. It is already showing up in production incidents and team quality.
-
𝗗𝗼 𝗔𝗜 𝗖𝗼𝗱𝗶𝗻𝗴 𝗔𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝘁𝘀 𝗥𝗲𝗮𝗹𝗹𝘆 𝗕𝗼𝗼𝘀𝘁 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆? 𝗔 𝗥𝗲𝗮𝗹𝗶𝘁𝘆 𝗖𝗵𝗲𝗰𝗸 𝗳𝗿𝗼𝗺 800+ 𝗚𝗶𝘁𝗛𝘂𝗯 𝗣𝗿𝗼𝗷𝗲𝗰𝘁𝘀

New research from Carnegie Mellon University just dropped, and the results are fascinating. The team studied the impact of 𝗖𝘂𝗿𝘀𝗼𝗿, a popular LLM-based agentic IDE, across 807 real-world repositories using causal inference methods.

𝗛𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗳𝗼𝘂𝗻𝗱:

𝗩𝗲𝗹𝗼𝗰𝗶𝘁𝘆 gains are real, but 𝘀𝗵𝗼𝗿𝘁-𝗹𝗶𝘃𝗲𝗱:
- +281% more code in month 1
- +48% in month 2
- Back to baseline after that

𝗖𝗼𝗱𝗲 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝘁𝗮𝗸𝗲𝘀 𝗮 𝗵𝗶𝘁, and it sticks:
- +30% static analysis warnings
- +41% increase in code complexity
- Long-term slowdown due to accumulated tech debt

=> 𝗦𝗲𝗹𝗳-𝗿𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗶𝗻𝗴 𝗰𝘆𝗰𝗹𝗲: 𝗠𝗼𝗿𝗲 𝗰𝗼𝗱𝗲 -> 𝗠𝗼𝗿𝗲 𝗰𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 -> 𝗦𝗹𝗼𝘄𝗲𝗿 𝗽𝗿𝗼𝗴𝗿𝗲𝘀𝘀

LLM coding agents like Cursor can supercharge productivity, for a moment. But without process changes, they may speed you toward an unmaintainable codebase.

𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: Build in quality assurance from day one: test coverage, refactoring sprints, smarter prompts. AI won't save your codebase unless you save it first.

#AI #LLM #SoftwareEngineering #Productivity #TechDebt
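To make that takeaway concrete, here is a minimal sketch of one such day-one quality gate, assuming the third-party radon package and an illustrative complexity budget (neither comes from the study itself):

```python
"""complexity_gate.py -- a minimal CI quality gate (illustrative sketch).

Fails the build when any function or class exceeds a cyclomatic-complexity
budget. Assumes the third-party `radon` package (pip install radon); the
threshold and default path are hypothetical choices, not from the post.
"""
import sys
from pathlib import Path

from radon.complexity import cc_visit  # radon's cyclomatic-complexity visitor

THRESHOLD = 10  # illustrative budget; tune to your codebase


def main(root: str) -> int:
    offenders = []
    for path in Path(root).rglob("*.py"):
        source = path.read_text(encoding="utf-8")
        for block in cc_visit(source):  # one result per function/method/class
            if block.complexity > THRESHOLD:
                offenders.append(
                    f"{path}:{block.lineno} {block.name} "
                    f"complexity={block.complexity}"
                )
    for line in offenders:
        print(line)
    return 1 if offenders else 0  # nonzero exit code fails the CI job


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "src"))
```

Wired in as a required CI step, a gate like this surfaces the kind of complexity drift the study measured (+41%) at review time, before it accumulates.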
-
I'm seeing a pattern that scares me: junior developers who can't read a stack trace without an agent explaining it first.

When everyone has the same code generator, the only edge left is knowing what to do when it's wrong.

Think about it. Who gets the call when no one knows how to fix a seemingly unsolvable bug? Probably the one who understands memory management, concurrency, and what actually happens when you hit "deploy."

𝗖𝗼𝗱𝗲 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝗳𝗿𝗲𝗲. 𝗦𝘆𝘀𝘁𝗲𝗺 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗶𝘀 𝘁𝗵𝗲 𝗻𝗲𝘄 𝘀𝗰𝗮𝗿𝗰𝗶𝘁𝘆.

This isn't theory; we've seen it before. Here's what I'm watching: developers who rely on agents for everything, and developers who use agents but can debug, optimize, and architect without them. One group becomes cheaper every quarter. The other becomes irreplaceable.

✅ If you want to be in the second group, start small. Next time you hit a bug, read the stack trace yourself first. Trace the execution manually. Understand what broke before asking an agent why. Pick one system you use daily and learn how it actually works under the hood. You'll learn more in one week than in a month of copy-paste.

💬 Prompt engineering vs. system architecture: which skill are you investing in right now?
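To make "read the stack trace yourself first" concrete, here is a small self-contained Python exercise (my illustration, not the author's): run it once, then walk the traceback by hand before asking any tool what went wrong.

```python
# A deliberately buggy call chain (hypothetical example). Run it and
# read the resulting traceback yourself, bottom-up.

def parse_price(raw: str) -> float:
    return float(raw)  # the ValueError is raised here when raw is "N/A"


def load_row(row: dict) -> float:
    return parse_price(row["price"])


def total(rows: list[dict]) -> float:
    return sum(load_row(r) for r in rows)


total([{"price": "9.99"}, {"price": "N/A"}])

# Traceback (most recent call last):   <- frames are listed caller-first:
#   ... total -> load_row -> parse_price
# ValueError: could not convert string to float: 'N/A'
#
# Read it bottom-up: the last line names the error and the bad value,
# the frame above it is where the error was raised, and the frames above
# that are the execution path that got you there -- exactly the manual
# trace the post recommends.
```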
-
A thought-provoking piece by Andreas Møller challenges the seductive promise that AI and no-code tools will let anyone build software 10x faster. The uncomfortable truth he surfaces is that these tools optimise for the wrong part of the problem: they make simple things simpler while making complex things harder.

The muscle-training analogy resonates deeply here. When you struggle through a problem manually, you build understanding that compounds. When a tool abstracts away that struggle, you gain speed but lose the cognitive workout that transforms juniors into seniors. The learning curve flattens, but so does your growth trajectory.

Experienced developers point out that AI-generated code accrues technical debt at alarming rates if you accept edits unchanged. Your codebase becomes a monument to decisions you never actually made, filled with patterns you do not understand. AI excels at the accidental stuff: typing out boilerplate and scaffolding. But the essential work of system design, security architecture, and managing evolving requirements remains stubbornly human.

The real risk is not that AI makes programming too easy. It is that it creates an illusion of competence that masks fundamental gaps in understanding. You can ship a prototype in hours, but debugging it in production requires knowledge you never acquired.

For those early in their careers, the counterintuitive advice might be this: choose the steeper path while you still can. The resistance is the point.

https://lnkd.in/ewJ38qU4

#SoftwareEngineering #AI #Programming #TechCareers #CodingTools #SoftwareDevelopment
-
Several comprehensive studies, including O'Reilly's Playbook for Large Language Model Security, the 2025 State of Software Delivery report, and GitClear's 2025 AI Copilot Code Quality report, conclude that companies have started using AI for #coding too soon.

A general conclusion: "LLMs are not #software engineers; they are like interns with goldfish memory. They're great for quick tasks but terrible at keeping track of the big picture."

"As reliance on #AI increases, that big picture is being sidelined. Ironically, by certain accounts, the total developer workload is increasing—the majority of developers spend more time debugging AI-generated code and resolving security vulnerabilities."

"AI output is usually pretty good, but it's still not quite reliable enough," says another. "It needs to be a lot more accurate and consistent. Developers still always need to review, debug, and adjust it."

One problem: "AI tools tend to duplicate code, missing opportunities for code reuse and increasing the volume of code that must be maintained." GitClear's report "analyzed 211 million lines of code changes and found that in 2024, the frequency of duplicated code blocks increased eightfold." "In addition to piling on unnecessary technical debt, cloned code blocks are linked to more defects—anywhere from 15% to 50% more."

While larger context windows will help, "they're still insufficient to grasp full software architectures or suggest proper refactoring." One CEO says: "AI tools often waste more time than they save for areas like generating entire programs or where broader context is required. The quality of the code generated drops significantly when they're asked to write longer-form routines."

"Hallucinations still remain a concern. AI doesn't just make mistakes—it makes them confidently. It will invent open-source packages that don't exist, introduce subtle security vulnerabilities, and do it all with a straight face."

"Security vulnerabilities are another issue. AI-generated code may contain exploitable flaws."

Furthermore, AI agents often "fail to find root cause, resulting in partial or flawed solutions": "Agents pinpoint the source of an issue remarkably quickly, using keyword searches across the whole repository to quickly locate the relevant file and functions—often far faster than a human would. However, they often exhibit a limited understanding of how the issue spans multiple components or files, and fail to address the root cause, leading to solutions that are incorrect or insufficiently comprehensive."

Solutions include better training data, more testing to validate AI outputs, progressive rollouts, and greater use of finely tuned models.

The bottom line for some: "AI-generated code isn't great—yet. But if you're ignoring it, you're already behind. The next 12 months are going to be a wild ride."

#technology #innovation #artificialintelligence #hype
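To illustrate the duplication pattern GitClear describes, here is a hypothetical Python before/after (mine, not from the report): two pasted near-clones, then the single helper that code reuse would have produced.

```python
# Hypothetical before/after showing the clone pattern GitClear flags.

# Before: two near-identical blocks, pasted rather than factored out.
def load_users(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        rows = [line.strip().split(",") for line in f if line.strip()]
    return [dict(zip(("id", "name"), row)) for row in rows]


def load_orders(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        rows = [line.strip().split(",") for line in f if line.strip()]
    return [dict(zip(("id", "total"), row)) for row in rows]


# After: the reuse opportunity the quote says gets missed --
# one helper to maintain instead of N pasted clones.
def load_table(path: str, fields: tuple[str, ...]) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        rows = [line.strip().split(",") for line in f if line.strip()]
    return [dict(zip(fields, row)) for row in rows]
```

One plausible mechanism behind the defect numbers quoted above: any bug fixed in load_users must be independently re-fixed in load_orders, and cloned fixes drift apart.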
-
𝗦𝗵𝗼𝘂𝗹𝗱 𝗝𝘂𝗻𝗶𝗼𝗿𝘀 𝗖𝗼𝗱𝗲 𝗪𝗶𝘁𝗵 𝗔𝗜?

We assume AI helps junior developers ramp up faster: learn the codebase quicker, ship sooner, and close the skill gap with seniors. Anthropic just ran a randomized controlled trial that challenges this.

52 developers learned a new Python library for async programming, half with AI assistance, half without. The AI group scored 𝟭𝟳% 𝗹𝗼𝘄𝗲𝗿 on comprehension tests. That's nearly two letter grades (50% vs 67%, p=0.01). The largest gap? 𝗗𝗲𝗯𝘂𝗴𝗴𝗶𝗻𝗴, the exact skill juniors need to catch errors in AI-generated code.

AI didn't even make them faster. The AI group finished about two minutes earlier, but this wasn't statistically significant. Some participants spent up to 30% of their time just writing prompts.

𝗛𝗼𝘄 𝘆𝗼𝘂 𝘂𝘀𝗲 𝗔𝗜 𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗲𝘀 𝘄𝗵𝗲𝘁𝗵𝗲𝗿 𝘆𝗼𝘂 𝗹𝗲𝗮𝗿𝗻 𝗮𝘁 𝗮𝗹𝗹

The study identified six interaction patterns. Three scored below 40%, three scored above 65%.

Low scorers:
→ Delegated everything to AI
→ Started manually, then progressively offloaded work
→ Used AI as a debugging crutch without building understanding

High scorers:
→ Generated code, then asked follow-up questions
→ Requested explanations alongside code
→ Asked conceptual questions, coded independently

Same tool, but different outcomes.

This implies that unrestricted AI access during onboarding creates a capability gap. We get faster task completion today, but we lose the debugging instincts needed to validate AI output tomorrow. Think about it before you onboard new junior developers.

Image: Anthropic.
-
Anthropic released a study which found that developers using AI assistance to learn a new Python library scored 17% lower on comprehension tests than those who learned without AI. Importantly, they struggled most with debugging, which is one of the key skills you need to validate AI-generated code.

The study also highlights that developers who used AI to ask questions and seek explanations retained their learning. Those who delegated code generation entirely finished faster but learned less.

For me, this mirrors the tension we saw in teams before AI: we need to deliver outcomes, but not at the expense of developing skills. AI-assisted coding is making this tension more obvious. Junior developers benefit most from AI productivity gains, but they're also the ones who most need to be developing foundational skills. If they're learning on the job while relying heavily on AI code generation, what capabilities are we actually building, and what are we missing out on?

The core skills of understanding design, debugging issues, and reading code critically require the kind of learning that comes from wrestling with problems and working through errors independently.

Being deliberate about team composition and levels is more important than ever. We need experienced developers to validate AI outputs, mentor juniors through proper skill development, and maintain institutional knowledge of how systems actually work. At the same time, we need to actively bring junior developers in, as they'll be the ones who grow up native in this AI-assisted world. This means being explicit about when and how AI tools are used during onboarding and skill-development phases.