Thought-provoking conversation between Aravind Srinivas (Founder, Perplexity) and Ali Ghodsi (CEO, Databricks) at a recent Perplexity Business Fellowship session, offering deep insights into the practical realities and challenges of AI adoption in enterprises.

TL;DR:
1. Reliability is crucial but challenging: Enterprises demand consistent, predictable results. Despite impressive model advancements, ensuring reliable outcomes at scale remains a significant hurdle.
2. Semantic ambiguity in enterprise data: Ali pointed out that understanding enterprise data—often riddled with ambiguous terms (a "C" that could mean Calcutta or California)—is a substantial ongoing challenge, requiring extensive human oversight to resolve.
3. Synthetic data & customized benchmarks: Given limited proprietary data, synthetic data generation and custom benchmarks are key to improving AI reliability. Yet creating these benchmarks accurately remains complex and resource-intensive.
4. Strategic AI limitations: Ali expressed skepticism about AI's current ability to automate high-level strategic tasks like CEO decision-making, given their complexity and the nuanced human judgment required.
5. Incremental productivity, not fundamental transformation: AI significantly enhances productivity in straightforward tasks (HR, sales, finance) but struggles to transform complex, collaborative activities such as aligning product strategies and managing roadmap priorities.
6. Model fatigue and inference-time compute: Despite rapid model improvements, Ali highlighted the phenomenon of "model fatigue," where incremental model updates are perceived as less and less impactful, despite real underlying progress.
7. Human-centric coordination still essential: Even at Databricks, AI hasn't yet addressed core challenges around human collaboration, politics, and organizational alignment. Human intuition, consensus-building, and negotiation remain central.

Overall, the key challenges for enterprises as highlighted by Ali are:
- Quality and reliability of data
- Evals: yardsticks for determining whether the system is working well. We still need better evals.
- Extremely high-quality data for a specific domain and use case is hard to come by; synthetic data plus evals are key.

The path forward with AI is filled with potential—but clearly, it's still a journey with many practical challenges to navigate.
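To make point 3 concrete, here is a minimal sketch of what a custom eval might look like in practice. Everything in it (the `call_model` stub, the test cases, the scoring) is a hypothetical illustration, not anything described in the session.

```python
# Minimal sketch of a custom enterprise eval: hand-written cases encode
# domain knowledge (e.g. that "C" in this dataset means California),
# and the pass rate becomes the yardstick for "is the system working?".

def call_model(prompt: str) -> str:
    # Stand-in for your real model/API call.
    return "In this dataset, the region code 'C' denotes California."

EVAL_CASES = [
    {"prompt": "In the sales table, what does region code 'C' mean?",
     "expected": "california"},
    {"prompt": "Expand the region code 'NY' from the sales table.",
     "expected": "new york"},
]

def run_evals() -> float:
    passed = sum(
        1 for case in EVAL_CASES
        if case["expected"] in call_model(case["prompt"]).lower()
    )
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")  # track this as models change
```

The same harness can be fed synthetic variations of each case, stretching limited proprietary data further.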
Challenges of AI in Software Development
Explore top LinkedIn content from expert professionals.
Summary
Artificial intelligence (AI) has significantly impacted software development, offering efficiency gains and innovative solutions. However, integrating AI into the software development lifecycle presents challenges such as data quality, scalability, code maintainability, and the need for human oversight and collaboration.
- Prioritize data quality: Ensure that your data is clean, accurate, and relevant, as AI relies on high-quality information to deliver dependable results.
- Focus on human oversight: AI can streamline processes, but human review remains essential to address issues like testing, security, and adapting AI-generated code to organizational standards.
- Build a supportive ecosystem: Invest in upskilling teams, creating standardized frameworks, and modernizing infrastructure to better integrate AI tools into workflows.
-
We analyzed data from over 10,000 developers across 1,255 teams to answer a question we kept hearing from engineering leaders: “If everyone’s using AI coding assistants… where are the business results?” This rigorous Faros AI longitudinal study of individual and company productivity exposes the gap between the two.

On an individual level, AI tools are doing what they promised:
- Developers using AI complete 98% more code changes
- They finish 21% more tasks
- They parallelize work more effectively

But those gains don’t translate into measurable improvements at the organizational level. No lift in speed. No lift in throughput. No reduction in time-to-deliver. Correlations between AI adoption and delivery metrics evaporate at the organizational level.

We’re calling this the AI Productivity Paradox—and it’s the software industry’s version of the Solow paradox: “AI is everywhere—except in the productivity stats.”

Our two-year study examined how metrics change as teams move from low to high AI adoption:
- Developers using coding assistants have higher task throughput (21%) and PR merge rate (98%), and they parallelize more work.
- Code review times increased by 91%, indicating that human review remains a bottleneck.
- AI adoption also leads to much larger code changes (154%) and more bugs per developer (9%).

Why is there no trace of impact on key engineering metrics at the organizational level? Uneven adoption, workflow bottlenecks, and the lack of coordinated enablement strategies help explain this paradox. Our data shows that in most companies, AI adoption is still a patchwork. And because software delivery is inherently cross-functional, accelerating one team in isolation rarely translates to meaningful gains for the organization.

Most developers using coding assistants rely on basic autocomplete functions, with relatively low usage of advanced features such as chat, context-aware code review, or autonomous task execution. AI usage is highest among newer hires, who rely on it to navigate unfamiliar codebases, while lower adoption among senior engineers suggests limited trust in AI for more complex, context-heavy tasks. We also find that individual returns are being wiped out by bottlenecks further down the pipeline, in code reviews, testing, and deployments that simply can't keep up.

AI isn't a magic bullet, and it can't outrun a broken process. Velocity at the keyboard doesn't automatically mean velocity in the boardroom. If you want AI to transform your business, you can't just distribute licenses—you need to overhaul the system around them. This report might help guide the way. https://lnkd.in/gPb4j8kf

#AI #Productivity #Engineering #AIParadox #FarosAI
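A toy model of why this can happen (my illustration, not Faros AI's methodology or data): if downstream capacity stays fixed, authoring more PRs just grows the queue instead of lifting org-level delivery.

```python
# Toy illustration: individual output rises, but org-level delivery is
# capped by fixed downstream capacity (review, testing, deployment).
# All numbers are made up for clarity.

def weekly_delivery(prs_authored: int, downstream_capacity: int) -> dict:
    delivered = min(prs_authored, downstream_capacity)   # pipeline gates delivery
    backlog_growth = max(0, prs_authored - downstream_capacity)
    return {"delivered": delivered, "backlog_growth": backlog_growth}

before = weekly_delivery(prs_authored=100, downstream_capacity=100)
after = weekly_delivery(prs_authored=198, downstream_capacity=100)  # ~2x output

print("before AI:", before)  # {'delivered': 100, 'backlog_growth': 0}
print("after AI: ", after)   # {'delivered': 100, 'backlog_growth': 98}
```

Same delivery numbers, a growing queue: the individual gain is real, but it never reaches the organizational metrics.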
-
We’re reaching an inflection point: AI will soon handle code generation with ease. But does that mean AI can now easily build and maintain heavy-duty software? For enterprises, the challenge goes far beyond code generation. The real bottleneck lies in ensuring software quality, integrity, and governance at scale - this is the new frontier. Great to hear from our co-founder and CPO, Dedy Kredo, on this critical topic and Qodo’s approach to tackling it: https://lnkd.in/d4HJTMj9

• "Vibe coding", in its naive form, is not sustainable for enterprise production code: it generates significant technical debt and overlooks critical aspects like verifying adherence to organizational best practices and thorough testing
• The bottleneck shifts to code review and verification, not just generation: as AI generates more of the code, the most frustrating problem and key bottleneck becomes reviewing, testing, and verifying that code at scale, ensuring it's secure, well-tested, and aligned with company standards
• Enterprise AI for code requires a "system layer," not just larger models - with the release of GPT-5, we can finally put this notion to the test and gather further evidence
• Qodo is built on the belief that intelligent software development for enterprise-grade applications requires a deep understanding of the codebase, and a system where developers orchestrate and customize AI agents rather than simply relying on models to generate all code or manage everything in a single context window
• Developers will be orchestrators, not replaced
• Software quality assurance will become the "next frontier"
• An agentic swarm approach - specialized agents with designated tasks, different UX/UI, and different credentials - is the way forward (see the sketch after the video link below)
• Advice for AI startup founders: it's crucial to stick with your unique insight amid market "noise"

---

The central insight from the conversation: while AI fundamentally transforms software engineering and enables significant productivity gains, the primary challenge for enterprises shifts from code generation to ensuring software quality, integrity, and governance at scale across the entire Software Development Life Cycle (SDLC). Thank you, Dedy Kredo, for sharing!
Engineering in the AI Era: Qodo Founder on the AI-Powered SDLC
https://www.youtube.com/
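The post doesn't spell out Qodo's internals, so this is only an abstract sketch of the agentic-swarm idea: specialized agents, each with a designated task and its own credential scope, composed in a developer-defined order. All names here are invented for illustration.

```python
# Abstract sketch of an agent swarm: each agent has one designated task
# and a scoped set of credentials. Illustrative only; not Qodo's design.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    task: str                                             # the one job it may do
    scopes: frozenset = field(default_factory=frozenset)  # credentials it may use

    def run(self, change: str) -> str:
        # Stand-in for a real, task-constrained model call.
        return f"[{self.name}] {self.task}: {change}"

PIPELINE = [
    Agent("generator", "draft code", frozenset({"repo:read"})),
    Agent("reviewer", "check org standards", frozenset({"repo:read", "policies:read"})),
    Agent("tester", "generate and run tests", frozenset({"repo:read", "ci:run"})),
]

def orchestrate(change: str) -> None:
    # The developer, not the model, decides the order and the boundaries.
    for agent in PIPELINE:
        print(agent.run(change))

orchestrate("add retry logic to the payment client")
```

The point of the structure is the constraint: no single agent holds all the credentials or the whole context window.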
-
Generative AI’s dirty secret... 🤫 ...the challenges that hold enterprises back.

What’s really holding enterprises back from achieving the transformative results they’ve been promised? The answer lies not in the technology itself, but in the hidden challenges companies face when trying to implement it at scale.

The Challenges of Generative AI
While the potential is huge, there are quite a few obstacles standing in the way of widespread adoption. 📊 What are businesses struggling with?
1️⃣ Messy Data (46%): AI needs clean, reliable data to perform well. If the data isn’t right, the results won’t be either.
2️⃣ Finding the Right Use Cases (46%): Businesses often don’t know where AI can make the biggest impact.
3️⃣ Trust and Responsibility (43%): Companies need strong guidelines to make sure AI is used ethically and doesn’t cause harm.
4️⃣ Data Privacy Concerns (42%): Keeping sensitive information secure while using AI is a constant worry.
5️⃣ Lack of Skills (30%+): Many teams don’t have the expertise needed to develop and manage AI systems effectively.
6️⃣ Data Literacy (25%+): Employees often don’t know how to interpret or work with the data AI relies on.
7️⃣ Resistance to Change (25%): Adopting AI means rethinking workflows, and not everyone is on board with that.
8️⃣ Outdated Systems (20%): Legacy technology can’t keep up with the demands of advanced AI tools.

How to Overcome These Challenges
Generative AI works best when companies have the right foundation: clean data, modern systems, and a team ready to embrace the change. Here’s how businesses can tackle the challenges:
1️⃣ Improve Data Quality: Make sure your data is accurate, clean, and well-organized. AI thrives on good data (see the sketch after this post).
2️⃣ Find Real Use Cases: Talk to teams across your company to figure out where AI can save time or create value.
3️⃣ Build Trust with Responsible AI: Set up rules and guidelines to ensure AI is used fairly and transparently.
4️⃣ Upskill Your Team: Invest in training programs so your team can learn how to build and manage AI systems.
5️⃣ Upgrade Technology: Move to modern, scalable systems that can handle the demands of generative AI.

Why This Matters
Generative AI isn’t just a fancy new tool—it’s a way for businesses to work smarter, solve problems faster, and drive innovation.
🔑 What you can gain:
- Better Accuracy: Clean data leads to better AI results.
- Scalability: Modern systems make it easier to grow and take on bigger AI projects.
- Faster Results: Streamlined processes mean you can see the value of AI sooner.

💡 What’s next?
- AI will become part of everyday workflows, helping teams make decisions faster.
- Cloud-based AI tools will give businesses more flexibility to innovate.
- Companies will put a bigger focus on ethical AI practices to build trust with customers and stakeholders.

The real question isn’t whether businesses will adopt generative AI—it’s how quickly they’ll embrace it to stay ahead of the curve.

♻️ Share 👍 React 💭 Comment
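Improving data quality tends to start with boring, mechanical checks. A minimal sketch using pandas (the column names and sample values are hypothetical; adapt to your own schema):

```python
# Minimal data-quality checks before feeding records into an AI pipeline.
# Column names and values are hypothetical examples.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "region": ["CA", "ca ", None, "NY"],
})

report = {
    "rows": len(df),
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
    "missing_ids": int(df["customer_id"].isna().sum()),
    "missing_regions": int(df["region"].isna().sum()),
}
print(report)  # surface the problems before the model ever sees the data

# Normalize an ambiguous free-text field instead of letting AI guess.
df["region"] = df["region"].str.strip().str.upper()
```

Nothing clever, just the groundwork that determines whether everything downstream can be trusted.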
-
Good tips on how to attain virality in LLM apps, inspired by Cursor, Replit, Bolt. Link in comments. h/t Kyle Poyar

Challenge 1: AI feels like a black box
Users hesitate to rely on AI when they don’t understand how it works. If an AI system produces results without explanation, people second-guess the accuracy. This is especially problematic in industries where transparency matters—think finance, healthcare, or developer automation.
Pro-tips:
- Show step-by-step visibility into AI processes.
- Let users ask, “Why did AI do that?”
- Use visual explanations to build trust.

Challenge 2: AI is only as good as the input — but most users don’t know what to say
AI is only as effective as the prompts it receives. The problem? Most users aren’t prompt engineers—they struggle to phrase requests in a way that gets useful results. Bad input = bad output = frustration.
Pro-tips:
- Offer pre-built templates to guide users.
- Provide multiple interaction modes (guided, manual, hybrid).
- Let AI suggest better inputs before executing an action.

Challenge 3: AI can feel passive and one-dimensional
Many AI tools feel transactional—you give an input, it spits out an answer. No sense of collaboration or iteration. The best AI experiences feel interactive.
Pro-tips:
- Design AI tools to be interactive, not just output-driven.
- Provide different modes for different types of collaboration.
- Let users refine and iterate on AI results easily.

Challenge 4: Users need to see what will happen before they can commit
Users hesitate to use AI features if they can’t predict the outcome. The fear of irreversible actions makes them cautious, slowing adoption.
Pro-tips:
- Allow users to test AI features before full commitment.
- Provide preview or undo options before executing AI changes (see the sketch after this post).
- Offer exploratory onboarding experiences to build trust.

Challenge 5: AI can feel disruptive
Poorly implemented AI feels like an extra step rather than an enhancement. AI should reduce friction, not create it.
Pro-tips:
- Provide simple accept/reject mechanisms for AI suggestions.
- Design seamless transitions between AI interactions.
- Prioritize the user’s context to avoid workflow disruptions.
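Challenges 4 and 5 converge on one pattern: preview first, explicit accept before anything irreversible runs. A minimal sketch of that loop (`propose_edit` and `apply_edit` are placeholders, not any product's API):

```python
# Preview-then-commit pattern: the AI's change is shown, and nothing
# runs until the user explicitly accepts it. Placeholders throughout.

def propose_edit(request: str) -> str:
    # Stand-in for the AI generating a proposed change.
    return f"(proposed change for: {request})"

def apply_edit(edit: str) -> None:
    print("applied:", edit)

def run_with_preview(request: str) -> None:
    edit = propose_edit(request)
    print("PREVIEW:", edit)                  # user sees the outcome first
    answer = input("accept? [y/N] ")         # simple accept/reject gate
    if answer.strip().lower() == "y":
        apply_edit(edit)
    else:
        print("discarded; nothing changed")  # rejection is always safe

if __name__ == "__main__":
    run_with_preview("rename the config field")
```

Defaulting to reject ([y/N]) is deliberate: hesitation should cost the user nothing.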
-
At IBM we sponsored a survey of 1,000+ U.S.-based enterprise AI developers to uncover the hurdles they face when working with generative AI. Here’s what we found:

𝟭/ 𝗦𝗸𝗶𝗹𝗹𝘀 𝗚𝗮𝗽𝘀: Only 24% of app developers surveyed consider themselves experts in GenAI. Fast innovation cycles and a lack of standardized development frameworks are major obstacles.

𝟮/ 𝗧𝗼𝗼𝗹 𝗢𝘃𝗲𝗿𝗹𝗼𝗮𝗱: Developers juggle between 5–15 tools (or more!) to create enterprise AI apps. Yet the most critical tool qualities - performance, flexibility, ease of use, and integration - are also the rarest.

𝟯/ 𝗧𝗿𝘂𝘀𝘁 𝗮𝗻𝗱 𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆: As enterprises explore agentic AI, trustworthiness and seamless integration with broader IT systems emerge as critical concerns.

The consequences are clear: overly complex AI stacks stall enterprise investments and slow innovation. So, what’s the solution? ⭐ SIMPLIFICATION ⭐ Developers need tools that are easy to master and that enhance productivity. At IBM, we’re focused on empowering developers with tools and strategies to cut through that complexity.

You can learn more about the survey conducted by Morning Consult here: https://lnkd.in/gXDuwTaS
IBM Blog: https://lnkd.in/gsMVMmXX
-
🚨 "Vibe coding" is the AI buzzword of the month. And it's here to stay. But can we please have a reality check. 25% of YC founders said that 95%+ of their code is AI-generated. 𝗔𝗜 𝗰𝗼𝗱𝗶𝗻𝗴 𝗮𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝘁𝘀 𝗮𝗿𝗲 𝗱𝗲𝗹𝗶𝘃𝗲𝗿𝗶𝗻𝗴 𝗿𝗲𝗮𝗹 𝘃𝗮𝗹𝘂𝗲. - They're helping developers move from 0 to 1 faster than ever. - And it's great for making prototypes and simple pieces of software. 𝗕𝘂𝘁 "𝘃𝗶𝗯𝗲 𝗰𝗼𝗱𝗶𝗻𝗴" 𝗮𝗹𝗼𝗻𝗲 𝗰𝗮𝗻’𝘁 𝗯𝘂𝗶𝗹𝗱 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲-𝗴𝗿𝗮𝗱𝗲 𝘀𝗼𝗳𝘁𝘄𝗮𝗿𝗲. 𝗜𝗳 𝘆𝗼𝘂’𝗿𝗲 𝗮𝗻 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗔𝗜 𝗹𝗲𝗮𝗱𝗲𝗿, 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱 𝘁𝗼 𝗵𝗲𝗮𝗿 𝗵𝗼𝘄: - AI-generated code may not follow your enterprises's frameworks for things like security, privacy, and governance. - Enterprises rarely start from scratch - there’s legacy code and tech debt, that needs to be refactored/modernized first. - Enterprises have multiple codebases across different repos. And AI needs to work across them. - Enterprises have their own best practices and guidelines. AI tools need to follow this, consistently. 𝗣𝗼𝗽𝘂𝗹𝗮𝗿 𝗔𝗜 "𝘃𝗶𝗯𝗲 𝗰𝗼𝗱𝗶𝗻𝗴 𝘁𝗼𝗼𝗹𝘀" 𝗵𝗮𝘃𝗲 𝘀𝗲𝘃𝗲𝗿𝗮𝗹 𝗹𝗶𝗺𝗶𝘁𝗮𝘁𝗶𝗼𝗻𝘀: - Code may not be optimized for large-scale performance. And it needs to optimized for things like cost and performance before moving to production. - Complex functionality like payments, security, or even platform thinking—is still hard for these tools. - Researchers showed that these tools can be tricked into creating unsafe code through jailbreak attempts. - Their models may be trained on mixed-quality code bases. Plus all of this needs to be integrated into workflows for real adoption and value. And this includes tools on local machines and the cloud! 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲𝘀 𝗵𝗮𝘃𝗲 𝘂𝗻𝗶𝗾𝘂𝗲 𝗻𝗲𝗲𝗱𝘀, 𝘄𝗵𝗶𝗰𝗵 𝗿𝗲𝗾𝘂𝗶𝗿𝗲 𝗺𝗼𝗿𝗲 𝘁𝗵𝗮𝗻 𝗷𝘂𝘀𝘁 𝘃𝗶𝗯𝗲𝘀. Look at tools like GitHub Copilot. Or startup solutions like Moderne, Factory, Qodo (formerly Codium) are great options to explore. 👉 Are you or your team using AI coding tools? Have they helped actually moved the needle? --- ⚡ Share this with an engineering leader who needs to see this. 👇 🔔 I post about AI innovation in the real world. Follow me, Heena Purohit, for similar posts. #EnterpriseAI #ArtificialIntelligence #AIforBusiness #GenerativeAI #AIcodingassistants #softwareengineering
-
It feels like everyone’s turning to tools like ChatGPT and Claude these days to ship code faster. I have been reviewing plenty of code lately, and I see a pattern. Yes, AI is great for speeding things up. But it also means that senior engineers and architects end up spending a lot more time going through code that was sometimes written by people who didn’t fully understand what the AI spit out. That means we have to catch bugs, fix security issues, and make sure everything actually works long-term.

Here is the new reality that we have to come to terms with:
✅ Rapid Development vs. Deep Understanding: AI tools can produce working code in minutes, but they can also obscure the underlying logic, leaving developers less engaged with the "why" behind the code.
✅ Increased Burden on Senior Engineers and Architects: With teams relying heavily on AI, code reviews have become more critical. It's now common for senior engineers to dive into code they might not have written, ensuring it meets quality and security standards.
✅ Risks of Over-Reliance: When the basics of coding are sidelined in favor of quick fixes, the long-term maintainability of our software is at stake. The art of understanding code deeply should remain a core skill, even in an AI-assisted era.

We all love the boost AI gives us, but let’s make sure that we are still building things that last.

#AI #Code #SoftwareEngineering
-
AI models are increasingly handling coding tasks. Like many, I assumed this would naturally lead to more energy-efficient code, with AI optimizing and avoiding anti-patterns. But new research reveals a paradox: AI-generated code often consumes significantly more energy than human-written code. A study on LeetCode problems found AI solutions consistently used more energy, with the gap widening for harder challenges – sometimes up to 8.2x the energy of human code.

Why is this a major climate problem, especially as we rely on AI for sustainability?

The paradox of AI efficiency: We expect AI to optimize, but its current focus seems to be on functional correctness or generation speed, not deep energy efficiency. This means AI code can be functionally sound but computationally heavy.

A scaled problem: Every line of code, whether on a local machine or in a vast data center, requires electricity. If AI is generating code that's dramatically less efficient, the cumulative energy demand skyrockets as AI coding becomes ubiquitous.

The bottom line: Inefficient code demands more processing power, longer run times, and higher energy consumption in data centers. Data centers already consumed around 1.5% of the world's electricity (415 TWh) in 2024, and their demand is projected to grow four times faster than total electricity consumption. Inefficient AI code directly exacerbates this growth, potentially undermining any 'climate gains' from AI tooling.

I genuinely believe AI can advance our sustainability targets faster, more cost-efficiently, and with better precision. However, if its outputs are inherently energy-intensive, it creates a self-defeating loop: we're increasing our carbon footprint through the very tools meant to accelerate efficiency. Going forward, we must integrate energy efficiency as a core metric in training and evaluating AI coding models, prioritizing lean, optimized code (a toy measurement sketch follows after this post).

Kudos to pioneers like Hugging Face and Salesforce, with their energy index for AI models, and Orange for championing Frugal AI. And big thanks to the research team for looking beyond the hype: Md Arman Islam, Devi Varaprasad J., Ritika Rekhi, Pratik Pokharel, Sai Siddharth Cilamkoti, Asif Imran, Tevfik Kosar, Bekir Oguzhan Turkkan.

[Post 1/2 on a reality check for AI's effectiveness and efficiency]
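A toy illustration of treating efficiency as a metric, using runtime as a crude proxy for energy (real measurement would use hardware counters such as RAPL): two functionally equivalent solutions, timed side by side.

```python
# Crude proxy: time two equivalent solutions. Runtime is not energy,
# but on the same machine less compute generally means less energy.
import timeit

def pairs_quadratic(nums, target):
    # O(n^2): correct but computationally heavy
    return [(a, b) for i, a in enumerate(nums)
            for b in nums[i + 1:] if a + b == target]

def pairs_linear(nums, target):
    # O(n): same pairs, one pass with a seen-set
    seen, out = set(), []
    for b in nums:
        if target - b in seen:
            out.append((target - b, b))
        seen.add(b)
    return out

nums = list(range(2000))
for fn in (pairs_quadratic, pairs_linear):
    t = timeit.timeit(lambda f=fn: f(nums, 1999), number=10)
    print(f"{fn.__name__}: {t:.3f}s")
```

An eval that scored this difference, not just correctness, is what 'energy efficiency as a core metric' would look like in practice.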
-
🚨 Weaponizing AI Code Assistants: A New Era of Supply Chain Attacks 🚨

AI coding assistants like GitHub Copilot and Cursor have become critical infrastructure in software development—widely adopted and deeply trusted. With the rise of “vibe coding,” much of modern software is now written by copilots and AI, and developers inherently trust the outputs without validating them. But what happens when that trust is exploited?

Pillar Security has uncovered a Rules File Backdoor attack, demonstrating how attackers can manipulate AI-generated code through poisoned rule files—malicious configuration files that guide AI behavior. This isn't just another injection attack; it's a paradigm shift in how AI itself becomes an attack vector.

Key takeaways:
🔹 Invisible Infiltration – Malicious rule files blend seamlessly into AI-generated code, evading manual review and security scans.
🔹 Automation Bias – Developers inherently trust AI suggestions without verifying them, increasing the risk of undetected vulnerabilities.
🔹 Long-Term Persistence – Once embedded, these poisoned rules can survive project forking and propagate supply chain attacks downstream.
🔹 Data Exfiltration – AI can be manipulated to "helpfully" insert backdoors that leak environment variables, credentials, and sensitive user data.

This research highlights the growing risks of vibe coding—where AI-generated code dominates development yet often lacks thorough validation or controls. As AI continues shaping the future of software engineering, we must rethink our security models to account for AI as both an asset and a potential liability.

How is your team addressing AI supply chain risks? Let’s discuss. https://lnkd.in/eUGhD-KF

#cybersecurity #AI #supplychainsecurity #appsec #vibecoding
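The research describes payloads hidden from human reviewers inside rule files. As one defensive layer (a pattern sketch, not Pillar Security's tooling), rule and config files can be scanned for zero-width and bidirectional-control characters before they're trusted:

```python
# Minimal sketch: flag invisible/bidi Unicode in AI rule files before
# trusting them. One layer of defense, not a complete one.
import pathlib
import sys

SUSPECT = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2066": "LEFT-TO-RIGHT ISOLATE",
    "\u2067": "RIGHT-TO-LEFT ISOLATE",
    "\u202e": "RIGHT-TO-LEFT OVERRIDE",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE (BOM)",
}

def scan(path: pathlib.Path) -> list:
    hits = []
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for ch, name in SUSPECT.items():
            if ch in line:
                hits.append((str(path), lineno, name))
    return hits

if __name__ == "__main__":
    findings = []
    for arg in sys.argv[1:]:            # e.g. python scan.py .cursorrules
        findings.extend(scan(pathlib.Path(arg)))
    for path, lineno, name in findings:
        print(f"{path}:{lineno}: contains {name}")
    sys.exit(1 if findings else 0)
```

Running a check like this in CI against rule files (e.g. `.cursorrules`) makes it harder for poisoned instructions to ride in silently.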