AI isn’t scaring markets. Uncertainty about execution is.

If you zoom out from stock charts and look at enterprise data, the pattern is consistent across major 2025–26 reports:

• Only 12% of 4,454 CEOs say AI has delivered both cost and revenue impact (PwC).
• Just 11% of organisations have agentic AI in production (Deloitte).
• 56% of US firms say technical debt is blocking new investment (KPMG).
• 10–25% EBITDA gains are achievable — but only when governance and workflows are redesigned before tool deployment (Bain).
• Firms concentrating on fewer, high-impact initiatives outperform broad AI portfolios (BCG).

That’s not a bubble. That’s an execution bottleneck wearing a strategy costume.

Markets are pricing future AI productivity at scale. Meanwhile, inside many enterprises:

• Annual funding cycles still rule
• Data ownership is a contact sport
• Governance has more layers than enterprise lasagna
• And automation is being politely placed on top of broken workflows

Volatility isn’t about whether AI matters. It’s about whether organisations can convert AI enthusiasm into disciplined operating change. The companies that simplify execution, reduce technical debt, and redesign work before scaling tools will compound advantage. Everyone else will continue to produce very sophisticated pilots.

The 2026 planning question isn’t: “How much are we investing in AI?”
It’s: “Have we re-architected the system that’s supposed to absorb it?”
AI adoption hindered by execution uncertainty
McKinsey’s latest “State of AI” report highlights something interesting. The companies seeing the biggest returns from AI aren’t necessarily the ones building the most models. They’re the ones breaking down silos.

According to the report, organizations capturing the most value from AI are significantly more likely to have cross-functional teams working across data, engineering, product, and business units, rather than running AI as an isolated technical initiative. That may sound obvious, but in practice it’s still one of the biggest barriers.

In many banks I speak with, the pieces needed for AI already exist somewhere in the organization:
• data teams managing pipelines
• product teams defining customer journeys
• engineering teams building platforms
• analytics teams modeling behavior
• business teams defining outcomes

But they often operate in parallel instead of together. AI struggles in those environments because the models don’t see the full picture. The organizations McKinsey describes as “AI leaders” have largely solved this by structuring teams around business outcomes instead of organizational boundaries. Data, technology, and domain expertise move together.

In financial services this matters even more. Delivering meaningful financial insights requires combining transaction data, behavioral modeling, customer context, and real-time decisioning. No single team owns all of that. The real work becomes connecting those capabilities.

In my experience, the most successful AI deployments in banking happen when the conversation shifts from:
“What model should we build?”
to
“What customer outcome are we solving, and which teams need to solve it together?”

Once that alignment happens, the technical work tends to move much faster.

Curious how others are seeing this play out. Are silos still the biggest blocker to scaling AI in your organization, or are there other challenges proving harder to solve?
This is the entire #GCC, Stuart Mitchell MiOD MCMI ChMC, especially #Dubai organisations. Why? Almost every single executive is still missing this.

AI does not magically remove work. It changes where work sits, who does it, and how fast poor operating design gets exposed. That is why “productivity gains” so often fail to show up in the numbers.

Most organisations here in #Dubai treat AI as a tool rollout and measure evals as success metrics. That is what failure looks like: it has nothing to do with margin expansion or revenue gains, and it is more likely to shrink the business moat 10x. Egos keep the focus off people. They do not redesign how:
- knowledge,
- judgement,
- time
move through the business.

If you automate a bad workflow, you do not create transformation. You create:
- faster escalation,
- more exceptions,
- more governance overhead,
- a bigger gap between leadership expectation and frontline reality.

The real opportunity is not task speed. It is building a new scaling model:
- capture institutional knowledge before it leaves,
- turn expert judgement into repeatable systems,
- free people to work at the level of decision-making, not administration.

The firms in #Dubai that understand this will not measure AI by whether someone saved an hour. They will measure whether the organisation can scale expertise without scaling headcount at the same rate. That is the real shift. That is first-mover advantage.

#dubai #GCC #ai #genai #leadership #digitaltransformation #business #futurism #futureofwork #innovation #openai
Chief Transformation Officer | Brought in to restore control, margin & growth in PE-backed and regulated financial services | AI-Enabled Transformation | Execution Operator
We were told AI would reduce the workload. For most organisations, it hasn’t. Here’s why.

When you automate a process, you make it faster. You don’t make it smaller. The same decisions still get made. The same exceptions still occur. And the oversight that was always required is still required. The work compresses. It doesn’t disappear.

Most organisations are measuring AI on productivity. And finding out it delivered speed instead. JP Morgan. Bank of America. Goldman Sachs. Tens of billions committed in 2026. Goldman’s own research found no meaningful productivity impact. Billions in. Essentially zero out.

The problem isn’t execution. It’s sequencing. The 2010s told us this already. When banks digitised distribution, front-office costs fell. Total costs didn’t. The work reappeared in compliance, in exceptions, in technology overhead. Cost didn’t leave. It moved.

AI is following the same pattern. Because AI is a multiplier, not a fix. It multiplies what’s already there. Efficient processes get more efficient. Broken ones get faster at being broken.

The organisations closing the gap are not the ones spending the most on AI. They are the ones who redesigned the operating model before deploying it. Most haven’t done that. Not because they lack the technology. Because nobody has the mandate to touch the model itself.
Very interesting, and personally I couldn’t agree more 👏🏻 AI is accelerating work, not removing it. If the value stream and data foundations aren’t redesigned first, you just move the effort downstream. The real unlock isn’t the tech; it’s the operating model and data behind it. Most data isn’t ready: it’s inconsistent, duplicated, and fragmented across systems, so AI just scales the noise. Clean, structured data is the prerequisite, not the afterthought. There’s a huge opportunity here for those deliberate enough to get the foundations right before deploying. AI will change the landscape, but only if we do the “non-sexy” work first! Food for thought... 💭 #transformation #ai #operatingmodel
We should be past the testing phase. AI initiatives should now be delivering measurable business impact, and the organizations that will pull ahead are the ones treating their AI portfolio with the same discipline they apply to any other strategic investment. This post from Angela Zutavern captures exactly that thinking, and the Lloyds Banking Group example is worth your attention. Exceptional Women Alliance
Once you’ve tied AI into your business strategy, the next step is to build, manage, and evaluate AI initiatives with the same discipline you’d apply to your stock portfolio. A recent HBR article makes a good case for this. Lloyds Banking Group built a “GenAI Control Tower” that scores initiatives, stage-gates them through development, and rebalances quarterly. I like the stage-gate concept, though I’d recommend figuring out whether something will scale early, before you prototype.

A portfolio approach also means evaluating initiatives in relation to one another. When you do that, you find synergies and reusable building blocks you’d miss evaluating them one at a time. When I’ve used this approach, three categories work well:

• Game Changing: big bets, longer time horizon, needs dedicated support
• Targeted Solutions: focused scope, clear value, could bubble up from experimentation or ideas from practitioners
• Everyday: low cost, quick wins that build organizational muscle and momentum

Each needs different governance, funding, and success metrics. Prioritization should go beyond business value and ROI to include trust, change management, and execution. The technology side of AI keeps getting easier. The organizational discipline to manage it well is what separates companies running AI from companies getting value from it. https://lnkd.in/gzrxXJw8
AI is no longer sitting beside your business systems. It’s being built into the engine itself. And that changes everything about how you evaluate your next technology investment.

A year ago, AI was the assistant you opened in another tab. You’d copy in some data, get a summary, paste it back. Useful. But disconnected. That era is ending.

In 2026, the systems running your sales pipeline, your supply chain, your financial planning — they’re embedding AI directly into the transaction layer. Not as a reporting tool. As the decision-making layer. Pricing that adjusts in real time. Inventory routing that responds before a human sees the problem. Forecasts that update as conditions change — not quarterly.

I’ve seen teams cut planning cycles from 3 weeks to 3 days. Not by adding an AI tool. By replacing a rigid process with one that thinks.

THE RISK most leaders aren’t pricing in: if your competitors’ core systems are making faster, better-informed decisions automatically — and yours aren’t — the gap compounds quietly. Until it doesn’t.

4 questions to pressure-test your current stack:
1. Which of your core systems still require manual data hand-offs?
2. Where do humans spend time formatting data instead of acting on it?
3. Is AI in your roadmap, or already in your transaction layer?
4. What decisions in your business happen too slowly because of system lag?

EMBEDDED AI is not an upgrade. It’s a re-architecture. Which core business system in your org is most overdue for this shift?
There's a case study making the rounds about a company that saved $60 million by using AI agents to resolve support tickets five times faster. Sounds like a win. But they nearly destroyed their brand in the process. The agents weren't broken. They were just aimed at the wrong thing.

This is what's starting to be called the "intent gap": the distance between what a company says it values and what its AI systems are actually optimized to do. Closing that gap is harder than it sounds, because most organizational values don't exist anywhere in writing. They live in how a senior employee handles an ambiguous situation, or what a manager says in a hallway conversation. Agents can't absorb culture through osmosis. They need explicit logic before they start working.

The practical consequence is that someone in every AI-enabled organization needs to do work that most job descriptions don't cover yet: extracting tacit knowledge from experienced operators and translating it into decision boundaries agents can follow. Not "be customer-obsessed", but specifically what an agent should do when a customer's request conflicts with company policy, and which one wins under what conditions.

This is also why the "frontier model vs. mid-tier model" debate is becoming less interesting. A company with a clear operational philosophy encoded into its AI infrastructure will consistently outperform one running smarter models against fragmented, unaligned intent. The model is almost beside the point if the goal structure underneath it is incoherent.

For supply chain specifically, where decisions involve real trade-offs between cost, resilience, relationships, and timing, this isn't abstract. I help supply chain teams move from headcount-driven operations to intelligence-throughput operations by turning AI spend into shipped tools that reduce exceptions, cycle time, and coordination cost.
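To make "explicit decision boundaries" concrete, here is a minimal sketch of what encoding a policy-versus-relationship trade-off might look like. Everything in it is an illustrative assumption (the `Ticket` fields, the 10x lifetime-value threshold, the action names); it is not from the case study or any real framework. The point is only that the rule an experienced operator applies implicitly gets written down as logic an agent can follow and a reviewer can audit.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    refund_requested: float  # amount the customer is asking for
    policy_limit: float      # maximum the written policy allows
    lifetime_value: float    # estimated customer lifetime value

def decide(ticket: Ticket) -> str:
    """Return the action an agent should take when a request may
    conflict with policy, with the trade-off made explicit."""
    if ticket.refund_requested <= ticket.policy_limit:
        return "auto_approve"        # no conflict: policy covers it
    if ticket.lifetime_value >= 10 * ticket.refund_requested:
        return "approve_with_note"   # relationship wins (assumed 10x threshold)
    return "escalate_to_human"       # policy wins; a human judges the exception
```

A table of rules like this is crude, but it is reviewable: leadership can argue about the threshold instead of discovering, after the fact, what the agent was silently optimizing for.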
Three forces are shaping how AI is actually showing up in organizations right now. All three are moving at once, which is why a lot of smart teams feel like they’re running just to stay in place.

1. Agentic workflows are moving into the core
Current enterprise reports assume AI agents will sit inside core systems — ERP, CRM, service, finance — coordinating real work, not just answering questions at the edge. The real design problem isn’t “can we deploy agents?” but “where do they have decision rights, how do humans override them, and how do we see failure early enough to contain it?”

2. Governance is turning into an operating constraint
Regulation and policy are tightening: EU timelines, a proposed U.S. national AI framework, and new state-level laws all point to higher expectations on documentation, oversight, and risk management. The language is legal, but the work is operational — building audit trails, clear ownership, and escalation paths around AI decisions so you can produce evidence on demand.

3. Capital is pricing in AI discipline, not AI theater
On the capital side, private equity and other investors are being told to treat AI as part of underwriting and value-creation, not as a late-cycle experiment. The firms that will pull away are the ones building repeatable AI operating models they can apply across portfolio companies, rather than a new “AI initiative” for each asset.

If you’re responsible for a P&L or a portfolio, the pattern is pretty clear: AI is becoming a governed workforce inside your systems, and the differentiator is how you design decision rights, accountability, and evidence around that workforce.
For CFOs who’ve successfully implemented AI within the finance function, the conversation around the technology has shifted. After initial gains, some organizations are seeing their results hit a plateau. Across industries, finance leaders are seeing a troubling pattern: some early AI pilots delivered promising efficiency gains, but cost structures look largely unchanged months later. Meanwhile, competitors are quietly converting AI into durable margin expansion. This gap represents a growing competitive risk. #CFO #finance #AI #technology #automation #growth #risk https://lnkd.in/eWxvuMac
Most organizations experimented with AI agents in 2025. In 2026, the real question is: who will actually scale them into real business outcomes?

Over the past year, AI moved from helping people do work faster to helping organizations run work differently. And the companies that are winning right now are building around six core capabilities that turn agents from experiments into infrastructure. Here’s what that looks like.

1️⃣ Anyone can turn intent into agents
Building agents no longer requires deep technical expertise. People can now describe what they want in natural language and create agents that automate real work. Sales leaders. Operations managers. HR teams. The barrier between ideas and execution is collapsing.

2️⃣ Agents that own workflows end-to-end
Early AI mostly helped with tasks. Now agents can run entire processes — triggering actions, validating information, routing approvals, and escalating only when human judgment is needed. Less coordination. More decision-making.

3️⃣ Multi-agent systems working together
Complex problems rarely live in one system. Modern agent frameworks allow specialized agents to collaborate — monitoring signals, gathering information, analyzing context, and executing actions together. Just like real teams do.

4️⃣ Flexible models for different jobs
Not every task requires the same model. Organizations now choose different models for:
• deep reasoning
• cost efficiency
• regulatory requirements
• specialized tasks
That flexibility matters at enterprise scale.

5️⃣ Agents that act across real systems
The biggest limitation of early AI? It could suggest what to do — but not actually do it. Now agents can update systems, trigger workflows, fill out forms, and move work forward automatically. The gap between insight and action is shrinking.

6️⃣ Enterprise governance and control
Innovation without oversight creates chaos. Modern platforms now allow organizations to track:
• which agents exist
• who uses them
• performance and cost
• security and compliance
This is what allows agents to move from pilot projects to production infrastructure.

The biggest shift in 2026 is operationalization. The organizations that win will build systems where people and agents work together to run the business. Where are AI agents already running real workflows inside your organization today — and where are they still stuck in experimentation? https://msft.it/6042QgNcg
More from this author
• The Hidden Cost of AI: The Work No One Is Measuring (Prashanth Rao, 2d)
• AI Governance Isn’t Broken. It’s Not Operational. (Prashanth Rao, 3d)
• Sora’s Silence: Not a Retreat, but a Reality Check for Generative Video (Prashanth Rao, 4d)