‼️ The AI value is there. You're just not measuring it.

This LSE report should be on every CDAO and CIO's radar. 🫨 It shows that, on average, employees using AI are saving 7.5 hours per week, the equivalent of a full working day. That's roughly £14,000 per employee per year in productivity gains.

And yet we're also seeing lots of messaging saying "AI isn't delivering value." We think that in most cases it is. It's just not being measured. This is one of the most pressing, and underappreciated, responsibilities sitting on a CDAO's desk right now.

Before you can prove ROI on AI, you need a baseline. You need to know what "before" looked like: how long tasks took, where bottlenecks lived, what your people were actually spending their time on. Without that foundation, AI impact is based on gut feel.

The LSE research makes the measurement gap even starker. ‼️ Employees with AI training save 11 hours per week. Those without? Just 5! That's a 2x productivity difference that would be completely invisible in organisations not tracking the right signals.

So if you're not capturing that delta, leadership will draw their own conclusions. And those will almost certainly undervalue what's actually happening.

The "AI isn't working" narrative often isn't a story about AI. It's a story about data maturity. Organisations that can't baseline can't benchmark. Organisations that can't benchmark can't tell the story of value. And when you can't tell the story of value, budgets get cut, programmes stall, and momentum dies, even when the underlying impact is real.

❓ So, what does good look like? It starts with CDAOs treating productivity measurement as a strategic priority, not an afterthought. That means establishing clear pre-AI baselines across key workflows, defining the metrics that matter (time saved, error rates, decision speed, output quality), and building the data infrastructure to track them consistently over time.
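The headline arithmetic is easy to sanity-check, and it is the same arithmetic a baseline has to support. A minimal sketch in Python; the hourly cost and working weeks are illustrative assumptions of ours, not figures from the LSE report:

```python
# Back-of-envelope annualised value of AI time savings.
# ASSUMPTIONS (ours, not the LSE report's): a fully loaded cost of
# ~£40/hour and 46 working weeks per year.
HOURLY_COST_GBP = 40
WORKING_WEEKS_PER_YEAR = 46

def annual_value(hours_saved_per_week: float) -> float:
    """Annualised productivity value (GBP) of weekly hours saved."""
    return hours_saved_per_week * WORKING_WEEKS_PER_YEAR * HOURLY_COST_GBP

print(f"average employee: £{annual_value(7.5):,.0f}")  # ≈ £13,800, close to the quoted £14,000
print(f"with AI training: £{annual_value(11):,.0f}")
print(f"without training: £{annual_value(5):,.0f}")
print(f"training delta:   £{annual_value(11) - annual_value(5):,.0f} per employee per year")
```

The point is not the exact figures but the shape of the measurement: without a per-workflow baseline for hours saved, none of these numbers can be computed for your own organisation.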
👉 What's your approach to baselining AI value in your organisation? We'd love to hear what's working in the comments.

🔗 LSE Report: AI boosts productivity by the equivalent of one workday per week: https://lnkd.in/eZRQRBeW

#DataLeadership #CDAO #AIStrategy #DataLiteracy #AIValue #DataDrivenLeadership
CDAOs: Measure AI Value to Avoid Misconceptions
More Relevant Posts
-
AI Doesn't Reduce Work: It Intensifies It

One of the more rigorous pieces I've read on how #ai is actually reshaping work: not in theory, but observed up close.

Researchers from the Haas School of Business at the University of California, Berkeley spent 8 months embedded inside a 200-person US tech company: in-person observation twice a week, tracking of internal comms, and 40+ deep interviews across engineering, product, design, research & ops.

Their central finding: AI tools didn't reduce work. They intensified it. And not because management demanded more output. Workers voluntarily expanded what they did, when they did it & how much they juggled, because AI made "doing more" feel possible, accessible & often genuinely rewarding.

3 distinct patterns of intensification emerged:

1. Task expansion. AI filled knowledge gaps, making unfamiliar work feel approachable. Product managers began writing code. Researchers took on engineering tasks. People absorbed work that would previously have been outsourced or deferred. Meanwhile, engineers quietly became reviewers & coaches for colleagues' AI-assisted output: an unplanned workload that surfaced in Slack threads & desk-side conversations, not on any task board.

2. Disappearing boundaries. AI reduced the friction of starting any task to near zero. Workers slipped prompts into lunch breaks, meetings, idle moments. The conversational nature of prompting made it feel more like chatting than working. By the time people noticed, their downtime had stopped being restorative.

3. The multitasking trap. Workers ran parallel AI threads: coding manually while AI generated alternatives, spinning up multiple agents, reviving deferred tasks "in the background." It felt like productive momentum. It was continuous context-switching & growing cognitive load.

The researchers identified a self-reinforcing cycle at the heart of all three: AI accelerates tasks → speed expectations normalize → reliance deepens → scope widens → intensity compounds.
As one engineer put it: you thought you'd save time & work less. But you don't work less.

What makes this study worth sitting with is the nuance. This isn't an anti-AI argument. The researchers acknowledge that workers genuinely felt empowered & productive. The problem is that the intensification is voluntary & invisible, which makes it nearly impossible for leadership to detect until burnout, quality erosion & turnover are already underway.

Their prescription, an "AI Practice":

→ Intentional pauses built into the workflow to prevent unchecked acceleration
→ Sequencing norms that let work advance in coherent phases rather than continuous reaction to AI outputs
→ Protected time for human dialogue & connection to counter the isolating, individualizing pull of solo AI-mediated work

The closing line: without intention, AI makes it easier to do more, but harder to stop.

Worth a careful read for anyone leading teams through AI adoption.

https://lnkd.in/g8_XnfvX (Harvard Business Review)
-
#AI is #not #reducing knowledge #work. It is redesigning it.

The dominant narrative suggests that AI will primarily function as a labour-saving technology, automating routine cognition and freeing time for higher-value tasks. Emerging empirical work, including recent research discussed by #Harvard #Business #School, points to a more complex reality. In many organisational settings, #AI does not simply compress work. It #reconfigures its #structure and #density.

#What we are beginning to observe

Across early AI-adopting environments, three dynamics are becoming increasingly visible.

#First, capability expansion. As technical and cognitive barriers fall, professionals rarely just finish faster. They often take on tasks that previously sat outside their remit, quietly enlarging the functional perimeter of their roles.

#Second, temporal diffusion of work. Because interaction with AI is friction-light and conversational, micro-tasks increasingly spill into moments that once served as cognitive recovery windows. The workday becomes less bounded and more continuous.

#Third, parallel cognitive threading. AI enables multiple work streams to run simultaneously. Throughput may increase, but so does sustained attention load and context switching. Individually manageable. Systemically compounding.

#Why this matters for policy and institutional leaders

The medium-term risk is not immediate overload. It is silent work densification. Historically, many general-purpose technologies have followed a similar trajectory: initial productivity gains, followed by rising performance expectations, and only later organisational re-equilibration. There are early signals that AI-augmented knowledge work may be entering this phase. If so, the key challenge is no longer adoption speed alone, but cognitive sustainability at scale.
The #missing layer: work design

Most organisations are currently investing heavily in AI capabilities while under-investing in workflow architecture and tempo governance. Three questions deserve far more explicit attention:

• How do we detect when productivity gains are masking workload inflation?
• What protects decision quality under persistent acceleration?
• Where should organisations deliberately re-introduce friction into AI-mediated workflows?

These are not anti-AI questions. They are system maturity questions.

#Europe has an opportunity to lead not only in trustworthy AI, but in sustainable AI-augmented work design. If we focus only on capability, we risk building faster systems that gradually erode human bandwidth. If we also redesign work intentionally, AI can become what it promises to be: a genuine amplifier of human judgment, not just of human throughput.

Are #you observing genuine workload relief from AI, or a gradual densification of knowledge work?

#AI #FutureOfWork #AIGovernance #EUInnovation #DigitalTransformation #WorkDesign #ResearchPolicy
-
The Trust Deficit: Why Your AI is Only As Good As Your Organization's Relationships

Here's a story that haunts me: a healthcare AI system predicted patient readmission risk with 89% accuracy. Doctors ignored it.

Not because the AI was wrong. Because they didn't understand how it worked. They didn't trust it. And critically, they weren't involved in building it.

AI doesn't fail because of bad algorithms. It fails because of broken trust. And trust isn't a technical problem. It's a human one.

I'm seeing three trust gaps that kill AI initiatives:

The transparency gap. Teams deploy black-box models and wonder why business users won't act on the recommendations. If you can't explain why the AI made a decision, don't expect humans to bet their reputation on it.

The collaboration gap. Data scientists build models in isolation. Business teams get the output with no context. No shared understanding. No partnership. Just a fancy Excel file they don't trust.

The feedback gap. AI makes a recommendation. A human overrides it. That signal never makes it back to the model. The system can't learn. Trust erodes further.

The organizations building trusted AI are doing this differently:

They're co-creating solutions with end users from day one. Not gathering requirements in a conference room. Actually building together. Prototyping together. Failing together.

They're prioritizing explainability over marginal accuracy gains. An 85% accurate model that people understand and trust will outperform a 92% accurate black box that sits unused.

They're building feedback loops everywhere. When humans override AI, they capture the reasoning. When AI gets it wrong, they understand why. The system learns. Trust grows.

They're having honest conversations about limitations. "Here's what this AI can do well. Here's where it struggles. Here's when you should trust it and when you shouldn't."

Here's the thing about trust: you can't mandate it. You can't buy it. You can't automate it.
You have to earn it. Through transparency. Through collaboration. Through delivering value consistently over time.

The most sophisticated AI in the world is worthless if humans don't trust it enough to use it.

How are you building trust in your AI systems?

#AITrust #ResponsibleAI #AIEthics #ChangeManagement #AITransformation #HumanCenteredAI #ExplainableAI #TechLeadership #Innovation #Collaboration #DigitalTransformation #OrganizationalTrust #ArtificialIntelligence #TechStrategy #Leadership
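Of the three gaps, the feedback gap is the most mechanical to close. A minimal sketch of an override log, with all names and fields hypothetical, that keeps the human's reasoning next to the AI's recommendation so overrides become a learning signal instead of lost information:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One human override of an AI recommendation, with the reasoning kept."""
    model_id: str
    ai_recommendation: str
    human_decision: str
    reason: str  # free-text reasoning, captured at the moment of override
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class OverrideLog:
    """In-memory stand-in for whatever store feeds your evaluation pipeline."""
    def __init__(self) -> None:
        self._records: list[OverrideRecord] = []

    def record(self, rec: OverrideRecord) -> None:
        self._records.append(rec)

    def override_rate(self, model_id: str, total_recommendations: int) -> float:
        """Share of a model's recommendations that humans overrode."""
        overrides = sum(1 for r in self._records if r.model_id == model_id)
        return overrides / total_recommendations

log = OverrideLog()
log.record(OverrideRecord(
    model_id="readmission-risk-v3",
    ai_recommendation="flag: high readmission risk",
    human_decision="no flag",
    reason="Patient discharged to monitored care; model unaware of care setting.",
))
print(log.override_rate("readmission-risk-v3", total_recommendations=50))  # 0.02
```

In practice the log would feed a retraining or review pipeline; the design choice that matters is that the reason is captured when the override happens, not reconstructed later.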
-
The Evolution of Work: From Instructions to Judgment in the Age of AI

For generations, the workplace was defined by following instructions. Success was measured by the ability to execute tasks correctly and consistently. Employees who mastered processes and adhered to established protocols were considered invaluable assets.

As industries matured and information became more accessible, the paradigm shifted. Work evolved towards knowing the answers. Expertise, experience, and becoming "the person who knew" became the ultimate competitive advantage. Professionals invested heavily in acquiring specialized knowledge, and organizations rewarded those who could provide definitive solutions.

However, we are now entering a profound third phase, fundamentally reshaped by the rapid advancements in Artificial Intelligence. As AI systems increasingly take over tasks such as drafting, organizing, and analysis, the core value of human work is transforming. The new frontier is deciding what matters.

AI's proficiency in processing vast amounts of data and identifying patterns means that the rote application of knowledge is becoming automated. The critical human contribution is no longer just about having the information, but about applying judgment: discerning what is truly important, identifying subtle risks, and making nuanced decisions in situations where clear rules or precedents are absent.

This shift presents a significant challenge for organizations. While most companies have well-established frameworks for training individuals in task execution and knowledge acquisition, far fewer possess effective strategies for cultivating judgment. How do you deliberately build the capacity to make sound calls when the path isn't clear? How do you foster the ability to spot emerging issues that AI might miss or misinterpret?

AI is not just changing how we work; it's forcing us to redefine what work truly means for humans.
It's not a question of whether people still matter, but rather what kind of thinking work now demands from them. The future belongs to those who can master the art of judgment, leveraging AI as a powerful co-pilot rather than a replacement for critical human discernment.

Cultivating Judgment in the AI Era

To thrive in this new landscape, organizations and individuals must focus on developing:

• Critical Thinking and Nuance: Moving beyond surface-level analysis to understand underlying contexts and implications.
• Ethical Reasoning: Navigating complex situations with a strong moral compass, especially when AI outputs present dilemmas.
• Strategic Foresight: Anticipating future trends and potential disruptions, rather than merely reacting to current data.
• Adaptive Decision-Making: Comfortably making choices in ambiguous environments and learning from outcomes.

#AI #FutureOfWork #Leadership #Judgment #HumanSkills #WorkplaceEvolution
-
The statement oversimplifies AI's role by ignoring human strengths in creativity and adaptability, while current trends favor hybrid AI-human models for sustainable success. Pure AI systems excel in scale but falter in nuanced judgment.

Core Flaw: The "human computers" analogy fits narrow, repetitive calculation tasks but fails for modern knowledge work. Spreadsheets replaced manual math because it was deterministic; most corporate functions involve ambiguity that AI still mishandles without human oversight. Humans in the loop prevent AI errors, such as hallucinations or ethical lapses.

Strengths of Pure AI/Robotics: Pure AI firms gain massive efficiency in high-volume, rule-based operations, slashing costs and scaling instantly. Robotics handles physical repetition tirelessly, as in manufacturing, where initial investments yield long-term savings over human labor. This aligns with trends like Kai-Fu Lee's prediction of 40% job displacement in routine roles.

Weaknesses Exposed: AI lacks empathy, intuition, and contextual adaptability, critical for leadership or crisis response, areas where "humans in the loop" enhance outcomes. High setup costs, maintenance, and brittleness to novel scenarios (e.g., black swan events) hinder pure systems. McKinsey data shows only 5% of jobs fully automatable, with augmentation dominant.

Opportunities in Hybrids: Trends emphasize AI augmenting humans for innovation and agility, redeploying workers to high-value tasks like problem-solving. This drives employee engagement and customer satisfaction, key ROI metrics beyond cost-cutting. Forward-thinking firms reskill staff, positioning for AI-human synergy in creative fields.

Threats to Pure AI Model: Economic backlash looms: mass job loss without income redistribution craters consumer demand, tanking revenues, as #Reddit discussions highlight in automation paradoxes. Regulatory hurdles (e.g., AI ethics laws) and talent shortages for oversight favor human-centric firms.
Pure AI risks short-term wins but long-term market reshaping by adaptable hybrids. Pure #AI outperforms in silos, but hybrids dominate holistically, mirroring how computers amplified rather than erased human roles.

Examples:
- Hybrid AI-human companies integrate AI for efficiency while leveraging human judgment for complex decisions, outperforming pure AI models in adaptability and customer trust.
- #IBM Watson Health pairs AI diagnostics with oncologist oversight at Memorial Sloan Kettering Cancer Center.
- #Amazon deploys AI chatbots for routine customer queries but escalates nuanced issues to human agents.
- #Google's search algorithms use AI for relevance ranking, augmented by human moderators for safety and edge cases.
- #Volvo incorporates human feedback into self-driving AI development for ethical dilemmas and rare scenarios.

These firms demonstrate hybrid superiority, aligning with trends where 70% of executives prioritize human-AI collaboration for innovation over full automation.
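The escalation pattern running through these examples reduces to a single routing decision. A hedged sketch of it in Python; the thresholds, topics, and names are illustrative, not any vendor's actual system:

```python
from dataclasses import dataclass

@dataclass
class AIAnswer:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    topic: str

# Topics where nuance or ethics demand a human regardless of confidence
# (hypothetical list; tuned per deployment).
ALWAYS_ESCALATE = {"billing dispute", "medical", "legal", "complaint"}
CONFIDENCE_FLOOR = 0.85  # illustrative threshold

def route(answer: AIAnswer) -> str:
    """Decide whether the AI's answer ships directly or a human takes over."""
    if answer.topic in ALWAYS_ESCALATE:
        return "human"
    if answer.confidence < CONFIDENCE_FLOOR:
        return "human"
    return "ai"

print(route(AIAnswer("Your order ships Tuesday.", 0.97, "order status")))       # ai
print(route(AIAnswer("Refund denied per policy 4.2.", 0.99, "billing dispute")))  # human
```

The two levers (the always-escalate list and the confidence floor) are exactly where human judgment about nuance gets encoded into a hybrid system.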
-
AI is accelerating work everywhere. Read the full article here: https://lnkd.in/gHdkY3sg

But here's the question: is AI improving productivity, or just increasing output?

New research highlights five hidden forces that can quietly undermine AI productivity gains, from cognitive offloading and pseudo-work to workflow overload and over-automation of judgment.

One stat says it all:
- 96% of executives expected productivity gains from AI.
- 77% of employees say AI has increased their workload.

Efficiency is local. Productivity is systemic.

At Way We Do, we help organizations embed AI inside governed, human-in-the-loop workflows, so speed turns into real business value.

#AIProductivity #AIGovernance #ProcessManagement #HumanInTheLoop
-
Legibility, not tools: the missing foundation of AI adoption

Most analysis of AI adoption asks the wrong question. It focuses on which jobs will disappear. After 51 weeks of research, a more consequential question emerged: which organisations have made their work legible enough to automate at all?

Across industries, the pattern is consistent. Seventy-two per cent of organisations now use AI in some capacity. Only a third move beyond pilot programmes. The constraint is not technology, budgets, or workforce resistance. The constraint is structural. Most organisations have never fully documented how decisions are made, standardised the processes people depend on, or transferred expertise out of individual heads and into shared systems.

AI does not repair weak architecture. It exposes it.

3️⃣ Three findings that reshaped my thinking:

👉 The roles most exposed to displacement are not in coastal tech hubs, but in administrative centres supporting manufacturing and logistics. Policy attention remains focused elsewhere.

👉 The most valuable capabilities are not tool fluency, but judgement: knowing which questions algorithms cannot answer, recognising misaligned optimisation, and overriding outputs when context demands it.

👉 The infrastructure required to transition millions of workers does not exist at scale. Skills verification, credential alternatives, and large-scale reskilling pathways remain largely unbuilt. No institution is taking ownership.

💡 What this means for strategy: AI adoption is not a technology initiative. It is a diagnostic audit of whether work was ever designed to be transferable, measurable, and scalable. If the answer is no, the tools will not save you.

https://zurl.co/BbWUn

#WorkSystems #AIStrategy #OrganisationalReadiness #TalentStrategy #StrategicPlanning #FutureOfWork
-
Survey: Only 17% Trust AI Without Oversight

Why this matters: Most users believe dependability comes from human-in-the-loop review rather than full automation, with 64% expecting the need for human checking to increase and 82% saying AI often still needs monitoring. This underscores that oversight remains critical as organisations adopt AI.

Our take: This points to a clear opportunity to build strong governance, accountability, and human-plus-AI workflows that balance speed with quality, trust, and real-world results.

What do you think? Could robust human oversight frameworks become essential for sustainable and responsible AI adoption at work?

Tim Mobley Connext

Read more: https://lnkd.in/dJGC6nqM

#AI #Lifecycle #HTC #hrtechcube #Workplace #Governance #HRTech #Connext #workflow #automation
-
In today's fast-moving business landscape, decision-making can no longer rely solely on intuition or historical reports. Enterprises are increasingly turning to Artificial Intelligence to move from reactive decisions to predictive and strategic intelligence.

Modern AI systems analyze massive volumes of structured and unstructured data in real time. From customer behavior and supply chain patterns to financial risks and workforce productivity, AI uncovers insights that human analysis alone might miss.

AI enables leaders to base decisions on real-time analytics instead of quarterly reports. Predictive models forecast demand, identify market trends, and simulate outcomes before implementation. Machine learning models detect anomalies, fraud patterns, and operational risks early, so enterprises can act proactively instead of reacting to crises.

From intelligent supply chain optimization to smart resource allocation, AI reduces waste and maximizes ROI while increasing agility across departments. Generative AI and advanced modeling also allow enterprises to test multiple business scenarios.

Beyond efficiency, AI is reshaping leadership itself. Executives now have dynamic dashboards, real-time forecasting tools, and AI copilots that enhance strategic thinking.

AI is also democratizing insights across organizations. No longer limited to data scientists, advanced analytics tools are empowering managers, product teams, and even frontline employees to make smarter, faster decisions. This creates a culture where intelligence flows across levels rather than being trapped in silos.

Moreover, AI enables hyper-personalized strategic planning. Marketing teams can refine targeting in real time, finance teams can model capital allocation scenarios instantly, and HR departments can use predictive analytics to improve talent acquisition and retention.
The competitive edge now lies in how well enterprises integrate AI into workflows: not as a replacement for human expertise, but as a multiplier of it. The most successful organizations blend human intuition, ethical judgment, and contextual understanding with AI-driven precision.

What's truly transformative is the shift from descriptive analytics (what happened) to predictive and prescriptive analytics (what will happen and what should we do about it).

However, AI-driven decision-making requires more than just tools. It demands:

• Strong data governance
• Ethical AI frameworks
• Cross-functional collaboration
• Continuous upskilling
• A culture that embraces experimentation

Enterprises that combine human judgment with AI intelligence gain a powerful competitive advantage. The future belongs to organizations that treat AI not as a tool, but as a strategic partner in leadership.

The question is no longer "Should we use AI?" The real question is "How intelligently are we integrating AI into our decision-making DNA?"

#ArtificialIntelligence #EnterpriseAI #DigitalTransformation #Leadership #DataDriven #Innovation
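As one concrete instance of the anomaly-detection claim above, a simple z-score screen is often the first pass before heavier models are brought in. This is a sketch under that assumption, not a production fraud system:

```python
import statistics

def zscore_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of values more than `threshold` standard deviations
    from the mean: a crude first-pass screen for operational anomalies."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Daily transaction totals (made-up data) with one obvious spike.
daily_totals = [102, 98, 101, 97, 103, 99, 100, 250]
print(zscore_anomalies(daily_totals, threshold=2.0))  # → [7], the spike
```

A rule this crude assumes roughly stable data and enough history for a meaningful standard deviation; its real job here is to show that "detect anomalies early" bottoms out in a measurable definition of normal.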