AI is a Turbo Button for Whatever Mess You're Already Making
In early 2023, I started running workshops to help teams find opportunities to benefit from generative AI. The aim was to guide them toward the use cases where AI could free them from drudgery or augment their skills.
In one of the early sessions I was having a cuppa with the client just before we kicked off when they told me they wanted to focus the workshop on efficiency. Their priority was to use AI to do more work rather than do better work. I tried to convince them otherwise but they insisted.
So when we split into our breakout groups, each team generated lists of inefficiencies and tasks they'd rather not do. They were on fire. The lists grew rapidly and the groups were still in full flow when I called them back to share their ideas.
We heard about bloated approval processes with embedded politics, five mutually incompatible spreadsheets that need to be Franken-stitched into monthly reports, a data turf-war between sales and marketing, department heads who each use different definitions of “customer”, meetings about whether to schedule another meeting, a weekly status report that takes days to produce yet no one reads, an intranet site where 90% of the links are broken - the lists went on.
The session looked productive. The client was grinning at me from the corner of the room.
But there was a problem: Generative AI couldn't really help with many of the tasks.
These were lists of organisational dysfunctions (and there was no shortage of them). Any AI involvement would simply be slapping a sticky plaster on a festering wound.
An LLM is more likely to hit “turbo” on whatever madness you hand it. The best case scenario is that it simply does the dysfunctional thing faster. The worst case scenario would give your C-suite the heebie-jeebies.
It’s a mirror, not a magic wand
AI’s super-power is multiplication, not wisdom. Feed it nonsense and it will deliver nonsense at 10,000 transactions per second. Give it a flawed strategy and it will execute it faster than you can say “Oh wait, that’s not what I meant to do!”
The classic case study for this is Klarna, which bragged about replacing 700 employees with an AI chatbot. I can just imagine the CFO punching the air at the projected $10 million saving. But within a few months, the customer backlash had become so severe they were forced to backpedal faster than a juiced-up Lance Armstrong doing the Tour de France in reverse. Earlier this year, they started rehiring their customer service humans all over again.
Duolingo had a similar fire-and-rehire experience. As did the Commonwealth Bank of Australia.
You’ll find lots of people blaming AI for these situations, but you need to remember that humans were behind every single one of these decisions. Humans were behind the training and testing of these AI products. And humans made the decision to deploy them.
Yet these corporate humans didn’t seem to consider the perspective of the customer humans.
AI was simply the poor grunt doing what it was instructed to do, unquestioningly.
Humans are the spanners in their own works
These examples are the grand ones you’re likely to have already heard about. But I want to bring it down to earth for you and demonstrate exactly how easy it is to create an AI project that sounds good on the surface but fails to fix the problem it was designed to solve.
Here are examples from workshops I’ve run.
Situation #1: Let’s speed up customer refunds
Imagine you have a refund approval process that's causing frustration. Customers are becoming irate at having to wait for their money to be returned. And the hope is that AI can speed this up. However, the refund process requires three signatures before any money is released. First customer service, then legal and finally finance. AI may be able to route the refund request faster, but the blockages are still made of flesh. An out-of-office email from someone along the way is capable of derailing this workflow until they’re back at their desk tanned and refreshed.
The only solution is to reimagine the process.
Situation #2: Let’s develop better customer relationships
Your sales, marketing, and support functions have different priorities and measure their success with different metrics. One focuses on sales volume, another on customer engagement and another on the number of closed support tickets. All of them maintain separate records. And all of them have a direct line to the customer. So they might simultaneously send someone an upsell email, a discount call, and a churn-prevention chatbot message.
AI can only remedy this situation after the politics have been ironed out, the databases combined and a cohesive customer strategy created. Otherwise, AI will simply reinforce silos and create further confusion - resulting in lost customers.
Situation #3: Let’s hit those KPIs
This seems like something AI was designed for. You want to reduce customer complaints, increase volume of work, or boost positive customer ratings. Surely AI can help you achieve your goals. Well, yes - if you approach it the right way. But it’s surprisingly easy to get it wrong. If you instruct an AI tool to increase the number of customer queries that are handled, it may simply focus on reducing the length of each call - which leads to disgruntled customers calling back numerous times in an effort to get their issue resolved. (This is based on a true story.)
Getting your KPIs right is vital if you want to get the results you’re after.
So what’s a manager to do?
First of all, don’t believe the hype. Be sceptical of the marketing claims of AI products because the products weren’t created by people who experience the day-to-day of your company. And the marketing materials were written with the sole purpose of generating sales. None of these AI solutions give you success right out of the box - it’s all about how you implement them.
Next, do an audit of the problems in your business. Run workshops to find out what irritates your employees and what stops them from doing a better job. Then do what you can to address these issues without the help of AI. These are your dysfunctions and you need to focus on the root cause rather than the symptoms.
Finally, when you’ve fixed the problems, you can start thinking about how AI can speed up the work, automate it, amplify it, or make it more pleasurable (that last one is sadly neglected).
Decide on good metrics for tracking success. Qualitative measurements are just as important as quantitative ones. Increasing productivity while damaging morale is foolish. Processing more sales while damaging customer experience is dumb. Increasing quarterly profits while harming long-term growth is idiotic. Yet, I’ve seen exactly that because these are the metrics the C-suite’s bonuses are based on.
The bottom line
AI will not tidy your dysfunctional house. Like a Roomba that’s not been programmed to deal with a dog turd, it will simply redistribute the mess everywhere it touches.
You need to deal with the hard truths. You need to have a zero tolerance approach to business bullsh*t. And by clearing up the dysfunctions, you don’t just make the workplace better for AIs; you make it a better place for humans while you’re at it.
Eliminate petty politics, prune absurd approvals, demolish data silos and rewrite perverse incentives. Get the human stuff right and AI becomes rocket fuel. Ignore it and you’ll simply be using AI to efficiently spray kerosene on a dumpster fire.
Dave Birss is Co-Founder of The Gen AI Academy - a collection of over 30 AI experts offering practical training and advice for companies all over the world. If you haven't taken his LinkedIn Learning courses, you really should.