Rethinking Leadership in an AI-Enabled Workplace

"What role will leaders play in managing AI in the workplace?" That was the question Frederik von Briel raised during our recent catch-up, where we talked about his and his colleagues latest research—sparking a conversation about leadership in times of crisis.

An hour proved far too short for this kind of topic. What might AI actually look like in the workplace? Could it function as an agent—as part of the team—and if so, how would leaders manage it? Who would be responsible or accountable for its actions? How would we measure its performance? And what if—perhaps not likely, but still possible—it underperforms?


AI as Team Member: Shifting Team Dynamics

If an AI agent is considered part of the team, how would human team members interact with it? Would we humanise it by giving it a name—already common practice—and how might that affect team dynamics?

Would naming an AI encourage trust, foster clearer communication, or create the illusion of sentience and shared responsibility? Could it blur the lines between tool and teammate?


AI as Tool: Quietly Powerful, But Still a Risk

Of course, AI won’t always function as an agent or team member. In some cases, it may take on more of a tool-like role—a powerful system operating in the background, enhancing workflows without the need for social integration.

In these scenarios, leadership challenges shift: the focus becomes how to ensure ethical use, transparency, appropriate oversight, and integration into decision-making—without creating dependency or blind trust.


AI as Decision Support: Who Makes the Call?

Another possibility is that AI acts as a decision-support system, offering recommendations or analysis that humans then act on. This raises different questions:

  • Who makes the final call?
  • How do we audit the quality of AI-generated input?
  • What happens when human judgment conflicts with machine suggestions?


AI as Infrastructure: Invisible but Influential

Alternatively, AI may be embedded within processes or platforms, almost invisible to the user—automating routine tasks, handling data, or orchestrating operations. In this context, the leadership challenge may be less about "managing AI" and more about managing change:

  • How do we support workforce adaptation?
  • What new skills do employees and leaders need?
  • How do we build trust in systems people can’t always see?

In these models, AI isn’t a teammate—it’s infrastructure. But the questions of accountability, transparency, and leadership responsibility remain just as critical.


Who Is Ultimately Responsible?

There just wasn’t enough time to explore it all in our conversation. But it raises a crucial question: how will leaders lead AI—not just implement it, but truly take responsibility for how it shapes work, teams, and decision-making?

It also raises a broader issue of responsibility.

Will it be leaders—because they are best positioned to understand how AI is used within the wider context of organisational culture, strategy, and human impact?

Or will responsibility fall to the IT professionals who implement these systems, taking accountability for both the benefits and the unintended consequences that come with them?


More Questions Than Answers—And That’s the Point

More questions than answers—but perhaps that's exactly the point. If they aren't already, these are the kinds of issues organisations need to start thinking about seriously.

Is it time to revisit role descriptions, leadership expectations, and even KPIs to reflect a future where AI is embedded in daily work? If accountability is shared between humans and machines, our current structures may no longer be fit for purpose.


This reflection was sparked by a conversation prompted by the research article "Why and How Societal Crises Give Rise to Extreme Growth Outliers: A Process Model of Crisis-Driven Entrepreneurial Opportunity Creation" by Frederik von Briel, Per Davidsson, and Jan Recker, published in the Academy of Management Review.

Good article Slaven. These are definitely the kinds of AI issues organisations need to start thinking about seriously. Particularly at the board and governance table.


This was a valuable read. "Who is ultimately responsible?" opens up a whole new conversation. Thank you for sharing, Slaven Drinovac.


Thanks for the shoutout, Slaven — it was great catching up! I'm curious to see the discussion this post sparks. Since you already mentioned my recent piece on crises as enablers, I thought I'd also share another related (though more focused) publication: “Orchestrating Human-Machine Designer Ensembles during Product Innovation” https://journals.sagepub.com/doi/pdf/10.1177/00081256231170028

