
Agentic AI and A2A in 2025: From Prompts to Processes

Learn about AI's latest evolutionary advancements — and what's ahead — from Kevin Laughridge of Deloitte in this episode of The New Stack Makers.
May 20th, 2025 9:00am

Generative AI is so last year. We have now entered the agentic AI hype cycle — a move driven by the need to get AI to do something useful beyond writing our emails and completing our code snippets.

Agentic AI systems don’t just generate text or code on demand; they can proactively and autonomously make decisions and accomplish tasks, with minimal human intervention.

“It’s not just copilots anymore,” said Kevin Laughridge, Deloitte’s lead alliance partner for Google, in this episode of The New Stack Makers. “We’re now talking about AI agents that can make decisions, take actions and learn as they go — all within a company’s existing business processes.”

In this On the Road episode of Makers, recorded at Google Cloud Next in Las Vegas, Laughridge joined TNS founder and publisher Alex Williams for a conversation about AI’s latest evolutionary advancement that makes the leap, as Laughridge puts it, “from prompts to processes.”

There is a growing need among the C-suite to move beyond AI experimentation, he told us: “They’re saying, ‘I’ve done the pilot. I’ve got a demo. Now, how do I scale this into a capability across my organization?’”

Enter agentic AI.

Google Cloud and Deloitte Engineering are deeply focused on enabling enterprises to deploy AI agents that do real work — securely, at scale, and with measurable ROI. “The differentiating factor for an AI agent versus generative AI is that the agent is pulling information,” he said. “It’s summarizing and doing any necessary reasoning associated with it, and then actually driving some sort of action.”
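That pull, reason, act loop is easy to sketch. Below is a minimal, illustrative Python rendering of the distinction Laughridge draws; fetch_records, summarize, and execute are hypothetical stubs standing in for a real stack’s retrieval, LLM, and action APIs:

```python
# Illustrative sketch of the agent loop described above: pull information,
# summarize and reason over it, then drive an action. Every function here
# is a hypothetical stub, not a real API.

def fetch_records(task: str) -> list[str]:
    # Stand-in for pulling data from a system of record (CRM, HRIS, ...).
    return [f"record relevant to {task!r}"]

def summarize(records: list[str]) -> str:
    # Stand-in for an LLM call that summarizes and reasons over the records.
    return f"{len(records)} record(s) reviewed; next step: provision accounts"

def execute(decision: str) -> str:
    # Stand-in for driving a real action: opening a ticket, sending an email.
    return f"action taken: {decision}"

def run_agent(task: str) -> str:
    records = fetch_records(task)   # 1. pull information
    summary = summarize(records)    # 2. summarize and reason
    return execute(summary)         # 3. drive an action

print(run_agent("onboard new hire"))
```

Generative AI stops after step 2; the agent is defined by step 3.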

The challenge, though, is that the bigger the business, the bigger the stack. Large companies conduct their operations through complex ecosystems of tools and services, often using them in tandem with each other. How can an AI agent hop across application boundaries and data silos to accomplish a complex process like, for example, recruiting and then onboarding a new hire?

‘You Can’t Be Monolithic’

Google seeks to solve this with a new, open protocol called Agent2Agent (A2A), announced during the keynote at Google Cloud Next. The A2A protocol allows AI agents to communicate with each other to exchange information securely and then coordinate actions across different enterprise platforms or applications. More than 50 companies — including household names like Atlassian, PayPal, Salesforce and of course Google — have already signed on to use A2A.
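In code, an A2A exchange has roughly the shape sketched below: a client agent discovers a remote agent through its published “agent card,” then delegates a task to it over JSON-RPC. This is a simplified illustration based on the protocol as announced; the URL and field names are assumptions, not a definitive rendering of the spec:

```python
# Simplified sketch of an A2A exchange: discover a remote agent via its
# agent card, then send it a task over JSON-RPC. The endpoint, method
# name, and payload fields below are illustrative.
import requests

REMOTE = "https://hr-agent.example.com"  # hypothetical remote agent

# 1. Discovery: A2A agents publish a card describing their skills.
card = requests.get(f"{REMOTE}/.well-known/agent.json").json()
print("Remote agent skills:", [s["name"] for s in card.get("skills", [])])

# 2. Delegation: hand off a task the remote agent is better placed to handle.
reply = requests.post(REMOTE, json={
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Schedule onboarding for the new hire"}],
        },
    },
}).json()
print("Task state:", reply["result"]["status"]["state"])
```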

“We’re very excited because, as an open protocol, A2A allows us to connect multiple [independent software vendors] so that we can actually stitch together business processes,” Laughridge said. “You can’t be monolithic. We have to play within an entire ecosystem, with agents that actually communicate and do things together, because it’s almost never one agent or one solution that solves the entire business process.”

Ditto large language models, Laughridge said: “Organizations also need optionality with their LLMs. I need more of a model garden, where I’m able to swap different LLMs based upon the type of problem that I have.”
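A model garden can be as thin as a routing table in front of one calling convention. The sketch below illustrates the idea; the model names and the call_llm helper are hypothetical placeholders, not any vendor’s API:

```python
# Illustrative "model garden": route each problem type to a different LLM
# behind a single interface. Model names and call_llm are placeholders.
MODEL_GARDEN = {
    "code": "code-tuned-model",
    "summarization": "fast-cheap-model",
    "reasoning": "large-frontier-model",
}

def call_llm(model: str, prompt: str) -> str:
    # Stand-in for a real client SDK call (Vertex AI, OpenAI, etc.).
    return f"[{model}] response to: {prompt}"

def ask(problem_type: str, prompt: str) -> str:
    # Swap models per problem type instead of hard-wiring one LLM.
    model = MODEL_GARDEN.get(problem_type, "large-frontier-model")
    return call_llm(model, prompt)

print(ask("summarization", "Summarize this quarter's hiring pipeline"))
```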

But where is the playground where all of these agents and LLMs can meet? Up to this point, Laughridge said, we’ve tended to build AI systems one use case at a time. However, plumbing individual agents into every single system in your stack is very expensive, not to mention a nightmare to manage and maintain.

“The emerging paradigm is the AI platform,” he said. “An overall agentic framework and a place where all of your agents can run.”

What a Basic AI Platform Needs

A basic AI platform, Laughridge said, needs a runtime framework, some sort of UI, and connectors so your AI agents can talk to each other across different applications and APIs.

“The most powerful thing I see in Google’s A2A agent space is all these connectors,” he said. “Because once I’ve plumbed each of my different systems into the platform, I can now turn on agents and use that plumbing to do all kinds of things.”
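The economics here are “plumb once, reuse everywhere”: each enterprise system is wired up a single time as a connector, and every agent on the platform can then call it. The registry below is a hypothetical sketch of that idea, not any particular platform’s API:

```python
# Hypothetical connector registry: each system is integrated once, and any
# agent on the platform reuses that plumbing instead of bespoke wiring.
from typing import Callable

CONNECTORS: dict[str, Callable[[str], str]] = {}

def connector(name: str):
    # Decorator that registers a system integration under a name.
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        CONNECTORS[name] = fn
        return fn
    return register

@connector("hris")
def hris(payload: str) -> str:
    return f"HRIS handled: {payload}"       # stand-in for an HR system API

@connector("ticketing")
def ticketing(payload: str) -> str:
    return f"ticket opened: {payload}"      # stand-in for a ticketing API

def agent_step(system: str, payload: str) -> str:
    # Any agent can hit any plumbed system through the shared registry.
    return CONNECTORS[system](payload)

print(agent_step("hris", "create employee record"))
print(agent_step("ticketing", "provision a laptop"))
```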

So, what’s next? Laughridge hinted at even deeper integrations between AI and cloud native architectures. Agents that work not only across systems but also across clouds. Agents that self-optimize based on performance. And, ultimately, agents that work hand in hand with humans to co-create workflows and business solutions.

There’s much more in the full conversation: when to buy versus build an AI platform, how AI can help when your company discovers it’s running 300 legacy Pascal applications with no documentation and no idea what they do, and how agentic AI means developers are no longer just coding but building systems that can learn and interact as AI moves from side projects to core digital infrastructure. If your organization is moving from AI proof of concept to AI production, this is one discussion you won’t want to miss.
