How to Improve Agent Interoperability

Explore top LinkedIn content from expert professionals.

Summary

Agent interoperability refers to the ability of different AI agents and systems to communicate, collaborate, and operate together seamlessly, regardless of their underlying technologies or vendors. Improving agent interoperability is crucial for building robust AI ecosystems where agents can coordinate tasks, exchange information, and adapt to changing environments without breaking workflows.

  • Adopt open standards: Use shared protocols and formats, like MCP or A2A, so your agents can connect and interact with others across platforms and vendors.
  • Design for flexibility: Build agent workflows and interfaces that allow easy swapping or upgrading of models and tools without reworking your entire system.
  • Centralize governance: Implement clear rules for identity, permissions, and audit trails at the protocol level to ensure every agent follows the same guidelines and can be tracked easily.
Summarized by AI based on LinkedIn member posts
  • View profile for Kumaran Ponnambalam

    AI / ML Leader & Author

    21,248 followers

    If you swapped your LLM vendor tomorrow, would your AI agents, tools, and workflows still work... or would everything snap in half?

    Over the last few weeks, MCP (Model Context Protocol) has quietly gone from "cool open-source project" to real infrastructure for solving that exact problem:

    • Microsoft just moved MCP support for Azure Functions to GA, with identity-aware, streamable tool triggers so agents can call serverless functions safely.
    • Google announced official MCP support across Google Cloud services, with fully managed MCP servers for BigQuery, GKE, GCE, and more.
    • Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, alongside OpenAI's AGENTS.md and Block's goose, making MCP a neutral, open standard that looks a lot like the "HTTP moment" for agentic AI.

    This is bigger than plumbing; it's a shift in how we architect agents: tools become products, the protocol becomes the platform, and the model becomes a replaceable component.

    If you're building enterprise AI agents, here's how I'd think about MCP and standardized workflows:

    1. Define tools as contracts, not helpers: treat each MCP tool as a versioned, testable API surface with strict schemas, auth scopes, and SLAs, not as a "convenience wrapper" hidden inside prompt code.
    2. Separate orchestration from inference: let your workflow engine (orchestrator) own state, routing, retries, and compensations, and let MCP tools and models handle reasoning and side effects behind that control plane.
    3. Centralize governance at the protocol boundary: enforce identity, permissions, rate limits, tenant isolation, and audit logging at the MCP layer so every model and agent inherits the same guardrails by design.
    4. Design for model and vendor mobility: write conformance tests at the MCP level so you can plug different LLMs or agent runtimes into the same tool graph without re-wiring business logic.
    5. Make workflows MCP-native, not model-native: when you design a new agentic workflow, start by asking "what MCP tools and flows do we expose?" rather than "what should this model prompt say?" so your investment lives in protocols, not in one provider's SDK.

    If MCP is the "USB-C for AI agents," the real differentiator won't be who has the flashiest agent demo; it'll be who designs the cleanest, most governable MCP-native workflows across their stack.
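The "tools as contracts" idea in point 1 can be sketched without tying it to any particular SDK: a tool carries a name, a version, a strict input schema, and required auth scopes, all checked at the boundary before the handler runs. The names below (`ToolContract`, `invoke`) are illustrative, not taken from any MCP SDK.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolContract:
    """A tool treated as a versioned, schema-checked API surface (illustrative)."""
    name: str
    version: str
    input_schema: dict[str, type]  # field name -> required type
    handler: Callable[..., Any]
    required_scopes: set[str] = field(default_factory=set)

    def invoke(self, args: dict[str, Any], scopes: set[str]) -> Any:
        # Governance at the boundary: auth scopes are checked before the handler runs.
        if not self.required_scopes <= scopes:
            raise PermissionError(f"{self.name}@{self.version}: missing scopes")
        # Strict schema: unknown or missing fields are rejected, not silently ignored.
        if set(args) != set(self.input_schema):
            raise ValueError(f"{self.name}: expected fields {set(self.input_schema)}")
        for key, expected in self.input_schema.items():
            if not isinstance(args[key], expected):
                raise TypeError(f"{self.name}: field {key!r} must be {expected.__name__}")
        return self.handler(**args)

# Example: a lookup tool defined as a contract, not a helper buried in prompt code.
get_invoice = ToolContract(
    name="get_invoice",
    version="1.2.0",
    input_schema={"invoice_id": str},
    handler=lambda invoice_id: {"invoice_id": invoice_id, "status": "paid"},
    required_scopes={"billing:read"},
)
```

Because the contract is an ordinary object, it can be conformance-tested on its own, independent of whichever model ends up calling it.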

  • View profile for Sam Julien

    Building developer product at WRITER

    5,303 followers

    Google recently announced their new Agent2Agent (A2A) protocol with more than 50 partners, including Writer. But what is it, and why does it matter, especially for enterprise developers?

    AI is rapidly moving toward agent-based systems that can handle complex tasks, but these systems often operate in isolation. A2A is an open standard that allows different AI agents to communicate and collaborate while maintaining their independent operation. With A2A, agents can exchange context, status, instructions, and data without sharing their internal operations, preserving the proprietary nature of each agent while allowing them to work together.

    What makes A2A particularly valuable is its enterprise-ready approach, built on key principles:

    1. Opaque execution: agents don't share their internal thoughts or tools
    2. Async-first design: built for long-running tasks and human-in-the-loop processes
    3. Modality-agnostic: supports text, audio/video, forms, and other interaction types
    4. Simple implementation: leverages existing standards like HTTP and JSON-RPC

    The protocol centers around task completion, where agents communicate through well-defined objects:

    - Tasks: stateful entities tracking progress and exchanging messages
    - Artifacts: results generated by agents that can be streamed or updated
    - Messages: context, instructions, or other communication between agents
    - Parts: individual content pieces with specific types and metadata

    As with everything in this field, A2A is still evolving. Google is actively seeking community and partner feedback to refine the specification. If you're building agent-based systems, this is definitely worth exploring.

    Blog: https://lnkd.in/gSN6YkYv
    Repo: github.com/google/A2A
    Docs: https://lnkd.in/g66WYcWt
    Enterprise readiness: https://lnkd.in/gFU8q_37
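Since A2A rides on JSON-RPC over HTTP, a task-creation request is just a structured JSON object. The sketch below illustrates the Task/Message/Part shape described in the post; the method name and field names are approximations of the evolving spec, not authoritative.

```python
import json
import uuid

def make_a2a_send_task(text: str) -> dict:
    """Build an illustrative JSON-RPC request asking a remote agent to run a task.

    The shape (a stateful Task carrying a Message made of typed Parts) follows
    the description above; exact field names in the current A2A spec may differ.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),  # task id: stateful, tracked across turns
            "message": {
                "role": "user",
                "parts": [
                    # Parts: individually typed content pieces
                    {"type": "text", "text": text},
                ],
            },
        },
    }

request = make_a2a_send_task("Summarize Q3 churn by region")
wire = json.dumps(request)  # what actually goes over HTTP to the remote agent
```

Note what is absent: nothing about the remote agent's model, tools, or prompts crosses the wire, which is the "opaque execution" principle in practice.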

  • View profile for Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    715,774 followers

    AI Agents Are Getting Smarter, But Only If They Can Talk to Each Other

    As AI shifts from single-task assistants to multi-agent systems, what truly powers this transformation isn't just bigger models; it's the rise of standardized protocols. These protocols define how agents communicate, manage memory, invoke tools, and collaborate across ecosystems. To make sense of this emerging landscape, I mapped out 10 modern AI agent protocols that are shaping how agents work together.

    Here's a breakdown of what's included:

    • Agent Communication Protocol (IBM): Lifecycle and workflow standardization
    • Agent Gateway Protocol: Message routing between agents and external systems
    • Agent-to-Agent Protocol (Google): Structured inter-agent collaboration (Gemini & Astra)
    • Model Context Protocol (Anthropic): Unified memory and tool embedding inside LLMs
    • Tool Abstraction Protocol (LangChain): Standard JSON for tool metadata
    • Function Call Protocol (OpenAI): Schema-enforced function execution
    • Task Definition Format (Stanford): Declarative task graphs and coordination
    • AgentOS Runtime: Managing stateful, long-lived agents in enterprise settings
    • RDF Agent (Semantic Web): Linked data agent reasoning using SPARQL
    • Open Agent Protocol: A community push toward cross-framework interoperability

    This space is evolving quickly. Protocols like these are quietly becoming the real infrastructure behind the AI agents of tomorrow. Whether you're designing LLM workflows or deploying AI into production systems, these are the interfaces you'll be working with next. Curious which ones you've already explored, or plan to?

  • View profile for Dharmesh Shah

    Founder and CTO at HubSpot. Helping millions grow better.

    1,179,799 followers

    BREAKING: Anthropic launches Claude Opus 4.5

    Several of the upgrades are aimed squarely at people like me: developers building agents. I'm particularly impressed with how elegantly they handle one of the biggest issues developers are dealing with: tool calling. MCP is great (as a protocol) but has a major issue: it front-loads the context window and consumes a lot of tokens. It's easy to end up with a runaway set of MCP servers/tools that clutter the context window and degrade performance.

    So, here's what we have now (and what I'm playing with for the next 12 hours):

    1. Tool Search. Instead of shoving every tool definition into the model up front (like packing your entire house for a weekend trip), Claude can now fetch tool definitions only when it needs them. Fewer tokens. Faster responses. Less clutter. More joy.

    2. Programmatic Tool Calling. Claude can now write code to orchestrate multiple tools without dragging every intermediate result back into the model's context. Think of it as moving from "chatty assistant" to "competent developer who actually reads the docs."

    3. Tool Use Examples. You can now give example calls to show how a tool should be used, not just what fields it has. This dramatically reduces the "I see your schema and choose chaos anyway" problem.

    Why this matters: if you're building agent workflows with lots of tools, these upgrades cut token usage, reduce latency, improve reliability, and generally make your agent behave more like a well-trained teammate and less like an overeager intern.

    My take: as agent architectures get more complex, the bottleneck isn't the model; it's the orchestration. These features move us closer to agents that can reason, retrieve, call tools, and coordinate real work at scale. In other words: better plumbing, better agents. If you're building anything agent-heavy, it's worth a look.
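The "tool search" idea generalizes beyond any one vendor: instead of sending every tool schema to the model up front, keep a registry and surface only the definitions matching the current need. A minimal sketch (class and method names are mine, not Anthropic's API; a real system might match with embeddings rather than keywords):

```python
class ToolRegistry:
    """Serve tool definitions on demand instead of front-loading them all."""

    def __init__(self):
        self._tools: dict[str, dict] = {}

    def register(self, name: str, description: str, schema: dict) -> None:
        self._tools[name] = {"name": name, "description": description,
                             "input_schema": schema}

    def search(self, query: str, limit: int = 3) -> list[dict]:
        # Naive keyword match over name and description; only the hits
        # (not the whole catalog) get injected into the context window.
        q = query.lower()
        hits = [t for t in self._tools.values()
                if q in t["name"].lower() or q in t["description"].lower()]
        return hits[:limit]

registry = ToolRegistry()
registry.register("get_weather", "Current weather for a city",
                  {"city": {"type": "string"}})
registry.register("send_invoice", "Email an invoice to a customer",
                  {"invoice_id": {"type": "string"}})
registry.register("get_stock_price", "Latest stock quote for a ticker",
                  {"ticker": {"type": "string"}})

# Only the definitions relevant to the current turn enter the context window.
weather_tools = registry.search("weather")
```

With hundreds of registered tools, the token cost per turn stays proportional to what the model actually needs, which is the point of the feature described above.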

  • View profile for Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,285 followers

    We are not yet ready for this. A growing army of autonomous agents is engaging not just with humans and other agents, but also with economic and legal institutions. An "agent infrastructure" of systems and protocols could maximize benefits and contain risks, suggest a group of researchers from the Centre for the Governance of AI (GovAI), Harvard Law School, University of Oxford, University of Cambridge, and others (link in comments).

    Most AI safety research is focused on AI system-level interventions. However, different approaches are required in a proliferating multi-agent environment. The researchers propose 3 major functions of effective agent infrastructure: Attribution, Interaction, and Response.

    💡 Attribution: ensuring accountability. Attribution is critical for linking AI agent actions to responsible parties, such as users or organizations. Mechanisms include identity binding, which associates an agent's actions with a legal entity, and certification, which provides verifiable assurances about an agent's behavior, such as data handling policies or autonomy levels. Implementing agent IDs enables tracking and monitoring of specific agents, facilitating incident response and accountability.

    🤝 Interaction: shaping behaviors. Interaction infrastructure defines how agents engage with the world to enable reliability and security. Dedicated agent channels isolate agent activities from regular digital traffic, reducing risks like data contamination or accidental disruptions. Oversight layers empower users or managers to intervene when necessary, improving operational control and accountability. Inter-agent communication protocols support seamless collaboration and negotiation among agents, promoting cooperative outcomes in multi-agent systems.

    🔄 Response: mechanisms to mitigate harm. Response infrastructure addresses problems caused by agents through proactive and reactive measures. Incident reporting systems collect detailed data on harmful events, enabling developers and regulators to understand root causes and implement safeguards. Rollback mechanisms allow reversal of unintended actions, such as erroneous financial transactions, protecting users from significant harm.

    The concept of agent infrastructure and the proposed framework are a very useful foundation for building the next phase of scalable agent ecosystems. We need to develop and agree on these principles soon, as the foundations of a burgeoning agent economy will be built through this year.
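The attribution function, binding agent IDs to legal entities and logging their actions for incident response, can be sketched in a few lines. Everything here (the `AgentDirectory` class and its methods) is an illustrative stand-in for the kind of infrastructure the researchers describe, not a real system.

```python
import time
import uuid

class AgentDirectory:
    """Illustrative attribution layer: bind agent IDs to responsible parties
    and keep an action log so incidents can be traced back to a legal entity."""

    def __init__(self):
        self._agents: dict[str, dict] = {}
        self._actions: list[dict] = []

    def register(self, legal_entity: str, certifications: list[str]) -> str:
        # Identity binding: every agent ID maps to an accountable party,
        # plus any certifications (data handling, autonomy level, ...).
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = {"legal_entity": legal_entity,
                                  "certifications": certifications}
        return agent_id

    def record_action(self, agent_id: str, action: str) -> None:
        # Unregistered agents cannot act anonymously in this scheme.
        if agent_id not in self._agents:
            raise KeyError("unknown agent: cannot attribute action")
        self._actions.append({"agent_id": agent_id, "action": action,
                              "ts": time.time()})

    def attribute(self, agent_id: str) -> str:
        # Incident response: map an agent's actions back to a legal entity.
        return self._agents[agent_id]["legal_entity"]

directory = AgentDirectory()
agent_id = directory.register("Acme Corp", certifications=["data-handling-v1"])
directory.record_action(agent_id, "initiated refund #88")
owner = directory.attribute(agent_id)
```

The rollback and incident-reporting functions would sit on top of exactly this kind of append-only action log.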

  • View profile for Gajen Kandiah

    Chief Executive Officer, Rackspace Technology

    23,341 followers

    The AI-agent conversation is stuck. It is not only about efficiency. It is about reclaiming the opportunities we walked away from. 🚀

    After years leading enterprise-scale digital programs and launching an AI Center of Excellence, I have learned that the noise around orchestration layers distracts us from the real prize. The goal is not simply to speed up today's workflows. It is to revive strategic work we once labeled impossible. I watched a dormant lake of rail telemetry become a platform that now predicts failures, optimizes entire networks, and transforms daily operations. That is the frontier: turning forgotten data into predictive, revenue-generating engines that pay for their own growth.

    Beyond efficiency ➡️ recover abandoned value
    Think about the projects that never cleared pilot:
    • Indexing ten years of customer feedback.
    • Personalizing service for millions in real time.
    • Stress-testing every node in a global supply chain.
    Agents finally give us the cognitive muscle to tackle work at that scope, provided we pair them with rigorous retrieval pipelines and fine-tuned models rather than just "dropping an agent on the problem."

    Why pilots stall ❌ weak data foundations
    Most stalled agent pilots I review break at the same point: the data model is blurry. No algorithm can reason with half-truths. Winning teams invest their energy up front, building precise domain-specific data structures before writing a single prompt. An agent's power equals its data quality.

    My 4-step playbook ✅
    1. Model first: design a semantic layer your agents trust. Capture the real language of your business.
    2. Govern early: create rules that let units share context without risking security or compliance. A strong data mesh is an accelerator.
    3. Grow AI architects: develop leaders who see abandoned opportunities and connect strategy, data, and delivery.
    4. Iterate in the open: run tight design-build-test loops. Visible progress builds trust each cycle.

    Five signs you are ready for agents 🔍
    1. Architecture is model-first; data outranks UI polish.
    2. Secure, context-aware agent calls (MCP, A2A: promising but still emerging) are planned from day one.
    3. Observability (logs, replays, guardrails) is wired in up front.
    4. A library of reusable agents stands on a common, trusted data layer.
    5. Business and tech teams share a studio to co-create, monitor, and refine solutions.

    The race to agentic AI will not be won with marketplaces or shiny interfaces. Durable advantage belongs to leaders who transform lost ambitions and dormant data into measurable outcomes. 💡

    #AIStrategy #DigitalTransformation #DataCentricAI #ValueCreation #AgenticAI #Innovation

  • View profile for Kris Kimmerle

    Vice President, AI Risk & Governance @ RealPage

    3,471 followers

    Agents are fantastic at chasing goals across multiple tools, yet each hand-off is a potential snag.

    USER INPUT → REASONING ENGINE
    Every prompt is untrusted text, whether it arrives through a UI, an API call, or a webhook. A single sentence can smuggle hidden instructions that override system messages and guide later turns. Give each prompt the same scrutiny you reserve for raw SQL.

    AI AGENT → EXTERNAL TOOLS / FUNCTION CALLS
    When a prompt triggers an API request, shell command, or payment transfer, plain text turns into side effects. Keep every tool on a short leash: scope permissions tightly, issue short-lived tokens, and run a dry-run first. The extra step may feel slow, but it is cheaper than cleaning up a rogue file write.

    AI AGENT → MEMORY OR CONTEXT WINDOW
    Whether the agent is stateless (just the current context window) or stateful (writing to a vector store or database), yesterday's data can become today's weakness. A poisoned vector survives reboots and shapes future answers long after the attacker has gone. Tag every write with provenance, log every read, and purge what you no longer need.

    AI AGENT → PEER AGENTS / SWARM
    When agents start talking to each other, every node assumes the last one played fair. A single compromised peer can push bad tasks through the whole mesh. Interop protocols such as MCP, ACP, and A2A make that collaboration possible, but they are still early on the maturity curve, and each one handles state, discovery, and message format differently. Until the standards settle, treat every hand-off as untrusted: sign and verify messages, run them through a policy filter, and log provenance so a clever cascade cannot hide in plain sight.

    Pull the thread tight at every seam, and the fabric holds. Let it fray, and you are sewing incident reports instead of new features.
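The "sign and verify messages" advice for peer hand-offs can be sketched with a shared-secret HMAC. This is a minimal illustration; a real deployment would more likely use per-agent asymmetric keys and proper key management rather than a hard-coded secret.

```python
import hashlib
import hmac
import json

SECRET = b"per-pair shared secret"  # illustrative only; use real key management

def sign_handoff(payload: dict, sender: str) -> dict:
    """Wrap an inter-agent message with provenance and an HMAC signature."""
    body = json.dumps({"sender": sender, "payload": payload}, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_handoff(envelope: dict) -> dict:
    """Reject tampered or forged messages before they enter the mesh."""
    expected = hmac.new(SECRET, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("hand-off failed signature check")
    return json.loads(envelope["body"])

msg = sign_handoff({"task": "refund order 41"}, sender="billing-agent")
trusted = verify_handoff(msg)
```

Because the sender is inside the signed body, a compromised peer cannot alter either the task or its provenance without invalidating the signature, which is what lets the mesh log and audit hand-offs with confidence.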

  • View profile for Sarveshwaran Rajagopal

    Applied AI Practitioner | Founder - Learn with Sarvesh | Speaker | Award-Winning Trainer & AI Content Creator | Trained 7,000+ Learners Globally

    55,200 followers

    🤖 Building AI agents is no longer a monolith!

    Many developers believe creating powerful AI agents means being locked into a single ecosystem. The reality is a rapidly expanding universe of specialized frameworks designed for interoperability, especially with the rise of standards like MCP (Model Context Protocol).

    Here's a look at the diverse toolkit available for building next-gen AI agents:

    ✅ Ecosystem-Specific SDKs: Major players are providing native tools to build on their platforms, including the OpenAI SDK, Vercel AI SDK, and Google ADK.
    ✅ Open-Source Powerhouses: Frameworks like LangChain, Semantic Kernel, and Praison AI offer incredible flexibility and community support for orchestrating complex agentic workflows.
    ✅ Standard-Driven Development: Dedicated MCP SDKs for Python and TypeScript, along with frameworks like Lastmile MCP Agent, are pushing for a future where agents can communicate seamlessly, regardless of how they were built.
    ✅ Specialized Tooling: From Composio for integrating tools to CopilotKit for frontend support, the ecosystem is filling every niche to accelerate development.

    Takeaway: The future isn't about building one master agent, but orchestrating a team of specialized agents that work together effectively.

    What frameworks are you experimenting with for your AI agents? Share your favourites below!

    #AI #GenerativeAI #AIAgents #LLMs #MCP #DeveloperTools #OpenAI #Langchain

    👉 Follow Sarveshwaran Rajagopal for more insights on AI, LLMs & GenAI.

  • View profile for Philipp Schmid

    AI Developer Experience at Google DeepMind 🔵 prev: Tech Lead at Hugging Face, AWS ML Hero 🤗 Sharing my own views and AI News

    164,849 followers

    Why Do Multi-Agent LLM Systems "still" Fail?

    A new study explores why multi-agent systems are not significantly outperforming single agents, and identifies 14 failure modes of multi-agent systems. A multi-agent system (MAS) is a set of agents that interact, communicate, and collaborate to achieve a shared goal that would be difficult or unreliable for a single agent to accomplish.

    Benchmark:
    - Selected five popular, open-source MAS (MetaGPT, ChatDev, HyperAgent, AppWorld, AG2)
    - Chose tasks representative of each MAS's intended capabilities (software development, SWE-Bench Lite, utility service tasks, GSM-Plus), a total of 150 tasks
    - Recorded the complete conversation logs, with human annotator reviews, Cohen's Kappa scores to ensure consistency and reliability, and LLM-as-a-Judge validation

    Multi-agent failure modes:
    1. Disobey Task Spec: ignores task rules and requirements, leading to wrong output.
    2. Disobey Role Spec: agent acts outside its defined role and responsibilities.
    3. Step Repetition: unnecessarily repeats steps already completed, causing delays.
    4. Loss of History: forgets previous conversation context, causing incoherence.
    5. Unaware Stop: fails to recognize task completion, continues unnecessarily.
    6. Conversation Reset: dialogue unexpectedly restarts, losing context and progress.
    7. Fail Clarify: does not ask for needed information when unclear.
    8. Task Derailment: gradually drifts away from the intended task objective.
    9. Withholding Info: agent does not share important, relevant information.
    10. Ignore Input: disregards or insufficiently considers input from others.
    11. Reasoning Mismatch: actions do not logically follow from stated reasoning.
    12. Premature Stop: ends task too early, before completion or information exchange.
    13. No Verification: lacks mechanisms to check or confirm task outcomes.
    14. Incorrect Verification: verification process is flawed, misses critical errors.

    How to improve a multi-agent LLM system:
    📝 Define tasks and agent roles clearly and explicitly in prompts.
    🎯 Use examples in prompts to clarify expected task and role behavior.
    🗣️ Design structured conversation flows to guide agent interactions.
    ✅ Implement self-verification steps in prompts for agents to check their reasoning.
    🧩 Design modular agents with specific, well-defined roles for simpler debugging.
    🔄 Redesign topology to incorporate verification roles and iterative refinement processes.
    🤝 Implement cross-verification mechanisms for agents to validate each other.
    ❓ Design agents to proactively ask for clarification when needed.
    📜 Define structured conversation patterns and termination conditions.

    Github: https://lnkd.in/ebmCg28d
    Paper: https://lnkd.in/etgsH6BH
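Several of these fixes (structured flows, dedicated verification roles, explicit termination conditions) can be combined in one small orchestration loop. A sketch, where the worker and verifier callables are hypothetical stand-ins for real agents:

```python
from typing import Callable

def run_with_verification(task: str,
                          worker: Callable[[str, str], str],
                          verifier: Callable[[str, str], bool],
                          max_rounds: int = 3) -> str:
    """Structured flow: worker drafts, a separate verifier checks, hard stop.

    Targets three failure modes from the study: 'No Verification' (a distinct
    checking role), 'Unaware Stop' (explicit termination on approval), and
    'Step Repetition' (a bounded number of rounds).
    """
    feedback = ""
    for _ in range(max_rounds):
        answer = worker(task, feedback)
        if verifier(task, answer):
            return answer  # explicit, recognized completion
        feedback = "verifier rejected previous answer"
    raise RuntimeError(f"no verified answer after {max_rounds} rounds")

# Toy stand-ins: the worker improves once it gets feedback; the verifier
# accepts only answers that mention tests.
def worker(task: str, feedback: str) -> str:
    return "draft with tests" if feedback else "draft"

def verifier(task: str, answer: str) -> bool:
    return "tests" in answer

result = run_with_verification("write a parser", worker, verifier)
```

The same skeleton extends naturally to cross-verification: swap the single verifier for a quorum of peer agents.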

  • View profile for Sean Falconer

    AI @ Confluent | Technology Executive | Advisor | ex-Google | Podcast Host for Software Huddle and Software Engineering Daily

    12,310 followers

    Google's Agent2Agent Protocol Is a Big Deal, But It's Missing One Thing

    Google's new Agent2Agent protocol gives agents a shared language to collaborate: discovering each other, negotiating tasks, and working together across frameworks and vendors. It's a bold step toward fixing the AI silo problem that I've previously written about. Just like HTTP made the web interoperable, A2A aims to do the same for agents. And paired with Anthropic's MCP (which standardizes tool use), we're starting to see the foundations of a true multi-agent ecosystem.

    But there's a catch. A2A still relies on point-to-point communication (i.e., HTTP, SSE, webhooks). That works at small scale. But in real enterprise environments, it doesn't hold up. Here's why:

    • Too many direct connections (the NxM problem)
    • Tightly coupled agents = brittle systems
    • Hard to coordinate multi-agent workflows in real time

    This is the same problem microservices faced, and the solution is the same: event-driven architecture. By running A2A communication over Apache Kafka, we get:

    • Loose coupling between agents
    • Multiple consumers of every message (agents, logging, analytics)
    • Durable, replayable events for audit and recovery
    • Real-time orchestration without custom glue code

    Kafka doesn't replace A2A. It fills in what's missing, making it easier to scale, observe, and connect agents as part of a larger system. Check out my article linked in the comments for full details on this idea.
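The key property of the event-driven pattern can be shown without Kafka itself: a producer appends to a topic, and any number of consumers (peer agents, audit, analytics) read the same durable log at their own pace. A minimal in-memory sketch of that shape; in production, the `Topic` class here would be a real Kafka topic.

```python
class Topic:
    """In-memory stand-in for a Kafka topic: an append-only, replayable log
    with independent per-consumer offsets (illustrative, not durable)."""

    def __init__(self):
        self._log: list[dict] = []          # append-only event log
        self._offsets: dict[str, int] = {}  # consumer name -> next offset

    def publish(self, event: dict) -> None:
        # The producer never knows (or cares) who will consume this event:
        # that is the loose coupling the post argues for.
        self._log.append(event)

    def poll(self, consumer: str) -> list[dict]:
        # Each consumer tracks its own offset, so agents, audit logging,
        # and analytics all see every event without custom glue code.
        start = self._offsets.get(consumer, 0)
        events = self._log[start:]
        self._offsets[consumer] = len(self._log)
        return events

tasks = Topic()
tasks.publish({"type": "task.created", "task_id": "t1", "from": "planner-agent"})
tasks.publish({"type": "task.completed", "task_id": "t1", "from": "coder-agent"})

agent_view = tasks.poll("reviewer-agent")  # a peer agent consumes both events
audit_view = tasks.poll("audit-log")       # audit independently replays the log
```

Adding a new consumer (say, an analytics pipeline) requires no change to any producer, which is exactly how the NxM connection problem disappears.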
