𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗚𝗲𝘁𝘁𝗶𝗻𝗴 𝗦𝗺𝗮𝗿𝘁𝗲𝗿 — 𝗕𝘂𝘁 𝗢𝗻𝗹𝘆 𝗜𝗳 𝗧𝗵𝗲𝘆 𝗖𝗮𝗻 𝗧𝗮𝗹𝗸 𝘁𝗼 𝗘𝗮𝗰𝗵 𝗢𝘁𝗵𝗲𝗿 As AI shifts from single-task assistants to multi-agent systems, what truly powers this transformation isn't just bigger models — it's the rise of 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝗶𝘇𝗲𝗱 𝗽𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀. These protocols define how agents communicate, manage memory, invoke tools, and collaborate across ecosystems. To make sense of this emerging landscape, I mapped out 𝟭𝟬 𝗺𝗼𝗱𝗲𝗿𝗻 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁 𝗽𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 that are shaping how agents work — together. Here’s a breakdown of what’s included: • 𝗔𝗴𝗲𝗻𝘁 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗜𝗕𝗠): Lifecycle and workflow standardization • 𝗔𝗴𝗲𝗻𝘁 𝗚𝗮𝘁𝗲𝘄𝗮𝘆 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹: Message routing between agents and external systems • 𝗔𝗴𝗲𝗻𝘁-𝘁𝗼-𝗔𝗴𝗲𝗻𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗚𝗼𝗼𝗴𝗹𝗲): Structured inter-agent collaboration (Gemini & Astra) • 𝗠𝗼𝗱𝗲𝗹 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰): Unified memory and tool embedding inside LLMs • 𝗧𝗼𝗼𝗹 𝗔𝗯𝘀𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗟𝗮𝗻𝗴𝗖𝗵𝗮𝗶𝗻): Standard JSON for tool metadata • 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻 𝗖𝗮𝗹𝗹 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗢𝗽𝗲𝗻𝗔𝗜): Schema-enforced function execution • 𝗧𝗮𝘀𝗸 𝗗𝗲𝗳𝗶𝗻𝗶𝘁𝗶𝗼𝗻 𝗙𝗼𝗿𝗺𝗮𝘁 (𝗦𝘁𝗮𝗻𝗳𝗼𝗿𝗱): Declarative task graphs and coordination • 𝗔𝗴𝗲𝗻𝘁𝗢𝗦 𝗥𝘂𝗻𝘁𝗶𝗺𝗲: Managing stateful, long-lived agents in enterprise settings • 𝗥𝗗𝗙 𝗔𝗴𝗲𝗻𝘁 (𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗪𝗲𝗯): Linked data agent reasoning using SPARQL • 𝗢𝗽𝗲𝗻 𝗔𝗴𝗲𝗻𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹: A community push toward cross-framework interoperability This space is evolving quickly. Protocols like these are quietly becoming the 𝗿𝗲𝗮𝗹 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 behind the AI agents of tomorrow. Whether you're designing LLM workflows or deploying AI into production systems, these are the interfaces you'll be working with next. Curious which ones you've already explored — or plan to?
Robot Communication Protocols
Explore top LinkedIn content from expert professionals.
Summary
Robot communication protocols are the rules and standards that allow robots, AI agents, and devices to exchange information and coordinate actions, whether in a factory, embedded system, or multi-agent AI environment. These protocols shape how robots collaborate, share data, and safely interact with other systems, forming the backbone of modern interconnected automation and artificial intelligence.
- Choose wisely: Select a protocol based on your system’s requirements for speed, reliability, wiring complexity, and scalability, as each protocol fits different use cases from simple point-to-point chats to robust industrial automation.
- Prioritize security: Build communication stacks with safeguards like authentication, input filtering, and behavior monitoring to defend against vulnerabilities such as spoofing and data leaks.
- Focus on interoperability: Adopt standardized agent protocols to connect tools, devices, and AI agents across platforms, avoiding fragmented “silos” and enabling future expansion and collaboration.
-
-
🔧 Wired for Communication: Which Protocol Fits Your System? 🔌 In embedded design, how your devices talk matters just as much as what they say. Here's a breakdown of common protocols, their topologies, and where they actually get used — no fluff, just real talk: 🟨 UART – Point-to-Point Simplicity 🧠 Used in: Microcontroller to peripheral comms, GPS, sensors, debug ports 2 wires: TX and RX One-to-one — no bus, no sharing Asynchronous, basic, but super reliable Everyone’s first love in embedded systems ✅ Best for: Console logs, serial comms, and when you just need two devices to "text" each other. 🟩 SPI – Fast & Synchronous ⚙️ Used in: Memory chips, sensors, displays, ADCs One master, multiple slaves Separate lines for each slave’s select Super fast — great for high-speed peripherals Not scalable without extra GPIOs ✅ Best for: Talking fast to specific parts like flash memory, OLEDs, or IMUs. 🟦 I2C – Shared, Polite, and Efficient 📚 Used in: Low-speed sensors, RTCs, EEPROMs, PMICs Only 2 wires (SCL & SDA) Master-slave setup, but devices have addresses Everyone shares the same bus — polite conversation Speed limited but wiring is minimal ✅ Best for: Connecting lots of peripherals over short distances, like sensor clusters. 🟥 RS-485 / RS-422 – The Industrial Backbone 🏭 Used in: Industrial automation, BMS, long-distance sensor arrays Supports multi-drop communication Long cable runs (up to 1 km) Differential signaling = noise immunity Needs termination resistors ✅ Best for: Talking to multiple devices over long distances in noisy environments. 🔵 MIL-STD-1553 – Mission-Critical & Redundant ✈️ Used in: Aircraft, spacecraft, defense systems Bus + redundant backup bus One controller (BC), many Remote Terminals (RTs), and optional Bus Monitor Deterministic, synchronized, and rock-solid Requires transformer-coupled stubs ✅ Best for: Situations where failure is not an option. 🟠 EtherCAT – Industrial Speed Demon 🚀 Used in: Motion control, robotics, high-speed I/O Line topology with ultra-low latency Master controls frame; slaves modify it in transit 100 µs cycle times or better ✅ Best for: Fast, real-time, synchronized control of motors and actuators. 🟣 TSN – Ethernet Grows Up 🧠 Used in: Smart factories, EVs, real-time networks Ethernet with real-time guarantees Supports mixed traffic: control + data Needs TSN-capable switches ✅ Best for: Complex industrial networks with a mix of critical and non-critical data. 🚀 TL;DR: Protocol Topology Real Use UART Point-to-point Debugging, GPS, console logs SPI Master/slave Fast sensors, displays, memory I2C Shared bus Sensor hubs, low-speed comms RS-485 Multi-drop Long-distance industrial use 1553 Dual-redundant bus Aerospace, military systems EtherCAT Line High-speed real-time control TSN Star (Ethernet) Industry 4.0, EVs, mixed traffic 💬 What’s your favorite protocol to work with?
-
MCP vs. A2A: Understanding Modern AI Communication Protocols 📌 Key Architectural Differences: MCP: Client-server architecture with centralized resource management A2A: Direct peer-to-peer communication between AI agents 📌 MCP Benefits: Structured access to various data sources (local and web-based) Centralized governance and security controls Specialized servers for different functional needs Better resource management for enterprise environments 📌 A2A Advantages: Secure agent collaboration without intermediaries Dynamic task and state management Streamlined UX negotiation between agents Direct capability discovery 📌 Real-world Applications: MCP excels in enterprise settings requiring oversight and governance A2A shines in scenarios needing real-time, dynamic collaboration Hybrid approaches emerging for complex systems 📌 Implementation Considerations: Scalability: MCP requires scaling server infrastructure, while A2A distributes processing load Security: MCP offers centralized security policies, A2A requires peer-level security protocols Latency: Direct A2A communication potentially reduces response times Complexity: MCP simplifies agent design but creates server dependencies 📌 Industry Trends: Large tech companies favor MCP for controlled AI deployment Research environments often implement A2A for experimental flexibility Financial services adopt MCP for regulatory compliance and audit trails Healthcare exploring both models depending on use case sensitivity As AI systems evolve from single-agent to multi-agent architectures, these communication protocols will become fundamental infrastructure considerations. The choice between MCP and A2A (or hybrid approaches) will significantly impact system flexibility, maintainability, and security posture. What's your take on these approaches? Do you see hybrid models winning in the enterprise space? Have you implemented either protocol in your organization's AI systems?
-
The AI Agent Communication: Protocols, Security Risks, and Defense is really easy to read and full of actionable insights on securing next-generation autonomous AI agents. Key highlights include: • Mapping security risks across all three stages of agent communication: User-Agent, Agent-Agent, and Agent-Environment. • Analyzing real-world vulnerabilities such as prompt injection, multimodal exploits, SEO poisoning, agent spoofing, and denial-of-service. • Introducing defense strategies including semantic input filtering, source validation, agent orchestration, and lifecycle monitoring. • Demonstrating experimental attacks on MCP (Anthropic) and A2A (Google), exposing systemic weaknesses in cross-agent collaboration. • Proposing a taxonomy of protocols and risks to guide the design of safe, scalable communication stacks. • Emphasizing the gap between LLM safety and agent-level assurance with new attack surfaces emerging from autonomy, tool access, and multimodal execution. • Calling for holistic safeguards including authentication, behavior auditing, access control, and runtime sandboxing. Who should take note: • Security architects building agent platforms and orchestration layers. • AI and ML teams deploying tool-using or self-reflective agents at scale. • CISOs assessing risks introduced by multi-agent pipelines and real-world actuation. • Researchers working on protocol standardization and agent security evaluation. Noteworthy aspects: • Ground-up classification of communication behaviors tailored to real agent lifecycles, not just traditional LLMs. • Defense playbook against both user-side and environment-side threats including prompt-based overthinking, visual jailbreaks, and compromised tools. • Experimental proof that compromised agents can leak sensitive data, manipulate users, and execute malicious tasks. • Applicable to code-based and no-code ecosystems where agents act as intermediaries between users and services. Actionable step: Use this framework to establish communication-aware threat models and mitigation strategies across your AI agent stack, including interaction boundaries, agent trust scores, and contextual verification. Consideration: Securing agents is not just about model alignment. It is about designing resilient communication protocols, distributed trust models, and fail-safe execution paths for systems that reason, act, and collaborate.
-
𝗧𝗵𝗲 𝗺𝗼𝘀𝘁 𝗰𝗼𝗺𝗽𝗿𝗲𝗵𝗲𝗻𝘀𝗶𝘃𝗲 𝘀𝘂𝗿𝘃𝗲𝘆 𝗼�� 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 𝗷𝘂𝘀𝘁 𝗱𝗿𝗼𝗽𝗽𝗲𝗱! ⬇️ LLMs can now plan, reason, use tools, and collaborate. But most of them don’t speak the same language. And without a shared protocol, we’ll never unlock scalable, autonomous systems. It’s the missing infrastructure of the AI age. A team of researchers from Shanghai Jiao Tong University (great to see my former university here) just released what might be the most comprehensive survey on AI Agent Protocols to date. Their goal? To map the emerging landscape of how LLM-powered agents interact with tools, data, and each other — and why current fragmentation is holding us back. 𝗧𝗵𝗲 𝗽𝗮𝗽𝗲𝗿 𝗯𝗿𝗲𝗮𝗸𝘀 𝗻𝗲𝘄 𝗴𝗿𝗼𝘂𝗻𝗱 𝗯𝘆: * Proposing a new classification system for protocols * Comparing 13+ protocols (like MCP, A2A, ANP, Agora) * Outlining the technical gaps we need to solve * Showing how protocol design will shape the future of multi-agent systems and collective AI 𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 6 𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀 𝘄𝗵𝗶𝗰𝗵 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲: ⬇️ 1. 𝗔𝗴𝗲𝗻𝘁 𝗜𝗻𝘁𝗲𝗿𝗼𝗽𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗜𝘀 𝗕𝗿𝗼𝗸𝗲𝗻 ➜ Today’s agents are siloed. Everyone builds their own APIs, their own wrappers, their own formats. This is the early-internet problem all over again. 2. 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 𝗔𝗿𝗲 𝘁𝗵𝗲 𝗡𝗲𝘄 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 ➜ Think TCP/IP — but for agents. These standards will determine whether tools and agents can communicate across vendors, platforms, and environments. 3. 𝗠𝗖𝗣 𝗜𝘀 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗳𝗼𝗿 𝗧𝗼𝗼𝗹 𝗨𝘀𝗲 ➜ Anthropic’s Model Context Protocol (MCP) is one of the most advanced protocols for agent-to-resource interactions — and it fixes key privacy issues in tool invocation. 4. 𝗔2𝗔 𝗮𝗻𝗱 𝗔𝗡𝗣 𝗘𝗻𝗮𝗯𝗹𝗲 𝗠𝘂𝗹𝘁𝗶-𝗔𝗴𝗲𝗻𝘁 𝗖𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝗼𝗻 ➜ Google’s A2A is enterprise-grade and async-first. ANP, on the other hand, is open-source and aims to create a decentralized Agent Internet. 5. 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗚𝗼𝗲𝘀 𝗕𝗲𝘆𝗼𝗻𝗱 𝗦𝗽𝗲𝗲𝗱 ➜ The report introduces 7 dimensions for assessing agent protocols — from security to operability to extensibility. It’s not just about performance. It’s about trust, adaptability, and integration. 6. 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀 𝗦𝗵𝗮𝗽𝗲 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 ➜ A protocol that works for a single-agent chatbot may fail in an enterprise-grade multi-agent orchestration scenario. Architecture matters. So does context. As we move toward a true Internet of Agents, the paper outlines the standards, challenges, and architectural shifts we need to unlock scalable, interoperable agent ecosystems. Important dicussion and great insights! At the end of the day, it’s about enabling agents to coordinate, negotiate, learn, and evolve — forming distributed systems greater than the sum of their parts. You can download the survey below or in the comments!
-
If you want to understand how AI Agents actually work together… start by understanding their protocols. AI agents don’t collaborate magically. They communicate, share memory, negotiate tasks, and stay safe because a whole ecosystem of protocols makes it possible. Teams focus on models and tools. But it’s the protocol layer that decides whether your agents scale, or fail. This map breaks down the core building blocks every agentic system relies on: 1. Core & Widely Used Protocols These are the fundamental standards that let agents talk to each other, execute tasks, and interact with tools in a structured, predictable way. They form the backbone of any agent-based architecture. 2. Transport & Messaging This layer keeps agents connected. It handles event streams, async messaging, real-time communication, and reliable delivery - everything needed for fast, fault-tolerant workflows. 3. Memory & Context Exchange Agents can’t reason or collaborate without shared context. These protocols help them store state, exchange histories, and retrieve past knowledge so the system behaves consistently over time. 4. Security & Governance Every agent interaction must be audited, authorized, and safe. These standards ensure identity, access control, compliance, and safe execution, especially when agents touch production systems. 5. Coordination & Control This is the orchestration layer. It handles oversight, delegation, decision-making, and task handoffs - enabling multi-agent pipelines to work as one coherent system. - Why this matters As AI agents move from prototypes to production, understanding these protocol layers becomes essential. Models generate intelligence - but protocols create order, safety, and scale. If you want agents that can collaborate, negotiate, and execute reliably, this is the foundation to build on.
-
Google just launched Agent2Agent (A2A) protocol that could quietly reshape how AI systems work together. If you’ve been watching the agent space, you know we’re headed toward a future where agents don’t just respond to prompts. They talk to each other, coordinate, and get things done across platforms. Until now, that kind of multi-agent collaboration has been messy, custom, and hard to scale. A2A is Google’s attempt to fix that. It’s an open standard for letting AI agents communicate across tools, companies, and systems, that securely, asynchronously, and with real-world use cases in mind. What I like about it: - It’s designed for agent-native workflows (no shared memory or tight coupling) - It builds on standards devs already know: HTTP, SSE, JSON-RPC - It supports long-running tasks and real-time updates - Security is baked in from the start - It works across modalities- text, audio, even video But here’s what’s important to understand: A2A is not the same as MCP (Model Context Protocol). They solve different problems. - MCP is about giving a single model everything it needs- context, tools, memory, to do its job well. - A2A is about multiple agents working together. It’s the messaging layer that lets them collaborate, delegate, and orchestrate. Think of MCP as helping one smart model think clearly. A2A helps a team of agents work together, without chaos. Now, A2A is ambitious. It’s not lightweight, and I don’t expect startups to adopt it overnight. This feels built with large enterprise systems in mind, teams building internal networks of agents that need to collaborate securely and reliably. But that’s exactly why it matters. If agents are going to move beyond “cool demo” territory, they need real infrastructure. Protocols like this aren’t flashy, but they’re what make the next era of AI possible. The TL;DR: We’re heading into an agent-first world, and that world needs better pipes. A2A is one of the first serious attempts to build them. Excited to see how this evolves.
-
🚀 Excited to share my latest project: a fully autonomous Smart Warehouse Management System built using the Agent Communication Protocol (ACP)! This innovative system features four intelligent agents InventoryBot, OrderProcessor, LogisticsBot, and WarehouseManager working seamlessly together to manage stock, schedule deliveries, and handle reorders, all through standardized, real-time communication. 🌟 What is ACP? ACP is a framework that enables autonomous agents to communicate effectively using structured messages with defined performatives (e.g., ASK, REQUEST_ACTION, TELL, CONFIRM). It ensures clear, reliable interactions, making it ideal for complex systems like smart warehouses where coordination is key. 🌟 How It Works: Scenario 1: Stock Alert & Reorder - The OrderProcessor checks stock levels with InventoryBot and triggers reorders to maintain minimum availability (e.g., reordering to fill low laptop stock). Scenario 2: Delivery Scheduling - The WarehouseManager directs LogisticsBot to schedule deliveries of goods, with LogisticsBot confirming the schedule including a tracking ID for transparency. Scenario 3: Low Stock Management - InventoryBot alerts the WarehouseManager of low stock (e.g., 5 tablets), prompting a confirmation that 15 tablets are needed; the WarehouseManager then requests OrderProcessor to place an order for 15 tablets, with OrderProcessor confirming via a PO number. The interactive frontend visualizes these interactions, complete with a Statistics dashboard (e.g., total messages: 6, active conversations: 3, registered agents: 4) to monitor performance, making it perfect for real-world adoption. 🏭Impact on Logistics: This solution transforms the logistics industry by reducing manual oversight, optimizing stock levels, and streamlining delivery schedules. With real-time data and automated reordering, warehouses can operate 24/7, cut costs, and improve customer satisfaction key drivers in today’s fast-paced supply chain. This showcase how AI and ACP can revolutionize warehouse management. Check out the demo video to see it in action!
-
2025 is the Year of ACP, not just MCP. IBM has introduced a new protocol for AI collaboration called Agent Communication Protocol, building upon the foundation laid by Anthropic's Model Context Protocol. ACP takes a leap forward in how AI systems work together, allowing complex multi-agent workflows that were impossible with MCP alone. Here's how ACP works: 1️⃣ Agent Orchestration ACP enables multiple AI agents to communicate seamlessly, allowing specialized agents to combine their capabilities. 2️⃣ Standardized Messaging The protocol uses structured message formats that help agents understand each other across different frameworks and languages. 3️⃣ Task Delegation Complex problems are broken down and assigned to the most capable specialized agents, then results are assembled into cohesive solutions. 4️⃣ Framework Independence ACP works with agents built in any programming language or AI framework, removing technical barriers to collaboration. 5️⃣ Dynamic Discovery Agents can discover and utilize each other's capabilities, creating flexible AI ecosystems that evolve to meet changing needs. Whether you're building complex AI workflows or connecting specialized agents, ACP elevates what's possible, enabling deeper collaboration and more powerful solutions. Here's how ACP is architecturally different from MCP: MCP: - Focuses on connecting a single AI to external data sources and tools - Creates one-to-many relationships between an AI and various resources - Uses JSON-RPC primarily for accessing information and executing actions - Designed to expand what one AI model can access and accomplish ACP: - Centers on connecting multiple AIs to each other in collaborative relationships - Creates many-to-many networks of specialized agent capabilities - Extends JSON-RPC with agent-specific communication patterns - Designed for dividing complex tasks among specialized AI team members Understanding these distinctions matters for building the right AI infrastructure. Some problems need better tools for one AI. Others need multiple AIs working together. ACP isn't just different from MCP; it's complementary: ✅ Solves problems too complex for any single AI agent ✅ Creates AI teams with specialized members handling different aspects of a task ✅ Enables more natural workflows that mirror human team collaboration The combination of MCP and ACP is essential. MCP gives individual AIs access to tools and data. ACP helps those AIs work together as teams. Together, they create AI systems that are more capable, flexible, and effective. Over to you: What complex problems could you solve with a team of specialized AI agents working together?
-
This one is for my friends in lab automation! I'm really proud that Device could be the home of PyLabRobot's coming out party! I remember meeting Stefan Golas & Rick Wierenga when I was working at Spaero and being impressed by how much progress they'd made toward solving such a fundamental problem in the field: getting robots to share a common language. The paper is free to read and fully open access here: https://lnkd.in/ghZHAHqJ Since Sergei Kalinin has already kindly contributed a very thorough Commentary discussing the importance of this work in the context of innovation growth and machine learning (https://lnkd.in/gR98gDV9), I'll take a second to zoom in on some of the other aspects of this work that I think make it more important than just its surface layer would imply. First, imagine that you're a life scientist working at a biotech company and you're interested in automation. If you want to get started with lab automation, you're faced with a plethora of robot choices that mostly use locked down programming interfaces that don't work with one another. Researchers are faced with learning to code or hiring an automation engineer, and even then you get locked into one brand of instrument almost immediately. Moreover, if you're looking to draw on the literature for protocols, you're essentially banking on them having used the same brand of robot that you have, otherwise it's completely irreproducible for you. Now, hyperlanguages don't necessarily solve this all at first blush, but they immediately enforce one of the most important parts of reproducibility: INTEROPERABILITY. Putting a protocol up in PyLabRobot's language is inherently more useful to the public because whether it's a Hamilton STAR or a Tecan Freedom that either the original authors or the reproducers used, the protocol should load and integrate any third party tools that are also PyLabRobot compatible. Especially given the open source nature of PyLabRobot, every small stride forward propagates out and makes the entire system stronger. So now that last problem is solved. But what if there is no literature protocol? What is really impressive about this paper is that ON TOP OF ALL OF THAT, they created an LLM-powered tool that takes in written instructions and generates PyLabRobot code for you. This is the closest we've been to "Copy/paste your lab notebook onto the robot and press play" we've ever been. When we finally cure cancer, there will have been a finite number of experiments required to get there. Lab automation gets us to that unknown number faster, but only if we're using it and using it well. In order to achieve that scale, we desperately need tools that enable resource sharing and allow bench scientists to participate in using them. I'm really glad that PyLabRobot is a huge first step in that direction and, since it's open source, the community will only make it better in the coming years.