Stop talking to AI, let them talk to each other: The A2A protocol


Have you ever asked Alexa to remind you to send a WhatsApp message at a specific time, and then wondered: ‘Why can’t Alexa just send the message herself?’

Or felt the frustration of using an app to plan a trip, only to have to jump between your calendar, booking website, tour operator, and bank account instead of your AI assistant doing it all? Exactly this gap between AI automation and human action is what the agent-to-agent (A2A) protocol aims to address.

With the introduction of AI agents, the next evolutionary step seemed to be communication. But when communication between machines and humans is already here, what’s left?

To break down the silos between data systems and applications with a multi-agent ecosystem, Google announced the A2A protocol last year, in collaboration with more than 50 technology partners. It is an open standard that allows AI agents to communicate, exchange information securely, and collaborate across agentic applications and complex enterprise workflows, regardless of their underlying technology.

From prompting to orchestration: How A2A actually works

A2A is designed around five principles. The first is embracing agentic capabilities: agents collaborate in their natural modality without being reduced to intermediary tools, retaining their individual capabilities and independence.

It is built on existing standards, making it easier to integrate with existing IT stacks, and it reuses OpenAPI-style authentication schemes to guarantee secure collaboration. It provides real-time feedback as well as asynchronous notifications for long-running operations (LROs). Lastly, it is designed to support various modalities, including text, audio, and video streaming.

Announcing the Agent2Agent Protocol (A2A). Image: Google

A2A works as a facilitator between a “client” agent and a “remote” agent. The client agent requests and communicates the tasks, while the remote agent is responsible for taking action on those tasks, looking for the best solution or input. This process involves several stages and key components:

  • Upon receiving a task request made by a human or other AI agent, the client agent evaluates the remote agents with their agent cards, which are structured profiles detailing identity, capabilities, service endpoints, and authentication requirements. The client agent then selects the best-fit agent and goes through authentication according to the security scheme specified in the agent card.
  • Afterwards, communication is established and works towards task completion. A task is a protocol-defined object with a lifecycle: some tasks complete immediately, while for LROs the agents communicate to stay in sync until the task is done. The output of a task is called an artifact.
  • Agents exchange messages carrying context, replies, artifacts, or user instructions. Each message includes parts with a specified content type, such as a generated image, allowing the agents to negotiate the correct format for the user’s UI capabilities. The full list of specifications and error codes is available in the protocol documentation, for those intrepid souls with a hunger for knowledge.
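As a sketch of the discovery step above, a client agent might compare agent cards and pick the best fit for a task. The card fields and skill IDs below are illustrative, loosely modeled on the description above rather than copied from the spec:

```python
# Hypothetical agent cards: structured profiles listing identity,
# capabilities, service endpoint, and authentication requirements.
cards = [
    {
        "name": "calendar-agent",
        "url": "https://agents.example.com/calendar",   # made-up endpoint
        "skills": [{"id": "schedule_meeting"}],
        "authentication": {"schemes": ["bearer"]},
    },
    {
        "name": "travel-agent",
        "url": "https://agents.example.com/travel",     # made-up endpoint
        "skills": [{"id": "book_flight"}, {"id": "book_hotel"}],
        "authentication": {"schemes": ["oauth2"]},
    },
]

def select_agent(cards, needed_skill):
    """Return the first card advertising the needed skill, or None."""
    for card in cards:
        if any(s["id"] == needed_skill for s in card.get("skills", [])):
            return card
    return None

chosen = select_agent(cards, "book_flight")
# The client would then authenticate against chosen["authentication"]
# before opening the task conversation with the remote agent.
```

In practice the selection logic can be far richer (ranking by capability, cost, or trust), but the shape is the same: read the cards, pick an agent, authenticate, then start the task.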

This protocol complements Anthropic’s Model Context Protocol (MCP) in building robust agentic applications: MCP handles agent-to-tool communication, giving models structured access to APIs as tools, while A2A lets agents discover and use each other’s capabilities, supporting the growth of agentic systems.

Why is A2A a game-changer?

The A2A protocol was built to tackle the interoperability gap between specialised AI agents, with enterprise-scale adoption in mind. Instead of treating agents as isolated tools, as MCP does, A2A enables a shared ecosystem where agents interact as agents, preserving their unique capabilities and producing higher-quality outcomes.

It also rethinks execution by allowing customisable, secure collaboration between opaque agents, preserving data privacy and intellectual property by design. As the number of agents and interactions grows, A2A addresses scalability head-on, enabling seamless integration and the emergence of complex AI ecosystems within enterprise systems. It relies on established standards such as HTTP(S) and JSON-RPC to avoid reinventing core technologies, and on existing web standards for authentication, authorisation, security, privacy, tracing, and monitoring.
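Concretely, “relying on established standards” means an A2A call can be an ordinary HTTPS POST carrying a JSON-RPC 2.0 envelope and a standard bearer token. A minimal sketch follows; the endpoint, token placeholder, and method name are made up for illustration:

```python
import json

endpoint = "https://agents.example.com/travel"  # hypothetical remote-agent endpoint
headers = {
    "Content-Type": "application/json",
    # Standard web auth, as specified by the agent card's security scheme:
    "Authorization": "Bearer <token-obtained-out-of-band>",
}
body = json.dumps({
    "jsonrpc": "2.0",          # plain JSON-RPC 2.0 envelope
    "id": 1,
    "method": "tasks/get",     # illustrative method name
    "params": {"id": "task-001"},
})

# Sending this with e.g. urllib.request would be a plain POST: no custom
# transport, framing, or auth machinery beyond existing web standards.
```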

A2A has applications across a wide range of industries, including customer service, supply chain, human resources, healthcare, research, education, creative industries, public services, financial services, IT operations, and consulting.

By enabling agent collaboration across applications and organizations, it supports advanced data analysis and task automation, from background screenings and inventory logistics to enhanced fraud detection and highly personalized customer solutions.

The frictions we can’t ignore

Despite its promise, A2A is not without challenges. Like most distributed systems, one of its main concerns is security. Continuous back-and-forth communication between agents widens the attack surface across multiple layers, from identity and messaging to context propagation and system management.

This highlights the need for intrinsic identity, integrity, and sequencing guarantees in A2A, alongside the challenge of adding them without compromising its lightweight design and interoperability.
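One way to picture those guarantees: each message carries a monotonically increasing sequence number and a MAC over its content, so the receiver can detect tampering and replayed or reordered messages. A toy sketch using HMAC-SHA256 — the scheme and the shared key are illustrative, not something A2A itself mandates:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # stand-in for a real per-agent-pair credential

def sign(message, seq):
    """Serialise a message with its sequence number and attach an HMAC tag."""
    payload = json.dumps({"seq": seq, "message": message}, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag, expected_seq):
    """Check integrity (HMAC) and sequencing (expected sequence number)."""
    expected_tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected_tag):
        return False  # integrity failure: payload was altered in transit
    return json.loads(payload)["seq"] == expected_seq  # replay/reorder check

payload, tag = sign({"text": "book the flight"}, seq=1)
ok = verify(payload, tag, expected_seq=1)        # accepted
replayed = verify(payload, tag, expected_seq=2)  # same message replayed: rejected
```

The tension the article describes is visible even here: every guarantee adds key distribution, state, and verification overhead that the protocol’s lightweight design must absorb.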

A second limitation emerges at the architectural level, particularly in enterprise-scale AI communication. A2A relies primarily on HTTPS and high-performance Remote Procedure Call (RPC) in direct point-to-point communication.

While this works at a small scale, it can become a complex and unsustainable risk in large-scale enterprise environments. A single change, overlap, failure, or misrouted message can trigger cascade effects, creating operational risk unless complemented with additional orchestration and governance mechanisms.

Is A2A the future of AI?

The incredibly fast introduction and adoption of AI agents, and the shift from individual agents to agentic AI, make it necessary for standards to evolve alongside the technology. A2A marks a clear shift in how AI systems are conceived and designed, creating an ecosystem that breaks down silos and allows cross-collaboration among agents.

It is arguably a necessary step, and while it presents challenges and limitations, it is worth remembering that the protocol is still in its early stages and will improve as it matures.

Alongside MCP and LLMs, A2A enables a broader agent stack and suggests an emerging blueprint for agentic AI, in which communication, execution, and governance are managed at distinct layers, enabling agents to act in real-world systems.

The real significance of A2A is what it signals about where AI is heading. The next generation of AI will not be defined by a single, all-purpose model, but by interconnected ecosystems of agents designed to work together by default.
