In April 2026, OpenAI, Anthropic, Google, Microsoft, AWS, and Block co-founded the Linux Foundation's Agentic AI Foundation to govern a new open protocol: Agent-to-Agent (A2A). While most teams are still figuring out MCP, the industry quietly agreed on the missing half of the agentic stack. Here's what A2A is, why it matters, and what engineering leaders need to understand before their agent architectures are locked into proprietary coordination layers.
When AI agents were demo projects, coordination was someone else's problem. One agent, one task, one model. The architecture was simple because the scope was narrow.
That era ended quietly in 2025. Gartner logged a 1,445% surge in multi-agent system inquiries in fourteen months. Enterprise engineering teams — the same ones still completing their MCP integrations — are simultaneously being asked to build systems where a planning agent delegates to a coding agent that calls a testing agent that reports to an orchestration layer. Production multi-agent deployments don't look like chatbot demos. They look like distributed systems with a new class of actor: AI agents that need to discover each other, negotiate task handoffs, maintain shared context, and fail gracefully when a downstream agent is unavailable.
The problem is that there was no agreed protocol for how agents should do any of that. Until April 2026, each team building a multi-agent system was also, implicitly, building a proprietary communication layer. That's the coordination problem nobody planned for — and it's the problem that A2A exists to solve.
The Model Context Protocol (MCP), released by Anthropic in late 2024 and now with over 97 million monthly downloads across Python and TypeScript SDKs, solved the vertical integration problem: how does an AI agent connect to tools, APIs, databases, and external services? MCP gave that connection a standard shape. An agent using MCP can discover available tools, invoke them with structured inputs, and receive structured outputs — regardless of who built the tool or what model is running the agent.
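The core of that "standard shape" is a pair of JSON-RPC 2.0 message types. The sketch below illustrates them in Python; the method names `tools/list` and `tools/call` follow the published MCP specification, but the example tool and its arguments are hypothetical.

```python
import json

# 1. The agent asks an MCP server which tools it exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. The agent invokes one of those tools with structured arguments.
#    "query_database" is a hypothetical tool name for illustration.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

print(json.dumps(call_request, indent=2))
```

Because both messages are plain JSON-RPC, any client that can speak the protocol can drive any server's tools, which is what makes the tool layer model- and vendor-agnostic.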
A2A solves a different problem: the horizontal coordination problem. How does one AI agent discover another agent, describe what it needs, hand off a subtask, and receive a result — especially when those two agents might be running on different models, built by different teams, or deployed by different vendors?
Practitioners now describe the resulting architecture along two axes: MCP is the vertical axis (agent downward to tools and data), and A2A is the horizontal axis (agent sideways to other agents). Production agentic systems at any meaningful scale require both layers. The teams that built robust MCP integrations in early 2026 now face the harder problem of how their agents coordinate with each other — and A2A is the protocol specification that tells them how.
Technically, A2A defines a standardized way for agents to publish their capabilities (what tasks they can perform), accept structured task requests from other agents, return results or stream progress back to the requesting agent, and handle authentication and authorization in agent-to-agent contexts. It borrows from existing web standards — HTTP, JSON-RPC, and OAuth patterns — which makes it implementable without requiring new infrastructure, and testable with tools engineering teams already have.
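Concretely, that means two artifacts: a published capability manifest (the "agent card," conventionally served at `/.well-known/agent.json`) and a JSON-RPC task request. The sketch below approximates the shapes in the A2A specification; treat the field names, skill IDs, and URLs as illustrative assumptions rather than a normative example.

```python
import json

# An agent publishes its capabilities as an "agent card", conventionally
# discoverable at /.well-known/agent.json on the agent's base URL.
# All names and URLs below are hypothetical.
agent_card = {
    "name": "code-review-agent",
    "url": "https://agents.example.com/review",
    "skills": [
        {"id": "review-pr", "description": "Review a pull request diff"}
    ],
    "authentication": {"schemes": ["bearer"]},
}

# A requesting agent hands off a subtask as a JSON-RPC 2.0 call.
task_request = {
    "jsonrpc": "2.0",
    "id": "task-42",
    "method": "tasks/send",
    "params": {
        "id": "task-42",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Review the attached diff"}],
        },
    },
}

print(json.dumps(task_request)[:80])
```

Note that nothing here requires new infrastructure: the card is a static JSON document behind HTTPS, and the task request is an ordinary authenticated POST.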
Enterprise protocol standardization has a poor historical track record. Competing vendors rarely agree on anything until one of them wins the market and the others have no choice. The browser wars, the video codec wars, the messaging protocol fragmentation — the default pattern is proprietary entrenchment, not voluntary interoperability.
What happened in April 2026 is structurally unusual. OpenAI, Anthropic, Google, Microsoft, AWS, and Block co-founded the Linux Foundation's Agentic AI Foundation to provide neutral governance for both MCP and A2A. These are not minor industry players offering token support. They are the primary architects of the protocols that are currently running in production, choosing to place those protocols under a foundation with a 30-year track record of managing critical open-source infrastructure — Linux, Kubernetes, and the OpenAPI Specification among them.
The significance of this is difficult to overstate. When six of the most competitive companies in the technology industry agree to put their agent communication protocols under neutral governance before any of them has dominant market share, it signals a deliberate collective judgment: the coordination layer should not be a competitive moat. The value each company extracts from the ecosystem comes from their models, their tooling, their enterprise relationships — not from owning the protocol that makes agents interoperable.
This mirrors what happened with HTTP in the early web, or with Kubernetes governance transferring to the CNCF in 2016. Protocol standardization under neutral governance is the technical industry's clearest signal that a layer of infrastructure has become foundational — and that the smart money is on building above it, not re-implementing it.
Every engineering team that built a multi-agent system before A2A built some version of a proprietary coordination layer. Sometimes it was minimal — a shared message queue and a set of conventions about request shapes. Sometimes it was substantial — a custom orchestration framework with its own task routing logic, capability registry, and error handling. All of it was reasonable given what was available. All of it is now technical debt.
The specific debt isn't the code itself — most of it is probably functional. The debt is the lock-in. A proprietary coordination layer means your agents can only talk to other agents within that coordination system. When a new vendor agent becomes available — a specialized code review agent, a security scanning agent, a domain-specific data analysis agent — your team has to build a bespoke integration adapter rather than plugging into a protocol-compliant endpoint. Multiply that by the rate at which new agent capabilities are emerging in 2026, and the integration tax compounds rapidly.
The A2A-compliant alternative is structurally different. Protocol-compliant agents expose a capability manifest and accept standardized task requests. Adding a new agent to your network is closer to adding a new microservice behind an OpenAPI endpoint than building a custom integration. The discovery, negotiation, and error handling are handled at the protocol level, not the application level.
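In practice, "adding a new agent" then reduces to discovery plus capability matching. The sketch below uses a hypothetical in-memory registry as a stand-in for fetching each agent's published card; the card structure and skill IDs are assumptions for illustration.

```python
# Hypothetical registry mapping agent base URLs to their capability
# manifests, standing in for fetched /.well-known/agent.json documents.
registry = {
    "https://agents.example.com/review": {
        "name": "code-review-agent",
        "skills": [{"id": "review-pr"}],
    },
    "https://agents.example.com/scan": {
        "name": "security-scan-agent",
        "skills": [{"id": "scan-dependencies"}],
    },
}

def find_agents(skill_id: str) -> list[str]:
    """Return base URLs of registered agents advertising the given skill."""
    return [
        url
        for url, card in registry.items()
        if any(skill["id"] == skill_id for skill in card["skills"])
    ]

# Routing to a newly added vendor agent needs no bespoke adapter:
# any card advertising the skill is immediately addressable.
print(find_agents("scan-dependencies"))
```

This is the microservice analogy made literal: the manifest plays the role of an OpenAPI document, and routing logic binds to the declared capability, not to any one vendor's SDK.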
For teams with existing multi-agent systems, the migration path is not immediate — but the architectural direction is clear. New agents should be built A2A-compliant from the start. Existing coordination logic should be encapsulated behind A2A-compliant adapters as it's refactored. And procurement of any third-party agentic tooling in 2026 should include A2A compliance as a vendor evaluation criterion — the same way API teams started requiring OpenAPI specifications from SaaS vendors five years ago.
The security model for agent-to-agent communication is fundamentally different from the security model for human-to-system or system-to-API communication, and most engineering teams are discovering this the hard way.
In traditional systems, authorization is relatively simple to reason about: a human authenticates, receives a token, and that token governs what the human can access. The human is accountable, the token has a scope, and the audit trail is legible. In multi-agent systems, the authorization graph is a directed acyclic graph of delegation: a human authorizes a planning agent, which authorizes a coding agent, which authorizes a testing agent, which calls an external API. Each delegation step introduces questions that traditional auth models weren't designed to answer: Did the planning agent have the right to delegate this permission? Can the testing agent actually verify that the task it received came from an authorized predecessor? What happens to the permission scope when a task is delegated downstream?
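The invariant that makes a delegation chain auditable is that each hop may only narrow, never widen, the scope it received. The sketch below illustrates that invariant in isolation; the scope names and the `delegate` helper are hypothetical, not part of the A2A specification.

```python
def delegate(parent_scopes: set[str], requested: set[str]) -> set[str]:
    """Grant a downstream agent only scopes the delegator itself holds."""
    if not requested <= parent_scopes:
        raise PermissionError(
            f"cannot delegate scopes not held: {requested - parent_scopes}"
        )
    return requested

# A human authorizes the planning agent with an initial grant.
human_grant = {"repo:read", "repo:write", "ci:run"}
planner = delegate(human_grant, {"repo:read", "repo:write"})

# The planner delegates a narrower grant to the coding agent.
coder = delegate(planner, {"repo:write"})

# The coding agent cannot mint a scope it was never given.
try:
    delegate(coder, {"ci:run"})
except PermissionError as exc:
    print(exc)
```

Enforcing this check at every hop, and logging each grant, is what turns the delegation DAG from an accountability gap into an audit trail.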
A2A addresses agent-to-agent authentication and authorization explicitly — this is not a detail left to implementation teams. The protocol defines how agents establish identity, how task delegation is authorized, and how audit trails are maintained across agent hops. For enterprise teams operating under SOC 2, HIPAA, or the forthcoming requirements of the EU AI Act's high-risk system classification, this auditability is not optional. It is the difference between a compliant agentic architecture and a compliance liability.
The practical implication for CTOs is that multi-agent security cannot be bolted on after the architecture is built. The agent identity model, the delegation authorization model, and the audit trail requirements need to be designed in from the start — and building those components on the A2A specification rather than bespoke conventions means inheriting the security thinking of teams that have been designing for these failure modes since the protocol's inception.
The A2A and MCP protocol landscape changes the evaluation criteria for engineering partners in 2026 in a concrete way. Building agentic systems is no longer primarily a problem of 'can your team write code that calls an LLM API.' It is increasingly a problem of architectural judgment: how do you structure agent networks for composability, how do you design agent identities and authorization scopes correctly the first time, and how do you build coordination layers that stay flexible as the protocol landscape evolves?
These are senior engineering judgment problems. They require engineers who understand distributed systems architecture, who have internalized the security failure modes of agent delegation, and who can read and evaluate evolving protocol specifications critically rather than simply implementing them to pass a checkbox.
For CTOs evaluating nearshore or outsourced engineering partners for agentic projects, the qualification bar has shifted. The relevant questions are no longer 'do your engineers have LLM experience' — that bar is too low and too easily claimed. The questions that differentiate partners in 2026 are: Can your team describe the difference between MCP and A2A and when each applies? How do your engineers think about agent identity and delegation authorization in multi-step workflows? Have your teams built agentic systems in production, not just demos — and can they articulate the failure modes they encountered?
Senior-led teams with genuine distributed systems depth — the profile that has always characterized the strongest nearshore engineering organizations in Eastern Europe — have a structural advantage here. The agentic architecture problem is an infrastructure engineering problem with AI-specific failure modes layered on top. It rewards exactly the combination of fundamentals depth and current protocol literacy that senior engineers who have been tracking the space closely bring to the table.
The A2A governance announcement in April 2026 is an inflection point, not a finish line. The protocol is mature enough to build against, the governance structure is stable, and the major cloud vendors have committed to integration. The engineering decision is no longer whether A2A matters — it is when and how to incorporate it.
For teams with existing multi-agent systems: conduct an architecture review specifically focused on the coordination layer. Map the proprietary conventions your agents use to communicate with each other. Identify the highest-traffic agent-to-agent interfaces and assess their refactoring cost toward A2A compliance. The goal in the near term is not full migration — it is understanding your debt load and prioritizing the paths where A2A compliance creates the most optionality.
For teams building new agentic systems: the decision is simpler. Start A2A-compliant. The specification is implementable with existing infrastructure, the reference implementations are available from Google and the Linux Foundation, and the cost of retrofitting a proprietary coordination layer six months from now is substantially higher than the cost of building against the protocol today.
For teams evaluating agentic vendors or outsourcing partners: add A2A compliance to your vendor due diligence checklist. Ask specifically about agent capability manifests, task delegation authorization models, and audit trail generation. Any vendor or partner who cannot articulate their A2A roadmap in 2026 is either behind the market or not building for the enterprise agentic use cases you should be investing in.
The underlying principle is the same one that governed good infrastructure decisions during the cloud era: bet on open standards governed by neutral foundations, build proprietary advantage above the protocol layer, and treat the protocol itself as infrastructure rather than IP. A2A is that protocol for multi-agent coordination. The organizations that internalize it now will have the architectural flexibility that the organizations who built cloud-native on top of Kubernetes had — the ones who ignored it will spend years paying migration costs instead.
The A2A protocol is not the most visible development in AI engineering in April 2026. The most visible developments are the model releases — Claude Sonnet 5, Gemma 4, Qwen updates — arriving in a compressed window and generating the majority of industry attention. That attention is understandable and not entirely misplaced. But the governance decision made quietly this month — six competing companies co-founding a Linux Foundation entity to manage the agent coordination protocol — is the more durable structural development. Models will continue to improve on a release cadence that makes any specific model's capabilities a temporary advantage. Protocols, once widely adopted and governed by neutral foundations, tend to become permanent infrastructure. The organizations that built early on HTTP didn't have to rebuild when Apache replaced NCSA HTTPd. The organizations that standardized on Kubernetes didn't have to rebuild when they moved from self-managed clusters to managed services like GKE. The pattern is clear enough to bet on. The A2A decision is the kind of infrastructure bet that engineering leaders should be making now — not when the migration cost has already accumulated.
StepTo helps European and US companies build senior-led nearshore engineering teams in Serbia. Let's talk about what your next engagement could look like.
StepTo Editorial