Beyond A2A and MCP: How LOKA’s Universal Agent Identity Layer changes the game
Agentic interoperability is gaining steam, but organizations continue to propose new protocols as the industry works out which standards to adopt.
A group of researchers from Carnegie Mellon University proposed a new interoperability protocol governing autonomous AI agents’ identity, accountability and ethics. Layered Orchestration for Knowledgeful Agents, or LOKA, could join other proposed standards like Google’s Agent2Agent (A2A) and Model Context Protocol (MCP) from Anthropic.
In a paper, the researchers noted that the rise of AI agents underscores the importance of governing them.
“As their presence expands, the need for a standardized framework to govern their interactions becomes paramount,” the researchers wrote. “Despite their growing ubiquity, AI agents often operate within siloed systems, lacking a common protocol for communication, ethical reasoning, and compliance with jurisdictional regulations. This fragmentation poses significant risks, such as interoperability issues, ethical misalignment, and accountability gaps.”
To address this, they propose the open-source LOKA, which would enable agents to prove their identity, “exchange semantically rich, ethically annotated messages,” add accountability, and establish ethical governance throughout the agent’s decision-making process.
LOKA builds on what the researchers refer to as a Universal Agent Identity Layer, a framework that assigns agents a unique and verifiable identity.
“We envision LOKA as a foundational architecture and a call to reexamine the core elements—identity, intent, trust and ethical consensus—that should underpin agent interactions. As the scope of AI agents expands, it is crucial to assess whether our existing infrastructure can responsibly facilitate this transition,” Rajesh Ranjan, one of the researchers, told VentureBeat.
LOKA layers
LOKA works as a layered stack. The first layer revolves around identity, which lays out what the agent is. This includes a decentralized identifier, or a “unique, cryptographically verifiable ID.” This would let users and other agents verify the agent’s identity.
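The decentralized-identifier pattern can be sketched as follows. This is a minimal illustration of the general idea, not LOKA’s actual implementation: it derives an ID from a hash of the agent’s public key so that anyone holding the key can recompute and check it. The `did:loka` method name and the key bytes are hypothetical.

```python
import hashlib

def derive_agent_id(public_key_bytes: bytes) -> str:
    """Derive a unique, verifiable agent ID from the agent's public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:32]
    return f"did:loka:{digest}"  # "did:loka" method name is illustrative

def verify_agent_id(claimed_id: str, public_key_bytes: bytes) -> bool:
    """Anyone with the public key can recompute the ID and check it."""
    return claimed_id == derive_agent_id(public_key_bytes)

pk = b"agent-alpha-public-key"  # stand-in for a real public key
agent_id = derive_agent_id(pk)
print(verify_agent_id(agent_id, pk))              # genuine key matches
print(verify_agent_id(agent_id, b"impostor-key"))  # forged key fails
```

Because the ID is just a deterministic function of the public key, no central registry is needed to check it, which is the core property a decentralized identifier provides.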
The next layer is the communication layer, where the agent informs another agent of its intention and the task it needs to accomplish. This is followed by the ethics layer and the security layer.
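One way to picture the “semantically rich, ethically annotated messages” the researchers describe is a structured intent message. The field names and tags below are hypothetical, sketched purely to show the shape such a message might take:

```python
from dataclasses import dataclass, field

@dataclass
class IntentMessage:
    """Illustrative intent message; not LOKA's actual wire format."""
    sender_id: str                  # sender's decentralized identifier
    recipient_id: str               # recipient's decentralized identifier
    intent: str                     # what the sender wants to accomplish
    task: str                       # the concrete task being requested
    ethical_tags: list[str] = field(default_factory=list)  # ethical annotations

msg = IntentMessage(
    sender_id="did:loka:abc123",
    recipient_id="did:loka:def456",
    intent="schedule_meeting",
    task="Find a 30-minute slot next week",
    ethical_tags=["no-private-data", "user-consent-required"],
)
print(msg.ethical_tags)
```

Carrying the ethical annotations alongside the intent lets a receiving agent evaluate a request against its own policies before acting on it.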
LOKA’s ethics layer lays out how the agent behaves. It incorporates “a flexible yet robust ethical decision-making framework that allows agents to adapt to varying ethical standards depending on the context in which they operate.” The LOKA protocol employs collective decision-making models, allowing agents within the framework to determine their next steps and assess whether these steps align with ethical and responsible AI standards.
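A collective decision-making model of this kind could be as simple as a quorum vote: an action proceeds only if enough agents judge it ethical. The function and quorum threshold below are hypothetical, sketched only to make the idea concrete:

```python
def collective_approval(agent_verdicts: dict[str, bool],
                        quorum: float = 0.75) -> bool:
    """Approve an action only if a quorum of agents judges it ethical.

    agent_verdicts maps each agent's ID to its True/False verdict;
    the 0.75 quorum threshold is an arbitrary illustrative choice.
    """
    approvals = sum(agent_verdicts.values())
    return approvals / len(agent_verdicts) >= quorum

verdicts = {"agent-a": True, "agent-b": True,
            "agent-c": False, "agent-d": True}
print(collective_approval(verdicts))  # 3 of 4 approve: meets the 0.75 quorum
```

Real frameworks would weigh context and jurisdiction rather than count votes, but the pattern of agents jointly gating an action is the same.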
Meanwhile, the security layer utilizes what the researchers describe as “quantum-resilient cryptography.”
What differentiates LOKA
The researchers said LOKA stands out because it establishes crucial information for agents to communicate with other agents and operate autonomously across different systems.
LOKA could be helpful for enterprises to ensure the safety of agents they deploy in the world and provide a traceable way to understand how the agent made decisions. A fear many enterprises have is that an agent will tap into another system or access private data and make a mistake.
Ranjan said the system “highlights the need to define who agents are and how they make decisions and how they’re held accountable.”
“Our vision is to illuminate the critical questions that are often overshadowed in the rush to scale AI agents: How do we create ecosystems where these agents can be trusted, held accountable, and ethically interoperable across diverse systems?” Ranjan said.
LOKA will have to compete with other agentic protocols and standards that are now emerging. Protocols like MCP and A2A have found a large audience, not just because of the technical solutions they provide, but because these projects are backed by organizations people know. Anthropic started MCP, while Google backs A2A, and both protocols have attracted many companies willing to use and improve these standards.
LOKA operates independently, but Ranjan said they’ve received “very encouraging and exciting feedback” from other researchers and institutions to expand the LOKA research project.