Can AI Agents Build Their Own Internet of Communication?


In today’s rapidly advancing technological landscape, the potential of large language model (LLM) agents to transform both enterprise operations and research methodologies is significant. These AI agents, with their ability to process and analyze vast amounts of data, hold the promise of reimagining how organizations function. However, the current state of their communication capabilities presents a formidable hurdle to their broader adoption. Much like the early stages of internet development, these AI systems face challenges in terms of scaling and interoperability due to an absence of standardized communication protocols. This predicament mirrors the initial struggles that the Internet faced before the establishment of essential protocols like TCP/IP and HTTP. To achieve the vision of an “Internet of Agents,” it becomes imperative to develop a communication framework that can seamlessly facilitate interaction among diverse agent systems.

Understanding the Communication Gap

The communication challenge facing AI agents stems from the lack of harmonized protocols, making interactions between different systems cumbersome. At present, many AI agents rely on ad hoc APIs and rigid function-calling paradigms, which limit their interaction potential. These methods, while functional on a small scale, do not provide the fluidity and security required for more extensive, collaborative networks. The situation is reminiscent of the early Internet era, when incompatible systems struggled to communicate until standardized protocols, such as TCP/IP and HTTP, became prevalent. For AI agents to evolve into integral components of scalable intelligence systems, resolving these communication bottlenecks is essential. A unified approach to protocol development could pave the way for AI agents to work together more effectively, fostering an environment where they can optimize performance and capitalize on shared resources.
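To make the contrast concrete, the sketch below juxtaposes a hard-coded function call with a generic, protocol-style message envelope. All names here (fetchWeather, AgentMessage, the field names) are hypothetical and invented for illustration; they are not drawn from any existing protocol.

```typescript
// Hypothetical sketch: the same weather lookup expressed two ways.
// Every identifier below is illustrative, not part of any real protocol.

// 1) Ad hoc function calling: the caller must know this exact signature,
//    so every new integration needs bespoke glue code.
function fetchWeather(city: string, units: "metric" | "imperial"): string {
  return `Weather for ${city} in ${units} units`; // stubbed response
}

// 2) Protocol-style message: a standardized envelope that any compliant
//    agent or tool could parse, regardless of who handles it.
interface AgentMessage {
  protocolVersion: string;   // lets both sides negotiate compatibility
  sender: string;            // identity of the requesting agent
  capability: string;        // abstract capability name, not a code-level symbol
  payload: Record<string, unknown>;
}

const request: AgentMessage = {
  protocolVersion: "0.1",
  sender: "planner-agent",
  capability: "weather.lookup",
  payload: { city: "Shanghai", units: "metric" },
};

console.log(fetchWeather("Shanghai", "metric"));
console.log(JSON.stringify(request, null, 2));
```

The point of the envelope is that routing, discovery, and security policy can be applied uniformly, whereas each bespoke function call must be wired up and secured separately.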

Exploring Protocol Classification

Driven by the need for a systematic approach to communication within AI ecosystems, researchers from Shanghai Jiao Tong University and ANP Community have developed a classification framework for AI agent protocols. This innovative scheme categorizes protocols into two main dimensions: context-oriented versus inter-agent, and general-purpose versus domain-specific. Context-oriented protocols facilitate interactions between agents and external sources, such as databases and tools, whereas inter-agent protocols focus on communication among the agents themselves. Within these categories, general-purpose protocols provide broad applicability across various environments, while domain-specific protocols are optimized for particular scenarios, including robotics or human-agent dialogue systems. Understanding these distinctions is crucial for those designing AI systems, as it allows for informed decision-making when balancing flexibility, performance, and specialization. By navigating these trade-offs, developers can create more secure, efficient, and adaptable agent ecosystems.
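A minimal sketch of this two-dimensional taxonomy is shown below as a small data structure. The quadrant assignments are an illustrative reading of the survey's descriptions (for example, placing MCP in the context-oriented, general-purpose cell), not official labels.

```typescript
// Minimal sketch of the two-dimensional protocol taxonomy described above.
// Quadrant assignments are illustrative, based on the survey's descriptions.

type Orientation = "context-oriented" | "inter-agent";
type Scope = "general-purpose" | "domain-specific";

interface ProtocolEntry {
  name: string;
  orientation: Orientation; // agent-to-resource vs. agent-to-agent
  scope: Scope;             // broad applicability vs. specialized domain
}

const taxonomy: ProtocolEntry[] = [
  { name: "MCP", orientation: "context-oriented", scope: "general-purpose" },
  { name: "A2A", orientation: "inter-agent", scope: "general-purpose" },
  { name: "ANP", orientation: "inter-agent", scope: "general-purpose" },
];

// Group protocols by quadrant so a designer can see which cell a new use case falls into.
const byQuadrant = new Map<string, string[]>();
for (const p of taxonomy) {
  const key = `${p.orientation} / ${p.scope}`;
  byQuadrant.set(key, [...(byQuadrant.get(key) ?? []), p.name]);
}
console.log(byQuadrant);
```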

Key Protocols in Focus

Several protocols have already made remarkable strides in addressing the communication challenges within AI systems. The Model Context Protocol (MCP) by Anthropic stands out as a context-oriented, general-purpose protocol that prioritizes security and scalability. By segregating reasoning processes from execution tasks, it manages privacy risks and enhances system reliability. Google’s Agent-to-Agent Protocol (A2A), designed explicitly for enterprise settings, facilitates secure, asynchronous collaboration. This protocol’s modular nature enhances interoperability through standardized entities such as Agent Cards and Artifacts, which support complex workflow orchestration. Meanwhile, the open-source Agent Network Protocol (ANP) takes a decentralized approach by building upon decentralized identity and semantic meta-protocol layers. This fosters a trustless communication model suited for varied domains, supporting systems where agent autonomy and flexibility are crucial. Together, these protocols illustrate the diverse design strategies being pursued and emphasize their specific contributions to the security, scalability, and interoperability of AI systems.
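The kinds of entities A2A standardizes can be pictured with a rough sketch like the one below. The field names are simplified assumptions made for illustration and do not reproduce the official A2A schema; consult the specification for the real Agent Card and Artifact definitions.

```typescript
// Hedged sketch of A2A-style entities. Field names are simplified assumptions,
// not the official A2A schema.

interface AgentCard {
  name: string;              // human-readable agent identity
  description: string;       // what the agent offers to peers
  endpoint: string;          // where other agents can reach it
  capabilities: string[];    // advertised skills discoverable by peers
}

interface Artifact {
  id: string;                // stable reference for workflow orchestration
  producedBy: string;        // which agent generated this output
  mimeType: string;          // lets consumers interpret the content
  content: string;
}

const card: AgentCard = {
  name: "invoice-extractor",
  description: "Extracts line items from PDF invoices",
  endpoint: "https://agents.example.com/invoice-extractor",
  capabilities: ["invoice.extract", "invoice.validate"],
};

const result: Artifact = {
  id: "artifact-001",
  producedBy: card.name,
  mimeType: "application/json",
  content: JSON.stringify({ total: 1240.5, currency: "USD" }),
};

console.log(card, result);
```

Standardizing discovery (the card) and outputs (the artifact) is what lets heterogeneous agents be composed into longer workflows without bespoke integration for every pair.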

Evaluating Protocol Effectiveness

To gauge the effectiveness of these diverse protocols, a comprehensive evaluation framework has been introduced, encompassing a range of criteria: efficiency, scalability, security, reliability, extensibility, operability, and interoperability. This multifaceted approach not only reflects traditional network protocol principles but also addresses unique challenges faced by AI agents. Efficiency pertains to how well protocols allocate computational resources, while scalability examines the ability to handle growing workloads. Security and reliability assess the protection against threats and consistency in function, respectively. Extensibility focuses on adaptability to new requirements, while operability considers ease of use and management. Interoperability, a critical factor, evaluates the compatibility between different systems. By utilizing this robust framework, stakeholders can make informed decisions regarding protocol choice, ensuring that communication strategies among AI agents meet both current demands and future scalability needs.
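One way to operationalize such a framework is as a weighted scorecard across the seven criteria, sketched below. The paper defines the criteria; the weights and scores in this example are invented purely for demonstration.

```typescript
// Illustrative scorecard over the seven evaluation criteria.
// Weights and scores are invented for demonstration purposes.

const criteria = [
  "efficiency", "scalability", "security", "reliability",
  "extensibility", "operability", "interoperability",
] as const;
type Criterion = (typeof criteria)[number];

type Scorecard = Record<Criterion, number>; // e.g. 0-5 per dimension

function weightedScore(
  scores: Scorecard,
  weights: Partial<Record<Criterion, number>> = {},
): number {
  let total = 0;
  let weightSum = 0;
  for (const c of criteria) {
    const w = weights[c] ?? 1; // default to equal weighting
    total += w * scores[c];
    weightSum += w;
  }
  return total / weightSum;
}

// Example: a deployment that prioritizes interoperability and security.
const candidate: Scorecard = {
  efficiency: 4, scalability: 3, security: 5, reliability: 4,
  extensibility: 3, operability: 4, interoperability: 5,
};
console.log(weightedScore(candidate, { interoperability: 2, security: 2 }).toFixed(2));
```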

A Vision of Collective Intelligence

The potential for protocol standardization to lay the groundwork for collective intelligence among AI agents is profound. The development and adoption of standardized protocols can enable dynamic coalitions among agents, akin to decentralized systems like swarm robotics. Protocols such as Agora facilitate real-time negotiation and adaptation, empowered by LLM-generated routines, allowing agents to effectively distribute tasks and resources. Another noteworthy protocol is LOKA, which incorporates ethical reasoning into communication layers, ensuring that interactions remain aligned with human values and norms. By aligning communication capabilities and establishing a common language, AI agents can collaborate on complex tasks, pooling their individual strengths and capabilities to tackle challenges more effectively. This approach provides a transformative potential for AI systems, ushering in a new era where agents operate not as isolated units but as integrated members of a larger cooperative network.
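The flavor of such coalition forming can be illustrated with a toy task-allocation routine: each agent reports a cost for each task, and assignments go to the cheapest bidder. This is not Agora's actual negotiation mechanism, only a hypothetical sketch of the dynamic task and resource distribution the text describes.

```typescript
// Toy sketch of dynamic task allocation among agents.
// Not Agora's real mechanism; agent and task names are hypothetical.

interface Bid {
  agent: string;
  task: string;
  cost: number; // lower is better, e.g. estimated latency or token budget
}

// Sort all bids by cost and let the first (cheapest) bid claim each task.
function allocate(bids: Bid[]): Map<string, string> {
  const assignment = new Map<string, string>(); // task -> agent
  const sorted = [...bids].sort((a, b) => a.cost - b.cost);
  for (const bid of sorted) {
    if (!assignment.has(bid.task)) assignment.set(bid.task, bid.agent);
  }
  return assignment;
}

const bids: Bid[] = [
  { agent: "retriever", task: "gather-sources", cost: 2 },
  { agent: "summarizer", task: "gather-sources", cost: 5 },
  { agent: "summarizer", task: "draft-report", cost: 1 },
  { agent: "retriever", task: "draft-report", cost: 4 },
];
console.log(allocate(bids)); // Map { 'draft-report' => 'summarizer', 'gather-sources' => 'retriever' }
```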

Evolving Protocol Trajectories

A clear trajectory is emerging for the evolution of agent protocols, revealing distinct stages that mark a shift from traditional software paradigms. Initially, the transition from static function calls to dynamic protocols lays the groundwork for more adaptable and responsive agent systems. In the mid-term, the focus shifts toward developing self-organizing ecosystems that encourage negotiation and cooperation among agents. These systems not only enhance flexibility but also promote a more autonomous approach to problem-solving. Ultimately, the long-term goal is to cultivate infrastructures that support privacy-preserving, collaborative networks in which agents cooperate securely without compromising individual data integrity. This multi-stage evolution represents a significant leap in AI development, leading toward an agent-native computing environment that supports more sophisticated and cooperative intelligence systems.

The Future of Collaborative Intelligence

The work from Shanghai Jiao Tong University and the ANP Community offers more than a catalog of protocols; it provides a shared vocabulary for reasoning about how agents should talk to one another. The classification framework and the seven-criterion evaluation scheme give designers a principled way to weigh flexibility, performance, and specialization, while protocols such as MCP, A2A, ANP, Agora, and LOKA show that secure, interoperable agent communication is already taking shape. If the staged evolution the researchers outline holds, from dynamic protocols to self-organizing and ultimately privacy-preserving ecosystems, the result will be an agent-native infrastructure in which AI systems collaborate as members of a larger cooperative network rather than operating as isolated units: in effect, an Internet of Agents.
