The new AI game: collaboration between agents
Artificial intelligence is no longer about having a single massive model solving everything on its own. That phase is over. What is shaping the current landscape is a completely different dynamic, where multiple AI agents operate in a coordinated way, each with a specific role, connected by layers of orchestration that determine workflow, boundaries of action, and the right moments to escalate decisions to a human. This paradigm shift is anything but subtle — it is redesigning how companies think about technology, processes, and even their own organizational structure.
Three recent developments illustrate this turning point pretty clearly. First, the conversation around orchestration by design has taken center stage in the market, signaling that companies need to rethink their internal org charts before even implementing any autonomous agent. Second, Perplexity AI introduced Computer, a system that brings together 19 distinct models working side by side to interpret visual interfaces, reason through complex tasks, and execute multi-step workflows. And third, the telecommunications industry is designing 6G networks that are born with artificial intelligence baked into their very architecture, with the ability to predict failures and self-correct in real time. Each of these fronts points to the same conclusion — global tech infrastructure is being rebuilt from the inside out.
Orchestration by design: the org chart comes before the algorithm
When we talk about orchestration in the context of artificial intelligence, we are not just talking about connecting APIs or chaining calls between models. The concept goes way beyond that. Orchestrating AI agents means defining clear governance rules, establishing decision-making hierarchies, creating fallback protocols for when things go wrong, and most importantly, ensuring there is an intentional design behind every interaction between machines and people. Companies moving in this direction have realized something important — there is no point in deploying an autonomous agent if the entire organization is not ready to absorb this new kind of operational dynamic. That is why the term orchestration by design has gained so much traction in recent months.
In practice, this means product, engineering, and operations teams need to sit down together before any implementation to map out responsibilities, define the limits of autonomy for each agent, and establish clear triggers for human escalation. An agent might be excellent at triaging support tickets, for example, but it needs to know exactly when to stop and hand the conversation over to a real person. Another agent might monitor performance metrics in real time, but it should have well-defined parameters about which anomalies warrant an immediate alert and which can be handled automatically. Without this upfront design work, systems tend to become unpredictable — and unpredictability is the opposite of what any company wants when putting AI into production.
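The escalation logic described above can be sketched in a few lines. Everything here is illustrative: the `Ticket` type, the category names, and the confidence threshold are hypothetical stand-ins, not taken from any real product.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    category: str
    confidence: float  # agent's confidence in its own classification, 0-1

# Hypothetical policy: which categories the agent may resolve on its own,
# and the minimum confidence required before acting without a human.
AUTONOMOUS_CATEGORIES = {"password_reset", "billing_question"}
CONFIDENCE_FLOOR = 0.85

def next_action(ticket: Ticket) -> str:
    """Decide whether the agent handles the ticket or hands it to a person."""
    if ticket.category not in AUTONOMOUS_CATEGORIES:
        return "escalate_to_human"   # outside the agent's mandate
    if ticket.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"   # within the mandate, but too uncertain
    return "resolve_autonomously"

print(next_action(Ticket("password_reset", 0.95)))  # resolve_autonomously
print(next_action(Ticket("legal_dispute", 0.99)))   # escalate_to_human
```

The point of the design exercise is that these limits are decided by people upfront, not discovered by the agent in production.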
According to the most recent discussions on the topic, sectors like manufacturing, healthcare, and retail are among those that have made the most progress with this kind of approach. In manufacturing, autonomous agents already coordinate entire production lines, adjusting speed and resource allocation based on real-time sensor data. In healthcare, orchestration allows different agents to handle patient triage, exam analysis, and appointment scheduling in an integrated way, always with well-defined human escalation points. In retail, multi-agent systems manage inventory, logistics, and customer service simultaneously, reducing bottlenecks and improving the end consumer experience.
What makes this discussion even more relevant is the fact that well-executed orchestration is not just a technical matter but also a strategic one. Organizations that manage to create efficient coordination layers between their AI agents gain operational speed, reduce rework costs, and can scale solutions with much more confidence. And the market is already pricing this in — investors and technical leaders are prioritizing companies that demonstrate maturity in agent governance, not just raw processing power. 🧩
Perplexity Computer and the era of multiple models
The arrival of Perplexity Computer on the market is one of the most concrete examples of how artificial intelligence is evolving toward collaboration between models. Instead of betting on a single monolithic model that tries to solve everything, Perplexity went with a radically different approach — combining 19 specialized models that work together as an integrated system. Each model has a specific role within the architecture. Some are responsible for interpreting the visual content of a screen, others focus on logical reasoning about the tasks presented, and still others are dedicated exclusively to executing sequential actions on real interfaces. The result is a system that can navigate complex multi-step workflows with a precision that would be impossible for any single model to achieve.
This kind of multi-agent architecture represents one of the most significant trends of the moment because it solves a fundamental problem with current AI — the inherent limitation of any individual model. No matter how advanced an LLM might be, it will inevitably have weak spots in certain tasks. By distributing responsibilities among specialized agents and coordinating them through an intelligent orchestration layer, the system as a whole becomes more resilient, more accurate, and more adaptable to varied scenarios. Think of it like a well-assembled team, where each person contributes their strongest skill and the collective result surpasses any individual performance.
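The routing idea behind this kind of architecture can be sketched as a dispatcher that sends each sub-task to a specialist. This is a minimal, hypothetical illustration of the pattern, not Perplexity's actual implementation; the three "agents" here are plain functions standing in for specialized models.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical specialists: each handles one kind of sub-task.
def vision_agent(payload: str) -> str:
    return f"parsed screen: {payload}"

def reasoning_agent(payload: str) -> str:
    return f"planned steps: {payload}"

def action_agent(payload: str) -> str:
    return f"executed: {payload}"

# The orchestration layer: a routing table that dispatches each
# step of a workflow to the specialist best suited for it.
ROUTES: Dict[str, Callable[[str], str]] = {
    "interpret_ui": vision_agent,
    "plan": reasoning_agent,
    "execute": action_agent,
}

def orchestrate(workflow: List[Tuple[str, str]]) -> List[str]:
    """Run a multi-step workflow, one specialist per step."""
    return [ROUTES[kind](payload) for kind, payload in workflow]

result = orchestrate([
    ("interpret_ui", "invoice form"),
    ("plan", "fill required fields"),
    ("execute", "submit form"),
])
```

Real systems add retries, shared memory, and fallback routes on top, but the core design choice is the same: intelligence lives in the routing, not in any single model.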
It is worth noting that Perplexity Computer is not limited to processing text. It interprets screens, understands visual contexts, and acts on interfaces the same way a human user would — just with much faster processing and decision-making capabilities. This differentiator is especially relevant for automating corporate tasks that involve navigating between multiple systems, filling out forms, extracting data from dashboards, and executing workflows that require interaction with different tools along the way.
The impact of this approach goes well beyond Perplexity as a product. What the company demonstrated is an architectural model that will likely be replicated at scale by other companies in the coming months. The idea that a single giant model would be enough for every need is giving way to a more pragmatic and sophisticated vision, where the real intelligence lies in how different capabilities are combined and orchestrated. For developers and system architects, this shift opens up a huge range of possibilities — and also challenges, since designing, testing, and maintaining multi-agent systems requires skill sets that many teams are still building.
Telecommunications: networks that think for themselves
The telecommunications sector has always been one of the first to absorb major technological shifts, and with artificial intelligence it is no different. What stands out now is the level of integration being designed into next-generation networks. Discussions around 6G already incorporate AI as a native component of the infrastructure, not as an additional layer plugged in after the fact. This means future networks will have the ability to monitor their own performance, identify degradation patterns before they become failures visible to the user, and execute automatic corrections without human intervention. It is a structural change that transforms the network from a passive system into an organism that continuously learns and adapts.
For carriers, this evolution represents a massive opportunity to reduce operational costs and improve the end customer experience. Today, a large portion of network maintenance work is reactive — something breaks, a technician is dispatched, the problem is diagnosed, and then it gets fixed. With AI agents embedded in the telecommunications infrastructure itself, this flow is completely reversed. The network starts operating predictively, anticipating problems and resolving many of them before any user even notices degradation. On top of that, the orchestration capability between different agents allows complex decisions to be made in milliseconds — something essential in scenarios like autonomous vehicles, remote surgeries, and industrial applications that depend on ultra-low latency.
Another point worth highlighting is autonomous resource optimization. AI-native networks can redistribute bandwidth, energy, and computing capacity according to real-time demand. Imagine a sporting event with thousands of people streaming video simultaneously. Instead of relying on pre-planning and manual allocation, the network itself identifies the demand spike and reallocates resources from neighboring cells with lower utilization. All of this happens autonomously, orchestrated by agents that communicate with each other and make collaborative decisions.
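The reallocation behavior described above can be sketched as a simple rebalancing routine. The cell names, capacities, and demand figures below are invented for illustration; a real network controller would work over live telemetry, not a static dictionary.

```python
# Hypothetical cells with fixed capacity and current demand (arbitrary units).
cells = {
    "stadium": {"capacity": 100, "demand": 160},  # demand spike during an event
    "north":   {"capacity": 100, "demand": 40},
    "south":   {"capacity": 100, "demand": 55},
}

def rebalance(cells: dict) -> dict:
    """Shift spare capacity from underused cells toward overloaded ones."""
    alloc = {name: c["capacity"] for name, c in cells.items()}
    for name, c in cells.items():
        deficit = c["demand"] - alloc[name]
        if deficit <= 0:
            continue  # this cell already has enough capacity
        for other, oc in cells.items():
            if other == name:
                continue
            spare = alloc[other] - oc["demand"]
            if spare > 0:
                moved = min(spare, deficit)
                alloc[other] -= moved
                alloc[name] += moved
                deficit -= moved
            if deficit == 0:
                break
    return alloc

allocation = rebalance(cells)  # stadium borrows capacity from quiet neighbors
```

In an AI-native network, agents in each cell would negotiate this kind of transfer among themselves continuously, in milliseconds, instead of waiting for a central planner.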
This convergence between telecommunications and artificial intelligence is one of the most relevant trends for the years ahead because it affects virtually every other sector of the economy. The quality and intelligence of the communication network are the foundation on which all other AI applications will run. A 6G network with integrated autonomous agents is not just an incremental upgrade over 5G — it is a qualitative leap that will enable use cases that still seem futuristic today. And the fact that this construction is already happening in labs and standardization consortia shows we are not talking about something far off. The future of intelligent networks is being written right now. 📡
Security and guardrails: the invisible side of agentic AI
One point that often takes a back seat in discussions about agentic AI, yet is absolutely critical, is the question of guardrails — the protection and limitation mechanisms that determine how far an agent can go. Autonomous agents operating without adequate oversight pose a real risk to corporate systems, especially in highly regulated sectors like finance. Recent reports indicate that the absence of robust guardrails is one of the main factors slowing down the adoption of agentic AI in financial institutions, for example.
The challenge here is finding the right balance. Overly restrictive guardrails cancel out the benefits of autonomy, essentially turning agents into automated scripts with a cosmetic AI layer on top. On the other hand, guardrails that are too loose create openings for unwanted behaviors, incorrect decisions, and even security vulnerabilities that can be exploited. The solution gaining traction in the market involves adaptive guardrails — protection systems that dynamically adjust their parameters based on the operational context, the level of risk involved, and the performance history of the agent in question.
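One way to picture an adaptive guardrail is a limit that tightens with contextual risk and loosens with a good track record. The formula and parameter names below are a hypothetical sketch of the idea, not a standard from any framework.

```python
def spending_limit(base_limit: float, risk_score: float, success_rate: float) -> float:
    """
    Hypothetical adaptive guardrail for an agent's spending authority.

    risk_score:   0 (routine context) to 1 (maximum risk) -> tightens the limit.
    success_rate: 0 to 1, the agent's historical accuracy -> a poor track
                  record halves the limit; a perfect one restores it fully.
    """
    risk_factor = 1.0 - risk_score
    trust_factor = 0.5 + 0.5 * success_rate
    return base_limit * risk_factor * trust_factor

# Routine context, flawless history: full authority.
full = spending_limit(1000.0, risk_score=0.0, success_rate=1.0)   # 1000.0
# Elevated risk, mediocre history: authority shrinks sharply.
tight = spending_limit(1000.0, risk_score=0.5, success_rate=0.5)  # 375.0
```

The design choice worth noticing is that the limit is recomputed per decision, so the same agent gets different autonomy in different contexts, which is exactly what static permission systems cannot express.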
This topic connects directly with the discussion around security models for enterprise systems. When AI agents start interacting with each other — including in automated commercial transactions, where one agent purchases services from another agent — the traditional security model based on human user permissions simply does not work anymore. New identity and authorization frameworks specific to autonomous agents need to be created, with full traceability of every decision made and every action taken. This is an area evolving fast and one that will demand increasing attention from information security professionals. 🔐
Agents buying from other agents: autonomous commerce
One of the most fascinating frontiers of agentic AI is the emergence of commercial transactions between autonomous agents. That is right — we are talking about scenarios where an AI agent identifies a need, searches for suppliers (which are also AI agents), negotiates terms, and closes a deal, all without direct human intervention. It might sound like science fiction, but this kind of dynamic is already happening in controlled environments and moving toward broader applications.
For business leaders, this movement requires a fundamental shift in how they think about supply chains, procurement, and commercial relationships. When the negotiation is done between machines, decision criteria need to be explicitly coded — maximum price, minimum quality, acceptable timelines, preferred suppliers. This entire set of rules needs to be defined in advance and reviewed periodically, because the agent will follow what was programmed to the letter. And here we circle back to the topic of orchestration — without a well-designed coordination architecture, autonomous commerce between agents can generate chaotic results and unexpected costs.
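The "explicitly coded decision criteria" mentioned above might look like the sketch below. The thresholds, supplier names, and the three-way verdict are all hypothetical, chosen only to show how a procurement policy becomes machine-checkable rules.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    supplier: str
    price: float
    quality: float      # quality score, 0-1
    delivery_days: int

# Hypothetical procurement policy, defined in advance by humans
# and reviewed periodically, as the text describes.
MAX_PRICE = 500.0
MIN_QUALITY = 0.8
MAX_DELIVERY_DAYS = 14
PREFERRED_SUPPLIERS = {"acme", "globex"}

def evaluate(offer: Offer) -> str:
    """Apply the coded criteria to a supplier agent's offer."""
    if offer.price > MAX_PRICE:
        return "reject"
    if offer.quality < MIN_QUALITY:
        return "reject"
    if offer.delivery_days > MAX_DELIVERY_DAYS:
        return "reject"
    # Acceptable, but non-preferred suppliers get flagged for human review:
    # this is one of the escalation points orchestration design must define.
    if offer.supplier in PREFERRED_SUPPLIERS:
        return "accept"
    return "accept_with_review"
```

Note that the agent follows these rules to the letter, which is precisely why stale or incomplete criteria produce the "chaotic results and unexpected costs" the text warns about.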
What changes in practice for those following technology
Looking at all these movements together — corporate orchestration, multi-agent systems like Perplexity Computer, telecommunications networks with native AI, adaptive guardrails, and autonomous commerce between agents — it becomes clear that the central axis of innovation in artificial intelligence has shifted. It is no longer about who has the model with the most parameters or who can generate the most convincing text. The competition now is about who can make multiple agents work together efficiently, securely, and at scale. This shift in focus has deep implications for technology professionals, managers, and anyone who wants to understand where the market is heading.
For those working in digital product development, the message is clear — investing time in understanding orchestration architectures and multi-agent system design is no longer optional. Current trends show that the tools, platforms, and infrastructure of the near future will be built on this collaborative logic between specialized agents. And keeping up with these changes closely, understanding both the technical foundations and practical applications in sectors like telecommunications, finance, and retail, is what will separate those who are prepared from those playing catch-up. The good news is that this knowledge is becoming more accessible by the day — and the time to absorb it is now. 🚀
