EdgeX Labs launches a new-generation distributed intelligent scheduling system, building practical operational infrastructure for AI Agents
Note: This article is a submission and does not represent the views of ChainCatcher, nor does it constitute investment advice. Please approach with caution.
In May 2025, EdgeX Labs officially launched its distributed intelligent scheduling system for AI Agents, partnering with multiple projects to achieve real-world deployment. As a foundational infrastructure platform for AI + DePIN, EdgeX Labs is redefining the operational paradigm of AI—moving intelligent agents from the centralized cloud to the user's local environment, and providing a continuously online, low-latency, high-privacy operating environment for the next generation of AI applications.
The newly released EdgeX OS AI Computing Scheduling Framework has successfully integrated with the ELIZA multi-agent framework and the Amiko private AI system, marking a key breakthrough in the transition of AI Agents from proof of concept to large-scale deployment.

Multi-layer Computing Architecture + Intelligent Scheduling Engine: Building AI Execution Networks in the Real World
EdgeX Labs has constructed a heterogeneous integrated computing network composed of edge devices, edge servers, and high-performance clusters, equipped with its self-developed EdgeX OS intelligent scheduling system to achieve dynamic distribution, resource optimization, and real-time execution of AI inference tasks.
Three-layer Computing Node Structure:
- Light nodes: Suitable for light tasks such as voice assistants and local conversations
- Medium nodes: Support image recognition and multi-turn dialogue for moderate inference
- Heavy nodes: Carry multi-modal large models and high-concurrency tasks
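The tiering above amounts to routing each task to the lightest tier that can serve it. A minimal sketch of that idea follows; the tier names come from the article, but the thresholds and task profile are purely illustrative assumptions, not part of any published EdgeX Labs API:

```python
# Illustrative sketch only: the parameter-count and concurrency thresholds
# are made-up assumptions, not EdgeX OS internals.

def route_to_tier(param_count_b: float, concurrent_requests: int) -> str:
    """Pick a node tier for an inference task by rough resource needs."""
    if param_count_b <= 1 and concurrent_requests <= 4:
        return "light"    # voice assistants, local conversations
    if param_count_b <= 13 and concurrent_requests <= 32:
        return "medium"   # image recognition, multi-turn dialogue
    return "heavy"        # multi-modal large models, high concurrency

print(route_to_tier(0.5, 2))    # a small voice model -> light
print(route_to_tier(7, 8))      # a mid-size dialogue model -> medium
print(route_to_tier(70, 100))   # a large multi-modal model -> heavy
```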

Multi-dimensional Scheduling Factors and Mechanisms:
- Distribution considers model complexity, geographical distance, node load, historical response, and other metrics
- Supports historical cache reuse, context migration, and asynchronous task processing
- All data processing is completed locally, avoiding cloud dependency and ensuring data sovereignty and privacy
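The factors listed above can be folded into a single per-node cost for candidate selection. The sketch below shows one plausible way to do that; the article only names the factors, so the weights, normalizations, and the cache-reuse discount are all assumptions:

```python
# Illustrative sketch: weights and normalizations are assumptions; the
# article names the scheduling factors but not how EdgeX OS combines them.

def node_score(model_complexity: float, distance_km: float,
               load: float, avg_latency_ms: float,
               has_cached_model: bool) -> float:
    """Lower is better: combine scheduling factors into one cost."""
    cost = (
        0.3 * model_complexity          # heavier models cost more to place
        + 0.2 * (distance_km / 100)     # prefer geographically close nodes
        + 0.3 * load                    # current node load in [0, 1]
        + 0.2 * (avg_latency_ms / 100)  # historical response time
    )
    if has_cached_model:
        cost *= 0.5  # cache reuse: skip model transfer and warm-up
    return cost

nodes = {
    "edge-a": node_score(0.4, 10, 0.2, 50, True),
    "edge-b": node_score(0.4, 300, 0.9, 120, False),
}
best = min(nodes, key=nodes.get)
print(best)  # the nearby, lightly loaded node with a cached model wins
```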
The scheduling logic of EdgeX Labs not only "finds computing power for AI" but also "finds the most suitable model for computing power," achieving a true decoupling and reconstruction of computing power and intelligence.
Multi-modal AI Support and Distributed Collaboration Mechanism
The EdgeX Labs system natively supports multi-modal inference capabilities, including:
- Local speech recognition and synthesis (STT/TTS)
- Emotion recognition and language style adjustment (SER)
- Image generation and visual model execution
- Long-term memory system (LTM) and context retention module
- Multi-agent collaboration and model fine-tuning mechanism

Through a modular scheduling system, AI Agents can deploy different sub-models across multiple edge nodes, achieving true distributed inference execution. This provides a stable and usable operational foundation for complex scenarios such as emotional agents, digital avatars, and proactive AI assistants.
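Deploying an agent's sub-models across several nodes is essentially a placement problem. The following greedy sketch illustrates the idea; the sub-model names, sizes, and node capacities are hypothetical, and the article does not describe the actual placement algorithm:

```python
# Illustrative sketch: sub-model names and node capacities are made up to
# show spreading one agent's modules over several edge nodes.

def place_submodels(submodels: dict, capacity: dict) -> dict:
    """Greedily assign each sub-model (largest first) to the node
    with the most remaining capacity."""
    remaining = dict(capacity)
    placement = {}
    for name, size in sorted(submodels.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)
        if remaining[node] < size:
            raise RuntimeError(f"no node can host {name}")
        placement[name] = node
        remaining[node] -= size
    return placement

# One agent's sub-models (sizes in GB) and two edge nodes' free memory.
agent = {"stt": 1.0, "tts": 1.0, "emotion": 0.5, "memory": 2.0}
nodes = {"node-1": 3.0, "node-2": 2.5}
print(place_submodels(agent, nodes))
```

Largest-first greedy placement is a common heuristic for bin-packing-style problems; a production scheduler would also weigh locality and inter-module traffic.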
Real Application Validation: Multiple Projects Have Been Integrated and Are Operating Stably
With the support of EdgeX Labs, several user-oriented AI products have been deployed, including:
- ELIZA: A multi-agent interaction engine with personalized expression and emotional perception, suitable for voice assistants, emotional coaching, and companion AI scenarios.
- Amiko: An AI digital avatar system emphasizing user sovereignty and operating on local devices, creating a continuously present intelligent agent experience through Kick (voice terminal) and Brain (edge host).


These applications and systems have completed pilot deployments in regions such as South Korea, Japan, Taiwan, the UK, and the US, covering various landing scenarios including voice assistants, health management, and digital identity, fully validating the scalability and multi-regional adaptability of the EdgeX Labs scheduling system.
Building the Operating Protocol Layer for AI × Web3: Open, Sovereign, Self-Running
As the next-generation agent infrastructure, EdgeX Labs is promoting the establishment of an open operating protocol system with the following characteristics:
- Open API and SDK: Support developers in integrating personalized intelligent agent models into the EdgeX OS scheduling network
- Tokenized computing power mechanism: Encourage users to contribute node resources and participate in task scheduling, achieving sustainable activation through on-chain incentives
- Adaptation of various role nodes: Including home-level nodes, regional cache nodes, and validator nodes, meeting different performance and security needs
- Verifiable operating mechanism: Integrating on-chain task indexing, computing power execution logs, model invocation proofs, and other modules to ensure trustworthiness throughout the process
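One common way to make execution logs verifiable is to hash-chain the entries so any tampering breaks the chain. The sketch below shows that general technique; the field names and format are assumptions, as the article does not specify EdgeX's actual proof structure:

```python
# Illustrative hash-chained execution log: each entry commits to the
# previous one, so retroactive edits are detectable. Field names are
# assumptions, not EdgeX's actual on-chain proof format.
import hashlib
import json

def append_entry(log: list, task_id: str, node: str, result_hash: str) -> None:
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {"task_id": task_id, "node": node,
             "result_hash": result_hash, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or digest != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True

log = []
append_entry(log, "task-1", "edge-a", "ab12")
append_entry(log, "task-2", "edge-b", "cd34")
print(verify(log))          # True: untampered chain verifies
log[0]["node"] = "evil"
print(verify(log))          # False: the edit breaks the hash chain
```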
EdgeX Labs is also advancing the integration with Web3 modules such as DID, on-chain storage, and identity binding, enabling AI Agents' identity, data, and execution to have complete on-chain mapping and sovereignty expression capabilities.
Next Phase Plans: Developer Ecosystem, Container Deployment, Global Node Expansion
In the future, EdgeX Labs will continue to promote the following work:
- Launch a developer open program, providing standardized model deployment containers, data access frameworks, and plug-in model registration capabilities
- Expand adaptation capabilities for emerging AI scenarios such as audio and video, robotics, smart homes, and digital humans
- Establish a global edge node co-builder network, collaborating with hardware vendors, node operators, and developers to jointly expand the scale of infrastructure
- Launch an on-chain task settlement and reward indexing system to close the execution-and-incentive loop of AI + Web3
EdgeX Labs aims to build not just an operating system, but a globally distributed, reusable, standards-based execution protocol layer for AI Agents.
About EdgeX Labs
EdgeX Labs is a technology company focused on decentralized edge computing and intelligent scheduling systems, dedicated to providing a stable, resilient, and private operating network for AI Agents. Its core system, EdgeX OS, has supported the local deployment and efficient operation of voice models, image models, emotional models, and long-term memory systems.
EdgeX Labs has currently deployed over 10,000 edge nodes, serving clients such as Google (GCP), ByteDance (TikTok), Tencent, ELIZA, and Amiko.
Core hardware facilities include:
- XR7 Edge Gateway (lightweight inference single node)
- PoS GPU Server (high-concurrency AI Agent inference node)
- 40SoC ARM Cluster Server (multi-node ARM architecture inference cluster)













