Distributed AI lab Gradient releases Echo-2, a distributed reinforcement learning framework
Distributed AI lab Gradient has released Echo-2, a distributed reinforcement learning framework that decouples the Learner and Actor at the architectural level, aiming to cut the post-training costs of large models and ease the training-efficiency bottleneck in AI research.
According to official figures, the framework can reduce the post-training cost of a 30B model from $4500 to $425. Echo-2 uses storage-compute separation to enable asynchronous reinforcement learning (async RL), allowing sampling computation to be offloaded to unreliable GPU instances and to Parallax-based heterogeneous GPUs. Gradient also plans to launch Logits, an RLaaS (Reinforcement Learning as a Service) platform, which is currently open for reservations by students and researchers.
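The core idea behind the Learner/Actor decoupling described above can be sketched in a few lines. The following is an illustrative toy sketch only, not Echo-2's actual API: hypothetical actors push rollouts into a shared queue from separate threads (standing in for cheap, possibly unreliable sampling workers), while the learner consumes them asynchronously without ever blocking on sample generation.

```python
import queue
import threading
import random

# Hypothetical sketch of actor/learner decoupling for async RL
# (illustrative only; Echo-2's real interfaces are not shown here).
rollout_queue = queue.Queue(maxsize=64)

def actor(actor_id, num_rollouts):
    """Simulate an actor sampling trajectories with a (stale) policy copy."""
    for _ in range(num_rollouts):
        trajectory = [random.random() for _ in range(8)]  # fake episode data
        rollout_queue.put((actor_id, trajectory))

def learner(total_rollouts):
    """Consume rollouts as they arrive and apply stand-in 'updates'."""
    updates = 0
    while updates < total_rollouts:
        actor_id, trajectory = rollout_queue.get()
        _loss = sum(trajectory) / len(trajectory)  # placeholder for a real step
        updates += 1
    return updates

# Four actor threads, ten rollouts each; the learner drains all forty.
actors = [threading.Thread(target=actor, args=(i, 10)) for i in range(4)]
for t in actors:
    t.start()
processed = learner(40)
for t in actors:
    t.join()
print(processed)  # → 40
```

Because the queue buffers rollouts, slow or flaky actors only delay data arrival rather than stalling the learner, which is what makes offloading sampling to preemptible or heterogeneous hardware practical.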




