BlockBeats News, February 12: Distributed AI lab Gradient has released Echo-2, a distributed reinforcement learning framework that aims to break through the training-efficiency barrier in AI research by decoupling the Learner and Actor at the architecture level, reducing the post-training cost of large models. According to official figures, the framework can cut the post-training cost of a 30B model from $4,500 to $425. Echo-2 uses compute-storage separation for asynchronous training (Async RL), allowing sampling workloads to be offloaded to unstable GPU instances and to Parallax-based heterogeneous GPUs. In addition, Gradient plans to launch Logits, an RLaaS (Reinforcement Learning as a Service) platform, which is currently open for reservations by students and researchers.
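The Learner/Actor decoupling described above can be illustrated with a minimal sketch. This is not Echo-2's actual API (which has not been published here); it is a generic asynchronous actor-learner pattern, with hypothetical names (`actor`, `learner`, `run`), in which actors sample rollouts independently, so they can run on cheap or preemptible hardware, and the learner consumes whatever arrives without blocking on any single actor.

```python
import queue
import random
import threading

def actor(actor_id, rollout_queue, n_rollouts):
    """Simulate one actor: sample trajectories and push them to the learner.

    In a real system this process could run on an unstable/preemptible GPU,
    since losing an actor only costs its in-flight rollouts.
    """
    rng = random.Random(actor_id)
    for step in range(n_rollouts):
        # A "rollout" here is just a (actor, step, reward) triple for brevity.
        rollout_queue.put((actor_id, step, rng.random()))

def learner(rollout_queue, total_expected):
    """Consume rollouts as they arrive, regardless of which actor sent them."""
    consumed = []
    while len(consumed) < total_expected:
        consumed.append(rollout_queue.get())
    return consumed

def run(n_actors=4, n_rollouts=5):
    q = queue.Queue()
    actors = [
        threading.Thread(target=actor, args=(i, q, n_rollouts))
        for i in range(n_actors)
    ]
    for t in actors:
        t.start()
    # The learner drains the queue asynchronously; actors never wait on it.
    consumed = learner(q, n_actors * n_rollouts)
    for t in actors:
        t.join()
    return consumed

if __name__ == "__main__":
    rollouts = run()
    print(len(rollouts))  # 20
```

The key property, which the claimed cost reduction relies on, is that sampling (the actors) and gradient updates (the learner) are separate processes with a queue between them, so each side can be scaled or scheduled on different hardware independently.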
