ME News, February 12 (UTC+8) — The distributed-AI laboratory Gradient today released Echo-2, a distributed reinforcement-learning framework. Echo-2 decouples the Learner from the Actors and runs asynchronous RL with bounded staleness, cutting the cost of a single post-training run of a 30B model to roughly $425 over 9.5 hours. Its three-plane architecture supports plug-and-play components, and the Lattica layer can distribute 60GB+ of weights in minutes. The paper claims that using Parallax to schedule distributed training of Qwen3-8B on RTX 5090 cards is 36% cheaper than a centralized A100 setup and does not diverge. (Source: ME)
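The Learner/Actor decoupling with bounded staleness mentioned above can be sketched as follows. This is a minimal illustrative simulation, not Echo-2's actual API: the names (`Learner`, `Actor`, `MAX_STALENESS`) and the staleness bound of 2 are assumptions for demonstration. The key idea is that actors keep generating rollouts with whatever policy version they last pulled, and the learner accepts a rollout only if its policy version is within a fixed number of versions of the current one.

```python
MAX_STALENESS = 2  # hypothetical bound; Echo-2's real bound is not given in the article

class Learner:
    """Consumes rollouts; rejects those produced by a policy too many versions old."""
    def __init__(self, max_staleness):
        self.version = 0
        self.max_staleness = max_staleness
        self.accepted = 0
        self.dropped = 0

    def step(self, rollout):
        if self.version - rollout["policy_version"] <= self.max_staleness:
            self.accepted += 1   # fresh enough: would feed a gradient update
            self.version += 1    # publish a new policy version after the update
        else:
            self.dropped += 1    # too stale: discard instead of biasing the update

class Actor:
    """Generates rollouts with whatever policy version it last pulled."""
    def __init__(self):
        self.policy_version = 0

    def pull_weights(self, learner):
        # In Echo-2 this weight transfer is what Lattica would handle.
        self.policy_version = learner.version

    def generate(self):
        return {"policy_version": self.policy_version, "trajectory": ["..."]}

learner = Learner(MAX_STALENESS)
actor = Actor()
for t in range(10):
    if t % 4 == 0:               # actor re-syncs only every 4 steps, so versions drift
        actor.pull_weights(learner)
    learner.step(actor.generate())

print(learner.accepted, learner.dropped)  # → 8 2
```

Because the actor re-syncs only periodically, some rollouts arrive more than `MAX_STALENESS` versions behind and are dropped; this is the mechanism that lets training proceed asynchronously without the learner consuming arbitrarily stale data.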