PANews, February 12 — The distributed AI laboratory Gradient today released Echo-2, a distributed reinforcement learning framework. Echo-2 decouples the Learner and Actor roles and runs asynchronous RL with bounded staleness, cutting the cost of a single post-training run for a 30B model to roughly $425 over 9.5 hours. Its three-plane architecture supports plug-and-play deployment, and the Lattica component can distribute 60GB+ of model weights within minutes. The paper states that distributed training of Qwen3-8B on RTX 5090s scheduled by Parallax is 36% cheaper than centralized A100 training and does not diverge.
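The "asynchronous RL with bounded staleness" idea mentioned above can be illustrated with a minimal sketch: actors tag each rollout with the policy version they sampled under, and the learner drops rollouts that lag its current version by more than a fixed bound. All names here (`Rollout`, `Learner`, `STALENESS_BOUND`) are hypothetical illustrations, not Echo-2's actual API.

```python
# Illustrative sketch of bounded-staleness filtering in an asynchronous
# actor/learner loop. Hypothetical names; NOT Echo-2's real interface.
from dataclasses import dataclass, field
from collections import deque

STALENESS_BOUND = 4  # accept rollouts at most 4 policy versions behind


@dataclass
class Rollout:
    policy_version: int       # learner version the actor sampled under
    trajectory: list = field(default_factory=list)  # (s, a, r) tuples


class Learner:
    def __init__(self):
        self.version = 0
        self.buffer = deque()

    def submit(self, rollout: Rollout) -> bool:
        # Bounded staleness: reject data generated by a policy that is
        # too far behind the current learner version.
        if self.version - rollout.policy_version > STALENESS_BOUND:
            return False
        self.buffer.append(rollout)
        return True

    def step(self):
        # Consume buffered rollouts, update the policy, bump the version.
        self.buffer.clear()
        self.version += 1


learner = Learner()
learner.submit(Rollout(policy_version=0))  # fresh rollout: accepted
for _ in range(6):
    learner.step()                         # learner advances to version 6
print(learner.submit(Rollout(policy_version=0)))  # now 6 versions stale
```

Because actors never block on the learner, throughput stays high, while the staleness bound keeps gradient updates close enough to on-policy to avoid divergence; this is the usual trade-off such decoupled designs target.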