Gradient releases Echo-2 RL framework to improve AI research efficiency.

Odaily reports that Gradient, a distributed AI lab, today released Echo-2, a distributed reinforcement learning framework aimed at removing the training-efficiency bottleneck in AI research. By fully decoupling the Learner and Actor at the architectural level, Echo-2 cuts the post-training cost of a 30B-parameter model from $4,500 to $425, roughly a 10.6x reduction, which translates into more than 10 times the research throughput on the same budget.

The framework uses a memory-compute separation design for asynchronous training (Async RL), offloading the heavy sampling workload to unreliable (preemptible) GPU instances and heterogeneous GPUs via Parallax. Combined with bounded-staleness updates, fault-tolerant instance scheduling, and the self-developed Lattica communication protocol, it substantially improves training efficiency while preserving model accuracy. Alongside the framework's release, Gradient will soon launch Logits, an RL-as-a-Service (RLaaS) platform, moving AI research from a paradigm of "capital accumulation" toward one of "efficiency iteration." Logits is now open for reservations by students and researchers worldwide (logits.dev).
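
Echo-2's internals have not been published, so the following is only a minimal, hypothetical sketch of the general pattern the announcement describes: sampling actors decoupled from a learner, communicating through a shared rollout queue, with a bounded-staleness check on the learner side. All names here (actor, learner, STALENESS_BOUND, the queue layout) are illustrative assumptions, not Echo-2's API.

```python
# Illustrative sketch only: Gradient has not published Echo-2's API, so every
# identifier below is hypothetical. The goal is to show the shape of async RL
# with decoupled actors, a shared rollout queue, and a bounded-staleness filter.

import queue
import random
import threading
import time

STALENESS_BOUND = 2      # max policy-version lag a rollout may have (assumed)
NUM_ACTORS = 4           # sampling workers, e.g. on cheap/preemptible GPUs
TRAIN_STEPS = 20

rollout_queue: "queue.Queue[tuple[int, list[float]]]" = queue.Queue(maxsize=64)
policy_version = 0       # bumped by the learner after each update
stop = threading.Event()

def actor(actor_id: int) -> None:
    """Generate rollouts with whatever policy version is currently visible."""
    while not stop.is_set():
        version_seen = policy_version
        trajectory = [random.random() for _ in range(8)]  # stand-in for sampling
        time.sleep(random.uniform(0.01, 0.05))            # uneven, unstable hardware
        try:
            rollout_queue.put((version_seen, trajectory), timeout=0.1)
        except queue.Full:
            pass  # back-pressure: drop the rollout when the learner is behind

def learner() -> None:
    """Consume rollouts, skipping any that exceed the staleness bound."""
    global policy_version
    for step in range(TRAIN_STEPS):
        version_used, trajectory = rollout_queue.get()
        if policy_version - version_used > STALENESS_BOUND:
            continue  # too stale: skip rather than bias the update
        # ... apply the RL update with `trajectory` here (omitted) ...
        policy_version += 1
        print(f"step {step}: updated to version {policy_version} "
              f"(rollout sampled at version {version_used})")
    stop.set()

threads = [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
for t in threads:
    t.start()
learner()
for t in threads:
    t.join()
```

In this kind of setup the staleness bound is what lets sampling run on flaky or heterogeneous hardware: a slow or restarted actor simply produces rollouts tagged with an older policy version, and the learner rejects anything past the bound instead of stalling the whole training run.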

Gradient is an AI lab dedicated to building distributed infrastructure, focusing on the distributed training, serving, and deployment of cutting-edge large models.
