AI firm Gradient launches 'Echo-2' to cut model training costs by over 90%

AI infrastructure company Gradient has launched Echo-2, its next-generation decentralized reinforcement learning (RL) platform. The platform tackles the high computational cost of RL with distributed computing that harnesses idle GPU resources worldwide. This approach targets the sampling (rollout) process, which accounts for roughly 80% of RL computation and is well suited to massive parallelism.

Gradient reports that it cut the training cost of a 30-billion-parameter model by more than a factor of ten, from approximately $4,490 on commercial clouds to around $425 per session, while shortening training time to 9.5 hours.

Echo-2 incorporates asynchronous RL based on "Bounded Staleness," which decouples learners from actors and strictly bounds the version lag between model copies to preserve training stability. It also features the "Lattica" peer-to-peer protocol, which can distribute model weights of over 60GB to hundreds of nodes in minutes, and a "3-Plane Architecture" that manages rollouts, training, and data independently, providing a ready-to-run environment without complex setup.

A Gradient representative stated that Echo-2 will serve as a foundation for anyone to build and own state-of-the-art inference models without economic constraints.
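The "Bounded Staleness" idea described above can be sketched as a simple admission rule: the learner discards rollouts generated from model weights that lag its current version by more than a fixed bound. The class name, the bound of 2, and the one-update-per-accepted-batch policy below are illustrative assumptions, not Gradient's implementation.

```python
class BoundedStalenessLearner:
    """Toy sketch of bounded-staleness admission control (not Gradient's code).

    Actors sample rollouts against some weight version; the learner only
    accepts rollouts whose version lags its current weights by at most
    `bound` updates, keeping off-policy drift in check.
    """

    def __init__(self, bound: int = 2):  # bound of 2 is an illustrative choice
        self.version = 0      # current weight version on the learner
        self.bound = bound    # maximum tolerated version lag
        self.accepted = 0
        self.rejected = 0

    def submit(self, rollout_version: int) -> bool:
        lag = self.version - rollout_version
        if lag > self.bound:
            # Rollout is too stale: drop it rather than destabilize training.
            self.rejected += 1
            return False
        self.accepted += 1
        self.version += 1     # assume each accepted batch triggers one update
        return True


learner = BoundedStalenessLearner(bound=2)
# Four rollouts all sampled at version 0; by the time the fourth arrives,
# three updates have landed, so its lag (3) exceeds the bound.
results = [learner.submit(0) for _ in range(4)]
```

In a real system the bound trades throughput against on-policy fidelity: a larger bound keeps more idle GPUs productive but feeds the learner staler data.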