On September 17, at Token2049's R3al World DePIN Summit, Rock Zhang, founder of the DePIN & AI project Network3, announced plans to launch a new local large language model (LLM) feature, which the team says will significantly improve the efficiency and performance of edge AI.
Rock noted that although edge AI is already deeply embedded in daily life, its complexity and importance are often overlooked. Smartphones that run data processing tasks overnight, for example, show how edge devices can put idle resources to work. With the new local LLM feature, Network3 aims to make better use of smart devices' idle compute, running inference locally so that tasks need not depend on the cloud. This greatly reduces bandwidth consumption and also improves data security and privacy, since user data stays on the device.
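The idle-time computing idea described above can be sketched in a few lines. The snippet below is a purely illustrative policy, not Network3's actual scheduler: the `DeviceState` fields and the thresholds are assumptions chosen to mirror the "phone working at night" example, where heavy local inference runs only while the device is charging and otherwise unused.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeviceState:
    charging: bool      # plugged in
    screen_off: bool    # not actively used
    battery_pct: int    # 0-100

def is_idle(state: DeviceState) -> bool:
    # Hypothetical idle heuristic: allow heavy local inference only
    # while the device is charging, the screen is off, and the
    # battery is reasonably full.
    return state.charging and state.screen_off and state.battery_pct >= 80

def schedule(tasks: List[str], state: DeviceState) -> List[str]:
    """Return the subset of tasks allowed to run under the idle policy."""
    return list(tasks) if is_idle(state) else []

# Overnight: charging, screen off, nearly full battery -> tasks run.
night = DeviceState(charging=True, screen_off=True, battery_pct=95)
# Daytime use: on battery, screen on -> tasks are deferred.
day = DeviceState(charging=False, screen_off=False, battery_pct=40)

print(schedule(["local-llm-inference"], night))  # ['local-llm-inference']
print(schedule(["local-llm-inference"], day))    # []
```

A real implementation would also consider thermal state and network conditions, but the core design choice is the same: gate expensive local workloads behind cheap device-state checks.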
Network3 plans to support edge AI by aggregating idle edge-device resources around the world. The upcoming local LLM feature will let users run AI chat services directly on mobile devices without relying on expensive cloud infrastructure. Users will also be able to earn tokens by interacting with the model and to customize its behavior to suit personal needs. According to Network3, a beta version is scheduled for release in October and will be available for download from the official website.
Network3 is building an AI Layer 2 designed to help AI developers worldwide run large-scale model inference, training, and verification efficiently and economically. The project previously announced that it had raised $5.5 million across pre-seed and seed rounds, and says its next funding round is underway with participation from several leading institutions.