Gonka details its Proof-of-Concept (PoC) mechanism and model evolution: aligning rewards with real computing power and keeping GPUs of every tier continuously involved.

ME News
01-19
According to ME News, on January 19th (UTC+8), Gonka, a decentralized AI computing power network, explained phased adjustments to its Proof-of-Concept (PoC) mechanism and model operation in a recent community AMA. The main adjustments are: using the same large model for both PoC and inference; changing PoC activation from delayed switching to near-real-time triggering; and refining how computing power weights are calculated so they better reflect the actual computational cost of different models and hardware.

Co-founder David said these adjustments are not aimed at short-term output or at individual participants, but are a necessary evolution of the consensus and verification structure as the network's computing power expands rapidly. The goal is to improve the network's stability and security under high load, laying the foundation for larger-scale AI workloads in the future.

On the community question of why smaller models currently yield higher token output, the team noted that models of different sizes consume very different amounts of compute to produce the same number of tokens. As the network moves toward higher computing power density and more complex tasks, Gonka is gradually aligning computing power weights with actual computing costs, to avoid long-term imbalances in the computing power structure that would limit the network's overall scalability.

Under the latest PoC mechanism, activation time has been reduced to under 5 seconds, minimizing the compute wasted on model switching and waiting so that GPU resources can be used more efficiently for AI computation. At the same time, unifying model operation reduces the system overhead of nodes switching between consensus and inference, improving overall computing power utilization.
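The article does not publish Gonka's actual weight formula, but the idea of weighting output by compute cost rather than raw token count can be sketched as follows. This is a hypothetical illustration: the function name, the parameter counts, and the rough estimate of ~2N FLOPs per generated token for an N-parameter model are assumptions, not Gonka's published mechanism.

```python
# Hypothetical sketch: weight a node's contribution by estimated FLOPs,
# not by raw token count, so small models cannot over-earn per token.
# Assumption (not from Gonka): inference costs roughly 2 * N FLOPs per
# token for a model with N parameters, a common back-of-envelope estimate.

def compute_weight(tokens: int, params_billion: float) -> float:
    """Estimate total work as tokens produced times per-token FLOPs."""
    flops_per_token = 2 * params_billion * 1e9  # ~2N FLOPs per token
    return tokens * flops_per_token

# A 7B model producing many tokens vs a 70B model producing fewer:
small = compute_weight(tokens=1_000_000, params_billion=7)   # 1.4e16 FLOPs
large = compute_weight(tokens=200_000, params_billion=70)    # 2.8e16 FLOPs

# By raw token count the small model looks 5x more productive,
# but FLOP-weighted, the large model's work counts for twice as much.
```

Under such a scheme, reward share tracks the estimated cost of the work performed, which is the kind of alignment between weights and "actual computing costs" the team describes.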
The team also emphasized that single-card and small-to-medium GPUs can continue to earn rewards and participate in governance through mining pool collaboration, flexible epoch-by-epoch participation, and inference tasks. Gonka's long-term goal is for computing power of different tiers to coexist within the same network through mechanism evolution. Gonka states that all key rule changes are implemented through on-chain governance and community voting. Going forward, the network will gradually support more model types and AI task formats, giving GPUs of different sizes worldwide a continuous and transparent space for participation and promoting the long-term healthy development of decentralized AI computing power infrastructure. (Source: ME)
