
Can AI computing power be traded peer to peer with crypto as the medium? Three projects introduce this new trend.

In today's data-driven era, artificial intelligence (AI) technology is evolving at an unprecedented pace. In particular, the training of large AI models keeps pushing technological boundaries and presents significant challenges. In this context, decentralized distributed computing networks play a crucial role in training large AI models, but they also face substantial technical bottlenecks.

One of the primary demands on decentralized networks is support for training large AI models. However, this involves complex issues of data synchronization and network optimization, and resolving them is crucial to the efficiency and effectiveness of a computing network. Data privacy and security are equally indispensable: achieving effective model training while preserving data privacy remains an urgent challenge.

Currently, technologies such as secure multi-party computation, differential privacy, federated learning, and homomorphic encryption show advantages in specific scenarios, but each has limitations, especially for data privacy in large-scale distributed computing networks. Zero-knowledge proof (ZKP) technology, for instance, holds significant potential, but applying it to large-model training across extensive distributed computing networks will still require years of research and development. This demands more attention and resources from the academic community and faces substantial technical costs and practical hurdles.
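To make the federated-learning idea above concrete, the sketch below shows one federated-averaging round in Python with NumPy: each node trains on its own private data and shares only model weights, never raw samples. The model, data, and node counts are illustrative assumptions rather than any specific project's implementation.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative only): each node fits a
# linear model on private data and shares weights, never raw samples.
rng = np.random.default_rng(0)
num_nodes, num_features = 4, 8
true_w = rng.normal(size=num_features)  # ground truth, for measuring progress

def local_update(global_w, n_samples=256, lr=0.1, steps=20):
    """Simulate one node: private data stays local, only weights leave."""
    X = rng.normal(size=(n_samples, num_features))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    w = global_w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n_samples  # least-squares gradient
        w -= lr * grad
    return w

global_w = np.zeros(num_features)
for _ in range(10):
    # Each node computes an update locally; the coordinator only averages weights.
    local_weights = [local_update(global_w) for _ in range(num_nodes)]
    global_w = np.mean(local_weights, axis=0)

print("distance to true weights:", np.linalg.norm(global_w - true_w))
```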

Compared with model training, decentralized distributed computing networks show greater practical potential in model inference, and the growth space in this area is expected to be substantial. Nevertheless, inference still faces challenges such as communication latency, data privacy, and model security. Because it involves lower computational complexity and less data interaction, model inference is better suited to decentralized environments, but overcoming these challenges remains a topic worth deeper exploration.

Against this backdrop, we will take a closer look at three representative projects in decentralized distributed computing networks: Akash Network, Gensyn, and Together. The aim is to better understand a track that has the potential to reshape future production.

Akash Network: A fully open-source P2P cloud marketplace incentivizing global idle computing power with tokens

Akash Network is an open-source platform that revolves around establishing a decentralized peer-to-peer cloud marketplace, connecting users seeking cloud services with infrastructure providers possessing surplus computing resources.

Akash's platform is specifically designed for hosting and managing deployments while providing cloud management services for running Kubernetes workloads. In essence, Kubernetes is an open-source system used for automating the deployment, scaling, and management of containerized applications.

On the Akash platform, users, referred to as "tenants," are primarily developers who wish to deploy Docker containers to cloud providers that meet specific standards. An essential feature of Docker containers is that they include packaged code and dependencies, ensuring that applications run in the same way in any computing environment. This means that whether developing on a laptop, testing in a sandbox, or running in the cloud, no code modifications are necessary.

A unique aspect of the Akash market is its reverse auction model. This model allows users to independently set prices and describe the resource requirements for deploying their containers. When cloud providers' computing resources are underutilized, they can rent those resources out through the Akash market, much as Airbnb hosts rent out spare rooms. Notably, the cost of deploying containers through Akash is roughly one-tenth that of the three major cloud service providers (Amazon Web Services, Google Cloud, and Microsoft Azure).
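The reverse-auction flow can be illustrated with a small sketch: a tenant publishes an order with resource requirements and a maximum price, providers bid against it, and the lowest qualifying bid wins. This is a simplified illustration of the market mechanism, not Akash's actual order-matching code; all field names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Order:          # tenant's deployment request (hypothetical fields)
    cpu_units: int
    memory_gb: int
    max_price: float  # ceiling the tenant is willing to pay per unit time

@dataclass
class Bid:            # provider's offer against an order
    provider: str
    cpu_units: int
    memory_gb: int
    price: float

def match(order: Order, bids: list[Bid]) -> Bid | None:
    """Reverse auction: among providers that satisfy the resource request
    and stay under the tenant's ceiling, the cheapest bid wins."""
    eligible = [
        b for b in bids
        if b.cpu_units >= order.cpu_units
        and b.memory_gb >= order.memory_gb
        and b.price <= order.max_price
    ]
    return min(eligible, key=lambda b: b.price, default=None)

order = Order(cpu_units=4, memory_gb=8, max_price=1.00)
bids = [
    Bid("provider-a", 4, 8, 0.40),
    Bid("provider-b", 8, 16, 0.25),
    Bid("provider-c", 4, 8, 1.50),   # over the tenant's ceiling, ignored
]
print(match(order, bids))  # -> provider-b, the cheapest eligible offer
```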

All transactions and records on the Akash Network are conducted on-chain through its native token, Akash Token (AKT). The network builds its blockchain on the Cosmos SDK framework and uses the Tendermint Byzantine Fault Tolerance (BFT) engine to support its Delegated Proof-of-Stake (DPoS) consensus algorithm. AKT serves not only as a medium of exchange but also plays various roles in the Akash network, including securing the network, providing rewards, enabling participation in governance, and processing transactions.

In this way, Akash Network not only offers a more economically efficient cloud service option but also demonstrates innovative applications of blockchain technology in the modern cloud computing domain.

Gensyn: Breaking down complex machine learning tasks into multiple subtasks to enhance processing efficiency

Gensyn is a blockchain-based decentralized deep learning computing protocol designed to serve the artificial intelligence computing market. The core of the protocol lies in breaking complex machine learning tasks into multiple subtasks and achieving highly parallelized computing across participants' computing resources. This approach not only improves computational efficiency but also automates task allocation, verification, and rewards through smart contracts, eliminating the need for centralized management.

In June 2023, the team successfully completed a $43 million Series A funding round led by the renowned venture capital firm a16z, bringing the total funding to $50 million.

The Gensyn protocol functions as an intelligent computation network with the following key features:

1. Probabilistic Learning Proof: Utilizes metadata from the gradient optimization process to construct certificates of task completion, enabling rapid verification that work was actually done (a simplified sketch of this verification flow follows the list of roles below).

2. Graph-Based Positioning Protocol: Adopts a multi-granularity, graph-based positioning protocol, combined with cross-verified execution, to ensure consistency in work verification.

3. Truebit-Style Incentive Mechanism: Constructs an incentive game through staking and slashing to ensure that participants fulfill tasks honestly.

Additionally, the Gensyn system involves four main roles:

1. Submitter: The end user of the system, who submits tasks for computation and pays fees.

2. Solver: Executes model training and generates proofs for validation by verifiers.

3. Verifier: Responsible for validating the accuracy of the proofs provided by solvers.

4. Whistleblower: Acts as a safety net, reviewing verifiers' work and raising disputes when issues are identified.
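A rough sketch of how the proof-of-learning and verification roles could fit together is shown below: a solver publishes intermediate checkpoints produced during training, and a verifier replays only a randomly sampled step and compares digests instead of redoing the whole job. This is a simplified illustration built on the description above, not Gensyn's actual protocol code; the "training step" is a toy deterministic update.

```python
import hashlib
import random

def train_step(state: int, step: int) -> int:
    """Stand-in for one deterministic training update (purely illustrative)."""
    return (state * 1_000_003 + step + 1) % (2**61 - 1)

def digest(state: int) -> str:
    """Commitment to a checkpoint, here just a SHA-256 of its representation."""
    return hashlib.sha256(str(state).encode()).hexdigest()

def solver_run(initial_state: int, num_steps: int) -> list[int]:
    """Solver: trains and publishes every intermediate checkpoint."""
    states = [initial_state]
    for step in range(num_steps):
        states.append(train_step(states[-1], step))
    return states

def verifier_spot_check(checkpoints: list[int]) -> bool:
    """Verifier: recompute one random step and compare checkpoint digests,
    instead of re-running the entire training job."""
    step = random.randrange(len(checkpoints) - 1)
    recomputed = train_step(checkpoints[step], step)
    return digest(recomputed) == digest(checkpoints[step + 1])

checkpoints = solver_run(initial_state=12345, num_steps=1000)
print("spot check passed:", verifier_spot_check(checkpoints))
# A whistleblower could run the same spot check on a verifier's claims
# and raise a dispute if the digests disagree.
```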

The Gensyn protocol exhibits significant advantages in terms of cost and performance. Much as Ethereum's transition from proof-of-work to proof-of-stake gave stakers a way to earn returns, Gensyn gives participants a way to earn returns from their computational resources, reducing computation costs and improving resource utilization. Python simulation results indicate that although Gensyn's model training time increases by roughly 46%, its cost-performance still improves significantly compared with other approaches.

As a blockchain-based decentralized computing power protocol, Gensyn aims to allocate and reward machine learning tasks through smart contracts, accelerating AI model training and reducing its cost. Despite challenges around communication and privacy, Gensyn offers an effective way to put idle computing power to work, accommodating diverse model scales and requirements for broader, more flexible applications.

Together: Focus on Large Model Development and Applications, $20 Million in Seed Funding

Together is an open-source company dedicated to decentralized AI computing power solutions, focusing on the development and application of large models. The company's vision is to make AI accessible to anyone, anywhere. In May of this year, Together completed a $20 million seed funding round led by Lux Capital.

Founded by Chris, Percy, and Ce, Together grew out of the founders' awareness that training large models requires substantial high-end GPU clusters and expensive expenditures, and out of their belief that these resources and model-training capabilities should not be concentrated in the hands of a few large companies.

Together's development strategy emphasizes the application of open-source models and distributed computational power. They believe that a prerequisite for using decentralized computational power networks is that models must be open source, which helps reduce costs and complexity. A recent example is their LLaMA-based RedPajama project, initiated in collaboration with multiple research teams, aiming to develop a series of fully open-source large language models.

In the realm of model inference, Together's development team has made a series of updates to the RedPajama-INCITE-3B model, including using LoRA for cost-effective fine-tuning and improving the model's efficiency on CPUs. On the model training side, Together is addressing communication bottlenecks in decentralized training, covering optimizations in scheduling and communication compression.
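As an illustration of what LoRA-style fine-tuning looks like in practice, the sketch below uses the Hugging Face transformers and peft libraries to attach low-rank adapters to a RedPajama-INCITE-3B checkpoint, so that only a small fraction of parameters is trained. The model id, target module names, and hyperparameters are assumptions for illustration; this is not Together's own training code.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Model id, target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "togethercomputer/RedPajama-INCITE-Base-3B-v1"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style models
    task_type="CAUSAL_LM",
)

# Wrap the base model: original weights stay frozen, only the adapters train.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here, the wrapped model can be passed to a standard Trainer or training loop.
```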

The diverse expertise of the Together team spans various domains, ranging from large model development to cloud computing and hardware optimization, demonstrating a comprehensive approach to AI computing projects. Their strategy reflects a long-term perspective, covering the development of open-source large models, testing the application of distributed computing power in model inference, and deploying distributed computing power in large-scale model training.

Given the project's early stage, critical details such as network incentive mechanisms and token use cases remain undisclosed. These factors are crucial for the success of crypto projects. Consequently, the industry maintains a keen interest in Together's future developments and further disclosures.

The future of decentralized AI is vast, but its challenges must be overcome step by step

Examining the convergence of decentralized computing power networks and AI technology reveals a field full of challenges and potential. Despite being distinct domains, the combination of AI and Web3 exhibits a natural synergy in using distributed technology to curb AI monopolies and foster the formation of decentralized consensus mechanisms. Decentralized computing power networks not only provide distributed computing capabilities and privacy protection but also enhance the credibility and reliability of AI models, supporting rapid deployment and execution.

However, development in this field is not without obstacles. Compared with centralized computing networks, decentralized networks suffer from high communication costs, and they still need solutions for node reliability and security as well as effective management of distributed computing resources.

Returning to commercial reality, the deep integration of AI and Web3, while promising, faces challenges such as high research and development costs and unclear business models. Domains like AI and Web3 are still in the early stages of development, and their true potential awaits validation over time.
