AI capabilities continue to advance, and exploration of new model training paradigms is deepening. In this wave, two problems urgently need better solutions: the monopoly risks of centralized models and the lack of incentive mechanisms for open-source models.
Against this backdrop, the Pluralis project was born. Its team is composed almost entirely of PhDs, most of them formerly at Amazon. This article introduces Pluralis's core technical ideas, team composition, financing, and its innovative protocol learning paradigm for decentralized AI training.
What is Pluralis?
Pluralis Research is committed to creating a decentralized, open-source AI development model through "protocol learning", in which computing resources are pooled from many parties for collaborative model training while ensuring that no single participant can obtain the complete model weights.
The core innovation of Pluralis's protocol learning is the Protocol Model: the model is trained in such a way that no single participant can ever extract its complete weight set. This design lets value flow to contributors while protecting model ownership, cleverly balancing the openness of AI development with the need to monetize it.
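The source does not spell out Pluralis's sharding scheme, but the idea can be illustrated with a minimal, hypothetical sketch using pipeline-style layer partitioning (all names here are invented for illustration): each participant holds only a contiguous shard of layers, so no single party ever possesses the full weight set.

```python
import torch
import torch.nn as nn

def build_stages(num_layers, num_participants, hidden=256):
    """Partition a stack of layers into contiguous shards, one per participant."""
    layers = [nn.Linear(hidden, hidden) for _ in range(num_layers)]
    per_stage = num_layers // num_participants
    return [
        nn.Sequential(*layers[i * per_stage:(i + 1) * per_stage])
        for i in range(num_participants)
    ]

stages = build_stages(num_layers=8, num_participants=4)

# Each participant holds and runs only its own stage; only activations
# (never weights) would cross the network boundary between stages.
x = torch.randn(1, 256)
for stage in stages:  # in a real deployment each stage lives on a different node
    x = stage(x)
print(x.shape)  # torch.Size([1, 256])
```

In a real deployment the stages would run on separate nodes and exchange activations over the network, which is what makes pipeline-style partitioning a natural fit for keeping the full weight set out of any one party's hands.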
In Pluralis, model designers can propose their model concepts, while compute and data providers contribute the resources needed for training. These Protocol Models are developed openly and publicly, and by granting participants partial ownership of the trained model, they effectively incentivize all parties to contribute, advancing the goal of truly open-source artificial intelligence.
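As a toy illustration of partial ownership (the actual accounting rules are not specified in the source, and all names and numbers below are hypothetical), shares might simply be allocated in proportion to recorded contributions:

```python
def allocate_ownership(contributions):
    """Map each contributor's recorded contribution units to a fractional share."""
    total = sum(contributions.values())
    return {name: amount / total for name, amount in contributions.items()}

shares = allocate_ownership({
    "model_designer": 10.0,   # credited units for proposing the architecture
    "compute_node_a": 55.0,   # e.g. GPU-hours contributed to training
    "data_provider_b": 35.0,  # e.g. training data contributed
})
print(shares)
# {'model_designer': 0.1, 'compute_node_a': 0.55, 'data_provider_b': 0.35}
```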
Pluralis Background
The Pluralis team is formidable: of the eight members listed on its official website, all but one intern come from Amazon's AI research organization and hold doctoral degrees.
Founder Alexander Long: holds a PhD in Computer Science from the University of New South Wales and worked as an Applied Scientist at Amazon from March 2021 to May 2024. His doctoral thesis focused on sample-efficient reinforcement learning and non-parametric memory in deep learning.
Founding Scientist Gil Avraham: PhD in Machine Learning from Monash University, Australia; worked as an Applied Scientist at Amazon from December 2021 to August 2024, where he was promoted to Senior Applied Scientist; joined Pluralis in October 2024.
Founding Scientist Yan Zuo: PhD in Electrical and Electronic Engineering from Monash University, Australia; interested in large-scale optimization, statistical modeling, machine learning, and computer vision; worked as an Applied Scientist at Amazon from August 2021 to October 2024.
Founding Scientist Ajanthan Thalaiyasingam: PhD in Computer Science from the Australian National University; worked as a Machine Learning Scientist at Amazon from December 2020 to March 2024, where he was promoted to Senior Machine Learning Scientist; joined Pluralis in October 2024.
Founding Scientist Sameera Ramasinghe: PhD in Machine Learning and 3D Vision from the Australian National University; co-founder and CTO of the AI technology company ConscientAI; worked as an Applied Scientist at Amazon from May 2022 to November 2024.
In short, Pluralis's founder and founding scientists all have work experience at Amazon and specialize in machine learning, computer vision, and large language models (LLMs); some previously served as postdoctoral researchers.
In terms of financing, Pluralis completed a $7.6 million round in March 2025, led by CoinFund and Union Square Ventures, with participation from Topology, Variant, Eden Block, and Bodhi Ventures. The round was structured as equity with warrants for future tokens.
What is Protocol Learning?
In Alexander Long's paper "Protocol Learning, Decentralized Frontier Risk and the No-Off Problem", a new AI training paradigm, protocol learning, is proposed. Its goal is to train models collaboratively over a decentralized incentive network, breaking through the limitations of both the centralized and open-source approaches.
Long points out that centralized models, while efficient, carry monopoly risks and lack governance transparency, while open-source models lack sustainable incentive mechanisms. Protocol learning, as a middle path, builds a decentralized training network by incentivizing participants to contribute compute, and could in theory aggregate computing power several orders of magnitude greater than any centralized training run.
From a technical feasibility perspective, decentralized training requires efficient communication, model sharding, elastic training, Byzantine fault tolerance, and support for heterogeneous nodes. Research has made progress on distributed training, pipeline parallelism, and fault-tolerance mechanisms, but these pieces have not yet been integrated at the scale of 100B+-parameter models. And while ownership can be allocated according to compute contributions to create economic incentives, verifying that the compute was actually performed remains an open problem, with candidate solutions including game-theoretic staking and zero-knowledge proofs.
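As a hedged sketch of what game-theoretic staking might look like (an illustrative assumption, not Pluralis's published design): a worker posts a stake, a verifier occasionally recomputes the worker's claimed outputs, and any mismatch slashes the stake.

```python
import random
import torch
import torch.nn as nn

stage = nn.Linear(16, 16)       # the model shard this worker is assigned
inputs = torch.randn(4, 16)     # a batch the worker claims to have processed
claimed = stage(inputs)         # an honest worker returns the true activations

def audit(claimed_outputs, stake=100.0, check_prob=0.1, tol=1e-5):
    """Spot-check a worker: recompute with probability check_prob,
    slash the stake if the claimed outputs don't match."""
    if random.random() > check_prob:
        return stake            # not audited this round; deterrence is probabilistic
    with torch.no_grad():
        recomputed = stage(inputs)
    if torch.allclose(recomputed, claimed_outputs, atol=tol):
        return stake            # honest work: stake returned intact
    return 0.0                  # mismatch: the full stake is slashed

print(audit(claimed))                        # 100.0 whether or not audited
print(audit(claimed + 1.0, check_prob=1.0))  # 0.0: a forged result is caught
```

The deterrent works in expectation: even a low audit probability makes cheating unprofitable once the expected slashed stake outweighs the compute saved by faking results.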
Of course, protocol learning also brings new risks. A decentralized model is difficult to terminate unilaterally: if a model goes out of control or is misused, shutting it down requires network-wide coordination, which is extremely hard to achieve; this is the "no-off problem" of the paper's title. A balance must also be struck among incentives, security, and controllability to prevent malicious behavior.
Pluralis believes that the future of artificial intelligence is not merely distributed but decentralized, that the technical barriers to decentralized training are not insurmountable, and that the benefits of overcoming them will be immense.
In summary, Pluralis is building decentralized AI training infrastructure, aiming to enable collective creation of frontier models through protocol learning and to fundamentally democratize the production of, and access to, AI foundation models.