Today we’re releasing Trinity Large, a 400B MoE LLM with 13B active parameters, trained on 17T tokens. The base model is on par with GLM-4.5 Base while being significantly faster at inference, because it’s sparser and hybrid. The architecture we picked is one of my favorites:

Prime Intellect
@PrimeIntellect
01-28
We're excited to introduce @arcee_ai's Trinity Large model.
An open 400B parameter Mixture of Experts model, delivering frontier-level performance with only 13B active parameters.
Trained in collaboration between Arcee, Datology and Prime Intellect. x.com/arcee_ai/statu…
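For readers unfamiliar with the numbers above: in a Mixture-of-Experts layer, a router selects a small subset of expert networks per token, so only a fraction of the total parameters (here, 13B out of 400B) run on any given forward pass. Below is a minimal, hypothetical PyTorch sketch of top-k expert routing; the class name, toy dimensions, and expert/router design are illustrative assumptions, not Trinity's actual architecture.

# Minimal sketch of top-k MoE routing (illustrative; not Trinity's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):  # hypothetical name, not from the release
    def __init__(self, d_model, d_ff, n_experts, k):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):
        # x: (tokens, d_model). Score every expert, keep only the top k per token.
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            rows, slot = (idx == e).nonzero(as_tuple=True)
            if rows.numel():
                # Only tokens routed to expert e pay its compute cost.
                out[rows] += weights[rows, slot].unsqueeze(-1) * expert(x[rows])
        return out

# Toy scale: 64 experts, 2 active per token, so roughly 1/32 of the
# expert parameters participate in each token's forward pass.
layer = TopKMoE(d_model=128, d_ff=512, n_experts=64, k=2)
y = layer(torch.randn(10, 128))

The same idea scales to Trinity's size: routing keeps per-token compute near that of a 13B dense model while the total capacity is 400B, which is why the model can be faster at inference than a denser peer.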