samsja
833 Twitter followers
Research Engineer at @PrimeIntellect, previously training LLMs at @NyonicAI, maintainer of @docarray
Posts
samsja
03-11
can't believe we are living in the timeline where terminal, tui, markdown and git are going to take over the world
samsja
02-18
we just merged slurm support into prime-rl: a single command that renders and starts a slurm script to launch a large MoE training run over 512+ GPUs. We also have k8s support
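For readers unfamiliar with the "render and submit" pattern, here is a minimal sketch of the idea in Python. This is not prime-rl's actual interface; the template fields, the render_and_submit helper, and the train.py command are hypothetical, only meant to show how a single entry point can generate a Slurm batch script and hand it to sbatch.

```python
# Hypothetical sketch (not prime-rl's interface): render a Slurm batch script
# from a few job parameters, then submit it with sbatch.
import subprocess
from pathlib import Path

SLURM_TEMPLATE = """#!/bin/bash
#SBATCH --job-name={job_name}
#SBATCH --nodes={nodes}
#SBATCH --gpus-per-node={gpus_per_node}
#SBATCH --time={time_limit}

srun {train_command}
"""

def render_and_submit(job_name: str, nodes: int, gpus_per_node: int,
                      time_limit: str, train_command: str) -> None:
    """Render the template to disk and hand it to sbatch."""
    script = SLURM_TEMPLATE.format(
        job_name=job_name,
        nodes=nodes,
        gpus_per_node=gpus_per_node,
        time_limit=time_limit,
        train_command=train_command,
    )
    path = Path(f"{job_name}.sbatch")
    path.write_text(script)
    subprocess.run(["sbatch", str(path)], check=True)

if __name__ == "__main__":
    # 64 nodes x 8 GPUs = 512 GPUs, matching the scale mentioned in the post.
    render_and_submit(
        job_name="moe-training",
        nodes=64,
        gpus_per_node=8,
        time_limit="48:00:00",
        train_command="python train.py --config moe.toml",
    )
```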
samsja
01-28
Today we’re releasing Trinity Large, a 400B MoE LLM with 13B active parameters, trained over 17T tokens. The base model is on par with GLM-4.5 Base, while being significantly faster at inference because it’s sparser and hybrid. The architecture we picked is one of my favorites:
Prime Intellect
@PrimeIntellect
01-28
We're excited to introduce @arcee_ai's Trinity Large model. An open 400B parameter Mixture of Experts model, delivering frontier-level performance with only 13B active parameters. Trained in collaboration between Arcee, Datology and Prime Intellect. x.com/arcee_ai/statu…
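To make the "active parameters" idea concrete, here is a generic top-k expert-routing sketch in PyTorch. It is not Trinity Large's implementation; it only illustrates the mechanism by which a model with many experts (and so a large total parameter count) touches only a small subset of them per token, which is why inference cost tracks the 13B active parameters rather than the full 400B.

```python
# Generic illustration of top-k MoE routing (not Trinity Large's actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). The router scores every expert, but only the
        # top_k experts per token actually run, so compute (and "active"
        # parameters) scale with top_k rather than with num_experts.
        scores = self.router(x)                               # (tokens, num_experts)
        weights, indices = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Tiny demo: 64 experts in total, but only 2 run for each token.
layer = TopKMoELayer(d_model=128, d_ff=512, num_experts=64, top_k=2)
tokens = torch.randn(16, 128)
print(layer(tokens).shape)  # torch.Size([16, 128])
```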
samsja
05-03
Decentralized training is a great solution for neoclouds to repurpose their older generations of hardware (Google has been doing this for a while with their TPUs)
SemiAnalysis
@SemiAnalysis_
05-03
2️⃣ Blackwell Mass Deployment Has Started. Mass production and initial deployments of @nvidia Blackwell (B100/GB200) are ramping fast. These next-gen GPUs offer better performance per watt and lower TCO, making H100s yesterday’s news for neocloud customers. As a result, many