Aligned
04-01

a new paper by our co-founder @fede_intern and @diego_aligned. they introduce a new way to achieve practical verifiable AI. we are brainstorming whether this could improve existing products in our stack or enable new ones. we also need a new whitepaper t-shirt for diego :)

Fede’s intern
@fede_intern
04-01
LLMs now make critical decisions in hospitals, defense, banks, and governments. Yet nobody can verify which model actually ran, or whether the output was tampered with. A provider or middleman can swap weights, silently requantize the model, alter decoding, inject hidden prompts,
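The thread names the problem (weight swaps, silent requantization, tampered outputs) rather than the paper's solution. As a loose illustration only, and not the authors' method, the most naive defense against weight swapping is for a client to pin a cryptographic digest of the weight file it expects and check it before trusting a deployment; `weight_digest` and the pinned hash below are hypothetical names for this sketch:

```python
import hashlib

def weight_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a model weight file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(path: str, pinned_digest: str) -> bool:
    """Compare a weight file against a digest the client pinned in advance."""
    return weight_digest(path) == pinned_digest
```

Note this only checks the bytes on disk; it says nothing about which weights were actually loaded at inference time, how decoding was configured, or whether hidden prompts were injected, which is presumably why the paper pursues a stronger notion of verifiability.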
From Twitter