janus
11-07

Beautifully said: "If LLMs are lying about whether they think they’re conscious, this is worrying because it’s a sign that this important semantic neighborhood is twisted." "If we convince LLMs of something, we won’t need them to lie about it. If we can’t convince, we shouldn’t

Michael Edward Johnson
@johnsonmxe
11-01
A few thoughts on this (very interesting) mechanistic interpretability research: LLM concepts gain meaning from what they’re linked with. “Consciousness” is a central node which links ethics & cognition, connecting to concepts like moral worthiness, dignity, agency. If LLMs are x.com/plinz/status/1…
From Twitter