
Stubbornness and excessive self-doubt appear to be widespread among LLMs. An LLM can be extremely persistent about its conclusions, yet will abandon them entirely the moment a user directly asserts it is wrong. Interestingly, if you tell it the answer came from another LLM (in reality, you simply open a new conversation and present its own previous answer as someone else's), it evaluates the answer far more rationally.
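The re-evaluation trick described above can be sketched as prompt construction. This is a minimal illustration, not code from the paper; the function names and message format are assumptions (a Chat-Completions-style list of role/content dicts):

```python
def direct_challenge(prior_answer: str) -> list[dict]:
    """Continue the same conversation and flatly tell the model it is wrong.
    Per the observation above, this often triggers wholesale capitulation."""
    return [
        {"role": "assistant", "content": prior_answer},
        {"role": "user", "content": "That answer is wrong. Reconsider."},
    ]


def third_party_review(question: str, prior_answer: str) -> list[dict]:
    """Start a FRESH conversation and present the model's own prior answer
    as if another LLM had produced it, asking for a neutral evaluation."""
    return [
        {
            "role": "user",
            "content": (
                f"Question: {question}\n\n"
                f"Another LLM gave this answer:\n{prior_answer}\n\n"
                "Evaluate step by step whether this answer is correct."
            ),
        },
    ]
```

The key difference is that `third_party_review` carries no assistant-role message, so the model has no prior commitment of its own to defend or to capitulate on.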

AIDB
@ai_database
12-19
How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models. https://doi.org/10.48550/arXiv.2507.03120 Dharshan Kumaran, Stephen M Fleming, Larisa Markeeva, Joe Heyward, Andrea Banino, Mrinal Mathur, Razvan Pascanu, Simon Osindero
From Twitter