Google just proved an easy hack to improve accuracy in your prompts is to literally paste them twice 😂
The study says Gemini Flash Lite jumped from 21% to 97% accuracy with repetition 🤯
It has nothing to do with repetition itself. It has to do with how these unidirectional (causal) models work: earlier tokens can't attend to later tokens, so token #5's representation is frozen before token #100 is even processed.
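A minimal sketch of that causal restriction (not Gemini's actual implementation, just the standard lower-triangular attention mask):

```python
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    # mask[i, j] is True iff position i is allowed to attend to position j.
    # Lower triangle only: a token sees itself and everything before it.
    return np.tril(np.ones((n, n), dtype=bool))

mask = causal_mask(100)
print(mask[4, 99])   # token #5 cannot see token #100 -> False
print(mask[99, 4])   # token #100 can see token #5 -> True
```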
However, if you duplicate your prompt:
[QUERY A B C D] [QUERY A B C D]
every token in the second copy (e.g. A2) can look back at the entire first copy, including D1, so the model can build a much stronger answer.
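The trick itself is just string duplication before the API call (the `generate` call here is hypothetical, stand in your own client):

```python
def duplicate_prompt(query: str) -> str:
    # Append a second copy so every token in it can attend
    # to the complete first copy under the causal mask.
    return f"{query}\n\n{query}"

doubled = duplicate_prompt("Which of A, B, C, D best matches the spec?")
# answer = client.generate(doubled)  # hypothetical LLM call
```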
Wild stuff
This works extremely well with non-reasoning models. With reasoning models, not so much.
From Twitter
