When building complex skills for LLMs like Claude, the longer a conversation runs, the more rules the model forgets. @BrandonGleklen breaks down why the fix isn't writing better rules; it's making the wrong output impossible by moving deterministic rules out of prose and into code. Read more about his approach to skill hardening ⬇️ twitter.com/BatteryVentures/st...
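The thread itself isn't quoted here, so as an illustrative sketch of the general idea: instead of adding yet another prose rule to the prompt (say, "slugs must always be lowercase and hyphen-separated") and hoping the model keeps following it in a long conversation, a deterministic post-processing step enforces the rule in code. The `normalize_slug` helper and the slug rule are hypothetical examples, not from the thread:

```python
import re

def normalize_slug(text: str) -> str:
    """Enforce a hypothetical slug rule deterministically,
    rather than asking the model to remember it.

    Rule: lowercase ASCII, words separated by single hyphens.
    """
    slug = text.strip().lower()
    # Collapse every run of non-alphanumeric characters into one hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

# Whatever the model emits, the wrong format is now impossible:
print(normalize_slug("My New Post!"))   # my-new-post
print(normalize_slug("  Hello_World ")) # hello-world
```

The point is the division of labor: the model handles the open-ended generation, while any rule that can be checked or applied mechanically lives in code, where it cannot drift as the context grows.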

From Twitter