Tell them what you're going to prompt them with, prompt them, then remind them of what you just prompted them with.

BURKOV (@burkov) · 02-18
LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context itself was processed without any knowledge of the question. Hence the advice in the title: state what you are about to ask before the context, then ask it again after.
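A minimal sketch of that "sandwich" structure in Python, assuming a plain single-string prompt; the build_prompt helper, the template wording, and the example question are my own illustration, not anything from the post.

```python
# Illustrative sketch of the structure the post argues for: because
# attention is causal (left to right), the context tokens cannot see a
# question that appears only after them, so we state the task both
# before and after the long context. All names here are hypothetical.

def build_prompt(question: str, context: str) -> str:
    """Place the question on both sides of a long context block."""
    return (
        f"You will be asked the following question after the context:\n"
        f"{question}\n\n"
        f"Context:\n{context}\n\n"
        f"Now, using the context above, answer:\n{question}"
    )

if __name__ == "__main__":
    context = "(long document pasted here)"
    print(build_prompt("What deadline does the contract specify?", context))
```

The same idea presumably carries over to chat-style APIs: put the instruction first, the context next, and repeat the instruction at the end of the message.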

Source: Twitter




