I recently spent a weekend closely reading an interview with OpenAI co-founder Ilya Sutskever, one worth revisiting several times. Beyond the discussion of the shift from the scaling era to the research era (the idea that greater intelligence cannot be reached simply by piling on more compute), what struck me most was his discussion of "research taste."
This taste lets him draw on his own beliefs and experience to evaluate highly uncertain things from a top-down perspective. In AI, the core of that belief is the analogy between neural networks and the human brain. Such tastes are foundational: when experiments contradict beliefs, the cause may sometimes be a bug in the data itself, and if we look only at the current situation and the known data, we may never find the truly correct path.
This kind of research taste is not limited to AI or LLM research. Whether you are starting a company, investing, farming airdrops, or building a new product, you are dealing with highly uncertain things. Your taste is your grasp of the essence of the matter: some first principles, or other fundamental dimensions of the problem.
For example, as a product manager you might see a feature that almost no one uses, conclude that users don't need it, and cut it. But it is equally possible that the design was flawed and users simply never noticed the feature. Without product taste, you end up making decisions based only on the limited information in front of you.
Many years ago, Ilya came across a deep learning article on CSDN showing how to implement addition, subtraction, multiplication, and division with RNNs. He found it fascinating, and his curiosity pushed him further: if a network could predict the results of arithmetic, it should be able to do other, more complex things. He also recognized that neural networks rest on the theoretical foundation of simulating the structure of the brain. These two insights laid a crucial foundation for his later exploration of intelligent LLM research.

Author: Airdrop Aggregator (from Twitter)
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.