McKinsey's Lilli offers key insight into where the enterprise AI market is heading: the potential market opportunity in edge computing plus small models. This AI assistant, which integrates 100,000 internal documents, has not only reached a 70% employee adoption rate but is also used an average of 17 times per week, a level of product stickiness unprecedented in enterprise tools. Below are my thoughts:
1) Enterprise data security is a real pain point: the core knowledge assets McKinsey has accumulated over a century, and the domain-specific data held by small and medium-sized enterprises, are highly sensitive and cannot be processed on public clouds. Finding a balance where "data stays local and AI capability remains uncompromised" is a genuine market need, and edge computing is one direction worth exploring;
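One way to picture "data stays local, AI capability uncompromised" is a routing policy: queries touching sensitive internal material go to an on-premises small model, while everything else may use a hosted API. The sketch below is purely illustrative; the tags, endpoint names, and policy are hypothetical, not anything McKinsey has described.

```python
# Hypothetical sketch of a data-locality routing policy.
# All tag names and endpoint labels are illustrative assumptions.

SENSITIVE_TAGS = {"client", "financial", "strategy", "personnel"}

def route(query_tags: set) -> str:
    """Pick an inference endpoint based on query sensitivity."""
    if query_tags & SENSITIVE_TAGS:
        # Sensitive data never leaves the enterprise network.
        return "local-edge-model"
    # Non-sensitive queries can use a hosted cloud model.
    return "public-cloud-api"

print(route({"financial", "q3-review"}))  # sensitive -> local-edge-model
print(route({"weather"}))                 # non-sensitive -> public-cloud-api
```

In a real deployment the classification step would itself be a model or a document-access check, but the shape of the trade-off is the same: sensitivity determines locality.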
2) Specialized small models will replace general large models: enterprise users do not need a "billion-parameter, all-purpose" general model; they need professional assistants that answer domain-specific questions precisely. General models carry an inherent tension between breadth and professional depth, so in enterprise scenarios small models are often valued more;
3) Balancing self-built AI infrastructure against API calls: although edge computing plus small models demands significant upfront investment, long-term operating costs fall sharply. Imagine the dependency, usage volume, and negotiating leverage created when 45,000 employees access a large model through API calls; at that scale, self-built AI infrastructure becomes a rational choice for medium and large enterprises;
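The build-vs-buy argument above is ultimately a break-even calculation. A minimal sketch, with every figure (per-query price, hardware cost, amortization period, operating cost) a hypothetical assumption rather than real McKinsey data:

```python
# Illustrative break-even comparison: hosted API vs. self-built inference.
# All dollar figures are invented assumptions for the sake of the sketch.

def api_annual_cost(employees, queries_per_week, cost_per_query):
    """Yearly spend when every query hits a hosted large-model API."""
    return employees * queries_per_week * 52 * cost_per_query

def self_hosted_annual_cost(capex, amortize_years, annual_opex):
    """Yearly cost of self-built infrastructure: amortized hardware plus ops."""
    return capex / amortize_years + annual_opex

# 45,000 employees and 17 queries/week come from the Lilli figures above;
# $0.10/query, $5M capex, 3-year amortization, $1.5M/year opex are assumed.
api = api_annual_cost(employees=45_000, queries_per_week=17, cost_per_query=0.10)
own = self_hosted_annual_cost(capex=5_000_000, amortize_years=3, annual_opex=1_500_000)

print(f"Hosted API:  ${api:,.0f}/year")
print(f"Self-built:  ${own:,.0f}/year")
```

Under these assumed numbers self-hosting wins on annual cost, but the conclusion flips entirely with a cheaper per-query price or a smaller workforce, which is exactly why this is a scale-dependent decision.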
4) New opportunities in the edge hardware market: while training large models cannot do without high-end GPUs, edge inference has completely different hardware requirements. Chip makers such as Qualcomm and MediaTek are seizing the opportunity with processors optimized for edge AI. When every enterprise wants to build its own "Lilli", edge AI chips designed for low power consumption and high efficiency will become necessary infrastructure;
5) The decentralized web3 AI market will strengthen in parallel: once enterprise demand for small-model compute, fine-tuning, and algorithms takes off, balancing resource scheduling becomes a problem. Traditional centralized scheduling will struggle at that point, creating direct demand for decentralized small-model fine-tuning networks and decentralized compute-service platforms in web3 AI;
While the market is still debating the boundaries of AGI's general capabilities, it is encouraging to see so many enterprise users already exploring AI's practical value. Clearly, compared with the past leaps driven by monopolies on compute and algorithms, a market shift toward edge computing plus small models will unleash far greater vitality.




