We are moving from a "play-to-earn" era to a more exciting one: games that are truly fun and infinitely scalable.
Author: Sid, IOSG Ventures
Original Title: IOSG Weekly Brief | The Fusion of Games, AI Agents, and Crypto Assets #260
Cover: Photo by Lorenzo Herrera on Unsplash
This article is for learning and communication purposes only and does not constitute any investment advice. Please indicate the source when reprinting, and contact the IOSG team for authorization and reprint instructions. The projects mentioned in the article are not recommendations or investment advice.

The Current State of Web3 Games
As newer and more attention-grabbing narratives have emerged, Web3 gaming has taken a backseat in both the primary and public markets. According to Delphi's 2024 gaming industry report, cumulative primary-market funding for Web3 games is less than $1 billion. This is not necessarily a bad thing: it suggests the bubble has burst and that capital may now be flowing toward higher-quality, game-compatible projects. The following chart is a clear indicator:

Throughout 2024, user numbers in game ecosystems like Ronin surged significantly, and with high-quality new titles such as Fableborn, activity has nearly returned to the heights Axie reached in 2021.

Game ecosystems (L1s, L2s, RaaS) are increasingly becoming the "Steam" of Web3: they control distribution within their ecosystems, which gives game developers an incentive to build on them to acquire players. According to an earlier Delphi report, user acquisition costs for Web3 games are roughly 70% higher than for Web2 games.
Player Retention
Retaining players is just as important as acquiring them, if not more so. While data on player retention in Web3 games is scarce, retention is closely tied to the concept of "flow," a term coined by Hungarian psychologist Mihaly Csikszentmihalyi.
"Flow state" is a psychological concept where the player achieves a perfect balance between challenge and skill level. It's like being "in the zone" - time seems to fly by, and you are fully immersed in the game.

Games that consistently create a flow state tend to have higher retention rates, due to the following mechanisms:
Progression Design
Early game: Simple challenges, building confidence
Mid-game: Gradually increasing difficulty
Late game: Complex challenges, mastering the game
As the player's skills improve, this careful difficulty adjustment keeps them in their optimal challenge zone (a minimal sketch of the idea follows).
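To make the idea concrete, here is a minimal, hypothetical sketch of a difficulty controller that nudges challenge toward a player's current skill. The thresholds, step size, and the win-rate signal are illustrative assumptions, not any specific game's tuning.

```python
from dataclasses import dataclass

@dataclass
class PlayerStats:
    win_rate: float  # rolling share of recent encounters the player won

class FlowDifficultyController:
    """Keeps challenge near the player's skill level (illustrative thresholds)."""

    def __init__(self, difficulty: float = 1.0, step: float = 0.1):
        self.difficulty = difficulty  # 1.0 = baseline tuning
        self.step = step

    def update(self, stats: PlayerStats) -> float:
        if stats.win_rate > 0.65:    # player is cruising: raise the challenge
            self.difficulty += self.step
        elif stats.win_rate < 0.35:  # player is struggling: ease off before frustration
            self.difficulty -= self.step
        # Clamp so difficulty never swings wildly between sessions.
        self.difficulty = max(0.5, min(2.0, self.difficulty))
        return self.difficulty

controller = FlowDifficultyController()
print(controller.update(PlayerStats(win_rate=0.8)))  # 1.1 -> slightly harder next encounter
```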
Engagement Loops
Short-term: Immediate feedback (kills, scores, rewards)
Medium-term: Level completion, daily quests
Long-term: Character progression, leaderboards
These nested loops can maintain player interest across different time frames.
Factors that disrupt the flow state include:
1. Inappropriate difficulty or complexity: caused by poor game design, or by matchmaking imbalances when the player base is too small
2. Unclear objectives: a game design issue
3. Delayed feedback: game design and technical issues
4. Intrusive monetization: a game design and product issue
5. Technical issues and latency
The Symbiosis of Games and AI
AI agents can help players reach this flow state. Before discussing how, let's first look at which kinds of agents are suited to games:

The key constraints for game AI are speed and scale. When games use LLM-driven agents, every decision requires a call to a massive language model. It is like routing every step through an intermediary: the intermediary is smart, but waiting for its response makes everything slow and painful. Now imagine doing this for hundreds of characters in a game: not only slow, but also very costly. This is the main reason we have not yet seen LLM agents deployed at scale in games. The largest experiment so far is a 1,000-agent "civilization" built in Minecraft. With 100,000 concurrent agents spread across different maps, the cost would be prohibitive, and as you add more agents the latency grows, breaking the players' flow.
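A rough back-of-the-envelope sketch of why this pattern breaks down at scale: every NPC decision becomes a round-trip to a hosted model. The latency and per-call cost figures below are placeholder assumptions chosen only to show the scaling, not measurements of any particular provider.

```python
import time

# Stand-in for a hosted LLM call: assumed ~0.5 s round-trip and ~$0.002 per
# decision. Both figures are illustrative, not any provider's real numbers.
def llm_decide(observation: str) -> str:
    time.sleep(0.5)          # simulated network + inference latency
    return "move_to_market"  # canned action for the sketch

LATENCY_S, COST_PER_CALL = 0.5, 0.002

def per_minute_footprint(n_agents: int, decisions_per_agent: int = 6):
    calls = n_agents * decisions_per_agent
    return calls * LATENCY_S, calls * COST_PER_CALL  # (model-seconds, dollars)

print(llm_decide("guard spots the player"))  # one NPC decision = one round-trip

for n in (100, 1_000, 100_000):
    model_seconds, dollars = per_minute_footprint(n)
    print(f"{n:>7} agents -> {model_seconds:>11,.0f} model-seconds, ~${dollars:,.2f} per game-minute")
```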
Reinforcement learning (RL) takes a different approach. Imagine training a dancer in advance rather than guiding them step by step through an earpiece. With RL, you invest time upfront teaching the AI how to "dance" and how to handle the situations it will meet in the game. Once trained, the agent makes decisions naturally and fluidly in milliseconds, without calling out to an external model. You can run hundreds of these trained agents in your game, each making independent decisions from its own observations. They may not be as articulate or as flexible as LLM agents, but they are fast and efficient.
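As a minimal sketch of what that looks like at inference time (assuming PyTorch, a tiny policy network, and illustrative observation and action sizes): a trained policy evaluates every agent's observation in one local, batched forward pass, with no external API in the loop.

```python
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 32, 8  # illustrative sizes for the sketch

class PolicyNet(nn.Module):
    """A small policy network of the kind RL training would produce."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = PolicyNet()  # in practice, load the weights learned during training
policy.eval()

# Each game tick: every agent's local observation goes through one batched
# forward pass on the game server, with millisecond latency and no API calls.
observations = torch.randn(500, OBS_DIM)  # 500 concurrent agents
with torch.no_grad():
    actions = policy(observations).argmax(dim=-1)
print(actions.shape)  # torch.Size([500]): one action per agent, per tick
```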
RL's real advantage shows when you need these agents to collaborate. LLM agents require lengthy "dialogues" to coordinate, whereas RL agents form an implicit understanding during training, like a football team that has practiced together for months: they learn to anticipate each other's actions and coordinate naturally. It is not perfect, and they will make mistakes an LLM would not, but they can operate at a scale LLMs cannot match. For games, that trade-off usually makes sense.


Agent-based NPCs can solve the first core problem many games face today: player retention. Play-to-earn (P2E) was the first experiment in using crypto-economics to solve retention, and we all know how that turned out.
Pre-trained agents serve two purposes:
- Populating the world in multiplayer games
- Maintaining an appropriate level of difficulty for a group of players, keeping them in the flow state (a minimal backfill sketch below illustrates both purposes)
While this sounds obvious, it is hard to build. Indie games and early Web3 games don't have the resources to hire AI teams, which creates an opportunity for any provider of an RL-based agent framework as a service.
Games can partner with these providers during their playtest and beta phases to lay the groundwork for player retention at launch.
This lets developers focus on game mechanics and on making their games more fun. Much as we love integrating tokens into games, games should ultimately be fun to play.
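To illustrate both purposes from the list above, here is a hypothetical matchmaking sketch: if a lobby cannot fill with similarly skilled humans in time, pre-trained agents whose ratings sit near the lobby average take the empty slots, so matches start promptly and stay in the flow zone. The lobby size, rating scale, and names are illustrative assumptions, not any game's real matchmaker.

```python
import random
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    rating: float       # matchmaking rating on an illustrative MMR-style scale
    is_agent: bool = False

LOBBY_SIZE = 10

def backfill_lobby(humans: list[Participant]) -> list[Participant]:
    """Fill empty slots with pre-trained agents near the humans' average rating."""
    lobby = list(humans)
    target = sum(p.rating for p in humans) / len(humans)
    while len(lobby) < LOBBY_SIZE:
        # Draw an agent whose trained skill sits close to the lobby average,
        # so the match stays challenging but winnable (the flow zone).
        rating = random.gauss(target, 50)
        lobby.append(Participant(f"agent_{len(lobby)}", rating, is_agent=True))
    return lobby

humans = [Participant("alice", 1510), Participant("bob", 1475), Participant("carol", 1532)]
for p in backfill_lobby(humans):
    print(f"{p.name:>8}  rating={p.rating:6.0f}  {'bot' if p.is_agent else 'human'}")
```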
Agent Players
A return to the Metaverse?
One of the world's most played games, League of Legends, has a black market in which players pay to have accounts trained up with the best attributes and ranks, a practice the game prohibits.
This lays the groundwork for game characters and their attributes to become non-fungible tokens (NFTs), with an open market enabling such trades.
What if a new subset of "players" emerges as coaches for these AI agents? Players could guide the agents and monetize them in different ways: winning matches, competing as esports players, or serving as "training partners" for dedicated players.

Early versions of the Metaverse may have simply created another reality rather than an ideal one, falling short of their goal. AI agents can help Metaverse residents build an ideal world: an escape.
This, in my view, is where LLM-based agents can shine. Imagine adding pre-trained agents, each a domain expert in its own world, to hold conversations about the things residents enjoy. If I created an agent trained on 1,000 hours of Elon Musk interviews, and users wanted to run an instance of it in their worlds, I could be rewarded for that. This could create new economies.
With Metaverse games like Nifty Island, this could become a reality.
In Today: The Game, the team has already created an LLM-based AI agent called "Limbo"; the vision is for multiple agents to interact autonomously in this world while we watch a 24/7 livestream.
How can Crypto integrate with this?
Crypto can help solve these problems in various ways:
- Players contributing their game data to improve the models, getting better experiences and being rewarded for it (a minimal sketch follows this list)
- Coordinating stakeholders such as character designers and trainers to create the best in-game agents
- Creating a market for ownership of in-game agents and ways to monetize them
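As a minimal, hypothetical sketch of the first point: track each player's gameplay-data contributions and split a reward pool pro rata. This is illustrative off-chain bookkeeping only; a real system would settle balances on-chain and weight contributions by data quality, and none of the names here come from any specific project.

```python
from collections import defaultdict

class ContributionLedger:
    """Tracks gameplay-data contributions and splits a reward pool pro rata.
    Illustrative off-chain bookkeeping, not any project's actual mechanism."""

    def __init__(self):
        self.frames_contributed = defaultdict(int)

    def record(self, player: str, frames: int) -> None:
        self.frames_contributed[player] += frames

    def distribute(self, reward_pool: float) -> dict[str, float]:
        total = sum(self.frames_contributed.values())
        if total == 0:
            return {}
        return {
            player: reward_pool * frames / total
            for player, frames in self.frames_contributed.items()
        }

ledger = ContributionLedger()
ledger.record("alice", 12_000)  # frames of ranked-match telemetry contributed
ledger.record("bob", 4_000)
print(ledger.distribute(reward_pool=1_000.0))  # {'alice': 750.0, 'bob': 250.0}
```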
One team, ARC Agents, is tackling all of the problems above, and more.
Their ARC SDK lets game developers create human-like AI agents based on game parameters. Through a simple integration, it addresses player retention, cleans game data and turns it into insights, and helps keep players in the flow state by adjusting difficulty. Under the hood, it uses reinforcement learning (RL).
They initially developed a game called "AI Arena", in which you essentially train your AI characters to fight. This gave them a baseline of learned models that forms the foundation of the ARC SDK, creating a kind of DePIN flywheel effect:

The Chain of Thought team has explained this well in their article on ARC agents:

Games like Bounty are taking an agent-first approach, building agents from scratch in a wild-west world.
Conclusion
The convergence of AI agents, game design, and crypto is not just another tech trend; it has the potential to solve many of the problems plaguing indie games. The beauty of AI agents in gaming is that they enhance the core of what makes games fun: good competition, rich interaction, and compelling challenge. As frameworks like ARC Agents mature and more games integrate AI agents, we are likely to see entirely new gaming experiences emerge. Imagine a world that feels alive not because of other players, but because the agents within it learn and evolve alongside the community.
We're transitioning from a "play-to-earn" era to a more exciting one: games that are truly fun and infinitely scalable. The next few years will be incredibly exciting for developers, players, and investors focused on this space. Games in 2025 and beyond will not only be more technologically advanced, but fundamentally more engaging, accessible, and alive than anything we've seen before.



