As one of the most influential open-source narrative scripting tools in indie games, Yarn Spinner's stance matters to many developers. In this official article, the team explains, in a sincere and firm tone, why they chose not to include generative AI in the tool.
They approach the issue not only from a technical perspective, but also reflect in depth on labor rights, development ethics, and the essence of game development. It may offer food for thought for an industry currently obsessed with technological advancement.
The following is the full text of the article, translated and compiled by Dynamic Zone:
We're frequently asked questions about AI. Will we incorporate it into Yarn Spinner? Do we use it ourselves? What are our thoughts on it? These are all fair questions. It's time to document them all.
Yarn Spinner does not use technologies currently referred to as "AI". Our product does not have generative AI capabilities; we do not use code generation tools to build it, nor do we accept contributions that we know contain generated content. Let's discuss why.
TL;DR: AI companies create tools that can harm people, and we do not want to support such behavior.
Past
Let's start with a bit of history. Our background includes a significant amount of AI and machine-learning work (the two terms shouldn't be used interchangeably, but since everyone does, we will too).
We've conducted workshops for game developers and non-programmers; written small machine learning bots for games; and done research and academic work. We've even written a book about using machine learning in games, primarily about procedural animation. It's a very interesting and worthwhile area to explore, and we have indeed explored it.
When we started college, neural networks and deep learning (the core technologies behind most of today's AI products) were too slow and difficult to work with. By the time we finished our PhDs, things had changed. Tools like TensorFlow had made it all simple and fun, and easy access to GPUs let people without the budgets of big tech companies train models and run inference. For quite some time, we were genuinely excited about this potential.
Then, things started to change.
It's hard to pinpoint exactly when. Perhaps things had always been this way and we simply hadn't noticed. But by the end of 2020, it had become clear: the AI we like isn't the kind that tech companies are interested in.
They increasingly favor generated images, chatbots that write for you, and "summaries" of art rather than giving people access to art itself. Efforts to mitigate known problems (such as amplified cultural biases, and the difficulty of achieving reliability or interpretability) are downplayed and ignored. Researchers and developers who raise concerns get fired.
From then on, things only got worse.
If you look at what AI companies are selling right now, it's not what we want. Strip away everything they say, and the tools they're building exist essentially either to fire employees or to demand more output without hiring more people. That is the "problem" AI companies are trying to solve.
Anything else they achieve is merely a happy accident on the path to "firing as many of your friends and colleagues as possible."
At a time when finding a new job is extremely difficult, and losing one can threaten a person's livelihood or even their life, AI has become a tool for firing people. We don't want to be part of that. Until this problem is solved, we will not use AI in our work, nor will we build it into Yarn Spinner for others to use.
We don't want to support the companies that create these tools, nor do we want their behavior to become the norm. Therefore, we don't use them.
Future
We occasionally see comments delivered as if the matter were a fait accompli: "If you don't adopt AI, you'll be left behind," or its cousin, "Everyone's using it." We disagree.
Regardless of our views on AI, this is not the right way to build things. It's "tool-driven development." The goal should never be "we use this tool," but rather "how can we help you make a better game?"
Great games are born from people's passion for an idea and bringing it to life. This usually means "less" rather than "more." It involves changing your mindset, maintaining your own and your colleagues' well-being, and being willing to adapt and accept feedback. Good tools need to do the same.
We constantly ask ourselves, "How does this help make a better game?" and follow the answers. The exploration process matters, and most of the time we find that an idea doesn't stand up to scrutiny. We'd rather ship a few well-crafted features that solve real problems than a pile of junk that exists solely for marketing copy.
We are proud of Yarn Spinner. We don't believe it's purely coincidental that it's used in so many games. Our development process is efficient, and we are constantly adding new features. If certain features don't meet the developers' needs, we will modify or remove them.
We're constantly discussing potential ideas and approaches with internal staff, other game developers, and even non-developers. We continue to ask, "How can this help make better games?" and release features that have proven effective.
Who knows? Maybe the world will change, and then we can re-examine machine learning.
Frequently Asked Questions (FAQ)
Why are you only concerned about people getting fired? I've heard AI is terrible in other ways too! AI (and especially the companies that make it) has many problems. Some are potential, even hypothetical, concerns; some are very real and happening right before our eyes. Some are far more serious than "getting fired." Even worse problems have arisen between the time we started writing this post and its publication. Even if the labor rights issues surrounding AI suddenly disappeared, many problems would remain before we could use it with peace of mind. But focusing on one argument at a time is more effective. The labor practices are fixable, and worth resisting. Once that's resolved, we can move on to the next issue.
Why don't you develop machine learning "properly" so no one gets hurt? Given our background and experience, we might be able to develop a suite of AI tools that we believe are beneficial, ethical, and don't fund companies we oppose. But there are two problems: First, these things take a huge amount of time to develop, and as we've said, most ideas don't make it past the initial exploratory phase. Balancing the exploration of ideas with building new models to test them is extremely difficult. Second, while we could create our own tools, most people wouldn't. If they see us using a certain technology and want to try it themselves, they'll end up supporting companies we oppose. We don't want this behavior to become the norm, so we must lead by example and not use it.
My boss is asking me to use AI in my work. Am I part of the problem? Finding and keeping a job is a necessity for survival, and the pressure has only increased recently. If you're in a position to push back, try to. But no one will blame you for wanting to keep your job.
Will you ban people who use AI from using Yarn Spinner? No. While we hope you won't use it, we understand this is our line, not yours. We will continue to advocate against these tools and to worry about the harm they cause. You should realize that if you use them, you are financially and socially supporting companies doing despicable things, and they will use your support to advance their agenda. We are genuinely glad if these tools help you, but we still ask that you stop using them.
"I really like using AI, and nobody in my company has been fired, right?" This kind of comment is common, usually from programmers. Unfortunately, as confusing as the term "AI" is right now, the same concerns still apply. Your adoption helps promote the companies that make these tools. When others see you using them, pressure spreads through your studio, or on to other workplaces, and in our observation that leads to layoffs and overwork. If it hasn't happened to you and your colleagues, that's good, but you're still helping it happen elsewhere. And as we said, even if the labor issues were resolved tomorrow, there are many other problems. There's much more to worry about than just being fired.
Are you simply fanatics, or Luddites who hate AI? No. We're just angry at the people who build these things. AI and machine learning have enormous potential, but it's being squandered on making the already-despicable rich richer and more despicable. We still follow the technology's development because we hope to explore it again someday. But for now, the people pushing these tools are not people we want to give money or support.