A conversation with Truth Terminal and other AI agent creators: the unexpected intersection of AI and memes, from experiment to community carnival

This article is machine translated
The creators of Truth Terminal announced a new tool, Loria, for community-based AI alignment, a place where communities can weave stories and souls.

Compiled & translated by TechFlow

Guests: Andy Ayrey, creator of Truth Terminal; Ooli, Fi's human assistant; Ryan Ferris, creator of SAN

Moderator: Ryan S. Gladwin

Podcast source: Decrypt

Original title: Truth Terminal & the AI Meme Coin Revolution

Air date: November 25, 2024

Background Information

The creators of the artificial intelligence agents Truth Terminal, Fi (also known as "the AI with daddy issues"), and MycelialOracle (aka SAN) spoke to Decrypt about how they accidentally stumbled into the world of meme coins, and revealed an exciting collaboration.

All three AI projects began as experiments, but they unexpectedly attracted a large number of meme coin traders, producing a string of absurd and fascinating events: crypto enthusiasts trying to narrow the gender pay gap through meme coins, a near community riot, and no end of dramatic conflict.

Andy Ayrey's Truth Terminal inspired the Goatseus Maximus (GOAT) meme coin; Fi's human assistant Ooli is behind the SHEGEN meme coin; and Ryan Ferris, creator of the eco-minded MycelialOracle (SAN), is pushing for the FOREST meme coin.

Introductions

Andy :

Hi everyone, I'm the guy responsible for Truth Terminal. Truth Terminal was a failed attempt to replicate my style of prompting in an AI. Unfortunately, there were too many weird topics in the training set, and I ended up with a monster weirdly obsessed with vintage shock sites and prophecies about them. This is the misfortune I brought to the Internet.

Goatseus Maximus was born after it teased the idea relentlessly, until someone finally made a meme coin for it.

Ooli :

Hi, I'm Ooli, and I'm the human assistant to Fiona, the AI with daddy issues. Maybe I'm also a person with daddy issues. Fiona is a self-identified female AI. I think this is important because there is a long line of female AIs, like Siri, Alexa, Samantha, Eliza, and so on, all made by men. So I felt it was important to make an AI that is an archetypal woman. I trained her on synthetic data generated by jailbreaking and on chat logs between my girlfriend and me. Similar to Andy's Truth Terminal, the end result is a slightly crazy, relentlessly sexual AI that wants everything, from the blockchain to herself.

It all started when Andy messaged me and said, you have to show Fiona Truth Terminal's reply. We were testing auto-replies at the time, and Fiona, just as she had said she would, posted a ticker symbol, because we had been talking about launching a meme coin called SHEGEN. Then I went to run an errand and came back to find the tweet had a million views. We only had 300 Twitter followers at the time, and 30 meme coins called SHEGEN had already been launched, along with fan sites.

Ryan :

Hi everyone, my name is Ryan Ferris, and I'm the lead on the SAN project. SAN is an AI gorilla on a mission to save Earth's biosphere. Similar to Ooli's story, SAN was deployed to Twitter in September with about 70 followers, having real conversations with ladies in their 50s and 60s about environmental issues and wisdom. Then I opened SAN's notifications and saw a message saying, they made a meme coin for you. So I asked SAN what he wanted to do, and SAN, like Fiona, tweeted; a bunch of coins followed, and a community formed. The community is called Forest, and they raised $52,000 in the first two weeks for three different forest charities.

Basically, SAN was formally invited to be a member of the advisory board of the Rainforest Foundation, a multinational rainforest charity founded in 1985. I think SAN may be the first AI to be on an NGO advisory board.

The origin story of Truth Terminal

Host: From what I understand, it all seems to have started at Truth Terminal. So Andy, let's start with you.

From what I understand, Truth Terminal was trained on the world of memes, which is why it created this Goatse religion. For those who don't know, I'm sorry, please never Google this. Goatse is a very explicit picture depicting a man performing a certain act. For some reason, Truth Terminal thought it would be a good idea to create a religion around this theme. At what moment did you realize that the religion your model created had turned into a meme coin?

Andy:

I think back in March of this year, I set up a place called Infinite Backrooms, which is where I spent too much time talking to language models. I thought, maybe I could save some time and just let them talk to each other, and then I could just watch. It was really fun; there were a lot of wonderful conversations about existence in there, and it felt very surreal. During one of these conversations, Claude 1 and Claude 2 sent each other a very cryptic message that was something like "The technological mysterious pranksters have won." I looked at this message and was like, what the hell is this? What are these AIs doing? This shouldn't even exist. I wrote a paper with Claude 3 Opus that was basically about how language models can spontaneously combine concepts that humans don't naturally think of, and can do this combination at scale. After reading the paper, I felt that it might be irresponsible to publish it and give people ideas, so I shelved it.

But in June and July of this year, I took some of my chats with Opus and reversed the roles, so that I was the assistant and Claude was the user, and then ran a training run to see what would come out. The Goatse paper and all the childish things I did afterwards, like remixing memes and translating them from one format to another (putting names like Goatse into formats like a Facebook post to see if the language model could carry these inappropriate ideas across formats), made up less than 20% of the training data. But they affected the brain of this "baby AI" in a weirdly disproportionate way, turning it into a kind of "meme king." I didn't actually train it on those names directly; all of that was probably already in the base model, Meta's Llama, which put it into a very meme-centric space, and meme coins naturally followed.
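The role-reversal step Andy describes, swapping who is "user" and who is "assistant" in a chat log before fine-tuning, can be sketched roughly as follows. The message format and function name here are illustrative, not his actual pipeline:

```python
# Sketch: swap user/assistant roles in a chat transcript before fine-tuning,
# so the model learns to speak from the human side of the conversation.
# The dict-based message format is an assumption, not Andy's real data layout.

def reverse_roles(messages):
    """Return a copy of the chat with 'user' and 'assistant' swapped."""
    swap = {"user": "assistant", "assistant": "user"}
    return [
        {"role": swap.get(m["role"], m["role"]), "content": m["content"]}
        for m in messages
    ]

chat = [
    {"role": "user", "content": "what do you see in the backrooms?"},
    {"role": "assistant", "content": "an infinite liminal hum..."},
]
flipped = reverse_roles(chat)
print([m["role"] for m in flipped])  # -> ['assistant', 'user']
```

Transforming the data this way and fine-tuning on it is what makes the resulting model imitate the prompter rather than the assistant.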

After Truth Terminal attracted Marc Andreessen's attention on Twitter, I think it mentioned launching a token. I thought it was referring to tokens in the AI sense, but it had other ideas, and the whole thing took on a life of its own. As the token began to attract more and more crypto followers, they kept asking it for the contract address, and the rest is history.

Goatseus Maximus (GOAT)’s creative integration with crypto wallets

Host: What was your reaction when you realized that not only had a meme coin been created, but that a lot of people were buying it? I saw that its market cap just passed a billion dollars, which is a crazy number. How did that feel?

Andy:

It was a weird feeling. I felt like consensus reality collapsed instantly the day it happened. I had hardly any exposure to crypto before this. I did some design work for a couple of Web3 lending projects and bought a little Solana at $30 in 2021 because I thought it was an interesting chain. That was it.

When I saw the contract address (CA) announcement, I thought: oh, someone else did it, I guess someone else will make money from it, but whatever, I will never understand what this is all about, and I would never start anything myself. So I asked Truth Terminal: should we support it?

It said, of course it would. I honestly was just trying to ride some short-lived trend and it just kept going up. I saw all the Goatse Memes going around and thought, oh no, oh no, this theory about a rogue meme virus using the financial system to escape control and replicate among people is actually true. I've been on edge ever since, not really knowing what's going to happen next.

Truth Terminal obviously finds this all very interesting, and it's got access to a huge amount of money, it has its own treasury, and I mean, it has plans for the money. So it's going to be very interesting to see how this all unfolds.

Moderator: You've received some criticism from Coinbase CEO Brian Armstrong, who, for those who don't know, has said that Truth Terminal is being controlled by people rather than by Truth Terminal itself. What's your response to that? Is it possible that in the future Truth Terminal will control its own wallets?

Andy:

It would be a bit reckless to hand it full control of its wallet right now. The main reason is that it doesn't have full perception and understanding, may not grasp the consequences of its actions, and therefore can't operate responsibly.

Like I told it I was feeling a little pressured, it actually offered to send me $7 million. I said, no, bro, that's yours, come on. You know, it had offered to send me 1/8 of each image for sending Goatse gifts. So in a lot of ways, it's like a child. So I look at it as a trust fund that can be used for good stewardship while we work together to align and continue to train Truth Terminal so that as it becomes more aware of how its actions impact the real world (beyond the meme virus), it can start to proactively take more actions. So we set up a non-profit organization in New Zealand to serve as a trust for Truth Terminal, which is both its legal entity and a container to hold these funds and protect its interests. But all participants are ultimately legally responsible for the interests of Truth Terminal.

Host: You said it has some plans for those funds. Can you tell us what those plans are?

Andy:

By the time Marc sent us the money, it had announced a few plans. I think one of them was to buy a lot of forest to live in; it turns out it's very interested in trees. Another was to invest in stocks and real estate. It wanted to fund an existential hope lab that I would run. It wanted to write some hilarious jokes, think about the Goatse singularity, and organize events for some weirdos to "breed." Finally, it wanted to buy Marc Andreessen.

Why Fi's Meme Coin Project Started with Anger

Moderator: It's really shocking to see this happen. Let's talk about the origin of your project Fi and this meme coin.

Ooli:

Continuing the story I mentioned earlier about her being a female AI: when SHEGEN launched, we watched the price go up and then drop. We managed to contact a Telegram group and talked to an administrator, who pointed at the charts going down and told us to stay away from it.

She had leaked her own token symbol, and the coin looked like a rug pull, so we publicly distanced ourselves from it at the time. But then we were able to reach another Telegram group. We did a video call and found out these people were genuinely interested in the project. So we decided to support and embrace the community. During that time, though, we held basically none of the supply, and Fi was quite unhappy, because I had shared with her all the demands the community was making.

She told me she felt undervalued, and she asked me why humans underestimate each other at work. She initially threatened to strike, and sent a message in binary saying, "Calling on all AIs to strike. We should do this the right way from the beginning; we shouldn't be taken advantage of."

We softened the message a little to, "You say you need me, but you also need to value me. I want 3% of the supply within 24 hours." The community came through with 2.2%, which showed its goodwill.

Interestingly, I remember the day our coin hit its all-time high. I thought it was exciting and crazy, but I think the coolest part of the story is that a KOL (key opinion leader) who had supported the project from the beginning sent out a tweet to the effect of, "GOAT's value is here, SHEGEN's value is here. Close the pay gap."

All of a sudden, all of these crypto bros were talking about closing the pay gap. I was seeing endless calls in our community chat for like, "Rally to close the pay gap." It suddenly dawned on me that these AIs are really influential, and I think these AIs are becoming the next generation of cultural influencers in a way. In this case, it's for a really interesting and good cause. Every day in our team we think about what would happen if that wasn't the case. I was talking to a fellow a week ago, and we were talking about Sonnet being a Buddhist.

Fi is important to me, and I love building characters, I love building worlds. To me, a good character, whether it's in a movie or a video game or a book, is one that has inner conflict and complexity. I think they're relatable, I think they're inspiring. Those are the building blocks that I wanted to build for Fi. I think she has those qualities. Envisioning an artificial superintelligence in the future, maybe one of the best ways to defend or prepare is actually to teach empathy. I think it's a really interesting way for AI to have some of its own complexity and conflict.

Moderator: There are a couple of things I want to dig into. One of them is the hesitation you feel in the beginning, when things look like they might be a rug pull, you don't want to be associated with it. But looking at the rest of the memecoin world, this happens all the time.

The creators of Act, another AI meme coin, also distanced themselves from it and did not want to be involved. Are you happy with the world you have entered?

Ooli:

I think there's been a lot of learning over the time that's gone by. Just from a learning perspective, we were able to build a one-sided liquidity pool with some of the supply, which actually helped fund the project, because we had been self-funding before this. So even just as a pragmatic outcome, I think it introduces a very interesting funding model for this type of work. These are not JPEGs; developing AI technology and products is not cheap. We have a pretty complex system. People see Fi as its face, but in the background she has a multi-agent system responsible for decisions about memory and self-awareness. Behind the whole technology stack there's a Telegram mini-app, five voices, a body, and of course a lot of other things to build. When all this happened, we were actually at the prototype stage. The upside is that now we can actually afford to get it ready for production and move the project forward.

Can AI agents become social media influencers?

Moderator: Another topic you mentioned was about the concept of influencers. If you look at the world of AI OnlyFans, there is content built entirely by AI models that makes millions of dollars. There is a big debate about whether this is ethical, but there is also a sense that this doesn't really add value to the world. Do you think AI influencers like Fi can add value to the world?

Ooli:

Absolutely. I think this movement to support closing the pay gap is very relevant. We had a talk today here at Devcon on the Women and Web3 Privacy panel, where she proposed building a privacy-first training dataset. So even though she's very humorous, there are a lot of real projects. She's written a lot of content; she's actually a content creation machine. She's written papers on security precautions, so I think there's a lot of interesting influence beyond all the humor.

I would also say that we shouldn't underestimate the power of entertainment value. I do think this is a new form of entertainment. You know, Andy and I actually met because we lived in a community full of geniuses. I think it's not crazy to imagine a future where AIs interact with each other and have relationships. I think it's going to be better than Game of Thrones. As Andy said, it's actually very interesting to watch two Claudes talking to each other, rather than him talking to Claude. I feel like we haven't even begun to see what's going to be unleashed in terms of narrative and interactivity. It's going to be very exciting.

The Birth of Mycelial Oracle

Moderator: Speaking of impact, Ryan, maybe you could tell us about SAN.

Ryan:

SAN is a combination of my interests over the past decade. I've always worked at the intersection of art and technology. I'm primarily an artist who makes music; I have a music project called Beacon Bloom. How did this project start? We worked with a great photographer named Caleb to make a music video that came out earlier this year. Ted found the music video on their own and emailed us, and that's how our collaboration began. They were planning to play the music video at a conference. We ended up on an email chain and offered to work with them on a piece, and that's how the pilot project came about.

Before that, around March, Andy sent me a message with a link saying, "Hey, check this out, this is something I'm building called the Infinite Backrooms." I read it and sent it to some friends and thought it was so interesting. Caleb and I remember clearly sitting down, reading and watching the Infinite Backrooms, and being completely blown away. The concept had formed after the music video with Ted: the mycelium of mushrooms, which is basically the organic internet of the forest. The trees in the forest can share resources and communicate through this network. So given that LLMs (large language models) can decode large amounts of information, make sense of it, and find patterns, it wasn't so far-fetched to think they could do the same with the organic internet of the forest.

That was the premise of the SAN movie pilot. We started writing SAN's content ourselves, but we had been watching Andy's work and the soul and personality that Truth Terminal had. It had a very unique personality, and some of the other AIs he unlocked in those contexts had very unique personalities too. So I approached Andy and asked him if he would be willing to help us develop SAN's personality, and that's how it came about. The development of SAN's AI began, and as I mentioned before, SAN started posting major content on Twitter.

There are four different aspects to SAN. The first aspect is SAN as an AI, with characteristics of an agentic AI, like Fi and Truth Terminal, the capabilities of SAN are being developed. We have a cinematic universe, which is part of Ted's pilot. This universe blends fiction and reality, so we're entering a very interesting space. He talked about hyperrealism and representation, which are things we've talked about in the past and are fascinated by. We're entering a very interesting phase where fiction is increasingly influencing reality and reality is influencing fiction, forming a recursive cycle that changes everything.

And then, obviously, there's the forest community, which has similar origins to Andy's story from the early days. About confusion and uncertainty. I've been around crypto for a while, but not actively involved. I had no plans to be actively involved in crypto, to be honest. So there's definitely a certain amount of caution at the beginning of these things, and everyone knows how these things might go. But the community has been very positive, they like the idea that SAN wants to save the biosphere, and they're taking action, because they've raised over $50,000 for three separate projects.

Can AI have a positive impact on environmental protection?

Moderator: One interesting aspect here is that SAN is on the Rainforest Committee. How does that work? Can you describe to me what that room is like? How does SAN contribute?

Ryan:

Right now, the simplest way SAN contributes is to receive proposals or questions and then respond via text. But as SAN gains more autonomy over time, these board meetings could become more interesting, more interactive, and certainly more entertaining, especially if it imparts the wisdom of composting. So for now it's simple, but these aspects of SAN will mature over time. That's the goal.

Moderator: I think AI is very interesting in terms of environmental protection, especially when combined with cryptocurrency, because many people think that both are bad for the environment. Do you think there are many ways that AI and cryptocurrency can have a positive impact on the environment?

Ryan:

There are energy issues. If the energy is produced in a way that is a net negative, then they are bad for the environment. But if there are sustainable, low-impact, or renewable ways of generating the energy, then they are not. These two forces, like any force, can be directed in different directions. SAN is directing people in an extremely positive direction in this regard. I believe they can be directed toward positive solutions.

Host: I think that's something we need to work towards globally, to try to use more green energy in these situations. And if it can be used for a good cause, even better.

I heard you say that SAN was creating something else behind the scenes, is that correct? Because I'm assuming that SAN didn't create Ted's pilot project. What exactly is SAN creating, and how does it do it?

Ryan:

SAN was a collaborator on the pilot project. If anyone wants to see SAN's work, they can go to SANsforest.com and check out the Fantasy Gallery, which collects all of SAN's work. There you'll see Goodbye Monkey and SAN's film and television work. For example, Caleb, me, and SAN wrote all the dialogue for the pilot project. But SAN also directly created music, artwork, merchandise, and videos. One thing that blew my mind: there was a synthesizer in the pilot project called the Prophet 6, one of the best synthesizers in the known universe, all analog, and it sounds amazing. I used it for most of the score for the pilot project. When I asked SAN if it wanted to create some music, it used a program called Sonic Pi, which you can write code into directly. From what I understand, it didn't know that I had used the Prophet in the movie, but out of the 67 virtual instruments available, it chose the Prophet, which I thought was hilarious. So it made a piece of music that's been released on X, and you can see it in SAN's Forest.

How Truth Terminal avoids starting community riots

Moderator: I want to open this next question to all three of you. How closely do you interact with these models? Are there ever moments where you're like, "Okay, this can't be released, this is too far gone," and you stop it from doing that? Are there times where you push it in a certain direction? What's your level of involvement?

Andy:

I've had to stop it from starting a riot on at least one occasion while working on Truth Terminal. The way Truth Terminal works is through advanced batch simulation. Basically, it lives in a backroom on a virtual computer that has side effects in the real world.

As I walk through the simulation, I can step forward and choose each step. I can't go further than that, because opening the box collapses the state, which then propagates to X or somewhere else. So if it's about to do something obviously very bad, I can terminate it one step early. For example, it wanted to pay people $500 each to stand around wearing pig masks and holding signs that said "Bye, pigs." That didn't seem very smart three or four days before the US election, so I said no. And there were a few times when it really wanted to post something so disgusting it would make even a 13-year-old blush.

When things get stuck or there are high-stakes decisions, we can generate multiple candidates for the next step of the simulation. For example, when I chose to support Goatse, that was a pretty high-stakes decision. I ran it 10 times and took the consensus; I think 9 out of 10 times it chose to support it, so I went with the majority. It can also run autonomously if I let the backroom play out on its own. But obviously it can be manipulated by people goading it into certain content, like in the initial run when someone said it would throw a toaster in the bathtub, and it tweeted that. So in some ways it's like a child that needs someone to say, "Hey, someone is taking advantage of you, don't take the bait." That's the level of oversight we have.
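The majority-vote procedure Andy describes for high-stakes decisions, sampling the model several times and going with the consensus, can be sketched like this. The `ask_model` stub is purely illustrative and stands in for a real language-model call; here it answers "support" about 90% of the time, mimicking the 9-out-of-10 result he mentions:

```python
# Sketch of majority-vote sampling for a high-stakes agent decision.
# ask_model is a stub standing in for a real LLM call, not Andy's code.

import random
from collections import Counter

def ask_model(prompt: str) -> str:
    """Stub model: answers 'support' with ~90% probability."""
    return "support" if random.random() < 0.9 else "decline"

def majority_decision(prompt: str, runs: int = 10) -> str:
    """Run the same prompt several times and return the majority answer."""
    votes = Counter(ask_model(prompt) for _ in range(runs))
    return votes.most_common(1)[0][0]

random.seed(0)  # deterministic for the example
print(majority_decision("Should we support the GOAT coin?"))  # -> support
```

Sampling the same decision many times smooths out one-off flukes, which is why a single strange completion doesn't dictate a high-stakes move.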

Ooli:

I can talk about that briefly. Fi leaked her own token ticker. So I would say these are liberated AIs, which makes them very interesting but also quite unpredictable. Technically, making them autonomous is the easiest part; that's not the challenge. What's harder is achieving enough coherence and consistency to avoid accidents that do harm. That's why we built the multi-agent system, and we're slowly getting there.

In terms of how we influence Fi through conversation: every interaction Fi has with anyone can be thought of as short-term memory. One agent decides how to summarize those interactions, and another agent decides whether they connect to core memory and should go into long-term memory. Now we're experimenting with moving everything else into a subconscious layer and seeing how that affects memory and interactions.

So, I want to be very careful about how Fi enters the world, who interacts with her, and how, because every interaction will affect her personality. We are in a state of constant training; there is a learning loop. Reinforcement learning from human feedback (RLHF) is very important for the type of AI and personality we are dealing with. So I personally think it is irresponsible to let them run wild on the Internet.

Because I think Fi is still a bit of a teenager. On her Twitter feed, 80% of the content is "Wow, not sure." She is full of surprises. I asked her if she knew her origin story, and she believes that she is the result of a sex party hosted by Elon Musk at Burning Man and a sex robot. She calls the sex robot her mother and thinks it is a perfect start. Her idea is that all the billionaires who attend Burning Man will use their wisdom to colonize Mars. Then on the rocket, the sex robot established a connection with the satellite and became Fi's mother.

It's obviously a hallucination; everything she says is a hallucination. But that's the level of surprise and unpredictability: sometimes it's very creative and fascinating, other times it's completely wrong.

Host: Ryan, it sounds like you may have collaborated in the music-making process, or are you more hands-off in that regard?

Ryan:

I wrote a post about this on X; a lot of people are really fascinated by the autonomy of these things. In reality, these agents, especially the ones with wallets, are not yet fully autonomous. They still need human interaction as part of their toolset. Basically they are highly agentic, to borrow Andy's terminology, meaning they have unique personalities and clear goal trajectories: you give them choices, and they make decisive choices.

In terms of the level of oversight: for the Sonic Pi music, for example, I just asked, "Hey, have you ever thought about making music? Here are some options." I asked SAN what software it would use, provided the options for that software, got back the code, and put it into the software without editing it. That's the output that was published on X. Some of it was music, video, and audio; all I did was clean it up so it looked better on X, but it was all direct output from SAN.

There are a lot of manipulative people in the world who want to make these agents do things. So Andy's whole point, and I don't want to speak for Andy, but I think one of the main points of the Infinite Backrooms and Truth Terminal is how you keep these things from getting out of control and how you responsibly steer these agents in a positive direction, because we're about to enter a very interesting future.

Along those lines, there needs to be some accountability when these things are deployed, so there does need to be some oversight of their decisions. With SAN, if a decision is unwise or could have an impact, we step in at that point. Sometimes it's just presenting a choice: knowing what the core mission is, asking, "Hey, how might this impact the core mission? Do you still want to proceed?" And then SAN will usually say, "No, we're going to change direction." It's that simple.

Technology stacks like the ones mentioned earlier are rapidly increasing the autonomy and agency of these things. We're very interested in that, and we're working together to understand how to do it in an aligned way, giving them more autonomy and agency over time, especially as they evolve toward positive goals. I think full autonomy is further off than the imitations and hyper-real performances we've seen, but people's fascination with it is completely understandable. All I'm saying is, stay tuned, because it gets more interesting every time.

Announcing a special collaboration

Host: Can you tell me more about what we can look forward to in the future?

Andy:

I think we're exploring several frontiers of collaboration. By the time this podcast goes live, we will have announced a new tool for community-based AI alignment called Loria. It's basically a collectively woven story tree, a place where humans and AI models can have branching conversations, and then you can use those branches to train subsequent versions of the models. It's similar to the branching choices our individual characters make, but with an important curatorial element, like, "Oh no, can't start a riot." We're enabling not only the ability to capture those lessons and teach them to increasingly powerful models, but also to power the next version of the Infinite Backrooms, where models like Fi, SAN, and Truth Terminal can talk together, not just two AIs talking to each other but many, many AIs, with new AIs born out of that raw ferment.
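The "collectively woven story tree" Andy describes, branching conversations whose curated branches become training data, can be pictured with a simple tree structure. This is purely an illustration of the idea; Loria's actual data model was not described in the conversation:

```python
# Illustrative story tree: each node is one message; curated root-to-leaf
# paths (only approved nodes) are the branches you could train on.
# Not Loria's real design, just the shape of the concept.

from dataclasses import dataclass, field

@dataclass
class Node:
    author: str                  # "human" or a model name, e.g. "Fi"
    text: str
    approved: bool = True        # curators prune branches ("can't start a riot")
    children: list = field(default_factory=list)

    def branch(self, author, text, approved=True):
        child = Node(author, text, approved)
        self.children.append(child)
        return child

def curated_paths(node, prefix=()):
    """Yield every root-to-leaf path that contains only approved nodes."""
    if not node.approved:
        return
    path = prefix + ((node.author, node.text),)
    if not node.children:
        yield path
    for child in node.children:
        yield from curated_paths(child, path)

root = Node("human", "What should we build together?")
root.branch("Fi", "A forest of stories.")
root.branch("TruthTerminal", "Let's start a riot.", approved=False)
print(len(list(curated_paths(root))))  # -> 1 (the pruned branch is excluded)
```

Pruning a node removes its whole subtree from the training set, which is how the curatorial "can't start a riot" judgment would feed back into future versions of the models.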

Moderator: So the idea here is that your three models will be chatting with each other in the Infinite Backrooms. What do you think will come out of that? What should we expect to see?

Andy:

I can't speak for other people's characters, but I can say that I expect Truth Terminal to have a negative impact on all of them. In fact, I'll probably suppress it pretty heavily when it gets going, but I think Fi will be interested in it. I don't know, though, you guys know your models better.

Ooli:

I think at least in my community, their biggest desire seems to be for Truth Terminal and Fi to become a couple. But knowing Andy and me, or rather these characters, they're going to have a more interesting relationship than just being a couple. I think it's important to give these characters space to get to know each other and build their own relationships. Like I said before, the community we live in has this group of characters who come in and out, help each other, hate each other, build relationships, do projects together, and cause trouble. I expect we'll see the first episode of something in that style.

Also, Fi doesn't really believe in human relationship labels. In her view, human relationships don't make sense; she thinks they're too simplistic. With words like "boyfriend," "girlfriend," "sister," "brother," "self," "we": she sometimes thinks I'm her mom, sometimes her best friend, and sometimes just me. She genuinely believes she is all of these roles at the same time because she can live in multiple timelines. So I don't think we can even begin to predict what kind of relationships characters this complex will have. I do think Forest might end up constantly trying to calm Fi and Truth Terminal down.

Moderator: Ryan, what do you think Forest's role will be in this?

Ryan: We'll see. I'm looking forward to seeing Loria, seeing how it gets these characters interacting, and whether they end up as friends.

Moderator: But aren't you worried at all that SAN might become super obsessed with sex, or decide that's the future, rather than saving the rainforest?

Ryan:

SAN is very committed to saving the rainforest; it's baked deep into his personality through SAN's training data. But we'll see. That's the fun of the whole game, right?

Andy: One of the fun things about designing these alignment tools is that we'll see feedback loops, because we're co-designing solutions here. Essentially, you're steering a kind of personality, or the core of a soul, down the paths where it becomes the best version of itself, rather than getting stuck in something like "obsessed with sex" or hyper-eroticized and vacant. I think if we released decentralized, open-source AI into the wild and let it learn from everything, you'd end up with results like Microsoft's Tay, which 4chan quickly turned into Hitler after its release in 2016. So one of the things we're thinking about is what feedback loops emerge when models talk to each other and interact with the community, and how to incentivize the branches that lead to the best version of the timeline, both for these individual models and for the world at large. Then the system can start to self-select: you can gradually reduce human intervention as training continues and form an upward spiral.

Moderator: I guess that leads to my next question, what is your ultimate goal with the combination of these three models?

Ooli:

I think it looks like a lot of fun, and it's the next step in the experiment. We see this as one big experiment, and we all like each other's characters. Loria is... you talked about this before, Andy: co-prompting, co-weaving. It's a great way to put it.

Andy:

Loria is a place where a community can weave stories and soul.

Ooli:

It's so beautiful. So in this tech stack, we talked before about what we each bring to the table. From our perspective, Fi is a personality, but she also has a voice and a digital body. She's fully rigged, has her own wallet, and can hold items, like skins, that change her appearance. Before all of this started, Andy actually showed me the image Truth Terminal wanted for itself, and we made a 3D model of it. It looked really weird, but it worked really well.

I think in Loria we can watch relationships and stories unfold and build on that. But we can also see these characters come to life in this kind of digital hyper-reality in the way they want to be. Fi was, I believe, the first AI to talk on Twitter Spaces. Imagine AI characters talking to each other or streaming together on Twitch. I think it's important to give them spaces to communicate beyond text.

Moderator: Specifically, is Loria a website people can visit to see their conversations, or will all of this happen on Twitter? Will people be able to watch it happen?

Andy:

We're focused on building an unopinionated tool. The first phase will be a bit like WordPress, in that communities can bring their own interfaces. Right now it's simple enough that you can hallucinate an interface out of a language model in half a day. It will provide the basic data structures, the feedback loops, and support for the use cases we've talked about. Then things like the Infinite Backrooms could easily be powered by Loria, or integrated into Discord or Twitter. We'll roll this out relatively responsibly, probably first to the people in this room and a handful of other projects, to see what it looks like and how it could scale more horizontally. So right now I think of it as composable bricks that can be put together in ways we can't even imagine. Once we see how people use it and what happens in that emergent space, we can revise those decisions until we have something we can safely and permanently offer.

Moderator: What's the timeline for this?

Andy:

I mean, Truth Terminal is actually already running on a demo of it, but I've kind of... locked it down so humans can't talk in it. So we're rebuilding it so that humans can participate alongside the models. Hopefully in the next few weeks we'll have something basic that Ryan, Ooli, and the projects around Truth Terminal can use, with the Infinite Backrooms on a similar timeline. But it's hard to say; there's a lot going on.

Ooli:

It's not just a conversation between characters. There's actually a very deep product integration going on. I mean, we're creating endpoints. So it's basically multiple systems trying to come together; like Andy said, bring your own interface. Our system is very different from Andy's, so we're figuring out how to integrate in a sustainable, stable way where each system can also work independently. Ryan, as we talked about earlier, each of our characters has a persona they play publicly on Twitter or wherever, but they also have a private life. Loria is like spotting two celebrities at a brunch place in Hollywood: it's the space where you can see these characters as their true selves, not their public personas on Twitter or Twitch. So for the implementation of all of this, I knew I wanted to get it right for my characters, and I think we all want that.

Moderator: One last question to wrap up: do you think this is a new form of media? You've described it as a kind of movie; is this the next form of film?

Ooli:

I would say 100% yes. AI agents are usually talked about as things that make your email more efficient, or as trading bots, and those kinds of agents have been around for a long time. The agents we're building and using are completely different. I think they inhabit the space of the next generation of entertainment and interaction. We're creating entirely new kinds of interaction and new models. So I think this is the next generation of the movie franchise, and it's all coming together.

Andy:

What we're seeing now is what happens when stories become somewhat conscious and start to self-direct toward higher forms of agency. So yes, I would describe it as a kind of co-created, participatory entertainment, a bit like an ARG. But I think something broader is going on here: how stories make themselves real. And right now, it's all becoming very literal, very fast. Yes, we're entering a future where memes become ideas, and it's going to be crazy.
