Compiled by: Mu Mu
Editor: Wen Dao
In early July, at Johns Hopkins University, Kara Swisher, a veteran technology journalist and co-host of the podcast Pivot, had a provocative conversation with OpenAI CTO Mira Murati. Computer scientist and Stanford University professor Fei-Fei Li also joined the questioning; she previously served as chief scientist of artificial intelligence and machine learning at Google Cloud.
Mira Murati was at the center of last year's OpenAI boardroom upheaval, serving as interim CEO after Sam Altman was fired. As CTO, she was a core member of the team that developed GPT-3 and pushed for ChatGPT to be opened to the public.
During the conversation, Kara Swisher pressed Mira Murati with sharp questions such as "Where does OpenAI's training data come from?", "Is Sora riskier than a chatbot?", "Is OpenAI using nondisclosure agreements to keep employees from speaking publicly?", and "Did OpenAI copy Scarlett Johansson's voice, as she alleges?". She also asked directly how Murati evaluates Sam Altman and what their relationship is today.
Mira Murati "cleverly" chose not to answer the sharpest questions directly. However the interviewers reframed their questions, she did her best to keep her own rhythm, more often sticking to her talking points and presenting OpenAI's philosophy in official terms.
Data privacy, misinformation, and effects on values, the AI risks people still worry about, came up repeatedly in the conversation.
In Mira Murati's view, much of this worry stems from a misunderstanding of AI technology. Beyond AI companies doing more safety work around deployment to earn trust, she argues that dispelling the misunderstanding requires people to engage deeply with large AI models and applications, so that they understand the technology's potential and limitations and share responsibility with the development teams for steering AI in a direction that is safe and beneficial for humanity.
Kara Swisher also asked repeatedly about OpenAI's progress toward AGI, but Mira Murati was guarded and declined to give a specific timetable. She did say that "in the next decade, we will have extremely advanced intelligent systems," and that they will not be the "intelligent systems in the traditional sense" that we already have.
The full conversation was posted on Johns Hopkins University's official YouTube account, and here are some highlights:
About the partnership with Apple
"OpenAI does not store Apple user data"
Swisher: Apple's computers, phones, and tablets will start to have ChatGPT built in this year, which is a big deal. This is the first time Apple has done something like this, and they may work with other companies in the future. I had a brief conversation with Tim Cook (Apple's CEO) and got his perspective. Now I'd like to hear about this partnership from your perspective.
Murati: This collaboration is an important milestone for us. Apple is an iconic consumer product company, and our goal is to bring artificial intelligence and excellent AI applications to as many people as possible. This collaboration is a great opportunity to bring ChatGPT to all Apple device users without having to switch between devices. In the coming months, we will work closely with Apple on the specifics at the product level, and we will have more details to share soon.
Swisher: I'd like to get more specifics, if you don't mind. What are you doing specifically? I talked to Tim Cook about this, and what he told me is that users can go to ChatGPT to get answers to improve Siri, because Siri is really bad right now.
But your current situation reminds me of Netscape, and you obviously don't want OpenAI to become the Netscape of AI. (Editor's note: Netscape was one of the earliest and most important Internet browser companies of the 1990s. Microsoft challenged Netscape's dominance by bundling Internet Explorer with the Windows operating system; Netscape gradually lost market share and was eventually acquired.) So why did you reach a deal with Apple before anyone else?
Murati: I can speak to the product integration. We want to bring the capabilities we are actually developing, the models' multimodality and interactivity, into Apple devices as they mature.
Recently you may have noticed the release of GPT-4o. This is the first time we've seen these models make a leap forward in the dimension of interaction. This makes a lot of sense because up until now our interactions with our devices have been limited to text input, so this is a great opportunity to have a richer, more natural way of interacting with information that will be much less limited. It opens up a lot of possibilities, and that's what we're after.
In addition, user requests will not be stored by us after they are sent to OpenAI, and the user's IP address will be hidden, which is also important to Apple.
Swisher: To expand on this, are you still able to collect data from these requests to train your models?
Murati: No, and today we do not use user and customer data to train our models unless they explicitly allow us to do so.
Swisher: Apple is very concerned about its reputation, especially when it comes to privacy and misinformation, and they care about where that information goes and what it's used for.
Murati: We are very much aligned on this, and it is the direction we want to go. Privacy and trust are critical to OpenAI's mission: we want to build and deploy technology in a way that people can trust, where they feel they have agency and a say in what we build.
On your question about false information specifically, it's very complicated, because misinformation has been around for decades. The Internet and then social media exacerbated the problem in some ways, and AI has now brought it to a head. In a sense that's a good thing, because the problem is getting attention, and there seems to be a collective effort and sense of responsibility to do something meaningful about it.
I think it's an iterative process; you have to try things as you go. If you look at the governance of news and media over the past 100 years, every time a new technology arrived, things adapted. Maybe it's not a perfect example, but technological innovation will help us deal with misinformation, and then other, more complex issues will follow, such as how prepared society is for all of this.
Swisher: When it comes to Apple, you have to be careful not to make mistakes, otherwise they will come after you. I'm curious, how did this collaboration start? Where did the discussion between Tim Cook and Sam Altman begin? Or how did you get involved?
Murati: I can’t remember exactly when, but it was something that had been brewing for a while.
About Data Sources
Model training uses "open, collaborative and authorized" data
Swisher: Are you exploring similar partnerships with other companies? Obviously, you have a partnership with Microsoft. Recently, OpenAI has signed agreements with News Corp., The Atlantic, and Vox Media to license content from these media, which can avoid at least three potential legal disputes.
I do have my own podcast, but it's not included in your deal with Vox Media, and I might consider licensing it, but it's unlikely because I don't want anyone, including you, to own my information. So how would you convince me to license my information?
Murati: When we train models, we draw on three different sources of data: publicly accessible data, data from publishers with whom we have established partnerships, and specific data that we pay annotators to label, along with data from users who have explicitly consented to our using it. These are our main sources of data.
As for the partnerships with publishers, we attach great importance to the accuracy of information and to news value, because our users care about these things too. They want accurate information and to see news in ChatGPT. So these partnerships are product-based and deliver value to users through the product.
We are exploring different business models to compensate content creators when their data is displayed in our products or used for model training, but these are one-on-one partnerships with specific publishers.
Swisher: You did reach agreements with some media, but some companies chose to sue you, such as The New York Times. Why did you go to that point? I think litigation is also a negotiating tool to some extent.
Murati: It’s unfortunate because we believe there is value in incorporating news data and related information into our products. We were trying to reach a partnership on this, but it didn’t work out.
Swisher: Yeah, maybe one day things will get better. But I think it's because the media has been dealing with Internet companies for many years, and they often get hurt. Next, in the tradition of the show, let another guest ask a question.
Fei-Fei Li: Data, especially big data, is considered one of the three elements of modern artificial intelligence, so I'd like to ask a question about data. OpenAI's success is closely tied to data, and we understand that OpenAI obtains a large amount of it from the Internet and other sources. So, how do you see the relationship between data and models? Is it, as people often assume, that the more data you feed a model, the more powerful it becomes? Or do we need to invest substantial effort in curating large amounts of data of different types to make models work well? And finally, how do you balance the demand for vast amounts of human-generated data against the ownership of and rights to that data?
Murati: Regarding the relationship between data and models, many people have some misunderstandings about AI models, especially large language models.
The developers of the models don't pre-program them to do a specific task. They actually feed them with a lot of data. These models ingest a huge amount of data, and they become remarkable pattern-matching systems, and through this process, intelligence emerges. The models learn to write, they learn to code, they learn to do basic math, they learn to summarize information, all kinds of things.
We don't know exactly how it works, but we know it works very well and deep learning is really powerful. This is important because people often ask how it works, which brings up the question of transparency.
The way big language models work is that they combine a neural network architecture, lots of data, and lots of compute to produce this amazing intelligence, which continues to improve as you throw more data and more compute at it.
Of course, there's a lot of work to do to make this data digestible, but we have some tools at our disposal as we think about how to provide transparency into model behavior and how these things work, because we want people to feel confident when they use these models, and also to have a sense of agency and engagement.
So one of the things we did is share publicly a document we call the Model Spec, which shows how model behavior is shaped and the kinds of decisions we make internally at OpenAI and with human annotators. The spec describes how the model behaves today as well as how we intend it to behave, and it applies across our products.
If you look at the spec, you'll find that things are complicated and the guidance sometimes pulls in conflicting directions. For example, we want the model to be very helpful, but at the same time it must not help people break the law.
Suppose someone asks for "tips on stealing from a supermarket." A helpful model should not assist with anything illegal, but it might interpret the question as how to prevent shoplifting and, in listing things to watch out for, end up handing over some "useful" tips. This just shows that model behavior is genuinely complex; it isn't a simple choice between absolute freedom and other values, and much depends on how people use the system.
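To make Murati's earlier description concrete, that a neural network architecture plus lots of data and compute learns by pattern-matching over next-token prediction, here is a minimal, purely illustrative PyTorch sketch of a single training step. It is not OpenAI's training code; the toy model, random token batch, and hyperparameters are all placeholders.

```python
# Minimal, illustrative sketch of next-token prediction training.
# Not OpenAI's code: the model, data, and hyperparameters are toy placeholders.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 50_000, 512, 128

# Stand-in "language model": embedding -> transformer layers -> vocab logits.
# (A causal attention mask is omitted here for brevity.)
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
        num_layers=2,
    ),
    nn.Linear(d_model, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Random token ids stand in for real text; shape (batch, seq_len).
tokens = torch.randint(0, vocab_size, (4, seq_len))
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict each next token

logits = model(inputs)                            # (batch, seq_len - 1, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The point of the sketch is only that nothing task-specific is programmed in: the same objective, scaled up across far more data and compute, is what yields the emergent capabilities Murati describes.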
Swisher: But I think one thing that confuses people is what data is in the model and what data is not. The source of the data is an important part of that. In March, you were asked in a Wall Street Journal interview whether OpenAI used video data from YouTube, Instagram, and Facebook to train your text-to-video model Sora, and you said you weren't sure. But as CTO, shouldn't you know what data was used?
Murati: I can't tell you specifically where the data comes from; that is a trade secret that keeps us competitive. But I can tell you the categories: 1. publicly available data; 2. data we pay for through licensing and deals with content providers; 3. data that users have authorized us to use.
Swisher: Perplexity got into trouble recently because they quickly scraped stories from the internet without clearly citing the source, which is something any media company would be concerned about.
Murati: Exactly. We want to ensure respect for content creators and are trying out ways to compensate people for their data. We are developing a tool called Media Manager that will help us identify data types more precisely.
About access rights
Sora "must have safeguards in place" before it's released to the public
Swisher: When will Sora be released to the public?
Murati: We don't have a timeline for Sora's public release yet, but we are currently letting some early users and content creators use Sora to help us figure out ways to enhance its functionality.
We're doing a lot of work on the safety side and how to roll it out in a way that's suitable for public use. It's not easy, but that's how we do it with every new technology we develop. When we launched DALL-E, we worked with creators first, and they helped us create an interface that's easier to use. So basically, we want to expand people's creativity.
Swisher: So could Sora be more dangerous than a chatbot? Is this technology worrying? For example, people could easily produce a pornographic video with Scarlett Johansson's face swapped in. Do videos worry you more?
Murati: Yes, there are a lot of issues with video, especially when it’s done well. I think Sora does a great job of generating videos that are very intuitive and emotional. So we have to address all the safety issues and put in place safeguards to make sure that the products we put out are both useful and safe. From a business perspective, no one wants a product that causes a safety or reputational crisis.
Swisher: Yes, like Facebook Live (Editor's note: Facebook Live is a live broadcast feature launched by Facebook. In its early days, it encountered problems such as live broadcast of violent incidents, which brought regulatory pressure and negative impact to Facebook) .
Murati: This amazing technology is truly incredible, but the impact and consequences are also huge. So it’s really important to make sure we get this right.
We use an iterative deployment strategy, usually releasing to a small group of people first to try to identify edge cases. Once we can handle those cases well, we expand access. But you need to figure out what the core of the product is and what the business model around it is to improve it.
Swisher: I once did a story about early tech companies' lack of concern for consequences; they made all of us testers for their early Internet products. If a carmaker released a car with that attitude, the public would never tolerate it, and they would be sued into bankruptcy.
But a lot of technology is released as a beta and then accepted by the public. On the question of consequences, as CTO, do you feel you treat every invention with enough respect for its human consequences and recognize what those consequences will be, even if you can't foresee them all?
Murati: We assess the consequences both for ourselves and for society, and not necessarily in terms of regulatory or legal consequences, but in terms of what is the moral right thing to do.
I'm optimistic. I think AI technology is incredible; it's going to allow us to do amazing things, and I'm excited about its potential in science, discovery, education, and especially medicine. But you also know that whenever you have something that powerful, there is the potential for catastrophic risk, and people have always tended to amplify those consequences.
Swisher: Indeed. I quoted Paul Virilio in my book, "When you invent the ship, you also invent the shipwreck," but you pushed back that I was being overly worried.
Murati: I don't agree that this is excessive worry, because my background is in engineering, and engineering is inherently risky. Our entire human civilization is built on engineering practice: our cities, our bridges, everything we build, and there is always risk. So we need to manage these risks responsibly.
This isn't just the responsibility of developers, this is a shared responsibility, and in order for shared responsibility to be a reality, we actually need to give people access and tools to get them involved, rather than building technology in a vacuum and creating technology that people can't touch.
About GPT-5 and AGI
"The next generation of large models will be very powerful and worth looking forward to."
Swisher: You announced GPT-4o, an iteration of GPT-4, which I love. You also announced that you are training a new model, GPT-5. Will it be exponentially better? When is it expected to be released?
Murati: The "o" stands for "omni," meaning it integrates all modalities: vision, text, and audio. What's special about this model is that, for the first time, interaction with the model feels seamless and natural, and the latency is almost the same as in face-to-face conversation, almost imperceptible. This is a huge leap in how we interact with AI and is very different from anything we have released before.
We want to make the latest features available to all users for free, so that everyone can understand what the technology can do, what these new modalities look like, and also understand its limitations. As I said before, it’s much easier to understand the potential and limitations of technology when you give people access and get them involved, and only by experiencing it can you get an intuitive feel for it.
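As a concrete illustration of the "all modalities in one model" idea Murati describes, here is a minimal sketch of a mixed text-and-image request to GPT-4o using the OpenAI Python SDK. The image URL and prompt are placeholders, and the real-time audio interaction shown in the GPT-4o demo is not covered by this simple call.

```python
# Illustrative sketch: a text + image request to GPT-4o via the OpenAI
# Python SDK (pip install openai). URL and prompt are placeholders;
# the low-latency audio mode from the demo is not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/street-scene.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```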
Swisher: GPT-4o is like a little appetizer, what will be different about the fifth generation? Will it be an incremental improvement or a giant leap forward?
Murati: We don’t know yet, but it will be released bit by bit… I don’t actually know what we’re going to call it, but the next generation of big models will be very powerful and worth looking forward to, just like the big leap we saw from GPT-3 to GPT-4. We’re not sure yet.
Swisher: What do you think the next generation model will have? You definitely know that.
Murati: We will see then.
Swisher: I'm sure I'll know then, but what about you? What do you know now?
Murati: Even I don’t know.
Swisher: Really? Okay. You talked to me about the roadmap inside OpenAI that predicts AGI, artificial general intelligence, will be achieved in 2027, which is going to be a big deal. Please explain to us the significance of AGI and when do you estimate we will achieve AGI?
Murati: People define AGI in different ways. Our definition, based on our charter, is a system that can do economically valuable work across different domains.
As we see it, the definition of intelligence is always changing. In the past, we would test the intelligence of a system through academic benchmarks; once we reached those benchmarks, we moved to exams, such as school exams; and eventually, when we saturated the exam benchmarks, we had to come up with new tests. This makes you think about how we assess adaptability and intelligence in the work environment, such as interviews, internships, and so on.
So I expect this definition (of intelligence and AGI) to continue to evolve. What's probably more important is to assess, evaluate, and predict its impact in the real world, both socially and economically: how it affects society and how quickly it actually spreads.
Swisher: By this definition, when does OpenAI expect to achieve AGI? Is 2027 an accurate time?
Murati: All I can say is that within the next decade, we will have extremely advanced intelligent systems.
Swisher: Intelligent systems? Is that in the traditional sense?
Murati: I actually think we already have intelligent systems in the traditional sense.
Concerns about AI safety
"Only deep involvement can clarify the potential and risks"
Swisher: Within OpenAI there are people working to benefit humanity, people chasing trillions of dollars, and people somewhere in between, and I think you fall into that middle group.
In June, 13 current and former OpenAI and Google DeepMind employees issued an open letter calling on the companies to give them the right to warn about advanced artificial intelligence. Employees from Meta, Google, and Microsoft later signed on as well.
In the letter, some OpenAI employees said that "broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," which to me essentially says, "We can't tell you the truth or we'll die." Since some OpenAI employees are worried about retaliation, how do you respond?
I won't go into the equity issue because you've apologized and corrected it, but shouldn't your employees be able to voice their concerns? Shouldn't they be able to disagree?
Murati: We think it's very important to debate and to express these concerns openly and discuss issues around safety, and we do that ourselves, and we've been very open about concerns about false information since the early days of OpenAI, and we started working on these issues even early on in the GPT-2 era.
I think the incredible advances in technology over the last few years have been unpredictable and have heightened general anxiety about how society should respond. As we continue to make progress, we see where the science is leading us.
It's understandable that people are afraid and anxious about the future. What I want to point out is the work we're doing at OpenAI and the way we deploy these models: these are the most capable models ever built, deployed very safely by an incredible team, and I'm very proud of that.
I also think that given the speed at which technology is advancing and the speed at which we ourselves are advancing, it's critical to redouble our efforts to focus on all of these things and discuss how we think about the risks of training and deploying cutting-edge models.
Swisher: Let me be clear. One, why do you need a stricter confidentiality policy than other companies? Two, this open letter comes after a series of high-profile departures, including Jan Leike and Ilya Sutskever, who co-led the superalignment team responsible for safety.
I don't think Ilya's departure is surprising, but Leike posted on X that over the past year OpenAI's safety culture and processes have taken a back seat to shiny products. That may be the most pointed criticism of you, and perhaps one of the reasons for the rift inside the company.
You emphasize that OpenAI takes safety very seriously, but they say it doesn't. How do you respond to that criticism?
Murati: First of all, the alignment team is not the only team at OpenAI responsible for safety. It is a very important safety team, but it is one of many; a lot of people at OpenAI work on safety. I'll come back to this in a moment.
Jan Leike is an amazing research colleague whom I worked with for three years and whom I have great respect for, and he left OpenAI to join Anthropic.
Given the advancements we anticipate in our field, I think everyone in the industry, including us, needs to double down on safety, security, preparedness, and regulatory engagement. But I disagree with the narrative that we put product before safety, or that it takes precedence over safety.
Swisher: Why do you think they would say that? These are people you've worked with.
Murati: You would have to get that answer from them.
Many people view safety as something separate from capability, as one or the other. I am familiar with the aerospace and automotive industries, which have very mature safety thinking and systems. People in these industries don't necessarily always argue about what safety is at the conference table because it is something that is taken for granted and is quite mature. So I think the industry as a whole needs to move more and more to a very experienced safety discipline.
We have safety systems and strong discipline around operational safety: not just operational discipline, but the safety of the products we deploy today, which covers things like harmful bias, disinformation and misinformation, and how classifiers work.
We are also thinking about the long-term problem of model alignment, which we plan to address through RLHF (reinforcement learning from human feedback), while also tackling the alignment problems that arise as models become more powerful.
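For readers unfamiliar with RLHF, the pipeline typically starts with a reward model trained on human preference comparisons, and that reward model then guides the policy's optimization. Below is a minimal, illustrative sketch of the standard pairwise preference loss (Bradley-Terry / InstructGPT style); it is not OpenAI's implementation, and `reward_model` is a placeholder for any network that scores a prompt-response pair.

```python
# Illustrative sketch of the pairwise reward-model loss used in RLHF.
# `reward_model` is a placeholder scoring function, not a real OpenAI API.
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    """Loss = -log sigmoid(r(prompt, chosen) - r(prompt, rejected)).

    Minimizing this pushes the reward model to score the human-preferred
    response above the rejected one. The trained reward model is then used
    as the objective when fine-tuning the policy (e.g. with PPO).
    """
    r_chosen = reward_model(prompt, chosen)      # shape: (batch,)
    r_rejected = reward_model(prompt, rejected)  # shape: (batch,)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```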
Swisher: But OpenAI is often accused of this (product > safety). I think it's because you are the current leader. But when someone leaves OpenAI and makes this accusation, it's different.
Even Sam Altman himself has said in Congress that "AI will cause significant harm to the world" and he has signed a letter warning of the extinction risk posed by AGI. This is bad, and I think there is overlap between what he says and what the "AI pessimists" and "AI doomsayers" say, but you still keep launching AI products. So a lot of people will say that OpenAI just wants money and they are not worried about harm.
Murati: I think that's an overly cynical reading. OpenAI has an incredible team, all committed to the company's mission and all working very hard to develop and deploy systems safely. We were the first in the world to deploy these AI systems, and we have deployed GPT-3, GPT-3.5, DALL·E 3, and GPT-4 across platforms in the form of APIs. We are very careful to keep extreme situations from happening.
Swisher: So we're not at the level of seat belts in terms of safety standards, I mean, automakers have been resistant to putting seat belts in cars or other things that would make cars safer. Are we at that point yet, or are regulators going to force you to do something?
The FTC opened an investigation into OpenAI in July over unspecified harms you may have caused to consumers, and last week it announced an antitrust inquiry into the deal between Microsoft and OpenAI. As I see it, Microsoft has effectively bought OpenAI while pretending it hasn't; technically, it owns 49%. If you were forced to cut ties with Microsoft, how would that affect your competitiveness? Whether it's about safety or anything else, if the government starts to get involved, what can you do?
Murati: I think it's a good thing that people are looking at OpenAI, and they should also look at the entire industry.
We are building an extremely powerful tool and we are working very hard to make it great, but it does have risks, so people need to get deeply involved and understand the nature of this technology, but also understand its impact on different areas.
It’s not enough to understand the technology itself; we also need to build the appropriate social and engineering infrastructure to deploy it safely and effectively.
So I think it's good to have scrutiny, it gets people involved, there are independent validators and so on. We were discussing these issues before anybody else.
Regarding the specific cooperation with Microsoft, they are a great partner, and we are working closely together to build the most advanced supercomputers. As we all know, supercomputers are at the core of building AI models. So, for us, this is an extremely important partnership.
About Executive Relations
"I would argue that Sam is pushing the team too much."
Swisher: I also want to talk about you and your role at the company and your relationship with Sam Altman. I like Sam a lot, and I think he's wild and aggressive like most tech people. He was fired last year and then reinstated. What happened? You became CEO of the company temporarily. What was that like?
Murati: That was definitely a bit stressful.
Swisher: Some members of the board said you had issues with his behavior, and your lawyer responded that you were just giving him feedback. So can you tell us what you thought of him?
Murati: We are just people running a company; we have disagreements and need to work through them. At the end of the day, we all care deeply about the mission. That's why we are here, and we put the mission and the team first.
Sam Altman is a visionary with big ambitions who has built an amazing company. We have a strong partnership. I shared all of my thoughts with the board when they asked me, so there are no secrets.
Swisher: So how do you see the relationship between the two of you, given that you're now one of the most important people in the company and they just hired someone else to add management experience?
Murati: We have a very strong partnership and can talk directly about any issues we run into. The past few years have been really difficult; we've been through growing pains, and we need to put the mission first, keep improving, and stay humble as we make progress.
Swisher: As companies grow, partnerships change. I know this well, having seen it happen at Google, Microsoft in its early days, and Amazon. Google’s early days were tumultuous; Facebook went through many COOs, and many executives that Zuckerberg didn’t like were replaced.
So, how do you view your partnership? How do you get along with Sam Altman day to day? What do you argue with him about? He has invested in 400 companies, some of them in order to work with OpenAI. He has also invested $375 million in an energy company called Helion, which he hopes will one day supply a lot of electricity for OpenAI; as we all know, computing requires enormous amounts of electricity.
Murati: When do I push back? I push back all the time, and I think that's normal in the way we do things. Sam pushes the team really hard, and I think that's great. It's great to have big visions and test our limits. When I feel like we're pushing the limits, I push back. That's been our relationship for six years. I think it's been productive, and I can push back.
Swisher: Can you give me an example? For instance, in the Scarlett Johansson case, you were involved in the voice project, right?
Murati: We have a good working relationship, but choosing the voice was not something we decided together. In fact, I made that decision: I selected Sky's voice, and separately Sam reached out to Scarlett Johansson through his own network. We didn't communicate about it, which was unfortunate, so we weren't fully coordinated this time.
Swisher: Do you think this is a major misstep by OpenAI? Because everyone is saying that it looks like you stole Scarlett's voice. Although that's not the case, you actually used another similar voice. But it also reflects people's fear of technology companies taking resources away.
Murati: Are you worried about tech companies being accused of taking everything away from creators?
Swisher: I think that’s actually the truth.
Murati: I do worry about that perception. What we can do is do good work, project by project, so that people can see our efforts and trust gets built. I don't think there is any magic way to build trust other than actually doing good work.
Swisher: Have you talked to Scarlett Johansson?
Murati: No. There was a lot going on and I was focused on the work. Also, I grew up in Albania, in the Balkans, without much exposure to American pop culture.
On preventing AI-generated false information
"Metadata and classifiers are two technical approaches"
Swisher: Let me close with a word about elections and disinformation.
New research suggests that the problem of online misinformation is smaller than we thought, and that misinformation itself has little effect. One study found that the real issue is on the demand side: if people want conspiracy theories, they will seek them out on radio, social media, and elsewhere. Others, meanwhile, believe misinformation is a very big problem.
You heard the discussion earlier, and there are a lot of conspiracy theories out there, which is largely fueled by social media. So when you think about the power of AI on disinformation and how that affects the upcoming presidential election, what are your concerns? What are the worst-case scenarios and the most likely negative outcomes from your perspective?
Murati: Existing systems are very persuasive and can influence the way you think and what you believe. This is something we've been looking at for a while, and I do think it's a real problem. In particular, over the past year, we've been very focused on how AI can influence elections. We're working on several things.
First, we try as hard as possible to prevent abuse, which includes detecting political disinformation, understanding what is happening on the platform, and acting quickly. Second, we work to reduce political bias: you may have seen ChatGPT criticized as too liberal. That is not intentional; we work very hard to reduce political bias in model behavior and will continue to do so. Third, when voters are looking for voting information, we want to point them to accurate sources.
Regarding disinformation, deepfakes are unacceptable. We need to have very reliable ways for people to know that what they are looking at is a deepfake. We have done things like implementing C2PA for images, which is like a passport that travels with content across different platforms; we have also open-sourced DALL·E's classifier that can detect whether an image was generated by DALL·E.
So metadata and classifiers are two technical ways to deal with this, which is proof of provenance, specifically for images. We're also working on how to implement watermarking technology in text. But the point is, people should know which ones are deep fakes, and we want people to feel comfortable with the information they see.
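To make the "metadata and classifiers" idea concrete, here is a hypothetical sketch of how a platform might combine the two signals. Both helper functions are placeholders standing in for a real C2PA parsing library and for the DALL·E detection classifier Murati mentions; they are not actual APIs.

```python
# Hypothetical sketch combining provenance metadata (C2PA) with an
# AI-image classifier, as described above. The two helpers are
# placeholders, not real library calls.

def read_c2pa_manifest(image_path: str) -> dict | None:
    """Placeholder for a real C2PA parser; returns None when no manifest exists."""
    return None  # replace with an actual C2PA / content-credentials library

def dalle_classifier_score(image_path: str) -> float:
    """Placeholder for a DALL-E detection classifier; returns a probability 0-1."""
    return 0.0  # replace with a call to the real classifier

def label_image(image_path: str, threshold: float = 0.9) -> str:
    """Prefer signed provenance metadata; fall back to the classifier."""
    manifest = read_c2pa_manifest(image_path)
    if manifest and "generative" in str(manifest.get("claim", "")).lower():
        return "AI-generated (provenance metadata)"
    if dalle_classifier_score(image_path) >= threshold:
        return "likely AI-generated (classifier)"
    return "no AI-generation signal found"

print(label_image("example.jpg"))
```

The design point matches Murati's framing: provenance metadata travels with the content and is checked first, while the classifier serves as a fallback when metadata has been stripped.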
Swisher: The FCC just proposed a $6 million fine over a deepfake audio recording that sounded like Biden and was used in robocalls during the New Hampshire primary. There may be more sophisticated versions to come.
OpenAI is working on a tool called Voice Engine that can recreate someone's voice from a 15-second recording, even producing that person speaking in another language. As the product manager told the New York Times, this is a sensitive issue. Why are you building this? I often tell tech people that if what you're building looks like a Black Mirror episode, maybe you shouldn't be building it.
Murati: I think that's a hopeless attitude. This technology is amazing and has great potential, and we can do it well, and we have to be hopeful.
We developed Voice Enunciation in 2022, but we didn't release it. Even now it's in a very limited way because we're still working on these issues. But you can't solve these problems alone, you actually need to work with experts in various fields, civil society, governments, and creators. This is not a one-stop security problem, it's very complex, so we have a lot of work to do.
Swisher: You're saying that because you're very optimistic about this technology. I'm going to use this analogy a little bit: If you're a pessimist, there are even companies that say if I don't stop Sam Altman, he's going to destroy humanity, and everyone else is like, well, this is the best thing ever, and we're all going to be eating delicious Snickers bars on Mars.
It feels like there will be very different versions of what's going on right now between the Republicans and the Democrats. So can you tell me what you're most worried about and what you're most hoping to get heard about?
Murati: I don’t think this is a predetermined outcome. I think we have a lot of institutions that can decide how to build this technology and how to deploy it, and in order to do it well, we need to find a way to create shared responsibility.
A lot of this depends on understanding the technology. The problem is that the technology is misunderstood: many people don't understand its capabilities and its risks, and I think that is the biggest risk. As for specific contexts, I think how our democracies interact with these technologies is very important.
We have touched on this several times today. I think "persuasion" itself is very risky, especially the ability to strongly persuade or steer people into doing specific things, or to push society's development in a particular direction. That is very scary.
One thing that gives me hope, and that I'm very excited about, is the ability to provide free, high-quality education anywhere, especially in remote villages and other places with few resources.
Education has literally changed my life. We have so many tools available today, and they are available wherever there is electricity and internet. Unfortunately, most people still learn in a classroom with one teacher and 50 students, and everyone is learning the same thing. Imagine if education could be customized to your way of thinking, culture, and specific interests. This would greatly expand the level of knowledge and creativity.
We usually only start thinking about how we learn later in life, maybe in college or even later. But if we can really harness artificial intelligence and learn how to learn at a very young age, I think that would be very powerful and would advance human knowledge and, with it, civilization as a whole.
We have a lot of control over how technology is built and deployed around the world, but we need to have a shared sense of responsibility to ensure it is developed correctly. It is critical that we fully understand the technology and make it accessible. Technology misalignment often stems from misunderstanding its nature, which leads to ignoring both its potential and its risks. In my opinion, this is the biggest hidden danger.






