Okay, so I'm gonna try out ChatGPT for the first time. This is one of our producers, Eliza Dennis. And last night, she decided to try out ChatGPT.
So I'm navigating to my web browser. Okay. Going to the OpenAI website. Yes, I am a human. Thanks for asking.
ChatGPT is a chatbot developed by a company called OpenAI. It uses generative artificial intelligence. The chatbot scrapes the internet to learn how to come up with an answer for pretty much whatever you ask it. And it gets better the more it's used. When you talk to ChatGPT, it can almost seem like you're talking to a human.
Okay, ChatGPT. Let's see if you can make my job obsolete. Let's have you write an introduction, with music, for the Post Reports podcast about ChatGPT. ChatGPT has us starting with music. So hit it.
Some music plays: a slow, building electronic track with a pulsing beat. Welcome to Post Reports. Music builds to a climax. Exploring the frontiers of technology, where the future is shaped by the power of artificial intelligence. Music fades out to a quiet, steady pulse. And today, we delve into the world of ChatGPT. Music continues to pulse in the background.
The language model created by OpenAI, capable of understanding and generating human-like text. This cutting-edge technology has the potential to revolutionize the way we interact with machines and open up new possibilities for AI applications. Music crescendos to a dramatic climax.
ChatGPT doesn't usually talk out loud like this. We got it to answer this way by using a text-to-speech function. But either way, we were kind of impressed by its writing chops. Here's tech reporter Nitasha Tiku.
ChatGPT is a piece of generative AI that lets users ask questions, generate essays, spin up Seinfeld scripts, and get a kind of conversational response from the chatbot. And it's an example of this new technology, generative AI, that's kind of taken the internet by storm. When ChatGPT was released to the public, people did have a lot of fun with it.
People were asking just random things, like: write a biblical verse in the style of the King James Bible explaining how to remove peanut butter from a VCR, or explain physics to me like I'm a five-year-old. Or you could say, tell me the Bill of Rights in limerick form. In most cases, it's able to generate these responses almost instantaneously, kind of depending on how overworked OpenAI's servers are.
Nitasha, who has been covering AI for years, isn't as charmed by this chatbot as a lot of other people were.
Oh, yeah. I mean, I always think, you know, when people marvel at what ChatGPT says, I'm like, well, it's just reflecting human ingenuity back at you, right? Yeah. Exactly. You know, I just feel like we've moved on from strip mining, you know, the environment. We're just strip mining humans for their intelligence and their creativity.
From the newsroom of The Washington Post, this is Post Reports. I'm Elahe Izadi, your human host. It's Monday, February 13. Today, we talk about how this technology getting into the hands of average consumers has kicked off an AI arms race, and the dangers that could come with this new Silicon Valley war. Join us as we dive into the world of ChatGPT and discover its potential impact on our future. Music fades out to end the introduction.
So Nitasha, last week, there were these big tech companies that announced how they're going to integrate chatbots like ChatGPT into some of their existing products, essentially starting what seems like an AI arms race. And I do want to get into the consequences of that later and what that all means. But first, can you just explain the technology behind ChatGPT and how it's different from other predictive text we might have encountered before with, I don't know, customer service bots and that sort of thing?
Yeah, it's much more sophisticated than what you would have previously encountered. I think we all know how stilted and awkward and really unhelpful those customer service bots can be. These models, in contrast, have been trained on massive amounts of data: text scraped from the web, so you can think of a corpus of books, links from Reddit. And they have been told to essentially predict the next word in a sentence. And because this is machine learning, the model actually kind of teaches itself how to find patterns between words. And the developers found that these models were able to generate really human-like, kind of instantaneous, original responses to questions. So the end result is a much more fluid back and forth, and something that people found really appealing. It was just leaps and bounds beyond a corporate chatbot.
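To make the "predict the next word" idea concrete, here is a toy sketch: it counts which word tends to follow which in a tiny made-up corpus, then predicts the most frequent follower. This is only an illustration of the training objective; models like ChatGPT use large neural networks over billions of words, not lookup tables, and the corpus and function names here are invented for the example.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    words = corpus.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1  # learn the pattern "nxt follows current"
    return followers

def predict_next(followers, word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The self-teaching Nitasha describes is, very loosely, this counting step scaled up: instead of a table of word pairs, the real models learn statistical patterns across long stretches of text.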
What are some of the ways this technology can be used? Like what are researchers saying about the need and benefit for this technology and why we even need it?
Well, one thing people should know is that these models are already in use. You just normally don't see them. You don't encounter them face to face. They're usually more at the infrastructure level.
So Google autocomplete in your emails, content moderation on Facebook, language translation via Google: all of this relies on these same models. But this was the first time, for almost all of us, that we had access to this state-of-the-art technology that is normally locked up behind closed doors in corporate labs. Companies like Google and Facebook keep it under wraps.
And when they do release it, it's in a really neutered form. And here OpenAI had put this out as a consumer product for free. For anyone to use, you could access it just from your web browser. And all of a sudden, you are able to play with this state of the art technology.
You can almost think about it like Wikipedia on demand. You know how Wikipedia is really good at kind of summarizing really complex concepts. You can ask it to explain things at different levels, like to a fifth grader, or to a high school student. It has a wide breadth of knowledge. Chances are, if it was on the internet a lot, ChatGPT knows it.
People use it as a way to generate ideas. It's really successful, I think, that way. Like, say you're having writer's block or something like that. So it's good as a creative tool in that way. I mean, listen, if it were accurate, I would love this, because in theory it's like the best little assistant you ever had.
So the pontification about how this could change the internet is totally unbridled. There's a lot of enthusiasm here in Silicon Valley. So it's being called the next platform. There's been a lot of anticipation of, okay, after mobile, after the cloud, like what's going to be the next big thing? And these models are really being heralded as the next mode of the web.
We saw that with DALL-E 2, which was a text-to-image generator. It became really popular last summer.
These generative models that make it easy for people to create music, text, essays, you know, cheat on your college exam, will really transform the way that we interact online.
Nitasha, can you also give us an overview of some of the potential dangers of this technology, especially releasing it to the public this way?
Yeah, I think, you know, the biggest danger is really that we don't know exactly what's going to happen. This is really untested, you know, in a lot of ways. It's going from the lab to billions of people without that much vetting.
And if you look at the people who built this technology, it's very homogenous. It's oftentimes like very white, very male, very Asian. In addition to that, you know, these models really reflect the data that they were trained on.
That data was scraped from a narrow portion of the web. It's the English speaking portion of the Western web for the most part. And I think if you've ever been online, if you've ever been on Reddit, you understand that that is probably going to come with a lot of bias and stereotypes.
And that is reflected in the types of text and behaviors that the model generates. So that's something where the developers behind these models, such as OpenAI, have tried to fix things by cleaning up the data sets: to make the machine less racist, spit out less hate speech, be less biased against women. You know, they do take out some of the porn and gore and violence from the data sets.
And then after the training process, they put filters on what the machine is allowed to generate. But it's still being put out there without a specific end goal in mind. So they're kind of just waiting to see how people use it.
And when you do that with billions of people, in a product that is super influential online, you're taking a real risk. And I think for technological historians, for AI ethicists, for people who have released products before, that's just not the way it's supposed to go.
Developers of these kinds of tools have been pretty clear that these tools have a real tendency to give inaccurate information. And not only do they give inaccurate information, but they also do something called hallucinate, which is when they give inaccurate information confidently.
And it's not a problem that the industry has solved. They've made some progress toward it, but they're releasing it to the public anyway. So you have this system that is being treated as this all-knowing, super useful tool. It's being marketed to people as the future of information, the future of the internet, and it might be wrong, confidently wrong, and you won't know when. You know, all you get is maybe a little bit of warning when you log on and a little disclaimer at the bottom.
Because we know from, I don't know, the last 10 years especially, there is a lot of debate about truth online. And here we are introducing an AI model in all sorts of places online that is confidently giving us wrong information. And there's not a lot of AI literacy among online readers about how to interpret this. Should we be looking at ChatGPT like an oracle? Should we trust it more than Google? Less than Google? Is it our friend? We don't really know how that's going to play out, but it can't help our current information dystopia.
Nitasha, tell me more about the company that developed ChatGPT.
Sure, the company behind ChatGPT is super fascinating. It's called OpenAI. It was founded back in 2015, as a nonprofit actually, by Elon Musk, Peter Thiel and Sam Altman, with a pledge to donate a billion dollars. And the idea was very different from the way the company operates now. The initial idea was that they wanted to provide an alternative to having superintelligence, you know, a really powerful AI, be in the hands of a corporation like Google or a foreign government. So the idea was to make it open and transparent and distribute the benefits of AI to everyone around the world.
And I think they very quickly found that they weren't able to get enough money and further investment. So they turned from open to one of the most secretive companies working in AI. But they made this very big bet that the future of AI was going to be in making these models bigger and bigger. And it has really paid off. You know, they are now leading the race, and they helped instigate a race, actually, toward putting out these larger and larger models. This happens to be exactly what they said they didn't want. Their end goal is something called AGI. That stands for artificial general intelligence.
It's this idea of AI that's comparable to human intelligence. You know, you can almost think about it as the singularity. It's this end goal that artificial intelligence experts, or at least some portion of the industry, have always been working toward. It's a very sci-fi type goal, right? That was once just fringe, but it has become much more mainstream. So OpenAI is really now leading the pack, and it has a relationship with Microsoft, which recently put a reported $10 billion into OpenAI.
So OpenAI focuses on putting out these products on the way to building AGI, and then Microsoft is supposed to commercialize them. And the reason a lot more people are learning about this company, and its name is becoming much more of a household name, is because of their philosophy around risk. They think that there's no way to fully make an AI model safe. So they need to release it to the public, see how people interact with it, and closely monitor it. And that's the way you are going to kind of figure out where the dangers are.
After the break, Nitasha and I talk about why big tech has been cautious for years about this technology, and why now they're throwing caution to the wind. We'll be right back.
Yeah, it was, you know, maybe the worst kept surprise.
It's great to be here with all of you today. You've been working on something we think is pretty special.
Microsoft announced that it would be incorporating a more updated version of ChatGPT into its search engine, Bing. Normally the punchline of a joke, right? A site you probably haven't been to in a decade, or maybe a decade and a half. And that it would be kind of radically altering its search offering.
Infused with AI and assembled as an integrated experience, we're going to reimagine the search engine, the web browser and new chat experiences into something we think of as your co-pilot for the web.
In order to compete with Google. With our innovation, they will definitely want to come out and show that they can dance. And I want people to know that we made them dance and I think that'll be a great day.
Satya Nadella, the CEO of Microsoft, gave this quote to The Verge, and made it very clear that they are trying to get a bigger portion of this massive search market, where Google has dominated for 20, 25 years.
So the big change, the one that everyone was waiting for, is the way that you are able to interact with search. At the center of this new co-pilot experience is an all-new Bing search engine and Edge web browser. Not only does it give you the search results, but it will actually answer your questions.
So they say you're able to put in like much more conversational queries. You know, there's no like Nike plus sneakers plus size five. We're going to let you chat. We're going to let you just talk to it naturally. You can just put in a question the same way you would talk to your friends.
And the results come up: the same kind of links you'll see on the left side, and on the right side you'll get a ChatGPT-like response. So a kind of summary, written like a conversation, and you'll see little footnotes of where the information was sourced from. And then you'll also have an option of chatting back and forth and asking follow-up questions.
Say you asked where the best sneakers in size 6.5 are. And then you could ask, you know, where can I get them in blue? And then it has this other feature, the one that I'm really excited about, where you can open this up on any page on the web. In the demo, what they showed was opening up this feature on Gap's financials page, and they were able to get a summary of the results and then compare it with the financial results for Lululemon, like, instantaneously. So it could let you, say, summarize an article as you're reading it, or do any of the things you're able to do with ChatGPT.
And yet, you know, for Bing, I think this is just a total game changer for them. It's probably a once in a lifetime chance to try to catch up with Google, which has had somewhere between 80 and 90 percent market share forever.
We've long been pioneers in this space, not just in our research, but also in how we bring those breakthroughs to the world and our products in a responsible way. And then the very next day after this announcement from Microsoft, Google had their own announcement: they demonstrated their own chatbot.
Back at I/O in 2021, we unveiled our LaMDA AI model, a breakthrough in conversational technology. Next, we're bringing LaMDA to an experimental conversational AI service, which we fondly call Bard.
So Nitasha, was there anything noteworthy about that, or anything that stood out to you there?
Well, I should say neither of these is fully accessible to the public yet. You can kind of get a little bit of a demo of Bing, which they're calling "the new Bing." Not your grandmother's Bing. Yes. But you can sign up for a waitlist, and the demo at Microsoft was an actual demo. You know, people could interact with it. Our tech columnist, Geoffrey Fowler, was able to do a bunch of searches. In contrast, Google was showing what most people said was kind of just a design mockup. You know, they're saying that only some testers are getting access to it. So there's a big difference between "this is what it might look like" and "here, we're shipping a product, you'll be able to use it soon, a bunch of people are already trying it out."
So Google's kind of hastily put together answer to this was really not well received. Google's new, highly touted AI chatbot Bard has already made a boo-boo.
Introduced this week, Bard was touted in an online ad by Google that ran in the company's Twitter feed. In the ad, Bard is given the prompt, quote, "What new discoveries from the James Webb Space Telescope, or JWST, can I tell my nine-year-old about?" Its answer mentioned that the James Webb telescope was the first telescope able to take a picture of an exoplanet. That wasn't true. Oh, no.
The error was spotted hours before Google hosted a launch event for Bard in Paris, where a senior Google executive touted Bard as the future of the company. And the internet just realized this at the same time that they were doing this demonstration, and the stock ended up tanking, losing $100 billion in market value.
Yes. It's just Wall Street's twitchy fingers, and the amount of hype and obsession and interest around generative AI right now. This is not based on a fully thought-out understanding of how this all might shake out.
Google, it should be said, is the original developer behind a lot of the core components of generative AI. It is the place where transformers, the architecture that is used to build these models, were first developed. It has its own language models. You know, it does not use them as consumer tools. It's been really late to the game, but yeah, it almost makes you feel bad for a trillion-dollar company. Almost.
And what about companies outside of Microsoft and Google? Is anyone else in this sort of arms race to sharpen their products with AI?
Oh, my God. I saw this map of generative AI startups. Like my eyes just widened.
Yes, there's a ton of money flowing into generative AI startups. There is a bottleneck, though, because in order to build these models, you have, at least up until now, needed a lot of access to data and a lot of money for compute power.
And so only very few labs, like DeepMind, which is owned by Google; Google itself; OpenAI; a company called Anthropic, you know, companies that have raised hundreds of millions or billions of dollars, have been able to compete. But some companies are able to access OpenAI's software, which it is making available to businesses. Google has said it will make its large language model available to businesses. So this is definitely not the last you'll hear about it.
So, I mean, clearly, because there's so much investment in this space, it must mean that tech companies are viewing this technology as very valuable, and eventually very profitable.
So Nitasha, can you just explain or break down how generative AI is going to make money for companies? Is it just like, oh, this technology will replace human workers? Am I being too cynical about that?
I mean, I don't think you're being too cynical because that has been the mantra we've been hearing from Silicon Valley. I mean, it's also kind of a trope when they make a big breakthrough. They like to say like who it's going to put out of business.
I always think about the self-checkout at drugstores or at grocery stores. And this is from listening to really smart historians of technology. Very often they talk about automation as eliminating the need for humans, and it certainly has gotten rid of entire categories of jobs. But oftentimes what happens is that sense that technology is coming, that fear, allows employers to drive down wages and make the job more perilous. But humans are still needed to be there, because the automation, the machine, is not flawless.
And it still needs help and it's not perfect. And you need to be there to check it. So here you are kind of like working alongside it. At this particular moment in time, you also have this other kind of countervailing force of people in Silicon Valley who really believe in this technology and they want to see it released in this unbridled way. They call themselves like accelerationists. And they're kind of happy to see some of these job categories fall away. So there's still just so much to be determined.
It's still such an early stage and the models are changing so fast. And certainly there have been a lot of job categories that they say this will put out of business. Writers, marketers, artists. So there is the idea that businesses will pay for this software rather than pay for advertising companies or marketing companies, pay for this rather than writers. But I'm sure you will also be seeing a number of startups that just can't make the numbers work in their favor. But if there's enough venture capital flowing their way, they can be venture subsidized for a while.
So Nitasha, we're seeing this AI arms race really heating up in Silicon Valley, and given everything we've discussed, should we be worried? Like, what are the consequences of these companies racing into this technology?
I'm worried. I was talking to an AI engineer recently who said it just feels like everyone is waiting with bated breath. It's really hard to tell exactly how this will be received, especially when it's going straight from the lab into the hands of billions of people through search, which is one of the most impactful parts of the web, right? It's how you get your information. It's your portal to knowledge, for most of us, for me at least.
And AI ethicists, technologists, researchers themselves have been warning about this race dynamic for a long time, because the arms race analogy is apt, right? The argument goes: someone else is going to do it, so we have to do it. And that incentivizes companies, or gives them a justification, for putting technology out there without fully testing the potential harms and risks, without thinking about safety.
I mean, we've seen how this goes, right? Like we've seen an arms race before. We actually just lived through the results of Silicon Valley's last arms race. At the time, we called it growth at all costs or if you're familiar with the Facebook motto, move fast, break things. And we know how that ended for democracy, for civic values, for the way that we ingest information, for political polarization, for our ability to, I don't know, talk to our neighbors.
So just as the public is kind of able to give feedback to these companies that we want you to take our safety seriously, we want you to take your impact in the real world seriously and have some accountability.
Here we have this arms race for AI, which is completely emblematic of the same problems we saw with social networks, which is that the developers often don't know why the technology is doing what it's doing. You know, it might not know why it is giving one answer and not another.
It was instructed to just find patterns between words and try to please you when you ask it a question. So yeah, we're back to move fast, break things and we know how that went the last time.
Nitasha, thank you so much for your time. Thanks for having me. Nitasha Tiku covers Silicon Valley for The Post. Special thanks to Rachel Lerman for her reporting on Microsoft.
That's it for Post Reports. Thanks for listening. Today's show was produced, mixed, and edited by humans, namely Eliza Dennis, Sam Bair, and Maggie Penman.