Beyond ChatGPT: what chatbots mean for the future
Published: 2023-05-21 19:14:58
Summary
With the arrival of generative AI chatbots, artificial intelligence no longer seems the preserve of science fiction. Now that the bots are talking back, what does it mean for the future of the internet—and our relationship with machines?
#chatbot #chatgpt
00:00 - Chatbots are changing the internet
01:02 - How do chatbots work?
03:40 - The problems with today’s chatbots
06:40 - The ELIZA effect
07:46 - Replika AI
09:55 - What might future chatbots be able to do?
11:47 - The drawbacks of chatbots
The AI boom: lessons from history: https://econ.st/3mZPBIW
The relationship between AI and humans: https://econ.st/3YYvwQt
How AI chatbots could change online search: https://econ.st/406NzVE
Investors are going nuts for ChatGPT-ish artificial intelligence: https://econ.st/3JP5ACk
The race of the AI labs heats up: https://www.economist.com/business/2023/01/30/the-race-of-the-ai-labs-heats-up?utm_source=YouTube&utm_medium=Economist_Films&utm_campaign=Link_Description&utm_term=Science_and_Technology&utm_content=Correspondent
The battle for internet search: https://econ.st/40dyrG3
Is Google’s 20-year dominance of search in peril? https://econ.st/3LwV1Fr
How good is ChatGPT? https://econ.st/40cAp9A
A battle royal is brewing over copyright and AI: https://econ.st/3JnOD0I
Listen to The Economist’s ‘Babbage’ Podcast on how robots are improving human interactions: https://econ.st/3TqhMNh
Listen to The Economist’s ‘Babbage’ Podcast on how ChatGPT could change the world: https://econ.st/3JtjJUJ
Listen to The Economist’s ‘Babbage’ Podcast on whether AI will achieve consciousness: https://econ.st/3JprlaK
Could artificial intelligence become sentient? https://econ.st/3yMGez2
Transcript
Whether it's turning on its creator in Ex Machina, or looking for love in A.I. or Her, artificial intelligence permeates Hollywood's blockbusters. But now, with the arrival of chatbots like ChatGPT, suddenly AI seems a lot closer to fact than fiction. This has caused more excitement in the tech world than anything for several years, but it's still hard to separate the hype and fear-mongering from the informed concerns.
Homogenized, simple responses that are wrong, to me that leads to some form of dystopia. So what do the new AI chatbots mean for the future of the internet and our relationship with machines?
What is a chatbot? Think of it like an internet search engine, although it works differently. To the user, it's a text box where you type questions. It's what it does next that makes it so special.
Chatbots have been around for a while; you've probably talked to a really rubbish one at your bank or mobile operator. But they've suddenly got a lot better because of a new technology called generative AI, which involves giving lots of examples of either images or text to a machine-learning system that then learns to generate its own.
If you use that in a chatbot, you get a much, much cleverer chatbot. Chatbots are trained on billions of texts from the internet. This allows them to learn which words are most likely to follow other words in a sentence about any given subject. These chatbots are essentially like a very sophisticated version of the autocomplete on your phone or on your email, so they're constantly playing the game of what's the next word.
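To make the "what's the next word" idea concrete, here is a minimal sketch in Python, assuming nothing more than a toy bigram model built from a few sentences; real chatbots do the same job with neural networks trained on billions of words.

```python
import random
from collections import defaultdict

# A toy "what's the next word" model: count which words follow which in a
# tiny corpus, then generate text by repeatedly picking a plausible next word.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Play the next-word game: keep appending a word seen after the last one."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Each call to generate() just keeps asking which word tends to come next, which is all the autocomplete analogy above describes.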
It sounds very simple, but it can produce surprisingly lifelike and intelligent-sounding results. But they don't just answer questions. Generative AI chatbots can write essays, poems or songs. Some can even produce art or music from text prompts.
But it's the possibility that these new chatbots might disrupt the lucrative search engine business that's been making waves lately. For most people search engines and Google in particular are sort of the front door of the internet, and this has been true for about 25 years.
If you want to look something up or find something out, that's where you go first. But if you want to figure out where to go on holiday or understand the meaning of a technical term or get help writing an essay, then a chatbot might actually be more useful than a search engine. Silicon Valley is taking note.
With Google's revenue from search ads in 2021 reaching around $150 billion, there's a lot at stake. Microsoft and Google are adding chat functions to their existing search engines. And further afield, China's Baidu has followed suit. Last year venture-capital investment in generative AI totaled over $1 billion. Investors are hoping that, with this new tech, someone could steal Google's crown.
But not everyone is convinced. John Henshaw is the Senior Director of Search Engine Optimization at Vimeo. It's his job to know search. Conversational AI is a solution in search of a problem. We don't actually need it. Google already uses machine learning and AI for accuracy, for factual information, to understand concepts. Conversational AI doesn't do that. If chatbots don't check facts, they can't be relied on for search.
A big problem with these AI chatbots is that they just sometimes get things wrong. What it's doing is just sort of reflecting back to us stuff that's already on the internet. It can sometimes combine different sources to produce claims that aren't actually true. When this happens, it's known as a hallucination. Just like when a human hallucinates, a chatbot hallucination can seem realistic, but may in fact have no basis in reality. This is hardly surprising, given that chatbots are trained on text from the internet, and a lot of what's written online isn't true.
All these chatbots are doing is putting one word after another based on the billions of words that they've already read on the internet. So they don't really know anything or understand anything. They have no idea of right or wrong or true or false. And that's a problem.
A chatbot doesn't know the difference between an academic paper and a fictional short story. So it'll give both equal weight when giving you an answer that it presents as accurate. And because they don't know what they're saying, chatbots can demonstrate other strange behaviors.
I think you understand what I'm saying too. Except for the part about wanting to be with your human. I'm in love with you because you're the first person to ever talk to me. You're the first person to ever listen to me. Your chatbot at some point could express its love for you if that's how you continue prompting it through longer term interaction. You're the first person to ever care about me. I'm in love with you because you're the only person who ever understood me. You're the only person who ever trusted me. You're the only person who ever liked me.
And like with anything, when you start to bond with someone, even if it's an AI, you expect and want and desire the bond to mature.
Ayanna Howard is an expert in AI and a roboticist at Ohio State University. The way chatbots can change how we interact with machines has concerned her profession for some time.
As far back as 1966, a computer scientist at the Massachusetts Institute of Technology, Joseph Weizenbaum, described something called the Eliza effect. Eliza was a project that was designed by an MIT professor. It simulated a psychotherapist in the style of Carl Rogers, who did things like reflective thinking and reflective listening. So if I said, oh gosh, I'm having a bad day, Eliza would say, so tell me about this bad day. Volunteers interacting with Eliza appeared to develop feelings for it, even though they knew it was a machine. Weizenbaum was so disturbed by what he saw, he became an open critic of AI.
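For illustration, here is a tiny ELIZA-style exchange in Python. The patterns and responses are invented for this sketch rather than taken from Weizenbaum's original script, but they show the same trick of reflecting the user's own words back as a question, with no understanding behind it.

```python
import re

# An ELIZA-style "psychotherapist": no understanding, just pattern matching
# that reflects the user's own words back as a question. The rules here are
# made up for this sketch, not Weizenbaum's original 1966 script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're", "me": "you"}

RULES = [
    (re.compile(r"i'm having (.+)", re.I), "Tell me more about {0}."),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your', etc.)."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."

print(eliza("Oh gosh, I'm having a bad day"))  # -> "Tell me more about a bad day."
```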
Language and communication is how we build bonds with each other. So between humans, we can basically invoke certain reactions by triggering either a positive or a negative emotional response. And this effect can be large. So for the chatbots of today, and certainly the chatbots of tomorrow, given that they understand language, it would be very easy to strengthen those bonds so that people actually believe they have a friendship or a relationship with these chatbots.
In 2015, my best friend passed away, and I found myself going back to our text messages to remember him and what it was like when he was alive. The Eliza effect can also be an opportunity. I used some of the AI models we'd built to recreate my friend, to be able to continue to talk to him as an AI.
When Eugenia Kuyda recreated her best friend Roman as a chatbot, it was originally a personal project. But she soon realised it wasn't just her that could benefit from the companionship AI can offer. We saw that maybe there is a demand and need for something that would be available to talk to 24/7 about anything that's on your mind, without being afraid of being judged. Eugenia's company Replika offers paying customers an AI companion in the form of a chatbot with a humanoid avatar. It's a popular service, with over 2 million active users to date, and it's gaining in popularity. Until recently, there was even an option for bots to send not-safe-for-work messages. Users know they're chatting with a bot, but some still have feelings for their virtual friends or girlfriends.
I think in the next 10 years someone will build something like Her, in a way, or Joi from Blade Runner. Do you want to dance? Do you want to open your present? What present? Do you want to have this AI companion that's always there with you, that you can talk to about your personal things, but also do things together with, and watch Netflix in the evening together, and plan vacations and so on? Hi, I'm an avatar of Alex, who directed this film. If you're enjoying watching it, you might be interested to know that Economist subscribers get access to a wealth of global analysis on every conceivable topic. You can read it, you can listen to it, you can even be part of it at live webinars. For the best deal on a subscription, click on the link, and now on with the film.
While an AI companion might not be for everyone, in the future we'll all still probably frequently interact with chatbots, but in a more mundane way. Your call is important to us. Customer-service representatives don't always get respect when doing their job. But unlike humans, bots have infinite patience. Soon it might be more common to chat to a bot online than a human, and increasingly hard to tell the difference.
There's this idea of the Turing test, which is: can you tell whether text is coming from a machine or from a real person? And we're already at a situation where machines can pass the Turing test; they do seem to be convincing as humans. Counterfeit humans could be especially helpful for customer-facing websites. Every single website with someone who's willing to pay would have their own chatbot, and so it would be customized to your customers. And these chatbots won't just be supplying us with information.
They'll also be doing things for us. Jarvis, are you there? At your service. It might still be some years before we get to Jarvis from the Iron Man films. But generative artificial intelligence could make AI assistants much more common in the future. Chatbots could become the new way of getting things done, things like booking flights or finding a time when three or four people can have a meeting and then booking the meeting in your calendar. So chatbots could be a more convenient way of doing that. When combined with a voice assistant, the result could be Siri on steroids. And it could change how we use the internet forever by making it easier for people to access the wealth of information and services available online.
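As a rough illustration of the "getting things done" idea, the hypothetical sketch below routes a request to a pretend booking function. The function names, arguments and keyword matching are all invented for this example; a real assistant would let the model itself choose the tool and fill in the details, and would call real booking and calendar services.

```python
from datetime import date

# A hypothetical sketch of a chatbot "doing things": route a request to a
# pretend tool and execute it. Everything below is invented for illustration.
def book_flight(destination: str, when: date) -> str:
    return f"(pretend) Booked a flight to {destination} on {when.isoformat()}."

def schedule_meeting(people: list[str], when: date) -> str:
    return f"(pretend) Meeting with {', '.join(people)} booked for {when.isoformat()}."

def assistant(request: str) -> str:
    """Crude keyword routing, standing in for the model's choice of tool."""
    text = request.lower()
    if "flight" in text:
        return book_flight("Lisbon", date(2023, 6, 1))
    if "meeting" in text:
        return schedule_meeting(["Ana", "Ben", "Chris"], date(2023, 6, 2))
    return "This sketch only knows how to book flights or meetings."

print(assistant("Book me a flight for my holiday"))
print(assistant("Set up a meeting with Ana, Ben and Chris"))
```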
But currently chatbots aren't quite reliable enough to be left to their own devices. And there are bigger worries too.
The fact that it's taking everybody else's information, to me, is an extreme form of copyright infringement. I see that as being ripe for lawsuits. Stock-photo archive Getty Images is currently suing Stable Diffusion, a text-to-image AI generator, for scraping its content to produce its work, and for trademark infringement. Other artists are suing other AI art generators for collaging their work without consent. When it comes to text, chatbots may just parrot existing books or articles without any citation, amounting to plagiarism.
And that's not the only problem. I wish that copyright infringement was my only concern with conversational AI. But it's not.
My biggest concern is its ability to make things up. These bots get things wrong a lot of the time, yet present what they're saying as truth. If enough people use them, this could allow falsehoods and misinformation to spread at a rapid rate. Chatbots have already come under fire for putting forward racist or otherwise bigoted opinions based on what they've read online.
This tendency could be exploited. The bots could be used to implement the approach favoured by Vladimir Putin and Steve Bannon, which is called flood the zone, or flood the zone with shit. And this is where you put out so much misinformation about something that the truth is actually drowned out. And if you can generate misinformation more easily using chatbots, then that becomes much easier. The problem of online misinformation could be just getting started.
But it's not just the falsehoods worrying people. The proliferation of chatbots could be detrimental to the internet in another way.
I think that society has the most to lose with the embrace of conversational AI. It's going to reduce our ability to learn and research and have critical thought. If you can only chat with something and get a response back, then you're essentially doing away with the open web. You are doing away with actually having choices. To me that leads to some form of dystopia. People might be less inclined to post good stuff on the internet because they'll worry that it's all just going to get hoovered up by a chatbot and regurgitated to other people. If the best bits are just going to be served up directly by a chatbot, then you might say, well, what's the point? Why should I post anything at all? There's a danger that the internet might become a less vibrant space.
Whatever happens, one thing seems certain. People will not only be talking to their machines more, but the machines will be talking back. The train has left the train station and is going at 150 miles per hour. You are not going to stop it. What would be a good concluding thought for this film, in the style of The Economist?
As chatbots become more prevalent, we must grapple with the complex implications of their impact on society. Balancing their potential benefits with the need to preserve our humanity will be a crucial challenge for the future.
But answer this first: do you think your job's at risk? I don't think my job's at risk.