Meta's Chief AI Scientist Yann LeCun talks about the future of artificial intelligence
Published 2024-02-25 09:29:18
Summary
Meta's Chief AI Scientist Yann LeCun is considered one of the "Godfathers of AI." But he now disagrees with his fellow computer pioneers about the best way forward. He recently discussed his vision for the future of artificial intelligence with CBS News' Brook Silva-Braga at Meta's offices in Menlo Park, California.
"CBS Saturday Morning" co-hosts Jeff Glor, Michelle Miller and Dana Jacobson deliver two hours of original reporting and breaking news, as well as profiles of leading figures in culture and the arts. Watch "CBS Saturday Morning" at 7 a.m. ET on CBS and 8 a.m. ET on the CBS News app.
Subscribe to “CBS Mornings” on YouTube: https://www.youtube.com/CBSMornings
Watch CBS News: http://cbsn.ws/1PlLpZ7c
Download the CBS News app: http://cbsn.ws/1Xb1WC8
Follow "CBS Mornings" on Instagram: https://bit.ly/3A13OqA
Like "CBS Mornings" on Facebook: https://bit.ly/3tpOx00
Follow "CBS Mornings" on Twitter: https://bit.ly/38QQp8B
Subscribe to our newsletter: http://cbsn.ws/1RqHw7T
Try Paramount+ free: https://bit.ly/2OiW1kZ
For video licensing inquiries, contact: licensing@veritone.com
Transcript
Thanks for joining us. A pleasure. Excited to chat. I wish we had days, but we have like 40 minutes, so we'll get through as much as we can in this time. This is a moment of a lot of public-facing progress, a lot of hype, a lot of concern. How would you describe this moment in AI? A combination of excitement and too many things happening that we can't follow everything. It's hard to keep up. It is, even for me. And a lot of, perhaps, ideological debates that are both scientific, technological, and even political. And even moral, in some ways. And moral. Yeah, that's right. Boy, I want to dig into that. But I want to do just a brief background on your journey to get here. Is it right that you got into this reading a book about the origins of language? Was that how it started? It was a debate between Noam Chomsky, the famous linguist, and Jean Piaget, the developmental psychologist, about whether language is learned or innate. So Chomsky on one side saying it's innate, and then Piaget on the other side saying, yes, there is a need for structure, but it's mostly learned. And there were interesting articles by various people at this conference debate that took place in France.
And one of them was by Seymour Papert from MIT, who was describing the perceptron, which was one of the early machine learning models. And I read this. I was maybe 20 years old, or something. I was fascinated. The idea that a machine could learn — that's what got me into it. And so you got interested in neural nets, but the broader community was not interested in neural nets. No, we're talking about 1980. So essentially, very, very, very few people were working on neural nets then. And the work wasn't really being published in mainstream venues or anything like that. There were a few cognitive scientists in San Diego, for example, working on this — David Rumelhart, Jim McClelland — and then Geoffrey Hinton, who I ended up working with after my PhD, who was interested in this. But it was really kind of lonely. There were a few isolated people in Japan and Germany working on this kind of stuff. But it was not a field. It started being kind of a field again around 1986 or something like that. And then there's another big AI winter. And what's the phrase you used? You and Geoffrey Hinton and Yoshua Bengio had a kind of conspiracy, you said, to bring neural nets back. It was that desperate? It was that hard to do this work at that point? OK, well, the notion of AI winter is complicated, because what's happened since the '50s is that there have been waves of interest in one particular technique — an excitement, and people working on it. And then people realize that this new set of techniques is limited, and then interest wanes, or people start using it for other things and lose the ambition of building intelligent machines.
And there have been a lot of waves like this, with the perceptron, things like that, and with more classical computer science, a lot of logic-based AI. There was a big wave of excitement in the '80s about logic-based AI — what we call rule-based systems, expert systems. And then in the late '80s, about neural nets, and then that died in the mid-'90s. So that's the winter where I was out in the cold, so to speak. And so what happened in the early 2000s is that Geoff, Yoshua, and I kind of got together and said, we have to rekindle the interest of the community in those methods, because we know they work. We just have to show a little more experimentally that they work, and perhaps come up with new techniques that are applicable to the new world. In the meantime, what happened is that the internet took off, and we had sources of data that we didn't have before. And computers got much faster. And computers got faster. And so all of that converged toward the end of the 2000s and early 2010s, when we started having real results in speech recognition, image recognition, and then a bit later, natural language understanding. And that really sparked a new wave of interest in machine-learning-based AI.
So we call that deep learning. We didn't want to use the words neural nets because they had a bad reputation, so we changed the name to deep learning. It must be strange, I imagine, having been on the outside, even of just computer science, for decades, to now be at the center, not just of tech, but in some ways the global conversation. It's quite a journey. It is, but I would have expected the progress to be more continuous, if you want, instead of those waves. Yeah, I wasn't at all prepared for what happened — neither for the community losing interest in those methods, nor for the incredibly fast explosion of the renewed field over the last 10, 12 years. And now there's been this huge, at least public-facing explosion in the last, whatever, 18 months, a couple of years. And there's been this big push for government regulation that you have had concerns about. What are your concerns? OK, so first of all, there's been a lot of progress in AI and deep learning applications over the last decade, a little more than a decade. But a lot of it has been a little behind the scenes. So on social networks, it's content moderation, protection against all kinds of attacks, things like that. That uses AI massively. When Facebook knows it's my friend in the photo, that's you. Yes — but no, not anymore. Oh, not anymore. There is no face recognition on Facebook anymore. Oh, isn't there? No, it was turned off several years ago. Oh my god, I feel so dated. But the point being that a lot of your work is integrated in different ways into these products. Oh, if you tried to rip deep learning out of Meta today, the entire company would crumble. It's literally built around it. So a lot of things behind the scenes, and things that are a little more visible — like translation, for example, which uses AI massively, obviously, or generating subtitles for video, so you can watch it silently. That's speech recognition, and then it's translated. So that is visible, but most of it is behind the scenes.
And in the rest of society it's also largely behind the scenes. You buy a car now, and most cars have a little camera looking out the windshield, and the car will brake automatically if there is an obstacle in front — that's called an automatic emergency braking system. It's actually a required feature in Europe. A car cannot be sold unless it has that. And it's on most new American cars as well. Yeah. And that uses deep learning. It uses convolutional nets, in fact — my invention. So that saves lives; same for medical applications and things like that. So that's a little more visible, but still kind of behind the scenes. What has changed in the last year or two is that now there are sort of AI-first products that are in the hands of the public. The fact that the public got so enthusiastic about it was a complete surprise to all of us, including OpenAI and Google and us.
OK, but let me get your take, though, on the regulation. Because there are even some big players — you've got Sam Altman at OpenAI — everyone, at least saying publicly: regulation, we think it makes sense. OK, so there are several types of regulation. There is regulation of products. So if you put one of those emergency braking systems in your car, of course, it's been checked by a government agency that makes sure it's safe. I mean, it has to happen, right? So you need to regulate products, certainly the ones that are life-critical in health care and transportation and things like that, and probably in other areas as well. The debate is about whether research and development should be regulated. And there, I'm clearly very strongly of the opinion that it should not. The people who believe it should are people who claim that there is an intrinsic danger in putting the technology in the hands of essentially everyone, or every technologist. And I think exactly the opposite — that this actually has a huge beneficial effect. What's the benefit? Well, the benefit is that we need to get AI technology to disseminate into all corners of society and the economy. Because it makes people smarter. It makes people more creative. It helps people who don't necessarily have the technique to write a nicely put-together piece of text, or a picture, or a video, or music, or whatever, to be more creative, right?
Creation tools, essentially. Creation aids. It may facilitate a lot of businesses; a lot of boring jobs can be automated. And so it has a lot of beneficial effects on the economy, on entertainment, all kinds of things. Making people smarter is intrinsically good. You could think of it this way: it may have, in the long term, a similar effect to the invention of the printing press, which had the effect of making people literate and smarter and more informed. Some people tried to regulate that too. Well, that's true.
Actually, the printing press was banned in the Ottoman Empire, at least for Arabic. And some people — the Minister of AI of the UAE, for example — say that it contributed to the decline of the Ottoman Empire. So yeah, if you want to ban technological progress, you're taking a much bigger risk than if you favor it. You have to do it right, obviously. I mean, there are side effects of technology that you have to mitigate as much as you can. But the benefits far outweigh the dangers. The EU has some proposed regulation. Do you think that's the right kind? Well, there are good things in the proposal for that regulation.
And there are things, again, when it comes to regulating research and development — essentially making it very difficult for companies to open-source their platforms — that I think are very counterproductive. And in fact, the French, German, and Italian governments basically blocked the legislation in front of the EU parliament for that reason. They really want open source. And the reason they want open source is because — imagine a future where everyone's interaction with the digital world is mediated by an AI system. That's where we're headed. That's where we're heading.
So every one of us will have an AI system. Within a few months, you will have that in your smart glasses. You can get smart glasses from Meta, and you can talk to them, and there's an AI system behind them, and you can ask questions. Eventually, they will have displays. So these things would be able to — I could speak French to you, and it would be automatically translated. In your glasses, you'd have subtitles.
Or you would hear my voice in English. And so, erasing language barriers and things like that. Or you would be in a place, and it would indicate where you should go, or give information about the building you're looking at, or whatever. So we'll have intelligent assistants living with us at all times. This will actually amplify intelligence. It will be like having a staff of people working for you, except they're not human. And they might be even smarter than you. But that's fine. I mean, I work with people who are smarter than me. So that's the future.
Now, if you imagine this kind of future, where all of our information diet is mediated by those AI systems, you do not want those things to be controlled by a small number of companies on the west coast of the US. It has to be an open platform, kind of like the internet. All the software infrastructure of the internet is completely open source. And it's not by design — it's just that it's the most efficient way to have a platform that is safe, customizable, et cetera.
And for AI assistants, those systems will constitute the repository of all human knowledge and culture. You can't have that centralized. Everybody has to contribute to it. So it needs to be open. You said at the FAIR 10th-anniversary event that you wouldn't work for a company that didn't do it the open way. Why is it so important to you? Two reasons. The first is that science and technology progress through the quick exchange of scientific information. One problem that we have to solve with AI is not the technological problem of what product we have to build — that's, of course, a problem. But the main problem we have to solve is: how do we make machines more intelligent? That's a scientific question.
And we don't have a monopoly on good ideas. A lot of good ideas come from academia. They come from other research labs, public or private. And so if there is a faster exchange of information, the field progresses faster. And if you become secretive, you fall behind, because people don't want to talk to you anymore. Let's talk about what you see for the future. It seems like one of the big things you're trying to do is a shift from these large language models that are trained on text to looking much more at images.
Why is that so important? OK, so as you say, we have those LLMs. It's amazing what they can do. They can pass the bar exam. But we still don't have self-driving cars. We still don't have domestic robots. Like, where is the domestic robot that can do what a 10-year-old can do — clear the dinner table and fill up the dishwasher? Where is the robot? A 10-year-old can learn to do this in one shot.
Where is the robot that can learn to drive a car in 20 hours of practice, like a 17-year-old? We don't have that. That tells you we're missing something really big. Are we training the wrong way? We're not training the wrong way, exactly, but we're missing essential components to reach human-level intelligence. So we have systems that can absorb an enormous amount of training data from text. And the problem with text is that text only represents a tiny portion of human knowledge. This sounds surprising, but in fact, most of human knowledge is things that we learn when we're babies, and that has nothing to do with language. We learn how the world works. We learn intuitive physics. We learn how people interact with each other. We learn all kinds of stuff. But that really doesn't have anything to do with language. And think about animals. A lot of animals are super smart — in some domains, actually smarter than humans, right? They don't have language, and they seem to do pretty well. So what type of learning is taking place in human babies and in animals that allows them to understand how the world works and become really smart, with common sense that no AI system today has?
So the joke I make very often is that the smartest AI systems we have today are stupider than a house cat. Because a cat can navigate the world in a way that a chatbot certainly can't. A cat understands how the world works, understands causality, understands that if it does something, something else will happen, right? And so it can plan sequences of actions. Have you ever seen a cat sitting at the bottom of a bunch of furniture, looking around, moving its head, and then going jump, jump, jump, jump, jump? That's amazing planning. No robot can do this today. And so we have a lot of work to do. It's not a solved problem. We're not going to get human-level AI systems before we get significant progress in being able to train systems to understand the world — basically by watching video and acting in the world. Another thing you're focused on is, I think, what you call objective-based models. Objective-driven. Objective-driven. Yeah. Explain why you think that is important. And I haven't been clear, just in hearing you talk about it, whether safety is an important component of that, or safety is kind of separate or alongside it. It's part of it. So, the idea of objective-driven — OK, let me tell you, first of all, what a current LLM is. That defines the problem, right? So LLMs really should be called auto-regressive LLMs. The reason we should call them this is that they just produce one word — or one token, which is a sub-word unit, it doesn't matter — one word after the other, without really planning what they're going to say.
So you give them a prompt, and then you ask, what word comes next? And they produce one word. And then you shift that word into their input and say, what word comes next now, et cetera, right? That's called auto-regressive prediction. It's a very old concept, but that's how it works now. Jeff did it like 30 years ago or something. Actually, Jeff had some work on this with a student a while back, but that wasn't very long ago. And Yoshua Bengio had a very early paper on this in the 2000s, using neural nets to do this, actually. It was probably one of the first. Anyway, I've gotten you distracted here. Right, OK. So you produce words one after the other, without really thinking about it beforehand — the system doesn't know in advance what it's going to say, right? It just produces those words.
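To make that loop concrete, here is a minimal sketch of auto-regressive decoding as described above. The `next_token_probs` function is a hypothetical stand-in for a trained language model's forward pass, not any real system's API:

```python
import random

# Hypothetical stand-in for a trained model: maps the context so far
# to a probability distribution over the next token. A real LLM computes
# this with a neural network forward pass over the whole context.
def next_token_probs(tokens: list[str]) -> dict[str, float]:
    vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
    return {w: 1.0 / len(vocab) for w in vocab}  # uniform, for illustration only

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)            # "what word comes next?"
        words = list(probs)
        weights = [probs[w] for w in words]
        token = random.choices(words, weights)[0]   # sample one token
        if token == "<eos>":                        # model decides to stop
            break
        tokens.append(token)                        # shift it into the input; repeat
    return tokens

print(" ".join(generate(["the", "cat"])))
```

The point LeCun is making is visible in the structure itself: each token is committed the moment it is sampled, and there is no step where the system plans the answer as a whole.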
And the problem with this is that it can hallucinate, in the sense that sometimes it will produce a word that is really not part of a correct answer, and then that's it. The second problem is that you can't control it. So you can't tell it: OK, you're talking to a 12-year-old, so only produce words that are understandable by a 12-year-old. You can put this in a prompt, but that has kind of limited effect unless the system has been fine-tuned for it. So it's very difficult, in fact, to control those systems. And you can never guarantee that whatever they're going to produce is not going to escape the conditioning, if you want — the training that they've gone through to produce not just useful answers, but answers that are non-toxic, non-biased, and everything.
So right now, that's done by fine-tuning the system — training it on lots of people answering questions and rating answers; that's called human feedback. There's an alternative to this. And the alternative is that you give the system an objective. So the objective is a mathematical function that measures to what extent the answer produced by the system conforms to a bunch of constraints that you want it to satisfy. Is this understandable by a 12-year-old? Is this toxic in this particular culture? Does this answer the question in the way that I want? Is it consistent with what my favorite newspaper was saying yesterday, or whatever? So a bunch of constraints like this, which could be safety guardrails or just the task. And then what the system does, instead of just blindly producing one word after the other, is plan an answer that satisfies all of those criteria. And then you produce that answer. That's objective-driven AI. That's the future, in my opinion. We haven't made this work yet, or at least not in the situations that we want. People have been working on this kind of stuff in robotics for a long time. It's called model predictive control, or motion planning.
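As a toy illustration of the objective-driven idea — and only a sketch, since LeCun says this doesn't fully work yet — "plan an answer that satisfies the criteria" can be read as a search that scores whole candidate answers against explicit objective functions. Every name and penalty rule below is a made-up placeholder, not Meta's actual system:

```python
from typing import Callable

# A constraint maps a candidate answer to a penalty; 0.0 means fully satisfied.
Constraint = Callable[[str], float]

def readability_penalty(answer: str) -> float:
    # Toy proxy for "understandable by a 12-year-old": penalize long words.
    return sum(1.0 for word in answer.split() if len(word) > 10)

def toxicity_penalty(answer: str) -> float:
    # Toy guardrail: heavily penalize words from a small blocklist.
    blocklist = {"idiot", "stupid"}
    return sum(5.0 for word in answer.split() if word.lower() in blocklist)

def plan_answer(candidates: list[str], constraints: list[Constraint]) -> str:
    # "Planning" reduced to its simplest form: pick the candidate that
    # minimizes the total penalty across all objectives and guardrails.
    return min(candidates, key=lambda a: sum(c(a) for c in constraints))

candidates = [
    "Photosynthesis is an extraordinarily multifaceted biochemical phenomenon.",
    "Plants use sunlight to turn air and water into food.",
]
print(plan_answer(candidates, [readability_penalty, toxicity_penalty]))
```

In a real objective-driven system, the optimization would happen over the model's internal representations — as in model predictive control — rather than over a handful of finished strings, but the shape is the same: optimize an answer against explicit objectives before emitting it.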
There's obviously been so much attention to Geoffrey Hinton and Yoshua Bengio having these concerns about what the technology could do. How do you explain the three of you reaching these different conclusions? OK, so it's a bit difficult to explain for Jeff. He had a bit of an epiphany in April, where he realized that the systems we have now are a lot smarter than he expected them to be. And he realized, oh my god, we're kind of close to having systems that have human ability. I disagree with this completely. They're not as smart as he thinks they are. Right. Yeah, right. And he's thinking in very long-term, abstract terms. So I can understand why he's saying what he's saying, but I just think he's wrong. And we've disagreed on things before. We're good friends. But we've disagreed on these kinds of questions before — on technical questions, among other things. So I don't think he's thought about the problem of existential risk and stuff like that for very long — basically since April. I've been thinking about this from a philosophical, moral point of view for a long time. For Yoshua, I think it's more a concern about short-term risks that would be due to misuse of the technology — by terrorist groups or people with bad intentions. And also about the motivations of the industry developing AI, which he sees as not necessarily aligned with the common good, because he claims it's motivated by profit. So there may be a bit of political philosophy there — perhaps he has less trust in the democratic institutions doing the right thing than I have. I've heard you say that that is the distinction — that you have more faith in democracy and in institutions than they do. I think that's the case.
I don't want to put words in their mouths, and I don't want to misrepresent them. Ultimately, I think we have the same goal. We know that there are going to be a lot of benefits to AI technology — otherwise, we wouldn't be working on this. And the question is how you do it right. Do we have to have, as Yoshua advocates, some overarching multinational regulatory agency to make sure everything is safe? Should we ban open-sourcing models that are potentially dangerous, but run the risk of basically slowing down progress, slowing the dissemination of the technology in the economy and society? Those are trade-offs, and reasonable people can disagree on them. Yeah. In my opinion, the criterion — the reason, really, that I'm very much in favor of open platforms — is the fact that AI systems are going to constitute a very basic infrastructure in the future. And there has to be some way of ensuring that, culturally and in terms of knowledge, those things are diverse. A bit like Wikipedia, right? You can't have Wikipedia in just one language. It has to cover all languages, all cultures, everything. Same story.
There has been — and it's obviously not just the two of them — a growing number of people who say, not that it's likely, but that there's a real chance, like a 10, 20, 30, 40% chance, of literally wiping out humanity, which is kind of terrifying. Why are so many, in your view, getting it wrong? It's a tiny, tiny number of people. Ask the vast majority. It's like 40% of researchers in one poll. No — but that's a self-selected online poll. People select themselves to answer those polls. The vast majority of people in AI research — particularly in academia or in startups, but also in large labs like ours — don't believe in this at all. They don't believe there is a significant existential risk to humanity. All of us believe that there are proper ways to deploy the technology and bad ways to deploy it, and that we need to work on the proper way to do it. OK. And the analogy I draw, I think, is that the people who are really afraid of this today would be a bit like people in 1920 or 1925 saying: oh, we have to ban airplanes because they can be misused — someone can fly over a city and drop a bomb. And those can be dangerous because they can crash. So we're never going to have planes that cross the Atlantic, because it's just too dangerous. A lot of people will die from this, right?
And then they would have asked to regulate the technology — like, you know, ban the invention of the turbojet, OK, or regulate turbojets. In 1920, turbojets hadn't been invented yet. In 2023, human-level AI has not been invented yet. So discussing how to make this technology safe — superhuman intelligence safe — is the same as asking a 1920 engineer how you can make turbojets safe. They're not invented yet, right? And the way to make them safe is going to be like turbojets: it's going to be years and decades of iterative refinement and careful engineering of how to make those things work properly, and they're not going to be deployed unless they're safe. So again, you have to trust in the institutions of society to make that happen. And just so I understand your view on existential risk: I don't think you're saying it's zero, but you're saying it's quite small, like below 1%? You know, it's below the chances of an asteroid hitting the Earth, and global nuclear war, and things of that type. I mean, it's on the same order. I mean, there are things that you should worry about, and there are things that you can't do anything about.
In the case of natural phenomena, right, there's not much you can do about them. But for things like deploying an AI, we have agency. We can decide not to deploy if we think there is a danger, right? So attributing a probability to this makes no sense, because we have agency. Last thing on this topic: autonomous weapons. How will we make those safe, and not have at least the possibility of really bad outcomes with them? So autonomous weapons already exist — but not in the form that they will take in the future. We're talking about missiles that are self-guided, but that's a lot different from a soldier that's sent into battle. OK, the first example of an autonomous weapon is the land mine. And some countries — not the US, but some countries — banned its use; there are international agreements about this, which neither the US nor Russia nor China signed. And the reason for banning them is not because they're smart; it's because they're stupid. They're autonomous and stupid, and so they'll kill anybody, right? With a guided missile, the more guided the missile is, the less collateral damage it makes.
So then there is a moral debate. Is it better to actually have smarter weapons that only destroy what you target and don't kill hundreds of civilians next to it? Can that technology be used to protect democracy? In Ukraine — Ukraine makes massive use of drones, and they're starting to put AI into them. Is it good or is it bad? I think it's necessary, regardless of whether you think it's good or bad. Autonomous weapons are necessary. Well, for the protection of democracy in that case, right? But obviously, the concern is: what if it's Hitler who has them, rather than Roosevelt? Well, then, that's the history of the world.
Who has the better technology — the good guys or the bad guys? So the good guys should be doing everything they can. It's, again, a complicated moral issue; it's not my specialty. I don't work on weapons. But you're a prominent voice saying: hey, guys, don't be worried, let's go forward. And this is, I think, one of the main concerns people have. OK, so another posture is to be a pacifist, like some of my colleagues. But I think you have to be realistic about the fact that this technology is being deployed in defense. And for good purposes — the Ukrainian conflict has actually made this quite obvious, that progress in technology can actually help protect democracy.
We talk generally about all the good things AI can do. I'd love, to the extent you can, for you to talk really specifically about things that people — let's say middle-aged or younger — can hope, in their lifetimes, that AI will do to make their lives better. So in the short term, there are safety systems for transportation, and for medical diagnosis — detecting tumors and things like that — which are improved with AI. And then in the mid-term, understanding more about how life works, which would allow us to do things like drug design more efficiently — like all the work on protein folding and the design of proteins, the synthesis of new chemical compounds, and things like that.
So there's a lot of activity on this. There's not been a huge revolutionary outcome from it yet, but there are a few techniques that have been developed with the help of AI to treat rare genetic diseases, for example, and things of that type. So this is going to make a lot of progress over the next few years — make people's lives more enjoyable, and longer perhaps, et cetera. And then beyond that, again, imagine each of us being like a leader in science, business, politics, or whatever it is. And we'll have a staff of people assisting us. But they won't be people. There will be virtual people working for us.
Everybody is going to be a boss, essentially. And everybody is going to be smarter as a consequence. Not individually smarter, perhaps, although they will learn from those things. But smarter in the sense that they will have a system that makes them smarter, right? That makes it easier for them to learn the right thing, to access the right knowledge, to make the proper decisions. So we'll be in charge of AI systems. We'll control them. They'll be subservient to us. We set their goals. But they can be very smart in fulfilling those goals.
As the leader of a research lab, a lot of people at FAIR are smarter than me. And that's why we hire them. And there is kind of an interesting interaction between people — particularly in politics, right? The politician, the sort of visible persona, makes a decision. And that's setting goals, essentially, for other people to fulfill. So that's the interaction we'll have with AI systems. We set goals for them, and they fulfill them.
I think you've said AGI is at least a decade away, maybe farther. Is this something you guys are working toward? Or are you leaving that kind of thing to the other guys? Or is that your goal? Oh, it's our goal. Of course — it's always been our goal. But I guess in the last 10 years, there were so many useful things we could do in the short term that a part of the lab ended up being devoted to those useful things, like content moderation, translation, computer vision, robotics — a lot of things that are application areas of this type.
What has changed in the last year or two is that now we have products that are AI-first, right? Assistants that are built on top of Llama and things like that. So the services that Meta is deploying — and will be deploying not just on mobile devices, but also on smart glasses and AR/VR devices and things like that — are AI-first. So now there is a product pipeline where there is a need for a system that has essentially human-level AI. We don't call this AGI, because human intelligence is actually very specialized. It's not general.
So we call this AMI — advanced machine intelligence. But when you say AMI, you're basically meaning AGI. Basically, it's the same as what people mean by AGI. We like it — Joelle and I like it — because we speak French, and "ami" means friend in French. Mon ami, my friend. So yeah, no, we're totally focused on that. That's the mission of FAIR, really. Whenever AGI happens, it's going to change the relationship between people and machines. Do you worry at all that we'll have to hand over control — the way we do to things like corporations or governments — to these smarter entities?
We don't hand over control. We hand over the execution. We control — we set the goals, as I said before, and they execute the goals. It's very much like being the leader of a team of people: you set the goal. This is a wild one, but I find it fascinating. There are some people who think that even if humanity got wiped out by these machines, it would not be a bad outcome, because it would just be the natural progression of intelligence. Larry Page is apparently a famous proponent of this, according to Elon Musk. Would it be terrible if we got wiped out, or would there be some benefit, because it's a form of progress?
I don't think this is something that we should think about right now, because predictions of this type that are more than, let's say, 10 years ahead are complete speculation. So how our descendants will see progress, or their future — it's not for us to decide. We have to give them the tools to do whatever they want. But I don't think it's for us to decide. We don't have the legitimacy for that. We don't know what it's going to be. That's so interesting, though. You don't necessarily think humans should worry about humanity continuing?
I don't think it's a worry that people should have at the moment. I mean, OK — how long has humanity existed? About 300,000 years. That's very short. So if you project 300,000 years into the future, what will humans look like then, given the progress of technology? We can't figure it out. And probably the biggest changes will not come through AI. They'll probably come through genetic engineering or something like that, which is currently banned, probably because we don't know its potential dangers. Last thing, because I know our time is running out. Do you see a middle path that acknowledges more of the concerns — that at least considers that maybe you're wrong and, to an extent, this other group is right — and still maintains the things that are important to you around open use of AI? Is there kind of a compromise?
So there are certainly potential dangers in the medium term that are essentially due to potential misuse of the technology. And the more available you make the technology, the more people you make it accessible to, so you have a higher chance of people with bad intentions being able to use it. So the question is: what countermeasures do you use for that? Some people are worried about things like a massive flood of misinformation generated by AI, for example. What measures can you take against that? So what we're working on is things like watermarking, so that you know when a piece of data has been generated by a system. Another thing that we're extremely familiar with at Meta is detecting fake accounts. And for divisive speech — sometimes generated, sometimes just typed by people with bad intentions — hate speech, dangerous misinformation, we already have systems in place to protect against this on social networks. And the thing that people should understand is that those systems make massive use of AI.
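Watermarking schemes vary; as one hedged illustration of the published "green list" family of LLM watermarks — a generic technique from the research literature, not necessarily what Meta is building — generation nudges each token toward a pseudorandom subset of the vocabulary seeded by the preceding token, and detection simply measures how often that bias shows up:

```python
import hashlib

def in_green_list(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    # Deterministically decide whether `token` falls in the "green list"
    # seeded by `prev_token`; a watermarking generator would favor green tokens.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < fraction

def green_fraction(tokens: list[str]) -> float:
    # Detection: unwatermarked text should land near `fraction` (0.5 here);
    # watermarked text scores significantly above it.
    pairs = list(zip(tokens, tokens[1:]))
    return sum(in_green_list(a, b) for a, b in pairs) / max(1, len(pairs))

sample = "the model leaves a statistical trace in its token choices".split()
print(f"green fraction: {green_fraction(sample):.2f}")
```

The appeal of this style of scheme is that detection needs only the seeding rule, not the model itself, and the signal strengthens with text length.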
So hate speech detection and takedown, in all the languages in the world, was not possible five years ago, because the technology was just not there. And now it's much, much better because of the progress in AI. Same for cybersecurity. You can use AI systems to try to attack computer systems. But that means you can also use them to protect. So every attack has a countermeasure, and they both make use of AI. So it's a cat-and-mouse game, as it's always been. Nothing new there. So that's for the short- to medium-term dangers. And then there is the long-term danger of existential risk. And I just do not believe in this at all, because we have agency. It's not a natural phenomenon that we can't stop.
It's something that we do, and we're not going to extinguish ourselves by accident. The reason people think this, among other things, is because of a scenario that has been popularized by science fiction, which has received the name "foom." OK. And what that means is that one day, someone is going to discover the secret of AGI — or whatever you want to call it, superhuman intelligence — and is going to turn on the system. And two minutes later, that system will take over the entire world and destroy humanity, making such fast progress in technology and science that we're all dead. And some people actually are predicting this in the next three months, which is insane. So this is not happening. This scenario is completely unrealistic.
This is not the way things work. The progress toward human-level AI is going to be slow and incremental. And we're going to start by having systems that may have the ability to potentially reach human-level AI, but at first, they're going to be as smart as a rat or a cat, something like that. And then we're going to crank them up, and put in some more guardrails to make sure they're safe, and then work our way through smarter and smarter systems that are more and more controllable, et cetera. It's going to be like the same process we used to make turbojets safe. It took decades. And now you can fly across the Pacific on a two-engine airplane.
You couldn't do this 10 years ago. You had to have three engines before, because the reliability of turbojets wasn't that high. So it's going to be the same thing — a lot of engineering, a lot of really complicated engineering. We're out of time for today. But if we're all still here in three months, maybe we'll do it again. My pleasure. Thanks a lot.