The Impact of chatGPT talks (2023) - Prof. Max Tegmark (MIT) - YouTube
Published: 2023-08-03 16:00:00
Transcript
Thank you so much, friends, for inviting me. It's such a pleasure to be talking about these things here in my own department. And it's so cool to see how many interesting things are happening right here.
So I'm going to talk about keeping AI under control with mechanistic interpretability. And in particular, how I think we physicists have a great opportunity to help with this.
So first of all, why might we want to keep AI under control? Well, obviously, as we've heard this morning, because it's getting more and more powerful. We've all seen this paper from Microsoft arguing that GPT-4 is already showing sparks of artificial general intelligence.
Here is Yoshua Bengio: "We've now reached the point where there are AI systems that can pass for humans, meaning they can pass the Turing test." Now, you can debate whether or not GPT-4 passes the Turing test, but Yoshua Bengio should certainly get a vote in that debate, since he's one of the Turing Award winners, the equivalent of the Nobel Prize for AI.
And this rapid progress has, as you know, obviously also started freaking a lot of people out. Here we have his Turing Award co-winner Geoffrey Hinton. I'm not sure if the audio is actually going out. Is it? [Clip plays:] "Are we close to the computers coming up with their own ideas for improving themselves?" "Yes, we might be." "And then it could just go?" "That's an issue, right. We have to think hard about how to control that." "Yeah, can we?" "We don't know. We haven't been there yet, but we can try."
OK, let's see. That was kind of concerning, yes. And then, piling on, Sam Altman, CEO of OpenAI, which of course has given us ChatGPT and GPT-4, had this to say: "And the bad case, and I think this is important to say, is like lights out for all of us." Lights out for all of us. Doesn't sound so great.
And of course, then a bunch of us called for a pause in an open letter. Shortly after that, a group of AI researchers put out a statement about how this poses a risk of extinction, which was all over the news. It was the shortest open letter I've ever read, just one sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
So basically, the whole point of this was that it mainstreamed the idea that, hey, maybe we could get wiped out. So we really should keep it under control.
And the most interesting thing here, I think, is who signed it. You have not only top academic researchers who don't have a financial conflict of interest, people like Geoffrey Hinton and Yoshua Bengio, but you also have the CEOs: Demis Hassabis from Google DeepMind, Sam Altman again, Dario Amodei, et cetera. So there are a lot of reasons why we should keep AI under control.
How can we help? I feel that, first of all, we obviously should. And Peter, earlier this morning, gave a really great example of how I think we really can help by opening up the black box and getting to a place where we're not just using ever more powerful systems that we don't understand, but where we're instead able to understand them better.
This has always been the tradition in physics when we work with powerful things. If you want to get a rocket to the moon, you don't just treat it as a black box and fire it: "Oh, that one went a little too far to the left; let's aim a little farther to the right next time." No, what you do is figure out the laws involved, Einstein's laws of gravitation and so on. And then you can be much more confident that you're going to control what you build.
So this is actually a field that has gained a lot of momentum quite recently. It's still a very small field, known by the nerdy name of mechanistic interpretability. To give you an idea of how small it is, compare it with neuroscience: you can think of this as artificial neuroscience, and neuroscience is a huge field, of course.
And look how few people there were here at MIT at this conference that I organized just two months ago. This was by far the biggest conference in this little nascent field, OK? So that's the bad news: very few people are working on it. But the good news is, even though there are so few, there's already been a lot of progress, remarkable progress. I would say more progress in this field than in all of big neuroscience in the last year.
Why is that? It's because here you have a huge advantage over ordinary neuroscience. First of all, to study the brain, with its 10^11 neurons, you have a hard time reading out more than 1,000 at a time, and you need to get IRB approval for all sorts of ethics reasons, and so on.
Here, you can read out every single neuron all the time. You can also get all the synaptic weights. You don't even have to go to the IRB. And you can use all those traditional techniques we love in physics, where you actually mess with the system and see what happens.
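The advantage described above can be made concrete with a toy sketch (illustrative only: the weights are random made-up numbers, not a trained model): in an artificial network, every activation and every weight is directly readable, and we can intervene, ablating a neuron the way a physicist perturbs a system.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer MLP with made-up weights; in a real model these would be
# the trained parameters, all of which are directly readable.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x, ablate=None):
    """Forward pass that records every 'neuron' and optionally ablates one."""
    h = np.maximum(0.0, W1 @ x + b1)   # hidden activations: all observable
    if ablate is not None:
        h[ablate] = 0.0                # intervention: silence one neuron
    return h, W2 @ h + b2

x = np.array([1.0, -0.5, 0.3])
h, y = forward(x)                      # read out every neuron, every weight
h_abl, y_abl = forward(x, ablate=0)    # perturb and see what happens
print(h, y, y_abl)
```

Unlike a biological brain, the "recording" here is trivially complete: `h` is the entire hidden state, and the ablation experiment needs no approval process.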
And I think there are three levels of ambition that can motivate you to work on mechanistic interpretability, which is, of course, what I'm trying to do here: encourage you to work more on this.
The first, lowest level of ambition is just, when you train a black-box neural network on some data to do some cool stuff, to understand it well enough that you can diagnose its trustworthiness, make some assessment of how much you should trust it.
That's already useful.
Second level of ambition, if you take it up a notch, is to understand it so well that you can improve its trustworthiness.
And the ultimate level of ambition, and we are very ambitious here at MIT, is to understand it so well that you can guarantee trustworthiness.
We have a lot of work at MIT on formal verification, where you do mathematical proofs that a piece of code is going to do what you want it to do.
Proof-carrying code is a popular topic in computer security. It's a little bit like a virus checker in reverse: a virus checker refuses to run your code if it can prove that the code is harmful.
Here, instead, the operating system says to the code, give me a proof that you're going to do what you say you're going to do.
And if the code can't present the proof that the operating system can check it, it won't run it.
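A minimal sketch of the proof-carrying-code idea (the function names and the exhaustive-check "certificate" are illustrative inventions, not a real verification framework): the runner refuses to execute code unless the supplied certificate checks out.

```python
# Toy proof-carrying code: the "operating system" only runs code that
# comes with a proof it can check. Here the proof is an exhaustive check
# over a finite domain, which is feasible for this tiny example.

def add_mod_59(a, b):
    return (a + b) % 59

def claim(f):
    """The claimed safety property: outputs always land in [0, 59)."""
    return all(0 <= f(a, b) < 59 for a in range(59) for b in range(59))

def run_if_proved(f, certificate):
    """Run f only if the certificate checks out; otherwise refuse."""
    if not certificate(f):
        raise RuntimeError("no valid proof supplied; refusing to run")
    return f

safe_add = run_if_proved(add_mod_59, claim)
print(safe_add(40, 30))  # 11
```

Real proof-carrying code uses machine-checkable formal proofs rather than exhaustive enumeration, but the asymmetry is the same: producing the proof may be hard, while checking it is easy for the runner.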
It's hopeless to come up with rigorous proofs for neural networks because it's like trying to prove things about spaghetti.
But the vision here is if you can use AI to actually mechanistically extract out the knowledge that's been learned, you can re-implement it in some other kind of architecture, which isn't a neural network, which really lends itself to formal verification.
If we can pull off this moonshot, then we can trust systems much more intelligent than us, because no matter how smart they are, they can't do the impossible.
So in my group, we've been having a lot of fun working on extracting learned knowledge from the black box in the mechanistic interpretability spirit.
You heard, for example, my grad student Eric Michaud talk about this quanta hypothesis recently.
And I think this is an example of something very encouraging, because if this quanta hypothesis is true, you can do divide and conquer.
You don't have to understand the whole neural network all at once.
Instead, you can look at the discrete quanta of learning and study them separately, much like we physicists don't try to understand a solid all at once.
First, we try to understand the individual atoms it's made of, and then we can work our way up to solid-state physics and so on.
It also reminds me a little bit of Minsky's Society of Mind, where you have many different systems working together to do very powerful things.
I'm not going to try to give a full summary of all the cool stuff that went down at this conference, but I can share that there's a website where we have all the talks on YouTube, if anyone wants to watch them later.
But I want to give you a little more of the nerdy flavor of how tools that many of you here know as physicists are very relevant to this.
Things like phase transitions, for example.
So we already heard a beautiful talk by Jacob Andreas about knowledge representations.
There's been a lot of progress on figuring out how large language models represent knowledge, how they know that the Eiffel Tower is in Paris and how you can change the weights so that it thinks it's in Rome, et cetera, et cetera.
We did a study on algorithmic data sets where we found phase transitions.
So suppose you try to make machine learning learn a giant multiplication table; this could be for some arbitrary group operation, something more interesting than standard multiplication. If there's any sort of structure here, for example if this operation is commutative, then you only really need training data for about half of the entries, and you can figure out the other half because it's a symmetric matrix. If it's also associative, you need even less, et cetera. So as soon as the machine learning discovers some sort of structure, it might learn to generalize.
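The counting argument above can be sketched concretely (using addition modulo 59 as the commutative operation, chosen here just for illustration): a symmetric table is fully determined by its upper triangle, so roughly half the entries suffice.

```python
import numpy as np

n = 59  # addition modulo 59 is commutative (and associative)
table = np.fromfunction(lambda a, b: (a + b) % n, (n, n), dtype=int)

# Commutativity makes the table symmetric, so the upper triangle
# (including the diagonal) determines everything.
assert np.array_equal(table, table.T)

upper = np.triu(table)  # "training data": about half the entries
reconstructed = np.where(np.arange(n)[:, None] <= np.arange(n)[None, :],
                         upper, upper.T)

print(np.array_equal(reconstructed, table))  # True
print(n * (n + 1) // 2, "entries suffice out of", n * n)
```

For n = 59 that is 1,770 entries instead of 3,481; associativity would cut the number of independent entries further still.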
So here's a simple example: addition modulo 59. We train a neural network to do this. We don't give it the inputs as numbers; we just give it each of the numbers from zero to 58 as a symbol. So it doesn't have any idea that they should be thought of as numbers, and it represents them by embedding them in some internal space. And then we find that exactly at the moment when it learns to generalize to unseen examples, there's a phase transition in how it represents them in that internal space. It starts out in a high-dimensional space, but everything collapses onto a two-dimensional hyperplane, which I'm showing you here as a circle. Boom. That's of course exactly like the way we do addition modulo 12 when we look at a clock, right? So it finds a representation where it's actually adding up angles, which automatically captures, in this case, the commutativity and associativity. And I suspect this might be a general thing that happens in learning language and other things too: it comes up with a very clever representation that geometrically encompasses a lot of the key properties that let it generalize.

We do a lot of phase-transition experiments as well. We tweak various properties of the neural network: if you think of this as being like water, you could have pressure and temperature on your phase diagram, but here there are various other nerdy machine-learning parameters instead. And you get phase-transition boundaries between the region where it learns properly and can generalize, the region where it fails to generalize and never learns anything, and the region where it just overfits. This is for the example of doing regular addition, so you see it learns to put the symbols on a line rather than a circle, in the cases where it works out.
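The clock picture can be sketched in a few lines (a hand-built circular embedding for illustration, not the network's actual learned one): placing each symbol at angle 2πk/n makes modular addition literally angle addition.

```python
import numpy as np

n = 59

def embed(k):
    """Place symbol k on the unit circle, like hours on a 59-hour clock."""
    theta = 2 * np.pi * k / n
    return np.array([np.cos(theta), np.sin(theta)])

def decode(v):
    """Map a point on the circle back to the nearest symbol."""
    theta = np.arctan2(v[1], v[0]) % (2 * np.pi)
    return int(round(theta * n / (2 * np.pi))) % n

def add_via_angles(a, b):
    # Adding the two angles (rotating by b) is the same as adding modulo n.
    ta = np.arctan2(*embed(a)[::-1])
    tb = np.arctan2(*embed(b)[::-1])
    return decode(np.array([np.cos(ta + tb), np.sin(ta + tb)]))

print(add_via_angles(40, 30), (40 + 30) % n)  # both 11
```

The commutativity and associativity come for free: rotation composition has both properties, which is exactly the geometric shortcut the talk describes the network discovering.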
So I want to leave a little bit of time for questions, but the bottom line I would like you to take away from all this is: I think it's too pessimistic to say, oh, we're forever just going to be stuck with these black boxes that we can never understand. Of course, if we convince ourselves that it's impossible, that's the best recipe for failure. I think it's quite possible that we really can understand enough about very powerful AI systems to have very powerful AI systems that are provably safe. And physicists can really help a lot, because we have a much higher bar for what we mean by understanding things than a lot of our colleagues in other fields, and we also have a lot of really great tools. We love studying nonlinear dynamical systems, phase transitions, and so many other things that are turning out to be key to this kind of progress. So if anyone is interested in collaborating, or in learning more about mechanistic interpretability (basically studying the learning and execution of neural networks as just another cool physical system to try to understand), just reach out to me, and let's talk. Thank you. Thank you very much. Does anyone have questions?
I actually have one to start with. In these last few slides, a lot of the themes seem to be about applying the laws of thermodynamics and other physical laws to these systems. And the parallel I thought of is that the field of biophysics also sort of emerged this way, right? Applying physical laws to systems that were considered too complex to understand before we really thought about them carefully. Is there any sort of emerging field like that in the area of AI or understanding neural networks, other than that little conference you just mentioned? Or is that really all that's there right now?
There's so much room for there to really be an emerging field like this. And I invite all of you to help build it. It's obviously a field which is not only very much needed but it's just so interesting. There have been so many times in recent months when I read a new paper by someone else about this and I'm like, oh, this is so beautiful.
You know, another way to think about this is that I always tell my students, when they pick tasks to work on, to look for areas where there's a lot of data and experiment is ahead of theory. That's the best place to do theoretical work. And that's exactly what we have here, right? If you train a system like GPT-4 to do super interesting things, or use Llama 2, which just came out and where you have all the parameters, it's an incredibly interesting system: you can get massive amounts of data, and there are the most fundamental things we don't understand.
It's just like when the LHC turns on, or when you first launch the Hubble Space Telescope or the WMAP satellite or something like that: you have a massive amount of data and really cool basic questions. It's the most fun domain to do physics in, and yeah, let's build a field around it. Thank you, we've got a question up there.
Hi, Professor Tegmark. First of all, amazing talk; I love the concept. But I was wondering whether this approach might miss situations in which the language model actually performs very well not in a compact region, like a phase region in parameter space, but rather in small blobs scattered all around. In most physical systems we have a lot of parameters, and the phases are mostly compact regions in n dimensions or whatever, with phase transitions between them, which is the concept here. But since this is not necessarily a physical system, maybe the best performance occurs only at specific combinations of parameters, as points or little blobs scattered around. I don't know if my question went through.
Yeah, good question. I think I need to explain better; my proposal is actually more radical than I perhaps managed to convey. I think we should never put something we don't understand, like GPT-4, in charge of the MIT nuclear reactor or any high-stakes system. I think we should use these black-box systems to discover amazing knowledge and discover patterns in data, and then not stop there and just connect them to the nuclear weapons system or whatever. Instead, we should develop other AI techniques to extract the knowledge they've learned and re-implement it in something else, right?
So, to take your physics metaphor again: think of Galileo when he was four years old. If his daddy threw him a ball, he'd catch it, because his black-box neural network had gotten really good at predicting trajectories. Then he got older and realized: wait a minute, these trajectories always have the same shape, a parabola, y equals x squared and so on. And when we send a rocket to the moon, right, we don't put a human there to make poorly understood decisions. We have actually extracted the knowledge and written Python code, or something else that we can verify. I think we need to stop putting an equal sign between large language models and AI. We've had radically different ideas of what AI should be: first we thought about it in the von Neumann paradigm of computation, now we're thinking about LLMs, and we can think of other ones in the future.
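The Galileo story maps onto a tiny worked example (synthetic, noiseless data; the constants are illustrative): fit observed trajectories, then read off the law as an explicit formula you can check independently of any black box.

```python
import numpy as np

# The "black box" stage: observations of a ball thrown upward.
g, v0 = 9.8, 20.0
t = np.linspace(0, 2, 50)
y = v0 * t - 0.5 * g * t**2           # noiseless trajectory data

# The "Galileo" stage: notice every trajectory is a parabola, and
# extract the law as an explicit, verifiable formula.
c2, c1, c0 = np.polyfit(t, y, deg=2)  # fit y = c2*t^2 + c1*t + c0

print(round(-2 * c2, 2))  # recovered g: 9.8
print(round(c1, 2))       # recovered launch speed v0: 20.0
```

The point is the hand-off: the pattern may be discovered by a learner, but the extracted formula is short enough to verify formally, which a billion-parameter trajectory predictor is not.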
What's really amazing about neural networks, in my opinion, is not their ability to execute a computation at runtime. They're just another massively parallel computational system, and there are plenty of others that are easier to formally verify. Where they really shine is in their ability to discover patterns in data, to learn. So let's continue using them for that. You could even imagine an incredibly powerful AI that is allowed to learn but is not allowed to act back on the world in any way. Then you use other systems to extract what it has learned, and you implement that knowledge in a system you can provably trust. This, to me, is the path forward that's really safe.
And maybe there will still be some kinds of systems that are so complicated we can't prove they're going to do what we want. So let's not use those things until we can prove them, because I'm confident that the set of things that can be made provably safe is vastly more powerful, useful, and inspiring than anything we have now. So why should we risk losing control when we can do so much more first, provably safely?
I'll keep my question short. On your first point: is it just an empirical observation, or do you have a theoretical model like you do in physics?

Right now it's mainly an empirical observation. We have seen many examples of phase transitions cropping up in machine learning, and so have many other authors. I am confident that there is a beautiful theory out there to be discovered, a sort of unified theory of phase transitions in learning, and maybe you're going to be the one to first formulate it. I don't think it's a coincidence that these things keep happening like this.
But this gives you all an example of how many basic physics-like questions are out there, still unanswered, where we have massive amounts of data as clues to guide us toward them. Thank you. And my hunch is that at some point in the future we will probably even discover a very deep, unified relationship, or duality, between thermodynamics and learning dynamics. Thank you.