
Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment

Published 2023-03-27 13:57:49
But I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. Are you worried about spies? I'm really not worried about the weights being leaked. We'll all be able to become more enlightened because we'd interact with an AGI that will help us see the world more correctly, like imagine talking to the best meditation teacher in history. Microsoft has been a very, very good partner for us.

So I challenge the claim that next-token prediction cannot surpass human performance. If your base neural net is smart enough, you just ask it: what would a person with great insight and wisdom and capability do? Okay, today I have the pleasure of interviewing Ilya Sutskever, who is the co-founder and chief scientist of OpenAI. Ilya, welcome to the Lunar Society. Thank you. Happy to be here.

First question, and no humility allowed. There are many scientists, or maybe not that many, who will make a big breakthrough in their field. There are far fewer scientists who will make multiple independent breakthroughs that define their field throughout their career. What is the difference? What distinguishes you from other researchers? Why have you been able to make multiple breakthroughs in your field?

Well, thank you for the kind words. It's hard to answer that question. I mean, I try really hard. I gave it everything I've got, and that has worked so far. I think that's all there is to it.

Got it. What's the explanation for why there aren't more illicit uses of GPT? Why aren't more foreign governments using it to spread propaganda or scam grandmothers or something? Maybe they haven't really gotten to do it a lot yet. But it also wouldn't surprise me if some of it was going on right now. I can certainly imagine they'd be taking some of the open-source models and trying to use them for that purpose. I'd expect this would be something they'd be interested in in the future. So it's technically possible that they just haven't thought about it enough, or haven't done it at scale using their technology. Or maybe it is happening, which is the other possibility. Would you be able to track it if it was happening? I think large-scale tracking is possible, yes. It requires special operations, but it's possible.

Now there's some window in which AI is very economically valuable, on the scale of airplanes, but we haven't reached AGI yet. How big is that window? It's hard to give you a precise answer, but it's definitely going to be a good multi-year window. It's also a question of definition, because AI, before it becomes AGI, is going to be increasingly more valuable year after year, in an exponential way. So in some sense it may feel, especially in hindsight, like there was only one year or two years, because those two years were larger than the previous years. But I would say that already last year there was a fair amount of economic value produced by AI. Next year is going to be larger, and larger after that. I think this is going to be a good multi-year chunk of time, and that's going to be true, I would say, from now on till AGI, pretty much.

Well, I ask because I'm curious whether there's a startup that's using your models. At some point, if you have AGI, there's only one business in the world: it's OpenAI. How much of a window does any business have? Or are they actually producing something that AGI can't produce?

Yeah, well, it's the same question as asking how long until AGI. I think it's a hard question to answer. I hesitate to give you a number, also because there is this effect where people who are optimistic, people who are working on the technology, tend to underestimate the time it takes to get there. The way I ground myself is by thinking about the self-driving car. In particular, there is an analogy: if you look at a Tesla, and you look at its self-driving behavior, it looks like it does everything. It does everything. But it's also clear that there is still a long way to go in terms of reliability. And we might be in a similar place with respect to our models, where it also looks like we can do everything, but at the same time we'll need to do some more work until we really iron out all the issues and make it really good and really reliable and robust and well-behaved.

By 2030, what percent of GDP is AI? Oh, gosh, very hard to answer that question. Give me an over-under. The problem is that my error bars are on a log scale. I could imagine a huge percentage, and I could imagine a small percentage.

Okay, so let's take the counterfactual where it is a small percentage. Let's say it's 2030 and not that much economic value has been created by these LLMs. As unlikely as you think this might be, what would be your best explanation right now for why something like this might happen? My best explanation — so I really don't think that's a likely possibility.

Yeah, so that's the preface. But if I were to take the premise of your question — why were things disappointing in terms of real-world impact? — my answer would be reliability. If it somehow ends up being the case that you really want them to be reliable and they ended up not being reliable, or if reliability turned out to be harder than we expect. I really don't think that will be the case. But if I had to pick one, and you told me, hey, why didn't things work out? It would be reliability: that you still have to look over the answers and double-check everything, and that just really puts a damper on the economic value that can be produced by those systems. They'll be technically mature; it's just a question of whether they'll be reliable enough.

Yeah, well, in some sense, not reliable means not technologically mature. Do you see what I mean? Fair enough.

What's after generative models? Before, you were working on reinforcement learning. Is this basically it? Is this the paradigm that gets us to AGI, or is there something after this? I think this paradigm is going to go really, really far, and I would not underestimate it. I think it's quite likely that this exact paradigm is not going to be quite the AGI form factor. I hesitate to say precisely what the next paradigm will be, but I think it will probably involve integration of all the different ideas that came in the past.

Is there some specific one you're referring to? It's hard to be specific. So you could argue that next-token prediction can only help us match human performance, and maybe not surpass it. What would it take to surpass human performance? I challenge the claim that next-token prediction cannot surpass human performance. On the surface, it looks like it cannot. It looks, on the surface, like if you just learn to imitate, to predict what people do, it means that you can only copy people. But here is a counterargument for why it might not be quite so. If your base neural net is smart enough, you just ask it: what would a person with great insight and wisdom and capability do? Maybe such a person doesn't exist, but there's a pretty good chance that the neural net will be able to extrapolate how such a person would behave. Do you see what I mean?

Yes, although where would it get that sort of insight about what that person would do, if not from the data of regular people? Because if you think about it, what does it mean to predict the next token well enough? What does it mean? It's actually a much deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token. It's not statistics. Like, it is statistics, but what is statistics? In order to understand those statistics, to compress them, you need to understand what it is about the world that creates those statistics.
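To make that prediction-compression identity concrete, here is a minimal sketch (not from the interview; the toy text and bigram model are purely illustrative): the average cross-entropy of a next-token predictor, measured in bits, is exactly the code length per token that a compressor built on that predictor would need, so predicting better literally is compressing better.

    import math
    from collections import Counter, defaultdict

    text = "the cat sat on the mat the cat ate"
    tokens = text.split()

    # Fit a trivial bigram "model": P(next | prev) from raw counts.
    pair_counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        pair_counts[prev][nxt] += 1

    def prob(prev, nxt):
        counts = pair_counts[prev]
        return counts[nxt] / sum(counts.values())

    # Average negative log2-probability = bits per token = the code length
    # an arithmetic coder built on this predictor would need.
    bits = [-math.log2(prob(p, n)) for p, n in zip(tokens, tokens[1:])]
    print(f"bits per token: {sum(bits) / len(bits):.3f}")
    # A model that assigns higher probability to what actually comes next
    # is, by the same number, a shorter encoding of the text.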

And so then you say, okay, well, I have all those people. What is it about people that creates their behaviors? Well, they have thoughts and they have feelings and they have ideas, and they do things in certain ways. All of those could be deduced from next-token prediction. And I'd argue that this should make it possible, not indefinitely, but to a pretty decent degree, to say: well, can you guess what you'd see if you took a person with this characteristic and that characteristic? Such a person doesn't exist, but because you're so good at predicting the next token, you should still be able to guess what that person would do, this hypothetical, imaginary person.

Someone with far greater mental ability than the rest of us. When we're doing reinforcement learning on these models, how long before most of the data for the reinforcement learning is coming from AIs and not humans? Already most of the data for the reinforcement learning is coming from AIs. The humans are being used to train the reward function, but then the reward function, in its interaction with the model, is automatic, and all the data that's generated in the process of reinforcement learning is created by AI.

If you look at the current paradigm, which is getting some significant attention because of ChatGPT — reinforcement learning from human feedback — the human feedback is being used to train the reward function, and then the reward function is being used to create the data which trains the model.
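Here is a schematic sketch of that data flow (the function names and toy rules are hypothetical, not OpenAI's code): humans label a small set of comparisons once, a reward model is fit to those labels, and from then on every reward signal in the RL loop comes from the learned reward function rather than a person.

    import random

    def human_preference(answer_a: str, answer_b: str) -> str:
        """The only human step: pick the better of two answers (toy rule)."""
        return answer_a if len(answer_a) < len(answer_b) else answer_b

    def train_reward_model(comparisons):
        """Fit a reward function to the human labels. Toy stand-in: it just
        learns the average length of the preferred answers."""
        avg_len = sum(len(c["chosen"]) for c in comparisons) / len(comparisons)
        return lambda answer: 1.0 if len(answer) <= avg_len else 0.0

    def policy_sample(prompt: str) -> str:
        """Stand-in for the language model generating an answer."""
        return random.choice([prompt + " short answer",
                              prompt + " a much longer answer"])

    # Phase 1: a small amount of human labeling.
    comparisons = []
    for prompt in ["Q1", "Q2"]:
        a = prompt + " short answer"
        b = prompt + " a much longer answer"
        comparisons.append({"chosen": human_preference(a, b)})

    reward_fn = train_reward_model(comparisons)

    # Phase 2: RL, where every reward now comes from the learned reward
    # function, not a person -- the sense in which "all the data generated
    # in the process of reinforcement learning is created by AI".
    for step in range(3):
        answer = policy_sample("Q3")
        r = reward_fn(answer)
        # ...a policy-gradient update using r would go here...
        print(step, answer, r)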

Got it. And is there any hope of just removing the human from the loop and having it improve itself, in some sort of AlphaGo way?

Yeah, definitely. I mean, in some sense that is very much our plan. The thing you really want is for the human teachers that teach the AI to collaborate with an AI. You might want to think of it as being in a world where the human teachers do 1% of the work and the AI does 99% of the work. You don't want it to be 100% AI, but you do want it to be a human-machine collaboration which teaches the next machine.

Currently, I mean, I've had a chance to play around with these models. They seem bad at multi-step reasoning, though they have been getting better. What does it take to really surpass that barrier? I think dedicated training will get us there; more improvements to the base models will get us there. But fundamentally, I also don't feel like they're that bad at multi-step reasoning. I actually think that they are bad at mental multi-step reasoning, when they're not allowed to think out loud. But when they are allowed to think out loud, they're quite good. And I expect this to improve significantly, both with better models and with special training.
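The "think out loud" distinction is what chain-of-thought prompting exploits. A minimal illustration (the prompts and the complete() API mentioned in the comments are hypothetical): the same question posed so the model must answer immediately, versus posed so the intermediate steps become tokens that later steps can condition on.

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    # "Mental" multi-step reasoning: the model must emit the answer token
    # immediately, with no room to work through intermediate steps.
    direct_prompt = question + "\nAnswer with just the number:"

    # Thinking out loud: intermediate reasoning becomes tokens, so each
    # step of the solution conditions on the steps before it.
    cot_prompt = question + "\nLet's think step by step."

    print(direct_prompt)
    print(cot_prompt)
    # Either string would be sent to a hypothetical complete() API; the
    # second form reliably helps on problems like this one.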

Are you running out of reasoning tokens? Are there enough of them?

I mean, it's okay. So for context on this question, there are claims that indeed, at some point, we will run out of tokens in general to train those models. And yeah, I think this will happen one day, and by the time that happens, we need to have other ways of training models, other ways of productively improving their capabilities and sharpening their behavior, making sure they're doing exactly, precisely what we want, without more data. You haven't run out of data yet? There's more?

Yeah, I would say the data situation is still quite good. There's still lots to go. But at some point, yeah, at some point the data will run out.

Okay, so what is the most valuable source of data? Is it Reddit? Twitter? Books? What would you trade many other tokens of other varieties for?

Generally speaking, you'd like tokens which are speaking about smarter things, tokens which are more interesting. All the sources which you mentioned are valuable. Okay, so maybe not Twitter. But do we need to go multimodal to get more tokens, or do we still have enough text tokens left?

I think you can still go very far with text only, but going multimodal seems like a very good direction. If you're comfortable talking about this: where is the place where we haven't scraped the tokens yet?

Oh, I mean, obviously I can't answer that question for us, but I'm sure that for everyone there's a different answer to that question.

How many orders of magnitude of improvement can we get, not from scale or from data, but just from algorithmic improvements?

Hard to answer, but I'm sure there are some. Is it a lot or a little?

I mean, there's only one way to find out.

Okay, let me get your quick-fire opinions about these different research directions. Retrieval transformers: somehow storing the data outside of the model itself and retrieving it somehow.

Seems promising. But do you see that as a path forward?
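For reference, here is a toy sketch of the retrieval idea (the word-overlap retriever and the documents are invented stand-ins; real retrieval transformers such as RETRO retrieve with learned embeddings into the network itself): knowledge lives in an external store, and the model only conditions on what the retriever pulls in for each query.

    documents = [
        "The transformer architecture was introduced in 2017.",
        "GPUs accelerate neural network training.",
        "Meditation practices date back thousands of years.",
    ]

    def retrieve(query: str, k: int = 1):
        """Rank stored documents by crude word overlap with the query."""
        def overlap(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(documents, key=overlap, reverse=True)[:k]

    query = "When was the transformer introduced?"
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    print(prompt)  # the model conditions on retrieved, not memorized, facts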

I think it seems promising. Robotics: was it the right step for OpenAI to leave that behind?

Yeah, it was. Back then it really wasn't possible to continue working on robotics, because there was so little data. Back then, if you wanted to work on robotics, you needed to become a robotics company. You needed to have a really giant group of people working on building robots and maintaining them.

And even then, if you were only going to have 100 robots, it's a giant operation already, but you're not going to get that much data. So in a world where most of the progress comes from the combination of compute and data — that's where we've been, where it was the combination of compute and data that drove the progress — there was no path to data from robotics. So back in the day, when you made the decision to stop working on robotics, there was no path forward. Is there one now?

I'd say that now it is possible to create a path forward, but one needs to really commit to the task of robotics. You really need to say: I'm going to build many thousands, tens of thousands, hundreds of thousands of robots, and somehow collect data from them, and find a gradual path where the robots are doing something slightly more useful, and the data that is obtained from these robots is used to train the models to do something slightly more useful. You could imagine this kind of gradual path of improvement.

You build more robots, they do more things, you collect more data, and so on. But you really need to be committed to this path. If you say, I want to make robotics happen, that's what you need to do. I believe that there are companies who are thinking about doing exactly that. But you need to really love robots, and you need to be really willing to solve all the physical and logistical problems of dealing with them. It's not the same as software at all. So I think one could make progress in robotics today, with enough motivation.

What ideas are you excited to try but can't, because they don't work well on current hardware? I don't think current hardware is a limitation. Okay. It's just not the case. Got it. So anything you want to try, you can just spin up? I mean, of course, you might say: well, I wish current hardware was cheaper, or maybe it would be better if it had higher memory bandwidth, let's say. But by and large, hardware is just not a limitation.

Let's talk about alignment. Do you think we'll ever have a mathematical definition of alignment? A mathematical definition seems unlikely. Rather than achieving one mathematical definition, I think we'll achieve multiple definitions that look at alignment from different aspects. And I think that this is how we will get the assurance that we want. By which I mean: you can look at the behavior in various tests, in various adversarial stress situations; you can look at how the neural net operates from the inside. I think you have to look at several of these factors at the same time.

And how sure do you have to be before you release a model into the wild? Is it 100%? 95%? Well, it depends how capable the model is. The more capable the model is, the more confident we need to be. Okay, so say it's something that's almost AGI. Well, it depends what your AGI can do. Keep in mind that AGI is an ambiguous term, too. Your average college undergrad is an AGI, right? You'd say so, yeah. But you see what I mean: there is significant ambiguity in terms of what is meant by AGI. So depending on where you put this mark, you need to be more or less confident.

Well, you mentioned a few of the paths toward alignment earlier. What is the one you think is most promising at this point? I think that it will be a combination. I really think that you will not want to have just one approach. People will want a combination of approaches, where you spend a lot of compute adversarially to find any mismatch between the behavior you want the model to exhibit and the behavior that it exhibits, and where you look inside the neural net using another neural net to understand how it operates on the inside. I think all of them will be necessary. Every approach like this reduces the probability of misalignment. And you also want to be in a world where the degree of alignment keeps increasing faster than the capability of the models.

I would say that right now our understanding of our models is still quite rudimentary. We've made some progress, but much more progress is possible. And so I would expect that ultimately, the thing that will really succeed is when we have a small neural net that is well understood, that is given the task of studying the behavior of a large neural net that is not understood, in order to verify it.

By what point is most AI research being done by AI? I mean, today when you use Copilot, how do you divide it up? So I expect at some point you ask some descendant of ChatGPT: hey, I'm thinking about this and this, can you suggest fruitful ideas I should try? And you would actually get fruitful ideas. Wouldn't that make it possible for you to solve problems you couldn't solve before? Got it. But it's somehow just telling the human, giving them ideas faster or something. It's not itself interacting with the research? That was one example. You could slice it in a variety of ways. But the bottleneck there is good ideas, good insights, and that's something the neural net could help with.

If you designed a billion-dollar prize for some sort of alignment research result or product, what would be the concrete criterion for that billion-dollar prize? Is there something that makes sense for such a prize?

It's funny that you ask this. I was actually thinking about this exact question. I haven't come up with the exact criterion yet. Maybe a prize where we could say, two years later, or three or five years later, we look back and say: that was the main result. So rather than have a prize committee that decides right away, you wait for five years and then award it retroactively.

But there's no concrete thing we can identify yet, as in: you solved this particular problem, and you've made a lot of progress? There's a lot of progress yet to be made, so I wouldn't say that any one thing would be the full thing.

Do you think end-to-end training is the right architecture for bigger and bigger models, or do we need better ways of connecting things together? End-to-end training is very promising. Connecting things together is promising. Everything is promising.

So OpenAI is projecting revenues of a billion dollars in 2024. That might very well be correct, but I'm just curious: when you're talking about a new general-purpose technology, how do you estimate how big a windfall it will be? Why that particular number?

I mean, we've already had a product for quite a while now, back from the GPT-3 days, from two years ago, through the API, and we've seen how it grew. We've seen how the response to DALL-E has grown as well, and you see how the response to ChatGPT has grown. All of this gives us information that allows us to make a relatively sensible extrapolation for 2024. Maybe that would be one answer: you need to have data, you can't come up with those things out of thin air, because otherwise your error bars are going to be like 100x in each direction. Right, but most exponentials don't stay exponential, especially when they get into bigger and bigger quantities. So how do you determine, in this case... I mean, would you bet against AI?

Not after talking with you. Let's talk about what a post-AGI future looks like. I'm guessing you're working 80-hour weeks towards this grand goal that you're really obsessed with. Are you going to be satisfied in a world where you're basically living in an AI retirement home? What are you concretely doing after AGI comes?

I think the question of what I'll be doing, or what people will be doing, after AGI comes is a very tricky question. Where will people find meaning? But I think that's something that AI could help us with. One thing I imagine is that we will all be able to become more enlightened, because we'd interact with an AGI that will help us see the world more correctly, and become better on the inside as a result of interacting with it. Imagine talking to the best meditation teacher in history; I think that would be a helpful thing. But I also think that because the world will change a lot, it will be very hard for people to understand what is happening precisely, and how to really contribute. One thing that I think some people will choose to do is to become part AI, in order to really expand their minds and understanding, and to really be able to solve the hardest problems that society will face then.

Are you going to become part AI? Very tempting. Well, do you think there will be physically embodied humans in the year 3000? How do I know what's going to happen in 3000? What does it look like? Are there still humans walking around on Earth? Have you thought concretely about what you actually want the world to look like in 3000?

Well, here's the thing: let me describe to you what I think is not quite right about the question. It implies that we get to decide how we want the world to look. I don't think that picture is correct.

I think change is the only constant. And so, of course, even after AGI is built, it doesn't mean that the world will be static. The world will continue to change, the world will continue to evolve, and it will go through all kinds of transformations.

I really don't think anyone has any idea of what the world will look like in 3000. But I do hope that there will be a lot of descendants of human beings who will live happy, fulfilled lives, where they're free to do as they wish, as they see fit, where they are the ones who are solving their own problems.

One world which I would find very unexciting is one where we build this powerful tool, and then the government says: okay, the AGI said that society should be run in such a way, so now we shall run society in such a way. I'd much rather have a world where people are still free to make their own mistakes and suffer their consequences, and gradually evolve morally and progress forward on their own strength, with the AGI providing more of a base safety net. Do you see what I mean?

How much time do you spend thinking about these things versus just doing the research? I do think about those things a fair bit. They are very interesting questions.

In what ways have the capabilities we have today surpassed where you expected them to be in 2015? And in what ways are they still not where you expected them to be by this point? In fairness, in 2015 my thinking was a lot more: I just don't want to bet against deep learning. I want to make the biggest possible bet on deep learning. I don't know how, but it will figure it out.

But is there any specific way in which it's been more than you expected or less than you expected? Some concrete prediction you made in 2015 that's been borne out or not? Unfortunately, I don't remember the concrete predictions I made.

But I definitely think that overall, in 2015, I just wanted to make the biggest bet possible on deep learning. I didn't know exactly; I didn't have a specific idea of how far things would go in seven years.

I mean, I did make all these bets with people in 2016, maybe 2017, that things would go really far. But the specifics — so it's both the case that it surprised me and that I was making these aggressive predictions. But maybe I believed them only 50% on the inside.

Uh-huh. Well, what do you believe now that even most people at OpenAI would find far-fetched? I mean, because we communicate a lot at OpenAI, people have a pretty good sense of what I think. And so, yeah, we've reached a point at OpenAI where we see eye to eye on all these questions.

So Google has its custom TPU hardware, and it has all this data from all its users: Gmail and so on. Does that give it an advantage in terms of training bigger and better models than you? When the TPU first came out, I was really impressed, and I thought: wow, this is amazing.

But that's because I didn't quite understand hardware back then. What really turned out to be the case is that TPUs and GPUs are almost the same thing. They're very, very similar. I think the GPU chip is a little bit bigger, and the TPU chip is a little bit smaller, so maybe it's a little bit cheaper. But then they make more GPUs than TPUs, so I think the GPUs might be cheaper after all. Fundamentally, you have a big processor and you have a lot of memory, and there is a bottleneck between those two.

And the problem that both the TPU and the GPU are trying to solve is that in the amount of time it takes you to move one floating point number from the memory to the processor, you can do several hundred floating point operations on the processor. Which means that you have to do some kind of batch processing. And in this sense, both of these architectures are the same. So I really feel like, in some sense, the only thing that matters about hardware is cost: cost per FLOP, overall systems cost. Okay, and there isn't that much of a difference?
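As a back-of-the-envelope check on that "several hundred operations per float moved" figure, here is the arithmetic with illustrative, roughly A100-class numbers (not a spec sheet):

    # Illustrative, roughly A100-class numbers -- not a spec sheet.
    compute = 312e12      # peak FLOP/s (fp16 tensor ops)
    bandwidth = 2.0e12    # memory bandwidth, bytes/s
    bytes_per_float = 2   # fp16

    floats_moved_per_s = bandwidth / bytes_per_float
    flops_per_float_moved = compute / floats_moved_per_s
    print(f"{flops_per_float_moved:.0f} FLOPs per float moved")  # ~312

    # Each weight fetched from memory must therefore take part in hundreds
    # of operations to keep the chip busy, which is what batching provides:
    # one weight load is reused across every example in the batch.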

Well, I actually don't know. I don't know what the TPU costs are. But I would suspect that, if anything, TPUs are probably more expensive, because there are fewer of them. When you're doing your work, how much of the time is spent configuring the right initializations, making sure the training run goes well, and getting the right hyperparameters, and how much is just coming up with whole new ideas?

I would say it's a combination. But coming up with whole new ideas is actually the modest part of the work. Certainly coming up with new ideas is important, but even more important is to understand the results, to understand the existing ideas, to understand what's going on.

Because normally, you know, a neural net is a very complicated system, right? You run it, and you get some behavior which is hard to understand. What's going on? Understanding the results, figuring out what next experiment to run — a lot of the time is spent on that.

Understanding what could be wrong, what could have caused the neural net to produce a result which was not expected — a lot of time is spent on that as well. And of course some time is spent coming up with new ideas, but not as much.

I don't like this framing as much. It's not that it's false, but I think the main activity is actually understanding. What do you see as the difference between the two? At least in my mind, when you say "come up with new ideas," I'm like: oh, what would happen if we did such-and-such? Whereas understanding is more like: what is this whole thing? What are the real underlying phenomena that are going on? What are the underlying effects? Why are we doing things this way and not another way? And of course this is very adjacent to what can be described as coming up with ideas, but I think the understanding part is where the real action takes place.

Does that describe your entire career? If you think back on something like ImageNet, was that more new idea or more understanding? Well, that was definitely understanding. Definitely. It was a new understanding of very old things.

What has the experience of training on Azure been like? Fantastic. Microsoft has been a very, very good partner for us. They've really helped take Azure and bring it to a point where it's really good for ML, and we're super happy with it.

How vulnerable is the whole ecosystem to something that might happen in Taiwan? Let's say there's a tsunami in Taiwan or something. What happens to AI, in general? It's definitely going to be a significant setback. It might be something equivalent to no one being able to get more compute for a few years. But I expect compute will spring up elsewhere.

For example, I believe that Intel has fabs, just from a few generations ago. So that means that if Intel wanted to, they could produce something GPU-like from four years ago. But yeah, it's not the best. Let's say — I'm actually not sure if my statement about Intel is correct — but I do know that there are fabs outside of Taiwan; they're just not as good. But you can still use them and still go very far with them. It's just a setback.

Would inference get cost-prohibitive as these models get bigger and bigger? I have a different way of looking at this question. It's not that inference will become cost-prohibitive. Inference of better models will indeed become more expensive. But is it prohibitive? Well, that depends on how useful it is.

If it is more useful than it is expensive, then it is not prohibitive. To give you an analogy: suppose you want to talk to a lawyer. You have some case, or you need some advice or something, and you're perfectly happy to spend $5,000 an hour, right? So if your neural net could give you really reliable legal advice, you'd say: I'm happy to spend $400 for that advice. And suddenly inference becomes very much non-prohibitive.

The question is: can the neural net produce an answer good enough at this cost? Yes. And you will just have price discrimination across different models? I mean, it's already the case today. On our product, the API, we serve multiple neural nets of different sizes, and different customers use different neural nets of different sizes depending on their use case.
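A toy sketch of that economics (the per-token prices and token counts below are invented for illustration, not OpenAI's pricing): inference is "prohibitive" only relative to the value of the answer, and tiered model sizes let each use case pick its own point on the cost curve.

    # Invented prices and token counts, purely for illustration.
    price_per_1k_tokens = {"small": 0.0004, "medium": 0.002, "large": 0.02}

    def inference_cost(model: str, tokens: int) -> float:
        return price_per_1k_tokens[model] * tokens / 1000

    # The lawyer analogy: even a very long consultation with the biggest
    # model costs a tiny fraction of what the advice is worth.
    cost = inference_cost("large", 50_000)
    print(f"${cost:.2f} of inference vs. $400 of value")  # $1.00 vs. $400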

If someone can take a small model and fine-tune it and get something that's satisfactory for them, they'll use that. But if someone wants to do something more complicated and more interesting, they'll use the biggest model.

How do you prevent these models from just becoming commodities, where these different companies just bid their prices down until it's basically the cost of the GPU run? Yeah, there is a force that's trying to create that, and the answer is you've got to keep on making progress.

You've got to keep improving the models. You've got to keep coming up with new ideas and making our models better and more reliable, more trustworthy, so you can trust their answers. All those things.

Yeah, but let's say it's 2025, and the model from 2024 — somebody is just offering it at cost, and it's still pretty good. Why would people use a new one from 2025, if the one that's just a year old is already so good?

So there are several answers there. For some use cases, that may be true. But there will be a new model in 2025, which will be driving the more interesting use cases. There's also going to be a question of inference cost: whether you can do research to serve the same model at less cost.

So the same model will cost different amounts to serve for different companies. And I can also imagine some degree of specialization, where some companies may try to specialize in some area and be stronger in that area compared to other companies. I think that too may be a response to commoditization, to some degree.

Over time, do these different companies' research directions converge or diverge? Are they doing more and more similar things over time, or are they branching off into different areas? In the near term, it looks like there is convergence. I expect there's going to be a convergence-divergence-convergence behavior: there is a lot of convergence on the near-term work, there's going to be some divergence on the longer-term work. But then, once the longer-term work starts to yield fruit, I think there will be convergence again.

Got it. When one of them finds the most promising direction, everybody just follows? That's right. There is obviously less publishing now, so it will take longer before a promising direction gets rediscovered. But that's how I'd imagine it: there is going to be convergence again.

Yeah, we talked about this a little bit at the beginning, but as foreign governments learn about how capable these models are, are you worried about spies, or some sort of attack to get your weights, or somehow abuse these models and learn about them? Yeah, it's definitely something that you absolutely cannot discount. It's something that we try to guard against to the best of our ability, but it's going to be a problem for everyone who is building this.

How do you prevent your weights from leaking? I mean, you have really good security people, but how many people have the ability, if they wanted to, to just SSH into the machine with the weights? How many people could do that? What I can say is that the security people that we have have done a really good job, so that I'm really not worried about the weights being leaked.

What kinds of emergent properties are you expecting from these models at this scale? Is there something that just comes about, any day now? I'm sure things will come out. I'm sure really new, surprising properties will come up; I would not be surprised. The thing which I'm really excited about, the thing which I'd like to see, is reliability and controllability. I think that this will be a very, very important class of emergent properties.

If you have reliability and controllability, I think that helps you solve a lot of problems: reliability means you can trust the model's output, and controllability means you can control it. And we'll see, but it will be very cool if those emergent properties did exist. Is there some way you can predict that in advance? Like, what will happen at this parameter count?

I think it's possible to make some predictions about specific capabilities. It's definitely not simple, and you can't do it in a super fine-grained way, at least today. But getting better at that is really important, and anyone who is interested, who has research ideas on how to do that — I think that can be a valuable contribution.

How seriously do you take these scaling laws? If there's a paper that says you need this many orders of magnitude more to get all the reasoning out, do you take that seriously, or do you think it breaks down at some point?

Well, the thing is that the scaling law tells you what happens to your next-word prediction accuracy, right? There is a whole separate challenge of linking next-word prediction accuracy to reasoning capability. I do believe that there is indeed a link, but this link is complicated. And we may find that there are other things that can give us more reasoning per unit of effort.
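For reference, the power-law form such papers typically fit (this is the Kaplan et al., 2020 parameterization; the interview does not name a specific paper) relates test loss to parameter count N and dataset size D:

    L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
    L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}

The fitted exponents reported there are roughly alpha_N ≈ 0.076 and alpha_D ≈ 0.095. Crucially, L here is next-word prediction loss, so mapping improvements in L onto reasoning capability is exactly the separate, unresolved link being discussed above.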

Like, for example, you mentioned reasoning tokens, and I think they can be helpful. There can probably be some other things too. Is this something you're considering: just hiring humans to generate tokens for you? Or is it all going to come from what already exists out there?

I think that relying on people to teach our models to do things, especially to make sure that they are well-behaved and they don't produce false things, is an extremely sensible thing to do.

Isn't it odd that we got the data we needed at exactly the same time as we got the transformer, at exactly the same time that we got these GPUs? Is it odd to you that all these things happened at the same time, or do you not see it that way?

I mean, it is definitely an interesting situation. I will say that it is odd, and it is also less odd, on some level.

Here is why it's less odd. What is the driving force behind the fact that the data exists, that the GPUs exist, that the transformer exists? The data exists because computers became better and cheaper; we've got smaller and smaller transistors, and suddenly, at some point, it became economical for every person to have a personal computer. Once everyone has a personal computer, you really want to connect them with a network, and you get the internet. Once you have the internet, you suddenly have data appearing in great quantities.

The GPUs were improving concurrently, because you have smaller and smaller transistors and you're looking for things to do with them. Gaming turned out to be a thing that you could do. And then at some point, NVIDIA said: wait a second, let's turn the gaming GPU into a general-purpose GPU computer; maybe someone will find it useful. Turns out it's good for neural nets.

So it could have been the case that the GPUs would have arrived five years later, or ten years later, if, let's suppose, gaming hadn't been a thing. It's kind of hard to imagine — what does it mean if gaming isn't a thing? But maybe there is a counterfactual world where the GPUs arrived five years after the data, or five years before the data, in which case things wouldn't have been as ready to go as they are now. But that's the picture which I imagine.

The point is that progress in all these dimensions is very intertwined. It's not a coincidence; you don't get to pick and choose in which dimensions things improve, if you see what I mean.

How inevitable is this kind of progress? If, let's say, you and Geoffrey Hinton and a few other pioneers were never born, does the deep learning revolution happen around the same time? How much does it get delayed?

I think maybe there would have been some delay, maybe like a year of delay. Really? It's really hard to tell. I hesitate to give a longer answer, because — okay, GPUs would keep on improving, right? Then at some point, I cannot see how someone would not have discovered it. Because here's the other thing: let's suppose no one had done it. Computers keep getting faster and better. It becomes easier and easier to train these neural nets, because you have bigger GPUs, so it takes less engineering effort to train one. You don't need to optimize your code as much. When the ImageNet dataset came out, it was huge and it was very, very difficult to use. Now, imagine you wait a few years, and it becomes very easy to download, and people can just tinker. So I would imagine a modest number of years, maximum — this would be my guess. I hesitate to give a longer answer, though. You can't rerun the world; you don't know.

Let's go back to alignment for a second. As somebody who deeply understands these models, what is your intuition for how hard alignment will be?

So here's what I would say. I think, with the current level of capabilities, we have a pretty good set of ideas for how to align them. But I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. It's something to think about a lot, and to do research on.

This is one area, also, by the way — oftentimes academic researchers ask me what's the best place where they can contribute. And I think alignment research is one place where academic researchers can make many contributions. I believe in that.

Do you think academia will come up with insights about actual capabilities, or will that just be the companies at this point — the companies being the ones that realize the capabilities? I think it's very possible for academic research to come up with those insights. It just doesn't seem to happen that much, for some reason, but I don't think there's anything fundamental about academia.

It's not like academia can't. Maybe they're just not thinking about the right problems or something, because maybe it's just easier to see what needs to be done inside these companies. Hmm, I see. But there's a possibility that somebody could just realize... Yeah, I totally think so.

Why would I possibly rule that out? What are the concrete steps by which these language models start actually impacting the world of atoms, and not just the world of bits? Well, I don't think that there is a clean distinction between the world of bits and the world of atoms. Suppose the neural net tells you: hey, here's something that you should do, and it's going to improve your life, but you need to rearrange your apartment in a certain way. And you go and rearrange your apartment as a result. The neural net has impacted the world of atoms. Fair enough, fair enough.

Do you think it'll take a couple of additional breakthroughs as important as the transformer to get to superhuman AI, or do you think we basically have the insights in the books somewhere and we just need to implement them and connect them? I don't really see such a big distinction between those two cases, and let me explain why. One of the ways in which progress has taken place in the past is that we've understood that something had a desirable property all along, but we didn't realize it.

So is that a breakthrough? You can say yes, it is. Is that an implementation of something already on the books? Also yes. My feeling is that a few of those are quite likely to happen, but that in hindsight they will not feel like breakthroughs. Everybody is going to say, oh, well, of course, it's totally obvious that such and such a thing can work. With the transformer, the reason it gets brought up as a big, specific advance is that it's the kind of thing that was not obvious to almost anyone.

So we look at it and say, yeah, it's not something people knew about. But consider the most fundamental advance of deep learning: that a big neural network trained with backpropagation can do a lot of things. Where is the novelty? Not in the neural network. Not in backpropagation. Yet it was most definitely a giant conceptual breakthrough, because for the longest time people just didn't see it. And now that everyone sees it, everyone will say, well, of course, it's totally obvious. Big neural network. Everyone knows they can do it.

So what is your opinion of your former advisor's new forward-forward algorithm? I think it's an attempt to train a neural network without backpropagation. And I think this is especially interesting if you are motivated to understand how the brain might be learning its connections. The reason is that, as far as I know, neuroscientists are really convinced that the brain cannot implement backpropagation, because the signals in the synapses only move in one direction. So if you have a neuroscience motivation and you want to say, okay, how can I come up with something that tries to approximate the good properties of backpropagation without doing backpropagation? That's what the forward-forward algorithm is trying to do. But if you are just trying to engineer a good system, there is no reason not to use backpropagation. It's the only algorithm.
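For readers curious what a local, backpropagation-free learning rule can look like, here is a minimal sketch in the spirit of the forward-forward idea: each layer is trained on its own "goodness" objective (high squared activations on real data, low on corrupted data), so no gradient ever crosses a layer boundary. This is an illustration under assumptions, not the exact algorithm from Hinton's paper; the layer sizes, threshold, learning rate, and random stand-in data are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer trained with a purely local objective (no cross-layer backprop)."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold  # illustrative choice
        self.opt = torch.optim.Adam(self.linear.parameters(), lr=lr)

    def forward(self, x):
        # Normalize inputs so a layer cannot just inherit goodness from below.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def goodness(self, x):
        # "Goodness" = sum of squared activations.
        return self.forward(x).pow(2).sum(dim=1)

    def train_step(self, x_pos, x_neg):
        # Push goodness above the threshold on positive (real) data
        # and below it on negative (corrupted) data.
        loss = (F.softplus(self.threshold - self.goodness(x_pos)) +
                F.softplus(self.goodness(x_neg) - self.threshold)).mean()
        self.opt.zero_grad()
        loss.backward()            # gradients stay inside this one layer
        self.opt.step()
        # Detach outputs so no gradient ever crosses the layer boundary.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Greedy layer-by-layer training on stand-in data (shapes are assumptions).
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.rand(64, 784)   # placeholder for real inputs
x_neg = torch.rand(64, 784)   # placeholder for corrupted/negative inputs
for step in range(100):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```

The key design choice here is the detach between layers: each layer's optimizer only ever sees its own parameters, which is what makes the rule local in the sense described above, in contrast to backpropagation's end-to-end gradient.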

I've heard you, in different contexts, talk about using humans as the existing example case that AGI is possible. At what point do you take the metaphor less seriously and no longer feel the need to pursue it in terms of research? Is it important to you as a sort of existence case? At what point do you stop caring about humans as an existence proof of intelligence, or as an example to follow in pursuing intelligence in models?

I see. I think it's good to be inspired by humans, and it's good to be inspired by the brain. But there is an art to being inspired by humans and the brain correctly, because it's very easy to latch on to a non-essential quality of humans or of the brain. Many people whose research tries to be inspired by humans and by the brain often get a little too specific.

And which cognitive-science models should we follow? At the same time, consider the idea of the neural network itself, the idea of the artificial neuron. This too is inspired by the brain, but it turned out to be extremely fruitful. So how do we do this? Which behaviors of human beings are essential, in that you say, this is something that proves to us it's possible? And which are inessential? I think we have a bit of an intuition here about what is incidental and what is something more basic, and we just need to focus on what is basic.

I would say that one can and should be inspired by human intelligence, with care. There's such a strong correlation between being first to the deep learning revolution and still being one of the top researchers. You would think those two things wouldn't be that correlated. Why is there that correlation? I don't think those things are super correlated, actually. Honestly, in my case, it's hard to answer the question. I just kept trying really hard, and it turned out to suffice thus far. So it's perseverance. It's a necessary but not a sufficient condition; many things need to come together in order to really figure something out. You need to really go for it and also have the right way of looking at things. So it's hard to give a really meaningful answer to this question.

All right. Ilya, it has been a true pleasure. Thank you so much for coming on The Lunar Society. I appreciate you bringing us to the offices. Thank you. Yeah, I really enjoyed it. Thank you very much.

Hey, everybody. I hope you enjoyed that episode. Just wanted to let you know that, in order to help pay the bills associated with this podcast, I'm turning on paid subscriptions on my Substack at dwarkeshpatel.com. No important content on this podcast will ever be paywalled. So please don't donate if you have to think twice before buying a cup of coffee. But if you have the means and you've enjoyed this podcast, or got some kind of value out of it, I would really appreciate your support.

As always, the most helpful thing you can do is just share the podcast. Send it to people you think might enjoy it, put it on Twitter, in your group chats, etc. Just blitz the world. Appreciate you listening. I'll see you next time. Cheers.


