
Breakthrough potential of AI | Sam Altman | MIT 2023 - YouTube

Published: 2023-05-07 16:00:00
I know you're on this worldwide tour trying to help control the fire that GPT-4 and ChatGPT have started. This particular room is maybe a little different from a lot of that. Most of the people here are either building companies, or working on plans to build companies, in the ecosystem really triggered by ChatGPT. My kind of people out there. Yeah, I wish you were here too. They are exactly your kind of people. I know that part of the mission here is to make the world a better place, but also to build on top of the platform that you've created, and obviously you navigated to the position you're in in life very deliberately, and you're the perfect person to help advise them. We're going to keep this focused in a way that helps this room as much as possible: helping these 900 people create successful companies.

The first thing I'm going to ask you about is if AGI is in the near term future then we're right now at this inflection point where human history has a period of time up till AGI and then obviously has a completely different history from here forward. So it seems to me that at this stage you're going to be a centerpiece of the history books no matter how this evolves. Do you think it's the same? So I think it's the same in terms of what? In terms of the way history will describe this moment, this moment being this year of innovation in this field. I hope this will be like a page or a chapter in history books but I think that over the next several billion years such unbelievable things are going to happen that this will be just one small part and there will be new and bigger and more exciting opportunities and challenges in front of us.

I think one of the things that a lot of people are asking: with prior iterations of GPT, the open-source iterations, you had a whole variety of ways of taking that source code and making a vertical company out of it, or an adjacent company, something like federated learning. In the future iteration of these companies, you've got this highly tunable, closed API to start from. Any quick advice on: okay, I'm starting a company now, I have to make some decisions right out of the gate. What do I start with? How do I make it work in any given vertical use case?

I think there's always more that stays the same about how to make a company than what changes. A lot of people, whenever there's a new platform shift like this, think that just because they're using the platform, that's what's going to guide business strategy. It doesn't. Nothing lets you off the hook for building a product that people love, for being very close to your users, fulfilling their needs, for thinking about a long-term, durable business strategy. That's actually probably only more important during a platform shift, not less.

If we think back to the launch of the App Store, which is probably the most recent similar example, there were a ton of companies that built very lightweight things with, I don't want to call them exploitative mechanics, but just something that was not durable, and those companies had incredible meteoric rises and falls. The companies that really did all the normal things you're supposed to do to build a great business have endured for the last 15 years. You definitely want to be in that latter category. The technology is a new enabler, but what you have to do as a company is build a great company that has a long-term strategic advantage.

Then what about foundation models, just as a starting point? If I look back two years, one of the best ways to start was to take an existing foundation model, maybe add some layers, and retrain it for a vertical use case. Now the base model is maybe a trillion parameters, so it's much, much bigger, but your ability to manipulate it without having to retrain it is also far, far more flexible. I think you have 50,000 tokens to play with right now in the basic model. Is that right? About a minute. Thirty-two thousand in the biggest model. Thirty-two thousand in the base model.

Okay, and actually, how's that going to evolve? There are new iterations that are going to come out pretty quickly. We're still trying to figure out exactly what developers want in terms of model customization. We're open to doing a lot of things here, and, you know, we view developers as users. Our goal is to make developers super happy and figure out what they need. We thought it was going to be much more of a fine-tuning story, and we have been thinking about how to offer that in different ways, but people are doing pretty amazing things with the base model and, for a bunch of reasons, often seem to prefer that. So we're actively reconsidering what customization to prioritize, given what users seem to want and seem to be making work.

As the models get better and better, it does seem like there's a trend toward less and less need to fine-tune, and you can do more and more in the context. And when you say fine-tune, you mean changing parameter weights? Yeah. I mean, is there going to be any ability at all to change the parameter weights in the GPT world? Yeah, we'll definitely offer something there. But right now it looks like maybe that will be less used than the ability to offer super cheap context, like a million tokens, if we can ever figure that out, on the base model.
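The tradeoff Sam describes, stuffing more into the context window instead of changing weights, comes down to a budget check. Here's a minimal back-of-the-envelope sketch, assuming the common rough heuristic of about four characters per token for English text (real tokenizers such as tiktoken will give different counts) and the 32,000-token window mentioned above; the function names are illustrative, not any real API:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_context_tokens: int = 32_000,
                    reserved_for_reply: int = 1_000) -> bool:
    """Check whether a prompt fits the window while leaving room for the reply."""
    return estimate_tokens(prompt) + reserved_for_reply <= max_context_tokens

short_doc = "Summarize the quarterly report in three bullet points."
long_doc = "word " * 50_000   # ~250,000 characters of raw input

print(fits_in_context(short_doc))  # easily fits
print(fits_in_context(long_doc))   # must be chunked, summarized, or retrieved from
```

Anything that fails this check has to be chunked or summarized before the call, which is why the hypothetical million-token context Sam mentions would change the calculus so much.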

Yeah, let's drill in on that just a little bit, because it seems like, regardless of the specifics, the trend is that the models are getting bigger and bigger, so you go from one trillion to 10 trillion parameters. The amount you can achieve just by changing prompt engineering, or changing the tokens feeding into it, is growing disproportionately to the model size. Does that sound right?

Um, disproportionately to the model size, yes, but I think we're at the end of the era where it's going to be these giant, giant models, and we'll make them better in other ways. Um, but I would say it grows proportionate to the model capability. Yep.

And then the investment in the creation of the foundation models is on the order of 50 million, 100 million dollars just in the training process. Um, so is that right? What's the magnitude there? We don't share that, but it's much more than that. Okay. Yeah. And rising over time, I assume. Yeah.

So then, somebody trying to start from scratch... Somebody trying to start from scratch, you know, is trying to catch up to something that isn't there anymore, maybe. Or maybe we're all being incredibly dumb and we're missing one big idea, and all of this is not as hard or as expensive as we think, and there will be a totally new paradigm that obsoletes us, which would be great, not great for us, but great for the world. Yeah. Yeah.

So let me get your take on something. Paul Graham calls you the greatest business strategist he's ever encountered, and of course all these people are wrestling with their business strategy and what exactly to build and where. I've been asking you questions that are more or less about vertical use cases that sit on top of GPT-4 and ChatGPT, and soon GPT-5 and so on. But there are also all these business models that are adjacent, things like federated learning, or data conditioning, or just deployment, and those are interesting business models too.

If you were just investing in a class of company in the ecosystem, any thoughts on where the greater returns are, where the faster-growing, more interesting business models are? I don't think PG quite said that. I know he said something in that direction, but in any case I don't think it'd be true. I think there are people who are unbelievable business strategists, and I am not one of them, so I hesitate to give advice here.

The only thing I know how to do, I think, is this one strategy, again and again: very long time horizon, capital-intensive, difficult technology bets. And I don't even think I'm particularly good at those; I just think not many people try them, so there's very little competition, which is nice. I mean, on these strategies I don't think I have a lot of competition. But the strategy it takes now to take a platform like OpenAI and build a new, fast-growing, defensible consumer or enterprise company, I know almost nothing about. I know all of the theory but none of the practice, and I would go find people who have done it and get the advice from them.

All right good advice.

A couple of questions about the underlying tech platform here. I've been building neural networks myself since the parameter count was sub one million, and they were actually very useful for a bunch of commercial applications, and then I kind of watched them tip into the billions: with GPT-2, I think, about one and a half billion or so, and then GPT-3, and now GPT-4. We don't know the current parameter count, but I think it was 175 billion in GPT-3, and it was just mind-blowingly different from GPT-2, and then GPT-4 is even more mind-blowingly different.

So the raw underlying parameter count seems like it's on a trend, just listening to NVIDIA's forecast, where you can go from a trillion to 10 trillion, and then they're saying up to 10 quadrillion in a decade. So you've got four factors of 10, or 10,000x, in a decade. Does that even sound like it's in the right ballpark?
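The arithmetic in that projection is easy to sanity-check. The trillion and 10-quadrillion figures are the interviewer's reading of NVIDIA's forecast, not anything OpenAI confirms; this sketch just verifies the "four factors of 10" claim and the yearly growth it implies:

```python
import math

start_params = 1e12   # 1 trillion parameters today (interviewer's figure)
end_params = 1e16     # 10 quadrillion in a decade (projection cited above)

factor = end_params / start_params   # total growth over the decade
orders = math.log10(factor)          # how many factors of 10 that is
annual = factor ** (1 / 10)          # implied compound growth per year

print(f"{factor:,.0f}x over the decade")   # 10,000x
print(f"{orders:.0f} factors of 10")       # 4
print(f"~{annual:.2f}x per year")          # ~2.51x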

I think there's way too much focus on parameter count. I mean, parameter count will trend up for sure, but this reminds me a lot of the gigahertz race in chips in the 90s and 2000s, where everybody was trying to point to a big number, and then eventually, probably most of you don't know how many gigahertz your iPhone has, but it's fast. What we actually care about is capability, and I think it's important that we keep the focus on rapidly increasing capability. If there's some reason that parameter count should decrease over time, or we should have multiple models working together, each of which is smaller, we would do that. What we want to deliver to the world is the most capable, useful, and safe models. We are not here to jerk ourselves off about parameter count. Yeah. Can we quote you on that? Okay, it's going to get quoted no matter what. So, yeah.

Well, thank you for taking that away from me. But one thing that's absolutely unique about this class of algorithm, versus anything I've ever seen before, is that it surprises you with raw horsepower, whether you measure it in parameter count or some other way. It does things that you didn't anticipate, purely by putting more horsepower behind it, and so it takes advantage of the scale.

The analogy I was making this morning is: if you have a spreadsheet you coded up and you run it on a computer that's 10,000 times faster, it doesn't really surprise you. It's nice and responsive, but it's still a spreadsheet. Whereas this class of algorithm does things that it just couldn't do before. One of the partners in our venture fund actually wrote an entire book with GPT-2, and you can buy it on Amazon; it's called Start Here, or Start Here Romance. I think about 10 copies sold; I bought one of them, so maybe nine copies sold. But if you read the book, it's just not a good book. And here we are, that was only four years ago, and the quality has gone from, you know, GPT-2, 3, 4: not a good book, to a somewhat reasonable book, to now it's possible to write a truly excellent book. You have to give it the framework, you're still effectively writing the concept, but it's filling in the words just beautifully. So as an author, that could be a force multiplier of something like 10x or 100x; it just enables an author to be that much more powerful. So if the underlying substrate is getting faster and faster, this class of algorithm is going to do surprising things on a relatively short time scale. And so I think one of the things the people in this room need to predict is: what is the next real-world, society-benefiting use case that hits that tipping point on this curve? So, any insights you can give us into what's going to be possible that wasn't possible a year prior, two years prior?

I said I don't have business strategy advice, but I just thought of something I do think: in new areas like this, one of the right approaches is to let tactics become strategy, instead of the other way around. You know, I have my ideas, I'm sure you all have your ideas; maybe we'll be mostly right, we'll be wrong in some ways, and even the details of how we're right will be wrong. I think you never want to lose sight of vision and focus on the long term, but a very tight feedback loop of paying attention to what is working and what is not working, doing more of the stuff that's working and less of the stuff that's not, and just very, very careful user observation can go super far. So, you know, I can speculate on ideas, you'll speculate on ideas, and none of that will be as valuable as putting something out there, really deeply understanding what's happening, and being responsive to it.

Um, as Dave is getting ready for the next question: Sam, when did you know your baby ChatGPT was something really special, and what was the special sauce that allowed you to pull off something that others haven't? And Dave will come back, but yeah. Oh, who likes Sam so far? Okay, all right. If Sam was hiring, would you consider being part of his team? Okay, all right, we got a lot of hands, great. Yeah, please, please come, we really need help, and it's going to be a pretty exciting next few years. Um, I mean, we've been working on it for so long that you kind of know, with gradually increasing confidence, that it's really going to work, but we've been doing the company for seven years. These things take a long time. In terms of why it worked where others didn't, I would say it's just because we've been on the grind, sweating every detail, for a long time, and most people aren't willing to do that. Um, in terms of when we knew that ChatGPT in particular was going to catch fire as a consumer product: probably about 48 hours after launch. Yeah, all right.

Um, so before Dave comes back, I asked Lex to ask a sexy question. Hey Lex. Hey. You want to use the communicator? You're good? What is it? It's a Star Trek one. You're good? I'm good, okay. I grew up in the Soviet Union, we didn't have... oh, Chekov, right, second season. Yeah, let me ask some sexy, controversial questions. So you've got legends in artificial intelligence, Ilya Sutskever and Andrej Karpathy, over there. Who's smarter? Just kidding. Oh, just kidding, you don't have to answer that, it's a joke. Everybody was about to... he was thinking about it. All right, I like it. Uh, no, it's just, uh...

So, we're at MIT, and from here Max Tegmark and others put together this open letter to halt AI development for six months. What are your thoughts about this open letter? There are parts of the thrust that I really agree with. We spent more than six months after we finished training GPT-4 before we released it. So, taking the time to really study the safety of the model, to get external audits, external red-teamers, to really try to understand what's going on and mitigate as much as you can, that's important. It's been really nice, since we launched GPT-4, how many people have said, wow, this is not only the most capable model OpenAI has put out, but by far the safest and most aligned, and unless I'm trying to get it to do something bad, it won't. Um, so that I totally agree with. I also agree that as capabilities get more and more serious, the safety bar has got to increase. Um, but unfortunately I think the letter is missing most of the technical nuance about where we need the pause. An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won't for some time, so in that sense it was sort of silly. But we are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address, and they were totally left out of the letter. So I think moving with caution, and an increasing rigor for safety issues, is really important. The letter, I don't think, is the optimal way to address it.

Just a quick question, if I may, one more. You have been extremely open, having a lot of conversations, being honest, and others at OpenAI as well. What's the philosophy behind that? Because compared to other companies, they're much more closed in that regard. And do you plan to continue doing that? We certainly plan to continue doing that. Um, the tradeoff is that we say dumb stuff sometimes, you know, stuff that turns out to be totally wrong, and I think a lot of other companies don't want to say something until they're sure it's right. Um, but I think this technology is going to so impact all of us that we believe that engaging everyone in the discussion, putting these systems out into the world, deeply imperfect though they are in their current state, so that people get to experience them, think about them, understand the upsides and the downsides, is worth the tradeoff, even though we do tend to embarrass ourselves in public and have to change our minds with new data frequently. Um, so we're going to keep doing that, because we think it's better than any alternative. And a big part of our goal at OpenAI is to get the world to engage with this, think about it, and gradually update and build new institutions, or adapt our existing institutions, to be able to figure out what the future we all want is. Uh, so that's kind of why we're here.

So, we only have a few minutes left, and I have to ask you a question that has been on my mind since I was 13 years old. If you read Ray Kurzweil or any of the luminaries in the sector, the day when the algorithms start writing the code that improves the algorithms is a pivotal day; it accelerates the process toward the singularity. And so now a lot of the companies that I'm an investor in, or have been a co-founder of, are starting to use LLMs for code generation, and it's interesting: there's a very wide range of lifts, or improvements in the performance of an engineer, ranging from about 5% to about 20x. It depends on what you're trying to do, what type of code, how much context it needs; a lot of it is related to tuning the system.

So there are two questions in there. First, within OpenAI, how much of a force multiplier do you already see in the creation of the next iteration of the code? And then the follow-on question is: what does it look like a few months from now, a year from now, two years from now? Are we getting close to that day where the thing is so rapidly self-improving that it hits some... Great question. I think it's going to be a much fuzzier boundary for, you know, getting to self-improvement or not. I think what will happen is that more and more of the improvement loop will be aided by AIs, but humans will still be driving it, and it's going to go like that for a long time. There are a whole bunch of other things too; I have never believed in the one-day or one-month takeoff, for a bunch of reasons, one of which is how incredibly long it takes to build new data centers, bigger data centers. Even if we knew how to do it right now, just waiting for the concrete to dry, getting the power into the building, that stuff takes a while. But I think what will happen is humans will be more and more augmented and be able to do things in the world faster and faster. Most of these things don't end up working out quite like the sci-fi books, and neither will this one, but the rate of change in the world will increase forevermore from here, as humans get better and better tools.


