
Inside OpenAI [Entire Talk] - YouTube

Published 2023-04-26

Transcript

Who you are defines how you build. Welcome, YouTube and Stanford communities, to the Entrepreneurial Thought Leaders seminar, brought to you by STVP, the entrepreneurship center in the School of Engineering at Stanford, and BASES, the Business Association of Stanford Entrepreneurial Students. Today we are so honored to have Ilya Sutskever here at ETL.

Ilya is the co-founder and chief scientist of OpenAI, which aims to build artificial general intelligence for the benefit of all humanity. Elon Musk and others have cited Ilya as the foundational mind behind the large language model GPT-3 (Generative Pre-trained Transformer 3) and its public-facing product, ChatGPT.

Few product releases have created as much excitement, intrigue, and fear as the release of ChatGPT in November of 2022. Ilya is another example of how the US and the world have been the beneficiaries of amazing talent from Israel and Russia: Ilya was born in Russia, moved to Israel when he was five, and grew up there. He spent the first half of undergrad in Israel, then transferred to the University of Toronto to complete his bachelor's degree in mathematics.

He went on to get a master's and PhD in computer science from the University of Toronto, then came over here to the Farm and did a short stint with Andrew Ng before returning to Toronto to work at his advisor Geoffrey Hinton's research company, DNNresearch.

Google acquired DNNresearch shortly thereafter, in 2013, and Ilya became a research scientist there as part of Google Brain. In 2015 he left Google to become a director of the then newly formed OpenAI. It's hard to overestimate the impact that ChatGPT has had on the world since its release in November of last year, and while it feels like ChatGPT came out of nowhere to turn the world on its head, the truth is there's a deep history of innovation that led to that moment.

And as profound as ChatGPT is, Ilya is no stranger to delivering discontinuous leaps of innovation in AI. Geoff Hinton has said that Ilya was the main impetus for AlexNet, the convolutional neural network of 2012 that is credited with setting off the deep learning revolution that has led to the moment we are now in.

And of course, it was seven years after the founding of OpenAI that ChatGPT was finally unleashed on the world. Ilya was elected a Fellow of the Royal Society in 2022; he was named to the MIT Technology Review 35 Under 35 list in 2015; he received the University of Toronto's Innovator of the Year award in 2014 and a Google Fellowship from 2010 to 2012. So with that, everybody, please give a warm virtual round of applause and welcome Ilya to the Entrepreneurial Thought Leaders seminar.

So Ilya, imagine lots of applause, and you're always invited back to the Farm in person whenever you're able. Ilya, there's so much to discuss, and I know we have so little time. Since we have quite a broad range of fluency in the audience around ChatGPT and large language models, I wanted to start off with a quick question on the technology. The key technology underlying OpenAI, and generative AI more broadly, is large language models.

Can you describe the technology in simple terms? And now that you're at the forefront of the tech, can you share what has surprised you the most about what it can do that you didn't anticipate? Yeah.

I can explain what the technology is and why it works. I think the explanation for why it works is both simple and extremely beautiful, and it works for the following reason. You know how the human brain is our best example of intelligence in the world, and we know that the human brain is made out of a very, very large number of neurons. Neuroscientists have studied neurons for many decades to try to understand precisely how they work, and while the operation of our biological neurons is still mysterious, there was a pretty bold conjecture made by the earliest deep learning researchers in the 1940s: the idea that an artificial neuron, the kind we have in our artificial neural networks, is kind of, sort of similar to a biological neuron, if you squint. So there's an assumption there.
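To make the "squint and it's similar" artificial neuron concrete, here is a minimal, illustrative sketch (not from the talk): a weighted sum of inputs passed through a squashing nonlinearity. The weights and bias are hypothetical, chosen by hand so the neuron "fires" only when both inputs are active.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs, plus a bias term...
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed into (0, 1) by a sigmoid, loosely analogous to a
    # biological neuron's firing rate.
    return 1.0 / (1.0 + math.exp(-activation))

print(artificial_neuron([1.0, 1.0], [4.0, 4.0], -6.0))  # ~0.88, near 1
print(artificial_neuron([0.0, 0.0], [4.0, 4.0], -6.0))  # ~0.002, near 0
```

The key property, as the talk notes, is that unlike biological neurons this unit is simple enough to analyze mathematically.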

And we can just run with this assumption. Now, one of the nice things about these artificial neurons is that they are much simpler, and you can study them mathematically. And a very important breakthrough, made by the very early deep learning pioneers before it was known as deep learning, was the discovery of the backpropagation algorithm, which is a mathematical equation for how these artificial neural networks should learn.

It provides us with a way of taking a large computer, implementing this neural network in code, and then there is an equation that we can code up that tells us how this neural network should adapt its connections to learn from experience.
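The "equation we can code up" is the gradient of the prediction error with respect to each connection. As a minimal, hypothetical illustration (a single linear neuron on toy data rather than a deep network), gradient descent on squared error already shows a connection adapting from experience; backpropagation applies this same chain-rule update to every connection in a deep network.

```python
def train(samples, lr=0.1, steps=100):
    w = 0.0  # the neuron's one connection starts knowing nothing
    for _ in range(steps):
        for x, y in samples:
            pred = w * x
            grad = 2.0 * (pred - y) * x  # d/dw of the error (pred - y)**2
            w -= lr * grad               # adapt the connection
    return w

# Toy "experience": examples of y = 3x. The weight converges to 3.0.
print(round(train([(1.0, 3.0), (2.0, 6.0)]), 3))
```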

Now, a lot of the additional progress had to do with understanding just how good and how capable this learning procedure is, and the exact conditions under which it works well. Although we do this with computers, it was a little bit of an experimental science, a little bit like biology: we have something that is a bit like a biological experiment.

And so a lot of the progress with deep learning basically boils down to this: we can build these neural networks in our large computers, and we can train them on some data; we can train those large neural networks to do whatever it is that the data asks them to do.

Now, the idea of a large language model is that you take a very large neural network (and these neural networks are pretty large now) and you train it on the task of guessing the next word from a bunch of previous words in text. So this is the idea of a large language model: you train a big neural network to guess the next word from the previous words in text, and you want the neural network to guess the next word as accurately as possible.
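To make the training task concrete, here is a deliberately tiny, hypothetical stand-in for a language model: instead of a neural network, it counts which word follows which in the training text, then "guesses the next word from the previous words" (here, just the single previous word).

```python
from collections import Counter, defaultdict

def fit(text):
    # Count, for each word, which words followed it in the training text.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    # The model's best guess: the word seen most often after `prev`.
    return counts[prev].most_common(1)[0][0]

model = fit("the cat sat on the mat because the cat was tired")
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" twice)
```

A real large language model replaces the counting table with a huge neural network and conditions on many previous words, but the objective, guessing the next word well, is the same.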

Now, the thing that happens here is that we need to come back to our original assumption that maybe biological neurons aren't that different from artificial neurons. And so if you have a large neural network like this that guesses the next word really well, maybe what it does will be not that different from what people do when they speak. And that's what you get.

So now, when you talk to a neural network like this, it's because it has such an excellent sense of what comes next, what word comes next. It can't see the future, but it can narrow down the possibilities correctly from its understanding. Being able to guess what comes next very, very accurately requires prediction, which is the way you operationalize understanding. What does it mean for a neural network to understand? It's hard to come up with a clean answer, but it is very easy to measure and optimize the network's prediction error on the next word.
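The point that prediction error "is very easy to measure and optimize" can be illustrated with the standard next-word loss, cross-entropy: the negative log of the probability the model assigned to the word that actually came next. A minimal sketch with hypothetical numbers:

```python
import math

def next_word_loss(predicted_probs, actual_next_word):
    """Cross-entropy for one prediction step: the negative log of the
    probability the model assigned to the word that actually came next.
    Lower is better; this is the quantity training drives down."""
    return -math.log(predicted_probs[actual_next_word])

# A confident, correct prediction is penalized very little...
print(next_word_loss({"mat": 0.9, "dog": 0.1}, "mat"))  # ~0.105
# ...while finding the true next word unlikely is penalized heavily.
print(next_word_loss({"mat": 0.1, "dog": 0.9}, "mat"))  # ~2.303
```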

So we say we want understanding, but we can optimize prediction, and that's what we do. And that's how you get these current large language models: they are neural networks, they are large, they are trained with the backpropagation algorithm, which is very capable, and if you allow yourself to imagine that an artificial neuron is not that different from a biological neuron...

...then, yeah, our brains are capable of doing a pretty good job of guessing the next word, if you pay very close attention.

I love that, and I just want to make this more concrete, to push that analogy further between the biological brain and these digital neural networks. Before, it was considered untenable for these machines to learn.

Now it's a given that they can learn, that they can predict what's going to come next. If a human is at 1x learning, and you have visibility into the most recent ChatGPT models, where would you put the most recent ChatGPT model as a ratio of where humans are? So humans are at 1x; where is ChatGPT?

You know, it's a bit hard to make direct comparisons between our artificial neural networks and people, because at present people are able to learn more from a lot less data. This is why these neural networks like ChatGPT are trained on so much data: to compensate for their initially slow learning ability. As we train these neural networks and make them better, faster learning abilities start to emerge.

But overall, it is the case that we are quite different here: the way people learn is quite different from the way these neural networks start out. One example might be that these neural networks are, you know, solidly good at math or programming.

But the number of math books they needed to get good at something like calculus is very high, whereas a person would need, you know, two textbooks and maybe 200 exercises, and you're pretty much good to go.

So, just to get an order-of-magnitude sense: if you relax the data constraint, if you let the machine consume as much data as it needs...

...do you think it's operating at, like, one tenth of a human right now, or...? You know, it's quite hard to answer that question. Let me tell you why I hesitate: I think any figure like this will be misleading, and I want to explain why.

Right now, any such neural network is obviously very superhuman when it comes to the breadth of its knowledge and the very large number of skills these neural networks have. For example, they're very good at poetry, and they can talk eloquently about pretty much any topic; they can talk about historical events and lots of things like this. On the other hand, people can go deep, and they do go deep. So you may have an expert, someone who understands something very deeply, despite having read only a small number of documents on the topic, let's say.

So because of this difference, I really hesitate to answer the question in terms of some single number. Do you think there is a singularity point where machines will surpass humans in terms of the pace of learning and adaptation? Yes. And when do you think that point will occur? I don't know when it will occur; I think some additional advances will need to happen, but, you know, I absolutely would not bet against this point occurring at some point. Can you give me a range? Is it next month, next year? You know, I think the uncertainty on this is quite high, because I can imagine these advances taking quite a while, I can imagine them taking a long time, and I can also imagine them taking some small number of years; it's just very hard to give a calibrated answer.

And I know there's a lot to push forward on, so I'm going to ask one more question, then move on to some of the other issues. I read that when you were a child you were disturbed by the notion of consciousness, and I wasn't sure what that word "disturbed" meant there, but I'm curious: do you view consciousness, or sentience, or self-awareness, as an extension of learning? Do you think that is also an inevitability, something that will happen, or not?

Yeah, I mean, on the consciousness question. As a child, I would, you know, look at my hand and think: how can it be that this is my hand, that I get to see it? Something of this nature; I don't know how to explain it much better. So that's been something I was curious about. It's tricky with consciousness, because how do you define it? It's something that has eluded definition for a long time. And how can you test for it in a system? Maybe there is a system which acts perfectly, exactly the way you expect...

...a conscious system would act, and yet maybe it won't be conscious, for some reason. I do think there is a very simple way to approach this: there is an experiment which we could run on an AI system, which we can't run just yet, but maybe at a future point, when the AI learns very, very quickly from less data, you could do the following experiment. Very carefully curate the data such that we never, ever mention anything about consciousness. We would only say, you know, here is a ball, and here is a castle, and here is a little toy. Imagine you had data of this sort, and it was very controlled; maybe we'd have some number of years' worth of this kind of training data; maybe such an AI system would be interacting with a lot of different teachers, learning from them. But very carefully, you never, ever mention consciousness; people don't talk about anything except the most surface-level notions of their experience. And then at some point you sit down with this AI and you say: okay, I want to tell you about consciousness. It's this thing that's a little bit not well understood; people disagree about it; but that's how they describe it.

And imagine if the AI then goes and says: oh my god, I've been feeling the same thing, but I didn't know how to articulate it. That would definitely be something to think about. If the AI was just trained on very mundane data around objects and going from place to place, from a very narrow set of concepts, and we never, ever mentioned consciousness, and it could somehow eloquently and correctly talk about it in a way that we would recognize, that would be convincing. And do you think of consciousness as something of degree, or is it something more binary? I think it's something that's more a matter of degree. Let's say a person is very tired, extremely tired, and maybe drunk; then perhaps, when someone is in that state, their consciousness is already reduced to some degree.

I can imagine that animals have a more reduced form of consciousness. If you imagine going from, you know, large primates, to maybe dogs and cats, and then eventually to mice, and maybe to an insect, it feels like, I would say, it's pretty continuous.

Okay, I want to move on, even though I would love to keep asking questions about the technology. I want to move on to talking about the mission of OpenAI and how you perceive any issues around ethics in your role as chief scientist: how ethics informs, if at all, how you think about your role. So let me just lay out a couple of foundation points and then have you speak.

As you know, OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity. It started off as a nonprofit and open source, and it is now for-profit and closed source, with a close relationship with Microsoft. And Elon Musk, who I believe recruited you to originally join OpenAI and gave a hundred million dollars when it was a nonprofit, has said that the original vision was to create a counterweight to Google and the corporate world, and that he didn't want a world in which AI...

...which, as he and others perceive it, can pose an existential threat to humanity, was held solely by a for-profit corporation. And now OpenAI is neither open nor exclusively a nonprofit; it's also a for-profit with close ties to Microsoft, and it looks like the world may be headed towards a private duopoly between Microsoft and Google. So I want to ask about the calculus behind the shift from a nonprofit to a for-profit: did you weigh the ethics of that decision, and do ethics play a role in how you conceive of your role as chief scientist, or do you view that as something somebody else should handle, while you are mainly tasked with pushing the technology forward?

So many parts; let me think about the best way to approach it. There are several parts: there is the question around open source versus closed source; there is the question around nonprofit versus for-profit, and the connection with Microsoft, and how to see that in the context of Elon Musk's recent comments.

You asked a question about how I see my role in this; maybe I'll start with that, because I think that's easier. The way I see my role: I feel direct responsibility for what OpenAI does. So my role is primarily around advancing the science, but it is still the case that I'm one of the founders of the company, and ultimately I care a lot about OpenAI's overall impact.

With this context, I want to talk about open source versus closed source, and nonprofit versus for-profit. And I want to start with open source versus closed source, because I think the challenge with AI is that AI is so all-encompassing, and it comes with many different challenges, many different dangers, which come into conflict with each other.

And I think open source versus closed source is a great example of that. Why is it desirable? What are some reasons for which it is desirable to open-source AI?

The answer there would be: to prevent concentration of power in the hands of those who are building the AI. So if you are in a world where, let's say, only a small number of companies control this very powerful technology, you might say this is an undesirable world, and that AI should be open, and anyone should be able to use it. This is the argument for open source.

But against this argument, to state the obvious, there are near-term commercial incentives against open source. And there is another, longer-term argument against open source as well, which is: if one believes that eventually AI is going to be unbelievably powerful...

...if we get to a point where your AI is so powerful that you can just tell it: hey, can you autonomously create a biological research lab? Autonomously do all the paperwork, rent the space, hire the technicians, organize the experiments, do all of this autonomously. That starts to get mind-bendingly powerful. Should this be open source also?

So my position on the open source question is that the deciding factor is the level of capability. You can think about these neural networks in terms of capability: how capable they are, how smart they are, how much they can do.

When the capability is on the lower end, I think open source is a great thing. But at some point, and there can be debate about where that point is, the capability will become so vast that it will be obviously irresponsible to open-source these models.

And was that the driver behind closed-sourcing it, or was it driven by a devil's compact, a business necessity to get cash in from Microsoft or others to support the viability of the business? Was the decision to close it actually driven by that line of reasoning, or was it driven by more?

So the way I've articulated my view is that the current level of capability is still not that high, not high enough that the safety consideration would drive the closed-sourcing of these models, of this kind of research. In other words, the claim is that it goes in phases: right now it is indeed the competitive phase, but I claim that as the capabilities of these models keep increasing, there will come a day when it is the safety consideration that is the obvious and immediate driver not to open-source these models.

So this is open source versus closed source, but your question had another part, which is nonprofit versus for-profit. And we can talk about that also.

You know, indeed, it would be preferable, in a certain meaningful sense, if OpenAI could just be a nonprofit from now until the mission of OpenAI is complete. However, one of the things that's worth pointing out is the very significant cost of these data centers. I'm sure you're reading about various AI startups and the amounts of money they are raising, the great majority of which goes to the cloud providers.

Why is that? Well, the reason so much money is needed is that this is the nature of these large neural networks: they need the compute, end of story. You can see something like this in the divide that's now happening between academia and the AI companies.

For a long time, for many decades, cutting-edge research in AI took place in academic departments in universities. That kept being the case up until the mid-2010s. But at some point, when the complexity and the cost of these projects started to get very large, it no longer remained possible for universities to be competitive. Now university research in AI needs to find some other way to contribute. Those ways exist; they're just different from what universities are used to, and different from the way the companies are contributing right now.

Now, with this context, the thing about a nonprofit is that people who give money to a nonprofit never get to see any of it back. It is a real donation, and, believe it or not, it is quite a bit harder to convince people to give money to a nonprofit. So we thought: what's the solution there, what is a good course of action?

So we came up with an idea that, to my knowledge, is unique among all corporate structures in the world. The OpenAI corporate structure is absolutely unique: OpenAI is not a for-profit company, it is a capped-profit company, and I'd like to explain what that means. What it means is that equity in OpenAI is better seen as a bond than as equity in a normal company.

The main feature of a bond is that once it's paid out, it's done. In other words, OpenAI has a finite obligation to its investors, as opposed to the infinite obligation that normal companies have. And does that include the founders, the founders' equity in OpenAI?

So Sam Altman does not have equity, but the other founders do. And is it capped or is it unlimited? It's capped. And how is that cap set? Because the founders, I presume, didn't buy in, unless it's capped at the nominal share value.

I'm not sure I understand the question precisely, but I can answer the part which I do understand. It is certainly different from normal startup equity, but there are some similarities as well: the earlier you joined the company, the higher the cap is, because a larger cap is needed to attract the initial investors. As the company continues to succeed, the cap decreases. And why is that important? It's important because it means that the company...

...once all the obligations to investors and employees are paid out, OpenAI becomes a nonprofit again. And you can say this is totally crazy, what are you talking about, it's not going to change anything. But it's worth considering what we expect, it's worth looking at what we think AI will be.

I mean, we can look at what AI is today, and I think it is not at all inconceivable for OpenAI to pay out its obligations to the investors and employees, becoming a nonprofit, at around the time when perhaps the computer will become so capable that the economic disruption will be very big, and where this transition will be very beneficial.

So this is the answer on capped-profit versus nonprofit. There was a last part of the question; I know I've been speaking for a while, but the question had many parts, and the last part is the Microsoft relationship.

Here, the thing that's very fortunate is that Microsoft is thinking about these questions the right way; they understand the potential and the gravity of AGI. So, for example, on all the investor documents that any investor in OpenAI signs (and, by the way, Microsoft is an investor in OpenAI, which is a very different relationship from Google and DeepMind), there is a purple rectangle at the top of the investment document which says that the fiduciary duty of OpenAI is to the OpenAI mission, which means that you run the risk of potentially losing all your money if the mission comes into conflict. This is something that all the investors have signed.

And let me just make this clear for everybody because Google Google acquired deep mind so deep mind was just an asset inside of Google but beholden to Google you're making the distinction that with open AI Microsoft is an investor and so beholden to this fiduciary duty for the mission of open AI which is held by the nonprofit which is a is is a. And a GP or an LP in the in the for profit. Okay understood something like this you know I am you know there are people. I can't tell you the precise details. But so but this is the general picture.
让我来向大家明确一点:谷歌收购了DeepMind,所以DeepMind只是谷歌内部的资产,必须服从谷歌。你在强调的是,微软是OpenAI的投资者,因此必须遵守对OpenAI使命的受托责任,该使命由非营利机构持有,而非营利机构在营利实体中担任GP或LP。好的,我明白了,大致是这样。我无法告诉你具体的细节,但这就是大致情况。

So, Steve Wozniak, the co-founder of Apple, and Elon Musk famously signed this very public petition saying that the point of no return has already passed, or is approaching, where it's going to be impossible to rein in AI and its repercussions if we don't halt it now, and they've called for halting AI. You are a world citizen: you were born in Russia, you were raised in Israel, you're Canadian. And OpenAI's response to that public petition was, I know, Sam basically said that this wasn't the right way to go about doing that. Also, in parallel, Sam is on a world tour through many countries, some of which can be antagonistic toward the West. Are there any citizen obligations, ethical obligations, that you think outweigh your technological obligations when it comes to spreading the technology around the world right now through OpenAI? Do you think OpenAI should be beholden to regulation or some oversight? Let me think. I'm trying to give you the mic so you can respond however you want. I know we're going to come up on time, so I just want to share everything that's on my mind, and you can decide how you want to handle it.
苹果公司联合创始人史蒂夫·沃兹尼亚克和埃隆·马斯克签署了那份著名的公开请愿书,声称我们已经或即将越过无法控制AI及其后果的临界点,如果现在不停止就来不及了,因此他们呼吁暂停AI的发展。你是一个世界公民:出生在俄罗斯,成长在以色列,持有加拿大国籍。而OpenAI对这份公开请愿书的回应是,我知道Sam基本上说,这不是正确的做法。此外,Sam正在多国巡回访问,其中一些国家可能对西方怀有敌意。你认为在通过OpenAI向全球传播技术时,除了技术义务之外,是否还有更重要的公民义务和道德义务?你认为OpenAI应该接受监管或监督吗?让我想想。我想把话筒交给你,你可以自由回答。我知道我们时间有限,所以我想先把我的想法都说出来,由你决定如何回应。

Thank you. I mean, you know, it is true that AI is going to become truly, extremely powerful and truly, extremely transformative, and I do think that we will want to move to a world with sensible government regulations. There are several dimensions to it. We want to be in a world where there are clear rules about, for example, training more powerful neural networks. We want there to be some kind of careful evaluation, careful prediction, of what we expect these neural networks to be able to do today, and of what we expect them to be able to do, let's say, a year from now, or by the time they finish training.
谢谢。我的意思是,AI将变得非常强大和具有极大的转变性,并且我认为我们将需要转向一个有着理性政府规定的世界。这里有几个方面需要考虑。首先,我们希望生活在一个有明确规则的世界,例如关于如何训练更强大的神经网络。其次,我们需要进行谨慎的评估和预测,以了解当前和未来一年内,或者经过训练后这些神经网络的预期能力。

I think all these things will be very necessary in order to rationally, and I wouldn't use the words slow down the progress, I would use this term: you want to make it so that the progress is sensible, so that at each step we've done the homework, and indeed we can make a credible story that, okay, the neural network, the system that we've trained, here is what we are doing, here are all the steps, and it's been verified and certified. I think that is the world we are headed to, which I think is correct. And as for the citizen obligation, I'll answer it like this: I think there are two answers to it. Obviously, you know, I live in the United States and really like it here, and I want this place to flourish as much as possible. I care about that.
我认为所有这些事情都将是非常必要的,以便理性地推进进展。我不会使用“减缓进度”这个词,我会使用“让进展变得合理”这个术语,这样就可以做到我们已经做好了作业,确实可以制定一个可信的故事。神经网络、我们训练的系统正在这样做,在这里有所有的步骤,它已经被验证认证。我认为这是我们正在朝向的世界,我认为这是正确的。至于公民义务,我觉得我会这样回答,我认为有两个答案,很明显,我居住在美国,我非常喜欢这里,我希望这个地方尽可能繁荣,我关心这个。

I think of course there will be lots of, well, the world is much more than just the US, and I think these are the kind of questions that feel a little bit, let's say, outside of my expertise: how these between-country relationships work out. But I'm sure there will be lots of discussions there as well. Yeah. Can I turn a little bit toward strategy? I'm curious, for you guys internally, what metrics do you track as your north star? What are the most sacred KPIs that you use to measure OpenAI's success right now? The most sacred KPIs, you know, I think this is also the kind of question where maybe different people would give you different answers, but if I were to really narrow it down, I would say there are a couple of really important KPIs, really important dimensions of progress. One is undeniably the technical progress: are we doing good research?
我认为当然会有许多问题,但世界不仅仅是美国,我认为这些是我不太懂得的国家间关系的问题。但我相信也会有很多讨论。是的,我想转向策略,我想知道你们内部跟踪的指标是什么,最受关注的KPI是什么,用来衡量开放式成功的。最重要的KPI,你知道,我认为这也是一种问题,不同的人可能会给你不同的答案,但我想说,如果我真的要缩小范围,我会说有一些真正重要的KPI,有一些真正重要的进展维度,其中一个无可否认的是技术进步,我们正在做好的研究。

Do we understand our systems better? Are we able to train them better, control them better? Is our research plan being executed well? Is our safety plan being executed well? How happy are we with it?
我们是否更好地理解了我们的系统,这样我就能更好地训练它们,控制它们。我们的研究计划是否被良好执行?我们的安全计划是否被良好执行?我们对此有多满意?

I would say this would be my description of the primary KPI: do a good job on the technology. Then there is of course stuff around the product, which I think is cool, but I would say that it is really the core technology which is the heart of OpenAI: the technology, its development, its control, its steering.
我认为这应该是我对主要关键绩效指标的描述,即技术方面要做得好,当然,产品周围也有一些要素,但我认为真正核心的是技术,它是开放技术和开发的核心,控制与引导。

And do you view ChatGPT right now as a destination? Do you view OpenAI in the future as being a destination that people go to, like Google, or will it be powering other applications, used as part of the back-end infrastructure? Is it a destination, or is it going to be more behind the scenes in five to ten years? Yeah, well, I mean, things change so fast that I cannot make any claims about five to ten years in terms of the correct shape of the product. I imagine a little bit of both, perhaps, but this kind of question, I think, remains to be seen. This stuff is still so new.
你现在是否将ChatGPT视为一个目的地?你是否认为OpenAI在未来会成为人们像谷歌一样前往的目的地,还是将用于驱动其他应用程序并成为后端基础设施的一部分?它是一个目的地,还是在未来五到十年将更多地隐藏在幕后?嗯,我的意思是现在的发展太快了,我无法确定未来五到十年产品的正确形式,可能两者兼有,但这个问题还有待观察。我认为这些技术仍然非常新颖。

Okay, I'm going to ask one more question, then I'll jump to the student questions. If you were a student at Stanford today interested in AI, you know, somebody who wants to be Ilya, what would you focus your time on? And a second question on this: if you're also interested in entrepreneurship, what advice would you give to a Stanford undergrad engineer who's interested in AI and entrepreneurship?
好的,我再问一个问题,然后再回答学生的问题。如果您是今天在斯坦福大学对人工智能感兴趣的学生,或是您知道有人想成为Ilya,您会把时间集中在哪里呢?另外一个问题是,如果您也对创业感兴趣,您会给一个在斯坦福大学读本科、对人工智能和创业感兴趣的工程师什么建议?

So, on the first one, it's always hard to give generic advice like this. Yeah. But I can still provide some generic advice nonetheless. I think it is generally a good idea to lean into one's unique predispositions. You know, if you think about the set of, let's say, inclinations or skills or talents that a person might have, the combination is pretty rare, so leaning into that is a very good idea no matter which direction you choose to go in. And then on the AI research.
我认为首先我们要考虑第一个问题。给出像这样的通用建议总是很困难的,但我仍然可以提供一些通用建议。我认为一般来说,倾向于发掘一个人独特的先天条件是一个好主意。你知道,如果你想想一个人可能拥有的偏好、技能或才能的集合,这个组合相当罕见,所以不论我们选择哪个方向去追求,发掘这些特质是一个非常好的主意。另外,在人工智能研究方面...

I would say that there, you know, I could say something, but there especially you want to lean into your own ideas and really ask yourself: is there something that's totally obvious to you that makes you go, why is everyone else not getting it? If you feel like this, that's a good sign. It means that you might be onto something. You want to lean into that and explore it and see if your instinct is borne out. It may not be true, but you know, my advisor Geoff Hinton says this thing which I really like. He says you should trust your intuition, because if your intuition is good, you go really far, and if it's not good, then there's nothing you can do anyway. And as far as entrepreneurship is concerned.
我想说的是,在这方面你尤其要相信自己的想法,真正问问自己:有没有什么东西对你来说是显而易见的,让你不禁想,为什么其他人都没有意识到?如果你有这种感觉,那是一个好的信号,意味着你可能发现了一些东西。你应该深入探索它,看看你的直觉是否得到验证。它可能不是真的,但我的导师Geoff Hinton说过一句我非常喜欢的话:你应该相信你的直觉,因为如果你的直觉是好的,你会走得很远;如果它不好,那你也无能为力。至于创业方面。

This is a place where the unique perspective is even more valuable. Maybe I'll explain why I think it's more valuable than in research. Well, in research it's very valuable too, but in entrepreneurship you need to almost pull from your unique life experience, where you say, okay, I see this thing, I see this technology. Take a very, very broad view and see if you can hone in on something, and then actually just go for it. So that would be the conclusion of my generic advice.
就像这里是一个独特视角更有价值的地方,或许是因为创业领域里,你需要从你个人独特的生活经验中汲取灵感,去看见这些科技或事物的广阔视角。这也是为什么创业领域所需要的独特视角比研究领域更为重要。当然,在研究领域里也同样重要,只不过在创业过程中,你需要更加努力地去拓宽你的视野,以便从中捕捉到新的机遇。所以,这就是我的通用建议的结论。

Okay, that's great. I'm going to move on to the student questions. One of the most upvoted questions is: how do you see the field of deep learning evolving in the next five to ten years? Yeah, let's see. You know, I expect deep learning to continue to make progress. There was a period of time where a lot of progress came from scaling, and we saw that in the most pronounced way in going from GPT-1 to GPT-3.
好的,太好了,我要继续回答学生的问题。其中一个最常见的问题是,在未来五到十年中你如何看待深度学习领域的发展。我认为深度学习会继续取得进展,过去一段时间的进展主要来自于规模的扩大。我们能够在GPT 1到GPT 3的发展中明显地看到这一点。

But things will change a little bit. The reason progress in scaling was so rapid is because people had all these data centers which they weren't using for a single training run. So by simply reallocating existing resources, you could make a lot of progress, and it doesn't necessarily take that long to reallocate existing resources; someone just needs to decide to do so. It is different now, because the training runs are very big, and scaling is not going to progress as fast as it used to, because building data centers takes time.
但是事情会有一点变化,原因是扩展方面的进展非常迅速,因为人们拥有了所有这些数据中心,他们没有用于单个训练运行。因此,通过简单重新分配现有资源,您可以取得很大进展,重新分配现有资源不一定需要很长时间,您只需要有人决定这样做即可。现在与以前不同,训练运行非常庞大,而且扩展的速度不会像以前那么快,因为建立数据中心需要时间。

But at the same time, I expect deep learning to continue to make progress from other places. The deep learning stack is quite deep, and I expect that there will be improvements in many layers of the stack, and together they will still lead to progress being very robust. So if I had to guess, I am certain we will discover new properties of deep learning which are currently unknown, and those properties will be utilized. And I fully expect that the systems of five to ten years from now will be much, much better than the ones we have right now.
同时,我期望深度学习在其他领域继续取得进展。深度学习的技术堆栈非常深,我预计在堆栈的许多层面都会有改进。这些改进汇聚在一起,仍将带来非常强大的进步。因此,如果我不得不猜测的话,我想我们也许会发现当前尚不为人所知的深度学习的新属性,并将利用这些属性。我完全期待,5到10年后的系统将比目前的系统更加出色。

But exactly what it's going to look like, I think, is a bit harder to answer, because there will maybe be a small number of big improvements and also a large number of small improvements, all integrated into a large, complex engineering artifact.
但具体会是什么样子,我认为比较难回答。这是因为改进可能包括少量的大改进和大量的小改进,所有这些改进都会集成到一个大型复杂的工程产物中。

And can I ask you: your co-founder Sam Altman has said that we've reached the limits of what we can achieve by scaling to larger language models. Do you agree? And if so, what is the next innovation frontier that you're focusing on?
我可以问一下您,您的共同创始人Sam Altman曾经说过,我们已经达到了通过扩大语言模型规模所能实现的极限,您是否同意这种说法?如果是的话,那么接下来您将专注于哪些下一个创新领域?

Yeah, so I don't know exactly what he said, but maybe he meant something like the age of easy scaling has ended, or something like this. Of course larger neural nets would be better, but it takes a lot of effort and cost to build them. But I think there will be lots of different frontiers, and actually, on the question of how one can contribute in deep learning, identifying such a frontier, perhaps one that's been missed by others, is very fruitful.
嗯,我不知道他确切说了什么,但也许他的意思是,易于扩展的时代已经结束了,或者类似的话。当然,更大的神经网络会更好,但构建它们需要很多努力和成本。不过我认为将会有很多不同的前沿,实际上,就如何在深度学习领域做出贡献这个问题而言,找出这样一个前沿,也许是被其他人忽略的前沿,是非常富有成效的。

And can I go even deeper on that? Because I think there is this debate about vertical focus versus generalist training. Do you think that better performance can be achieved in a particular domain, such as law or medicine, by training with specialized data sets, or is it likely that generalist training with all available data will be more beneficial?
能否深入探讨一下这个问题,因为我认为现在有一个关于垂直专注与一般性训练的争论,你认为通过使用特定的数据集进行培训可以在某些领域比如法律或医药方面取得更好的表现,还是利用所有可用数据进行一般培训更有益呢?

So, at some point we should absolutely expect specialist training to make a huge impact. But the reason we do the generalist training is just so that we can reach the point where the neural net can understand the questions that we are asking. And only when it has a very robust understanding, only then can we go into specialty training and really benefit from it. So, yeah, I think these are all fruitful directions.
因此,在某个时候,我们绝对应该期望专业训练带来巨大影响。但我们之所以进行通用训练,是为了让神经网络达到能够理解我们所提问题的程度。只有当它具备非常强大的理解能力时,我们才能转入专业训练并真正从中受益。所以,是的,我认为这些都是有成果的方向。

But when do you think we'll be at that point where specialist training is the thing to focus on? I mean, you know, if you look at people who do open-source work, people who work with open-source models, they do a fair bit of this kind of specialist training, because they have a fairly underpowered model and they try to get every ounce of performance they can out of it. So I would say that this is an example of it happening. It's already happening to some degree. It's not binary; you might want to think of it as a continuous spectrum.
但你认为我们何时会到达以专业训练为重点的那个时刻?我的意思是,如果你看看那些做开源工作的人,那些使用开源模型工作的人,他们会进行相当多的这种专业训练,因为他们的模型性能相对有限,他们想要榨取出每一分性能。因此,我会说这是一个例子,这种情况已经在一定程度上发生了。这不是非此即彼的问题,你可以将其看作一个连续的光谱。

But do you think that the winning competitive advantage is going to be having these proprietary data sets, or is it going to be having a much higher-performance large language model, when it comes to these applications of AI in verticals? So, I think it's maybe productive to think about an AI like this as a combination of multiple factors, where each factor makes a contribution. Is it better to have special data which helps you make your AI better at a particular set of tasks? Of course. Is it better to have a more capable base model? Of course, from the perspective of the task. So maybe this is the answer: it's not an either-or.
你认为竞争优势是拥有专有数据集,还是在应用AI到垂直领域时拥有更高性能的大型语言模型?因此,我认为将AI视为多个因素的组合可能是有益的,每个因素都有所贡献。当然,拥有特殊的数据可以帮助您在特定的任务集中使您的AI更好,从任务的角度来看,拥有更能胜任的基础模型当然更好。因此,也许这就是答案,它不是二选一的问题。

I'm going to move on to the other questions. There's a question on what was the cost of training and developing GPT-3 and GPT-4. Yeah, so you know, for obvious reasons I can't comment on that. But I think, even from our research community, there's a strong desire to be able to get access to different aspects of OpenAI's technology.
我接着回答其他的问题。有一个问题是关于训练和开发GPT-3和GPT-4的成本。由于一些明显的原因,我不能对此发表评论。但我认为,即使是在我们的研究社区中,人们也强烈希望能够获得OpenAI技术的不同方面。

And are there any plans for releasing it to researchers or other startups to encourage more competition and innovation? Some of the requests that I've heard are: unfettered interactions without safeguards to understand the model's performance, model specifications including details on how it was trained, and access to the model itself, i.e., the trained parameters.
有没有计划将它发布给研究人员或其他初创企业,以鼓励更多竞争和创新?我听到了一些请求,包括没有保障的互动以了解模型的性能,模型规范,包括培训方式的详细信息以及访问模型本身或经过训练的参数。

Do you want to comment on any of that? I mean, I think I could relate it to our earlier question about openness versus being closed. I think that there are some intermediate approaches which can be very fruitful. For example, model access, and various combinations of that, can be very, very productive, because these neural networks already have such a large and complicated surface area of behavior, and studying that alone can be extremely interesting. We have an academic access program; we provide various forms of access to the models, and in fact plenty of academic research labs do study them in this way. So I think this kind of approach is viable and is something that we are doing.
你想对其中的任何一点发表评论吗?我的意思是,我认为可以将它与我们之前关于开放与封闭的问题联系起来。我认为有一些中间方法可能会非常有成效。例如,模型访问及其各种组合可以非常富有成效,因为这些神经网络已经具有如此庞大而复杂的行为表面。仅仅研究这些就可以极为有趣。我们有一个学术访问计划,为模型提供各种形式的访问,实际上很多学术研究实验室正是通过这种方式研究它们的。因此我认为这种方法是可行的,也是我们正在做的事情。

And we're coming up on time. I want to end with just one final question, which is: can you share any unintuitive but compelling use cases for how you love to use ChatGPT that others may not know about? So, I mean, I wouldn't say that it's unknown, but I really enjoy its poem-writing ability. It can write poems, it can rap. It can be pretty amusing.
我们时间快到了,我想最后问一个问题:你能分享一些有趣但不太为人所知的ChatGPT使用方式吗?就是你最喜欢的、其他人可能不知道的用法。我不会说它完全不为人所知,但我真的很喜欢它的写诗能力。它能写诗,还能说唱,非常有趣。

And do you guys use it? Is it an integrated part of teamwork at OpenAI? I assume it is, but I'm curious: do you have any insights on how it changes dynamics within teams when you have AI deeply integrated into a human team and how they're working, and any insights into what we may not know but that will come? I would say today the best way to describe the impact is that everyone is a little bit more productive. People are a little bit more on top of things. I wouldn't say that right now there is a dramatic impact on dynamics, where I could say, oh yeah, the dynamics have shifted in this pronounced way. Is the worry that it depersonalizes conversations because it's an AI bot? Maybe, but maybe we're not at that point yet. I definitely don't think that's the case, and I predict that it will not be the case, but we'll see.
你们自己使用它吗?它是OpenAI团队协作中不可或缺的一部分吗?我假设是的,但我很好奇,当你将AI深度融入人类团队的工作方式时,它如何改变团队的互动,你们对那些我们尚未察觉但将来会出现的变化有什么洞见吗?我认为现在对其影响最好的描述是,每个人都变得更有效率一些,更能掌控手头的事情。我不会说目前团队互动发生了显著的变化。担心的是它会因为是AI机器人而让对话变得不够人性化吗?也许吧,但或许我们还没有到那个地步。我确实不认为会是这样,我预测也不会是这样,但我们拭目以待。

Well, thank you, Ilya, for a fascinating discussion. Time is always too short. You're always invited back to the Farm; we'd love to have you, either virtually or in person. So thank you, and thank you to our audience for tuning in to this session of the Entrepreneurial Thought Leaders seminar series. Next week we're going to be joined by the executive chairman and co-founder of Okta,
感谢Ilya为我们带来如此精彩的讨论,时间总是那么短暂,我们随时欢迎你以线上或线下的形式回到斯坦福。同时,也感谢我们的观众,感谢你们收看这一期的创业思想领袖系列节目。下周,我们将邀请Okta的执行主席兼联合创始人,

Frederic Kerrest. You can find that event and other future events in this ETL series on our Stanford eCorner YouTube channel, and you'll find even more videos, podcasts, and articles about entrepreneurship and innovation at Stanford eCorner; that's ecorner.stanford.edu. And as always, thank you for tuning in to ETL.
在我们的斯坦福eCorner YouTube频道上,您可以找到Frederic Kerrest的这场活动以及ETL系列的其他未来活动。您还可以在斯坦福eCorner网站上找到更多关于创业和创新的视频、播客和文章,网址是ecorner.stanford.edu。感谢您一如既往地收听ETL。