
The Possibilities of AI [Entire Talk] - Sam Altman (OpenAI) - YouTube

Published 2024-05-01 08:05:26    Source

Transcript

Welcome to the Entrepreneurial Thought Leader seminar at Stanford University. This is the Stanford seminar for aspiring entrepreneurs. ETL is brought to you by STVP, the Stanford Entrepreneurship Engineering Center, and BASES, the Business Association of Stanford Entrepreneurial Students. I'm Ravi Balani, a lecturer in the Management Science and Engineering Department, and the director of Alchemist, an accelerator for enterprise startups. And today I have the pleasure of welcoming Sam Altman to ETL. Sam is the co-founder and CEO of OpenAI. And "open" is not a word I would use to describe the seats in this class, so I think by virtue of that everybody already knows OpenAI. But for those who don't, OpenAI is the research and deployment company behind ChatGPT, DALL·E, and Sora. Sam's life is a pattern of breaking boundaries and transcending what's possible, both for himself and for the world. He grew up in the Midwest, in St. Louis, came to Stanford, and took ETL as an undergrad. He was at Stanford for two years, studying computer science. And then, after his sophomore year, he joined the inaugural class of Y Combinator with a social mobile app company called Loopt, which went on to raise money from Sequoia and others. He then dropped out of Stanford and spent seven years on Loopt, which got acquired.

And then he rejoined Y Combinator in an operational role. He was the president of Y Combinator from 2014 to 2019. And then in 2015, he co-founded OpenAI as a non-profit research lab with the mission to build general-purpose artificial intelligence that benefits all humanity. OpenAI set the record for the fastest-growing app in history with the launch of ChatGPT, which grew to 100 million active users just two months after launch. Sam was named one of Time's 100 most influential people in the world. He was also named Time's CEO of the Year in 2023. And he was most recently added to Forbes' list of the world's billionaires. Sam lives with his husband in San Francisco, splits his time between San Francisco and Napa, and is also a vegetarian. And so with that, please join me in welcoming Sam Altman to the stage.

And then, full disclosure, that was a longer introduction than Sam probably would have liked. Brevity is the soul of wit, and so we'll try to make the questions more concise. But this is also Sam's birth week; it was his birthday on Monday. And I mention that just because I think this is an auspicious moment, both in terms of time, you're 39 now, and also place, you're at Stanford in ETL, so I would be remiss if this wasn't a moment of some reflection. And I'm curious, if you reflect back on when you were half a life younger, when you were 19 in ETL: if there were three words to describe what your felt sense was like as a Stanford undergrad, what would those three words be? These are always hard questions. I was like, you want three words only? Okay. You can go more, Sam. You're the king of brevity. Excited, optimistic, and curious. And what would be your three words now? I guess the same. Which is terrific.

So there's been a constant thread, even though the world has changed. You know, a lot has changed in the last 19 years, but that's going to pale in comparison to what's going to happen in the next 19. Yeah. And so I need to ask you for your advice if you were a Stanford undergrad today. So if you had a Freaky Friday moment, tomorrow you wake up and suddenly you're a 19-year-old Stanford undergrad knowing everything you know, what would you do? Would you drop out? I'd be happy. I would feel like I was coming of age at the luckiest time in, like, several centuries, probably. I think the degree to which the world is going to change, and the opportunity to impact that, starting a company, doing AI research, any number of things, is quite remarkable. I think this is probably the best time to start. Yeah, I think I would say this.

I think this is probably the best time to start a company since the internet, at least, and maybe in the history of technology. I think what you can do with AI is going to get more remarkable every year. And the greatest companies get created at times like this; the most impactful new products get built at times like this. So I would feel incredibly lucky, and I would be determined to make the most of it, and I would go figure out where I wanted to contribute and do it.

And do you have a bias on where you would contribute? Would you want to stay as a student? And if so, would you major in a certain major, given the pace of change? Probably I would not stay as a student, but only because I didn't, and I think it's reasonable to assume people are kind of going to make the same decisions they would make again. I think staying as a student is a perfectly good thing to do; it would probably just not be what I would have picked.

No, and this is you. This is you. So you have the Freaky Friday moment. It's you, you're reborn as a 19-year-old. Oh, yeah. What I think I would, again, I think this is not a surprise, because people kind of are going to do what they're going to do: I think I would go work on AI research. And where might you do that, Sam? I mean, obviously I have a bias towards OpenAI, but I think anywhere I could do meaningful AI research I would be very thrilled about. So you'd be agnostic whether that's academia or private industry? I say this with sadness: I think I would pick industry, realistically. I think you kind of need to be at the place with so much compute. Okay. And if you did join on the research side, would you join... so we had Kazra here last week, who was a big advocate of not being a founder but actually joining an existing company to sort of learn the chops.
For the students that are wrestling with, should I start a company now at 19 or 20, or should I go join another entrepreneurial venture or research lab, what advice would you give them? Well, since he gave the case to join a company, I'll give the other one, which is: I think you learn a lot just starting a company. And if that's something you want to do at some point, there's this thing Paul Graham says, but I think it's very deeply true: there's no pre-startup like there is pre-med. You kind of just learn how to run a startup by running a startup. So if that's what you're pretty sure you want to do, you may as well jump in and do it.

And so let's say somebody wants to start a company and they want to be in AI. What do you think are the biggest near-term challenges you're seeing in AI that are ripe for a startup? And just to scope that: what I mean are the holes that you think are the top-priority needs for OpenAI that OpenAI will not solve in the next three years. So I think this is a very reasonable question to ask in some sense, but I'm not going to answer it, because I think you should never take this kind of advice, about what startup to start, from anyone, ever. I think by the time there's something that is obvious enough that me or somebody else will sit up here and say it, it's probably not that great of a startup idea.

And I totally understand the impulse. And I remember when I was asking people, what startup should I start? But I think one of the most important things I believe about having an impactful career is you have to chart your own course. If the thing that you're thinking about is something that someone else is going to do anyway, or, more likely, something that a lot of people are going to do anyway, you should be somewhat skeptical of that. And I think a really good muscle to build is coming up with the ideas that are not the obvious ones to say.

So I don't know what the really important idea is that I'm not thinking of right now, but I'm very sure someone in this room knows what that answer is. And I think learning to trust yourself and come up with your own ideas and do the very non-consensus things, like when we started OpenAI, that was an extremely non-consensus thing to do, and now it's the very obvious thing to do. Now I only have the obvious ideas, because I'm stuck in this one frame, but I'm sure you all have the other ones. But can I ask it another way, and I don't know if this is fair or not: what questions are you wrestling with that no one else is talking about?

How to build really big computers. I mean, I think other people are talking about that, but we're probably looking at it through a lens that no one else is quite imagining yet. And we're definitely wrestling with how, when we make not just grade-school- or middle-school-level intelligence but PhD-level intelligence and beyond, what the best way is to put that into a product, the best way to have a positive impact with that on society and people's lives. We don't know the answer to that yet, so I think that's a pretty important thing to figure out. Okay.

And can we continue on that thread of how to build really big computers, if that's really what's on your mind? Can you share, I know there's been a lot of speculation and a lot of hearsay too, about the semiconductor foundry endeavor that you are reportedly embarking on. Can you share what's the vision? Yeah. What would make this different than others that are out there? It's not just foundries, although that's part of it. If you believe, which we increasingly do at this point, that AI infrastructure is going to be one of the most important inputs to the future, this commodity that everybody's going to want.

And that is energy, data centers, chips, chip design, new kinds of networks. It's how we look at that entire ecosystem and how we make a lot more of that. And I don't think it'll work to just look at one piece or another; we've got to do the whole thing. Okay. So there are multiple big problems. Yeah. I think this is just the arc of human technological history as we build bigger and more complex systems. And does it grow, so in terms of just the compute cost, correct me if I'm wrong, but GPT-3 was, I've heard, $100 million to train the model, and it was 175 billion parameters.

GPT-4 cost $400 million with 10x the parameters. It was almost 4x the cost, but 10x the parameters. Correct me. Adjust me. I do know it, but I won't. Oh, you can. You're invited to. This is Stanford, Sam. But even if you don't want to correct the actual numbers, if that's directionally correct, does the cost, do you think, keep growing with each subsequent model? Yes. And does it keep growing multiplicatively? Probably. I mean, and so the question then becomes, how do you capitalize that?

Well, look, I kind of think that giving people really capable tools and letting them figure out how they're going to use this to build the future is a super good thing to do and is super valuable. And I am super willing to bet on the ingenuity of you all, and everybody else in the world, to figure out what to do about this. So there is probably some more business-minded person than me at OpenAI somewhere that is worried about how much we're spending, but I kind of don't. Okay. So that doesn't cross your mind.

So, you know, OpenAI is phenomenal. ChatGPT is phenomenal. Everything else, all the other models, are phenomenal. But you burned $520 million of cash last year. Doesn't that concern you in terms of thinking about the economic model, of where the monetization source is actually going to be?

Well, first of all, that's nice of you to say, but ChatGPT is not phenomenal. ChatGPT is mildly embarrassing at best. GPT-4 is the dumbest model any of you will ever have to use again, by a lot. But, you know, it's important to ship early and often, and we believe in iterative deployment.

Like, if we go build AGI in a basement, and then, you know, the world is kind of blissfully walking blindfolded along, I don't think that makes us very good neighbors. So I think it's important, given what we believe is going to happen, to express our view about what we believe is going to happen. But more than that, the way to do it is to put the product in people's hands and let society co-evolve with the technology: let society tell us what it collectively, and people individually, want from the technology; how to productize this in a way that's going to be useful; where the model works really well and where it doesn't; give our leaders and institutions time to react; give people time to figure out how to integrate this into their lives and learn how to use the tool.

Sure, some of you all cheat on your homework with it, but some of you all probably do very amazing, wonderful things with it too. And as each generation goes on, I think that will expand. And that means that we ship imperfect products, but we have a very tight feedback loop, and we learn and we get better. And it does kind of suck to ship a product that you're embarrassed about, but it's much better than the alternative.

And in this case in particular, where I think we really owe it to society to deploy iteratively, one thing we've learned is that AI and surprise don't go well together. People don't want to be surprised. People want a gradual rollout and the ability to influence these systems. That's how we're going to do it. There could totally be things in the future that would change where we think iterative deployment isn't such a good strategy, but it does feel like the current best approach that we have. And I think we've gained a lot from doing this, and hopefully the larger world has gained something too.

Whether we burn $500 million a year or $5 billion or $50 billion a year, I don't care. I genuinely don't. As long as we can, I think, stay on a trajectory where eventually we create way more value for society than that, and as long as we can figure out a way to pay the bills. We're making AGI. It's going to be expensive. It's totally worth it. Do you have a vision for 2030? If I say you crushed it, Sam, it's 2030 and you crushed it, what does the world look like to you?

You know, maybe in some very important ways, not that different. Like, we will be back here. There will be a new set of students. We'll be talking about how startups are really important and the technology is really cool. We'll have this new great tool in the world. It would feel amazing if we got to teleport forward six years today and have this thing that was smarter than humans in many subjects and could do these complicated tasks for us: we could have these complicated programs written, or this research done, or this business started. And yet the sun keeps rising, people keep having their human dramas, life goes on. So, sort of super different in some sense, in that we now have abundant intelligence at our fingertips, and then in some other sense, not different at all.

And you mentioned artificial general intelligence, AGI. In a previous interview, you defined that as software that could mimic the competence of a median human for tasks. Yeah. Can you give me, is there a time, or a range, if you had to make a best guess, for when you feel like that's going to happen?

I think we need a more precise definition of AGI for the timing question, because at this point, even with the definition you just gave, which is a reasonable one, there's some... I'm parroting back what you said in an interview. Well, that's good, because I'm going to criticize myself. Okay. It's too loose of a definition; there's too much room for misinterpretation in there for it to, I think, be really useful or get at what people really want.

Like, I kind of think what people want to know when they say, what's the timeline to AGI, is: when is the world going to be super different? When is the rate of change going to get super high? When is the way the economy works going to be really different? Like, when does my life change? And that, for a bunch of reasons, may be very different than we think. Like, I can totally imagine a world where we build PhD-level intelligence in any area and we can make researchers way more productive.

Maybe we can even do some autonomous research. And in some sense, that sounds like it should change the world a lot. But I can imagine that we do that and then we can detect no change in global GDP growth for years afterwards, something like that. Which is very strange to think about, and it was not my original intuition of how this was all going to go. So I don't know how to give a precise timeline of when we get to the milestone people care about.

But we will get to systems that are way more capable than we have right now, one year from now and every year after, and I think that's the important point. So I've given up on trying to give the AGI timeline. But I think every year for the next many years, we will have dramatically more capable systems every year. I want to ask about the dangers of AGI. And, gang, I know there are tons of questions for Sam; in a few moments.

I'll be opening it up, so start thinking about your questions. A big focus at Stanford right now is ethics. And can we talk about how you perceive the dangers of AGI? Specifically, do you think the biggest danger from AGI is going to come from a cataclysmic event that makes all the papers? Or is it going to be more subtle and pernicious, sort of like how everybody has ADD right now from using TikTok? Are you more concerned about the subtle dangers or the cataclysmic dangers? Or neither?

I'm more concerned about the subtle dangers, because I think we're more likely to overlook those. The cataclysmic dangers, a lot of people talk about them and a lot of people think about them, and I don't want to minimize those. I think they're really serious and a real thing. But I think we at least know to look out for that, and we spend a lot of effort there. The example you gave, of everybody getting ADD from TikTok or whatever, I don't think we knew to look out for. And that's really hard; the unknown unknowns are really hard.

So I'd worry more about those, although I worry about both. And are they unknown unknowns? Are there any that you can name that you're particularly worried about? Well, then it would kind of not be an unknown unknown. What I am worried about is that, even though I think in the short term things change less than we think, as with other major technologies, in the long term I think they change more than we think.

And I am worried about what rate society can adapt to something so new, and how long it'll take us to figure out the new social contract versus how long we get to do it. I'm worried about that. I'm going to open it up soon, so I want to ask you a question about one of the key things that we're now trying to inculcate into the curriculum, as things change so rapidly, which is resilience. That's really good. And the cornerstone of resilience is self-awareness. I'm wondering if you feel that you're pretty self-aware of your driving motivations as you embark on this journey.

So first of all, I believe resilience can be taught. I believe it has long been one of the most important life skills. And in the future, over the next couple of decades, I think resilience and adaptability will be more important than they've been in a very long time. So I think that's really great. On the self-awareness question, I think I'm self-aware, but I think everybody thinks they're self-aware, and whether I am or not is sort of hard to say from the inside.

And can I ask you some of the questions that we ask in our intro classes on self-awareness? Sure. It's like the Peter Drucker framework. So what do you think your greatest strengths are, Sam? I think I'm not great at many things, but I'm good at a lot of things. And I think breadth has become an underrated thing in the world; everyone gets hyper-specialized.

So if you're good at a lot of things, you can seek connections across them, and I think you can then come up with ideas that are different from what everybody else has, or what the experts in one area have. And what are your most dangerous weaknesses? Most dangerous, that's an interesting framing for it. I think I have a general bias to be too pro-technology, just because I'm curious and I want to see where it goes, and I believe that technology is, on the whole, a net good thing. But I think that is a worldview that has overall served me and others well, and thus gotten a lot of positive reinforcement, and it is not always true.

And when it's not been true, it's been pretty bad for a lot of people. And then the Harvard psychologist David McClelland has this framework that all leaders are driven by one of three primal needs: a need for affiliation, which is a need to be liked, a need for achievement, and a need for power. If you had to rank those, what would yours be? I think at various times in my career, all of those. I think there are these levels that people go through. At this point, I feel driven by wanting to do something useful and interesting. And I think I definitely had the money and the power and the status phases.

And when did you last feel most like yourself? ... You all are skilled. And one last question: what are you most excited about with GPT-5 that's coming out, that people don't know about? What are you most excited about with the release of GPT-5 that we're all going to see? I don't know yet. I mean, this sounds like a cop-out answer, but I think the most important thing about GPT-5, or whatever we call that, is just that it's going to be smarter. And this sounds like a dodge, but I think that's among the most remarkable facts in human history: that we can just do something, and we can say right now, with a high degree of scientific certainty, GPT-5 is going to be a lot smarter than GPT-4. GPT-6 is going to be a lot smarter than GPT-5. And we are not near the top of this curve, and we kind of know what to do. And it's not like it's going to get better in just one area.

It's not that it's always going to get better at this eval or this subject or this modality; it's just going to be smarter in the general sense. And I think the gravity of that statement is still underrated. Okay, that's great. Sam, guys, Sam is really here for you. He wants to answer your questions, so we're going to open it up. Hello. Thank you so much for joining us. I'm a junior here at Stanford. I sort of wanted to talk to you about responsible deployment of AGI. So as you guys continue to inch closer to that, how do you plan to deploy it responsibly at OpenAI, to prevent stifling human innovation and continue to spur it? So I'm actually not worried at all about stifling human innovation. I really deeply believe that people will just surprise us on the upside with better tools.

I think all of history suggests that if you give people more leverage, they do more amazing things, and we all get to benefit from that. That's just kind of great. I am, though, increasingly worried about how we're going to do this all responsibly. I think as the models get more capable, we have a higher and higher bar. We do a lot of things like red teaming and external audits, and I think those are all really good. But I think as the models get more capable, we'll have to deploy even more iteratively, and have an even tighter feedback loop on looking at how they're used, and where they work and where they don't work.

And this world that we used to operate in, where we could release a major model update every couple of years, we probably have to find ways to increase the granularity on that and deploy more iteratively than we have in the past. And it's not super obvious to us yet how to do that, but I think that'll be key to responsible deployment. And also, the way we have all of the stakeholders negotiate what the rules of AI need to be, that's going to get more complex over time too. Thank you. Next question, right here. You mentioned before that there's a growing need for larger and larger computers and faster computers. However, many parts of the world don't have the infrastructure to build those data centers or those large computers.
在这个我们过去习惯于进行的世界,我们能够每隔几年发布一次重大的模型更新,我们可能需要找到提高粒度并进行比以往更迭代部署的方法。目前对我们来说如何去做还不太明显。但我认为这将是负责任部署的关键。而且,我们让所有利益相关者就AI需要什么规则进行协商的方式,随着时间的推移也将变得更加复杂。谢谢。下一个问题就在这里。您之前提到过,现在需要越来越大的计算机和更快的计算机。然而,世界上许多地方都没有基础设施来建造那些数据中心或大型计算机。

How do you see global innovation being impacted by that? So, two parts to that. One, no matter where the computers are built, I think global and equitable access to use the computers for training as well as inference is super important. One of the things that's very core to our mission is that we make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can't or, for good reason, don't want to operate. How we think about making training compute more available to the world is going to become increasingly important.
你认为全球创新会受到怎样的影响?这有两个方面。首先,无论计算机在哪里建造,我认为全球公平获取和使用计算机进行培训和推理是非常重要的。我们的使命之一是让Chat GPT免费提供给所有想要使用它的人,除了在某些国家我们因为某种重要原因无法或不愿意运营。我们如何考虑使训练计算资源更加普遍可得将变得越来越重要。

I do think we get to a world where we sort of think about it as a human right to get access to a certain amount of compute, and we've got to figure out how to distribute that to people all around the world. There's a second thing, though, which is that I think countries are going to increasingly realize the importance of having their own AI infrastructure, and we want to figure out a way, and we're now spending a lot of time traveling around the world, to build it in the many countries that want these, and I hope we can play some small role there in helping that happen.
我认为我们会进入一个世界,在这个世界中,我们会把获取一定数量的计算资源视为一项人权,我们必须想办法将这种资源分配给全世界的人。另外一件事是,我认为各国会越来越意识到拥有自己的人工智能基础设施的重要性,我们希望找到一种方法,我们正在花费大量时间在全球各地建立这些基础设施,希望我们能在其中发挥一定作用。

Perfect, thank you. My question was: what role do you envision for AI in the future of, like, space exploration or colonization? I think space is, obviously, not that hospitable for biological life, so if we can send the robots, that seems easier. Hey Sam, so my question is for a lot of the founders in the room, and I'm going to give you the question and then I'm going to explain why I think it's complicated. So my question is about how you know an idea is non-consensus, and the reason I think it's complicated is because it's easy to overthink.
太好了,谢谢你。我想问的是你认为未来人工智能在太空探索或殖民化中将扮演什么角色?我认为太空对生物生命来说并不那么友好,如果我们可以发送机器人似乎更容易。嗨,山姆,我的问题是关于在座的许多创始人,我会先给你问题,然后解释为什么我认为问题很复杂。我的问题是如何判断一个想法是非共识的,我认为问题复杂是因为容易陷入过度思考。

I think today even you yourself say AI is the place to start a company. I think that's pretty consensus. Maybe rightfully so; it's an inflection point. I think it's hard to know if an idea is non-consensus, depending on the group that you're talking about. The general public has a different view of tech from the tech community, and even tech elites have a different point of view from the tech community. So I was wondering how you verify that your idea is non-consensus enough to pursue?
我认为即使是你自己,也认为人工智能是创办公司的好地方。我觉得这是相当普遍的看法。也许是有道理的,这是一个拐点。我觉得很难判断一个想法是否与主流观点不同,这取决于你所说的群体。普通大众对科技有不同的看法,科技社区甚至科技精英对科技社区也有不同的观点。因此我想知道你如何验证你的想法是否足够与主流看法不同,值得追求?

I mean, first of all, what you really want is to be right. Being contrarian and wrong is still wrong, and if you predicted, like, 17 out of the last two recessions, you were probably contrarian for the two you got right, probably not even necessarily, but you were wrong 15 other times. So I think it's easy to get too excited about being contrarian, and again, the most important thing is to be right, and the group is usually right. But where the most value is, is when you are contrarian and right, and that doesn't always happen in a zero-or-one kind of way. Everybody in the room can agree that AI is the right place to start a company.
首先,你真正想要的是正确。逆势而为但错了,仍然是错的;如果你在过去只发生过两次经济衰退的时期预测了17次衰退,你在猜对的那两次上也许算是逆势而为,但其余15次你都错了。所以我认为,人们很容易对逆势而为过于兴奋;最重要的还是正确,而且群体通常是对的。但最有价值的,是当你既逆势而为又正确的时候,而这并不总是以非黑即白的方式出现。房间里的每个人都会同意,人工智能是创办公司的正确起点。

And if one person in the room figures out the right company to start and then successfully executes on that and everybody else thinks that wasn't the best thing you could do, that's what matters. So it's okay to kind of like go with conventional wisdom when it's right and then find the area where you have some unique insight. In terms of how to do that, I do think surrounding yourself with the right peer group is really important and finding original thinkers is important but there is part of this where you kind of have to do it solo or at least part of it solo or with a few other people who are like you know going to be your co-founders or whatever.
如果房间里有一个人找到了正确的创业公司并成功地执行了,而其他人觉得那并不是最好的选择,那才是最重要的。所以,当习以为常的智慧是正确的时候,跟着走没问题,然后找到你有独特见解的领域。关于如何做到这一点,我认为与正确的同行群体相伴是非常重要的,找到原创思想者也很重要,但在这个过程中,你也需要独自行动,或者至少部分时间独自行动,或者与一些可能成为你合伙人的人一起行动。

And I think by the time you're too far into "how can I find the right peer group," you're somehow in the wrong framework already. So, like, learning to trust yourself and your own intuition and your own thought process, which gets much easier over time. No one, no matter what they say, I think is truly great at this when they're just starting out. Yeah, because you kind of just haven't built the muscle, and all of your social pressure and all of the evolutionary pressure that produced you was against that.
我认为当你已经深陷寻找正确的同伴群体时,你或多或少已经错入了错误的框架。因此,学会信任自己、自己的直觉和自己的思考过程会随着时间的推移变得更容易。无论别人说什么,我认为没有人真的擅长在这方面,只是刚刚开始。是的,因为你似乎还没有在应对所有社交压力和塑造你的演化压力中建立起自信。

So it's something that you get better at over time, and don't hold yourself to too high of a standard too early on it. Hi Sam, I'm curious to know what your predictions are for how energy demand will change in the coming decades and how we achieve a future where renewable energy sources are one cent per kilowatt hour. I mean, it will go up for sure. Well, not for sure, you can come up with all these weird, depressing futures where it doesn't go up, but I would like it to go up a lot.
所以这是一种随着时间而变得更好的事情,不要过早地将自己的标准定得太高。嗨,山姆,我很想知道你对未来几十年能源需求将如何变化以及我们如何实现可再生能源价格为每千瓦时一美分的未来有什么预测。我的意思是,它肯定会增加。嗯,也不是绝对肯定,你可以设想出各种奇怪、令人沮丧的未来,在那些未来里它不会增加,但我希望它会大幅增加。

I hope that we hold ourselves to a high enough standard where it does go up. I forget exactly what the world's electrical generating capacity is right now, but let's say it's like 3,000, 4,000 gigawatts, something like that. Even if we add another 100 gigawatts for AI, it doesn't materially change it that much, but it changes it some, and if we get to 1,000 gigawatts for AI someday, that's a material change. But there are a lot of other things that we want to do, and energy does seem to correlate quite a lot with the quality of life we can deliver for people. My guess is that fusion eventually dominates electrical generation on Earth. I think it should be the cheapest, most abundant, most reliable, densest source. I could be wrong on that, and it could be solar plus storage. And, you know, my guess, most likely, is it's going to be 80/20 one way or the other, and there will be some cases where one of those is better than the other, but those kind of seem like the two bets for really global-scale, one-cent-per-kilowatt-hour energy.
我希望我们能对自己保持足够高的标准,让能源供应确实增长。我记不清现在世界的发电能力是多少,但假设是3000、4000吉瓦,大概是这样。即使我们再增加100吉瓦用于人工智能,实际上并没有太大改变,但会有一些变化;如果有一天人工智能用电达到1000吉瓦,那将是一个实质性的变化。但我们还有许多其他要做的事情,能源似乎与我们能为人们提供的生活质量高度相关。我猜测,核聚变最终将主导地球的发电行业。我认为它应该是成本最低、最丰富、最可靠、最密集的能源来源。我可能猜错了,也可能是太阳能加储能。我猜最有可能的结果是八二开,不论哪种占多数;在某些情况下其中一种会比另一种更好,但这两者似乎就是实现真正全球规模、每千瓦时一美分能源的两大押注。

Hi Sam, I have a question. It's about what happened last year. So what's the lesson you learned? Because you talk about resilience. What's the lesson you learned from leaving the company and now coming back, and what made you come back, because Microsoft also made you an offer at the time. I mean, the best lesson I learned was that we had an incredible team that totally could have run the company without me, and did for a couple of days. And also that the team was super resilient. We knew that some crazy things, and probably more crazy things, will happen to us between here and AGI, as different parts of the world have stronger and stronger emotional reactions and the stakes keep ratcheting up. And you know, I thought that the team would do well under a lot of pressure, but you never really know until you get to run the experiment, and we got to run the experiment, and I learned that the team was super resilient and ready to run the company. In terms of why I came back: you know, originally, when the board called me the next morning and was like, what do you think about coming back, I was like, no, I'm mad. And then I thought about it and I realized just how much I loved OpenAI, how much I loved the people, the culture we built, the mission, and I kind of wanted to finish it all together. Emotionally, this is obviously a really sensitive one. It's not, but I can imagine that it was, mostly. Okay.
嗨,山姆,我有一个问题。关于去年的情况,发生了什么?所以你学到了什么教训?因为你谈到了韧性。那么,你从辞职那家公司到现在再次回来学到了什么教训以及是什么让你决定回来,因为微软也给予了你更多的时间。我学到的最重要的一点是,我们有一个非常出色的团队,完全可以在没有我情况下运作公司,而且有几天的时间确实也是这样。我们知道,一些疯狂的事情发生了,可能会有更多的疯狂事情在我们和AGI之间发生,因为不同地区的情绪反应变得越来越强烈,赌注越来越高。我以为团队能够在巨大的压力下表现很好,但是直到你真正进行实验之前,你永远不知道。我们进行了实验,我发现团队非常有韧性,准备好来经营公司。至于我为什么回来,当时董事会第二天早上打电话给我,问我对回来的想法如何?我开始回答说“不”,我很生气。然后我想了想,意识到我有多么热爱OpenAI,多么热爱我们建立的文化、使命,我想要一起完成这一切。情感上,这显然是一件非常敏感的事情,但想象一下,这主要是,好吧。

Well then, can we talk about the structure of it, because of this Russian-doll structure of OpenAI, where you have the nonprofit owning the for-profit. You know, when we're trying to teach principled entrepreneurship here. We got to the structure gradually. It's not what I would go back and pick if we could do it all over again. But we didn't think we were going to have a product when we started. We were just going to be an AI research lab. It wasn't even clear. We had no idea about a language model or an API or ChatGPT. So if you're going to start a company, you've got to have some theory that you're going to sell a product someday. And we didn't think we were going to. We didn't realize we were going to need so much money for compute. We didn't realize we were going to have this nice business. So what was your intention when you started it? We just wanted to push AI research forward. We thought that. And I know this gets back to motivations, but that's the pure motivation. There's no motivation around making money or power. I cannot overstate how foreign of a concept, like... I mean, for you personally, not for OpenAI, but you weren't starting out. I had already made a lot of money. So it was not like a big... I mean, I don't want to claim some moral purity here. It was just, that was the stage of my life. That's not a driver. A driver. Okay. Because there's this.
好吧,那么我们能谈谈这个结构吗?OpenAI的这个俄罗斯套娃结构,非营利机构拥有营利机构。你知道,我们在这里是要教授原则性创业的。我们是逐渐形成这个结构的。如果可以重来一次,这不是我会选择的结构。但当我们开始的时候,我们根本没想到会有产品。我们只是打算做一个人工智能研究实验室。甚至不清楚。我们对语言模型、API或者ChatGPT都一无所知。所以如果你要创办一家公司,你必须有一些理论,认为有一天你会卖产品。我们并没有想过这一点。我们也没有意识到我们会需要这么多的计算资源。我们也没有意识到我们会经营这样一个不错的业务。那么你们开始的初衷是什么?我们只是想推动人工智能研究。我们当时就是这么认为的。我知道这又回到动机问题,但那就是纯粹的动机。没有关于赚钱或权力的动机。我无法形容这个概念对我们有多么陌生。我是说,针对你个人,不是针对OpenAI,但你当时并不是刚起步。我当时已经赚了很多钱。所以这并不是什么大问题。我是说,我不想声称自己有某种道德纯洁性。只是那是我人生的那个阶段。那不是动力。动力。好吧,因为有这个。

So, and the reason why I'm asking is just, you know, when we're teaching about principled entrepreneurship here, you can understand principles inferred from organizational structures. When the United States was set up, the architecture of governance is the Constitution. It's got three branches of government, all these checks and balances. And you can infer certain principles: you know, there's a skepticism of centralized power, things will move slowly, it's hard to get things to change, but it'll be very, very stable. Not to parrot Billie Eilish, but if you look at the OpenAI structure and you think, what was it made for? You have, like, your near-$100-billion valuation, and you've got a very, very limited board, that's a nonprofit board, which is supposed to look after its fiduciary duty to humanity. Again, it's not what we would have done if we knew then what we know now, but you don't get to play life in reverse. And you have to just adapt. There's a mission we really cared about. We thought AI was going to be really important. We thought we had an algorithm that learned. We knew it got better with scale. We didn't know how predictably it got better with scale. And we wanted to push on this. We thought this was going to be a very important thing in human history. And we didn't get everything right, but we were right on the big stuff, and our mission hasn't changed. And we've adapted the structures.
所以我问的原因只是,你知道,当我们在这里教创业原则时,你可以从组织结构中推断出原则。当美国建立时,治理架构是宪法。它有三个政府分支,所有这些相互制衡。你可以推断出某些原则:对集权持怀疑态度,事情会进行得缓慢,很难让事情改变,但会非常非常稳定。不是要模仿Billie Eilish的歌,但如果你看看OpenAI的结构,你会想:它是为什么而设计的?你有近1000亿美元的估值,你有一个非常有限的董事会,这是一个非营利性董事会,应当对全人类履行受托责任。再次强调,如果我们当时知道现在所知道的,我们不会这样做,但你不能倒着过人生。你必须去适应。我们有一个真正关心的使命。我们认为AI会变得非常重要。我们认为我们拥有一个能够学习的算法。我们知道它随着规模的扩大会变得更好。我们不知道它随规模扩大而改进的可预测程度。我们想要在这方面努力。我们认为这将成为人类历史上非常重要的事情。我们并没有把一切都做对,但我们在大问题上是对的,我们的使命没有改变。我们已经调整了结构。

We go, and we'll adapt it more in the future. But you know, life is not a problem set. You don't get to solve everything really nicely all at once. It doesn't work quite like it works in the classroom as you're doing it. And my advice is just, trust yourself to adapt as you go. It'll be a little bit messy, but you can do it. And I just ask this because of the significance of OpenAI. You have a board which is all supposed to be independent financially, so that they're making these decisions as a nonprofit. Thinking about the stakeholder, the stakeholder that they are fiduciaries of isn't the shareholders, it's humanity. Everybody's independent. There's no financial incentive that anybody has that's on the board, including yourself, with OpenAI. Well, Greg was... okay, first of all, I think making money is a good thing. I think capitalism is a good thing. My co-founders on the board have had financial interests, and I've never once seen them not take the gravity of the mission seriously. But you know, we've put a structure in place that we think is a way to get incentives aligned, and I do believe incentives are superpowers. But I'm sure we'll evolve it more over time. And I think that's good, not bad. And with the OpenAI fund, then, you don't get any carry in that, and you're not following on investments into those companies.
我们会继续前进,并在未来进行更多调整。但你知道,生活并不是一道题目。你不能一次性解决所有问题。并且生活并不像在课堂上做题那样完美无缺。我的建议是相信自己能够随着情况变化而适应。虽然可能会有一点混乱,但你可以做到。我之所以这样问是因为开放AI的重要性。你们的董事会都是独立财务上的,以确保他们能够作为非营利机构做出这些决定。考虑到他们的利益相关者,他们的受托人不是股东,而是全人类。每个人都是独立的。董事会中没有任何人有任何金融激励,包括你自己在开放AI中。首先,我认为赚钱是好事。我认为资本主义是件好事。我的创始人同事在董事会上有财务利益,但我从未见过他们对使命的重要性不认真对待。但你知道,我们已经建立了一个我们认为能够使激励对齐的结构,我相信激励是超能力。但我相信随着时间的推移,我们会进一步完善它。我认为这是好的,而不是坏的。在开放AI中,你们资助,但在这个过程中你们不获得任何分成,并且没有跟投这些公司的投资。

Okay, thank you. We can keep talking about this. No, no, I know you want to go back to the students. I do too. So we'll keep going with the students. How do you expect that AGI will change geopolitics and the balance of power in the world? Like, maybe more than any other technology. I think about that so much, and I have such a hard time saying what it's actually going to do. Or maybe more accurately, I have such a hard time saying what it won't do. We were talking earlier about how it's maybe not going to change day-to-day life that much. But the balance of power in the world, it feels like it does change a lot. But I don't have a deep answer of exactly how. Thanks so much. I was wondering, sorry, I was wondering, in the deployment of general intelligence, and also responsible AI, how much do you think it is necessary that AI systems are somehow capable of recognizing their own insecurities or uncertainties and actually communicating them to the outside world? I always get nervous anthropomorphizing AI too much, because I think it can lead to a bunch of weird oversights. But if we say, how much can AI recognize its own flaws, I think that's very important to build. And right now, the ability to recognize an error in reasoning, and to have some sort of introspection ability like that, seems to me really important to pursue.
好的,谢谢。我们可以继续谈论这个话题。不,不,我知道你想回到学生的问题。我做YouTube。所以我们将继续关注学生。你认为人工智能通用智能将如何改变世界地缘政治和力量平衡?可能比其他任何技术都更多。我不知道,我想到那个问题很多次,但很难说它实际会做什么。或者更准确地说,我很难说它不会做什么。我们刚刚谈到可能不会改变日常生活那么多。但是世界力量平衡,感觉它确实会有很大的改变。但是我并没有一个深入的答案,关于具体会如何改变。非常感谢。我在思考,抱歉,我在思考智能部署和负责任的AI。你认为AI系统有多重要能够识别自己的不安全性或不确定性,并将其实际传达给外部世界?我总是有点紧张地人格化AI,因为我觉得这可能会导致一些奇怪的疏忽。但如果我们说AI有多少能够认识到自己的缺陷,我认为这是非常重要的去建设。现在的能力去识别推理错误并具有某种反思的能力,对我来说这似乎非常重要。

Hey, Sam, thank you for giving us some of your time today and coming to speak. From the outside looking in, we all hear about the culture and togetherness of OpenAI, in addition to the intensity and speed at which you guys work, clearly seen from ChatGPT and all your breakthroughs, and also when you were temporarily removed from the company by the board, and how all of your employees tweeted, "OpenAI is nothing without its people." What would you say is the reason behind this? Is it the binding mission to achieve AGI, or something even deeper? What is pushing the culture every day? I think it is the shared mission. I mean, I think people like each other, and we feel like we're in the trenches together doing this really hard thing. But I think it really is a deep sense of purpose and loyalty to the mission. And when you can create that, I think it is the strongest force for success at any startup, at least that I've seen among startups. And you know, we try to select for that in the people we hire, but even people who come in not really believing that AGI is going to be such a big deal, and that getting it right is so important, tend to believe it after the first three months or whatever. And so that's a very powerful cultural force that we have. Thanks.
嘿,山姆,感谢您今天抽出时间来和我们交谈。从外部看来,我们都听说过开放AI的文化和团结。除了你们工作的强度和速度,从产品、进展等方面都能清楚地看到。还有当你被董事会暂时解除职务时,所有员工都在推特上表示,没有了员工的开放AI就一无所有。你会说这背后的原因是什么呢?是为了实现人工智能而束缚在一起,还是更深层次的东西?是什么推动了文化的发展?我觉得是共同的使命。我觉得人们喜欢彼此,感觉我们一起在战壕里做着这件非常艰难的事情。但我认为这真的是深刻的目标感和对使命的忠诚。当你能够创造这种感觉时,我觉得这是成功的最强大动力,至少在创业公司中是我见过的。我们试图在招聘时选择这种人,但即使是刚进来的人可能不认为人工通用智能会是一件大事,但经过几个月后通常都会相信它的重要性。因此,这是我们拥有的非常强大的文化力量。谢谢。

Currently, there are a lot of concerns about the misuse of AI in the immediate term, with issues like global conflicts and the election coming up. What do you think can be done by the industry, governments, and honestly people like us in the immediate term, especially with very strong open-source models? Something that I think is important is not to pretend like this technology, or any other technology, is all good. I believe that AI will be very net good, tremendously net good. But I think, like with any other tool, it'll be misused. You can do great things with a hammer, and you can kill people with a hammer. I don't think that absolves us, or you all, or society, from trying to mitigate the bad as much as we can and maximize the good. But I do think it's important to realize that with any sufficiently powerful tool, you do put power in the hands of tool users, or you make some decisions that constrain what people in society can do. I think we have a voice in that. I think you all have a voice in that. I think the governments and our elected representatives in democratic processes have the loudest voice in that. But we're not going to get this perfectly right. Like, we, society, are not going to get this perfectly right. And a tight feedback loop, I think, is the best way to get it closest to right. And the way that that balance of safety versus freedom and autonomy gets negotiated, I think it's worth studying that with previous technologies, and we'll do the best we can here. We, society, will do the best we can here.
当前,人们对人工智能的滥用问题非常关注,特别是面临全球冲突和即将到来的选举等问题。您认为行业、政府和像我们这样的普通人在短期内可以采取什么行动,特别是在使用非常强大的开源模型时?我认为重要的一点是不要假装这项技术或任何其他技术都是全好的。我相信人工智能将带来非常大的好处,巨大的净利益。但我认为就像任何其他工具一样,它也会被滥用。你可以用锤子做很棒的事情,也可以用它来杀人。我不认为这能使我们、你们或社会免于努力尽量减少不良因素,最大化好处。但我认为意识到任何足够强大的工具,你会将权力放在使用工具的人手中,或者做出一些限制社会人们行为的决定。我认为我们在其中有一席之声。我认为你们都在其中有一席之声。我认为政府和我们在民主过程中选举产生的代表在其中拥有最大的声音。但我们不会完全正确地做到这一点。就像我们社会不能完全正确地做到这一点。我认为建立一个密切的反馈循环是使之尽可能正确的最佳方法。安全与自由和自治之间的平衡如何协商,我认为值得研究以前的技术,并且我们会在这里尽最大努力。我们社会会在这里尽最大努力。

I'm getting... actually, I've got to cut it. Sorry. I know. I just want to be very sensitive to time. I know the interest far exceeds the time, and the love for Sam. Sam, I know it is your birthday. I don't know if you can indulge us, because I know there's a lot of love for you. So I wonder if we can all just sing happy birthday. No, no, no, no. Please. No. We want to make you very uncomfortable. I'd much rather do one more... This is less interesting. We can do one more question quickly. Dear Sam, happy birthday to you. Twenty seconds of awkwardness.
对不起,我得打断了。我知道。我只是想对时间保持敏感。我知道大家的热情远远超出了时间,对Sam的爱也是。Sam,我知道今天是你的生日。我不知道你是否可以迁就我们一下,因为我知道大家都对你充满爱。所以我想我们能不能一起唱生日快乐歌。不,不,不,请不要。我们就是想让你很不自在。我宁愿再回答一个问题。唱歌没那么有趣。我们可以快速再问一个问题。亲爱的Sam,祝你生日快乐。二十秒的尴尬。

Is there a burner question? Somebody's got a real burner, and we only have thirty seconds, so make it short. Hi. I wanted to ask whether the prospect of making something smarter than any human possibly could be scares you. It of course does, and I think it would be really weird and a bad sign if it didn't scare me. Humans have gotten dramatically smarter and more capable over time. You are dramatically more capable than your great-great-grandparents, and there's almost no biological drift over that period. Like, sure, you eat a little bit better and got better health care.
有没有一个关键问题?有人拿着一个真正的关键问题,而我们只有三十秒,所以简短一点。嗨。我想问的是,创造出比任何人类都更聪明的东西,这种前景是否让你感到害怕。当然会,我认为如果它不让我害怕,那将是非常奇怪的,也是一个不好的迹象。人类的智力和能力随着时间的推移大大提高。你比你的曾曾祖父母能干得多,而在那段时间内几乎没有生物学上的演变。当然,你吃得更好了,得到了更好的医疗照顾。

Maybe you eat worse, I don't know. But that's not the main reason you're more capable. You are more capable because the infrastructure of society is way smarter and way more capable than any human. And through that, society, the people that came before you, made you the Internet, the iPhone, a huge amount of knowledge available at your fingertips, and you can do things that your predecessors would find absolutely breathtaking. Society is far smarter than you now. Society is an AGI, as far as you can tell. And the way that that happened was not any individual's brain, but the space between all of us, that scaffolding that we build up and contribute to, brick by brick, step by step, and then we use to go to far greater heights for the people that come after us.
也许你吃得不好。我不知道。但这并不是你更有能力的主要原因。你更有能力是因为社会基础比任何人都要聪明、更有能力。通过这种方式,它让你成为了社会的一部分,让人类将你打造成了互联网、iPhone,让大量知识触手可及,你能做到让你的前辈们感到无比惊讶的事情。社会现在比你聪明得多。从你看来,社会就是一个人工智能。这种现象的产生并不是某个人的大脑所能做到的,而是我们所有人之间共同建立的支撑系统,我们一砖一瓦地贡献着,一步一步地建立起来,然后利用这一支撑系统让接下来的人们达到更高的境界。

Things that are smarter than us will contribute to that same scaffolding. You will have, and your children will have, tools available that you didn't, and that scaffolding will have gotten built up to greater heights. And that's always a little bit scary. But I think it's way more good than bad, and people will do better things and solve more problems, and the people of the future will be able to use these new tools and the new scaffolding that these new tools contribute to. If you think about a world that has AI making a bunch of scientific discoveries, what happens to that scientific progress is it just gets added to the scaffolding, and then your kids can do new things with it, or you in 10 years can do new things with it. But the way it's going to feel to people, I think, is not that there is this much smarter entity, because we're much smarter in some sense than our great-great-great-grandparents, or more capable at least, but that any individual person can just do more.
有些比我们更聪明的东西将有助于同一搭建结构。你和你的孩子将拥有你没有的工具,这些搭建结构将会建立到更高的高度。这总是有点可怕的。但我认为好处远大于坏处,人们将做出更好的事情,解决更多问题,未来的人们将能够使用这些新工具和新工具所带来的新搭建结构。想象一下一个有人工智能做出大量科学发现的世界,那些科学进步会怎样发展,它们只能被添加到搭建结构中,然后你的孩子可以用它们做出新事物,或者十年后的你可以用它们做出新事物。但我认为人们的感受不是觉得有一个比我们聪明得多的实体,因为在某种意义上,我们比曾祖父辈要聪明得多,或者至少更有能力,而是任何个体都可以做得更多。

And that we're going to end it. So let's give Sam a round of applause.
那么,我们将要结束了。让我们为山姆鼓掌。