Welcome to the Entrepreneurial Thought Leaders seminar at Stanford University. This is the Stanford seminar for aspiring entrepreneurs. ETL is brought to you by STVP, the Stanford engineering entrepreneurship center, and BASES, the Business Association of Stanford Entrepreneurial Students. I'm Ravi Belani, a lecturer in the Management Science and Engineering Department, and the Director of Alchemist, an accelerator for enterprise startups. And today, I have the pleasure of welcoming Sam Altman to ETL. Sam is the co-founder and CEO of OpenAI. And "open" is not a word I would use to describe the seats in this class. And so I think by virtue of that, everybody already knows OpenAI, but for those who don't, OpenAI is the research and deployment company behind ChatGPT, DALL-E, and Sora. Sam's life is a pattern of breaking boundaries and transcending what's possible, both for himself and for the world. He grew up in the Midwest in St. Louis, came to Stanford, took ETL as an undergrad. He was at Stanford for two years, where he studied computer science. And then after his sophomore year, he joined the inaugural class of Y Combinator with a social mobile app company called Loopt, which then went on to raise money from Sequoia and others. He then dropped out of Stanford, spent seven years on Loopt, which got acquired.
And then he rejoined Y Combinator in an operational role. He became the president of Y Combinator from 2014 to 2019. And then in 2015, he co-founded OpenAI as a non-profit research lab with the mission to build general-purpose artificial intelligence that benefits all humanity. OpenAI set the record for the fastest-growing app in history with the launch of ChatGPT, which grew to 100 million active users just two months after launch. Sam was named one of Time's 100 most influential people in the world. He was also named Time's CEO of the Year in 2023. And he was also most recently added to Forbes' list of the world's billionaires. Sam lives with his husband in San Francisco and splits his time between San Francisco and Napa, and he's also a vegetarian. And so with that, please join me in welcoming Sam Altman to the stage.
And then full disclosure, that was a longer introduction than Sam probably would have liked. Brevity is the soul of wit, so we'll try to make the questions more concise. But this is also Sam's birth week. It was his birthday on Monday. And I mention that just because I think this is an auspicious moment, both in terms of time, you're 39 now, and also place, you're at Stanford in ETL, so I would be remiss if this wasn't a moment of some reflection. And I'm curious, if you reflect back on when you were half a life younger, when you were 19 in ETL, if there were three words to describe what your felt sense was like as a Stanford undergrad, what would those three words be? These are always hard questions. I was like, you want three words only? Okay. You can go more, Sam. You're the king of brevity. Excited, optimistic, and curious. And what would be your three words now? I guess the same. Which is terrific.
So there's been a constant thread, even though the world has changed. You know, a lot has changed in the last 19 years, but that's going to pale in comparison to what's going to happen in the next 19. Yeah. And so I need to ask you for your advice if you were a Stanford undergrad today. So if you had a Freaky Friday moment, tomorrow you wake up and suddenly you're a 19-year-old Stanford undergrad knowing everything you know, what would you do? Would you drop out? I'd be happy. I would feel like I was coming of age at the luckiest time, like in several centuries, probably. I think the degree to which the world is going to change, and the opportunity to impact that, starting a company, doing AI research, any number of things, is like quite remarkable. I think this is probably the best time to start. Yeah, I think I would say this.
I think this is probably the best time to start a company since the internet at least, and maybe kind of like in the history of technology. I think what you can do with AI is going to just get more remarkable every year. And the greatest companies get created at times like this; the most impactful new products get built at times like this. So I would feel incredibly lucky, and I would be determined to make the most of it, and I would go figure out where I wanted to contribute and do it. And do you have a bias on where you would contribute? Would you want to stay as a student? And if so, would you major in a certain major, given the pace of change? Probably I would not stay as a student, but only because like I didn't, and I think it's like reasonable to assume people kind of are going to make the same decisions they would make again. I think staying as a student is a perfectly good thing to do. It would probably not be what I would have picked. No, and this is you. This is you. So you have the Freaky Friday moment. It's you, you're reborn as a 19 year old. Oh, yeah. What I think I would. Again, like I think this is not a surprise because people kind of are going to do what they're going to do. I think I would go work on AI research. And where might you do that, Sam? I think, I mean, obviously I have a bias towards OpenAI, but I think anywhere I could do meaningful AI research I would be like very thrilled about. So you'd be agnostic if that's academia or private industry. I say this with sadness. I think I would pick industry realistically. I think you kind of need to be at the place with so much compute. Okay. And if you did join on the research side, would you join, so we had Kazra here last week who was a big advocate of not being a founder but actually joining an existing company to sort of learn the chops. For the students that are wrestling with, should I start a company now at 19 or 20, or should I go join another entrepreneurial venture or research lab, what advice would you give them? Well, since he gave the case to join a company, I'll give the other one, which is I think you learn a lot just starting a company. And if that's something you want to do at some point, there's this thing Paul Graham says, but I think it's like very deeply true. There's no pre-startup like there is pre-med. You kind of just learn how to run a startup by running a startup. And if that's what you're pretty sure you want to do, you may as well jump in and do it.
And so let's say somebody wants to start a company and they want to be in AI. What do you think are the biggest near-term challenges that you're seeing in AI that are ripe for a startup? And just to scope that, what I mean by that are: what are the holes that you think are the top priority needs for OpenAI that OpenAI will not solve in the next three years? So I think this is like a very reasonable question to ask in some sense, but I'm not going to answer it, because I think you should never take this kind of advice about what startup to start ever from anyone. I think by the time there's something that is like the kind of thing that's obvious enough that me or somebody else will sit up here and say it, it's probably like not that great of a startup idea.
And I totally understand the impulse. And I remember when I was just like asking people like, what startups should I start? But I think like one of the most important things I believe about having an impactful career is you have to chart your own course. If the thing that you're thinking about is something that someone else is going to do anyway or more likely something that a lot of people are going to do anyway, you should be like somewhat skeptical of that. And I think a really good muscle to build is coming up with the ideas that are not the obvious ones to say.
So I don't know what the really important idea is that I'm not thinking of right now, but I'm very sure someone in this room knows what that answer is. And I think learning to trust yourself and come up with your own ideas and do the very like non-consensus things like when we started OpenAI, that was an extremely non-consensus thing to do. And now it's like the very obvious thing to do. Now I only have the obvious ideas because I'm just like stuck in this one frame, but I'm sure you all have the other ones. But can I ask it another way and I don't know if this is fair or not, but what questions then are you wrestling with that no one else is talking about?
How to build really big computers. I mean, I think other people are talking about that, but we're probably looking at it through a lens that no one else is quite imagining yet. I mean, we're definitely wrestling with how, when we make not just like grade-school or middle-school level intelligence, but PhD-level intelligence and beyond, the best way to put that into a product, the best way to have a positive impact with that on society and people's lives. We don't know the answer to that yet. So I think that's like a pretty important thing to figure out. Okay.
And can we continue on that thread then of how to build really big computers, if that's really what's on your mind? Can you share, I know there's been a lot of speculation and a lot of hearsay too about the semiconductor foundry endeavor that you are reportedly embarking on. Can you share what's the vision? What would make this different than others that are out there? Yeah. It's not just foundries, although that's part of it. If you believe, which we increasingly do at this point, that AI infrastructure is going to be one of the most important inputs to the future, this commodity that everybody's going to want.
And that is energy, data centers, chips, chip design, new kinds of networks. It's how we look at that entire ecosystem and how we make a lot more of that. And I don't think it'll work to just look at one piece or another; we've got to do the whole thing. Okay. So there's multiple big problems. Yeah. I think like just this is the arc of human technological history as we build bigger and more complex systems. And in terms of just the compute cost, correct me if I'm wrong, but GPT-3 was, I've heard, $100 million to do the model, and it was 175 billion parameters.
GPT-4 cost $400 million with 10x the parameters; it was almost 4x the cost, but 10x the parameters. Correct me. Adjust me. I do know it, but I won't. Oh, you can. You're invited to. This is Stanford, Sam. But even if you don't want to correct the actual numbers, if that's directionally correct, does the cost, do you think, keep growing with each subsequent model? Yes. And does it keep growing multiplicatively? Probably. I mean, and so the question then becomes how do we, how do you capitalize that?
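A minimal back-of-envelope sketch of what that kind of multiplicative growth implies, taking the rough, unconfirmed figures above at face value. The dollar amounts and the roughly 4x-per-generation multiplier are the interviewer's speculation, not confirmed numbers, and the generations beyond GPT-4 here are purely hypothetical extrapolation:

```python
# Back-of-envelope sketch of multiplicative training-cost growth.
# Figures are the rough, unconfirmed numbers cited in the conversation;
# generations beyond GPT-4 are hypothetical extrapolation, not OpenAI data.

base_cost_usd = 100e6      # reported rough cost to train GPT-3
cost_multiplier = 4        # ~4x per generation, per the figures above
generations = ["GPT-3", "GPT-4", "hypothetical gen +1", "hypothetical gen +2"]

cost = base_cost_usd
for name in generations:
    label = f"${cost / 1e9:.1f}B" if cost >= 1e9 else f"${cost / 1e6:.0f}M"
    print(f"{name}: ~{label}")
    cost *= cost_multiplier
```

If the multiplier really did hold, costs would cross into the billions within a couple of generations, which is the capitalization question the interviewer is raising.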
Well, look, I kind of think that giving people really capable tools and letting them figure out how they're going to use this to build the future is a super good thing to do and is super valuable. And I am super willing to bet on the ingenuity of you all and everybody else in the world to figure out what to do about this. So there is probably some more business-minded person than me at OpenAI somewhere that is worried about how much we're spending, but I kind of don't. Okay. So that doesn't concern you.
So, you know, OpenAI is phenomenal. ChatGPT is phenomenal. Everything else, all the other models are phenomenal. You burned $520 million of cash last year. That doesn't concern you in terms of thinking about the economic model of how do you actually, where's the monetization source going to be?
Well, first of all, that's nice of you to say, but ChatGPT is not phenomenal. Like ChatGPT is like mildly embarrassing at best. GPT-4 is the dumbest model any of you will ever have to use again, by a lot. But you know, it's like important to ship early and often, and we believe in iterative deployment.
Like if we go build AGI in a basement and then, you know, the world is like kind of blissfully walking blindfolded along. I don't think that's like, I don't think that makes us like very good neighbors. So I think it's important, given what we believe is going to happen, to express our view about what we believe is going to happen. But more than that, the way to do it is to put the product in people's hands and let society co-evolve with the technology, let society tell us what it collectively and people individually want from the technology, how to productize this in a way that's going to be useful, where the model works really well, where it doesn't work really well, give our leaders and institutions time to react, give people time to figure out how to integrate this into their lives, to learn how to use the tool.
Some of you all probably just, like, cheat on your homework with it, but some of you all probably do like very amazing, wonderful things with it too. And as each generation goes on, I think that will expand. And that means that we ship imperfect products, but we have a very tight feedback loop and we learn and we get better. And it does kind of suck to ship a product that you're embarrassed about, but it's much better than the alternative.
And in this case in particular, where I think we really owe it to society to deploy iteratively, one thing we've learned is that AI and surprise don't go well together. People don't want to be surprised. People want a gradual roll out and the ability to influence these systems. That's how we're going to do it. There could totally be things in the future that would change where we think iterative deployment isn't such a good strategy. But it does feel like the current best approach that we have. And I think we've gained a lot from doing this and hopefully the larger world has gained something too.
Whether we burn 500 million a year or 5 billion or 50 billion a year, I don't care. I genuinely don't. As long as we can, I think, stay on a trajectory where eventually we create way more value for society than that. And as long as we can figure out a way to pay the bills, we're making AGI. It's going to be expensive. It's totally worth it. Do you have a vision in 2030 of what, if I say you crushed it, Sam, it's 2030, you crushed it. What does the world look like to you?
You know, maybe in some very important ways, not that different. Like we will be back here. There will be a new set of students. We'll be talking about how startups are really important. The technology is really cool. We'll have this new great tool in the world. It would feel amazing if we got to teleport forward six years today and have this thing that was smarter than humans in many subjects and could do these complicated tasks for us and, you know, like we could have these complicated programs written or this research done or this business started. And yet like the sun keeps rising, the people keep having their human dramas. Life goes on. So sort of like super different in the sense that we now have abundant intelligence at our fingertips, and then in some other sense, like not different at all.
And you mentioned artificial general intelligence, AGI. And in a previous interview, you defined that as software that could mimic the competence of a median human for tasks. Yeah. Can you give me, is there a time, if you do a best guess, or a range, when you feel like that's going to happen?
I think we need a more precise definition of AGI for the timing question because at this point, even with like the definition you just gave, which is a reasonable one, there's some. I'm parroting back what you said in an interview. Well, that's good because I'm going to criticize myself. Okay. It's too loose of a definition. There's too much room for misinterpretation in there to I think be really useful or get at what people really want.
Like I kind of think what people want to know when they say like, what's the timeline to AGI is like, when is the world going to be super different? When is the rate of change going to get super high? When is the way the economy works going to be really different? Like when does my life change? And that for a bunch of reasons may be very different than we think. Like, I can totally imagine a world where we build PhD level intelligence in any area and we can make researchers way more productive.
Maybe we can even do some autonomous research. And in some sense, like, that sounds like it should change the world a lot. And I can imagine that we do that and then we can detect no change in global GDP growth for like years afterwards, something like that. Which is very strange to think about. And it was not my original intuition of how this was all going to go. So I don't know how to give a precise timeline of when we get to the milestone people care about.
But we will get to systems that are way more capable than we have right now, one year from now and every year after. And that I think is the important point. So I've given up on trying to give the AGI timeline. But I think every year for the next many, we have dramatically more capable systems every year. I want to ask about the dangers of AGI. And gang, I know there's tons of questions for Sam in a few moments.
I'll be turning it over to you, so start thinking about your questions. A big focus at Stanford right now is ethics. And can we talk about how you perceive the dangers of AGI? And specifically, do you think the biggest danger from AGI is going to come from a cataclysmic event which makes all the papers? Or is it going to be more subtle and pernicious? Sort of like how everybody has ADD right now from using TikTok? Are you more concerned about the subtle dangers or the cataclysmic dangers? Or neither?
I'm more concerned about the subtle dangers because I think we're more likely to overlook those. The cataclysmic dangers a lot of people talk about and a lot of people think about. And I don't want to minimize those. I think they're really serious and a real thing. But I think we at least know to look out for that and spend a lot of effort. The example you gave of everybody getting ADD from TikTok or whatever, I don't think we knew to look out for. And that's a really hard, the unknown unknowns are really hard.
So I'd worry more about those, although I worry about both. And are they unknown unknowns? Are there any that you can name that you're particularly worried about? Well, then it would kind of be an unknown unknown. I am worried, though. So even though I think in the short term things change less than we think, as with other major technologies, in the long term I think they change more than we think.
And I am worried about what rate society can adapt to something so new and how long it'll take us to figure out the new social contract versus how long we get to do it. I'm worried about that. I'm going to open it up soon, but I want to ask you a question about one of the key things that we're now trying to inculcate into the curriculum as things change so rapidly, which is resilience. That's really good. And the cornerstone of resilience is self-awareness. I'm wondering if you feel that you're pretty self-aware of your driving motivations as you are embarking on this journey.
So first of all, I believe resilience can be taught. I believe it has long been one of the most important life skills. And in the future, I think over the next couple of decades, I think resilience and adaptability will be more important than they've been in a very long time. So I think that's really great. On the self-awareness question, I think I'm self-aware, but I think everybody thinks they're self-aware, and whether I am or not is sort of hard to say from the inside.
And can I ask you sort of the questions that we ask in our intro classes on self-awareness? Sure. It's like the Peter Drucker framework. So what do you think your greatest strengths are, Sam? I think I'm not great at many things, but I'm good at a lot of things. And I think breadth has become an underrated thing in the world. Everyone gets hyper-specialized.
So if you're good at a lot of things, you can seek connections across them. I think you can then kind of come up with the ideas that are different than everybody else has or that sort of the experts in one area have. And what are your most dangerous weaknesses? Most dangerous, that's an interesting framework for it. I think I have like a general bias to be too pro-technology just because I'm curious and I want to see where it goes. And I believe that technology is on the whole, a net good thing. But I think that is a worldview that has overall served me and others well and thus gotten like a lot of positive reinforcement and is not always true.
And when it's not been true, it's been like pretty bad for a lot of people. And then the Harvard psychologist David McClelland has this framework that all leaders are driven by one of three primal needs: a need for affiliation, which is a need to be liked, a need for achievement, and a need for power. If you had to rank those, what would be yours? I think at various times in my career, all of those. I think there are these like levels that people go through. At this point, I feel driven by like wanting to do something useful and interesting.
And I think I definitely had like the money and the power and the status phases. And when did you last feel most like yourself? I all, I all. You all are skilled. And one last question: what are you most excited about with GPT-5 that's coming out, with the release that we're all going to see? I don't know yet. I mean, this sounds like a cop-out answer, but I think the most important thing about GPT-5, or whatever we call that, is just that it's going to be smarter. And this sounds like a dodge, but I think that's like among the most remarkable facts in human history, that we can just do something and we can say right now, with a high degree of scientific certainty, GPT-5 is going to be a lot smarter than GPT-4. GPT-6 is going to be a lot smarter than GPT-5. And we are not near the top of this curve, and we kind of know what to do. And this is not like it's going to get better in one area.
This is not like we're going to, you know, it's not that it's always going to get better at this eval or this subject or this modality. It's just going to be smarter in the general sense. And I think the gravity of that statement is still like underrated. Okay, that's great. Sam, guys, Sam is really here for you. He wants to answer your question. So we're going to open it up. Hello. Thank you so much for joining us. I'm a junior here at Stanford. I sort of wanted to talk to you about responsible deployment of AGI. So as you guys continue to inch closer to that, how do you plan to deploy that responsibly at OpenAI to prevent stifling human innovation and continue to spur that? So I'm actually not worried at all about stifling human innovation. And I really deeply believe that people will just surprise us on the upside with better tools.
I think all of history suggests that if you give people more leverage, they do more amazing things. And that's kind of like we all get to benefit from that. That's just kind of great. I am though increasingly worried about how we're going to do this all responsibly. I think as the models get more capable, we have a higher and higher bar. We do a lot of things like red teaming and external audits. And I think those are all really good. But I think as the models get more capable, we'll have to deploy even more iteratively, have an even tighter feedback loop on looking at how they're used and where they work and where they don't work.
And this world that we used to do, where we can release a major model update every couple of years, we probably have to find ways to like increase the granularity on that and deploy more iteratively than we have in the past. And it's not super obvious to us yet how to do that. But I think that'll be key to responsible deployment. And also the way we kind of have all of the stakeholders negotiate what the rules of AI need to be, that's going to get more complex over time too. Thank you. Next question right here. You mentioned before that there's a growing need for larger and larger computers and faster computers. However, many parts of the world don't have the infrastructure to build those data centers or those large computers.
How do you see global innovation being impacted by that? So two parts to that. One, no matter where the computers are built, I think global and equitable access to use the computers for training as well as inference is super important. One of the things that's like very core to our mission is that we make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can't or, for a good reason, don't want to operate. How we think about making training compute more available to the world is going to become increasingly important.
I do think we get to a world where we sort of think about it as a human right to get access to a certain amount of compute and we've got to figure out how to distribute that to people all around the world. There's a second thing though which is I think countries are going to increasingly realize the importance of having their own AI infrastructure and we want to figure out a way and we're now spending a lot of time traveling around the world to build them in the many countries that want to build these and I hope we can play some small role there in helping that happen.
Perfect, thank you. My question was what role do you envision for AI in the future of like space exploration or like colonization? I think space is like not that hospitable for biological life obviously and so if we can send the robots that seems easier. Hey Sam, so my question is for a lot of the founders in the room and I'm going to give you the question and then I'm going to explain why I think it's complicated. So my question is about how you know an idea is non-consensus and the reason I think it's complicated is because it's easy to overthink.
I think today even you yourself say AI is the place to start a company. I think that's pretty consensus. Maybe rightfully so; it's an inflection point. I think it's hard to know if an idea is non-consensus depending on the group that you're talking about. The general public has a different view of tech from the tech community, and even tech elites have a different point of view from the tech community. So I was wondering how you verify that your idea is non-consensus enough to pursue?
I mean, first of all, what you really want is to be right. Being contrarian and wrong is still wrong, and if you predicted like 17 out of the last two recessions, you were maybe contrarian for the two you got right, but you were wrong the other 15 times. So I think it's easy to get too excited about being contrarian, and again, the most important thing is to be right, and the group is usually right. But where the most value is, is when you are contrarian and right, and that doesn't always happen in like sort of a zero-or-one kind of way. Everybody in the room can agree that AI is the right place to start a company.
And if one person in the room figures out the right company to start and then successfully executes on that and everybody else thinks that wasn't the best thing you could do, that's what matters. So it's okay to kind of like go with conventional wisdom when it's right and then find the area where you have some unique insight. In terms of how to do that, I do think surrounding yourself with the right peer group is really important and finding original thinkers is important but there is part of this where you kind of have to do it solo or at least part of it solo or with a few other people who are like you know going to be your co-founders or whatever.
And I think by the time you're too far into the, like, how can I find the right peer group, you're somehow in the wrong framework already. So like learning to trust yourself and your own intuition and your own thought process, which gets much easier over time. No one, no matter what they say, I think is like truly great at this when they're just starting out. Yeah, because like you kind of just haven't built the muscle, and like all of your social pressure and all of like the evolutionary pressure that produced you was against that.
So it's something that like you get better at over time, and don't hold yourself to too high of a standard too early on it. Hi Sam, I'm curious to know what your predictions are for how energy demand will change in the coming decades and how we achieve a future where renewable energy sources are one cent per kilowatt hour. I mean, it will go up for sure. Well, not for sure; you can come up with all these weird, depressing futures where it doesn't go up, but I would like it to go up a lot.
I hope that we hold ourselves to a high enough standard where it does go up. I forget exactly what the world's electrical generating capacity is right now, but let's say it's like 3,000, 4,000 gigawatts, something like that. Even if we add another 100 gigawatts for AI, it doesn't materially change it that much, but it changes it some, and if we get to 1,000 gigawatts for AI someday, that's a material change. But there are a lot of other things that we want to do, and energy does seem to correlate quite a lot with the quality of life we can deliver for people. My guess is that fusion eventually dominates electrical generation on earth. I think it should be the cheapest, most abundant, most reliable, densest source. I could be wrong on that, and it could be solar plus storage. And you know, my guess is it's most likely going to be 80/20, one way or the other, and there will be some cases where one of those is better than the other, but those kind of seem like the two bets for really global-scale one-cent-per-kilowatt-hour energy.
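A quick back-of-envelope check of the percentages behind those figures, assuming the numbers above at face value: the roughly 3,000 to 4,000 gigawatts of world generating capacity is Sam's off-the-cuff guess, and the 100 and 1,000 gigawatt additions are the hypothetical scenarios he names, not forecasts. A minimal sketch:

```python
# Rough share of world generating capacity under the hypothetical AI additions
# mentioned above. The ~3,000-4,000 GW world figure is an off-the-cuff guess
# from the conversation, not an authoritative number.

world_capacity_gw = 3500              # midpoint of the 3,000-4,000 GW guess
for ai_addition_gw in (100, 1000):    # the two scenarios named above
    share = ai_addition_gw / (world_capacity_gw + ai_addition_gw)
    print(f"+{ai_addition_gw} GW for AI -> ~{share:.0%} of expanded capacity")
```

Under these assumptions, 100 gigawatts is on the order of a few percent of total capacity, while 1,000 gigawatts is over a fifth, which is why the latter would be a material change.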
Hi Sam, I have a question. It's about November, what happened last year. So what's the lesson you learned? Because you talk about resilience. So what's the lesson you learned from leaving that company and now coming back, and what made you come back, because Microsoft also gave you an offer. I mean, the best lesson I learned was that we had an incredible team that totally could have run the company without me, and did for a couple of days. And also that the team was super resilient. We knew that some crazy things, and probably more crazy things, will happen to us between here and AGI, as different parts of the world have stronger and stronger emotional reactions and the stakes keep ratcheting up. And you know, I thought that the team would do well under a lot of pressure, but you never really know until you get to run the experiment, and we got to run the experiment, and I learned that the team was super resilient and like ready to kind of run the company. In terms of why I came back, you know, originally when the board called me the next morning and was like, what do you think about coming back, I was like, no, I'm mad. And then I thought about it and I realized just like how much I loved OpenAI, how much I loved the people, the culture we built, the mission, and I kind of like wanted to finish it all together. And emotionally, this is obviously a really sensitive one. It's not, but that was mostly it, okay.
Well, then can we talk about the structure of it? Because there's this Russian doll structure of OpenAI where you have the nonprofit owning the for-profit. You know, when we're trying to teach principled entrepreneurship. We got to the structure gradually. It's not what I would go back and pick if we could do it all over again. But we didn't think we were going to have a product when we started. We were just going to be like an AI research lab. It wasn't even clear. We had no idea about a language model or an API or ChatGPT. So if you're going to start a company, you've got to have like some theory that you're going to sell a product someday. And we didn't think we were going to. We didn't realize we were going to need so much money for compute. We didn't realize we were going to like have this nice business. So what was your intention when you started it? We just wanted to like push AI research forward. We thought that. And I know this gets back to motivations, but that's the pure motivation. There's no motivation around making money or power. I cannot overstate how foreign of a concept that was. I mean, for you personally, not for OpenAI, but you weren't starting. I had already made a lot of money, so it was not like a big deal. I mean, I don't want to like claim some moral purity here. It was just like, that was the stage of my life. That's not a driver. Not a driver. Okay. Because there's this.
So, and the reason why I'm asking is just, you know, when we're teaching about principled entrepreneurship here, you can understand principles inferred from organizational structures. When the United States was set up, the architecture of governance is the Constitution. It's got three branches of government, all these checks and balances. And you can infer certain principles: that, you know, there's a skepticism on centralizing power, that, you know, things will move slowly. It's hard to get things to change, but it'll be very, very stable. Not to parrot Billie Eilish, but if you look at the OpenAI structure and you think, what was that made for? You have like your near $100 billion valuation, and you've got a very, very limited board that's a nonprofit board, which is supposed to look after its fiduciary duties to humanity. Again, it's not what we would have done if we knew then what we know now, but you don't get to like play life in reverse, and you have to just like adapt. There's a mission we really cared about. We thought AI was going to be really important. We thought we had an algorithm that learned. We knew it got better with scale. We didn't know how predictably it got better with scale. And we wanted to push on this. We thought this was like going to be a very important thing in human history. And we didn't get everything right, but we were right on the big stuff, and our mission hasn't changed. And we've adapted the structure.
And we'll adapt it more in the future. But you know, like life is not a problem set. You don't get to like solve everything really nicely all at once. It doesn't work quite like it works in the classroom as you're doing it. And my advice is just like trust yourself to adapt as you go. It'll be a little bit messy, but you can do it. And I just ask this because of the significance of OpenAI. You have a board which is all supposed to be independent financially, so that they're making these decisions as a nonprofit, thinking about the stakeholder that they are a fiduciary of, which isn't the shareholders, it's humanity. Everybody's independent. There's no financial incentive that anybody has that's on the board, including yourself, with OpenAI. Well, Greg was, okay, first of all, I think making money is a good thing. I think capitalism is a good thing. My co-founders on the board have had financial interest, and I've never once seen them not take the gravity of the mission seriously. But you know, we've put a structure in place that we think is a way to get incentives aligned, and I do believe incentives are superpowers. But I'm sure we'll evolve it more over time. And I think that's good, not bad. And with the OpenAI fund, then, you don't get any carry in that and you're not following on investments into those companies.
Okay, thank you. We can keep talking about this. No, no, I know you want to go back to students. I do, too. So we'll keep going to the students. How do you expect that AGI will change geopolitics and the balance of power in the world? Like maybe more than any other technology. I think about that so much, and I have such a hard time saying what it's actually going to do. Or maybe more accurately, I have such a hard time saying what it won't do. We were talking earlier about how it's like not going to, maybe it won't change day-to-day life that much. But the balance of power in the world, it feels like it does change a lot. But I don't have a deep answer of exactly how. Thanks so much. I was wondering, in the deployment of general intelligence and also responsible AI, how much do you think it is necessary that AI systems are somehow capable of recognizing their own insecurities or uncertainties and actually communicating them to the outside world? I always get nervous anthropomorphizing AI too much, because I think it can lead to a bunch of weird oversights. But if we say, like, how much can AI recognize its own flaws, I think that's very important to build. The ability to recognize an error in reasoning and have some sort of introspection ability like that, that seems to me like really important to pursue.
Hey, Sam, thank you for giving us some of your time today and coming to speak. From the outside looking in, we all hear about the culture and togetherness of OpenAI, in addition to the intensity and speed at which you guys work, clearly seen from ChatGPT and all your breakthroughs. And also, when you were temporarily removed from the company by the board, how all of your employees tweeted, OpenAI is nothing without its people. What would you say is the reason behind this? Is it the binding mission to achieve AGI, or something even deeper? What is pushing the culture every day? I think it is the shared mission. I mean, I think people like each other and we feel like we're in the trenches together doing this really hard thing. But I think it really is like a deep sense of purpose and loyalty to the mission. And when you can create that, I think it is like the strongest force for success, at least that I've seen among startups. And you know, we try to like select for that in people we hire, but even people who come in not really believing that AGI is going to be such a big deal, and that getting there is so important, tend to believe it after the first three months or whatever. And so that's like, that's a very powerful cultural force that we have. Thanks.
Currently, there are a lot of concerns about the misuse of AI in the immediate term with issues like global conflicts and the election coming up. What do you think can be done by the industry, governments, and honestly people like us in the immediate term, especially with very strong open source models? Something that I think is important is not to pretend like this technology or any other technology is all good. I believe that AI will be very net good, tremendously net good. But I think like with any other tool, it'll be misused. You can do great things with a hammer and you can like kill people with a hammer. I don't think that absolves us or you all or society from trying to mitigate the bad as much as we can and maximize the good. But I do think it's important to realize that with any sufficiently powerful tool, you do put power in the hands of tool users or you make some decisions that constrain what people in society can do. I think we have a voice in that. I think you all have a voice in that. I think the governments and our elected representatives in democratic processes have the loudest voice in that. But we're not going to get this perfectly right. Like we society are not going to get this perfectly right. And a tight feedback loop I think is the best way to get it closest to right. And the way that that balance gets negotiated of safety versus freedom and autonomy, I think it's like worth studying that with previous technologies and we'll do the best we can here. We society will do the best we can here.
Actually, I've got to cut it. Sorry. I know. I just want to be very sensitive to time. I know the interest far exceeds the time, and the love for Sam. Sam, I know it is your birthday. I don't know if you can indulge us, because I know there's a lot of love for you. So I wonder if we can all just sing happy birthday. No, no, no, no. Please. No. We want to make you very uncomfortable. I'd much rather do one more question. This is less interesting. We can do one more question quickly. Dear Sam, happy birthday to you. Twenty seconds of awkwardness.
Is there a burner question? Somebody's got a real burner, and we only have thirty seconds. So make it short. Hi. I wanted to ask if the prospect of making something smarter than any human could possibly be scares you. It of course does, and I think it would be like really weird and a bad sign if it didn't scare me. Humans have gotten dramatically smarter and more capable over time. You are dramatically more capable than your great-great-grandparents, and there's almost no biological drift over that period. Like sure, you eat a little bit better and got better health care.
Maybe you eat worse. I don't know. But that's not the main reason you're more capable. You are more capable because the infrastructure of society is way smarter and way more capable than any human. And through that, society, the people that came before you, made you the Internet, the iPhone, a huge amount of knowledge available at your fingertips, and you can do things that your predecessors would find absolutely breathtaking. Society is far smarter than you now. Society is an AGI, as far as you can tell. And the way that that happened was not any individual's brain, but the space between all of us, that scaffolding that we build up and contribute to brick by brick, step by step, and then we use to go to far greater heights for the people that come after us.
Things that are smarter than us will contribute to that same scaffolding. You will have, your children will have, tools available that you didn't, and that scaffolding will have gotten built up to greater heights. And that's always a little bit scary. But I think it's like way more good than bad, and people will do better things and solve more problems, and the people of the future will be able to use these new tools and the new scaffolding that these new tools contribute to. If you think about a world that has AI making a bunch of scientific discoveries, what happens to that scientific progress is it just gets added to the scaffolding, and then your kids can do new things with it, or you in 10 years can do new things with it. But the way it's going to feel to people, I think, is not that there is this much smarter entity, because we're much smarter in some sense than our great-great-great-grandparents, or more capable at least, but that any individual person can just do more.