We should address, like, we've had many conversations about OpenAI and how cool their technology is, and the app store and all of that. I don't think I'm going to say a little bit. Okay, well, you hadn't followed the announcement at the time of that podcast. I was pointing that out to say we didn't know what we were talking about.
All right, los amigos, let's put it right to the test. More or Less. Aloha, hello. How are you guys doing? We're just, you know, living out Thanksgiving week on family vacation and dealing with the OpenAI saga. What about you guys? You know, well, first of all, welcome to my surf shack, which is apparently also an excellent podcast studio, given our Starlink connection. So you've traded a pool house for a surf shack. I know. I'm really delighted. I could do a whole pod about the surf shack, but I won't. We were going to be off this week. We had a plan B. We were going to bring you all some archival ancillary content. But something happened around someone named Sam in tech. Not me. It's not this Sam in tech. And so we felt we had to bring you the pod. So hello to everyone listening to this. I don't know exactly when it's going to come out, because we're aiming to do it fast, because the situation is so fluid. But we've got the full squad here to give you all the news on OpenAI that The Information has been breaking, all the hot takes, all the back-channel conversation. What am I missing? Not much. Let's go. Some jokes, maybe a few takes. Bring all the energy from our lack of sleep. I mean, I miss sleep. Me too. Oh, I'm glad. I'm glad you've all been sleeping fine.
I thought I could kick off with a recap for those who, A, don't remember, or, B, are so tired that they don't remember what brought us to this place. And then I think our goal is going to be to kind of talk about the bigger picture here, which I think is why the story is so important, and also has the benefit of hopefully being relevant no matter what outcome happens after we've taped this pod. So let's see. On Friday... we should say, we don't usually say when we're taping, because then it gets stale, so that's like bad podcasting practice, but we're taping this on Tuesday afternoon. So about four days ago, Sam Altman, who we've talked about many times on this pod, the CEO of OpenAI, was fired. And we and the world were notified of this by a letter from OpenAI's board of directors, claiming that it was for not being consistently candid with the board. I don't know about you guys. Pardon me, Sam? You've got to be consistently candid. Has anyone made the consistently candid t-shirts yet? That's really great. You can count on this pod to be consistently candid. We're going to add it to our merch shop. Consistently candid, we think, right now. And so, I don't know, I mean, actually, Brit, what did you guys immediately think when you saw that? Like, what came to mind? I thought, there goes my Thanksgiving vacation. Okay. Just to push on that. I mean, with love and respect. I know why Jessica has been working 24 hours a day. Why has this ruined your Thanksgiving? Because to me, it's just kind of like great gossip, has very little... Oh, Sam, she did not say it ruined her Thanksgiving. She said it made for a very interesting Thanksgiving. But like, what's your angle? Who's calling you? I've never seen the popcorn emoji used as much as in the last, like, four days, because it's like every hour there's something new happening. It's like, I need to watch this episode of Succession, because the next episode is coming fast.
And like, I'm just supposed to be, like, unplugging this week with my kids. And now there's this, like, huge tech saga that we haven't seen the likes of in, like, 30 years unfolding before our eyes. Or in three months, but who's counting? But like, I'm not discounting. I'm not discounting all of the work that all the journalists, and Jessica in particular, have been doing, that I haven't been doing, and the not sleeping. I'm not saying that. I'm just saying, like, I thought I was in for a very different type of Thanksgiving week than what's happening.
I'm with you. The only reason I ask is because, for me personally, this is like my Kardashians on steroids. Like, this is hilarious, right? And, like, I couldn't be happier. I mean, TV is a little slow right now, except Shoresy's back. And other than Shoresy, like, this is it. But again, there's an argument that, you know... I was wondering whether people in your world are calling you, freaking out about, like, their AI future and solutions.
Well, can I tell you something that did happen? Here I am trying to get a peaceful night's sleep Sunday night. And my husband, who happens to be Dave here on this podcast, wakes up somehow and goes, what's up? I know, I can't believe it either. What the hell is happening? I'm like, first of all, you're going to wake the baby, who's literally asleep 10 feet away. Second of all, it's 5 a.m. What are you doing? And it's somebody from our back channel, I won't say who, but they were just, like, freaking out about the next big salvo that had just unfolded. And I'm like, this is crazy.
Okay, so Sam, I think you raise a good point. There is this... let's return to the conversation of why people are fixated on this, because I think that is, in fact, a fascinating piece of this, and maybe even playing into it somehow. But I will... okay, you want me to speed up the timeline, is what I'm taking away from this. So Sam was fired. The board made it seem like corporate malfeasance; usually when a board issues something like that, there's fraud, there's something. Instantly, The Information and others were reporting that Sam had no knowledge of an investigation. The picture was very unclear. And really, there appeared to be a power struggle between OpenAI's chief scientist, co-founder, and board member Ilya Sutskever and Sam over the future direction of the technology.
So the narrative that quickly emerged was that whatever transpired, and I still don't think we know maybe even half of what transpired, it was really related to this tension. As many people have long pointed out, OpenAI began as a nonprofit and became, under Sam Altman, a for-profit in partnership with Microsoft, to push to commercialize the technology, in part to make the money needed, and to have a structure where it could partner to train its models and raise the capital needed to do this. But what we understand now is that, obviously, that has had huge ramifications internally on the politics at OpenAI, in big part, but we don't know if that's the only reason, leading to this situation, which is still very unclear.
Over the weekend, there were talks. Many employees, including the senior leadership and the interim CEO, were very upset by this, and started partnering, colluding, with Sam and OpenAI's president Greg Brockman to get them back. Those talks failed. The same night, Sam and Satya Nadella, the CEO of Microsoft, announced that Sam was going to Microsoft with Greg to start a new artificial intelligence research lab. Microsoft said it would hire anyone from OpenAI who wanted to join. Hours later, 700-ish OpenAI employees said they would resign if the board wasn't pushed out and Sam and Greg weren't reinstated. And negotiations continue.
So we've got really interesting issues about the direction of this technology. We've got personal and perhaps petty power struggles. We have major questions around governance: how a nonprofit board that wasn't even fully staffed could make a move like this and so clearly fail to communicate its reasons. And we've also got, I think, a fair amount of just pure old Silicon Valley employee activism at this point, right? And a really interesting story there. So I think that is the state of play. We can take this in a lot of directions.
But Dave, what are your big picture takeaways as of the taping of this? And if I take a phone call, you'll know it's one of four people and I will be right back. Just leave your audio on, Jess, if that happens.
Well, I would add one bullet point to your list there. I think we also have religious fundamentalism at play here, which I guess I'll put a fine point on; I've been talking about it quite a bit in the back channel. But I think that whatever side you're on on the inside of this struggle, whether it's the Sam side or the Ilya research side, I think we have to point out that this is a struggle around ideology to some extent. And I think, like you said, Jess, there's a bunch of stuff that we don't know. I think there's probably some very big things that we don't know.
But focusing on what we do know: the research side of the AI community, you know, is largely secular, but it almost always talks in a deeply religious tone. They've made, you know, AI, and the AGI technology in particular, into almost a sacred thing; we've almost deified this technology. And I think this is... AGI, you mean? Will you unpack that for people? Like, what is that? The idea here is that OpenAI was set up as a nonprofit organization to keep humanity safe.
In the case of AGI, artificial general intelligence, which has become the kind of moniker that we and everybody have started to use to talk about this technology and its kind of end state, they've really started to worry that when this happens, humanity might be at risk. And they use this religious, fundamentalist kind of positioning around it: that if and when this happens, we should all fear it with, you know, kind of the fear of God, really; that we've created a god, and this god is going to potentially destroy us all. And that's the thing that I'm kind of fascinated by in relation to all of this.
And I was actually talking about this before a lot of these articles came out. There's an article that came out from The Atlantic, which I think actually touches on this, where apparently, internally, they have a saying, "feel the AGI." They chant this. They literally created a wooden figurine representing AGI internally. And I think that the point I would like to... They burned it, right? You've got to complete the story. They burned their deified effigy. And I guess the thing that I would say is that all fundamentalisms... even, you know, in the last few months, we've seen other forms of fundamentalism make grave miscalculations in terms of the actions that they take in the real world.
And fundamentalisms in all of their forms, whether a fundamentalist philosophy becomes a cult, or a fundamentalist religion decides that it's okay to murder people on behalf of its cause, end up making grave miscalculations. And I think that that's something we should be talking about more aggressively in the Valley, and that's something to be discussed here. So I couldn't agree more, Dave, that this is the fundamental thing.
And the thing that is kind of wild to me is, like, Sam is a fundamentalist on this, right? So it's very interesting to me that the story has become Ilya the fundamentalist versus Sam the pragmatist who just wants to build a business. Because the reality is, for Sam Altman, the founder, even at a private conference not even a year ago, the story was not only that the only goal of OpenAI is to protect against AGI, but, interestingly, that the way to do that is to be the first to create AGI, right? So the whole thing is actually... if you think about it, I had to, like, draw a flowchart of their actual mental model, which is: all right, the only goal is to save humanity from AGI. The only way to save humanity from AGI is to create it first, and trust us that we're the good people, because, like, we will have created the AGI, and therefore we can keep it safe from bad AGI. Therefore, we must make infinite money in order to create it. It's this weird flowchart. And somehow, the way the pieces have gotten locked in, the stories that have been told, is that the profit part, the pragmatism, is now associated with Sam, and the fundamentalism is associated with the board. But the reality is, Sam founded the organization. Isn't that relevant?
I think that's an excellent point, Sam. I want to go to you, Brit, because everyone in, like, traditional business America, a.k.a. CNBC, is like, how did this governance breakdown happen? And I think you just answered it. Like, everyone's been drinking from the same, like, religious kind of fire hose, right, including the investors. Now, I don't know that I would put Microsoft in that camp, so maybe they complicate that. But, like, that kind of goes a long way towards explaining how you could basically have four independent people, with their own sort of political power struggles with Sam, like, even overseeing this. And then, how Sam... I mean, I just have so much reporting pointing to the fact that Sam knew he had problems with the board. And then a lot of people were like, well, why didn't he do anything about it? I think, again, if you're living in this construct of a sort of fundamentalist mentality, it's easier to understand why you would overlook things that now we're all wondering why they were overlooked.
And all those other things before... one is, everyone's like, oh, the employees have voted, you know, they're all pro-Sam, or, like, pro-business. They were all hired in, like, the last six months, right? Like, the org is so fresh, and they all came in to do the business side of it. So it's really interesting to think about, like, the fundamentalist founding of OpenAI and control of it, versus who now works there, which is a bunch of ex-Facebook people we all know, who are just like, oh, yeah, this is clearly a good business, right? And so it's, like, really interesting to think about the org structure and what that even means. Billions in ARR. Right, and so that's a piece of it. And then I also think, again, the big mistake with the OpenAI fundamentalism is... they're very clever people, and the idea was to have their cake and eat it too, right? So they set up a nonprofit board, they tell all the investors, no, not only can you not have any board seats, you can't even own a piece of the company, right? Because our mission is so important on a thousand-year basis, and we're going to need more money than you can possibly fathom, and blah, blah, blah. They do give employees equity, though. Is it actual equity? Yeah. Okay, well, they don't even give investors equity, right? Right, but there's a cap. But employees get equity. And in fact, one of the travesties of this... Yeah, capped profit is not equity. Capped profit is proxy equity, right? Like, they understood they had equity that they were in the process of selling in a secondary tender valued at $86 billion.
Interesting. I mean, I don't know the details of that. And it's interesting for me to hear, because my understanding was that the whole point was that the whole thing is basically owned by the nonprofit, right? And they've made a big point of that, right? That it's controlled by the nonprofit. Okay. So fine. We can talk about the differences between ownership and control, and what's an RSU versus whatever. But thematically, the idea was: set it up in a way that the nonprofit controls everything forever, with a non-economically motivated board, right? It's, like, hilarious. They have these disclaimers about, oh, this might be worth a lot of money, but money might mean nothing once we're done, right? Like, it's ridiculous, over-the-top marketing, right? You know, coupled with kind of the realities of trying to build a for-profit business. And I think it's just, like, a lot of very over-inflated egos, right, that have now gotten caught in this very weird situation. And zealots, right? To your point. But Brit, over to you. What are you holding in your head and thinking about?
So a couple of things. One, I think any time you need a flowchart to explain how your board works, you're probably starting from a bad position. The second thing is, I feel like Satya is getting, like, all this credit for, like, saving OpenAI and, like, you know, doing the right thing. And I don't think a lot of people are recognizing that he, like, hit a Hail Mary, and he was screwed. And, like, frankly, I'm surprised that Microsoft and Sequoia and Thrive, like, didn't do more board diligence when they were investing all this money, to figure out that this board is, like, not adequate for the type of company that this has become. And so now it's, oh, God, I'm going to lose $11 billion overnight. I better, like, figure out how to, you know, hit this one out of the park and get Sam to join me, or else I'm screwed. And, like, Microsoft is barely holding on to its stock price right now.
And we'll see what happens next. But I just feel like if that didn't happen, it would be a world of trouble. And I think a lot of, like, fingers would be pointed at Satya, to be like, what the hell were you doing when you invested, in not doing all this, like, diligence around the board itself?
Sam, you're waving. You're just gesticulating. What would you like to say?
Here's reality. $11 billion is completely irrelevant to Microsoft, right? It's a $3 trillion company, right? Like, they can spend $11 billion and think nothing of it. No amount of diligence... it doesn't matter. But guys, we're forgetting, right? There's shareholders, and they're speaking to the market, but there's also customers. And that's a really fascinating part of what's happening.
Because if you're Satya, you shouldn't be worried about Wall Street. You should be worried about Morgan Stanley, JPMorgan, right? These customers who trust you and are doing enterprise-grade, multi-year integrations worth millions, tens of millions, probably more.
And do you want your partner to be the one who got into bed with, like, a hot startup-y thing that has all these issues, and where you didn't even have a board seat or a board observer seat or anything equivalent, right? So to me, this is actually the key point in the miscalculation, right?
What does fundamentalism do? It blinds you to the reality on the ground, most often. And the reality on the ground is exactly what you said. Microsoft has enormously huge customers that are using Azure to deliver, you know, services to their customers.
The other thing that happened over the last year... and you know, it's funny, literally one year ago, I was sitting at a table with both Adam D'Angelo and Greg, and having some interesting conversations. And it's shocking to me how much has changed in one year, right?
Like, this went from being a nonprofit research organization with a lot of really, you know, interesting ideas, definitely fundamentalist, to being the tip of the spear, not just for Microsoft and their customers, but for all of Silicon Valley, the Silicon Valley machine.
Once this happens, you know, once a set of APIs emerges that becomes the focus for the entire Silicon Valley machine, everyone starts conspiring together. And so it's not just these big customers; it's also tens of thousands of startups, right? And with the startups, we're talking billions, tens of billions of dollars of investment that's going on.
And all of those people's lives, and, you know, the futures of their startups, are also at risk, right? And so I think the customer point can't be overstated. And by the way, right before we started the podcast, I checked, and ChatGPT is down right now. Right. And so you've got this thing where there's a lot happening in real time through this pod.
Yeah. No, I agree. And Dave, I just want to unpack... you mentioned Adam and Greg. So Adam D'Angelo: early Facebook employee who went on to found Quora, and an early board member at OpenAI. And, you know, all reporting suggests he obviously was deeply involved in the ouster of Sam, and, throughout all of this, has been sort of leading the board's negotiations with Sam. And so clearly there is a personal relationship there.
But Adam... we just published a profile of him today in The Information. You know, he's a stubborn guy with deep philosophical views as well. And so, yeah.
And actually, before Sam, before you jump in, let me just say one thing about Adam, since you brought him up, Jess. You brought him up. That's right, I brought him up. Yeah.
But you know, I've gotten a lot of calls from people asking for my opinion about this, particularly as it relates to Adam, because Adam and I worked together for many years at Facebook. We specifically worked together on Facebook platform, the set of APIs that enabled social apps, which was, you know, another huge wave of API usage in the Valley.
And I just want to say that Adam is one of the highest-integrity people that I've ever worked with. He's a very principled person. And he's, you know, very deliberate, very rational, a very serious actor. He's a very serious person. And so it does bother me a little bit that there's a lot of, like, speculation on the internet that Adam is doing this for, you know, sort of vindictive or whatever reasons.
Like I just, I don't think that's the case. I think Adam's, you know, he's one of the smartest people I've ever had the luck to work with and maybe even ever met. And so I just want to put that on the record as well.
Yeah, I mean, I would just point out, like, you know, there's no question that you think about organizing your life and organizing your work around things you can trust, right? And, like, platforms that are trustable. To Dave's point, I think that's the right way to think about it, which is: do you want to trust American business and governance and profit motive, which is kind of how this whole thing works, right? Or do you want to trust, kind of, like, fundamentalism, right? And, you know, where are the lines between... you bring up Facebook. Which do you want to trust?
100% business. I mean, I hate to say it, but let's talk about Facebook platform, because Dave and I both have, you know, generational takes on this.
You know, the biggest problem with Facebook platform was that the economics of it never made any sense, right? And even that was kind of an echo of the boom we now see, where there was this idea that the future is social apps, that social apps can make everything better. It's all really interesting, high-minded stuff. The problem was, when you got down to brass tacks, there was this economic trade where everyone's like, oh, I know what Facebook platform is: economically, it's a way for me to get super cheap distribution in News Feed for stuff I want to share. Great, I know what that is. The problem is that makes no sense for Facebook, right? It was an irrational platform from that perspective.
On the data-sharing stuff, there was the idea: oh, we empower the world with these APIs, right? The world will be better, we'll get paid somehow, and it'll all come back to Facebook. But the economic trade of it never made any sense. That's how you had a lot of problems, right? And so what I'd say is, that was an echo boom, which you now see with OpenAI. Which is, look, the reality is, with large language models, there's clearly a business model to it. Like, everyone can look at it. Microsoft, you know, people like me or whatever, will be like, yep, I will pay money for answers, done, right? Like, there's an economic way to think about it. And if everyone was playing that game, which Sam could be playing now, which is very different than what he's said historically, right?
Then this is just a business. I can trust it. People sell it to me. I know the economics of it. It makes sense. I can track it and therefore use it. The problem, honestly, with OpenAI is that there was always this yada, yada, yada, which is: yeah, yeah, we'll sell you these things, it's a business. But, whisper, our only real goal is to make AGI, and we're going to need way more money than anyone's ever seen. And, whisper, we want to make AGI because we're making the world safe. Which, by the way, racing to make the dangerous thing in order to make the world safe is its own interesting mentality. But the basic point is that, because of that, you can't trust them.
And so I think the reality is, even if they sort this out... let's say this gets sorted out. Sam comes back. Sam now has a nonprofit board. Maybe he's replaced the people that weren't loyal to him, right? Because he was able to ramrod it through. Oh, I mean, he won't return without that. In some ways, that's even scarier, right? If you think about it, because then you have a religious organization, and Sam is really a zealot about this stuff, right? With a fully loyal board and no requirement to make profit, right? Like, from their mental model of what they're going after, that is even scarier, right, than anything else. So to me, it's like, we just need a for-profit LLM company, right? And we kind of splinter out... if some people want to do crazy research, that's fine. You know, this experiment in looking the other way clearly didn't work, right? And we just need to move on from it.
Do you not think that he would restructure the company in some way, because of what you're saying right now? First, I'd say there's no easy way to unravel it. Look, maybe you could theoretically figure out a way to, like, sell the business out from underneath the nonprofit, if someone would buy it. But the problem is, either the business has to be valued very cheaply, right, which the board would never do, right, it would make no sense to do, or, like, something else. But it's very hard to imagine the restructuring. The problem I have, honestly, is that Sam is being painted by the media as, like, the business person versus the fundamentalist, but he's, like, as fundamentalist as it gets, right? And so, to me, it's very hard to imagine that being a healthy, stable situation for the future.
So Sam, on your point about business versus fundamentalism, right? There are obviously some technologies that, I don't know, society maybe should think twice about just commercializing, nuclear weapons being one of them, right? Now, you obviously kind of don't think AI is as potent as nuclear weapons, but, like, shouldn't we have some concerns with... oh, to be clear... putting this 100% in the commercial realm? Or do we think regulation will fill the gap?
We've talked about this historically, right? About the idea that we would need national science projects. I actually would have no problem with a Manhattan Project for AI run by the government. I think it's a pretty reasonable thing to be doing.
You know, I think that the OpenAI people, and, like, the fundamentalist part of AI, would say, well, the government isn't doing the Manhattan Project, so we're going to do our own private-foundation version of the Manhattan Project for AI. Like, I think that's actually kind of where a lot of this is. There's three pathways here, right? There's for-profit, nonprofit, and then government-enabled. Yeah, right. Well, and then there's decentralized, right? So, I mean, talk about zealots, right? But a lot of the reaction to this has been that this just shows why you can't have any centralization in the development of this, because then your technology could be subject to, you know, the decisions of a wacky governing board.
But those conversations... that's this sort of open-source argument, which is to say, people were saying, you know, in our back channel and all over the internet, that they're super happy that there are open-source models, largely because they're investing in for-profit corporations that have a dependency on these APIs. And so if these APIs had been the only APIs, and there was no open source available, then all of those for-profit businesses that people have been investing in for the last nine months would be at risk, right? So people are super worried that, you know, their investments are at risk, right? And so they're happy that the open source exists, because they can say, well, it's super great, this is just like Linux. We all depend on Linux for all of our websites, because it runs every web server on the planet. We're super happy that that happened, because none of us have to worry about it ever going away, right? And that's kind of it: the open-source argument is a capitalist argument, right?
Well, and I think, historically, the way you think about it is: if you can't trust the centralized entity, to this point, about anything, then, look, just open it all up, the same way Bitcoin did. Bitcoin is open source because you would never trust the security of Bitcoin to, like, a single actor, right? And the whole beauty of it is it's fully open, right, as a system. And so I think that is rational. But that doesn't negate the problem if you're actually worried about AI being dangerous, right? So that works really well if you're a business and you just need stable platforms. You know, you can always pull back from a service if you need to and reimplement it yourself. But, you know, if you actually believe that this is a Manhattan Project type thing, then... I think the thing we probably all agree on is that the idea of wrapping a for-profit in a nonprofit, where the nonprofit is beholden to no one and somewhat religious, and the for-profit sits inside that, was, like, hilarious, and was clearly intellectually a disaster. But everyone kind of looked the other way, and it became a practical disaster.
Well, but Brit, your point about a restructuring, I think, is kind of the right one, and definitely, like, the direction of our reporting on what that could look like. And that raises a really interesting question of, like, what mechanism is there to implement that? Because... a reverse triangular merger? So what happened here is we had a poorly governing board, and a CEO, Sam Altman, who liked to say, and said many times to me, you know, I serve at the pleasure of the nonprofit board, I have no equity in this thing, they can fire me, yada yada.
Now, though, he's got 700-ish AI researchers and other people, right? Basically software engineers. Is that what they do? Okay, software engineers. Is that a demeaning term, or... you know, they code using ChatGPT. Most of the people that have joined... Okay, let me just finish my point, right? So this is a very uncomfortable situation. This is a very uncomfortable situation for the board, because now they have, like, a messianic leader with ground troops, right? And, you know, again, there's a risk, I think, here, of creating a structure again that is really just shaped to the whims of those movements. Because, I mean, Microsoft certainly has leverage in this situation, but they've gone out and said, we're willing to directly compete with you and take your people and take your leader. So I don't know how much leverage they have.
I mean, look, call me crazy, but purely business-for-the-sake-of-business, it would be great if Microsoft just inherited the people that want to work on this stuff, and they go into a normal business structure, where they're a $100 billion piece of a $3 trillion company. That's the safest outcome from a business perspective, right? That makes it easy. It's actually better for everyone.
The problem? Do you mean for maximizing the value of the technology, or do you mean versus safety? What do you mean when you say safe? Well, I mean thinking about LLMs as a piece of business infrastructure, as cloud 2.0 technology that lots of people want to build on and that is very valuable. Let's put it this way: AI, not AGI, right? So for AI, the best outcome is you let someone super huge take it over.
So yep, anyone who wants to come and make money on this, anyone who wants to come and do the commercialization stuff we were doing at OpenAI, the for-profit: just come work here, and we'll set you up, and we'll put the guardrails in place, and we have a real board and real governance. The problem, again, is that hidden in that population is a lot of people who are actually fundamentalists about this, right? So the problem is, will they actually want to do that? Because something has to break. Either Sam Altman says, you know what, all this stuff about how we're building AGI, and we have to do it now and first for national security reasons, and the for-profit doesn't matter, it's all about serving the nonprofit — he has to move beyond that and just be like, nope, AI is cool technology and we're going to be a business, right? Or, in some ways, that's not a setup you can even trust or make sense of, right? Then you're back to the religious-zeal industry, or a piece of it, right?
Yeah, I mean, as the chairman — I'm the chairman of a nonprofit — I cannot even fathom doing this. I can't fathom having a for-profit entity that is controlled by the nonprofit. I guess, you know, there's the famous saying: no conflict, no interest, right? But it's just an unbelievable conflict of interest.
And so why couldn't you then just do a reverse triangular merger? What is that exactly? No, this is a real thing. You spin up a new LLC that merges with the OpenAI for-profit LLC or Inc. or whatever, and then Microsoft could fund into that, and then you can disassociate the nonprofit from the for-profit. I also love — we just have to reflect on how this is back to the Kardashians, how fascinating this is, right? Because the whole reason this matters, too, is that this technology is so capital intensive that it can only be funded in certain ways. So, you know, everyone thought Sam Altman was going to go out and announce a new startup. No, he goes out and announces that he's joining Microsoft, right? Which startup people are shocked by, because startup people don't really like working for other people. But again, the dynamics here are just so different.
Was anyone really shocked, though? I mean, it makes sense to those that are working... Yeah, there was a thread I was going back and forth on. Someone was talking about how this was the most unreasonable thing for him to do, and how he would need his comp reviewed by the comp committee. And I'm like, what else was he going to do? Well, but that's the problem. That is the problem. You're right. I think this is the whole thing to understand about the Sam Altman saga, and why it's so Kardashian-like and so interesting: all these things you hold up, like, I'm not compensated by the company — that's bad, right? That means his incentives are not aligned with the ecosystem of stability and capitalism that you want to build things on.
I would much prefer... And that prevented a lot of investors from investing in the tender, right? Because they saw that. Now, I want to make sure we cover a couple of angles of this that we haven't covered. Another element of this: Sam Altman doesn't just lead OpenAI. He runs, or is hugely involved in, Helion, a big nuclear project, and Worldcoin, a big crypto project. He is one of the most prolific investors in Silicon Valley. And he was pursuing two side projects in recent months, which we broke in The Information: one is potentially building an AI phone with Jony Ive, and the second is trying to build a TSMC-like American semiconductor foundry, for, like, American dynamism. So this, I think, is an important part of the picture.
Now, a lot of people in Silicon Valley say I overstate these — you know, everyone has side projects, blah, blah, blah — but it's a lot. These are all moonshot side projects. It's not "I made a cool app." Someone the other day was like, I made this really cool AI app in my spare time — that was the cool side project. And Sam's out there building a new TSMC, a new phone with Jony Ive, no big deal. Well, I think this goes back to the whole governance thing, which is, with Microsoft, right? I mean, to the degree that — and I don't know this for sure — these are all legitimate questions for a board to raise. And we just don't know. Maybe they were asked and answered. I don't think so. Which is why I wanted them to go ahead. I'm sure they weren't. It's fascinating — this was almost a Roman, Caesar-like situation. The problem, again, I think, is just that unaligned actors are really hard to deal with, right? Elon is super hard to deal with; he's so rich that he's kind of unaligned — he just does whatever Elon feels like. Sam Altman set up this really creative structure for himself around OpenAI, right? And the funny part about the line "I serve at the pleasure of the board" is that if you get to replace the board, then you serve at your own pleasure, right? And that's really dangerous for any system, especially one that's been so religious and so on the record about the big-structure thing. I just don't think that works for capitalism.
Now, I think it's a free country — if people want to do that, that's fine. But so much of the disappointment and intrigue here is this tension between: is this a legitimate, real company that people trust, part of a business infrastructure of AI that people are excited about building on, and can Sam be the leader of that? Or is this a religious thing that happens to have some interesting business pieces people got way too excited about, in a terrible structure, with Sam as the wrong leader for it, because he actually is way too religious? And I will say, we should address that we've had many conversations about OpenAI and how cool their technology is and the app store and all of that, right? And we had followed the announcements at the time of that podcast; it's not that we didn't know what we were talking about. We do our homework. But I do think, you know — prior to this, obviously, we had the APEC conference, with a lot of important US and China trade conversations, and I was reading something about a dinner Sam Altman showed up to, and someone compared it to Taylor Swift showing up, right? And I think the press has a role to play, and these figures should be questioned and challenged as well, right? When we think about how we cover these people — you know, a while ago we did an article about Sam's investments, and how he said they were slowing down once he was at OpenAI, and they just weren't, right?
And we got so much flak for that piece — just, why are you going after the guy? And we're not going after the guy, but, you know, he says one thing and the picture is different, right?
And so, anyway, we're seeing some of the same forces we saw with FTX in play here too. I think this is the moment. To your point, the difference here is that this technology is so powerful and we're all customers of it.
I think that's what is so interesting about this: the reason, perhaps deservedly so, that the Taylor Swift analogy has happened is the power of the technology, and that it's become a thinking partner for all of us — that's an incredibly powerful thing. And we're all trying to figure out what exactly this thing is, right? And to me, that's the question about how religious this thing is, how dangerous this thing is. What is this, right?
Like, do we have a single portfolio company that would die if OpenAI went away, right? I would have to revert my search on The Information to the way it was two and a half months ago, right? Well, there are just other AI platforms. Yeah. It's just like how everyone was diversifying their bank accounts after the SVB thing; now it's, make sure to diversify your API calls so that no single AI platform can get you in trouble.
And, David, I just think there's another way to look at it, which is that the Valley needs a story, desperately, right? And I'm less bullish. I think LLMs have really cool, very obvious applications, and there'll be a lot of money made. I don't think it's the second coming of anything. I just think it's cool business, cloud 2.0 stuff.
But the Valley desperately needs a narrative, right? Because so much hasn't panned out. It's been a long time since we've had one. And so to me, the real question about this cultdom, and where we're at, is less about "oh, this technology is so powerful" and more that the industry really, really wants a narrative, right? And this was supposed to be it.
And the fact that it's a mess from a business perspective, and the fact that it's quasi-religious, and the fact that we can't tease out whether this is a business operation or a religious-zealotry situation — I think that's the real problem, right? But I think it's also that we're sad.
I think the thing I've been hearing and seeing and feeling is just: gosh, we haven't seen another trillion-dollar tech company in so many years. And this was the one we were all putting our money on — some of us literally. And now what, you know? And what does that mean for the future of Silicon Valley?
And so, you know, I don't know if it'll come back. But there's also a bigger question, which is: how dangerous is this, really? I think that's what this kind of comes down to. The religious disagreement is around something that's all subjective, right? Nobody can define a truly objective danger scenario. There's just a lot of "well, I think this is what's going to happen."
And I think that's why it's... go ahead, Sam. Well, look, the reality, the way I see it: I actually think AI technology is super dangerous. I think it's going to completely fuck with our election cycle. I think it's incredibly hard to tell what's real or not. But we don't need AGI for that. I think that's the difference. AI technology is plenty dangerous. That doesn't mean it isn't plenty cool or plenty valuable. It's also plenty dangerous.
And to me, the religious thing is the yada-yada-yada-ing of the AI danger and saying, no, no, no, AGI. That's the religious thing people are into. And I understand — it's super cool to feel like you're working on the Manhattan Project. It's a great marketing story. There are all these positive things that I think a lot of people get out of convincing themselves to be in the cult, so to speak.
But I don't think it's real. I think it's invented. And the thing that worries me, again, is specifically Sam Altman, because he has, for better or worse, put himself at the absolute center of this, right? There's Sam Altman the businessperson — totally reasonable, get it, fine, whatever. But his rhetoric for years has been Sam Altman the cultist. And I just don't know that that's especially accurate. He's Sam Altman the pragmatist. I think he was cultish because it was pragmatic.
I know, but isn't that a problem? Isn't it a problem that he's willing to be that? Of course that's a problem. And I think Sam Altman wants to be, and believes he is, the next Elon Musk. He was friends with Elon. Elon actually helped start OpenAI as a nonprofit; tensions between them led to Elon leaving, among other things. But, I don't know, he doesn't have enough children to be Elon.
That's true. But, again, I don't know that that's necessary. Of course we want to look up to amazing technologists and all that kind of stuff, but there's something kind of off about it. And he's pragmatic. He's a fabulous investor — if you look at his objective business success to date, that's what it is, right? He's not the guy who scales up your thousand-person team to tackle the big issues. But he's very smart in terms of thinking ten steps ahead. I guess that challenge is coming up.
There's this question to me of: what is this? I keep coming back to it. Sam, you said you heard somebody compare LLMs to no better than a slide rule, right? A line that has been on my mind. Yeah. Another quote I heard, from a dear collaborator, is: one cannot see a star through a microscope, and one cannot see a cell through a telescope, right? And so the lenses through which we're seeing this are both focusing and limiting at the same time.
And so this technology we've created — is it a slide rule? Is it Photoshop for text? Is it glorified autocorrect? What is this thing, and what perspective is it providing us? And will it actually ever become this all-powerful thing? To me, that's what we're all wrapped up in here. And we're also all kind of using it, interacting with it. To me, that's why this story has become so big — not just that it's a narrative we're all grabbing onto.
So I want to bring us to a conclusion with two questions for you. The first: you're on the OpenAI board. Based on what you know so far, do you vote for or against Sam Altman as CEO of OpenAI? Wow. Didn't prep this one. Didn't prep it. Sam Lessin, you're first in the hot seat.
So it's interesting, because I think it actually comes down to how religious they are, right? Like, the short term... No, you, Sam Lessin, are in that seat. What do you do? If I'm in the seat, I don't bring him back. Right. I say: it's the ring of power, and I threw it away. And the reason I do that, honestly, is that any structure he'll be able to work in next, like Microsoft, will be a dramatically more constrained structure, which forces this to be a good business application — and serves my actual primary goal, which is to push out the AGI thing, because Microsoft won't fund it. Microsoft is not going to fund AGI. And I accept the fact that the Chinese or some other yahoos are probably working on it. But I also have enough respect for the talent we put together that, if I really believed the AGI story, which the board clearly does, I've kind of thrown away the ring of power.
Okay, Brit. I think I bring him back, but I create a new set of checks and balances that actually work, to try to meet as many expectations as possible. And I just think that Microsoft thing is going to get really messy, really fast, with the Microsoft shareholders, the public company, all the inter-organizational stuff. I know Satya said it's going to be arm's length; I just don't buy it. And so I don't see us making the progress we need to make if he doesn't come back. Okay, Dave. The one question I have is: what information don't we know? And this goes back to my comments about Adam.
I think there's something we don't know. My intuition is that there's still something we don't know. So I'm going to make my comments based on what we do know. If there's something we don't know — who knows, an AGI system that appeared in the last two weeks and told them something really scary, or some crazy... Well, we do know that, by the way. We know that the religious leaders who use this as a tactic of cultdom have been telling us they saw something that blew their mind — that their mind had only been blown twice before in history. Well, maybe that wasn't a direct quote. What a great way to sell business services. All right, that's my one caveat. But what does Dave vote? What does Dave vote?
My other caveat is: if there's something else dark and crazy that went on with data or something like that — if that happened, then my vote would go the opposite direction. But my vote would be to vote him back in. And I would do that with the creation of — if it's going to stay a nonprofit or whatever — a two-board structure. The board that I run is a two-board structure, based on the Harvard Corporation and the Rockefeller Foundation, which effectively has one board that is the managing board, and that board has to send all potential board nominees up to the other board. I think of it like a Supreme Court board, which plays the role of making sure the managing board can't become captive. There should be some kind of structure like that, which provides a true, quite literal check and balance, so that we can all trust that the governance of this situation — given that it is as important a technology as we're all saying — is protected.
Yes. What about you, Jess? There's some breaking news. I will answer that question, but hold on — I'm reading a combination of breaking news here. Okay. Basically more about bickering within the board. No. I mean, how much news is still unpublished? I know. Shocking. Hold on. I just want to make sure. Actually, I think there's very little new here. Hold on. It's such a cliffhanger. No, I know. Well, guys, this is just my life — messages from people claiming nothing. I do have to say it, Jess: you've really just absolutely knocked it out of the park the last few days. Thanks, Dave. It's putting this board member, Helen Toner, more in the spotlight, as a source of kind of beef with Sam, which is something that we knew. Why hasn't there been more coverage of these other board members? I mean, we're all talking about Sam, and these are people we haven't seen much about.
No, there's a lot. There's a lot out there on who these people are. But I think now, on day whatever of this cycle, the board is finally realizing that they should find a way to get some of their story out there. Totally. I'm not saying they're doing this directly, but clearly they're just starting to realize that there's kind of more to this. So, okay, I want to keep this going. I would not bring Sam Altman back in anything that resembled the current structure. Is there a structure where I would bring him back? Possibly.
But I just think the whole thing is too broken, and you would just have this blow up perpetually, every six months or so — especially since this was all triggered by a research paper. So we have a traditional Morin–Lessin breakdown. No, actually, no — we sort of all said the same thing. Me, you, and Dave — Jess, you said you would bring him back with a new structure. That's what I heard. Yeah, but I'm not talking about anything like the current corporation.
See, I can't even imagine what that situation looks like, but... I think the real solution should be... I know. I think Sam's on his own right now. No, come on. I feel aligned with both the Morins and the Lessins in this moment. I think we're all pretty pragmatic here. One more question. Okay: you are Sam Altman. What do you do? Sam Lessin? Go hang out at Microsoft for a while. You're done with the drama? No, I just think the reality is this got real messy for me, and I do have a bunch of different angles and projects on this. I think it's super easy for me to pull a bunch of resources in, build a bunch of interesting stuff through Microsoft, and go from there.
And most importantly, I think it would be a real sign of dropping the AGI religious act. If he took a year of dropping it, and was like, okay, this is business technology and there are some projects I want to work on, you could kind of get the stink off of it. But to me, the real problem is that I don't think you can have it both ways: be a religious leader with the AGI-cult stuff and the fear of AI and the racing-for-a-nuclear-weapon framing, and also be building a pragmatic business platform. And as you guys all know, I think this is a very good pragmatic business platform, full stop. I could never imagine, if Sam is like Elon, Elon going to work for Microsoft and not doing one of his many projects.
No, no, but I want to know: if you're you — you're Brit Morin — and you're faced with this...
I thought I was sitting in Sam's seat.
What do you want to do?
Oh, well, I don't think I would.
But, okay, okay — I'm a person in Sam's position, as Brit Morin.
I know, I'm not trying to second-guess him.
I'll wrap all this up by saying: if Brit Morin is Sam Altman, and Sam Altman is Elon Musk, I don't think I would want to be working at Microsoft, because I would have so much complexity around what I can and can't do, and my other side projects and the things I invest in would be up against all that mess.
And like, we did some really cool stuff.
And now I'm going to go build, make a shitload of money doing whatever I want to do.
Dave, I'm curious, what's your answer here?
I do.
My answer is actually pretty simple.
I do whatever it takes to keep my team together.
I think that the, you know, look, we're all, we're all founders.
We've all built companies.
It's incredibly hard to put a team together that's firing on all cylinders.
I don't care what anybody says.
That's really, really, really, really hard.
And they built this team that clearly has a lot of loyalty and a lot of incredible creative energy.
I mean, Jess, your point earlier was right that maybe not all startups are fully dependent on this technology, but GPT-4 is the best technology right now.
And they brought together a team that really made this happen.
And it's super hard to build that.
And so I'm doing whatever it takes, one, to keep my team together and to keep it all going — keep the music going.
And then the second thing I'm doing, which I think we've talked about quite a bit, is flowing to where the maximum access to the chips and the capital is, in order to do this.
And I do think that Microsoft is probably, I mean, let's say Apple showed up, you know, and said they'd play ball here.
Maybe I'd consider that.
They've got chip making capabilities.
They can do their own stuff.
They've also got the same war chest, you know. But at the end of the day, with Microsoft, I've already got the OpenAI model embedded in the Azure cloud.
They're already customers using it.
And you've got the commitment of Satya, who's one of the best CEOs and leaders in the world.
And by the way, Microsoft also has a really good reputation and history of allowing subsidiary companies — Mojang and Minecraft, LinkedIn, many of the things they've acquired — to operate as their own going concerns.
And so, I'm not as worried about control issues as a lot of people I've heard talking about this.
I think that the Microsoft way, if it's the way to keep my team together and continue to get access to the stream of capital, then I'm doing it, right?
If I can maintain my Microsoft relationship and the capital flow and the Azure build-out, and I can stay at OpenAI with a better structure, I'm doing that.
And you know what?
I'm also doing, I'm actually, if I'm in this position, I am actually trying to make a board structure that the world can trust more.
Because I, you know, I've been part of platforms like this, I've seen this play out, I've seen how frustrated people get when APIs change and people can't trust what's going on.
And so, I'm actually arguing to create a better board structure that can produce a higher level of trust.
So, if I am in the same position as Sam Altman, I do not go back to OpenAI, because you'd be doing it under a microscope — like, what does your career become, right?
Like, maybe you assume people will kind of forget about this, but what is the set of conditions — in terms of boards and investors and scrutiny and politics and regulation — that you're operating under, right?
So, if you still believe in the mission, which you say you do — but, you know, you also believe in a lot of other things, so it's hard to know how it all stacks up.
But, you know, go do it at Microsoft in the short term, or find a way to work with great people and drive it forward. One of the major lessons in all of this — FTX didn't have a board either; it had, like, an advisory council — is that structure matters, right?
And beyond that, now there's just too much scrutiny. There are going to be hundreds, thousands of reporters and editors who want to catch you in every move, and you've had all your dirty laundry aired — all your internal debates in your organization — and that's just not a great thing.
Like, entrepreneurs like avoid that for the sake of doing something else.
So, that's not necessarily what I think he will do, but that's what I would do.
So, what else? Have we covered it all?
I just, look, I think we covered it all.
Yeah.
Go ahead.
I'd say it's about governance, it's about incentives, right? And the reality is, people like to shit on capitalism and this type of stuff, but rational business platforms, I think, are much easier for people to handle. The reality with technology in general is that you can do something kind of wild: you can have huge impact with a small number of people, and with AI, an even smaller number of people — a tiny group — can produce incredibly impactful, interesting things.
And the thing that happens when you cut down the size is that you're able to get crazy people to coordinate in very non-consensus directions, which can produce great outcomes. That's actually a really powerful tool — it's the free radical of Silicon Valley in a lot of ways. It creates a lot of great outcomes, but it also creates a lot of very dangerous situations. It's the speed and scale, with small numbers of people, that you can align on.
So to me, I think of the FTX collapse, right — it's not just about governance; it's also about fundamental alignment and the speed at which these things grow up. They're getting way too important, with way too few guardrails, too quickly at this point. I think that's just the reality we now live with. And it's kind of a wild future in a lot of ways, but I do think we've got to be really careful about who we trust.
And it comes back to — to me, the big winners in this are actually Microsoft, despite the challenge of it; it's Amazon; it's going to be Google. It's like, oh, we know how to think about and trust these entities as players. And yes, we'll use multiple of them, right? I think it's just going to be: you can't trust these crazy, weird-structure startups from nowhere.
Yeah. Okay, with that, to our listeners and viewers: we wish you a great Thanksgiving. We hope you don't stay glued to X and WhatsApp and whatever else through all of it. And we hope we don't have to do this again. But you should stay glued to theinformation.com — just put that on your Thanksgiving table and refresh it.
Yeah. Why don't you guys have a live-feed update situation? That's what I want from The Information: The Information Live. I mean, there's just a lot, and there has been excellent — The Information Live, that's what I mean. David basically gets The Information live, as soon as it's confirmed, as soon as it's published. And Jess, how many times have you been on CNBC just today? So many times. By the end, look, I think Frank, my good friend in the control room, was like, it's nice to see you again.
I'm still here. And last night, I didn't even know where I was. I looked down at one point and thought, I don't think I can wear this on television. So I paused, they went to commercial break, and I changed my shirt. But I love that, by the way — you had a two-year-old's birthday party somewhere in between all of this this weekend. So, congrats. And a Friendsgiving, and many other things. But it's an important story. There is other news, too. I mean, we just broke a scoop about Tiger's head of private investments standing down — stepping aside — for all of you who know Scott. So we'll keep it going.
Did you put OpenAI in the headline? We'll click on it. Could you make it, like, "Tiger, a non-investor in OpenAI"? There are outlets that would do that. To the Morins, we wish you guys a very happy Thanksgiving, and we'll see you next week for another episode. "Tiger, Tiger, and AI would be the CEO" — that would also work. Okay, that's why you don't write the headlines; you'd never make it in the newsroom. Bye. Bye. Bye. Yeah. Bye. If you enjoyed this show, please leave us a virtual high five by rating it and reviewing it on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts. Find more information about each episode in the show notes, and follow us on social media by searching for @moreorless, @davemorin, @lessin, and @jlessin. And as for me, I'm @brit. See you guys next time.