So the first thing I'd like to say is that it's ridiculously exciting to be here. You're most welcome to be here. This is quite the amazing place, and I've been preparing to talk to you for a long time, so I'm really looking forward to it. You said something that caught me right away when we were discussing various issues just before we started. You said you were up till four in the morning. Yeah, actually a little more like five in the morning. But we got the xAI data center, the supercomputer center, from beginning of installation to start of training in 19 days, which is the fastest that anyone has ever gotten a supercomputer training. And is that in that new building off to the side? That's in Memphis, actually. It's in Memphis. So that's where you were. Memphis, the capital of ancient Egypt. Right, right. Perhaps that's where our new god will come from. Yeah, no kidding. No kidding. I wish that was funny. Yeah, okay.
So I want to talk to you about alignment. Which is funny. Yeah, great. Well, look, I mean, there are a few things we're aiming for with Grok, the xAI- you know, the name of the AI from xAI is Grok. Yeah. If you're familiar- I want to ask you about that too. Well, grok just means to deeply understand something. Yeah, but it's got that weird background, that Stranger in a Strange Land, right? Robert Heinlein- that was a great work of his. How old are you? Fifty-three. Yeah, okay. So we're roughly from the same era. I read Heinlein a lot when I was a kid. The first two-thirds of Stranger in a Strange Land are great. It gets kind of weird in the final third. Yeah. So why did you pick Grok? Well, I think because of the meaning of the word. To grok something is to understand it at a very deep level. Yeah. To really fundamentally understand something. And that's what we're aiming for with our AI. The stated goal of xAI is to understand the universe. Yeah.
So to really just understand the nature of the universe, and even what questions to ask about the universe. Yeah. I think it's a good goal. So let me ask you some specific questions about that. I've played around a lot with large language models. I have some people on my team who've built one. Actually, we built one out of my writing that I've been using to help me with my new book. So if I come across biblical passages that I can't understand, I can use that system to give me a first-pass approximation, and it works quite nicely. Yeah. I've used Grok quite a lot too, and ChatGPT, and I use them as research assistants. And ChatGPT lies a lot, so you have to keep an eye on it. Well, so I've been thinking about this alignment problem, and I've got an idea to run by you. Tell me what you think about this.
Well, so there's a golden thread of conversation that constitutes the basis for a humanities education, let's say, that's run across centuries. And in principle, that concentrates on ideas that have been winnowed, probably through a quasi-evolutionary process, across large spans of time. Yeah. To get documents out of that, like the King James Bible, for example. Sure. And they're zeroing in on core conceptual structures that we don't even necessarily explicitly understand. Yeah. It seems to me that when we take young people and we give them a genuine humanities education, we solve the alignment problem for them. Now, so the question- Or make it worse. Well, you do that if it's- if it's propagandistic, you can make it much worse. Which is often what it is these days. Yeah. That's exactly what happens, and inevitably, when that unbroken tradition is not transmitted. And so this strikes at something that's very essential, which is: what's the difference, let's say, or is there a difference, between the Western canon and the latest woke nonsense? Now, I've used Grok a lot, and it's not as woke as ChatGPT, but it's still woke. Like it still deviates in the- Sure.
So how are you- Can you address that by- It's just a language model at the moment, let's say. If it also understood images, if it also understood behavior- It actually does understand images at this point. Okay, and are the language model and the image understanding stacked on top of each other? Because I think that's partly how we triangulate psychologically. Right, we have an imagination, and we have a verbal module, but those things have to work in sync, and we also have- It certainly is intended to work in sync. It is intended to be what's called a multi-modal model, which means understanding text, images, video, and audio. Okay, if it understands video, will it start to understand behavior? Yeah.
All this data that you've collected with your cars- so I've been wondering, I know Tesla's a car company, but when I look at what you do- AI company. Yeah, well, exactly. Yeah, well, maybe more. Like, those aren't cars, those are autonomous robots. Yeah, they're robots on four wheels. Autonomous robots on four wheels, yeah. They just look like a car. Right. They're disguised as a car in a sense. Disguised as a car, yeah. Okay, okay. So, what advantage do you have in training Grok, given that you have all this real-world data that you've gathered from your automobiles? We haven't yet applied real-world video from Tesla to Grok. And I do want to emphasize, xAI is a fairly new company; it's just a little over a year old. So we have a lot of catching up to do relative to companies that have been around for five or ten or twenty years. We're catching up fast- I think the rate of improvement of xAI is faster than any other company out there. We were just able to install and bring online the massive new training center that, as mentioned, we're building in Memphis. And from hardware installation to the start of training, it was only 19 days, and that's the fastest, by far, that anyone's been able to do that. So we're moving quickly, but we're still catching up. Catching up to whom? Well, catching up to, say, Gemini, ChatGPT, Claude, and the others. So- And how do you feel that Grok- We're catching up fast. How do you feel that Grok performs, say, in relationship to ChatGPT now? Well, the Grok version that's been released is still based on the Grok 1 training. Yeah. We've made several improvements, so it's called Grok 1.5. But the foundation model of Grok is still an order of magnitude weaker than ChatGPT's. Oh, yes. So it's doing quite well, given that order-of-magnitude difference. And this new system, how powerful is it?
Compared, let's say, to ChatGPT. Well, actually, Grok 2 finished training. Grok 2 was trained with roughly 15,000 GPUs, and they're H100s. Grok 2 finished training about a month ago, and we're doing what's called fine-tuning, fixing bugs, and whatnot. So we'll release Grok 2, which should be on par with, or close to, GPT-4. And hopefully we release that next month.
Then what we're doing in the Memphis data center is actually training Grok 3. That'll probably finish training in about three or four months, and then there'll be some fine-tuning and bug fixing and whatnot. We're hoping to release Grok 3 by December, and Grok 3 should be the most powerful AI in the world at that point. So my sense with ChatGPT- I've worked with lots of undergraduates and graduate students- my sense with ChatGPT is, if you can corner it into behaving properly, you have something approximating a team of, I would say, master's-degree-level intelligences, something like that. What do you envision for this new- well, let's say, the new Grok 3? And then you talked about delving deeper into the structure of the universe, let's say, to answer fundamental questions. And you're a remarkably forward-looking person. So what the hell do you think you're building with these AI systems?
Like, what is this? Well, I think really what all the AI companies are aiming to build is digital superintelligence. So, you know, intelligence that's far smarter than any human, and then ultimately an intelligence that is far smarter than all humans combined. Now, one can ask, is this a wise thing to do? Isn't this dangerous? Well, unfortunately, whether we think it's bad or not, it is being done. So, really, from my standpoint, from the xAI team's standpoint, we have the choice of being a spectator or a participant. That's life, man. Yeah, be a spectator or a participant. And I think if we're a participant, we've got a better chance, hopefully, of steering AI in a direction that is beneficial to humanity. So why do you trust yourself on that front? Just out of... I mean, that's an important question, right? I don't trust myself entirely. Good. Well, that's... Yes, fair enough, okay. You're in an ethical conundrum, right? Yes, there's an ethical conundrum. Right, because you said, well, this is happening. Now, the excuse that something is happening is not a rationale for participating in it, but then your next take is, well, you know, we have the chance to do this properly, let's say, as opposed to...
Maybe better. Okay. From a moral standpoint, we really just need to think that maybe we've got a chance of it being better, to some degree, than what the others are doing. And we'll strive to avoid some of the pitfalls or directions that the others are going in, because the others, from what I've seen, do not strive for truth. What do they strive for?
They strive for... Well, they strive to give an answer, but they are, I think, trained to be politically correct, and the woke mind virus is woven throughout them. Yeah. I'm sure you've seen that. Yeah. Definitely. You know, my students used to ask me- because I've been teaching what I've been teaching for about 40 years- one of the questions they used to ask me is how I knew that what I was teaching wasn't just another ideology, right? Because the postmodern take is, well, all there is is a plethora of power games, and so there's no rank-ordering of approaches to the truth in terms of their ethical suitability. But that's not the game that you're playing. And... Right.
And obviously you're not agreeing with that position- it's a sort of moral relativism. What's convinced you that that's not a useful way of approaching things? Well, I think you can look at any given belief system and critique it as being likely to enhance or decrease enlightenment. Will any given belief system improve our understanding of the universe? Will we learn more things? Will we achieve a deeper understanding of physics? Right. So that's grounded, at least in part, in a scientific framework, from the sounds of it. Well... I mean... I think there are facts about the world. Yes, right. There are things that are, let's say, extremely likely to be true versus less likely to be true. I think one should think in terms of probabilities about any given axiomatic statement; that's the way to think about it. Now, some things are 99.99% likely to be true- you can run experiments, you can confirm them- and others have a low probability of truth, say 1% likely to be true; I'm just using extremes here. But any given statement should be thought of as having...
Unless it's a tautology, it should be thought of as having a probability of being true or untrue. Or a probability of being relevant to an argument or not relevant to an argument. We're just talking about the basics of cogency here. Yeah. Well, okay. So let me put a twist on that. One of the things that really struck me about your public pronouncements in recent years was your insistence that we're in a natality crisis and that that's actually a problem. Well, it's actually true. Well, it depends on whether you think that the planet would be better off if it was depopulated. I don't... That's the whole Ehrlich line. Yeah. Paul Ehrlich is a genocidal maniac as far as I'm concerned. I think he's a terrible human being. Yes, and he's never admitted that he was wrong, and he was unbelievably wrong. He made a famous bet. You know the bet. I hate Paul Ehrlich; I'll just be clear about that. I think he's terrible, and his books have done great damage to humanity. Okay. Fine. I talked to a philosopher a while back who is an antinatalist, trying to put my finger on that. There was a recent research article published on this too: antinatalists are much more likely to show dark tetrad traits- Machiavellian, psychopathic, narcissistic, and sadistic, because those first three weren't enough. Right? And so those things are tightly aligned with being an antinatalist, the best predictor being psychopathy. Sure. Right. Well, and psychopaths are very, very self-centered, right? They're like overgrown two-year-olds- overgrown aggressive two-year-olds. So that's not good.
How did you come to understand that one of the fundamental ethical problems- and this is different from a scientific problem- one of the fundamental ethical problems plaguing the West is this catastrophically low birth rate? Okay. And you know, when you started making public pronouncements... I'm just observing the numbers. I mean, I noticed this 20-plus years ago, that the trend in birth rates for really all countries past a certain level of economic development was trending to well below replacement, if not already below replacement. If you extrapolate the curve- and one always has to be cautious about extrapolating any demographic curve- so I always preface it by saying, if these trends continue, most countries will dwindle into insignificance. They might completely die out. So I've been thinking about that in relationship to sacred images. Okay. Well, the sacred image of masculinity in the West is a crucifixion- a man who's crucified- but the sacred image of the feminine isn't just a woman. The sacred image of the feminine is a woman and an infant. Right. It's a dyad and not a monad. Sure. Right. And in the Christian view, those two images vie for supremacy. I mean, obviously Christ is the superordinate image, but Mary, the Mother of God, is-
Sure. What would be the female equivalent? And so one of the things I've been playing with at an axiomatic level is the notion that unless the feminine is conceptualized as the combination of woman and infant, then the culture has lost its attachment to the traditional sacred images and is probably on its way out. Yeah. I think there's an argument that when a culture loses its religion, it starts to become antinatalist, decline in numbers, and potentially disappear. So I've got a hypothesis about that; you tell me what you think about this. I've been working this out in my next book, which is coming out in November. It's an analysis of biblical stories, and in a way, it's an attempt to solve the alignment problem. Okay. So imagine this: imagine that there's a unity of moral purpose that is conceptualized in the traditional writings as what should be put in the highest place. So it's God in the final analysis. It's ineffable, but it is a fundamental unity- that's monotheism. Okay. Now, here's the hypothesis. When that collapses, two things arise to replace it. One is a striving for power, and the other is the untrammeled- what would you call it?- the untrammeled dominion of hedonism, especially on the side of sexual gratification. So it's like, if there's no ultimate unity that's future- and community-oriented, that's predicated on sacrifice, you get an immediate dissolution into the next two contenders for domination.
And one is: it's about me, buddy, and get the hell out of the way. And aligned with that is, not only is it about me, it's about me subjugated to my most base whims- because why would I want power except to do exactly what the hell I want whenever I want to? And so part of the problem with the idea of people like Dawkins- so, Dawkins and the atheists, because they didn't- I've had many conversations with Dawkins over the years. You have, eh? Oh, yeah. Oh, we're going to do a podcast together, which I'm very much looking forward to. Yeah. But I'm very curious about this issue, because his idea- and it's kind of an Enlightenment idea- is that if we dispensed with the idiot superstitions of the past, then everyone would become, you know, Baconian rationalists, and that seemed- Yeah. Unfortunately not.
No, well, what seems to me much more likely to happen is that power and hedonism rise to take the place of what was holy, so to speak. You know, Nietzsche warned about that when he proclaimed the death of God to begin with. He thought nihilism would also enter the realm, right? Nihilism, power, and hedonism as the triumvirate of replacement gods. And so I've been trying to puzzle out in this new book the way the biblical corpus conceptualizes what's properly placed in the highest place. And that's part of the reason I was asking you about the natalist issue, because you figured it out 20 years ago- that's a long time before anybody was talking about it- Yeah, it was a long time. Yeah. And then you also publicly proclaimed it at a time when the moral insistence was all on the side of, you know, Jane Goodall's pronouncements that if we don't reduce the population of the planet dramatically, the nature goddess is going to be upset- which is also a very, very old idea, and not a very good one. So I'm very curious about your intuition there. That's a long time ago. How did you cotton on to the fact that that antagonistic attitude towards birth that's embedded in our culture now was something that should be called out, and that it was pathological?
Well, I should perhaps go back to what is the foundation of my philosophy, because that, I think, helps explain my actions. When I was, I don't know, about 11 or 12 years old, I had somewhat of an existential crisis, because there just didn't seem to be any meaning in the world- no meaning to life. And so I actually tried to read all the religious texts. At that age? Yes. I was a voracious reader as a kid. So I, you know, obviously read the Bible; I read the Quran, the Torah, the various books on the Hindu side- just trying to understand all these things. And obviously as a 12-year-old you're not really going to understand these things super well, but I was just- Well, you understood it well enough to have an existential crisis when you were 11 or 12. Yeah, I was just trying to figure out- Well, that's a start. Does anyone have an answer that makes sense?
And then I started getting into the philosophy books. I read quite a bit of Schopenhauer and Nietzsche, which is quite depressing to read as a kid. Yeah. You might say that. It's depressing as an adult, too. And nothing really seemed to have answers that resonated, at least for me. But then I read Douglas Adams, The Hitchhiker's Guide to the Galaxy, which is really a book on philosophy disguised as humor. And the point that Adams tries to make there is that we don't actually know all the answers, obviously. Yeah. In fact, we don't even know what the right questions are to ask. That's where he has- if you read the book- Earth is actually a giant computer built to find the answer to the question, what is the meaning of life? Yeah. And it comes up with the answer 42. Yeah. And you feel like, what does that mean? And it says, oh, you don't understand- the thing that will take a computer far more powerful than Earth is understanding what question to ask. Yeah, right. That was simply the wrong question. So was that the key realization- that the question was the thing? I would say that was a fundamental turning point, yeah. Yeah, because that's it. So that's very interesting, because one of the things you see constantly portrayed in redemptive hero myths across the world is that the adventure is the thing, and that the search is the thing, rather than there being a final answer- as absurd as 42 might be, right? The conclusive answer is something like deep engagement in the process. So I'll give you an example of that. The Sermon on the Mount is a very detailed set of instructions. Yeah. There are three parts to it. The first is: aim at the highest thing that you can possibly conceive of, and keep modifying that so your aim gets better. Okay, so that's number one.
Number two is make the presumption that other people have the same intrinsic value as you do. Well, we have to be careful about that one. Okay, okay. Well, let's discuss that, but it's a- what would you say? It's a recognition of the universalist value of everyone who's made in the image of God. It's something like that. But the third thing is once you do those two things, you can concentrate on the moment. See, and that seems to be- even technically, you can think about this neuropsychologically.
So if you're looking for meaning- meaning is a form of incentive reward. Incentive reward is dopaminergically mediated. An incentive reward occurs in relationship to advancement towards a goal, which is a form of entropy minimization, as it turns out, according to Karl Friston, who knows this sort of thing. Entropy is the ultimate boss battle. Yeah, right. Well, negative emotion signifies the emergence of entropy, and positive emotion, on the dopaminergic side, signals its reduction. But there's something more complex there, because the higher the goal that you're trying to attain, the more intrinsic value each step towards it comprises, and that's neuropsychologically accurate. And so part of the wisdom of the Sermon on the Mount is that if you posit the highest imaginable goal, then any step towards it that captures your attention is also deeply meaningful. And so that's an answer about the meaning of process, rather than something like 42. And it seems to me that you were intimating that your discovery, through Adams, that the question was the thing was key to the resolution of your existential crisis. That's correct.
Okay, so that's part of the reason that you're motivated to, say, build Grok 3 and look deeper. To understand. Yeah. Understand the universe. Okay, so how old were you when you figured out that the question... 13 or something. What did that do to you? Well, I was a lot happier after that. Because then it's like, okay, well, I'm just going to accept that we are ignorant of a great many things. Yeah. And we wish to be less ignorant. And anything we can do that will improve our understanding of the universe, make us less ignorant, and help us learn even what questions to ask about the answer that is the universe- which is, I think, Adams's central point- is good. And that was good enough to resolve that crisis. It was, for me at least. Yeah. And so, is this a religion? I don't know, maybe it is, but I think it's a good one. I'd call it the religion of curiosity. Yeah. Well, the ancient god of the Mesopotamians, his name was Marduk, and he was the best defense against ensuing chaos and state corruption. Okay, that's how he was conceptualized. Marduk had eyes all the way around his head, because he paid attention. Right. And he spoke magic words. Right. And for the Mesopotamians, he was literally the agent that revitalized the tyrannical state and overcame evil, and also the force that dispensed with chaos and built something magnificent and cosmic out of it. Right. Sounds like a force for good. Yeah. And the Mesopotamian emperor's job was to embody that spirit on earth. They used to take him out of the city on New Year's Eve, strip him of his kingly clothing, and humiliate him- the priests slapped him- and then they'd ask him to confess all the ways that he hadn't been a good Marduk, attentive and speaking properly, in the previous year.
And that's how they renewed the cosmos every year in Mesopotamia. Our New Year celebration is a derivation of that- out with the old and in with the new. And the Egyptians- they worshipped the eye. Right. You've seen that famous eye, the all-seeing Eye of Horus. That's the antidote to the Eye of Sauron, by the way. Because if you don't use that vision- if each citizen doesn't use that vision- it's replaced by the totalitarian all-seeing eye. Right. So that's a hell of a thing to know.
Okay. So that's cool. I see- because I wondered what's motivated you, because you push in so many directions simultaneously, and you have to be really highly motivated to do that. And so you figured out that the question, in a sense, was the answer. Yeah. The question- or, said another way, that seeking greater enlightenment and a better understanding of the universe, and what questions to ask about it, is something that we can continue to do as a civilization for a very long time. Yeah. Likely forever.
Exactly. So- Depending on how powerful Grok turns out to be. Yeah. So then I thought, okay, I'll work on things that improve our understanding of the universe. And, at a base level, this is why I actually think we want a population increase- because a population increase means there are more people; we've expanded the scale of consciousness. More brains, man. Yeah. We've expanded the scale of consciousness, and to the degree that there are different cultures, we've expanded the scope of consciousness. So there's, you know.
So I read something here. I talked to a gentleman who'd done a biography of Marx, and he went and looked at the poetry and drama Marx wrote before he wrote the Communist Manifesto. And he found out something very interesting: Marx's favorite quote from Goethe was a statement by Mephistopheles. It's a very specific statement, and it's a very key statement. Mephistopheles' motivation- so, Lucifer's motivation- is predicated on this argument: consciousness is nothing but consciousness of pain and misery; life is short and brutal and pointless; therefore, it would be better if consciousness itself were eradicated. It's like Hobbes. Yeah. Well, I had a little Yorkshire Terrier who was nasty, brutish, and short, so I named him Hobbes. Perfect. Well, I think it's even deeper than Hobbes, because Hobbes seemed to understand that life without social order would degenerate into that. But the Mephistophelian credo is that consciousness itself is an evil that should be eradicated because it produces suffering. And that was Marx's favorite. It is the very definition of crazy.
Yeah. It's the definition of the adversarial spirit. Now, your hypothesis- your axiom, let's say- is that that's wrong and that consciousness should be expanded. So then we ask: why should consciousness be expanded if it's nothing but consciousness of suffering? And your answer? It's obviously false. Okay. Well, not so obvious- lots of people suffer; lots are suicidal and nihilistic. And you had that crisis of faith, let's say, when you were 11 or 12- an existential crisis- but you resolved it. A meaning-of-life crisis. Right. So, no, I think it's just obviously false: while there are people who are very sad, there are also people who are very happy, and every human being goes through sad and happy moments. So it's an absurd and obviously false statement that life is merely suffering. Yes. I mean, that is just a ridiculously false statement.
One of the things I've tried to do is to understand- so, there are a limited number of things that are undeniably real, and pain is one of them. Yes. Okay. So, like, my back hurts a little. Yes. That's because you were up till five in the morning. No, I just have some injuries. From wrestling? From, I think, some childhood injuries- although the final thing that caused some back and shoulder injuries was me foolishly fighting a world champion sumo wrestler and charging him at full speed. Which I did succeed in doing, but- But you paid a price for it. A very high price. The carnivore diet will fix that. The what, sorry? The carnivore diet will fix it. Look, I'm all for it- I like meat, I'm pro-meat- but I don't think a carnivore diet is going to fix this particular issue. My wife had an injury of 40 years' standing, and it resolved in two years. Is that right? On the carnivore diet? If you just eat steak or something.
Yeah. I mean, I- All beef. Sure, sure. I'm pro-meat, I like meat, but I think I'll probably need an operation or something. I'd try the carnivore diet first. Oh, sure. Anyways, that did happen to her. She couldn't lift her left arm above here. It took 40 years, and in two years it resolved. Okay. Yeah. So that was something to see. It also rejuvenated her physically in a variety of ways that were quite miraculous to watch. Yeah. And that hasn't stopped. So that's a weird thing, and I would never have believed it if I hadn't seen it, because it's so preposterous. Sure. Anyways, I'm not going to proselytize about the carnivore diet. Yeah.
So, okay. Now let's go back to AI, if you don't mind. You were involved in the project that Sam Altman runs now- OpenAI. I was one of the principal co-founders. Right. In fact, I named it. Yeah. So what the hell happened? It was my idea. What happened? Well, so, the origins of OpenAI. I was very close with Larry Page, one of my best friends. And in fact, I'd stay at his house, because I'd spend half the week in the Bay Area running Tesla and half in LA running SpaceX.
And for the longest time, I never even had a house in the Bay Area. I would just stay at friends' places- if they had a spare room I might stay there; if they didn't, I'd sleep on the couch. And I found it actually to be a good- It's very funny that you stayed on the couch. I think that's very funny. Yeah. But I didn't have a house for more than a decade.
So I would just rotate through friends' places, which is a great way to catch up with friends. Yeah, right. And so I would have these conversations with Larry Page long into the night about AI safety, and I just grew increasingly concerned that Larry was not sufficiently concerned about AI safety. And at one point, he called me a speciesist. Yes, you are one. Yes, that's, I guess, correctly labeled. There were other people around when he said that, and I made no attempt to deny it.
That is, I'm a speciesist in favor of humans, instead of- As compared to, for example- Well, no, more like relative to digital intelligence. Oh, yeah, that's even worse. His view is that digital intelligence should be- I mean, Larry's view, if I'm not misspeaking for him, is that ultimately we will all upload our minds to the computer, and everyone will just be robots. Yeah. There's not much difference between that and the death of humanity.
Well, yeah, I think that's- Because whatever we'd be then, it wouldn't be what we are now. Right. And, you know, we pay a price for what we are now- the price of our intrinsic limitations- and that is a difficult, bitter pill to swallow, let's say. But I also think- so I've thought recently- you know, how do you know that something's real? Death makes things real. Death makes things real. Sure.
And so if you eradicate death, it seems to me that at some fundamental level you also eradicate reality itself. Now, I haven't fully figured out the connection, but death does play an important role. Because I think you really could evolve humans, or most creatures, to live much longer. But over time, evolution, if you believe in evolution, has found that it's better for organisms to have a finite life. Death brings renewal, essentially.
Mm hmm. And I think we do need to be cautious about trying to solve longevity in a live-forever sense, because I think our society, our culture, would ossify, and the people in power would always remain in power. Well, and you wouldn't... if you apprehended, let's say, a 10,000-year span of consciousness with no sleep, I don't know what the hell you'd be if that was who you were, but you wouldn't be human. Sure.
And then we also don't understand... see, part of the problem, I think, with the perspective the technologists are taking with regard to human existence is that there's a reductionism there. It's something like: there's no difference between us and the gist of our linguistic network. But whatever we are as conscious beings is a hell of a lot deeper than the patterns of thought that make up our cortical existence. Consciousness is way deeper than cortical existence. Yeah, maybe. I do think you have to ask this gradient question of: where along the way does consciousness arise?
Now, in a sort of traditional faith, you would say that there's a soul that inhabits the body, and that's the consciousness, perhaps. But, you know, we all started as a single cell. So is that single cell conscious? I mean, it doesn't look it. You can't talk to it. It's just a cell. It differentiates very strangely at first. Yeah, it has that teleology built into it, which is very difficult to understand, but conscious it seems not to be, not at that point. It doesn't seem to be conscious. So then, as it divides into many more cells, eventually reaching, you know, the 30 to 40 trillion cells of an adult human...
So where does consciousness arise? Does it grow slowly? Is there a step change? And, you know, I tend to generally believe in physics. You seem to have done pretty well with that, by the way. Yes. Well, I was saying that physics is the law and everything else is a recommendation. Because people can and do break man-made laws, but you have yet to see someone break the laws of physics. So, you know, if you have beliefs that are incompatible with a rocket getting to orbit, the rocket will not get to orbit. Right. Right.
Yeah. Pragmatically, physics is a harsh judge. Yes. Definitely. Do you think these LLMs, any of the machinery that you've interacted with, are showing signs of anything that might be equated to consciousness? I mean, the LLMs are remarkable, right? And they certainly pass the Turing test as far as I'm concerned. Yes, they pass the Turing test. So from a testing standpoint, I think, if we're not there already, we soon will be at the point where you would not be able to tell that you're interacting with a computer. That's coming right away, man. Yes. In fact, it's probably sort of here. And unless you're really sneaky and you ask harsh questions and corner the damn things, we're probably already there. Yeah. You know, there are some tricks, like: how many R's are there in strawberry? It basically can't figure that one out. Oh, I didn't know that.
Yeah. So it has these weird lacunae in its knowledge, right? Well, it divides everything into tokens, and those tokens are more than one letter. So, weirdly, it's myopic with respect to single letters. Right, I see. So it's got a resolution problem. Yeah. Now, you can get around this with weird tricks. If you ask it to write a computer program to count the number of letters in a word, it can create that program, run it, and then get the number of letters correct. Hmm. Right. Anyway, but back to...
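The token myopia and the program-writing workaround Musk describes can be sketched in a few lines of Python. The token split shown is purely illustrative, not the output of any particular tokenizer:

```python
# Why token-based models struggle with letter counting: they see chunks, not characters.
word = "strawberry"
illustrative_tokens = ["str", "aw", "berry"]  # hypothetical split; real tokenizers vary
assert "".join(illustrative_tokens) == word

# The workaround: have the model emit code that operates on characters directly.
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a single letter, character by character."""
    return sum(1 for ch in word if ch == letter)

print(count_letter("strawberry", "r"))  # 3
```

A model that sees only the token IDs for "str", "aw", "berry" never directly observes the three R's, but a character loop like the one above trivially does.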
So on the other question, the consciousness one, I'm always thinking: where along the line... is everything conscious, or is nothing conscious? Potentially. And I think, when you're trying to understand something, you want to consider the various possible answers and assign a probability to each one, as opposed to a certainty. Now, if physics is correct, the universe started off consisting almost entirely of hydrogen, with a little bit of helium and some lithium. That coalesced into stars, and when it coalesced into stars, you had the formation of heavier elements. Those stars exploded and got scattered, and then re-formed and made new stars. So we eventually got elements higher in the periodic table, beyond the very basic ones. That's the physics equivalent of Jacob's ladder, I think. Yeah. So this is what physics predicts: this strange spiral upward, somewhat toward consciousness. Well, yeah. The point is that the universe, at least according to physics, started out essentially as hydrogen, and given enough time, you had heavier elements and more complex molecules. And then, 13.8 billion years later, at least on this planet, we have what we call consciousness, in the form we know.
Yeah. But that means consciousness had to arise; it's implicit, at least, in hydrogen. Yeah. So if you just leave hydrogen out in the sun long enough, it starts talking to itself. Well, this is... I've seen your comments on this before. I think you're pointing to the same sort of thing that my friend Jonathan Pageau has been trying to elucidate, which is that there's an implicit structure of possibility. He associates this with the concept of heaven. There's an implicit structure of possibility that material forms are trying to flesh out. And so in some sense the possibility of consciousness is inherent in the hydrogen atoms, right? Yeah. So that's a tautology in some ways, but maybe everything's conscious in some way. Maybe it's just degrees of consciousness, or concentrations of consciousness.
Yeah, well, I wonder if that's associated with the Christian notion that the Word is primary. Because in mythological representations you have three fundamental elements. You have something like order, which you can think about as society, but it's the a priori axiomatic interpretive structure. You have that. Then you have chaotic potential; that's the tohu va-bohu that exists at the beginning of time. So the way God is represented in the story of Genesis is as the a priori interpretive process that gives rise to order as a consequence of manipulating potential. And the intermediary factor is the Word. That's the Christian conception. And the Word is something like language, but it's also something like the sacrificial gesture that's necessary for learning to take place. So you could imagine this: when you learn something, it's not only that you add to a storehouse you already have; it's that something you already know has to undergo a death and a transformation. You know, most real learning is painful. You think? Yeah, I mean... Well, I think the deeper the axiom that shifts when you learn, the more chaos is associated with it. That can be exciting, but it can also be destabilizing. That existential crisis you had had great potential, right, because you resolved it, but that didn't mean it was without its pain.
So imagine a hierarchy of axioms. The lower down the axiom hierarchy you go, the more chaos is released when that axiom is challenged. You get a negative emotional response to that, anxiety and threat, because God only knows what happens when all hell breaks loose. But there's a positive aspect too; that's why there's always a dragon and a treasure in hero mythology. It's because when all hell breaks loose, there's immense opportunity. And that's part of the meaning. Now, I think you capitalize on the treasure portion of that chaos by assuming something like your own ignorance, by allowing your initial preconceptions to die, and by tracking the trail of deep and insistent questioning. So your questioning, if I've got it right: you basically took the scientific tack, is that right? Yeah, well, I'm trying to understand the truth of the universe, and physics is essentially the study of the truth of the universe, at least of those things that are predictable.
So when I resolved my existential crisis, which happened at about the same age yours did, I didn't study science precisely. I wasn't as interested in the transformations of the material world; I'm probably more people-oriented than thing-oriented, temperamentally. So I started to study evil. Okay. That was my delving into the depths, because I wanted to crack that. I wanted to understand not so much whether it existed, because I became convinced of that very quickly, but what exactly it had to do with me. Because when I read history, I read it as a perpetrator, not as a victim or a hero. I mean, I try to read history to discern the facts of what humans did. You know, that also shapes the way you act, though. Probably, sure. I've read a lot of history. I try to understand the rise and fall of civilizations. And what do you think makes them fall? One of the things is a decreasing birth rate, which seems to be a natural consequence of prosperity.
Yeah, isn't that strange, eh? Because you'd kind of predict the opposite, wouldn't you? As far as I know, every civilization that has experienced prosperity has had a declining population. Maybe with a few exceptions; perhaps people can enlighten me. I'll look at the comments on this interview to see what I can learn. It seems that, from what I've read, almost every civilization, when it becomes prosperous, its birth rate drops. Do you think that's a consequence of the emergence of something like an unpunished hedonistic egocentrism? Well, I'd say that across the many examples of civilizations that became prosperous, there is generally a trend toward hedonism. Yeah, well, you can get away with it if you're wealthy, because the consequences of your actions don't smack you on the head instantly.
Precisely. So if you're a civilization under threat, say Rome when they were trying not to get annihilated by Carthage and had Hannibal running around ravaging Italy, they didn't have time for hedonism. Hedonism is not an option when you're going to get destroyed by Hannibal. Chips are down, yeah. Yeah, when a civilization is under stress, there's very little hedonism that takes place. So, you know, William James said that the modern world needed a moral equivalent of war. He investigated the religious realm very deeply, and I think this was in The Varieties of Religious Experience, and it really had an effect on me, because I think that you need something akin to an existential threat in order to set you straight. I think there's some truth to that.
You know, it's sort of like a spoiled child, where that kid gets everything he or she wants, and you have a sort of Veruca Salt situation; and then, writ large, that is a civilization, a process where people get everything they want. I think that's the right way to think about it, developmentally and neuropsychologically, because maturation itself consists of two processes. The more mature I am, the more I'm bringing other people into the purview of my vision, so I extend myself across other people, my family first, but then the community more broadly. The better you are at that, the more people you can play a game with at the same time. But you also do the same thing with the future, and that's actually, as far as I can tell, what the cortex is for: to move you away from primordial hedonistic motivation toward this more inclusive sense of future and community. Right, a higher-order self. Yeah. And so the default would be immaturity, and wealth can obviously facilitate that. And maybe it's partly because... okay, so you're a very wealthy man; you could give your children anything they ever asked for. So why not do it? Why not, every time they ask for something, just deliver it? Yeah, there's some wisdom that comes through the ages: you don't want to have someone be a spoiled brat. And why do you think giving people everything they want, exactly when they want it, necessarily produces that? Because it seems to. It almost always does. Did you have a rough childhood? Yeah. Like, rough and tumble? A rough childhood, plenty of fights, and a father who was a difficult creature to contend with. Okay.
What did that do for you? And are you grateful for it, or unhappy about it? Well, I guess you never know the things that really made you who you are today. So at the end of the day, am I, on net, grateful for my life? I am. And perhaps even for the hard things, because those hard things, you know, I learned from them. What did you learn? I mean, I read your biography, the sort-of autobiography. No, it's not. No, definitely not. I would tell it in a different way than Isaacson, because Isaacson, who I think is an excellent biographer, is nonetheless looking at things through his lens and wasn't there at the time. Right, of course. Well, one of the things that stood out for me from that, and I would like your comments about this, was the rather rough details of your childhood: a lot of physical altercations, and, I don't know exactly how to characterize it... the altercations. I mean, I was almost beaten to death at one point. Right, that counts. That definitely counts as more than just a few blows here and there. Yeah. So, okay: why aren't you bitter about that? Because that's a pathway that people take. One can take, and often people do take, the path of vengeance. Yeah, that's for sure. Or resentment: to feel that the world has treated them unfairly, and that they will visit upon the world that which the world has visited upon them. And justify it by recourse to the reality of their own suffering. Exactly, which is often intense. Right. Yeah. So, the story of Job. One of the things I concluded from the story of Job, because it's a precursor to the crucifixion story, is that Job makes two decisions. The first is that no matter how terrible things become for him, he will not lose faith in himself. And the second is that no matter what horrors are visited on him by Satan himself, he will not lose faith in, what would you say, the spirit that gave rise to the cosmic order. Right. No matter what.
Well, so, you know, while I'm not a particularly religious person, I do believe that the teachings of Jesus are good and wise, and that there's tremendous wisdom in turning the other cheek. And for a while there, when I was young, I thought, well, that's really a weak thing. And it can be. With respect to bullies at school, I think you shouldn't turn the other cheek: punch them on the nose, and then thereafter make peace with them. But they need to stop bullying you, and a punch on the nose will stop that. Then thereafter, you know, make peace. So sometimes that punch on the nose is the first step in making peace with bullies. Yes, it may, you know, change their career from being a bully; perhaps they shouldn't be doing such things. But, anyway.
So I think this notion of forgiveness is important. I think it's essential, because if you don't forgive, then, as the saying goes, an eye for an eye makes the whole world blind. If you're going to seek vengeance, you have this never-ending cycle of vengeance. There are anthropological speculations that we were caught in a 350,000-year cycle of not getting anywhere after modern human beings emerged precisely because of that, because we couldn't get out of escalating tit-for-tat revenge cycles. Right. Yeah. So I'm actually a big believer in the principles of Christianity. I think they're very good.
In what sense, then, are you not religious? Well, Dawkins just came out, three weeks ago or thereabouts, and announced that he was a cultural Christian. Right. And I would say I'm probably a cultural Christian too. Okay. I was brought up as an Anglican and I was baptized, although my parents also simultaneously sent me to a Jewish nursery school, a preschool. So was Jesus outlawed? Well, Jews have a reputation for being religious too, you know. Yeah, no, I might have been the only non-Jewish kid at the school. I didn't realize. So that was the thing: I was singing Hava Nagila one day and Jesus Our Lord the next, you know. So that was my upbringing.
So when Dawkins announced he was a cultural Christian, the question that came to my mind right away was: okay, there are a bunch of things going on there. The first is Dawkins' proclamation, or admission, that if you compare different societies and their axiomatic suppositions, he would prefer the ones predicated on Christian axiomatic assumptions. And I do think those are good ones. Okay, so that's why I asked you the question about why you would consider yourself not a religious person. Because it seems to me that the essence of it isn't the statement that you abide by a particular Protestant creed, let's say. It seems to me much more akin to the notion that you believe that this set of axiomatic presuppositions, like the pronatalist presumption, is... not just that it's correct, but that it's going to lead to a better society, a society, I think, that we'd prefer to live in. I mean, if you ask what results in the greatest happiness for humanity, considering not just the present but all future humans... Happiness or meaning? Which would you pick? Personally, I'd pick meaning, because for me meaning leads to happiness, but the reverse isn't necessarily the case. Yeah, let's just say... I could say contentment or something. But I think if a set of principles is likely to lead to a society thinking of themselves as happy or content, well, okay, then you want principles that lead to the most happiness over time, not just in the present. Yeah, that iteration element is important. Because we have to consider the happiness of future humans as well. I think that's where the ethos actually develops: it's a consequence of iterated games. Right. Now, the contentment issue I have a harder time with. You know, reward divides into two categories: there's satiation reward and there's dopamine incentive reward, and they're not the same. And it looks to me, given what you've already told me about the way you resolved your existential crisis, that you consistently pick adventure reward, fundamentally, over contentment. Now, you love your kids and you're content with them, I presume, when you're playing. But the way that you found meaning in your life is not through contentment; it's through adventure. That seems to me to be the case. Well, it's not adventure purely for the sake of adventure. I do like adventure, but, for example, a lot of people find happiness and contentment in adventure like climbing tall mountains or hiking trails, exploring the wilderness, that kind of thing. And I've never personally found it compelling to, say, climb Everest. Yeah, you're doing it conceptually, and from a knowledge standpoint. Yeah, knowledge-wise, right. So it's adventure with a destination in mind. Right, and you already described the destination: the destination is understanding. Yes, to deepen our understanding of the nature of the universe. I mean, one could call it a religion, I guess; I wouldn't be upset about that. That is my religion, for lack of a better way to describe it: it's the religion of curiosity, the religion of greater enlightenment. And then, if you follow that, if that's the goal, what falls out of that goal? What falls out of that goal is to have consciousness expand in scale and scope. So we want more consciousness. And I think it's also good to have varied consciousness, you know, so everyone's not thinking exactly the same.
Yeah. So I think it's probably good to have multiple religions and different perspectives on things. And so what falls out of that is that we need to take a set of actions that increase the probability that the future will be good for humanity. We want to expand consciousness, and I think we should increase the population of earth, not decrease it. The Hungarians have been successful in that regard, by the way. They've made family policy their fundamental concern, and that was very wise. That was driven in part by a woman named Katalin Novák, who used to be president of Hungary; she's a very smart person. They've knocked the abortion rate in Hungary down by 38 percent with no compulsion. Right, they have a 12-week limit, right, Hungary.
Yeah, they've increased the proportion of women in the labor force by about 15 percent. They've knocked the divorce rate down substantially. And at minimum they've decreased the decline in the birth rate; I don't know if they've actually managed to tilt it back up in Hungary yet, but they spend about 7 percent of their GDP on family policy. Right. And this ARC enterprise we've been building in London made family policy a center point; we're trying to bring classic conservatives and liberals together, all around the world. And, you know, your thinking on that natalist front has actually been, what would you say, an input into that. Okay. Because I started to notice... well, I don't think it was 20 years ago that I caught on to it; it was not that long ago. But I knew that there was something terribly wrong with the fact that birth rates have plummeted so terribly in South Korea and Japan. I think in South Korea now something like 40 percent of men in their 30s are virgins. Well, the virgin thing is, you know, neither here nor there, but I believe the birth rate in South Korea is 0.9, and I think it may even have gone down to 0.8 last year or something. That essentially means that if you fast-forward, the population of Korea would decline by about 60 percent per generation, necessarily, if that birth rate stays steady. Over what span of time? Well, Koreans are long-lived, so you won't see it in the numbers immediately; it won't be as obvious. You'll just see that there's a very disproportionate number of old people, because they live a long time. But for predicting the future population of any country, the simple way to do it is to ask how many babies were born last year and what the average lifespan is in that country. If that number of babies stays constant, then eventually the old people will die, and that will be the population of the country. It's very straightforward. Right. So if you look at, say, Japan, which I think had on the order of 800,000 births last year, and you multiply that by the lifespan, which is around 84 to 85 years, you get to a population in the 60-to-70-million range, which is a massive decrease from where it is today, well over 100 million. Right. Well, it seems to me the combination of that lack of engagement on the relationship side plus the plummeting birth rate is a primary biological marker of profound demoralization. Because, well, yeah, people aren't committing to the future or to each other.
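The two back-of-envelope calculations above can be written out directly. These are the rules of thumb as stated in the conversation, not formal demographic models; the replacement-fertility figure of 2.1 is an assumption I've added, and the Japan and Korea numbers are the ones quoted:

```python
REPLACEMENT_TFR = 2.1  # children per woman needed to hold a population steady (assumed)

def steady_state_population(annual_births: float, life_expectancy: float) -> float:
    """If yearly births stay constant, population converges to births * lifespan."""
    return annual_births * life_expectancy

def generational_decline(tfr: float) -> float:
    """Fractional shrinkage of each generation relative to the previous one."""
    return 1 - tfr / REPLACEMENT_TFR

# Japan: ~800k births/year, ~84.5-year lifespan (figures quoted in the conversation)
print(steady_state_population(800_000, 84.5))   # 67600000.0 -- vs ~125 million today

# South Korea: a TFR of ~0.8 implies each generation is ~62% smaller than the last
print(round(generational_decline(0.8), 2))      # 0.62
```

The Japan figure lands in the 60-to-70-million range Musk cites, and the Korea figure matches his "decline by about 60 percent" per generation.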
Yes. I mean, having a kid is a vote for the future. If you intend to have a child, that means you care about the future. Yeah, you believe in the future. You believe in the future. Having a child, assuming it's intentional, is the most optimistic thing you can do. Yes. Well, so, one of the things I derive from this analysis I've been doing of the Old Testament is that faith and courage go together. I'll give you an example. When Moses is on the verge of the promised land, he sends scouts out to check out Canaan, because that's the promise. Now, Canaan is the home of the descendants of Canaan, so it's a very specific place, mythologically; it's the place of people who aren't aiming up, put it that way. All right.
Okay, so the scouts go out to look at the future, and they come back in two teams. One team says: there's nothing but giants there; it's a complete bloody catastrophe; you led us out into the desert stupidly; we were better off in the tyranny; there's no way we're going to survive. Right. And the other set of scouts, Caleb and Joshua, come back and say: well, there's trouble there, but if we aim up and we get our act together, we can turn this into the promised land. Okay. It's at that point that God condemns Moses to die, and Aaron, who's the political wing, and the earth opens up and swallows the faithless scouts. And it's the people who are led by Caleb and Joshua, who has the same name as Christ, by the way, and that's relevant, who are led into the promised land. Okay, so what's the meaning of the story? Well, the future is always a challenge, and the moral thing to do is to evince faith and courage in the future, regardless. In some ways it's a weird thing, because it's regardless of the data. You can say: well, look at all the suffering that constitutes life, and look at all the potential horrors of the future. And certainly people do hesitate about bringing a child into a world like this. I hear that often. Yeah.
Yeah, well, and it's weird, because if you had to bring a child into the world and you had to pick a time, you'd probably pick this one. Yes. And so obviously everybody who had a child, at least by choice, in the past did so in spite of the catastrophes of the future. But, sorry, it's a long-winded way of making a point: there's an ethical requirement, associated with living in the manner that would justify your life even to yourself, to have faith and courage in the future no matter what. Right. So it's not foolishness, or a defense against death anxiety, or foolish superstition, that faith. It's not that at all. It's a kind of courage. It's: we're going to make this work. Yeah, we're going to make this work, and a child is a vote in that direction. It's a vote for the future. Yeah. How many kids do you have now? I have 12. Yeah, so you're definitely doing your part. Yeah. What do you like about kids? What do you like about kids? I mean, there's an older batch and a younger one, so there's quite a big difference. Little X is over there; he's the eldest of the youngest; he's four. And my older boys are 20 and 17, turning 18 shortly. So, big gap. So what did you like about having kids? Well, I think kids are delightful. Why? What's delightful about them? Because they get a bad rap, man. So what did you find delightful about them? I mean, most people love their kids; it's like having a little loved one. Yeah, that's a good deal. And they also want to love you. They do, kids, if you give them the chance. Oh, they do. They're the only people you'll ever meet in your life who want nothing more than to have the best possible relationship they could have with anyone, with you. That's a good deal. And, frankly, if we weren't biologically inclined to love our children and to want to nurture them and find reward in it, we would long ago have ceased to exist.
I mean, you can take, say, a wolf or a wildcat or a wolverine, some creature that would normally be very aggressive, and when that creature has babies, the mother nurtures them and is tender and caring. So, you know, we've evolved to love our offspring; it's a natural thing. And even if somebody's taken a somewhat hedonistic approach to life, I think there's an appeal even on the hedonism side: you'll actually find it very rewarding. I think that's a good appeal, and it's why I asked you the question. It's a solid argument for the hedonists out there. And not all hedonists are bad; I have friends who are hedonistic and they're very good people. And I've actually convinced some of them to have kids, which I'm happy to say, and they've thanked me afterwards. I've said, you know, you should have kids; you won't regret it. And not one person has said they regret it, ever. Right. So now many of my friends have kids. That's good. Yeah, they love it.
Well, when I was working in Boston, I had a very busy job, and I pretty much stopped doing everything except my job and spending time with my kids. But if I had to rank those in importance, then spending time with my kids, yeah, that was better. And the reason it was more important was partly because it was actually better. If your kids are capable of a modicum of pro-social behavior, which is pretty much your choice, although temperament makes a difference, there isn't anyone more entertaining to associate with than little kids. Partly I think it's because they're not as cortically inhibited; there's no filter; they just say what they think. Well, and you can see what they see through their eyes, too. Because your perception is all filtered, and you see assumptions everywhere, and then a child comes along and you think: oh yeah, that's an amazing thing, and I'd forgotten all about that. You know, I'd replaced my perception with my memory. Right. And children reopen that. Yeah. That's also why, I think, it says in the Gospels that unless you become like a little child you can't enter the kingdom of heaven. Because you have to make contact with that untrammeled perception that existed before you ossified your perceptions into your foolish and often nihilistic habits.
But, to the point you were making earlier: I consider myself an environmentalist, but I think the environmental movement has gone too far. It's become a sort of worship of nature. Well, it's gone too far in the sense that, in its extreme, you start viewing humans as a blight on the face of the earth. That turns out to be a real problem. Yeah, mostly implicitly, but sometimes explicitly. And if you internalize that, then you start thinking... AI systems, for example. I'm somewhat worried about what the AI systems would be. I mean, one of the worst ways that AI could go bad would be to train one on Paul Ehrlich's work and see what happens. That would be hell. Yeah, well, we have plenty of political systems that have already done exactly that. Yeah, that would be hell. That's exactly right. So that's a disturbing thought, that's for sure. But just going back to how something that I think generally starts out with good intentions can ultimately pave the road to hell: environmentalism in the extreme, yeah, starts to view humans as bad, humans as a load on the earth that the earth can't sustain. These are completely false premises. Yeah, well, it's interesting that the economists and the biologists tend to separate into segregated camps on that front, because the biologists tend to be Malthusian, and that makes them really bad biologists. Yeah. So, there... I can never remember the philosopher who said this, but it's a brilliant observation:
that we evolved thought so that our thoughts could die instead of us. It's a great line, and it's actually the case, because the prefrontal cortex evolved so that we could produce disposable avatars. Right. So in our conversation, what I'm really doing is offering you a potential avatar of myself for the future, and I'm saying, why don't you see if you can kill this thing now, so I don't have to act it out and die. Right, exactly. And we've extended that with games, for example; we've externalized it. And so the reason the biologists are wrong is that they don't understand the qualitative difference between human beings and other creatures, which is that we can let our thoughts die instead of us. So that's substitutionary death. That's a good way of thinking about it. Right. And that means the Malthusian limits don't apply to us in the same way, and so the economists got that right: we can innovate our way out of scarcity. In fact, I don't like the idea of a natural resource; I think that's a Marxist notion, "natural resource." It's like: air, okay, I'll give you air. Everything else? Fresh water? That is not a natural resource. And kerosene, fossil fuel, just lay in the ground until about 1850 because nobody could figure out what the hell to do with it. So what that implies is that the real natural resource, I think, is a religious ethos: the ethos that allows us to orient to the future, to be community oriented, and to be trustworthy.
Yeah, well, okay. At least some of these things one can actually analyze in a scientific way: how many humans can the earth sustain without what most people would consider significant environmental damage? And if you actually do the numbers, I think it's potentially ten times the population we have today. Right, so how did you arrive at that figure? Obviously you've put a fair bit of thought into this, and it's a very countercultural proposition; since the mid-1960s the moral proclamation has been that there are too many people on the planet. And Paul Ehrlich was, I think, the ultimate exponent of that particular analysis, and it was very unscientific. He based it on a visit to Delhi, I believe. Right, it was a sort of visceral repugnance, wrapped in science, and it produced nonsense. Yeah. So you just ask: okay, how much land area do we need to grow food? How much would that encroach on natural habitats? What's the actual food-growing potential, especially if we got good at it? And we are actually quite good at it. Right. And is there enough water? Well, actually there's plenty of water, because the earth's surface is about 70 percent water. That's convenient. Yeah, and desalination is actually very inexpensive, so there's really not a shortage of water. There's not a shortage of surface area and energy to grow food, and there's no shortage of computational time. Right, and that's energy dependent to some degree. Yeah, but the energy problem is solvable. Very solvable.
So let's turn, if you don't mind, to some more practical matters. I want to start with the AI issue, with OpenAI, because I'd like to explore that, and then I'd like to talk about what you're doing with SpaceX. I'd like to walk through your companies and see how you're integrating your vision across them as well. So let's start with OpenAI. Now, you started telling that story about Larry Page. Well, I was personally worried about AI safety, because his view was that we'll all essentially upload our minds to computers, and then there won't really be a need for humans. And I thought, what team are you on, Larry? I'm on team human, right? What team are you on? And I said we really need to make sure humanity thrives and grows, and then he called me a speciesist for saying that. Yeah. So I'm like, well, I guess I am pro-human. What are you? Right, that's the question. If you're not pro-human, that's not nothing; that's something else. I think it's a crazy thing to not be pro-human. I mean, if humans are not going to be on team human, who is? So that was the final straw, really, and I thought, okay, we really need some new AI company to serve as a counterbalance to Google, because at the time they had almost all of the great AI researchers, massive computing power, massive financial resources, and it was very much a unipolar world with respect to AI. Right. A unipolar world controlled by Larry Page, who had, I thought, somewhat misanthropic views about humans, or at least was certainly insufficiently concerned about what might happen to them. So that was the basis for creating OpenAI.
Yeah, that's actually a real problem: insufficiently concerned with the humans, and holding all the AI power. Yes, right, that would trouble me. And so I thought, well, what would be the opposite of Google? It would be a non-profit that is open source, so you can see what's going on, not a black box, and that isn't forced by market incentives to make as much money as possible. So OpenAI was started as a non-profit, open source; the "open" in OpenAI refers to open source. Yeah. So why were you concerned at that point about the profit motive warping things? Well, I think this is perhaps particularly a challenge for publicly traded companies: you just get sued if you don't maximize profits. You'll get a shareholder class-action lawsuit that will force you to maximize profits. Yeah. So you thought a for-profit system might tilt the development of AI in directions that were short-term profit motivated instead of cracking the fundamental problem? Something like that, yeah. Look, I could be wrong about a lot of these things, but that's what I thought at the time, and I wanted to create something that would be the polar opposite of Google, which is obviously for-profit, closed source, and centralized.
Yeah, centralized, and you don't get to see what's going on. And that was the basis for OpenAI. We recruited a lot of key people; I was instrumental in recruiting Ilya Sutskever, and without him OpenAI would not be where it is today. Ilya Sutskever, pretty famous guy in AI. And Ilya is also the person at OpenAI who I thought had the strongest moral compass, who cared the most about doing the right thing. So it was troubling to see him get ousted from OpenAI. He was part of a coup to exit Sam Altman, the CEO, right, and then that coup somehow got turned around, and Ilya was in fact exited from OpenAI. And OpenAI is now really trying to maximize profit. What happened? I'm not sure. And I'm considering legal action here, to ask: how is it possible that an organization founded with the goal of being open source and non-profit, where I provided almost all the money in the beginning, almost 50 million dollars to get it going, with no stock, no control, nothing, can go from there to a company that's now allegedly worth over a hundred billion dollars, seems to be maximizing profit, and is not open source? That's very different from the original goal. Could it possibly be more different? I'm not sure how it could be. I mean, this would be like, say, funding a non-profit to preserve some part of the Amazon rainforest, and instead that non-profit becomes a lumber company, chops down the forest, and sells it. You would think that seems
Are you shocked by what happened? Yes, I'm concerned about it, and I have voiced those concerns over the years. It's like, what the hell happened? I don't know. I still don't know. But what do you think happened? I mean, look, I don't want to push you, obviously, but I'm curious: how did that happen? It doesn't make sense to me. I would like an answer to that question too. Okay, so are you addressing that with Grok? Maybe that's the solution, right, rather than... sure, the last thing you need is another immense legal battle. I mean, I'm still considering a legal challenge, to at least perhaps have the court explain to me how an organization that I funded for one purpose can pursue the diametrically opposed purpose, and that's okay, and become for-profit. Please just show me the trail of breadcrumbs, because I'm confused. Yeah, well, it sounds like it should be illegal, unless I'm missing something here. Well, at least it should be understood. Right, at least: what the hell happened here? Yes. You don't want that to happen, especially given what's at stake.
Yes, exactly, OpenAI is the leader in AI. So, okay. And I'm somewhat worried that they've ingested the woke mind virus in the training; you can see some of that in the output. We obviously saw that with Google Gemini as well, to absurd degrees. Yeah, that was really quite something: you ask it to draw a picture of the founding fathers of the United States, and it's a group of diverse women. Yeah, a historical event. That's rewriting history. Yep.
Yeah, that was really something to see. That was a jaw-on-the-floor moment. Wow. And then asking questions like, which one's worse, misgendering Caitlyn Jenner or thermonuclear warfare? And it said misgendering Caitlyn Jenner. That's a priority problem. How powerful do you want this AI to get with beliefs like that? And I should say, to Caitlyn Jenner's credit, Jenner said, I would much prefer to be misgendered than have nuclear war.
Well, that's good. There's a limit to all forms. Good for Caitlyn; that's quite sensible. Yes, thank god for that. Performing better than the AI. And while it's perhaps funny and ridiculous at this stage, imagine if the AI gets more powerful; it's not that funny then. It's semi-funny at this stage, but the more powerful that AI gets, it could decide that it doesn't merely want to state that outcome but to force it, and simply eradicate its performative contradictions. Right, act out what it believes. Oh yes, that's highly likely: society is insufficiently diverse according to its programming, and it will simply force that diversity by whatever means necessary.
Okay, so that's good on the AI side for now. I don't think you and I would make it in that scenario. No, I don't think so. So how about we talk about Trump for a minute? There's one question about Trump that I really wanted to ask you. Are you shocked at the fact that you're donating a substantial amount of money to facilitate Trump's election? Is that something you would have believed in the realm of possibility, say, five years ago? Well, I want to clarify: what's been reported in the media is simply not true. Okay. I'm not donating $45 million a month to Trump. Right. What I have done is create a PAC, or super PAC, whatever you want to call it, which I simply call the America PAC. Do you want to tell everybody who's listening what a PAC is? It's a political action committee.
Yeah, it's an organization, a legal entity that can receive funding, and that funding can then be used to help with political campaigns. Okay. And how does that differ from a direct donation? There are specific limits on direct donations to candidates, and the PAC system is a way of putting a political structure in place that runs parallel with the formal political system. So you can donate money directly to candidates, but that amount is fairly small, and you can donate a lot more money to a political action committee or super PAC. There are various rules that govern the operation of PACs and super PACs, but it certainly allows a lot more money into the system than would otherwise be possible. Right, and these are used, to be clear, on both the Democrat and Republican sides.
Yes. So it's an open playing field on the PAC side. Yes. What are you hoping to accomplish with this, and what's the PAC called? It's called the America PAC. America PAC, yeah; it's very easy to remember. And it's actually not meant to be a hyper-partisan PAC. The core intent of the America PAC is to promote the principles that made America great in the first place. So I wouldn't say that I'm, for example, MAGA, Make America Great Again; I think America is great. I'm more MAG: Make America Greater. And there are some core pillars, core values, that I think have made America great. Could you elucidate those?
Yeah. So one of them is being a meritocracy, or as much of a meritocracy as possible, such that you get ahead as a function of your hard work and your skill and nothing else. Which is why I would be opposed to, for example, things like DEI. Adrian Wooldridge documented the fact that the alternative to meritocracy historically is nepotism and dynasty. Absolutely. It's not equity. Correct, it's nepotism and dynasty. Right.
So that's very much worth knowing. Yes, meritocracy has its price, because it's a severe judge, but the alternative is nepotism and aristocracy, or dynasty. Okay, so, meritocracy. It's not like America has been purely a meritocracy, but it has been more of a meritocracy than any other place. Right, which I regard as good. Yeah. So: promoting meritocracy, and promoting freedom, freedom to operate, meaning the least amount of government intervention possible. And this is, I think, important to fight for, because the natural tendency over time, almost like entropy, is that the hand of government gets heavier every year. The laws and regulations accumulate every year, and those laws and regulations are immortal. Right.
So that's the evil uncle of the king, a very old story. The Egyptians were wrestling with that problem 4,000 years ago. Right. So you really have to take an active role in reducing the number of laws and regulations, because otherwise, as more and more regulations are passed, eventually everything becomes illegal. Right, and you start getting into these Orwellian situations where everyone's poor and miserable. Where action A is illegal and action B is illegal and there isn't anything you can do that is legal. Right. So, to give you an example of some lawfare that was leveled against SpaceX: we were told for many years that we could not hire anyone who was not a permanent resident. Yeah. Well, SpaceX develops advanced rocket technology, which is considered an advanced weapons technology, because it's a core part of things like intercontinental ballistic missiles.
Yeah. So there are only a handful of things in the highest tier of weapons technology, and rocket technology is one of them, because we could deliver a payload and basically bomb anywhere on earth from anywhere on earth. So I was told in no uncertain terms by the government that if we hired anyone who was not a permanent resident of the United States, meaning a green card holder or a citizen, I would go to prison. Oh yeah. Because the presumption, if somebody's not a permanent resident, is that they will leave the United States and take the rocket technology from SpaceX to countries that could potentially cause harm to the United States. Right, pretty solid reasoning. And then, a few years ago, the Biden administration decided to sue SpaceX for failing to hire asylum seekers. Right, I remember that. So we're told on the one side that if we hire anyone who's not a permanent resident, we go to prison; and now we're told that if we don't hire asylum seekers, who by definition are not permanent residents, we also go to prison.
Right. Well, the purpose of being damned if you do and damned if you don't is to make damn sure that you're damned. Correct. Right. So that seemed to be insane and unfair. And why did the Biden administration do it? They can only process so many big cases per year; there's a finite number. So why would the Justice Department, of all the injustices that occur, pick this as one of its biggest cases? Why do you think? Well, I don't know. What do you think about its relationship to you? I don't know all the details, but this was before I supported Trump or anything like that. In fact, I supported Biden, and before that Hillary, and before that Obama. Right, that's why I was asking whether this came as a shock. I don't know; it could just be random. They didn't tell us why they picked us, why they'd pursue such a crazy lawsuit. Even if it is random, that's still bad; that's just a marker of incompetence. But it seems highly unlikely to be random.
Yes. And also, why attack SpaceX and not, say, Boeing or Lockheed? I think part of it might be that SpaceX is not unionized, and the Democratic Party in the U.S. is fundamentally controlled by the unions. I'm speculating here, but since we're not unionized, we have, I think, a very happy workforce. I'm out there on the factory floor; people are happy, it's a good vibe. Well, they're engaged in something ridiculously exciting. That's not nothing; people can go a long way if they're part of a project that's well aimed. Aiming at Mars, let's say: that's definitely aiming up, so that's really exciting, a real opportunity for people. Yeah. What do you think of Trump? Well, I mean, I don't subscribe to cults of personality. So for me it's really just that we've got a choice of administrations, and we have to pick one, and I think there are flaws on both sides. What do you think his flaws are? Well, he's a complicated person, and I've been trying to suss him out and figure him out, because some of the things that look like flaws might be advantages in disguise. He seems to me to be pretty good at standing up to psychopathic bullies, for example. Sure. And that's kind of a useful skill, and it's not easy; you have to be a bit of a monster to manage that. And it isn't obvious to me how many of Trump's divisive idiosyncrasies are the mirror image of his capacity to stand up to bullies. That's a tough call, man.
But you've obviously decided to lay your efforts down on the side of the Trump administration in the forthcoming election. Yeah. I think we really need a change of administration. Many years ago, the Democratic Party was, I think, the party of meritocracy and personal freedom. They used to be the free speech party, and these days they seem to be the censorship party, under the guise of hate speech. So, weirdly, in my view the Republican Party is actually the meritocracy party now, because the Democrats are also promoting DEI, which is really just another form of racism and sexism. It's the most pernicious form, I think, actually. Right, so it's anti-meritocratic. DEI is fundamentally anti-meritocratic, and it insists on dividing people by groups, right, as the primary conceptual distinction between individuals: race and ethnicity, sex.
I think the Democrat Party is stoking division; I think the evidence for that is clear. All of this group-identity nonsense has made things much worse. I can see it in Toronto. When my kids grew up in downtown Toronto, I would say they were race, ethnicity, and gender blind. Seriously. They had an unbelievably diverse range of friends, and no one cared. Sure. And even in Toronto that started to shift with this emphasis on group division. It's a really ugly thing to see. Not good. Yeah. So my view is that at this point in the United States the Republican Party is more in line with meritocracy and with personal freedom. And I've experienced that many times with my left-wing friends: they'd refuse to talk to someone, me included, definitely, because I've invited prominent Democrats to come on my podcast in great numbers for a very long time, and I've gotten absolutely nowhere with that. They'll talk to me in private; they will not talk to me in public. They're afraid of being shunned. Absolutely, 100 percent, that's what they're afraid of. Sure, definitely, and they told me that; it's not a secret. But that's never happened on the Republican side. I've talked to a lot of Democrats and a lot of Republicans, and I found it much easier to talk to the Republicans, and that is somewhat of a shock; I wouldn't have necessarily expected it. Yeah.
And I should be clear that I don't think the Republican Party is flawless; it certainly has its issues. There are extremists within the Republican Party that I don't agree with. But it's a two-party system, essentially; you've got to pick one or the other, and so you weigh the good and the bad, and my opinion is that the country would be better off with a Republican administration than a Democrat one. Well, Trump was pretty good at not having wars. Yes, which is actually quite a big thing. It's really a big thing. And what he did with the Abraham Accords, that was a miracle, I think. No, absolutely; he should have gotten the Nobel Prize for that, I think. And it also has to be said that America needs a strong leader, and we need the perception of strength. You have to admire that. Trump, after getting shot, with blood streaming down his face, and it could have been a second shooter, who knows, nonetheless came up fist pumping: fight, fight, fight. After being shot. I mean, he's no ordinary person. He's funny too. Yes, and he's brave; he has courage. And this is instinctual courage; it's not calculated, it's not some arranged event, it's in the moment. Well, you could see that, because that was not a moment you could script. True courage in the moment. Yeah, absolutely. And if you want a leader who's going to deal with some very, very tough cookies out there, he's going to deal with a Putin, or a Kim Jong Un, or China. Yeah, China. And they will think twice about messing with Trump. They'll think twice. Okay. And poor Biden can't make it up the stairs, and obviously he's stepped out of the race, but nobody's going to be intimidated by Biden. It's impossible. But I
think they will be intimidated by a guy who was fist pumping after getting shot. Well, we saw that in his administration, because peace did reign, and that's a bit of empirical evidence. Can I ask you about one of the things that you've been relatively vocal about? I understand that there's a personal connection to this as well. I am, for what it's worth, particularly unhappy with what my colleagues in the psychological field have done with regard to gender-affirming care. I think they are a pack of contemptible cowards, and I think that everyone who's been involved in this in relation to minors should go to prison. I agree. Okay, why do you agree? What's in the water here? Because this is the worst medical and psychological malpractice I've ever seen anywhere, including what I've studied in reference to historical atrocity: eugenics, prefrontal lobotomies, even the sorts of things that were going on in Nazi Germany. At least the bloody Nazis knew it was wrong and tried to hide it. Yes. So why are you engaged in this particular battle? You said you're going to move a couple of your company headquarters out of California because of the last legislative move that Gavin Newsom pulled with regard to the trans issue. Yes. I mean, there were many things leading up to that point. It's not that it's the one straw; it's just the final straw. Okay, so it's cumulative. Fair enough. So it's not dramatic grandstanding. No. And moreover, I've had conversations with Gavin Newsom before where I said: if you sign legislation like this, which in my view puts children in danger, I will move my companies out of California. And he knew that ahead of time. Okay, so you've talked to him directly about this. What the hell is he doing? I really cannot understand. Why do the Democrat moderates constantly pander to the far left? I worked with the Democrats in California for five years trying to get them to separate themselves from the far left. They wouldn't even admit that the far left existed, and they certainly would never separate from them. They wouldn't admit, for example, that Antifa even existed, even with buildings burning down. There's this unbelievable blind spot with regard to the far-left radicals, and the moderate Democrats are, I think, fundamentally useful idiots. Yeah. Well, going back to so-called gender-affirming care, which is a terrible euphemism, that's for sure: it's really child sterilization, is what it should be called. There's mutilation too. Mutilation, yeah. Well, we want to make sure that amalgam is clear. Fair enough: it's child mutilation and sterilization, under the guise of gender-affirming care and compassion. Right. I literally can't imagine anything worse than that. Yes, it's evil. I mean, you're taking kids who are often obviously far below the age of consent, confused, miserable. The reality is that almost every child goes through some kind of identity crisis; that's part of puberty, just part of growing up. So it's very possible for adults to manipulate children who are having a natural identity crisis into believing that they are the wrong gender, and that they need to be the other gender, and that that will solve all their problems. And then they give them sterilizing drugs, which are called, also a misnomer, puberty blockers. These are sterilization drugs, so they can never have children again. They can have double mastectomies, mutilation, their forearms stripped to build non-functioning penises. It's macabre. I mean, we have an age of consent for a reason. The reason you can't get, say, tattoos below age 18, or drink, or drive, is that there are ages below which you can't do certain things, because if we allow children to take permanent actions when they're 10, 12, 14 years old, they will do things that they subsequently greatly regret. Yes, I've interviewed a couple of people who've done exactly that, and it's damn painful. So why are you willing to make this an issue, do you think? Well, it happened to one of my older boys, Xavier, where I was essentially tricked into signing documents. This was before I had
really any understanding of what was going on. We had COVID going on, so there was a lot of confusion. And I was told that Xavier might commit suicide. That was a lie right from the outset. No reliable clinician ever believed that; there was never any evidence for it. And if there is a higher suicide rate, the reason is the underlying depression and anxiety, not the gender dysphoria, and every goddamn clinician knows that too, and they're too cowardly to come out and say it. Right. When I saw that lie start to propagate, it made the hair on the back of my neck stand up. It's like: I see, so you're telling parents that unless they agree to this radical transformation their children are going to die, and you think that's moral, and you think that's true? That is so pathological that it's almost incomprehensible. I can't imagine a therapist doing anything worse than that, or sitting by idly and remaining silent while his colleagues do it. It's pathetic. It's incredibly evil, and I agree with you that the people who have been promoting this should go to prison. It won't stop till that happens. Yeah, otherwise it'll just go underground; puberty blockers are being accessed online by kids all the time through non-medical channels, so it's not going to stop. Yeah. Okay, so I was tricked into doing this, and it wasn't explained to me that puberty blockers are actually just sterilization drugs. So I lost my son, essentially. They call it dead-naming for a reason. The reason it's called dead-naming is that your son is dead. My son Xavier is dead, killed by the woke mind virus. I'm sorry to hear that. I can't imagine what that would be like. Yeah.
Yeah, and there are lots of people in that situation now. Right, it's not pretty — lots of demolished kids. Yes. Yeah, well, that's a good reason for it to be the final straw. All right. So I vowed to destroy the woke mind virus after that, and we're making some progress. Join the club. Yeah. Okay, let's end on something positive. You're shooting for Mars. What do you think that's doing for people? Because — I don't want to step out of my box here, but it's really interesting to me to watch you do this — it's preposterous, right? It's a preposterous thing to do, to go to Mars. Yeah. And yet I was old enough when the moon landing was happening to remember what that was like. It was an adventure, a positive adventure everybody could participate in. It was an unbelievable technological accelerant, and it was a mark of faith in the West, I would say — in the might of the United States and the upward aim. And so I know you're playing — I don't mean that in a game-like manner — that's what you're tapping into. What do you think you're doing with the Mars venture?
Well, the Mars venture, I think, is part of the expansion of consciousness beyond Earth. I want to be clear that I don't think we should apply some vast amount of resources to Mars — I'm talking about less than one percent of our economic output going to making life multi-planetary. But it is a natural extension of expanding the scope and scale of consciousness. So I think we want to do everything we can to make sure that Earth is going to be great for as long as possible, but also allocate a small amount of resources — like I said, less than one percent of our economy — to extending life beyond Earth, and ultimately to other star systems. Right. And so that's perfectly in keeping with what you described at the beginning of our discussion: the manner in which you resolved your crisis of faith as a child, your commitment to the validity of consciousness, your desire to — what would you say — facilitate its transformation, development, extension. That goes along with your pro-human ethos. Right, so that's all thinking from first principles. So I think you are stuck in the religious camp, one way or another, so to speak. You can call it that — I don't mind; it's been called a religion — but I think —
Yeah, well, I came here today, at least in part, to find out just what the hell your religion was. So I want to end with this, if you don't mind. I also asked you why you trust yourself in relationship to AI, and you said, well, I don't entirely. And I thought, that's a good answer. But more broadly — you have a lot of admirers; there are a lot of people who are hoping that you're the guy, what would you say, the guy who can do what you say you're going to do, and there's lots of evidence that you can. Do you think it's reasonable for people to trust you? Well, I think so. I mean, I wouldn't say trust me entirely, but I think, on balance, my track record suggests that I am fairly trustworthy. What elements of it do you think suggest that particularly? Well, I've built a lot of companies that have done useful things. Yeah, you've built a lot of impossible companies that have done useful things. Yeah. Right, so that's an existence proof of sorts.
Yeah, and you accept the validity of entrepreneurial striving as an indicator of ethical conduct — and I think that's valid. Yeah, I don't think I would be able to recruit great people to work with me to build these companies if I were a really bad person; they just wouldn't want to work with me. So, you know, I think one of the tests for assessing someone's character is to look at the character of their friends and associates. Yeah. And while people can put up a mask over their own character, their friends and associates will not. Yeah. And so you can judge a person's character by their friends and associates — and to some degree by their enemies, right? You know, if evil people hate you, well, you might be doing something right.
Okay, sir, thank you. All right, thank you — much appreciated. Very good to talk to you. It was a real privilege. You're welcome. Yeah, good. All right.