All right. Welcome to the Spaces, and welcome to Ro Khanna and Mike Gallagher. So this is a discussion of AI safety, among other things, which is appropriate, I suppose, especially today, having announced xAI. But I think we're going to touch on a lot of topics here that people will find very interesting.
So just welcome, Ro Khanna and Mike Gallagher. Well, thank you, Elon. Thank you for doing this. After your conversation with Mike and me a month or so ago, we all thought it would be important to have a thoughtful, engaged conversation in a way that people could participate, without the theatrics of congressional hearings where people are just looking to score points. So I hope that'll happen today, and I know Mike's committed to it. I'll just share a minute of my perspective.
I appreciated what you did to seed OpenAI. I think there are a lot of benefits to AI, from personalized medicine to personalized education to looking at supply chains and how to make those more efficient, increasing productivity in our country and just making products cheaper and more abundant. There are numerous concerns. One of my biggest concerns is the data you put into AI: if you put in junk data, you're going to get junk outcomes. And right now there is not much regulation or thought about what the actual data set is that we're putting into AI, and what the human judgment is, and that is something we're going to have to figure out. But I'm happy now to get Mike's perspective.
Mike has just been terrific as the chair of our China committee, which he leads, and he's on the Armed Services Committee, a Marine from Wisconsin. We disagree on things, but always with a view toward a debate on ideas. Well, thanks, Ro. I appreciate it. Although I'm embarrassed, because my staff informs me that, Elon, you have 147 million followers; Ro, you have 312,000; and I have 8,000. Mine are very dedicated, though. So you are welcome that I brought my dedicated 8,000 to this conversation. Well, I think I'll have a few more after this one. Quality, not quantity, that's what I'm after. But thank you, Elon.
I think from my perspective, just to briefly add to what Ro said: on the Armed Services Committee we've primarily thought about the applications of AI on the battlefield, but now, because of the quantum leap in the technology over the last few months, this big debate has broken out between, sort of, the AI optimists and the AI pessimists. I would argue that you're sort of the most prominent member of the latter camp. I remember reading an interview with a famous technologist who said, you know, with artificial intelligence we're summoning the demon, and in all those stories where there's the guy with the pentagram and the holy water, he's sure he can control the demon, and it doesn't work out. And that was, of course, you.
So I guess what I'm most interested in is getting a sense of your concerns, as someone who's skeptical that a pause is practical. With this new announcement, and congratulations on that, what are you doing differently than, say, DeepMind and Anthropic and OpenAI in balancing the risk versus the promise of the technology? Sure. So, yes, I mean, I think I've been banging the drum on AI safety for a very long time. And yes, if I could press pause on AI, or on really advanced AI, sort of digital superintelligence, I would. It doesn't seem like that is realistic.
So xAI is essentially going to build an AI, you know, kind of grow an AI in a good way, hopefully. The premise for xAI is to have the AI be maximally curious, maximally truth-seeking.
And this may get a little esoteric here. But from an AI safety standpoint, I think a maximally curious AI, one that is trying to understand the universe, is going to be pro-humanity, from the standpoint that humanity is just much more interesting than no humanity.
Obviously, I'm a big fan of Mars, and I think we should become a space-faring civilization and a multi-planet species. But Mars is frankly quite boring relative to Earth, because it's a bunch of rocks. There's no life that we've detected so far, not even microbial life. Earth, with the vast complexity of life that exists on it, is vastly more interesting than Mars. You just learn a lot more with humanity being there, and, actually, I think, by fostering humanity. If the AI is trying to understand the true nature of the universe, that's actually the best thing I can come up with from an AI safety standpoint.
And I think this is actually better than trying to explicitly program a morality into the AI, because if you program in a certain morality, you have to ask: what morality are you programming in, and who's making those decisions? And even if you are extremely good at how you program morality into the AI, there's a morality inversion problem, sometimes called the Waluigi problem: if you program in Luigi, you inherently get Waluigi by inverting Luigi, to use Super Mario metaphors.
So this is obviously getting quite esoteric, but hopefully it makes some sense. I would be a little concerned about, you know, the way that, say, an AI lab is programming its AI to say that this is good and that's not good. Anyway, xAI is really just starting out here, so it'll be a while before it's relevant on the scale of the OpenAI/Microsoft AI or the Google DeepMind AI; those are really the two big gorillas in AI right now, by far.
I could really talk about this for a long time. It's something I've thought about for a really long time, and I was actually somewhat reluctant to do anything in this space because I'm concerned about the immense power of a digital superintelligence. It's just something that is, I think, maybe hard for us to even comprehend. Yeah.
Even if the AI is extremely benign, there's also the question of relevance that comes up: if it can do anything better than any human, well, what's the point of existing? That's also an issue. Do we even have relevance in such a scenario?
So anyway, that's the bad side. But the good side, obviously, is that in an AI future, in a benign scenario, you really will have an age of plenty, where there will be no shortage of goods and services. Any scarcity will simply be a scarcity that we ourselves define.
It could be a unique piece of art or a house in a specific location, artificially defined scarcity, but goods and services will not be scarce in a positive AGI future. And I think it's actually important for us to worry about a Terminator future in order to avoid a Terminator future. Yeah.
And I am an advocate of having some sort of regulatory oversight. I've actually made this point in meetings with world leaders, including in China, where there was actually quite strong agreement that there should be AI oversight, AI regulation.
Just as we have regulation for nuclear technology, you can't just go make a nuke in your garage; we don't think that's cool. There's a lot of regulation around things that we think are dangerous. And even for things that are not dangerous at a civilizational level, we have the Food and Drug Administration, the Federal Aviation Administration, the Department of Transportation; there are all these regulatory authorities that we put in place to ensure public safety at an individual level.
But AGI is one of those things that is potentially dangerous at a civilizational level, not just an individual level. And that's why I think we want to have AI regulation. We want to be careful in how that regulation is implemented and not be precipitous and heavy-handed, but there's got to be some kind of referee on the field here.
Because one of the dangers is that companies race ahead; actually, I think it's more dangerous for companies that are behind, which might take shortcuts that could be dangerous. You know, the FAA came into being after a lot of people died in aircraft crashes, and the attitude was: look, if you're going to make aircraft, you really cannot cut corners, because people are going to die. So that's kind of how I see AI regulation. I know a lot of people are against it, but I think it's the kind of thing we should do, and we should do it carefully and thoughtfully.
Yeah. All right, I appreciate that, Elon. My takeaway headline is that you affirm the Earth and affirm humanity. In a time of pessimism, I'm glad you still believe in the Earth and humanity as possibilities, and that you mention the positives: the abundance of goods and services, the lower costs. And I couldn't agree with you more about the need for thoughtful regulation.
One of the points I was making to Mike earlier about the FDA is that some of the smartest people are actually at the FDA. I know some of the business folks complain about the FDA at times, but then they acknowledge that the people at the FDA really know what they're talking about. And I think that has led not just to the safety of drugs; it's also led to high standards that have helped the United States make the best, most efficacious drugs, because the FDA holds them to a standard. My own view, and I don't know Mike's view or the views of others on the call, is that Congress lacks the knowledge to really delve into this, and I don't think we ought to just be deferring to industry. So creating a new regulatory body, like the FDA or FAA, that has experts from technology but also from ethics and civil society, people who have really studied this, scientists, would in my view be a concrete step. And whether it's that, or a commission composed of people who really understand this, it seems Congress is going to need help to get this right.
Yeah, I would take your point, Ro, about the dearth of expertise in Congress. Congress has not covered itself in glory when talking about even the most rudimentary technological issues; see the Facebook Zuckerberg hearing in 2018, which birthed what was called at the time the meme apocalypse. But I also think there are concerns, and I don't speak for the entire conservative world, that maybe the pandemic illustrates the dangers of ceding too much authority to unaccountable experts. Do we need a more dynamic regulatory process for a technology like this, where the pace of change is so quick? Put differently, even if we passed a sensible AI law this year that struck the balance you talked about, Elon, between oversight and guardrails but also the need to innovate, it might be outdated very quickly. So figuring out that dynamic regulatory model without stifling innovation is, I think, the core dilemma. Because for me, and this reflects my bias, the one thing we know for certain is that the Chinese Communist Party is not going to pause; if we fall behind, they are going to use the technology for evil, to perfect a techno-totalitarian surveillance state. We in the free world at least have the chance of using it for good and striking that balance. But I admit, it's very difficult.
I mean, I guess, Elon, do you see an obvious path forward in terms of regulation that doesn't completely strangle innovation but also doesn't unleash the Terminator hypothesis? Because my constituents understand Terminator, whereas I think the positive externalities of AI still seem distant and in some ways ephemeral, if that makes sense.
Yeah, I agree. It's difficult to think of a good movie or TV example of AI in the benign scenario. There are some books; I think the Iain Banks Culture books are probably the best imagining of a positive AI future that I've read. And arguably you could say the Isaac Asimov Foundation series, the books at least; the TV series diverges quite far from the books, but the books themselves actually have somewhat of a benign AI scenario.
But the most sophisticated, perhaps most accurate view of an AI future is the Iain Banks Culture books, which I'd highly recommend. It would be helpful for, say, Hollywood to articulate that vision in a way the public can understand. But I think the right sequence here is to go with insight followed by oversight.
So at first it's really just the government trying to understand what's going on, and I think there's some merit to an industry group, like the Motion Picture Association, that I think should actually be formed. We'll try to take some steps in that direction, because I think some amount of self-regulation could be good here.
When I was on my recent trip to China, I spent a fair bit of time with the senior leadership there talking about AI safety and some of the potential dangers, and pointing out that if a digital superintelligence is created, well, that could very well end up in charge of China instead of the Chinese Communist Party. And I think that did resonate, because no government wants to find itself unseated by a digital superintelligence. So I think they actually are taking action on the regulatory front and are concerned about this as a risk.
And I've seen some comments internally within China where the companies are a bit unhappy about the government wanting to have regulatory oversight of AI. So I think this is something that does actually resonate even in China. In fact, when I was in China, I said that one of the biggest obstacles to AI regulation outside of China is the concern that China will not regulate AI and will then get ahead. They took that point to heart; it's a logical point, I think. And like I said, the point I highlighted, that if you make some superintelligence, the superintelligence could be what actually runs China, also resonated.
So I think trying to shed as much light on the subject as possible (sunlight is the best disinfectant), being as open about things as possible, and going from insight for a few years to then oversight, in consultation with industry, is the sensible approach. And I think the public is starting to understand the potential of AI, because ChatGPT was something the public could interact with. I've understood the power of AI for a while, as have those deep in the industry, but until you have some easy-to-use interface, it's difficult for the public to understand. That's also the case with Stable Diffusion and Midjourney: you can see the incredible art that AI can create. It's really amazing.
So I'm actually somewhat of an optimist in general. I think the best way to ensure a good future is to worry about a bad one, and the sensible thing to do is to have discussions like this and continue to have them. Yeah. Maybe a couple of follow-ups, and I'm sure Mike will have some too.
My understanding is that there's the machine learning part right now, where systems basically make predictions about how to complete a sentence, or look at patterns and do analysis. And then there's the fear of general intelligence emerging, where the concerns you're voicing arise: could these machines be smarter than governments? How far away do you think we are from general intelligence, versus the sophisticated machine learning we have today?
And the second point, because, I mean, you showed up in China before Blinken did: how open do you think China would be to working on some kind of international framework on regulation of AI? And is that, in your view, feasible?
My understanding from the conversations I had in China is that China is definitely interested in working within a cooperative international framework on AI regulation. There's a fair bit of distrust of China within America and of America within China. But at least based on the conversations I had, I think they would be amenable to being part of an international regulatory framework. That was my impression, but the proof's in the pudding, so we'll see.
It is very interesting to visit China in general and see the perceptions. As much as people in the US might distrust China, likewise people in China distrust America. And what tends to mitigate that distrust is really conversations, especially in-person conversations. It's very easy to demonize an organization or a person if you don't meet with them. Then when you do meet with them, you think, well, they're not that bad of a demon actually, and you get to understand where they're coming from. At the end of the day, we're all part of, I think, team humanity, hopefully. I think we should all aspire to be part of team humanity. We've only got one planet so far, and we don't want to lose it.
There's that famous quote, I think it might be Einstein, but it could be one of those internet things where you think it's Einstein and it's not, something to the effect that he doesn't know exactly how World War III will be fought, except that World War IV will be fought with sticks and stones. There's not going to be anything left. So we really want to avoid global thermonuclear warfare, big time, and hopefully focus on positive things: like I said, becoming a space-faring civilization, becoming a multi-planet species, ultimately going out there and visiting other star systems, where we may discover many long-dead one-planet civilizations that never got beyond their original planet.
On the xAI front, if I speak to my personal motivations here: I've always just wondered what is really going on in reality, trying to understand the nature of the universe. And are there aliens? Where are they? The Fermi paradox I find both intriguing and troubling. If the standard model of physics is correct, the universe has been around for many billions of years. So why haven't we seen aliens? Many members of the public are convinced that the government is hiding evidence of aliens; I get asked this a lot, and I have not seen any evidence of aliens. But I think that's actually maybe a concern. I might feel a little better if I saw evidence of aliens. I have not seen one shred of evidence of aliens, which is a problem. It means that life and consciousness might be incredibly rare; maybe we're it, at least in this galaxy. And the light of consciousness seems to me like it could be this tiny candle in a vast darkness, and we should just do our absolute best to make sure that that candle does not go out.
I'd like to plant a flag in the aliens conversation and come back to it. I take your point about the risk of war; as Eisenhower said, the only way to win World War III is to prevent it. I guess, at the risk of being the buzzkill of the conversation, I remain very skeptical that the CCP could be a constructive actor in any international framework. Or I would pose the question: in what other international framework have they been a constructive actor? For a decade, our experts made the case for sharing cutting-edge gain-of-function research with China, and that turned out to be a total, pandemic-level disaster. The proponents of such engagement also traditionally make the case that our interests are aligned when it comes to stability on the Korean Peninsula, non-proliferation, and climate change. But if you examine across those domains, they suck on all three. They're bad actors. So even if the CCP leaders took your warning to heart, and I understand it, I think it's logical and thoughtful, I just remain skeptical they would slow down. And I'm fairly certain that in the near term Xi Jinping will use this as an instrument for total techno-totalitarian control.
I actually think this is Marc Andreessen's best point in his AI optimism case: that the single greatest risk of AI is that China wins the global AI dominance race and we do not. I guess, put differently and perhaps provocatively, I'm just not sure they aspire to be on team humanity. I see them acting as if they're on team, sort of, genocidal communism. Most of them. Well, look, it's always challenging, because, you know, at least we're proving that Twitter doesn't censor the conversations. Yes, actually, speaking of which, Ro, I'd just like to say thank you, and express appreciation for your support of freedom of speech in the whole Twitter Files situation. When it came to the Twitter Files, you were one of the few voices that actually spoke up against censorship and in favor of the First Amendment, which is incredibly important. So thank you for that. Thank you.
Well, I think it's a basic value. And in my view, it's conversations like these, where you have a different perspective, I have a different perspective, Mike has a different perspective, and thousands of people engage, that are the closest shot we have at getting to truth. And I guess that used to be the old-fashioned liberal position, John Stuart Mill. Exactly. Yes. By the way, I thought we were launching Ro's presidential campaign. On a different basis, I'm all for it.
Well, things didn't crash; things are still running. So, you know, we would certainly welcome any such announcement on this platform. But I think we can all agree that having dialogue is productive, and it's good to have things like this.
So on the China front, I'm kind of pro-China. And I know that makes it sound like, well, do you have all these vested interests in China? I do have some vested interests in China. But honestly, I think China is underrated. I think the people of China are really awesome, there's a lot of positive energy there, and I think they want the same things that people in America do. That's not to say there aren't some very significant disagreements, and there's obviously going to be a significant challenge on the Taiwan question, a very significant challenge. But on the sustainable energy front, China has really done a lot to further electric vehicles; China makes more electric vehicles, I think, than the rest of the world combined. There's a lot of solar power, a lot of wind. There's also a lot of coal, mind you. But China has been pushing quite hard on the sustainable energy front, and I think that's just a fact. Ultimately, once the very difficult question of Taiwan is resolved, I am certainly hopeful that there will be positive relations between China and the United States and the rest of the world, though we'll probably have some bumpy road between now and then. Like I said, in the long term, we should all aspire to be team humanity. And yeah, so that's it.
One of the things I've appreciated, Elon, is Mike's leadership. In some places I don't think there is total divergence, in that Mike has alerted the Congress and the country to some of the things the United States needs to do. I've certainly learned from him about ensuring that an invasion of Taiwan doesn't take place, whether that's making sure Taiwan has weapons or making sure we have the right military posture in the Pacific for effective deterrence, and I think he's built bipartisan consensus for that. The other bipartisan consensus that's emerged is that we hollowed out our manufacturing base and that we've got to have a commitment to making things in this country. There's a recognition, I think, on a bipartisan basis, that we shouldn't, I mean, take steel: of the top 15 steel companies, we don't have a single one in America anymore, and nine of them are in China. We've got to bring some of that manufacturing back. And it seems to me we could have a view of both the national security challenges and the economic challenges that then allows us also to aspire to the right type of engagement, and peace, and avoiding war. I'll let Mike speak, but it seems to me, Mike, that the tone of the committee, even when we've had disagreements, has ultimately been an aspiration for peace, not getting into war.
Yes, and I'm generally not trying to turn this into a China rant. Listen, I think even the most hawkish member of our committee, and I'm admittedly on the hawkish side of the spectrum, fully supports the Chinese people. We have no quarrel with the Chinese people. It is the party that is the source of instability in the relationship and the primary enemy of the Chinese people, and that's the core of the dilemma; I think it's important to make that distinction. We look forward to a world in which the people aren't subject to the whims of an oppressive regime. Furthermore, on the question of peace: yes, my whole mission in life is deterrence. That's what I spend most of my day focused on. A war between the US and China would be absolutely horrific; it would have the potential to make previous world wars look like child's play in comparison. My theory of deterrence is that we need to deter by denial and put hard power in Xi Jinping's path, to make the prospect of taking Taiwan so unpalatable that he never tries it. And I know we're straying far afield from the conversation here.
On your points about energy: the stats I've seen, and you obviously have a lot more experience in this space than I do, are that last year they built 6x the coal plant capacity of the rest of the world combined. But take an easier case where I don't think there can be a dispute: when it comes to something like free speech, and I view you as a champion of free speech, I mean, you've called yourself a free speech absolutist, there's no question that the CCP is not into free speech. And therefore my concern is that if they dominate this technology, they could use it to suppress speech, suppress freedom of religion, et cetera, et cetera, and it wouldn't further the advancement of humanity. Again, that's my view; I'm not trying to be the bad guy in this whole thing, and reasonable people can disagree. That's why I love Ro Khanna. Yeah. Yeah.
But I do have this theory about prediction, which is that the most entertaining outcome, as seen by a third party, not the participants, is the most likely. That's not necessarily the best thing for those involved. You could be watching a World War One movie while people are getting blown to pieces, while sipping a soda and eating popcorn. Not so great for those in the movie, but it is entertaining. Which does suggest that things are probably going to get hot in the Pacific. Hopefully not too hot, but it's going to get hot. And hopefully we can get past that and get to a positive situation for the world, in the spirit of, aspirationally, we're all on team humanity. But it's going to get spicy.
And then basically, as far as the most concerning things go, it's probably the Taiwan question over the next three years, and then probably three years after that, AI. I would be surprised if there is not digital superintelligence in roughly the five-to-six-year time frame. So if this were a Netflix series or something, I'd say the season finale would be a showdown between the West and China, and the series finale would be AGI. Well, that's fast. I didn't think five years; I didn't realize you had a view that it's that quick.
Bringing it back a little bit to AI, I guess two questions. One: even your harshest critics, Elon, recognize that you've been one of the most successful entrepreneurs and technologists in the world. If you were a young Elon Musk, or an aspiring young person seeing the world of AI, or someone from a blue-collar family with a kid, seeing AI and concerned about automation, what would you say to them? What would you say to someone young, 15, 18, 20, entering this world, whether they have a college degree or not, about what they should do to prepare themselves for the future economically?
I suppose that if someone is able to contribute to building AI in a positive way, if someone has that technical ability, that is probably the right thing to work on. For your average citizen, the future is definitely going to be interesting. Like I said, things get very strange in a future where AI can basically do everything, and in the benign scenario, I guess we will look for personal fulfillment in some way. Between now and then, I think it's just about trying to be useful.
I mean, on the manufacturing front, I do think we should place much greater weight in the United States on the importance of manufacturing, and I think things are shifting back in that direction. Generally, when somebody asks me for advice, my advice is: try to be as useful as possible. It's actually quite hard to be useful. If you can be of use to your fellow humans and contribute more than you take, then I think that's a great thing. I have a lot of respect for those who work hard and make goods or provide services in excess of what they take; that is just a fundamentally good thing. So it's a hard question to answer with certainty, because the future is so uncertain.
The advent of AGI is often referred to as the singularity. A singularity is like a black hole: you just don't know what happens after you go into the black hole. And we are on the event horizon of the singularity of digital superintelligence.
It's definitely one of the most interesting parts of all of history. I've actually thought, well, I wonder if maybe it would have been better to have been born at a different time, before artificial general intelligence. But then the personal conclusion I came to is that I actually would prefer to be alive to see it, just because it's the most interesting thing in history.
So even if it were a calamity, I guess I'd prefer to see it rather than not see it. And obviously we want to do everything possible to make sure it is not a calamity. So I guess the positive side is that it won't be boring. Definitely won't be boring.
And if I were to assign probabilities, I think it is more likely to be a positive scenario than a bad scenario. It's just that the bad scenario is not 0%, and we want to do everything we can to minimize the probability of a bad outcome with AI. But I should clarify: I don't know, maybe it's like 70%, 80% likely to be a good future, and maybe even a great future.
So yeah, I think of the future in terms of probabilities. The future is, for sure, a set of branching probability streams.
Would there be, Elon, a signpost that we're nearing such a singularity? I mean, I think one of the reasons there was so much excitement over ChatGPT is that, going all the way back to Alan Turing's papers on AI, we've had this idea that once a system passes the Turing test, the imitation game, it would be said to be intelligent. And my understanding is we're basically there. Do you believe that test is a good marker for achieving AI or AGI? Do you believe there's some other thing we should be looking for? And I believe you had a debate over consciousness when it comes to AI recently, and I'd be eager to get your thoughts on that.
Well, I think we're well past the Turing test at this point. ChatGPT is well, well past the Turing test. So really we're well on our way to digital superintelligence. I think it's five or six years away.
And I would say the definition of digital superintelligence is that it's smarter than any human at anything. That's not necessarily smarter than the sum of all humans; that's a higher bar, especially given that it's the sum of all humans who are machine-augmented, in that we have computers and phones and software applications. We're already effectively cyborgs. It's just that the computer is not yet integrated with us.
One's phone is already an extension of one's self. If you leave your phone behind, it feels like missing limb syndrome. You're patting your pockets, like, where did my phone go? It's crazy the degree to which our phone, which is basically a supercomputer in your pocket, is an extension of yourself. So it's a higher bar to be smarter than the sum of all humans who are computer-augmented.
Thinking about this whole thing stresses me out a lot. I've had many sleepless nights thinking about it, trying to figure out how we navigate to the best possible future for humanity. It's a super hard problem. It might end up being the hardest problem we've ever faced. It definitely demands our attention.
And I think ultimately the nation-state battles will seem parochial compared to digital superintelligence. Of all the various risks we face, there are ones that are dangerous at an individual level, ones that are dangerous at a state level, and then things that are dangerous at a civilizational level.
Global thermonuclear warfare is obviously dangerous at a civilizational level. A super virus with a very high mortality rate would be dangerous.
I think it's crazy to do gain-of-function research. Gain-of-function research is a nice way of saying death maximization. If they had to come out and say what the function actually is, oh, it's death, okay, I think that would be less likely to get funding. So we need to be very cautious about it. We really should not be doing that stuff. It's crazy.
AI is also a civilizational risk. But the thing about AI, unlike gain-of-function research or global thermonuclear warfare, is that it really has the potential to make the future amazing if it's done right.
I know we're approaching the end of the hour, so maybe I could say something and then you can close. If we do have this superintelligence, I think the question is still: what values underlie it? How are we going to make sure that the ethical framework, the way the AI is ultimately making decisions, is grounded in values most Americans share? And here I do think our values of respecting individual freedom, dignity, rights, and freedom of speech are important, and we want to make sure they are ultimately embedded in the technology that develops.
And that to me seems to be the biggest challenge for Congress in terms of American leadership: how do we, one, make sure we're part of the debate, because I don't think we should just cede the debate to the technologists constructing AI without public deliberation and input on what those values should be, and how do we do so in an informed way. And I agree with you, insight first, but at some point I think we have to make sure that our framework of basic values is given our best shot, that it becomes the framework for AI.
Yeah, we definitely want to maximize the collective happiness of humanity and the freedom of action of humanity. You want to look forward to the future and say, yeah, that's the future I want to be part of, and I'm excited about that future. I think that's actually incredibly important in general. I'm concerned that in many parts of the world there's a pervasive pessimism about the future, and that's part of what's leading to low growth rates in many parts of the world. I really would advocate for optimism. In fact, I think generally it's better to be optimistic and wrong than pessimistic and right.
So I would hope we leave this Spaces discussion on a positive note: that we should be optimistic about the future, and that we should fight to ensure the future is a good one. I'm going to give Gallagher the last word if he wants it. I'll be nice. I'll be nice. Well, first of all, optimistically, I choose to believe in the version of the singularity in which I can download my consciousness into a robot body and explore Mars as well as many other planets. So I'm putting all my eggs in that basket.
Even as we debate the higher-order issues here, I think there are some obvious steps we can take in this Congress in the short term that would be bipartisan. What Ro and I do on Armed Services together is really just push the Pentagon to do a better job of leveraging technology and buying commercial technology, as opposed to defense primes trying to invent everything themselves, which doesn't really work given the pace of change. I think there's a lot more we can do there. I think we could put some sensible guardrails on American capital flowing to foreign companies that may be building systems designed to beat us or be used for nefarious purposes. I'd love to skip directly to an international regulatory body with a bunch of constructive actors, but if I'm right that that seems unlikely in the short term, the constructive thing to do is just start with ourselves and then build out a kind of free-world framework that strikes the balance. You talked, Elon, about starting with our Five Eyes allies, and then you build out concentric circles from there once you have that basic foundation. That should be achievable. Difficult, but achievable.
And on an optimistic note, I think all of us look at what's happening with SpaceX and it makes us incredibly optimistic. It's really inspiring, actually, the way Starlink technology is being utilized not only in Northeast Wisconsin but in Eastern Europe right now. I think that's a remarkable story of American innovation. Well, thank you. I hope it is inspiring. I really want the things that we see in the positive sci-fi movies. Star Trek arguably has quite a positive sci-fi version of the future, and I think we want that stuff to come true. We want Starfleet Academy. We want to go where no one's gone before and explore the universe. That's what fires me up, and I think it fires a lot of people up. You look up at the night sky, see all those stars, and wonder what's going on up there. Are there alien civilizations? Is there life up there? Hopefully one day we find out. Seems like a good note to end on. All right.
Yeah. Sure. Appreciate it, Mike. Appreciate the conversation. My hope is that we can do more of these kinds of things.
Yeah. Mike and I have been talking about doing this with you, Elon, figuring out how to have a conversation in this country that is substantive. I just wish we could do more things like that.
I couldn't agree more. This was great. Yeah, I really appreciate it. And once we find the aliens, let's get them to join us. Nothing so concentrates the mind like an alien invasion.
Yeah, exactly. I mean, Independence Day, in fact, the Independence Day speech is amazing. So. I forced my three-year-old daughter to memorize that for the Fourth of July. So. Did you really? It's quite the speech.