You know, ignoring tech is kind of no longer an option in the government. Big tech has been present in Washington, but big tech's interests are not only very different from startup innovators' interests, but we think also divergent from America's interests as a whole. You know, if America's going to be America in the next 100 years, we have to get this right. Yep. Welcome back, everybody. We are very excited for this episode. We are going to be discussing a lot of hot topics. The theme of today's show is tech and policy and politics.
And so there's just a tremendous amount of heat right now in the tech world about politics, and a tremendous amount of heat in the political world about tech. Both Ben and I as individuals have been spending a lot more time in policy and politics circles over the last several months, and we as a firm have a much bigger push here than we used to, which Ben will describe in a moment. But we're going to go into quite a bit of detail.
The big disclaimer that we want to provide up front is that we are a nonpartisan firm. We are 100% focused on tech politics and policy. In this episode we are going to be describing a fair number of topics, some of which involve partisan politics. Our goal is to describe anything that is partisan as accurately as possible, and to be as fair-minded in representing multiple points of view as we can be. We are going to try very hard not to take any sort of personal partisan political position.
So please, if you could grant us some generosity of interpretation: what we say, we are trying to describe and explain, as opposed to advocate for anything specifically partisan. We advocate on tech policy topics; we do not advocate on other partisan topics. And actually, on that theme, Ben: a little while ago you wrote and published a blog post about our firm's engagement in politics and policy, where we laid out our goals and also how we're going about it. We're actually quite transparent about this. So, maybe as an introduction for people who haven't seen that, could you walk through what our strategy is and how we think about this?
Yeah, it kind of starts with: why now? Why get involved in politics now? Historically, tech has been a little involved in politics, but it's been relatively obscure issues: H-1B visas, stock option accounting, carried interest, things like that. But now the issues are much more mainstream. It turns out that for most of the software industry's life, Washington just hasn't been that interested in tech, or in regulating tech for the most part. But starting in the mid-2000s, software ate the world and tech started to invade all aspects of life.
So ignoring tech is no longer an option for the government, in that they've seen it impact elections and education and everything. And policymakers really want to get in front of it; "get in front of it" is a term that we hear a lot. We need to be in front of these things this time, not like last time when we were behind the curve. And so tech really needs a voice, and in particular, little tech needs a voice.
Big tech has been present in Washington, but big tech's interests are not only very different from startup innovators' interests; we think they're also divergent from America's interests as a whole. And so it's quite imperative for us to be involved, not only to represent the startup community, but also to get to the right answer for the country. And for the country, we think this is a mission-critical effort. Because if you look at the last century and ask why America was strong, why basically any country was significant in terms of military power, economic power, and cultural power in the last hundred years,
it was really those countries that got to the Industrial Revolution first and exploited it best. And now, at the dawn of the information age revolution, we need to be there and not fall behind, not lose our innovative edge. That's all really up for grabs. We're still extremely strong from a capitalistic system standpoint, from an education standpoint, and from a talent standpoint, and we should be a great innovator. The thing that would stop that, the biggest way America would lose it, would be bad or misguided regulation that forces innovation elsewhere, out of the country, and prevents America and the American government from adopting these technologies ourselves.
And driving the things that would make us bad on tech regulation: the first is really big tech, whose goal is not to drive innovation or make America strong, but to preserve their monopoly. We've seen that play out in AI in a really spectacular way, where big tech has pushed for the banning of open source for "safety reasons." Now, you can't find anybody who's been in the computer industry who will tell you that an open source project is less safe, first of all from a hacking standpoint.
You talk about things like prompt injection and other new attacks, and you would much more trust an open source solution for that kind of thing. But also, for a lot of the concerns of the US government (copyright, where does this technology come from, and so forth), not only should the source code be open, but the data should probably also be open as well, so we know what these things were trained on. That's also important for figuring out what their biases are; how can you know if it's a black box? So this idea that closed source would be safer is really, really scary. And big tech actually got some of this language into the Biden administration executive order, literally under the guise of safety, to protect themselves against competition.
And so that's kind of a big driver. The other, related driver is this combination of big tech pushing fake safetyism to preserve their monopoly, and a rather thin understanding of how the technologies work in the federal government. Without somebody bridging that education gap, we are, as a country, very vulnerable to these bad ideas. And we also think it's just a critical point in technology's history to get it right. Because if you think about what's possible with AI, so many of our country's biggest challenges are very solvable now: things like education, better and more equal health care, and just thinning out the bureaucracy that we've built and making the government easier to deal with, particularly for underprivileged people trying to get into business, do things, and become entrepreneurs.
All these things are made much, much better by AI. Similarly, crypto is really our best answer for delivering the internet back to the people and away from the large tech monopolies. It is the one technology that can really do that. And if we don't do that over the next five years, these monopolies are going to get much, much stronger; probably some of them will be stronger than the US government itself. We have this technology that can help us get to this dream of stakeholder capitalism and economic participation for all, and we could undermine the whole thing with poor regulation.
And then finally, in the area of biology, we're at an amazing point. If you look at the history of biology: much like we didn't have a language to describe physics for a thousand years, we didn't have a language to really model biology until now. The language for physics was calculus; the language for biology is AI. And so we have the opportunity to cure a whole host of things we could never touch before, as well as address populations that we never even tested on before and always put in danger.
And here again, you have big pharma, whose interest is in preserving the existing system, because it locks out all the innovative competition. So for all those reasons, we've massively committed the firm to being involved in politics. You've been spending a tremendous amount of time in Washington, I've been spending time in Washington, and many of our other partners, like Chris Dixon and BJ Conde, have been spending time in Washington. We have real lobbying capability within the firm; we call it government affairs, and they're registered lobbyists, working with the government to set up the right meetings and help us get our message across.
And then we're deploying a really significant amount of money to pushing innovation forward and getting to the right regulation on tech, regulation that preserves America's strength. We are committed to doing that not only this year, but for the next decade. So this is a big effort for us, and we thought it'd be a good idea to talk about it on the podcast. Yeah, thank you. That was great.
And the key point there at the end is worth double underlining, I think, which is long-term commitment. There have been times with tech specifically where people kind of cannonballed their way onto the political scene with large money bombs. Yeah. And maybe they were just single-issue, or whatever, but they were in and out; they thought they could have short-term impact, and then two years later they're gone. We're thinking about that very differently. Yeah, and that's why I brought up the historical lens. We really think that if America's going to be America in the next hundred years, we have to get this right. Yep. Good.
Okay, we're going to unpack a lot of what you talked about and go into more detail. So I will get going on the questions, and again, thank you, everybody, for submitting questions on X. We have a great lineup today. I'm going to combine a bunch of these questions because there were some themes. So Jared asks: why has tech been so reluctant to engage in the political process, both at the local and national level, until now? And then Kate asks, interestingly, the opposite question, and I find this juxtaposition very interesting because it gets to how we've gotten to where we are. Kate asks: tech leaders have spent hundreds of millions lobbying in DC (right, the opposite point). In your opinion, has it worked? And what should we be doing differently as an industry when it comes to working with DC?
And so I wanted to juxtapose these two questions because I actually think they're both true. And the way that they're both true is that there is no single tech, right? Yep. To your point, Ben, there is no single tech, and maybe once upon a time there was. I would say my involvement in political efforts in this domain started 30 years ago, so I've seen a lot of the evolution over the last three decades. I was in the room for the founding of TechNet, which is one of the legacy tech policy groups, founded by John Chambers and John Doerr. So I've seen a lot of twists and turns on this over the last 30 years. And the way I would describe it is, as Ben said: look, was there a divergence, a real difference of view, between big tech and little tech 20, 30 years ago? Yes, there was. It's much wider now; that whole thing has really gapped out. And you probably remember, big tech companies in the 80s and 90s often didn't really do much in politics. Most famously Microsoft: probably everybody, including people at Microsoft, would say they underinvested, given what happened with the antitrust case that unfolded.
Yeah, actually, the one issue we were united on was stock option accounting, on which, interestingly, we were against Warren Buffett. And Warren Buffett was absolutely wrong on it, and won. It actually very much strengthened the tech monopolies, so it did the opposite of what people in Silicon Valley wanted, and I think of what people in Washington, DC, and in America would have wanted. It made these monopolies stronger, using their market cap to further strengthen their monopoly, because we moved away from stock options to something that's too esoteric to get into here. Let's just say, trust me, it was bad. Yes, yes, it was very good for big companies, very bad for startups. So, yeah,
and actually, that's another thing that happened in the 2000s. There's a fundamental characteristic of the tech industry, and in particular of tech startups and tech founders, and Ben and I would include ourselves in that group, which is that we are idiosyncratic, disagreeable, iconoclastic people. And so there is no tech startup association. Every other industry group in the country has an association with offices in DC and lobbyists and major financial firepower, under names like the MPAA in the movie industry and the RIAA in the record industry and the National Association of Broadcasters and the National Oil and Gas Association and so forth.
So every other industry has these groups where the industry participants come together and agree on a policy agenda, hire lobbyists, and put a lot of money behind it. The tech industry, especially the startups, has just never been good at agreeing on a common platform. And in fact, Ben, you just mentioned the stock option accounting thing. That's actually my view of what happened at TechNet, which was an attempt to get the startup founders and the new, dynamic tech companies together.
But the problem was, we couldn't agree on anything other than basically two issues: we could agree on stock option expensing, and we could agree on carried interest for venture capital firms. Yeah, the carried interest tax treatment. And so what ended up happening, again in my view, was that TechNet early on got anchored on these, I would say, pretty esoteric accounting and financial issues, and just could not come to agreement on many other issues. And I think a lot of attempts to coordinate tech policy in the Valley have had that characteristic.
And then, quite honestly, the other side of it, which Ben highlighted but I want to really underline, is just that the world has changed. Up until around 2010, I think you could argue that politics and tech were just never that relevant to each other. For the most part, tech companies made tools; those tools got sold to customers, who used them in different ways. And so, how do you regulate database software or an operating system or a word processor or a router? It's like regulating a power drill or a hammer, right? Yeah, exactly, right. Exactly. Like, what are the appropriate power drill regulations? It just wasn't that important.
And then, look, this is where I think Silicon Valley deserves its share of the blame for whatever's gone wrong. I think we all just never actually thought it was that important to really explain what we were doing and to be really engaged in the process out there. And the other thing that happened was that there was a love affair for a long time: a view that tech startups are purely good for society, tech is purely good for society, and there are really no political implications to tech. And by the way, this actually continued, interestingly, up through 2012.
People now know all the headlines, that social media is destroying democracy and all these things that really kicked into gear after 2015, 2016. But even in 2012, when social media had become very important in the election, the narrative in the press was almost uniformly positive. Very specifically, it was that social media is protecting democracy by making sure that certain candidates get elected. And by the way, there were literally headlines in newspapers and magazines that today are very anti-tech but were very pro-tech at that point, because the view was that tech helped Obama get reelected. And then the other thing was the Arab Spring. There was this moment where it was like: tech is not only going to protect democracy in the US, it's going to protect democracy all over the world.
Facebook and Google were viewed at the time as the catalyst for the Arab Spring, which was, of course, going to bring a flowering of democracy to the Middle East. It didn't work out that way, by the way. Did not work out that way. So anyway, the point is: it's only in the last 10, 12 years that everything has kind of come together. All of a sudden, people in the policy arena are very focused on tech, people in the tech world have very strong policy and politics opinions, and the media weighs in all the time. And by the way, none of this is a US-only phenomenon. We'll talk about other countries later on, but these issues are playing out globally in many different ways.
But one thing I would add: I do a fair amount in DC on the non-political side, and when I'm in meetings involving national security or intelligence or civil policy of whatever kind, it's striking how many topics that you would not think are tech topics end up being tech topics. And it's just because when the state exercises power now, it does so through technologically enabled means. And when citizens resist the state or fight back against the state, they do so with technologically enabled means.
So, you know, I sometimes say we're the dog that caught the bus on this stuff, right? We all wanted tech to be important in the world; it turns out tech is important in the world; and it turns out that things that are important in the world end up getting pulled into politics. Yeah, I think that's right. On the second part of the question, why tech has been so ineffective despite pouring all that money into it, I think there are a few important issues. One is really arrogance, in that I think we in tech, and a lot of the people who went in, were like: oh, we're the good guys, we're for the good, everybody will love us when we get there, and we can just push our agenda
on the policymakers, without really putting in the time and the work to understand the issues and the things you face as somebody in Congress or somebody in the White House trying to figure out what the right policy is. We are coming at this from our cultural values: we take a long view of relationships, and we try never to be transactional. I think that's especially important on policy, because these things are massively complex. We understand our issues and our needs, but we have to take the time to understand the issues of the policymakers, and make sure we work with them to come up with a solution that is viable for everyone. So I think that's thing one, and tech has been very bad on that. The second one is that they've been partisan where it's been not necessary, or not even smart, to be partisan. People have come in with whatever political bent they have, mostly Democratic, and said: okay, we're going to go in without understanding you, and only work with Democrats because we're Democrats, this kind of thing.
And our approach is: we are here to represent tech. We want to work with policymakers on both sides of the aisle. We want to do what's best for America. We think that if we can describe that correctly, then we'll get support from both sides. And that's just a really different approach. So hopefully that's right, and hopefully we can make progress. Okay, good. Let's go to the next question. This is again a two-part question.
So she asks: in what ways do you see the relationship between Silicon Valley and DC evolving in coming years, particularly in light of recent regulatory efforts targeting tech giants? We'll talk about TikTok later on, but there are obviously big flashpoint events happening right now. By the way, the DOJ also just filed a massive antitrust lawsuit against Apple. Yeah, tech topics are very hot right now in DC. So how do we see the relationship evolving? Right.
That one's an interesting one. One of the things I've talked about is that a lot of little tech, I think, is very much in alignment with some of the things the FTC is doing, but we would probably go after a different set of practices and behaviors of some of the tech monopolies. It just shows why more conversation is important on these things, because what we think is the abuse of the monopolies and what's in the lawsuit are, I would say, not exactly the same thing. Well, let's talk about that for a moment, because this is a good case study of the dynamics here.
So the traditional free market, libertarian view is very critical of antitrust theory in general, and it's certainly very critical of the current prevailing antitrust theories, which are more expansive and aggressive than the ones of the last 50 years, as shown in things like the Apple lawsuit and many other recent actions. For people in business, there's a reflexive view that basically says business should be allowed to operate. But then, very specifically, there are certainly people who hold the view that any additional involvement of the political machine, especially the prosecutorial machine, in tech is invariably going to make everything in tech worse.
And so, yeah, they sue Apple today, and maybe you're happy because you don't like Apple, because they abused your startup or whatever. But if they win against Apple, they're just going to keep coming and coming and do more and more of these. The opposing view says: no, actually, to your point, the interests of big tech and little tech have really diverged. And if there is not strong and vigorous investigation and enforcement, and ultimately things like the Apple lawsuit, these companies are going to get so powerful that they may be able to really seriously damage little tech for a very long time.
So maybe, Ben, talk a little bit about how we think through that (we even debate this inside our firm): how to process it, what you think, and where you think those lines of argument are taking us. Yeah. So look, full disclosure: when we were at Netscape, we were certainly on the side of little tech against big tech. Microsoft at that time had a 97% market share on the desktop, and it was very, very difficult to innovate on the desktop. It was just bad for innovation to have them in that position of power. And I think that's happened on the smartphone now, particularly with Apple. The Epic case and the Spotify case are really great examples of that, where Apple is essentially saying: I am fielding a product that's competitive with Spotify, and I am charging Spotify a 30% tax on their product. That seems unfair.
Just from a consumer standpoint, from the standpoint of the world, it does seem like using monopoly power in a very aggressive way. I think it's certainly against our interests, and the interests of new companies, for the monopolies to exploit their power to that degree. Now, when the government gets involved, it's not going to be clean and surgical, like, okay, here's exactly the change that's needed.
But with these global businesses with tremendous lock-in, you have to at least have the conversation and say: okay, what is this going to do to consumers if we let it run? And we need to represent that point of view from the small tech perspective. Yeah. And the big tech companies are certainly not doing us favors right now; they're certainly not acting in ways that are pro-startup.
I think we can say that as a general matter. No, no, quite the opposite. One of the ideas I kick around a lot is that it feels like any company is either too scrappy or too arrogant. Yeah. But never in the middle. Yeah. Right: you're the underdog or you're the overdog, and there are not a lot of reasonable dogs.
Exactly. Exactly. Yeah. So there's an inherent tension there. It seems very hard for these companies to reach a point of dominance and not figure out some way to abuse it. And I think you touch on an important point, which is that in representing little tech, we're not a pure libertarian, anti-regulatory force here. We think we need regulation in places.
We certainly need it in drug development. We certainly need regulation in crypto and financial services; the financial services aspect of crypto is very, very important, and it's very important to the industry that it be strong in America, with a proper regulatory regime. So we're not anti-regulation. We're pro the kind of regulation that will make both innovation strong and the country strong.
Yeah. And we should also say: look, when we're advocating on behalf of little tech, there's obviously self-interest as a component of that, because we're a venture capital firm and we back startups. So there's obviously a straight financial interest there.
I will say, and I think, Ben, you'd agree with me, that we also feel philosophically this is a very pro-America position, a very pro-consumer position. And the reason is very straightforward. As you've said many times in the past, Ben, the motto of any monopoly is, or, what's the motto? "We don't care, because we don't have to." Right.
Exactly. And we've all had that experience, if you've called customer service when one of these monopolies has kicked you off their platform. Yes, exactly. Yes, exactly. There is something in the nature of monopolies where, if they no longer have to compete and they're no longer disciplined by the market, they basically go bad.
And then how do you prevent that from happening? The way you prevent it is by forcing them to compete. In some cases, they compete with each other, although often they collude with each other, which is another thing; monopoly and cartel are kind of two sides of the same coin.
But really, at least in the history of the tech industry, it's when they're faced with startup competition, when the elephant has a terrier at his heels, nipping at him, taking increasingly big bites out of his foot, that big companies actually act and do new things. And without healthy startup competition... there are many sectors of the economy where it's just very clear now that there's not enough startup competition, because the incumbents that everybody deals with on a daily basis are just practically intolerable.
And it's not in anybody's interest, ultimately, from a national policy standpoint, for that to be the case. Things can get bad: it's to the benefit of the big companies to preserve those monopolies, but it's very much not to anybody else's benefit. Yeah, exactly. Exactly.
Yeah, which, I would say, is such a big impetus behind our political activity. Yeah, that's right. Okay, we'll keep going. Now we're going to be future-looking: in what ways do you see the relationship between Silicon Valley and DC evolving in the coming years? And specifically, again, we're not going to be making partisan recommendations here, but there is an election coming up, and it is a big deal. Both what happens in the White House and what happens in Congress is going to have big consequences for everything we've just been discussing.
So how do we see the upcoming election affecting tech policy? Why don't you start? Yeah, well, I think there are several issues that end up being really important to educate people on now, because whatever platform you run on, as a Congressperson or as a president, you want to live up to that promise when you get elected. And so a lot of the positions that will persist over the next four years are going to be established now.
In crypto in particular, we've been very active on this. We've made a big donation to something called the Fairshake PAC, which works on this, identifying for citizens: okay, which politicians are on which side of these issues? Who are the flat-out anti-crypto, anti-innovation, anti-blockchain, anti-decentralized-technology candidates? Let's at least know who they are, so we can tell them we don't like it, and tell all the people who agree with us that we don't like it.
And a lot of it ends up being: look, we want the right regulation for crypto. We've worked hard with policymakers to help them formulate things that will prevent scams and prevent nefarious uses of the technology for things like money laundering, and then enable the good companies, the companies that are pro-consumer, helping you own your own data rather than have it owned by some monopoly corporation that can exploit it or just lose it, you know, get broken into.
And so now you have identity problems and so forth. Regulation can also help create a fair economy for creatives, so that there isn't a 99% take rate on the things you create on social media, and these kinds of things. So it's just important, I think, to educate the populace on where every candidate stands on these issues, and we're really, really focused on that. And the same is true for AI, and the same is true for bio.
I'll talk a little more about the election in a moment, but I'd also add that it's not actually the case that there's a single party in DC that's pro-tech and a single party that's anti-tech. Definitely not. There's not, and by the way, if that were the case, it might make life a lot easier. Yes. But it's not the case. I'll give a thumbnail sketch of at least what I see when I'm in DC, and see if you agree with this.
So, as we've said, Democrats are much more fluent in tech. I think that has to do with who their elites are. It has to do with a very long-established revolving door, and I mean that in both a positive and a pejorative sense, between the tech companies and the Democratic Party: Democratic politicians, political offices, White House offices. There's just a lot more integration.
The big tech companies tend to be very Democratic, which you see in all the donation numbers and voting numbers. So there are just a lot more tech-fluent, tech-aware Democrats, especially in powerful positions. Many of them have actually worked in tech companies. Just as an example, the current White House Chief of Staff is a former board member at Meta, where I'm on the board. So there's a lot of connective tissue there.
Having said that, the current Democratic Party, in particular certain of its more radical wings, has become extremely anti-tech, to the point of being arguably, in some cases, outright anti-business, anti-capitalism. So there's a real back and forth there. Republicans, on the other hand, in theory, and as the stereotype would have you believe, are inherently more pro-business and more pro-free-markets, and should therefore be more pro-tech.
But I would say there again, it's a mixed bag. Number one, a lot of Republicans basically think of Silicon Valley as all Democrats: Silicon Valley is all Democrats, we're Republicans, so they're de facto the enemy. They hate us, they're trying to defeat us, they're trying to defeat our policies, so they must be the enemy. So there's some combination of distrust and fear and hate on that front, and again, much less connective tissue. There are many fewer Republican executives at these companies, which means there are many fewer Republican officials or staffers who have tech experience. So there's a lot of mistrust. And of course there have been flashpoint issues around this lately, like social media censorship, that have really exacerbated the conflict.
And then the other thing is, there are very serious policy disagreements. There again, there are at least wings of the modern Republican Party that are actually quite economically interventionist. The term of the moment is industrial policy. Yeah, right. There are Republicans who are very much in favor of a much more interventionist government approach toward dealing with business, and in particular dealing with tech. So I'd say this is not an either-or thing; there are real issues on both sides. The way we think about it, therefore, is that there's a real requirement to engage on both sides. There's a real requirement, Ben, to your point, to educate on both sides. And if you're going to make any progress on tech issues, there's a real need for a bipartisan approach, because you do have to actually work with both sides.
Yeah, and I think that's absolutely right. And just to name names a little: on the Democratic side, you've got people like Ritchie Torres out of the Bronx, and by the way, a huge swath of the Congressional Black Caucus, who see that crypto is a real opportunity to equalize the financial system, which has historically been, documentedly, racist against a lot of their constituents, and also a chance for the creatives, whom they represent a lot, to get a fair shake. And then on the other hand, you have Elizabeth Warren, who has taken a very totalitarian view of the financial system and is moving to consolidate everything in the hands of a very small number of banks and basically control who can and cannot participate in finance.
So these are just very, very different views out of the same party, and I think we need to make the specific issues really, really clear. Yeah, and we could spend a long time also naming names on the Republican side. So, yes, which we'll do later. Actually, I should do it right now, just to make sure we're fair on this. There are Republicans who are full-on pro-free-market, who are very opposed to all current government efforts to intervene in markets like AI and crypto. By the way, many of those same Republicans are also very negative on any antitrust action; they're very ideologically opposed to antitrust.
And so they would also be opposed to things like the Apple lawsuit that a lot of startup founders might actually like. And then on the flip side, you have folks like Josh Hawley, for example, who are, I would say, quite vocally irate at Silicon Valley and very much in favor of much more government intervention and control. I think a Hawley administration, just as an example, would be extremely interventionist in Silicon Valley and very pro-industrial-policy, both setting goals and having government management of more of tech, but also taking much more dramatic action against a perceived or real enemy.
So, same kind of split. Anyway, I wanted to go through that. This is the long, winding answer to the question of how the upcoming election will affect tech policy, which is: look, there are real issues with the Biden administration, in particular with the agencies and with some of the affiliated senators, as we've just described. So there are certainly issues where, under a Trump administration, the agencies would be headed by very different kinds of people. Having said that, it's not that a Trump presidency would necessarily be a clean win; there are many people in that wing who might be hostile in different ways, or actually might be hostile in some cases in the same ways.
Yeah, and by the way, Trump himself has been quite the moving target on this. He tried to ban TikTok and now he's very pro-TikTok. He has been negative on AI; he was originally negative on crypto and is now positive on crypto. So it's complex. Which is why I think the foundation of all of this is education, and why we're spending so much time in Washington and so forth: to make sure we communicate all that we know about technology, so that at least the decisions the politicians make are highly informed. Good. Okay. So moving forward.
So, three questions in one. Alex asks: as tech regulation becomes more and more popular within Congress, which is happening, do you anticipate a lowering in general of the rate of innovation within the industry? Number two, Tyler asks: what is a key policy initiative that, if passed in the next decade, could bolster the US for a century? And then Elliot Parker asks: what's one regulation that, if removed, would have the biggest positive impact on economic growth? Yeah. So I think, and Ben, chime in if you disagree, I don't know that there's a single regulation or a single law or a single issue. There are certainly individual laws or regulations that are important, but the thematic thing is a much bigger problem; the thematic thing is what matters. The things that are coming are much more serious than the things that have been. I think that's correct. Okay. We'll talk about that. Yeah, go ahead.
Yeah. I mean, if you look at the current state of regulation, if it stayed here, there's not anything we feel a burning desire to remove, in the same way that things that are on the table could be extremely destructive. Basically, look, if we ban large language models, or large models in general, or force them to go through some kind of government approval process, or if we ban open source technology, that would be devastating. It would basically take America out of the AI game, make us extremely vulnerable from a military standpoint, and make us extremely vulnerable from a technology standpoint in general. Similarly, if we don't get proper regulation around crypto, trust in the system and the business model is going to be in jeopardy, in that America is not going to be the best place in the world to build crypto companies and blockchain companies, which would be a real shame. The analog would be the creation of the SEC after the Great Depression, which really helped put trust into the US capital markets. Trust in the blockchain system, as a way to invest, participate, be a consumer, be an entrepreneur, is really important and necessary, and it's very important to get those rules right. Okay. And then, speaking of which, let's move straight into the specific issues and expand on that.
So Lenny asks: what form do you think AI regulation will take over the next two administrations? AB Sakandi asks: will AI regulation result in a concentrated few companies, or an explosion of startups and new innovation? E-Ray asks: how would you prevent the AI industry from being monopolized, centralized with just a few tech corps? And then our friend Befjizos asks: how do you see the regulation of AI compute and open source models realistically playing out? Where can we apply pressure to make sure we maintain our freedom to build and own AI systems?
It's really interesting, because there's a regulatory dimension of that and a technological dimension, and they do intersect. If you look at what Big Tech has been trying to do: they're very worried about new competition, to the point where they've taken it upon themselves to go to Washington and try to outlaw their competitors. If they succeed at that, then I think you get super-concentrated AI power, making the concentrated power of social media or search or so forth really pale in comparison. I mean, it would be very dramatic if there were only three companies allowed to build AI, and that's exactly what they're pushing for. So in the regulatory world where Big Tech wins, there are very few companies doing AI: probably Google, Microsoft, and Meta.
Microsoft, you know, basically has full control of OpenAI, as they've demonstrated. They have the source code, they have the weights; Satya went as far as saying, we own everything. And they also control who the CEO is, as they demonstrated beautifully. So if you take that view, it would all be owned by three, maybe four companies. If you follow the technological dimension, though, I think what we're seeing play out has been super exciting. We were all wondering: would there be one model to rule them all? And even within a company, I think we're finding that there's no current architecture, a transformer model, a diffusion model, and so forth, that's going to become so smart in itself that once you make it big enough, it just knows everything and that's that.
What we've seen is that even the large companies are deploying a technique called mixture of experts, which implies you need different architectures for different things, they need to be integrated in a certain way, and the system has to work as a whole. And that opens the aperture for a lot of competition, because there are many, many ways to construct a mixture of experts and to architect every piece of it. We've seen little companies like Mistral field models that are highly competitive with the larger models very quickly.
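To make the mixture-of-experts idea above concrete, here is a toy sketch in Python. The "experts" and the hand-written gating rule are invented purely for illustration; in real systems both the router and the experts are large learned networks, trained jointly.

```python
import math

def softmax(scores):
    # Turn raw gate scores into weights that sum to 1.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two toy "experts", each just a different function of the input.
experts = [
    lambda x: 2.0 * x,   # expert 0: a linear specialist, say
    lambda x: x * x,     # expert 1: a quadratic specialist, say
]

def gate(x):
    # A real router is learned; this hand-written rule simply prefers
    # expert 0 for small inputs and expert 1 for large ones.
    return softmax([-x, x])

def moe(x):
    # The model's output is the gate-weighted blend of expert outputs.
    weights = gate(x)
    return sum(w * e(x) for w, e in zip(weights, experts))
```

The point of the sketch is that there are many degrees of freedom here, which expert architectures to use, how the router decides, how the pieces integrate, and each is a place where a competitor can do something different.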
And then there are other factors, like latency, cost, etc., that play into this. And then there's also "good enough": when is a language model good enough? When it speaks English? When it knows certain things? What are you using it for? And then there's domain-specific data: say I've been doing medical research for years, and I've got data around all these genetic patterns and diseases and so forth. I can build a model against that data that's differentiated by the data, and so on.
So I think what we're likely to see is a great Cambrian explosion of innovation across all sectors, big companies, small companies, and so forth, provided that the regulation doesn't outlaw the small companies. That would be my prediction right now. Yeah, and I'd add a bunch of things to this. One is, even on the big-model side, there's been this leapfrogging taking place. OpenAI's GPT-4 was the dominant model not that long ago, and then it's been leapfrogged in significant ways recently, both by Google with their Gemini Pro, especially the version with the so-called long context window, where you can feed it 700,000 words, or an hour of full-motion video, as context for a question, which is a huge advance.
And then there's Anthropic: a lot of people now find their big model, Claude, to be a more advanced model than GPT-4. One assumes OpenAI is going to come back, and this leapfrogging will probably go on for a while. So even at the highest end, at the moment, these companies are still competing with each other; there's still this leapfrogging taking place. And then, Ben, as you articulated very well, there is this giant explosion of models of all shapes and sizes. Our company Databricks just released what looks like a big leapfrog on the smaller-model side; I think it's the best model now in the benchmarks.
And it's so efficient, it will run on a MacBook. Yeah, and they have the advantage that, as an enterprise, you can connect it to a system that gives you not only enterprise-quality access control and that kind of thing, but also the power to do SQL queries with it, and the power to create a catalog so you can have a common, understood definition of all the weird corporate words you have. Like, by the way, one of which is "customer": there are almost no two companies that define "customer" the same way.
And in most companies there are several definitions of "customer" depending on the department: is the customer AT&T, or some division of AT&T, etc., etc. I don't want to literally speak for them, but I think if you put the CEOs of the big companies under truth serum, what they would say is that their big fear is that AI is actually not going to lead to a monopoly for them; it's going to lead to a commodity, to a race to the bottom on price. And you see that a little bit now: people who are using one of the big models' APIs are able to swap to another big model's API from another company pretty easily.
And the main business model for these big models, at least so far, is an API: basically pay per token generated, or per answer. So if these companies really have to compete with each other, it may be that this is actually a hyper-competitive market. It may be the opposite of a search market or an operating system market; it may be a market with continuous competition, improvement, and leapfrogging, and then constant price competition. And of course the payoff from that, to everybody else in the world, is an enormously vibrant market with constant innovation and constant cost optimization, where the entire world, as the customer downstream of this, benefits from a kind of hyper-competition that could potentially run for decades.
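The pay-per-token economics above reduce to a few lines of arithmetic. All the model names, prices, and the output-token multiplier below are invented for illustration, not real vendor pricing; the point is just that when models are swappable, a customer can re-rank providers on cost at any moment, which is what makes the market hyper-competitive.

```python
# Hypothetical prices in dollars per one million tokens (made up).
PRICES = {"model_a": 10.00, "model_b": 3.00, "model_c": 0.50}

def cost(model, input_tokens, output_tokens, output_multiplier=3.0):
    # Many APIs charge more for generated (output) tokens than for
    # prompt tokens; output_multiplier is an assumed ratio.
    per_million = PRICES[model]
    billable = input_tokens + output_tokens * output_multiplier
    return billable * per_million / 1_000_000

def cheapest(input_tokens, output_tokens):
    # With interchangeable APIs, the buyer just picks the lowest bill.
    return min(PRICES, key=lambda m: cost(m, input_tokens, output_tokens))
```

For a 1,000-token prompt and 500-token answer, the hypothetical "model_a" bill is $0.025, and the buyer can switch to "model_c" with one line of code, which is exactly the commodity dynamic described above.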
And so I think if you put those CEOs under truth serum, what they would say is that that's actually their nightmare. Well, that's why they're in Washington. That's why they're in Washington. That is what's actually happening; that is the scenario they're trying to prevent. They are actually trying to stifle competition. And by the way, I will tell you a funny thing: tech is so ham-handed, so historically bad at politics, that I think some of these folks think they're being very clever in how they go about this. They show up in Washington with a kind of public-service narrative, or end-of-the-world narrative, or whatever it is, and I think they think they're going to very cleverly trick people in Washington into giving them cartel status,
and that the people in Washington won't realize until it's too late. But it turns out people in Washington are actually quite cynical. They've been lobbied before. Exactly. And I get this from them off the record a lot, especially after a couple of drinks, which is basically: if you've been in Washington for longer than two minutes, you have seen many industries and many big companies come to Washington wanting monopoly or cartel-style regulatory protection. If you're in Washington, you've seen this play out, and in some cases, the folks who have been there a long time have seen it dozens or hundreds of times.
And so my sense is that nine months ago or so, there was a moment where it seemed like the big tech companies could get away with this. I'm still concerned, and we're still working on it, but I think the edge has come off a little bit, because the cynicism of Washington is, in this case, actually correct: I think they're onto these companies. And look, there are basically two unifying issues in Washington. One is they don't like China, and the other is they don't like Big Tech. So this is a winnable war, on behalf of startups and open source and freedom and competition.
And so, yeah, I'm worried, but I'm feeling much better about it than I was nine months ago. Yeah, well, we had to show up. I mean, that's the other thing. It's taught me a real lesson, which is: you can't expect people to know what's in your head. You've got to go see them. You've got to put in the time. You've got to say what you think. And if you don't, you don't have any right to wring your hands about how bad things are. Yeah.
And I just want to note one more thing: you mentioned the big companies being Microsoft, Google, and Meta. It's worth noting that Meta is on the open source side of this, and Meta is actually working quite hard at it. This is a big deal, because it's very contrary to the image I think people have of Meta over prior issues, correctly so. But on the open source AI topic, and AI freedom to innovate, at least for now, Meta is, I think, very strongly on that side. Yeah.
Yeah, I think that's right. It's actually a very interesting point, and I think essential for people to understand, that the way Meta is thinking about this, and the way they're actually behaving and executing, is very similar to how Google thought about Android: their main concern was that Apple not have a monopoly on the smartphone, not so much that they make money on the smartphone themselves. Because a monopoly on the smartphone for Apple would mean that Google's other business was in real jeopardy.
And so they ended up being kind of an actor for good, and Android has been an amazing thing for the world, including getting smartphones into the hands of people all over the world who wouldn't have been able to get them otherwise. And Meta is making a very similar effort: in order to make sure they have AI as a great ingredient in their products and services, they're willing to open source it and give their very, very large investment in AI to the world.
So entrepreneurs and everybody else can keep them competitive, even though they don't plan to be in the AI business per se, in the same way that Google is in the business of smartphones to some extent, but it's not their key business. Meta doesn't have a plan to be in the AI business as such; maybe to some extent they will be too, but that's not the main goal.
And then I would put one other company on the concerning side of this, though it's truly too early to tell where they're going to shake out: Amazon just announced they're investing a lot more money in Anthropic. So I think Amazon is now basically to Anthropic what Microsoft is to OpenAI. Yep. Yeah. And Anthropic is very much in the group of Big Tech, the new incumbent Big Tech, lobbying very aggressively in DC for regulation, for regulatory capture.
And so I think it's an open question whether Amazon is going to pick up that agenda as openly, as Anthropic in essence becomes effectively a subsidiary of Amazon. Yeah. Well, this is another place where we're on the side of Washington DC and the current regulatory motion. The big tech companies have done this thing which we thought was illegal, because we observed it occur at AOL and people went to jail.
What they've done is invest huge amounts of money in startups, and Microsoft and Amazon and Google are all doing it, billions of dollars, with the explicit requirement that those companies then buy GPUs from them, not at the discount they'd ordinarily get but at a relatively high price, and be in their clouds. And in the Microsoft case it's even more aggressive: give me your source code, give me your weights, which is extremely aggressive.
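The round-trip arrangement just described reduces to simple arithmetic. The numbers below are invented for illustration; the concern is that a chunk of the cash an investor puts into a startup comes straight back as that same investor's own revenue.

```python
def round_trip(investment, fraction_spent_back):
    # The startup is required to spend some fraction of the investment
    # on the investor's cloud and GPUs, which the investor then books
    # as revenue on its own income statement.
    revenue_back = investment * fraction_spent_back
    net_cash_out = investment - revenue_back
    return revenue_back, net_cash_out
```

On a hypothetical $1B investment with 80% required to be spent back, the investor records $800M of revenue while only $200M of cash actually leaves, which is why the accounting treatment draws scrutiny.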
So they're moving money from the balance sheet to their P&L in a way that, at least from an accounting standpoint, our understanding was illegal, and the FTC is looking at that now; it'll be interesting to see how that plays out. Yeah. Well, that's one area, and that one is round-tripping. Another issue people should watch is consolidation. If you own half of a company and you get to appoint the management team, right, is that not a subsidiary?
There are rules on that. At what point, when you own the company's equity, you own the intellectual property of the company, and you control the management team, is that not your company? Yeah. And at that point, if you're not consolidating it, is that legal? And so the SEC is going to weigh in on that. And of course, to the extent that some of these companies have nonprofit components, there are tax implications to the conversion to for-profit, and so forth. So the stakes in the legal, regulatory, and political game being played here, I think, are quite high. Quite high.
Yes. And Ben and I, as has been mentioned, are old enough that we do know a bunch of people who went to jail. So some of these issues turn out to be serious. So Gabriel asks: what would happen if there was zero regulation of AI, the good, the bad, and the ugly? And this is actually a really important topic. We're vigorously arguing in DC that basically anybody should be completely free to build AI and deploy AI. Big companies should be allowed to do it, small companies should be allowed to do it, open source should be allowed to do it. And look, a lot of the regulatory push we've been discussing, which comes from the big companies and from the activists, is to prevent that from happening and put everything in the hands of the big companies. So we're definitely on the side of freedom to innovate. Having said that, that's not the same as saying no regulations of anything ever, and so we're definitely not approaching this through a kind of hardcore libertarian lens.
The interesting thing about regulation of AI is that when you go down the list of things that reasonable, thoughtful people on both sides of the aisle consider to be concerns around AI, the implications they're worried about are less about the technology itself and more about the use of the technology in practice, either for good or for bad. So, Ben, you brought up, for example: if AI is making decisions on things like granting credit or mortgages or insurance, then there are very serious policy issues around how those answers are arrived at and which groups are affected in different ways. The flip side is, if AI is used to plan a crime, a bank robbery or a terrorist attack, that's obviously something that people focused on national security and law enforcement are very concerned about.
Look, our approach to this is actually very straightforward: it seems completely reasonable to regulate uses of AI that would be dangerous. Now, the interesting thing about that is, as far as I can tell, and I've been talking to a lot of people in DC about this, every single use of AI to do something bad is already illegal under current laws and regulations. It's already illegal to be discriminatory in lending. It's already illegal to redline in mortgages. It's already illegal to plan bank robberies. It's already illegal to plan terrorist attacks. These things are already illegal, and there are decades or centuries of case law, regulation, and law enforcement and intelligence capabilities around all of them.
And so, to be clear, we think it's completely appropriate that those authorities be used, and if new laws or regulations are needed for other bad uses, that makes sense too. But the basic point is that the issues people are worried about can be contained and controlled at the level of the use, as opposed to somehow saying, as some of the doomer activists do, that we need to literally prevent people from doing linear algebra on their computers. Yeah. Well, I think it's important to point out: what is AI? It turns out to be math, specifically a kind of mathematical model. For those of you who studied math in school, you can have an equation like y equals x squared plus b, or something.
And that equation can model the behavior of something in physics, or something in the real world, so that you can predict something happening — the speed at which an object will drop, and so forth. AI is that, but with huge computing power applied, so you get much bigger equations: instead of two or three or four variables, you could have 300 billion variables. The challenge, of course, is that if you get into regulating math, and you say math is okay up to a certain number of variables, but at the two-billion-and-first variable it becomes dangerous,
then you're in a pretty bad place, in that you're going to prevent everything good about the technology from happening along with anything you might think is bad. So you really do want to be in the business of regulating the applications of the technology, not the math. It would be like saying that because nuclear weapons are extremely dangerous, we should put parameters around what physics you can study — literally, in the abstract, you can no longer study physics in Iran, because somebody there might then be able to build a nuke. That would be the conclusion.
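To make the "AI is just math with parameters" point concrete, here's a purely illustrative sketch of our own (not anything from the discussion): a two-parameter model, y = w·x + b, fit to data by nudging the parameters. A frontier model is the same idea with hundreds of billions of parameters instead of two.

```python
# A model is just a parameterized equation. A tiny linear model:
# y = w * x + b, with two learnable parameters (w, b).
def predict(x, w, b):
    return w * x + b

def train_step(points, w, b, lr=0.01):
    # One pass of gradient descent on mean squared error:
    # accumulate gradients, then nudge each parameter downhill.
    dw = db = 0.0
    for x, y in points:
        err = predict(x, w, b) - y
        dw += 2 * err * x
        db += 2 * err
    n = len(points)
    return w - lr * dw / n, b - lr * db / n

# Data generated by y = 2x + 1; training recovers those values.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = train_step(data, w, b)
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

The "number of variables" being debated as a regulatory threshold is just the count of values like `w` and `b` here, scaled up by nine orders of magnitude.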
And that is what big tech has been pushing for — not because they want safety, but because, again, they want a monopoly. So I think we have to be very, very careful not to do that. I do think there will probably be some cases that come up, enabled by new applications, that do need to be regulated. For example, I don't know that there's a law against recreating something that sounds exactly like Drake and putting out a song that sounds like a Drake song. Maybe that should be illegal.
I think those things need to be considered for sure, and there's certainly danger in that. I also think there are many technological solutions, not just regulatory solutions, for things like deep fakes — things that help us establish what's human and what's not. And interestingly, a lot of those are viable now based on blockchain and crypto technology. Yeah, so let's do the voice thing real quick. I believe this to be the case: it is not currently possible to copyright a voice.
Yeah, right. You can copyright lyrics, you can copyright music, you can copyright melodies and so forth, but you can't copyright a voice. And that seems like a perfect example where it probably is a good idea to have a law that lets you copyright your voice. Yeah, I feel that way — particularly if people call their voice Drake squared or something, right? It could get very dodgy. Oh yeah — again, you get into the details:
you can trademark your name, so you could probably prosecute on that. But having said that, this also gets at the complexity of these things. There is actually an issue with copyrighting a voice, which is: how close do you have to get? There are a lot of people with a lot of voices in the world — how close to the voice of Drake do you have to get before you're violating the copyright? And what if my natural voice actually sounds like Drake? Am I now in trouble? Right.
And what about a Jamie Foxx imitating Quincy Jones kind of thing, right? Exactly. So anyway, look, I'm agreeing violently with you on this: that seems like a great topic that needs to be taken up and looked at seriously from a legal standpoint. It's an issue that's obviously elevated by AI, but it's a general question of what you can and can't copyright or trademark, which has a long history in US law.
Yeah, for sure. So let's talk about the decentralization and blockchain aspects of this. Goose asks: how important is the development of decentralized AI, and how can the private sector catalyze prudent, pragmatic regulations to ensure the US retains innovation leadership in this space? So Ben, let's talk about decentralized AI — I'll highlight it real quick, and then you can build on it. The default way AI systems are being built today is with supercomputer clusters in a cloud. You'll have a single data center somewhere with 10,000 or 100,000 chips, and a whole bunch of systems that interconnect them and make them all work together. And then you have a company that owns and controls that — and AI companies are raising a lot of money to do exactly this now.
These are very large-scale, centralized operations. To train a state-of-the-art model, you're at $100 million plus for a big one; to train a small one, like the Databricks model that just came out, it's on the order of $10 million. And we all think the big models are going to end up costing a billion dollars and up in the future. So this raises the question: is there an alternate way to do this? The alternate way, we believe strongly, is a decentralized approach — in particular, a blockchain-based approach. It's the kind of thing that blockchain and Web3 methods seem like they would work very well for, and in fact we are already backing companies and startups doing this. And I would say there are at least three obvious layers you could decentralize that seem increasingly important.
So one is the training layer — well, actually, let me say four. There's the training layer, which is building the model. There's the inference layer, which is running the model to answer questions. There's the data layer — Ben, your point on opening up the black box of where the data is coming from — where there should probably be a blockchain-based system in which people who own data can contribute it for training AIs, get paid for it, and have all of that tracked. And then there's a fourth that you alluded to, which is deep fakes. It seems obvious to us what the answer to deep fakes is — and I should pause for a second and say, in my last three months of trips to DC, the number one AI issue politicians are focused on is deep fakes. It's the one that directly affects them. I think every politician who has thought about this has a nightmare scenario: three days before the vote in their reelection campaign, a deep fake goes out of them saying something absolutely horrible, it's so good the voters get confused, and they lose the election on it. I'd say that's the issue with the most potency right now.
And then what a lot of people say, including the politicians, is: therefore, we need a way to detect deep fakes. So either the AI systems need to watermark AI-generated content so you can tell it's a deep fake, or you need scanners — like the scanners being used in some schools now — to try to detect that something is AI-generated. Our view, as both technologists and investors in the space, is that methods of detecting AI-generated content after the fact are basically not going to work, because AI is already too good at this.
By the way, for example, if you have kids in a school that's running one of these scanner programs that's supposed to detect whether your kid wrote the essay or ChatGPT wrote the essay — those really don't work in a reliable way. There are a lot of both false positives and false negatives, and they're very bad. Those are actually very bad ideas. And for the same reason, detection of AI-generated photos and videos and speech is not going to be possible. So our view is that you have to invert the problem: what you have to do instead is have a system in which real people can certify that content about them is real.
And where content has provenance as well. Why don't you go ahead and describe how that would work? Yeah, so one of the amazing things about crypto and blockchain is that it deploys something known as a public key infrastructure, which enables every human to have a key that's unique to them, which they can sign with. So if I was in a video, or in a photo, or I wrote something, I can certify that yes, this is exactly what I wrote, and you cannot alter it into something else — it is exactly that. And then as that thing gets transferred through the world — say it's a song that you sell — you can track its provenance, just as we do, in a less precise way, with a work of art, or with a house, where we track who owned it before you. That's also an easy application on the blockchain.
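As a toy illustration of that public-key signing idea — our own sketch, using deliberately tiny, insecure textbook-RSA numbers; real systems use vetted schemes like Ed25519 through audited libraries — the creator signs a hash of the content with a private key, and anyone holding the public key can verify it. Forging or altering the signature breaks verification.

```python
import hashlib

# Toy RSA signing — illustration only, with tiny insecure primes.
p, q = 61, 53
n = p * q            # 3233: the public modulus
phi = (p - 1) * (q - 1)
e = 17               # public exponent (shared with the world)
d = pow(e, -1, phi)  # private exponent (kept secret by the signer)

def sign(message: str) -> int:
    # Hash the content, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: str, signature: int) -> bool:
    # Anyone with the public key (n, e) can check the signature.
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign("I actually said this.")
print(verify("I actually said this.", sig))      # True
print(verify("I actually said this.", sig + 1))  # False: forged signature rejected
```

Altering the message itself likewise changes the digest and fails verification, which is the property that lets a person certify "this content about me is real and unaltered."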
So that combination of capabilities can make this whole program much more viable in terms of knowing what's real, what's fake, where it came from, where it started, where it's going, and so forth. Going back to the data one — I think that's really, really important, in that one of the things these systems have done that I'd say is dodgy — and there's been big pushback against it, with Elon trying to lock down Twitter, and the New York Times suing OpenAI, and so forth —
is that they've gone out and just slurped in data from all over the internet, and from people's businesses, and trained their models on it. I think there's a real question of whether the people who created that data should have any say in whether the models get trained on it. And blockchain is an unbelievably great system for this, because you can permission people to use it, you can charge them a fee, and it can all be automated. I think training data ought to work this way: there's a data marketplace, and people can say, yes, take this data for free —
I want the model to have this knowledge — or no, you can't have it for free, but you can have it for a fee — or no, you can't have it at all. Rather than what's gone on, which is this very aggressive scraping, where you have these very smart models and these companies making enormous amounts of money off data that certainly didn't belong to them. Maybe it's in the public domain or what have you, but that ought to be an explicit relationship, and it's not today.
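A minimal sketch of what such a data marketplace could look like — the names and structure here are ours, purely illustrative; on a real blockchain the registry and payment log would live in a smart contract. Owners list data under explicit terms (free, paid, or denied), and every training request is checked against those terms and recorded.

```python
# Illustrative data-marketplace sketch: owners set terms, trainers request access.
FREE, PAID, DENIED = "free", "paid", "denied"

registry = {}   # dataset_id -> (owner, terms, fee)
ledger = []     # append-only record of who trained on what, and what is owed

def list_dataset(dataset_id, owner, terms, fee=0):
    # The owner publishes their data under explicit terms.
    registry[dataset_id] = (owner, terms, fee)

def request_training(dataset_id, trainer):
    # A model trainer asks to use the data; the terms decide the outcome.
    owner, terms, fee = registry[dataset_id]
    if terms == DENIED:
        return False  # owner opted out: no scraping, period
    # Free or paid: record the use (and the fee owed, if any).
    ledger.append((trainer, dataset_id, owner, fee if terms == PAID else 0))
    return True

list_dataset("my-blog", "alice", FREE)
list_dataset("my-songs", "bob", PAID, fee=100)
list_dataset("my-diary", "carol", DENIED)

print(request_training("my-blog", "lab"))    # True: free, recorded
print(request_training("my-songs", "lab"))   # True: 100 owed to bob
print(request_training("my-diary", "lab"))   # False: access denied
```

The point of the sketch is the explicit relationship: consent and payment are checked before training, and the record of who used what is permanent, rather than scraping first and litigating later.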
And that's a great blockchain solution — and part of the reason we need the correct regulation on blockchain, and we need the SEC to stop harassing and terrorizing people trying to innovate in this category. So that's the second category. And then you have training and inference. Right now the argument against decentralized training and inference is: well, you need this very fast interconnect, and you need it all to be in one place technologically. And I think that's true for people who have more money than time — startups and big companies and so forth.
But people in academia, who have more time than money, are getting completely frozen out of research. They can't do it — there's not enough money in all of academia to participate in AI research anymore. So having a decentralized approach where you can share all the GPUs across a network — hey, maybe it takes a lot longer to train your network or to serve it, but you can still do your research. You can still innovate, create new ideas and architectures, and test them out at large scale, which will be amazing if we can do it. And again, we need the SEC to stop illegally terrorizing every crypto company and trying to block laws that would help enable this.
Yeah, and you alluded to it — the college thing really matters. We have a friend who is very involved in running one of the big computer science programs at one of the major American research universities. And of course, a lot of the technology we're talking about was developed at American research universities — and Canadian ones, Toronto, and European ones, exactly. Historically, as with every other wave of technology in the last hundred years or so, the research universities across these countries have been the gems, the wellsprings, of a lot of the new technology powering the economy and everything else around us. And this friend said a while ago that his concern was that his university would be unable to fund a competitive AI cluster —
a compute grid that would actually let students and professors at that university work in AI — because it's getting too expensive, and research universities just aren't funded to have capex programs that big. More recently, his concern has become that all the research universities together might not be able to afford it, which means the universities collectively might not be able to have cutting-edge AI work happening on the university side at all. And then I happened to have a conversation in DC — I was in a bipartisan House meeting the other day on these topics.
And one of the — in this case, Democratic — congresswomen asked me the question that always comes up, which is a very serious question: how do you get more members of underrepresented groups involved in tech? And I found myself giving the same answer I always give, which is that the most effective thing you can do is go upstream: you need more people coming out of college with computer science degrees who are skilled, qualified, trained, and mentored to be able to participate in the industry. You and I both came out of state schools with computer science programs, which let us have the careers we've had. So I found myself answering, well, we need more computer science graduates from every group. But in the back of my head I was thinking: and it's going to be impossible, because none of these places are going to be able to afford the computing resources to actually have AI programs in the future. So maybe the government can fix this by just dumping a ton of money on top of these universities.
And maybe that's what will happen, though in the current political environment that seems not quite feasible, for a variety of reasons. The other approach would be a decentralized, blockchain-based approach that everybody could participate in — if it were something the government were willing to support, which right now it's not. So I think there's a really central, vital issue here that's being glossed over by a lot of people and should really be looked at. Yeah, I think it's absolutely critical. And this goes back to our original point: it's so important, for America being what America should be, to get these issues right. And we're definitely in danger of that not happening, because people are taking much too narrow a view of some of these technologies and not understanding their full capabilities. We get into, oh, the AI could say something racist, therefore we won't cure cancer. We're getting into that kind of dumb idea.
We need a tech-forward solution to some of these things, and then the right regulatory approach to make the whole environment work. So let's go to the next phase of this, which is the global implications. I'm going to conjoin two different topics here, on purpose. Michael Frank Martin asks: what could the US do to position itself as the global leader of open source software? Do you see any specific legislation or regulatory constraints that are hampering the development of open source projects? Arda asks a similar question: what would an ideal AI policy for open source software and models look like? And then Sarah Holmes asks the China question: do you think we will end up with two AI tech stacks, the West's and China's, and that ultimately companies will have to pick one side to stay on?
And so, look, this is where you get to the really big geopolitical long-term issue. My understanding of things is as follows: for a variety of reasons, technological development in the West is being centralized in the United States — with some in Canada and some in Europe, although, quite frankly, a lot of the best Canadian and European tech founders are coming to Silicon Valley. Yeah — Yann LeCun, for example, a hero in France, teaches at NYU and works at Meta, both American institutions. So there's an American, or let's say American-plus-European, tech vanguard in the world.
And then there's China. It's actually quite a bipolar situation. The dreams of tech being fully democratized and spreading throughout the world have been realized for sure on the use side, but not nearly as much on the entrepreneurship side or the invention side. And again, immigration is a great virtue for the countries that benefit from it — but the flip side is that it makes other countries less competitive, because their best and brightest are moving to the US. So anyway, we are in a bipolar tech world, and that bipolar tech world is primarily the US and China.
This is not the first time we've been in a bipolar world involving geopolitics and technology. The US and China have two very different systems. The Chinese system has all the virtues and downsides of being centralized; the US system has all the virtues and downsides of being more decentralized. And the two systems have very different views on how society should be ordered, on what freedom means, and on what people should and shouldn't be able to do.
And look, both the US and China have visions of global supremacy — agendas and programs to carry their points of view on the technology of AI, and on the societal implications of AI, throughout the world. The other thing is that in DC it's just crystal clear there's a dynamic now where Republicans and Democrats are trying to leapfrog each other every day on being more anti-China. Our friend Niall Ferguson is using the term Cold War 2.0 — and we're in it whether we want to be or not. We're in a dynamic similar to the one with the USSR 30, 40, 50 years ago.
And to answer Sarah Holmes's question: it's 100% going to be the case that there are two AI tech stacks. There are two AI governance models, two AI deployment systems, two ways in which AI dovetails with everything from surveillance to smart cities to transportation, self-driving cars, and drones — who controls what, who gets access to what, who sees what, and, by the way, the degree to which AI is used as a method of population control. These are very different visions.
And these are national visions and global visions, and there's a very big competition developing. It certainly looks to me like there's going to be a winner and a loser. I think it's overwhelmingly in our best interest for the US to be the winner, and for the US to win, we have to lean into our strengths. The downside of our system is that we are not as well organized and orchestrated top-down as China is. The upside of our system,
at least potentially, is that we're able to benefit from decentralization, from competition, from a market economy, from a private sector — where a much larger number of smart people make lots of small decisions and get to good outcomes, as opposed to a dictatorial system in which a small number of people try to make the decisions. And look, this is how we won the Cold War against Russia: our decentralized system just worked better economically, technologically, and ultimately militarily than the Soviet centralized system.
So it seems fairly obvious to me that we have to lean into our strengths — we'd better lean into our strengths. Because if we think we're just going to be another version of a centralized system, but without the advantages China gets from actually having a centralized system, that seems like a bad formula. So let me pause there, Ben, and see what you think.
Yeah, for sure — I think that would be disastrous. And this is why it's so clear that if there's one regulatory policy we would enact to ensure America's competitiveness, it would be protecting open source. The reason is that, as you said, it enables the largest number of participants to contribute to AI, to innovate, to come up with novel solutions, and so forth. And you're right about China: what's going to happen is they're going to pick one, because they can, and they're going to put all their wood behind that arrow in a way we could never do, because we just don't work that way.
They're going to impose that on their society and try to impose it on the world. And our best counter to that is to put the technology in the hands of all of our smart people. We have so many smart people from all over the world — as we like to say, diversity is our strength. We've got tremendously different points of view and kinds of people in our country, and the more we enable them, the more likely we are to be competitive. And I'll give you a tremendous example of this: if you go back to 2017 and read any foreign policy magazine, there wasn't a single one that didn't say China was ahead in AI.
They have more patents, they have more students going to universities, they're ahead in AI, they're ahead in AI, we're behind in AI. And then ChatGPT comes out, and — oh, I guess we're not behind in AI; we're ahead in AI. The truth of it was that what China was ahead on was integrating AI into their government. And look, we're working on doing a better job of that with American Dynamism. But we're never going to be good at their model — that's the model they're going to be great at, and we have to be great at our model. And if we start limiting ours by outlawing startups and outlawing everybody but the big companies from developing AI, we'll definitely shoot ourselves in the foot. A related and important point, I think, for the safety of the world: when you talk about two AIs — two AI stacks, perhaps — it's very important that countries that aren't America and aren't China can align AI to their values.
And I'll give you one really important example. I've been spending a lot of time in the Middle East. If you look at the history of a country like Saudi Arabia, they're coming from a world of fundamentalism and a set of values they're trying to modernize — they've done tremendous things with women's rights and so forth. But they've still got people who don't want to go to that future so fast, and they need to preserve some of their history in order not to have a revolution or extreme violence. And we're seeing al-Qaeda spark up again in Afghanistan and all these kinds of things — and by the way, al-Qaeda is just as much an enemy of modern Saudi Arabia as it is of America.
So if Saudi can't align an AI to current Saudi values, it could literally spark a revolution in their country. It's very important that the technology we develop not be totally proprietary and closed source — that it be modifiable by our allies, who need to progress at their own pace to keep their countries safe, and keep us safe in doing so. So what we do here has great geopolitical ramifications. If we get into the model that Google and Microsoft are advocating — this Chinese-style model where only a few can control AI — we're going to be in big trouble.
Yeah, and I just want to close on the open source point, because it's so critical. This is where I get extremely concerned about the idea of shutting down open source, which a number of these people are lobbying for very actively. And by the way, I'm going to name one more name: we even have VCs lobbying to outlaw open source, which I find completely staggering. Vinod? Yes — Vinod Khosla, which is just incredible to me. He's a founder of Sun Microsystems, which was in many ways a company built on open source — built on open source Unix out of Berkeley — and which then itself built a lot of critical open source.
And then of course he was there through the dot-com era, and the internet was all built on open source. And Vinod has been lobbying to ban open source AI. He denies that he's been doing this, but I saw him with my own eyes when the US Congressional China Committee came to Stanford — I was in the meeting where he was with 20 or 30 congressmen, lobbying for exactly this. So I've seen him do it myself. And look, he's got a big stake in OpenAI, so maybe it's financial self-interest; or maybe he's a true believer in the dangers — though I think he proved on Twitter that he's not a true believer in the dangers, and I'll explain that. But yeah. So even within little tech, even within the startup world, we are not uniform on this, and I think that's extremely dangerous. Look, what is open source software? It is quite literally the technological equivalent of free speech, which means it's the technological equivalent of free thought. It's the way the software industry has developed many of the most critical components of the modern technological world — and, Ben, as you said earlier, the way to secure those components and make them actually safe and reliable.
And then to have the transparency we've talked about, so you know how these systems work and how they're making decisions. And, to your last point, so you can customize AI for many different environments — so you don't end up with a world with just one or a couple of AIs, but with a diversity of AIs with lots of different points of view and lots of different capabilities. So the open source fight is at the core of this. And of course, the reason the people with an eye toward monopoly or cartel want to ban it is that open source is a tremendous threat to a monopoly or cartel — in many ways it's a guarantee that a monopoly or cartel can't last. But it is absolutely 100% required for the furtherance of, number one, a vibrant private sector, and number two, a vibrant startup sector.
And then, right back to the academia point: without open source, university kids are just not going to be able to learn how the technology works — they'll be completely boxed out. A world where open source is banned is bad on so many fronts that it's incredible anybody's advocating for it. It needs to be recognized as the threat that it is. Yeah. And on Vinod — it was such a funny dialogue between you and him, so I'll give a quick summary. Basically, he was arguing for closed source and you for open source, and his core argument was: this is the Manhattan Project, and therefore we can't let anybody know the secrets.
And you countered by saying, well, if this is in fact the Manhattan Project, then is the OpenAI team locked in a remote location? Do they screen all their employees very carefully? Is it air-gapped? Is there super-high security? Of course none of that is close to true. In fact, they quite surely have Chinese nationals working there, probably some of whom are spies for the Chinese government. There isn't any kind of strong security at OpenAI or at Google or at any of these places — nothing anywhere near the Manhattan Project, where they built a whole city that nobody knew about
so that nobody could get into it. And once you caught him on that, he said nothing. Then he comes back with: well, it costs billions of dollars to train these models — you just want to give that away? Is that good economics? That was his final counterpoint to you. But basically he was saying, I'm trying to preserve a monopoly here — what are you doing? I'm an investor. And I think that's true of all these arguments. Well, the kicker to that story is that three days later, the Justice Department indicted a Chinese national, a Google employee, who stole Google's next-generation AI chip designs — which are quite literally the family jewels for an AI program.
It's, if you stretch the metaphor, the equivalent of stealing the design for the bomb. And that Google employee took those chip designs, loaded them up, and took them to China. And by definition, that means they went to the Chinese government, because there's no distinction in China between the private sector and the government. It's an integrated thing; the government owns and controls everything. So it's 100% guaranteed that that went straight to the Chinese government and the Chinese military. And Google, which, you know, has a big information security team and all the rest of it, did not realize, according to the indictment, that that engineer had been in China for six months.
Yeah, amazing. Well, hold on, it gets better. This is the same Google with the same CEO that refused to sell Google's proprietary AI technology to the US Department of Defense. So they're supplying China with AI while denying the US, which just goes back to: look, if it's not open source, we're never going to compete. We've lost the future of the world right there, which is why it's the single most important issue for sure. Yeah, and you're not going to lock this stuff up. Nobody's locking it up. It's not locked up. These companies are security Swiss cheese. And you can have a debate about the tactical relevance of chip embargoes and so forth, but the horse has left the barn on this, not least because these companies are without a doubt riddled with foreign assets, and they're very easy to penetrate. And so we just have to be, I would say, very realistic about the actual state of play here. We have to play in reality.
We have to win in reality. And that means we need innovation, we need competition, we need free thought, we need free speech. We need to embrace the virtues of our system, and not shut ourselves down in the face of the conflicts that are coming. Another one: Andreas asks, why are US VCs so much more engaged in politics and policy than their global counterparts? And I really appreciate that question, because if that's the question, it means that, boy, VCs outside the US must not be engaged at all, because US VCs are barely engaged. Yeah. And then: what do you believe the impact of this is on both the VC ecosystem and society in general? And a directly related question: Vincent asks, are European AI companies becoming less interesting investment targets for US-based VCs due to the strict and unpredictable regulatory landscape in Europe? Would you advise early-stage European AI companies to consider relocating to the US as a result?
Great question. Well, look, I think it goes back a little to what you said earlier, which is, in startup world, in the West, there's the United States and then there's everywhere else, and the United States is kind of bigger than everywhere else combined. And in these kinds of political things, it kind of starts with the leader. The US is the leader in VC, and we feel like we're leaders in US VC, so we needed to go to Washington, because until we go, nobody's going. And so that's a lot of the reason why we started these things. On European regulatory policy: I think regulatory policy is likely to dictate where you can build these companies. We've seen some interesting things. France turns out to be leading a revolution in Europe on the regulatory front, where they're basically telling the EU to pound sand, in large part because they have a company there, Mistral, and it's a national jewel for the country and they don't want to give it up because the EU has some crazy safety thing going on. Yeah, and also, France, of course, is playing the same role with nuclear policy in Europe. Yeah. They're the one country.
They're probably one of the cleanest countries in the world as a result. Right. France has been staunchly pro-nuclear and has been trying to hold off, I think in a lot of ways, attempts throughout the rest of Europe, and especially from Germany, to basically ban civilian nuclear power.
Yeah. In the UK, they've sort of been flip-flopping on AI policy, and we'll see where they come out. And Brussels has been as ridiculous as they've been on almost everything. Yeah.
The big thing I'd note here is there's a really big philosophical distinction. I think it's rooted in the difference between what has traditionally been called the Anglo-American approach to law and the continental European approach. I forget the terms for the two legal systems. There's common law and then, yeah, civil law.
So the difference, basically, is this: that which is not outlawed is legal, versus only that which is specifically legal is legal, and anything that's not explicitly legal is outlawed. In other words, by default you have freedom and then you impose the law to add constraints, or by default you have no ability to do anything and then the law enables you to do things.
This is a fundamental philosophical, legal, and political distinction. And it shows up in a lot of these policy issues through this idea called the precautionary principle, which sort of embodies the traditional European approach. Basically, the precautionary principle says new technologies should not be allowed to be fielded until they are proven to be harmless.
And of course, the precautionary principle very specifically is a hallmark of the European approach to regulation, and increasingly of the US approach. As for its origin: it was actually described in that way and given that name by the German Greens in the 1970s, as a means to ban civilian nuclear power, with, by the way, catastrophic results. And we could spend a lot of time on that.
But I think everybody at this point, including the Germans, increasingly agrees that that was a big mistake. Among other things, it has led to Germany basically funding Russia's invasion of Ukraine through the need for imported energy, because they keep shutting down their nuclear plants.
So it was a sort of catastrophic decision, but the precautionary principle has become, I would say, extremely trendy. It's one of these things that sounds great, right? It's like, well, why would you want anything released into the world that's not proven to be harmless? How can you possibly be in support of anything that's going to cause harm?
But the obvious problem is that under that principle, you could never have deployed technologies such as fire, electric power, internal combustion engines, cars, airplanes, or the computer. Every single piece of technology we have that powers modern-day civilization has some way in which it can be used to hurt people. Every single one. Technologies are double-edged swords.
You know, you can use fire to protect your village or to attack the neighboring village. These things can be used in both ways. And so, if we had applied the precautionary principle historically, we would still be living in mud huts. We would be absolutely miserable.
And so the idea of imposing the precautionary principle today, if you're coming from an Anglo-American perspective, or from a freedom-to-innovate perspective, is just incredibly horrifying. It would basically guarantee a stall in progress.
This is very much the mentality of the EU bureaucrats in particular, and it's the mentality behind a lot of their recent legislation on technology issues. France does seem to be the main counterweight against this in Europe. And Ben, to your point, the UK has been a counterweight in some areas.
But the UK also has, I would say, received a full dose of this programming. Yeah. They have that tendency. Yeah. And on AI in particular, I think they've been on the wrong side of it, which hopefully they'll reconsider.
So again, this is a really, really important issue. If the surface-level observation that a technology might be used for some harmful purpose is allowed to be the end of the discussion, then nothing new is ever going to happen in the world. That will cause us to stall out completely. And if we stall out, that will over time lead to regression. And literally, this is happening: the power is going out. German industrial companies are shutting down because they can't afford the power, which is the result of the imposition of this policy in the energy sector. So this is a very, very important thing. I think the EU bureaucracy is lost on this, and so I think it's going to be up to the individual countries to directly confront it if they want to. Anyway, I really applaud what France has done, and I hope more European countries join them in being on the right side of this. Yeah, it always is funny to me to hear the EU, and The Economist and these kinds of outlets, say, oh, the EU may not be the leader in innovation, but we're the leaders in regulation. And I'm like, well, you realize those go together? One is a function of the other. Okay, good.
So then let's do one more global question. L'aip Gong Leong asks: are there any other countries that could be receptive to techno-optimism? For example, could Britain, Argentina, or Japan be ideal targets for our message and mission? Yes. We're already at work on that in Britain. And look, we've gotten some pretty good reception from the UK government. There are a lot of very, very smart people there. We're working with them tightly on their AI and crypto efforts, and we're hopeful there.
You know, Japan, having spent a lot of time there: they've obviously shown that capability over time. But there's a lot about the way Japanese society works that holds them back at times as well. Without getting into all the specifics, they have a very, I would just say, unusual and unique culture that has a great deference for the old way of doing things, which sometimes makes it hard to promote the new way of doing things. I also think, around the world, the Middle East is very much on board with techno-optimism: the UAE, Saudi, Israel, of course. Many countries out there are very excited about these kinds of ideas and about taking the world forward, just creating a better world through technology. And look, with our population growth, if we don't have a better world through technology, we're going to have a worse world without it. I think that's very obvious. So it's a very compelling message. And oh, by the way, South America, I should say also: there are a lot of countries there that are really embracing techno-optimism now, and some great new leadership that's pushing it. Yeah, I would also say, if you look at the polling on this, what I think you find is that what you could describe as the younger countries are more enthusiastic about technology. And I don't mean younger literally, as in when they were formed, but I mean two things. One is how recently they've emerged into what we would consider to be modernity, and so, for example, come to embrace concepts like democracy or free-market capitalism or innovation generally, or global trade and so forth.
And then the other is quite simply the demographics: the countries with a large number of young people. And those are often the same countries, right? They have the reverse of the demographic pyramid we have, in that they actually have a lot of young people. And young people both need economic opportunity and are very fired up about new ideas.
Yeah, by the way, this is true in Africa as well. In many African countries, you know, Nigeria, Rwanda, Ghana, techno-optimism, I think, is taking hold in a real way. Maybe they need some governance improvements, but they definitely also have young populations. In Saudi Arabia, 70% of the population is under 30. So, to your point, I'm very, very hopeful about those areas. Ben, Gultra asks: do you think the lobbying efforts by good-faith American crypto firms will be able to move the needle politically in the next few years? What areas make you optimistic as it relates to American crypto regulation, crypto, blockchain, web3? Yeah, so I'm as hopeful as I've ever been. There are a bunch of things that have been really positive. First of all, the SEC has lost, I think, five cases in a row. So some of their arbitrary enforcement of things that aren't in the law is not working. Secondly, there was a bill that passed through the House Financial Services Committee, which is a very good bill on crypto regulation, and hopefully that will eventually pass the House and the Senate.
We've also seen Wyoming adopt really good new laws around DAOs, so there's some progress there. And then we've been working really, really hard to educate members of Congress and the administration on the value of the technology. There are strong opponents, as I mentioned earlier, and that continues to be worrisome. But I think we're making great progress. And the Fairshake PAC has done just a tremendous job of backing pro-crypto candidates, with great success: there were six different races on Super Tuesday where they backed candidates, and all six won. So that's another good sign.
Good. Fantastic. Let me hit a couple of other topics quickly as we get down to the wire. Father Time asks: can you give us your thoughts on the recent TikTok legislation? If passed, what does this mean for big tech going forward? Let me take a quick swing at that. So the TikTok legislation is being proposed in the U.S. Congress and is currently being taken up in the Senate, and by the way, President Biden has already said he'll sign it if the Senate and the House pass it. This is legislation that would require a divestment of TikTok from its Chinese parent company, ByteDance. TikTok would have to become a purely American company, or would have to be owned by a purely American company, and failing that, it would mean a ban of TikTok in the U.S.
This bill is a great example of the bipartisan dynamic in D.C. right now on the topic of China: it's being enthusiastically supported by the majority of politicians on both sides of the aisle. I think it passed out of its committee 50 to zero, and it's basically impossible to get anybody in D.C. to agree on anything right now, except basically this. So it's super bipartisan. The head of that committee is a Republican, Mike Gallagher, and he worked in a bipartisan way with his committee members, and the Democratic White House immediately endorsed his bill.
So this bill has serious momentum. The Senate is taking it up right now. They're likely to modify it in some way, but it seems reasonably likely to pass based on what we can see. Like I said, it has overwhelmingly bipartisan support. The argument for the divestment or the ban comes, I would say, a couple of different ways. Number one, an app on the phones of a large percentage of Americans, with surveillance and potential propaganda aspects to it, certainly has people in Washington concerned. And then quite frankly, there's an underlying industrial dynamic, which is that US Internet companies can't operate in China. So there's a sort of unfair asymmetry underneath this that really undercuts, I think, a lot of the arguments for ByteDance. It has been striking to see that opponents of this bill have actually emerged, both, I would describe it, further to the right and further to the left in their respective parties. Those folks make a variety of arguments; I'll characterize them at a surface level. On the further left, I think there are Congresspeople who feel like TikTok is actually a really important and vital messaging system for them to use with their constituents, who tend to be younger and very internet-centric. So there's that, which is interesting.
But then on the further right, and our friend David Sacks, for example, may be an example of this, there are a fair number of people who are very worried that the US government is so prone to abusing regulatory capability with respect to tech, and especially with respect to censorship, that if you hand the US government any new regulatory or legal authority to come down on tech, it will inevitably be used not just against the Chinese company but also against the American companies. And so there's some drama surfacing around this, and we'll see whether the bill pulls through. Look, quite frankly, without coming down on any particular side, I think this is one of those cases where there are actually excellent arguments on all three sides. There are very legitimate questions here. And so I think it's great to see the issue being confronted, and it's also great that these arguments have surfaced and that we're hopefully going to figure out the right thing to do.
A couple of closing things. Let's close on, hopefully, a semi-optimistic note. John Potter asks: how do you most effectively find common ground with groups and interests that you benefit from working with, but with which you are usually opposed, ideologically or otherwise? I mean, there's this term in Washington, common ground, and you always want to start by finding the common ground, because I'll tell you something about politics generally: most people have the same intentions. In Washington, in fact, people want life to be fair. They don't want people to go hungry. They want citizens to be safe and to have plenty of opportunity. So there's a lot of common ground. The differences lie not in the intent, but in how you get there: what is the right policy to achieve the goal? And so I think it's always important to start with the goal and then work our way through why we think our policy position is correct. We don't really have a lot of disagreements on stated intent, at least. I mean, there are some intentions in Washington that are very difficult, like the intention of the government to control the financial system, or to nationalize the banks or achieve the equivalent of nationalizing the banks. When you run into that intent, that's tough. But most intentions are, I think, shared between us and policymakers on both sides.
And then we'll close on this great question. Zach asks: would either of you ever consider running for office, and, for fun, what would be your platform? So I would not, just because I think being a politician requires a certain kind of skill set and attitude and energy, certain things that I don't possess, unfortunately. Do you have a platform you'd run on if you did? Yeah, I do. Okay. Let's hear your platform. The American dream. So I won't do it now, but I like to put up this chart that shows the change in prices in different sectors of the economy over time. And what you basically see is that the prices of things like television sets and software and video games are crashing, right? In a way that's great for consumers. You know, 75-inch flat-screen ultra-high-definition TVs are now down below $500. It's amazing. When technology is allowed to work, it's magic: prices crash in a way that's just great for consumers. And when prices drop, it's basically the equivalent of a giant raise.
So it makes human welfare a lot better. But the three elements of the economy that are central to the American dream are healthcare, education, and housing, right? If you think about what it means to have the American dream, it means being able to buy and own a home. It means being able to send your kids to great schools and get a great education to have a great life. And it means great healthcare, being able to take care of yourself and your family. The prices on those are skyrocketing. They're going straight to the moon. And of course, those are the sectors that are the most controlled by the government. They're where there are the most subsidies for demand from the government, the most restrictions on supply from the government, and the most interference with the ability to field technology and startups. And the result is we have an entire generation of kids who, I think quite rationally, are looking forward and basically saying: I'm never going to be able to achieve the American dream. I'm never going to be able to own a home. I'm never going to be able to get a good education or give my kids a good education.
I'm not going to be able to get good healthcare. Basically, I'm not going to be able to live the life that my parents lived or my grandparents lived, and I'm not going to be able to form a family and provide for my kids. And in my opinion, that's the underlying theme of what has gone wrong socially, politically, and psychologically in the country. That's what's led to this intense level of pessimism. That's what's led to the sort of subtractive, very zero-sum politics, to recrimination over optimism and building. And so I would confront that absolutely directly. And of course, I don't think anybody in Washington is doing that right now.
And maybe either I would win because I'm the only one saying it out loud, or I would lose because nobody cares. But I've always wondered, both on the substance and on the message, whether that would be the right platform. Yeah. Now, it would certainly be the thing to do. It's very complex, in that healthcare policy is largely national, but education policy and housing policy also have very large local components. So it would be a complex set of policies that you'd have to implement.
Yeah. We still have a ton of questions, so we may do a part two at some point. But we really appreciate your time and attention, and we will see you soon. Okay. Thank you.