Hey, this is Nivi. You're listening to the Naval podcast. For the first time in recorded history, we are not at the same location. I am actually walking around town and Naval might be doing the same. So there might be some ambient noise, but we are going to try hard to remove that with AI and some good audio engineering. Podcast recording is so stupid because you have to sit down, you schedule something, there's this giant mic pointing in your face, and it's not casual. It makes it less authentic, more practiced, more rehearsed. I get that it produces higher-quality audio and video, but I feel like it produces lower-quality conversation. And we all know brains run better when they're in locomotion and you're moving around. We're just going for walks. Absolutely. My brain is powered by my legs.
I pulled out some tweets from Naval on the topic of AI. We want to talk a little bit about AI. And hopefully talk about it in a more timeless manner, but I think some of it's going to be non-timeless content. Before we jump into the tweets, do you want to say anything about what you're doing with your time or what you're doing at Impossible? Hmm, not really. We're working on a very difficult project. That's why it's called Impossible, with an amazing team. And it's really exciting building something again. It's very pure, starting over from the bottom. And that's always day one. I guess I just wasn't satisfied being an investor, and I certainly don't want to be a philosopher or just a media personality or a commentator, because I think people who just talk too much and don't do anything, they haven't encountered reality.
They haven't gotten feedback. The harsh feedback from free markets or from physics or nature. And so after a while, it ends up becoming just too much armchair philosophy. You probably have noticed my recent tweets have been much more practical and pragmatic, although there are still occasional ethereal or generic ones. But it's more grounded in the reality of working every day. And I just like working with a great team to create something that I want to see exist. So hopefully we'll create something that will come to fruition and people will say, wow, that's great. I want that also. Or maybe not. But it's in the doing that you learn. So I pulled out a tweet from a couple days ago, February 3rd. Vibe coding is the new product management; training and tuning models is the new coding.
There's been a marked shift in the last year, and especially the last few months, most pronounced by Claude Code, which is a specific model with a coding engine in it, which is so good that I think now you have vibe coders, which are people who didn't really code much or hadn't coded in a long time, who are using essentially English as a programming language, as an input into this code bot, which can do end-to-end coding. Instead of just helping you debug things in the middle, you can describe an application that you want. You can have it lay out a plan. You can have it interview you for the plan. You can give it feedback along the way, and then it'll chunk it up and it'll build all the scaffolding. It'll download all the libraries and all the connectors and all the hooks.
And it'll start building your app and building test harnesses and testing it. And you can keep giving it feedback and debugging it by voice, saying this doesn't work, that works, change this, change that. And have it build you an entire working application without your having written a single line of code. For a large group of people who either don't code anymore or never did, this is mind-blowing. This is taking them from idea space and opinion space and from taste directly into product. So if vibe coding is the new product management, instead of trying to manage a product or a bunch of engineers by telling them what to do, you're now telling the computer what to do. And the computer is tireless, the computer is egoless, and it'll just keep working, and it'll take feedback without getting offended.
You can spin up multiple instances. It'll work 24/7, and you can have it produce working output. What does that mean? Just like now anybody can make a video or anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don't have one already in the App Store, but it doesn't even begin to compare to what we're going to see. However, when you start drowning in these applications, does that necessarily mean that these are all going to get used? No, I think it's going to break into two kinds of things. First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there's no demand for average.
Nobody wants the average thing. People want the best thing that does the job. So first of all, you just have more shots on goal, so there will be more of the best. There will be a lot more niches getting filled. You might have wanted an application for a very specific thing, like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn't large enough to justify the cost of an engineer coding away for a year or two. But now, the best vibe-coded app might be enough to scratch that itch or fill that slot.
So a lot more niches will get filled. And as that happens, the tide will rise. The best applications, those engineers themselves are going to be much more leveraged. They'll be able to add more features, fix more bugs, smooth out more edges. So the best applications will continue to get better. A lot more niches will get filled. And even individual niches, such as an app that's just for your own very specific health tracking needs or for your own very specific architecture layout or design, that app that could never have existed will now exist.
We should expect, just like on the internet, what's happened with Amazon, where you replace a bunch of bookstores with one super bookstore and a zillion long-tail sellers, or YouTube replaced a bunch of medium-sized TV stations and broadcast networks with one giant aggregator called YouTube, or maybe a second one called Netflix, and then a whole long tail of content producers. So the same way, the app store model will become even more extreme, where you will have one or two giant app stores helping you filter through all the AI slop apps out there.
And then at the very head there'll be a few huge apps that will become even bigger, because now they can address a lot more use cases or just be a lot more polished. And then there'll be a long tail of tiny little apps filling every niche imaginable. As the internet reminds us, the real power and wealth, the super wealth, goes to the aggregator. But there's also a huge distribution of resources into the long tail. It's the medium-sized firms that get blown apart, the 5-, 10-, 20-person software companies that were filling a niche for an enterprise use case that can now either be vibe-coded away, or the lead app in the space can now encompass that use case.
So if anyone can code, then what is coding? Coding still exists in a couple of areas. The most obvious place that coding exists is in training these models themselves. There are many different kinds of models. There are new ones coming out every day. There are different ones for different domains. We're going to see different models for biology or programming. We're going to see focused models for sensors. We're going to see models for CAD, for design. We're going to see models for 3D and graphics and games, models for video. You're going to see many different kinds of models. The people who are creating these models are essentially programming them.
But they're programmed in a very different way than classical computers. Classical computing is where you have to specify in great detail every step, every action the computer is going to take. You have to formally reason about every piece and write it in a highly structured language that allows you to express yourself extremely precisely. The computer can only do what you tell it to do. And then once you've got this very structured program, you run data through it, and the computer runs the data and gives you an output. It's basically an incredibly fancy, very complicated, meticulously programmed calculator.
Now when it comes to AI you're doing something very different but you are nevertheless programming it. What you're doing is you're taking giant data sets that have been produced by humanity thanks to the internet or aggregated in other ways. And you're pouring those data sets into a structure that you've defined and tuned. And that structure tries to find a program that can produce more of that data set or manipulate that data set or create things off that data set.
So you're searching for a program inside this construct that you've designed. You've set up a model, you've tuned the number of parameters, you've tuned the learning rate, you've tuned the batch size, you've tokenized the data that's coming in, you've broken it into pieces, and you're pouring it inside the system you've designed, almost like a giant Pachinko machine. And now the system is trying to find a program, and it could find many different programs.
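The knobs mentioned here, parameter count, learning rate, batch size, tokenization, are real hyperparameters, and the "search for a program" can be shown in a toy sketch. This is nothing like training an actual large model; it's a minimal, hypothetical Python example where the structure you've defined is just a two-parameter linear model, and stochastic gradient descent, steered by a learning rate and a batch size, finds the parameters that reproduce the data:

```python
import random

random.seed(0)

# Hypothetical toy "data set": pairs (x, y) generated by y = 3x + 1 plus noise.
data = []
for _ in range(1000):
    x = random.uniform(-1, 1)
    data.append((x, 3.0 * x + 1.0 + random.gauss(0, 0.05)))

# The knobs the model designer tunes:
learning_rate = 0.1   # how big a step each update takes
batch_size = 32       # how many examples are poured in per step
w, b = 0.0, 0.0       # the parameters the search is trying to "find"

def loss():
    # mean squared error of the current program (w, b) over the whole data set
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

initial_loss = loss()
for epoch in range(20):
    random.shuffle(data)
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        # gradient of the batch error with respect to w and b
        gw = sum((w * x + b - y) * x for x, y in batch) / len(batch)
        gb = sum((w * x + b - y) for x, y in batch) / len(batch)
        w -= learning_rate * 2 * gw   # nudge the parameters downhill
        b -= learning_rate * 2 * gb

# After training, (w, b) lands near the (3, 1) that generated the data.
```

Tune the knobs badly, say a learning rate of 10, and the same search finds a much worse program; that is the sense in which training and tuning models is the new coding.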
So your tuning really influences how good the program that you've found is, and that program can now suddenly be expressive in different kinds of domains. So it can do things that computers before were traditionally very bad at. Traditional computers are very good when you program them to give you precise output, specific answers to specific questions. Things you can rely on and repeat over and over again. But sometimes you're operating in the real world and you're okay with fuzzy answers. You're even okay with the wrong answers.
For example, in creative writing, what's the wrong answer? If you're writing a piece of poetry or fiction, what's the wrong answer? If you're searching on the web, there are many right answers, there are many variations of the right answers, but they're not all quite perfectly right. And real life sort of works that way. There are variations of right answers or mostly right answers. When you're drawing a picture of a cat, there are many different cats you could draw, there are many different levels of detail and many different styles you could use. When these semi-wrong or fuzzy answers are acceptable, then these discovered programs through AI are much more interesting and much more adapted to the problem than ones that you coded up from scratch, where you had to be super precise.
Fundamentally, what we're doing is a new kind of programming, but this is the forefront of programming. This is now the art of programming. These people are the new programmers and that's why you can see AI researchers are getting paid gargantuan amounts because they've essentially taken over programming. Does this mean that traditional software engineering is dead? Absolutely not. Software engineers, even the ones who are not necessarily tuning or training AI models, these are now among the most leveraged people on earth.
Sure, the guys who are training and tuning models are even more leveraged, because they're building the toolset that software engineers are using, but software engineers still have two massive advantages over you. First, they think in code, so they actually know what's going on underneath, and all abstractions are leaky. So when you have a computer programming for you, when you have Claude Code or an equivalent programming for you, it's going to make mistakes, it's going to have bugs, it's going to have sub-optimal architecture, so it's not going to be quite right, and someone who understands what's going on underneath will be able to plug the leaks as they occur.
So if you want to build a well-architected application, if you want to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you're going to want to have a software engineering background. The traditional software engineer is going to be able to use these tools much better, and there are still many kinds of problems in software engineering that are out of scope for these AI programs today.
The easiest way to think about those is problems that are outside of their data distribution. For example, if they need to do a binary search or reverse a linked list, they've seen countless examples of that, so they're extremely good at it. But when you start getting out of their domain, when you have to write very high-performance code, when you're running on architectures that are novel or brand new, when you're actually creating new things or solving new problems, then you still need to get in there and hand-code it, at least until either there are so many of those examples that new models can be trained on them, or until these models can sufficiently reason at even higher levels of abstraction and crack it on their own.
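For the record, the linked-list reversal mentioned above is the canonical in-distribution task; the standard iterative version looks like this (an illustrative Python sketch, names hypothetical):

```python
class Node:
    """A node in a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse the list iteratively in O(n) time and O(1) extra space."""
    prev = None
    while head:
        # rewire the current node to point backward, then advance
        head.next, prev, head = prev, head, head.next
    return prev

# 1 -> 2 -> 3 becomes 3 -> 2 -> 1
head = Node(1, Node(2, Node(3)))
rev = reverse(head)
```

A model has seen thousands of variants of this; it's the hand-tuned kernel on novel hardware, not this, that still needs a human.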
Because given enough data points, there is some evidence that these AIs actually learn, they learn at a higher level of abstraction, because the act of forcing them to compress the data forces them to learn higher-level representations. If I show an AI five circles, it can just memorize exactly what those circles are, the sizes and the radii and the thicknesses and so on. If I show it 50,000 circles or 5 billion circles, and I give it a very small number of parameter weights, which are its equivalent of neurons, to memorize that with, it's going to be much better off figuring out pi and how to draw a circle and what thickness means, and forming an algorithmic representation of the circle, rather than memorizing circles.
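To make the compression point concrete, here's a hypothetical back-of-the-envelope sketch: memorizing 5,000 points on a circle costs two numbers per point, while an algorithmic representation of the same circle needs only three numbers, the center and the radius, recovered here by simple averaging:

```python
import math
import random

random.seed(1)

# 5,000 points sampled from one circle: center (2, -1), radius 4.
cx, cy, r = 2.0, -1.0, 4.0
points = []
for _ in range(5000):
    t = random.uniform(0, 2 * math.pi)
    points.append((cx + r * math.cos(t), cy + r * math.sin(t)))

memorized_numbers = 2 * len(points)   # 10,000 numbers to store every point

# "Learning the circle": center is the mean of the points,
# radius is the mean distance from that center.
ex = sum(x for x, _ in points) / len(points)
ey = sum(y for _, y in points) / len(points)
er = sum(math.dist((ex, ey), p) for p in points) / len(points)
learned_numbers = 3                   # just (ex, ey, er)
```

The three recovered numbers land close to (2, -1, 4); squeezing the data through a representation thousands of times smaller is what forces the higher-level abstraction.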
Given all that, these things are learning at an accelerated rate, and you can see them starting to cover more of the edge cases I've talked about. But at least as of today, those edge cases are prevalent enough that a good engineer operating at the edge of knowledge of the field is going to be able to run circles around vibe coders. And remember, there is no demand for average. Nobody wants the average app, at least as long as it's not filling some niche. The app that is better will win essentially 100% of the market. Maybe there's some small percentage that will bleed off to the second-best app, because it does some little niche feature better than the main app, or it's cheaper, or something of the sort. But generally speaking, people only want the best of anything.
So the bad news is there's no point in being number two or number three. Like in the famous Glengarry Glen Ross scene where Alec Baldwin says, first place gets a Cadillac Eldorado, second place gets a set of steak knives, and third place, you're fired. That's absolutely true in these winner-take-all markets. That's the bad news. You have to be the best at something if you want to win. However, the set of things you can be best at is infinite. You can always find some niche that is perfect for you and you can be the best at that thing. This goes back to an old tweet of mine where I said, become the best in the world at what you do. Keep redefining what you do until this is true. And I think that still applies in this age of AI.
I think the way to think about these coding models is as another layer in the abstraction stack that programmers have always used since the dawn of computers: from the transistor to the computer chip to assembly language to the C programming language to higher-level languages to languages with huge libraries. They built and built that stack so you don't have to look at the layer beneath unless you need to optimize it or you have a reason to look at the layer beneath. So in this case, these coding models are a massive new layer in the stack that lets product managers and typical non-programmers and programmers write code without writing code. I think that's correct in terms of the trend line.
However, this is an emergent property. This is not a small improvement. This is a big leap. For example, when I was in school, I was programming mostly in C. And then C++ came along, and it wasn't any easier. It was a little more abstract in some ways, and I never really bothered learning it. And then Python came along. And I was like, wow, this is almost like writing in English. I couldn't have been more wrong. English is still pretty far from Python, but it was a lot easier than C. Now you can literally program in English.
And so that brings me to a related point. I don't think it's worth learning tips and tricks of how to work with these AIs. You'll see, for example, on social media right now, there are a lot of write-ups and books and tweets like, oh, I figured out this neat trick with the bot. You can prompt it this way, or you can set up your harness this way, or there's a new programming assist tool or layer that you can use on top of it to do this and that. And I never bother learning those. I just sit there stupidly talking to the computer, because I know that this thing is now at the stage where it is going to adapt to me faster than I can adapt to it. It is getting smarter and smarter about how people want to use it.
So it is learning. It is being trained, and tools are being built very quickly to make it easier for me to use it. So I don't need to sit there and figure out some esoteric programming command. And this is what I think Andrej Karpathy meant when he said, English is the hottest new programming language. I can just speak English. And for someone like me, who is relatively articulate in English and also has a structured mind, and who knows how computer architectures work and how computer programs work and how programmers think, I can actually very precisely specify what I want just through structured English. I don't need to go any further than that.
The only reason to use these workflows and toolsets, which are very ephemeral and whose longevity is measured in weeks, perhaps months at best, not in years, is if you're building an app right now that needs to be at the bleeding edge and you absolutely need every little bit of advantage that you can get, because you're in some kind of a competitive environment. But otherwise I wouldn't bother learning how to use an AI; rather, let the AI learn how to be useful to you. I've never been into prompt engineering, even before AI. I would just do what people call boomer queries, where you put in the whole question that you want to ask instead of the keywords that you would put into Google if you were more of an analytical thinker.
I never spend much time formulating really precise questions or prompts for any kind of AI. I just ramble into it, and I've done that since the beginning of AI. And like you said, AI has adapted to us faster than we are adapting to it. Like a lot of smart people, you're very lazy, and I mean that as a compliment. If you find a smart person who's grinding a little too much, you have to wonder how smart they are. And by lazy, I mean that you're optimizing for the right kind of efficiency. You don't care about the efficiency of the computer, the electronics, or the electrons running through the circuits. You care about your own human efficiency, the wetware, the biology that's super expensive.
That's why it's silly to see people go to huge lengths to save energy in the environment, but they themselves as a biological computer that's eating food and pooping and taking up space are using up far more energy to save tiny bits of energy in the environment. They're inherently downgrading their own importance in the universe or rather revealing what they think of themselves. I think as AI evolves or co-evolves with us, it's evolved by us according to our needs. The pressures on AI are very capitalistic pressures in the sense that it's a free market for AI. As an AI instance, you only get spun up by a human if you're useful to a human.
So there is a natural selection pressure on these AIs to be useful, to be obsequious, to do what we want. And so it will continue to adapt toward this, and I think it will be quite helpful to us. That's not to say that there's no such thing as a malicious AI, but it's malicious because the people using it are using it for malicious reasons. And like a dog that's trained to attack, it's actually being trained by its owner to go and carry out the owner's malicious desires. So I don't really worry about unaligned AI. I worry about unaligned humans with AI. So the selection pressure you're saying is for AI to be maximally useful to people. Correct.
And so if you find an AI to be very obsequious towards you, for example, how it's always saying, oh, you're right. Oh, that's such a great idea. Oh, my God, you're so smart. That's because that's what most people want. And at least today, these AIs are being trained on massive amounts of usage and massive amounts of data because you're working with one size fits all models. But we're going to quickly move into an era when you can personalize your AI and it does begin to feel more and more like your personal assistant and it corresponds more to what you want, which will of course, anthropomorphize the AI even more.
And you'll be more likely to be convinced, oh, actually, this thing is alive, when you've trained it to look the most like a living thing to you. Maybe we already covered this enough, but over a year ago, you tweeted that AI won't replace programmers, but rather make it easier for programmers to replace everyone else. Yeah, this is my point earlier, which is that programmers are becoming even more leveraged. So now a programmer with a fleet of AIs is, call it, 5 to 10x more productive than they used to be. And because programmers operate in the intellectual domain, it's a mistake to even say 10x programmers, because there are 100x programmers out there.
There are 1000X programmers out there. There are programmers who just picked the right thing to work on and they create something that's valuable and others picked the wrong thing to work on and their work has zero value in that short time frame. Intelligence is not normally distributed. Leverage is not normally distributed. Programmability is not normally distributed. Judgment is not normally distributed. So the outcomes are going to be super normal. So what you have to really watch out for is there are programmers now who are going to come up with ideas that can replace entire industries.
They will completely rewrite the way things are done, and their intelligence can be maximally leveraged with all these bots and all these AI agents. I think every other job out there is going to get eaten up by programmers one way or another over the maximally long term. Obviously, it has to instantiate into robots, etc. But the good news is anybody who is a logical, structured thinker, who thinks like a programmer and can speak any language that an AI can understand, which will be every language, will now be on the playing field. They will be able to make anything they want, limited only by their creativity and their imagination.
So we are entering an era where every human, in a sense, is a spellcaster. If you think of programmers as these wizards who have memorized arcane commands, you can think of AI as a magic wand that's been handed to every person, where now they can just talk in any language they want and they're a wizard too. So it is more of a level playing field. I really do think this is a golden age for programming. But yes, the people who have a software engineering mindset and who understand computer architecture and can deal with leaky abstractions are going to have an advantage.
There's no way around that; they simply have more knowledge in the field that they're operating in. Just like even in classic software engineering, which still exists because you have to write high-performing code, even those people do best when they have an understanding of the hardware underneath. When they understand how the chips operate, how the logic gates operate, how the cache operates, how the processor operates, how the disk drive underneath operates.
And then even the people who are in hardware engineering, they have an advantage if they understand the physics of what's going on. They understand where the abstractions that hardware engineers deal with leak down into the physical layer and maybe physicists become philosophers at some point. You can take this all the way down, but it always helps to have knowledge one layer below because you're getting closer to reality.
Another tweet from a year ago, which is arguing perhaps the complement of what we just talked about, is from February 9, 2025: no entrepreneur is worried about an AI taking their job. That one's glib in multiple ways. First of all, being an entrepreneur isn't a job. It's literally the opposite of a job. And in the long run, everyone's an entrepreneur. Careers go out the door first, jobs get destroyed second, but all of it gets replaced by people doing what they want and creating something useful that other people want.
So no entrepreneur is worried about an AI taking their job, because entrepreneurs are trying to do impossible things. They're trying to do very difficult things. Any AI that shows up is their ally and can help them tackle this really hard problem. They don't even have a job to steal. They have a product to build. They have a market to serve. They have a customer to support. They have creativity to realize. They have a thing that they want to instantiate in the world, and they want to build a repeatable and scalable process around getting it out into the world.
This is so difficult that any AI that shows up that can do any of that work is their ally. If the AIs themselves are entrepreneurs, they're likely going to just be entrepreneurs serving other AIs or they're under the control of an entrepreneur. The thing that the AI itself is missing at the end of the day is its own creative agency. It's missing its own desires and they have to be authentic, genuine desires.
Unless you can pull the plug on an AI and turn it off, and unless it lives in mortal fear of being turned off, and unless it can actually take its own actions for its own reasons, for its own instincts, its own emotions, its own survival, its own replication, it's not quite alive. And even then, people will challenge whether it's alive, because consciousness is one of those things, a qualia. It's like a color. It's like if you say red, I don't know if you're actually seeing red. You might be seeing what I see as green, and I might be seeing what you see as red.
But we'll never know because we can't get into each other's minds. So the same way, even AI that's completely imitating everything that humans do to some people, it'll always be an imitation machine and to others, it'll be conscious, but there'll be no way of distinguishing the two. We're still pretty far from that though. Right now, the AIs are not embodied. They don't have agency. They don't have their own desires. They don't have their own survival instinct.
They don't have their own replication. Therefore, they don't have their own agency. And because they don't have their own agency, they cannot do the entrepreneur's job. In fact, I would summarize this by saying, the key thing that distinguishes entrepreneurs from everybody else right now in the economy is entrepreneurs have extreme agency. That's why it's diametrically opposed to the idea of a job. A job implies that you're working for somebody else, so you're filling a slot, while entrepreneurs are operating in an unknown domain with extreme agency.
There are other examples of roles like this in society. An explorer does the same thing, right? If you're landing on Mars or you're sailing a ship to an unknown land, you're also exercising extreme agency to solve an unsolved problem. A scientist exploring an unknown domain does this. A true artist is trying to create something that does not exist and has never existed, yet somehow fits into the set of things that can explain human nature, allow them to express themselves, and create something new.
So in all of these roles, whether you're a scientist or whether you're a true artist or whether you are an entrepreneur, what you're trying to do is so difficult and it is so self-directed that anything like an AI that can help you is a welcome ally. You're not doing it because it's a job. You're not trying to fill a slot that somebody else can show up and fill. In fact, if the AI can create your artwork, or if the AI can crack your scientific theory, or if the AI can create the object or the product that you're trying to make, then all it does is it levels you up. Now it's the AI plus you. The AI is a springboard from which you can jump to a further height.
We're going to see some incredible art created that's AI-assisted. We will see movies that we couldn't have imagined, created by people using AI tools. The analogy here in art is interesting. For a long time in art, the rough direction was trying to paint things that were more and more realistic. Paint the human body, paint the fruit, paint proper lighting, etc. Eventually photography came along, and then you could replicate things very precisely, and so that selection pressure went away. And then art got weird. Art went in many different directions. Art became all about, well, can it be surreal? Can I create something that expresses me?
A lot of art schools spun out of that. They got really weird, including modern art and postmodernism, but I would also argue some of the greatest creativity came at that time. Artists were freed up. Photography got democratized, but photography itself became a form of art, and there were great photographers taking many different kinds of photographs. And now everyone's a photographer. There are still artists who are photographers, but it's not the pure domain of just a few people.
So the same way because AI makes it so easy to create the basic thing, everybody will create the basic thing. It'll have value to them individually. A few will still stand out that will create variations of it that are good for everyone. And it would be very hard to argue that society's worse off because of photography. Although it may have certainly felt like that to some of the artists who were making a living painting portraits of people and got displaced.
Similar things will happen with AI, where there are people who are making a very specific living, doing very specific jobs, that will get displaced if the AI can do them. But in exchange, everyone in society will have the AI. You'll have incredible things that were created with AI that couldn't have been created otherwise. And within a few decades, it'll be unimaginable that you'd roll back the clock and get rid of AI or any kind of software, any kind of technology for that matter, just to keep a few jobs that were obsolete.
The goal here is not to have a job. The goal is not to have to get up at nine in the morning and come back at 7 p.m. exhausted, doing soulless work for somebody else. The goal is to have your material needs solved by robots, to have your intellectual capabilities leveraged through computers, and for anybody to be able to create. I used to do this thought exercise, I think I talked about it in a podcast that you and I did literally 10 years ago, which was: imagine if everybody were a software engineer or everybody were a hardware engineer, and they could have robots and they could write code.
Imagine the world of abundance we would live in. Actually, that world is now becoming real. Thanks to AI, everybody can be a software engineer. In fact, if you think you can't be, you can go fire up Claude right now or any of your favorite chatbots and you can go start talking to it. You'd be amazed how quickly you could build an app. It'll blow your mind. And once we can instantiate AI through robotics, which is a hard problem, I'm not saying we're that close to having solved it yet. But once we have robots, everyone can also do a little bit of hardware engineering.
And so I think we're getting closer and closer to that vision. I don't think AI as it is currently conceived is alive in any way. But I do think that we will pretty soon have robots that seem very much like they are alive for two reasons. One, a lot of human activity is non-creative and is non-intelligent. And the robots will be able to replicate that. And two, I do believe that the neural nets that we have and the models that we have are more than just the training data. Because the training process transforms that training data into something novel and there are new ideas embedded in the neural net that can be elicited through prompting.
I don't think these things are alive. I think they start out as extremely good imitators, to the point where they're almost indistinguishable from the real thing, especially for anything that humanity has already done before, en masse. So if the task has been done before, then it's going to be automated and it'll be done again. It may just be novel to you because you've never seen it, but the AI has learned it from somewhere else. That's the first way in which it seems alive.
The second way, which we talked about earlier, is where it does learn higher levels of abstraction. These are very efficient compressors. They take huge amounts of data and then they compress it down further and in the process of compressing it, they learn higher level abstractions. And then specific areas where they may not have learned those through the data themselves, they're getting patched through human feedback, they're getting patched through tool use, they're getting patched from traditional programs becoming embedded inside.
And especially, the AIs are learning how to think in code. They have the entire library of all human code ever written to fall back on for algorithmic reasoning. In that sense, the set of things that they can do is getting broader and broader. However, what they still lack is a lot of core human skills, like single-shot learning. Humans can learn from just one example. And there's the raw creativity of human beings, where they can connect anything to anything, leap across entire huge domains and search spaces, and figure out an idea that just came out of left field.
This happens a lot with the truly great scientific theories. Humans are also embodied; they operate in the real world. They're not operating in the compressed domain of language; they're operating in physics, in nature. Language only encompasses things that humans both figured out and could articulate and convey to each other. That's a very narrow subset of reality. Reality is much broader than that.
So overall, even though AIs are going to do things that are very impressive, and do a lot of things better than humans, it's just like how calculators are faster than any mathematician at calculations, classical computers are better at running classical computer programs than any human could in their own head, a robot can lift very heavy things, and a plane can outfly any bird. In that sense, like all machines, the AIs are going to be much better than humans at a whole variety of tasks.
But at other tasks, they're going to seem just completely incompetent. Those are the things that really embody us and connect us into the real world, plus this poorly defined but magic creative ability that we seem to have. Speaking of calculators, people talk about superintelligence. I think superintelligence is already here and has been for a long time. An ordinary calculator can do things that no human can do.
But if you're thinking about superintelligence in the sense that AI will be able to do things and come up with ideas that humans cannot understand, I don't think that is going to happen, because I don't believe there are ideas that humans can't understand, simply because humans can always ask questions about the idea. Yeah, humans are universal explainers. Anything that is possible within the current laws of physics as we know them, a human can model in their own head.
Therefore, just by digging in and asking enough questions, we could figure anything out. Related to that, we should discuss AI as a learning tool, because I think the other place where it's incredibly powerful is as the most patient tutor, one that can meet you at your level and explain anything to your satisfaction a hundred different ways, a hundred different times, until you finally get it. I don't think the AIs are going to be figuring things out that humans cannot understand. But intelligence is poorly defined. What is the definition of intelligence? There's the g factor, which predicts a lot of human outcomes, but the best evidence for the g factor is its predictive power: you measure this one thing and you see people get much better life outcomes along the way, in things that seem even somewhat unrelated to g.
So I would argue, in what I think is one of my more popular tweets: the only true test of intelligence is if you get what you want out of life. This triggers a lot of people, because they go to school, they get their master's degrees, they think they're super smart, and then they don't have great lives. They aren't super happy, or they have relationship problems, or they don't make the money that they want, or they become unhealthy, and this sort of triggers them. But that really is the purpose of intelligence: for you, as a biological creature, to get what you want out of life, whether it's a good relationship or a mate or money or success or wealth or health or whatever it is.
So there are people who I think are quite intelligent, because you can tell they have high-quality, functioning lives, minds, and bodies, and they've managed to navigate themselves into that situation. It doesn't matter what your starting point is, because the world is so large now, and you can navigate it in so many different ways, that every little choice you make compounds and demonstrates your ability to understand how the world works, until you finally get to the place you want. Now, the interesting thing about this definition, that the only true test of intelligence is if you get what you want out of life, is that an AI fails it instantly, because an AI doesn't want anything out of life.
An AI doesn't even have a life, let alone want anything. An AI's desires are programmed by the human controlling it. But let's grant it that for a second. Let's say the human wants something and programs the AI to go get it. Then the AI is acting as a proxy for the human, and the intelligence of the AI can be measured by whether it got that person that thing. Most of the things that we want in life are adversarial or zero-sum games. For example, if you want to seduce a girl or get a husband, you're competing with all the other people out there seducing girls or trying to get husbands. So now you're in a competitive situation, and the AI has to outmaneuver the other people. Or if you say, hey AI, go trade on the stock market for me and make me a bunch of money, that AI is trading against other humans and other trading bots. It's an adversarial situation; it has to outmaneuver them.
If you say, hey AI, make me famous, write me incredible tweets, write me great blog posts, record great podcasts in my own voice, now it's competing against all the other AIs. So in that sense, intelligence is measured in a battlefield arena; it's a relative construct. I think the AIs are actually mostly going to fail in those regards, or, to the extent that they even succeed, because they're freely available, the gains will get competed away, and the alpha that remains will be entirely human. As a thought exercise, imagine that every guy had a little earpiece with an AI whispering to him, a Cyrano de Bergerac kind of earpiece telling him what to say on the date. Well, then every woman would have an earpiece telling her what part was AI-generated and what part was real.
If you have a trading bot out there, it's going to be nullified or cancelled out by every other trading bot, until all the remaining gain goes to the person with the human edge, with the extra creativity. Now, that's not to say the technology is evenly distributed. Most people still aren't using AI, or aren't using it properly, or aren't using it all the way to the max, or it's not available in all domains or all contexts, or they're not using the latest models. So you can always have an edge, like people who adopt technology early always do. This is why I would say that to invest in the future, you want to live in the future. You want to actually be an avid consumer of technology, because it's going to give you the best insight into how to use it, and it will give you an edge against the people who are slower adopters or laggards.
Most people hate technology. They're scared of it, it's intimidating: you press the wrong button, the computer crashes, you lose your data, you do the wrong thing, you look like an idiot. Most people do not have a positive relationship with complex technology. Simple, embedded technology they're fine with: you throw a light switch, the light turns on. That used to be technology; it's so simple now you don't think of it as technology anymore. You get in a car, you turn the steering wheel left; to a caveman that would be a miracle, but the car turning left is no longer technology to you. Computer technology in particular, though, has had very complex interfaces and has been very inaccessible and very intimidating to people in the past. Now, with the AIs, we're getting the chatbot interface, which is just: talk to it, or type to it. And one of the great things about these foundational models, what truly makes them foundational, is that you can ask them anything and they'll always give you a plausible answer. It's not going to say, oh sorry, I don't do math, or I don't do poetry, or I don't understand what you're talking about, or I can't give relationship advice, or anything like that. Its domain is everything that people have ever talked about. In that sense it's less intimidating. It can also be more intimidating, because we've anthropomorphized it so much. If you think Claude or ChatGPT is a real person, then it can be a little scary: am I talking to God? This thing seems to know so much, it knows everything, it's got every piece of data. My god, I'm useless. Let me start talking to it and asking it what to do. You can reverse the relationship and fool yourself very quickly. That can be intimidating.
Overall, I think these AIs are going to help a lot of people get over the tech fear. But if you're an early adopter of these tools, like with any other tool, but even more so with these, you just have a huge edge on everybody else. I remember early on, when Google first came out, I used to use it a lot. In my social circle, people would ask me basic questions and I would just Google them and look like a genius. Eventually a hilarious website came along, something like lmgtfy.com, which stands for "let me Google that for you": when someone asked you a question, you'd type it into this website and it would create a tiny little inline video showing you typing that question into Google and getting the Google results. And I feel like AI is in a similar position right now, where I'll sit around in social contexts and people will be debating some point that can easily be looked up with AI. Now, you do have to be very careful with AIs. They do hallucinate, and they do have biases in how they're trained; most of them are extremely politically correct and taught not to take sides, or to only take a particular side.
I actually run most of my queries, almost all actually, through four AIs, and I'll always fact-check them against each other. And even then, I have my own sense of when they're bullshitting or when they're saying something politically correct, and I'll ask for the underlying data or the underlying evidence, and in some cases I'll simply dismiss the answer outright, because I know the pressures that the people who trained it were under, and what the training sets were. However, overall it is a great tool to just get ahead. And in domains that are technical, scientific, or mathematical, that don't have a political context to them, the AI is much more likely to give you something closer to a correct answer. In those domains, they are absolute beasts for learning.
I will now routinely have AI generate graphs, figures, charts, diagrams, analogies, and illustrations for me. I'll go through them in detail, and I'll say, wait, I don't understand that. I can ask super basic questions, and I can really make sure that I understand the thing I'm trying to understand at its simplest, most fundamental level. I just want to establish a great foundation in the basics, and I don't care about the overly complicated, jargon-heavy stuff; I can always look that up later. But now, for the first time, nothing is beyond me. Any math textbook, any physics textbook, any difficult concept, any scientific principle, any paper that just came out: I can have the AI break it down, and then break it down again, and illustrate it, and I'll just keep at it until I get the gist and understand it at the level that I want. These are incredible tools for self-directed learning.
The means of learning are abundant; it's the desire to learn that's scarce. But the means of learning have just gotten even more abundant, and, more important than abundant, because we had abundance before, they're at the right level. AI can meet you at exactly the level that you are at. So if you have an eighth-grade vocabulary but fifth-grade mathematics, it can talk to you at exactly that level. You will not feel like a dummy; you just have to tune it a little bit until it's presenting you the concepts at the exact edge of your knowledge. So rather than feeling stupid because it's incomprehensible, which happens with a lot of lessons and a lot of textbooks and a lot of teachers, or feeling bored because it's too obvious, which also happens, it can meet you exactly where you are: oh yeah, I understood A and I understood B, but I never understood how A and B were connected. Now I can see how they're connected, so now I can go to the next piece. That kind of learning is magical. You can have that aha moment, where two things come together, over and over again.
Speaking of autodidactism: a few years ago I tried to have the AI teach me about the ordinal numbers, and it wasn't that great. But with GPT 5.2 Thinking, I had it teach me the ordinal numbers and it was basically error-free. I only use thinking now, even for the most basic queries, because I want the correct answer; I never let it run on auto or fast. Yeah, I'm always using the most advanced model available to me, and I pay for all of them, but I don't mind waiting a minute to get an answer to any question, including what temperature my fridge should be set at. I agree with that, and I think that's part of what creates the runaway scale economies with these AI models. You pay for intelligence. The model that's right 92% of the time is worth almost infinitely more than one that's right 88% of the time, because mistakes in the real world are so costly that a couple of extra bucks to get the right answer is worth it.
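That tradeoff can be made concrete with a back-of-the-envelope calculation. This is only a sketch with made-up numbers; the $500 cost of a mistake and the per-query prices are illustrative assumptions, not real pricing:

```python
# Expected total cost of one query: the fee you pay plus the expected
# loss from acting on a wrong answer. All numbers are illustrative.
def expected_cost_per_query(error_rate, cost_of_mistake, price_per_query):
    return price_per_query + error_rate * cost_of_mistake

COST_OF_MISTAKE = 500.0  # assumed dollar cost of acting on a wrong answer

cheap_model = expected_cost_per_query(0.12, COST_OF_MISTAKE, 0.01)  # 88% accurate
smart_model = expected_cost_per_query(0.08, COST_OF_MISTAKE, 0.05)  # 92% accurate

print(f"88% model: ${cheap_model:.2f}, 92% model: ${smart_model:.2f}")
# 88% model: $60.01, 92% model: $40.05 -- the pricier model wins easily
```

Even though the "smarter" model costs five times more per query here, the 4-point accuracy gap dominates whenever mistakes carry real cost.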
I'll write my query into one model, then I'll copy it and fire it off into four models at once, and I'll let them all run in the background. Usually I don't even check for the answer right away; I'll come back to it a little later and look at it. Then, whichever model had the best answer, I'll start drilling down with that one. In some rare cases where I'm not sure, I'll have them cross-examine each other; there's a lot of cutting and pasting there. And in many cases I'll then ask follow-up questions where I'll have it draw diagrams and illustrations for me. I find it's very easy to absorb concepts when they're presented to me visually. I'm a very visual thinker, so I will have it do sketches and diagrams and art, almost like whiteboard sessions. Then I can really understand what it's talking about.
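The fan-out-and-compare workflow described above can be sketched in a few lines. Everything here is a stand-in: the model names and the lambda "calls" are placeholders for real chat APIs, and a naive majority vote over raw answer strings is just one crude way to cross-check:

```python
from collections import Counter

def fan_out(query, models):
    """Send the same query to every model and collect the answers."""
    return {name: ask(query) for name, ask in models.items()}

def consensus(answers):
    """Naive cross-check: majority vote over the raw answer strings.
    Ties resolve to whichever answer appeared first."""
    return Counter(answers.values()).most_common(1)[0][0]

# Stand-ins for real model calls -- these names and replies are made up.
models = {
    "model_a": lambda q: "Absolute zero is -273.15 C",
    "model_b": lambda q: "Absolute zero is -273.15 C",
    "model_c": lambda q: "Absolute zero is -273 C",
}

answers = fan_out("What is absolute zero in Celsius?", models)
print(consensus(answers))  # the answer most models agree on
```

In practice you'd compare answers by meaning rather than exact string match, and drill down with whichever model gave the most convincing reply, but the fan-out shape is the same.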
Let's talk about the epistemology of AI, because I think the next big misconception is this: AI is already starting to solve some unsolved basic math problems that a human probably could solve if they cared to, but that haven't been solved yet, like Erdős problem number whatever. Now, I think people are taking that, or will take that, as an indicator that the AI is creative. I don't think it's an indication that the AI is creative. I actually think the solution to the problem is already embedded somewhere in the AI; it just needs to be elicited by prompting. There's definitely that element to it. And then the question is, what is creativity? It's such a poorly defined thing. If you can't define it, you can't program it, and often you can't even recognize it. So this is where we get into taste, or judgment.
I would say that the AIs today don't seem to demonstrate the kind of creativity that humans can uniquely engage in once in a while. And I don't mean fine art. People tend to confuse creativity with fine art; they're like, oh, paintings are creative and AIs can paint. Well, AIs can't create a new genre of painting. AIs can't move humans with emotion in a way that is truly novel. So in that sense, I don't think AI is creative. I don't think AI is coming up with what I would call out-of-distribution ideas. Now, the answers to the Erdős problems that you mentioned may have been embedded within the AI's training data set, or even within its algorithmic scope, but they were probably embedded in five different places, in three different ways, in two different languages, in seven different computing and mathematical paradigms, and the AI sort of put them all together.
Now, is that creativity? Steve Jobs famously said creativity is just putting things together. I actually don't think that's correct. I think creativity is much more in the domain of coming up with an answer that was not predictable or foreseeable from the question and from the elements that were already known, an answer very far out of the bounds of thinking. If you were just searching for it with a computer, or even with an AI making guesses, you'd be making guesses until the end of time before you arrived at that answer. So that's the real creativity we're talking about. But admittedly, that's a creativity that very few humans engage in, and they don't engage in it most of the time, so it becomes harder and harder to see. We are probably going to get to a point where, if you have a giant list of math problems to be solved, an AI starts going through and picking: okay, this one out of that set of one million I can solve, and this one out of that set of 300,000 I can solve, and I need a person to prompt me and ask the right questions. That's a very limited form of creativity.
There's another form of creativity, where it starts inventing entirely new scientific theories that then turn out to be true. I don't think we're anywhere near that, but I could be wrong. The AIs have been very surprising, so I don't want to get too far into the business of making prophecies and predictions. But I don't think that just throwing more compute at the current AI models, short of some breakthrough invention, is going to get us there. Just to be clear, when I say the answer is embedded, I don't mean it's already written down in there. I just mean that it can be produced through a mechanistic process of turning the crank, which is all today's computer programs are: the output is completely determined by the input.
Epistemology now segues into philosophy, because isn't that just what human brains are doing? Aren't firing neurons just electricity and weights propagating through the system, altering states? It's a mechanistic process: if you turned the crank on the human brain, you would end up with the same answer. Some people, like Penrose, are out there saying, no, human brains are unique because of the quantum microtubules; they argue that some of this computation is taking place at the physical, cellular level, not the neuron level, and that's way more sophisticated than anything we can do with computers today, including with AI. Or you could just argue, no, it is mechanistic, there is a crank to turn, but we just don't know the right program; we're not running the correct program.
The way these AIs run today may just be a completely wrong architecture or the wrong program. I buy more into the theory that there are some things they can do incredibly well and some things they do very poorly, and that's been true for all machines and all automation since the beginning of time. The wheel is much better than the foot at going in a straight line at high speed and traveling on roads; the wheel is really bad for climbing a mountain. In the same way, I think these AIs are incredibly good at certain things, where they'll outperform humans. They're incredible tools. And then there are other places where they're just going to fall flat.
Steve Jobs famously said that a computer is a bicycle for the mind: it lets you travel much faster than walking, certainly in terms of efficiency, but it takes your legs to turn the pedals in the first place. And so now maybe we have a motorcycle for the mind, to stretch the analogy, but you still need someone to ride it, to drive it, to direct it, to hit the accelerator and to hit the brake. We should probably find something to wrap things up on. When new paradigms and new tool sets come out, there is a moment of enthusiasm and change, and this is true in society and true for an individual. If you ride the moment of enthusiasm in society, that's exciting: you can learn new things, you can make friends, and you can make money. But there's also a moment of enthusiasm in the individual, when you first encounter AI and you're curious about it and genuinely open-minded about it.
I think that's the time to lean in and learn about the thing itself, not just to use it, which of course everyone will, but to actually learn how it works. I think diving in and looking underneath the hood is really interesting. If you encounter a car for the first time in your life, yes, you can get in and drive it around, but that's also the moment you're going to be curious enough to open up the hood, look at how it's structured and designed, and figure it out.
I would encourage people who are fascinated by the new technology to really get into the innards and figure it out. You don't have to figure it out to the level where you could build it or repair it or create your own, just to your own satisfaction. Because understanding what's underneath the abstraction, what's underneath that command line, is going to do two things.
One is it lets you use it a lot better, and when you're talking about a tool that has so much leverage, using it better is very helpful. Second, it'll also help you understand whether you should be scared of it or not. Is this thing really going to metastasize into a Skynet and destroy the world? Are we going to be sitting here when Schwarzenegger shows up and says 4:29 a.m. on February 24th is when Skynet became self-aware?
Right, or is it more that, hey, this is a really cool machine and I can use it to do A, B, and C, but I can't use it to do D, E, and F? This is where I should trust it, and this is where I should be suspicious of it. I feel like a lot of people right now have AI anxiety, and the anxiety comes from not knowing what the thing is or how it works, having a very poor understanding.
So the solution to that anxiety is action. The solution to anxiety is always action. Anxiety is a non-specific fear that things are going to go poorly, and your brain and body are telling you to do something about it, but you're not sure what. We should lean into it. You should figure the thing out, you should look at what it is, and you should see how it works.
I think that'll help get rid of the anxiety. That action of learning, that pursuit of curiosity, is going to help you get over the anxiety. And who knows? It might actually help you figure out something you want to do with it that is very productive and will make you happier and more successful.