This Week in Startups is brought to you by Calm. Seize the day and sleep the night with the help of Calm, the number one app for sleep. This Week in Startups listeners get 25% off a Calm premium subscription at calm.com/twist. That's c-a-l-m dot com slash twist. LinkedIn: a business is only as strong as its people, and every hire matters. Go to LinkedIn.com/twist and get a $50 credit toward your first job post. And Kabbage: get the money you need to run your small business today. Go to kabbage.com and use code twist to get a $100 credit on your first loan statement. That's k-a-b-b-a-g-e dot com, promo code twist. Terms and conditions apply. Offer ends November 30th, 2019.
Hey everybody, welcome to This Week in Startups. I'm your host, Jason Calacanis, and this is the podcast where we talk to founders about their vision for how they want to change the world. And today, kind of an interesting cat on the program. His name is Alexandr Wang. He is the CEO and co-founder of Scale AI. You got the domain name scale.com. That's like a million-dollar domain name. We'll find out what he paid for it later. And I guess the thing that most people would think is remarkable, candidly, is that you've raised over a hundred million dollars in your last round of funding. That's a lot of money. It's quite a bit. It's quite a bit of money, from Founders Fund. And I think you're 22 years old. 22? Yeah, 22.
So that's annoying, to be young and successful, because then every interview starts with your age. A little bit. It's annoying. I had it happen to me. It was like, 23-year-old publisher of Cyber Surfer, or 25-year-old publisher of Silicon Alley Reporter. And I was like, what does my age matter? Now, I tell you, when you hit about 35, 40, they don't mention it anymore. Because they're like, wow, you're 40, you should be doing interesting things or be successful in the world.
But you've been running this company since you were how old? 19. 19. Now, were you a Thiel Fellow or something? How did you get into the game? No. So I have a fun little history. I grew up in Los Alamos, New Mexico. My parents are physicists, and they worked at the national lab in Los Alamos. Yeah, tell people about that lab. It was the lab where the atomic bomb was originally built. So the Manhattan Project started in Los Alamos. It was very secretive at that time. Yeah. And what did they call that lab?
Los Alamos National Lab. Los Alamos National Lab. Pretty boring. Yeah. And it's a government-sponsored lab. Exactly. Yeah, totally government-funded. And then growing up, in high school, I did a bunch of programming. I did all these coding competitions, and I was getting recruiter inbounds in high school.
So after high school, I actually came out here to work. I worked at this company, Quora, for a couple of years. Oh, we know. They do a Q&A site. Yeah, a Q&A site. How did you get that job? You just applied, and they saw your code, and they were like, okay? They sent recruiter inbounds, because I was an anonymous person on these coding competitions. You could just go into a coding competition, and nobody knows your age. Yeah.
And then you taught yourself how to code? Well, of course, the internet taught me how to code. So I guess the internet, you just looked it up. You found courses online, on YouTube or... Yeah, it's hard to remember. I think I just googled around, anyway.
So I worked at Quora for a couple of years doing engineering, infrastructure, et cetera. And then, no college? Well, then I went to college after that. Ha. I went to MIT, got basically bored after a year, and started Scale. So you left? I left. Yeah. So your parents were heartbroken about you leaving MIT at that time?
For now, yeah, for now. They're still heartbroken. Even after you raised 100 million? My god, these parents are hard to please. Yeah, yeah, I mean, it's a meme, but it's true. Yeah. All right. Listen, mom and dad, he's going to build a building at MIT with his name, with your name, the family name, on it. So cut him a break. It'll be okay. He'll be a professor emeritus at some point. Yeah.
Well, one can hope. One can hope. No, it wouldn't be professor emeritus. That would be somebody who left. You'd be like an honorary professor. Yeah. That would probably be the way to appease my parents. Yeah, because you went to work for a couple of years, then you went to MIT. That is not the way to do it, because you're going to be sitting there and everything's going to be going so slow, and everything's theoretical. Yes. That's exactly what happened. You went from running fast, you're driving race cars, and then they put you in the pit with the go-karts, and they're like, here's some go-karts.
Yeah. And if I'm being honest, I think the slower speed of school is sort of what got me agitated enough to eventually start the company. I think there's an alternate world where I continued working at companies after Quora for many, many years. So what was the vision for Scale? How did you get the idea? When did you have the idea? So our mission is to accelerate the development of AI applications. We fundamentally view AI and machine learning as kind of a once-in-a-generation shift in technology. It might be a once-in-a-species shift, by the way. Yeah, I mean, depending on how this goes. Yeah, we'll see. It's obviously very hyped, but I think we hold that belief quite strongly.
And do you think it's as big as the internet, or bigger? The internet itself took billions of people and connected them for the first time. Yeah, I'd say bigger than the internet, and bigger than the silicon chip, you know, CPUs being created. I think it's more comparable to the advent of computing than it is to the advent of the internet, because it's an enabler of all these things that previously had to be done by humans and now can be done by machines. Got it. Okay. So if you had to rank them: AI, computer, internet? Or maybe computer, AI, internet? We'll see what happens between AI and computer. We'll watch, we'll watch intently over the coming decades. Computers did change our day-to-day lives pretty significantly. Yeah, well, AI will as well, with autonomous vehicles and all these assistants on your phone. And I think it'll go on and on and on. There are a lot more applications.
All right, so that's the backdrop. And then you had some insight on what was holding back AI? Yeah, so the big reason I went back to school, actually, was to study machine learning. I was at Quora. It was a very machine-learning-driven company, but I didn't have that strong an academic backing. And so I went back to MIT to really study this more deeply. And then I had all these ideas of products I wanted to build. But there was sort of an elephant-in-the-room problem, which was: how are you supposed to get the data to build these machine learning models that you could integrate into a product?
So give me an example of that. And then define, for the audience that's not super familiar with the terms, what machine learning is. What is the difference between machine learning and AI, for people who hear them together? How would you explain what each one is? Yeah, so machine learning is a subset of AI. It's a particular kind of AI where you're writing programs that are able to do various tasks that traditionally require human judgment, and they're able to do that because you feed them lots of data. It's a particular brand of AI, if you will.
So we pick a task that humans have done with their brains, which is some combination of logic, intuition, who knows how human brains are making decisions. Exactly. A lot of debate about that. So you feed a bunch of data to a machine learning algorithm. Yep. And then it gives you an answer that it thinks would approximate a human's answer, or the best answer. It thinks it would approximate the best answer, but the only way it's going to know what the best answer is, is through all this data that humans have created.
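The loop described here, humans create labeled answers and the machine approximates them, can be sketched with a toy example. This is a minimal, hypothetical illustration of supervised learning, not anything from Scale's actual stack: a 1-nearest-neighbor "model" whose only knowledge is the human-labeled examples it is given.

```python
# Toy sketch of supervised machine learning: the model's only notion of the
# "best answer" comes from examples that humans have already labeled.
# All names and numbers here are illustrative.

def predict(example, labeled_data):
    """1-nearest-neighbor: answer with the label of the closest human-labeled point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_data, key=lambda pair: distance(pair[0], example))
    return nearest[1]

# Human-labeled training data: (features, label) pairs.
labeled = [((0.9, 0.1), "stop"), ((0.8, 0.2), "stop"),
           ((0.1, 0.9), "go"), ((0.2, 0.8), "go")]

print(predict((0.85, 0.15), labeled))  # prints "stop"
```

More labeled data means finer-grained approximations of the human answer, which is exactly why the labeling bottleneck he describes next matters.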
Got it. Let's come up with the most illustrative example. The example that, when you gave it to VCs, they threw money at you. Well, the one that has really captivated the world's attention is autonomous vehicles, right? Sure. And it's a compelling example because, first, nobody likes driving, but also driving is very unsafe. There's a lot of risk in driving. Sure. A lot at stake. Yeah, exactly. And so the captivating machine learning model is one that can take in all of the camera data and other sensor data from the vehicle, understand everything that's going on around it (something that's very easy for your eye but, at least before machine learning, was very difficult for machines), and then determine the best path to take and figure out how to drive on its own, basically.
Got it. So we see the lane markers, double yellow markers, double white markers on the highway. We know: keep the car between those two lines. Exactly. As smoothly as possible. Yeah. We see somebody deviate from their lane into ours. We know to slow down, give them some room, maybe they're drunk. The machine doesn't know that inherently. We have to teach it that. Exactly.
And what does scale.com do that Tesla and Waymo don't already do? Because they're solving that problem. Do they use your software, and do they need to? Yeah, so, that's a great question.
So the core problem, as you just laid out, is that machines don't know what to do unless they have data that actually tells them what they're supposed to be doing. And so one of the huge bottlenecks for machine learning is data: data that tells these algorithms, these models, what they're supposed to be doing. And that's where Scale comes in.
What we are is sort of this data refinery, if you will. We accept a bunch of raw data from our customers, we go through and process it, and we tell the machine what it should be doing. For example, given an image taken by a self-driving car, we would outline: these are where the people are, these are where the cars are, these are the lane markings, et cetera, so that over time these algorithms can learn those things. You have a video of that. You can share it right here.
So here's a video of Pacific Street. And you have a better eye than I do. I'm going to take a guess, that's one of those streets. And I see you are highlighting cars, you're highlighting people, and the machine is figuring out, okay, that's the approximate shape of a Dodge pickup truck, that's a Toyota Prius, and these look like the silhouettes of people. But that's a human telling the machine that's what it is, for now. Yeah, exactly.
So the core way that our whole pipeline works is that a lot of work is done behind the scenes by machines and our own AI models originally, and then humans basically give input and correct mistakes to make sure that the end data is extremely accurate. Because that, ultimately, is what's important for the safety of these systems, and for low bias, et cetera.
All these things that are imperative for machine learning to perform well. So you would go to a customer. Is Waymo or Uber a customer? Yeah, exactly, they're both customers. They're both customers? Got it. And you can say that? It's public knowledge. Yes.
Got it. Okay. So they're both customers. So they would give you videos of their cars driving, and then you would annotate it for them and put that data into a database somehow. That's exactly right. So, for example, if they gave us a video like this, you'll see the first step was originally a human drawing a box. Yep.
And then a machine learning model that's already pre-processed all this data has determined the path of that vehicle over time. Right. And then we confirm that all this is correct, and then send that data over to the customer, and they train machine learning models on top of it.
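What that hand-off to the customer might look like as data: below is a hypothetical annotated-frame record in the spirit of what's described. All field names are made up for illustration; this is not Scale's real API schema.

```python
# A sketch of what one labeled frame might look like after the "data refinery":
# raw pixels go in, structured annotations come out. Field names are hypothetical.

frame_annotation = {
    "frame_id": "pacific_street_000042",
    "boxes": [
        # Each box: pixel coordinates plus the class a human confirmed.
        {"label": "pickup_truck", "x": 120, "y": 310, "w": 220, "h": 140},
        {"label": "pedestrian",   "x": 480, "y": 290, "w": 40,  "h": 110},
    ],
    "lane_markings": [{"type": "double_yellow", "points": [(0, 400), (640, 380)]}],
}

def class_counts(annotation):
    """Tally labels so a training run knows what classes this frame teaches."""
    counts = {}
    for box in annotation["boxes"]:
        counts[box["label"]] = counts.get(box["label"], 0) + 1
    return counts

print(class_counts(frame_annotation))  # {'pickup_truck': 1, 'pedestrian': 1}
```

The customer's models then train against records like this, frame after frame, which is why the accuracy of each box matters so much.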
Got it. And this is how, I guess, one of the cars got fooled. Somebody drew on the ground, like an arrow turning, and a car followed the arrow, which a human would do too. They basically drew a turn arrow to see if it would fool a self-driving car, and of course it did. I didn't see this news, but I would believe that that's what would happen, basically.
Yeah. And that's what would happen to a human, by the way. So I thought that was the stupidest prank ever. They're like, look, we can fool a machine that's driving cars into making a wrong turn. It's like, you would also fool a human into making the same wrong turn. It's like taking the do-not-enter sign off of the off-ramp and putting an on-ramp sign on it. Like, congratulations, like Fight Club, you just did some crazy stupid prank. Yeah, that's exactly right. I mean, in a lot of ways, they will have some of the same challenges that humans have in driving.
All right. When we get back, I want to understand: are you storing all this data and annotating it for one company, or is this some sort of grand plan to have it go across multiple companies, so everybody doesn't have to reinvent the wheel, so to speak? When we get back, on This Week in Startups.
Are you struggling to sleep? Well, you're not alone. One in three US adults does not get the sleep that they need, and not sleeping enough affects all your cognitive functions. Think about it: learning and problem-solving and decision-making, all these things we do as founders every day.
Sleeplessness also makes people prone to more accidents, weight gain, and depression. But when we sleep, and we get that great night's sleep, you know what I'm talking about, then you're more focused and relaxed, and you're actually happier. So that's why we're partnering with Calm, the number one app for sleep.
You're going to get a library of programs from Calm that are designed to help you get the sleep your brain and body need, like soundscapes and over 100 sleep stories. I do these with my kids and they love it, and I do it myself, and it is amazing. And here's my associate, Presh, who has been having trouble sleeping because his boss is too intense. He goes through it, and he finds some nonfiction, and he's looking...
Ooh, a cruise on the Nile. Some Matthew McConaughey: watch Matthew McConaughey reading some sleep stories. He looks at the ASMR painting, Beauty and the Beast, and then he looks at sleep, and he does Lullaby to the Stars. So relaxing. So here's your call to action: This Week in Startups listeners will get 25% off a Calm premium subscription at calm.com/twist.
That's right, calm.com/twist. C-A-L-M dot com slash T-W-I-S-T. 40 million people have downloaded Calm, and it was Apple's 2017 App of the Year. Find out why at calm.com/twist. Thanks again to Calm. I'm an investor in the company, I love the company, and I'm so proud of the work the team over there is doing. It's just such an amazing app and such a great story.
Okay, let's get back to this amazing episode. All right, Alexandr Wang is here, blah blah blah, 20-something-year-old, who cares. He's young, he's smart. Don't worry, we'll all get old eventually. People will stop saying that you're 22 and whatever.
And he is building scale.com. That's like a six- or seven-figure domain name. No comment, no comment on the price of that. It's not cheap. It's in the dictionary and it's five letters, so it's not cheap. It was not inexpensive. No, not inexpensive. But boy, a good domain name does help the branding, does it not? Well, I guess we'll see. That'll happen in the out years. We'll see what happens.
It's baller. Everybody is like, oh, what's your email? And I'm like, alexandr at scale dot com. You just leave out an e. Picked the right one to leave out. So wait, your name is Alexandr, but you're missing the e between the D and the R? Yeah, that's actually right. A typo on your birth certificate? My given name. Intentional, very intentional. So, like, a joke by your programmer parents in some way?
My parents wanted eight letters in my first name, because they're Chinese, and that's good luck. Eight. Eight is very good luck. Yes, extremely good luck. Yeah. So they literally did that for good luck. And it worked. Yeah, it worked. I mean, I have a friend I play poker with. He's extremely superstitious, and he's Chinese. He buys in, when you play poker...
If he's not playing well, if he's losing, he buys in for $88,000. That's... that's not right. He does not look at his cards and pushes the chips in blind, in poker, Texas Hold'em. And then two or three of us will just call him if we have an ace or whatever, and then he wins every time. Oh, wow. For $88,000. I've seen him do it five times in a row. He's a legend in Los Angeles, that's all I can say. He sounds really good.
Yeah. I love to play poker. It's pretty amazing, because the worst starting hand in poker is still, like, typically 80/20 or something like that, so you even have a chance. But yeah, it's quite a thing to see. So before we left for the break, I wanted to know: with all this data, are all these companies programming their algorithms and data sets in a silo, over and over and over again?
There's no sharing across these companies? Yeah, that's exactly right. That's crazy. Well, because they all want to win in the end, so they're not sharing their data. Well, I think what this kind of comes down to is: what is IP in the machine learning context? And I think it kind of...
Intellectual property. Exactly. And it would be equivalently crazy if every company in the Valley were to just develop their code out in the open. Like open source. Exactly. There's a little bit of open source, for the underlying technology people do open source. But Google's not giving their algorithm away. Yeah, exactly, or open-sourcing that. And so ultimately, it's not that crazy.
I think it means that there's an incentive to do things that are novel and interesting and produce value. Okay, so you get the variability of 10 different people trying 10 different data sets, but you lose the efficiency of 10 different people working off a common data set. Yeah, I mean, at its core, your stance on this is a very similar thing to whether or not you believe in free market economics in general. Yeah.
You have a lot of people running around, running in the same direction, running in different directions. And if you believe in that approach, versus a sort of planned economy where there are high efficiencies with maybe low variance and low chaos, then I think it's really fine. Yeah, but because you have 10 different people competing with 10 different data sets, and there are big prizes at the end, like whoever solves self-driving wins $100 billion or $1 trillion... Yeah, you've incentivized large, influential pools of capital to pursue it. Yeah, exactly.
And there is an open source company, isn't there? There's somebody doing an open source company. Do you know about this company? They're going to open-source the data and do exactly what I'm talking about. I forgot the name of it. It is not a new idea. In fact, in the research community, people open-source data quite commonly. Really? Yeah. A lot of people say the start of the whole machine learning, particularly deep learning, life cycle was this large data set called ImageNet, which was published by this Stanford professor, Fei-Fei Li, who basically produced this large data set, and then it really kicked off this machine learning, deep learning hype.
Wow. So because she got all of those open source, Creative Commons images up there, and then had everybody train on them, you had a training set? Yes, exactly. She and her lab published this extremely large data set of millions of images, classified with what was in those images. So: this is an orange, this is an apple. Yeah, this is an orange, this is an apple, this is a cat. There are some rare ones, like, this is a rare kind of fish, et cetera. This is a cat eating an orange. Not quite that detailed. This was the beginning, mind you. Yeah. And basically that created this open source data set that then the whole world could work on top of.
So that would be the example of centralization. Open source being perhaps something between, let's say, a socialist, communist, singular-government approach versus a democratic capitalist approach. There's open source, which kind of sits somewhere between the two, doesn't it? Actually, maybe it's socialist. I'm trying to figure it out. Yeah, I'm not going to comment on exactly how to align these with economic systems. But I think, very much so, in general, the trend of providing some core underlying infrastructure for a large group of people, or a large community of people, who are all iterating or building on top of that infrastructure, is very valuable.
In self-driving, is it the video of what's on the road, or is it some other way of recording it that is the most effective? So we have LiDAR. Google bet the farm on LiDAR; Elon bet the farm, with Tesla, on video cameras. Everybody thought Elon was an idiot. Turns out, I'm hearing now that people are starting to think the cameras are getting so good, and the data set's getting so good, that cameras will win the day and LiDAR will be unnecessary. So we actually... Which is true?
So my personal opinion is that both sensors have different advantages, and fundamentally they're both very good in different scenarios. So explain. So we actually published a blog post about this, because we obviously see a lot of LiDAR data and a lot of image data. What's the name of the post? I think it's called LiDAR versus cameras, or something like that. I will search for it at scale.com.
But yeah, I think there are different scenarios where both are good, right? So LiDAR is very good, first of all, at giving you a 3D map of everything around you. That turns out to be very valuable if you're planning very careful maneuvers, and it's very reliable in giving you that 3D map. Yeah. It makes a map that is incredibly well refined. Here it is: "Is Elon Wrong About LiDAR?" Exactly, there you go.
It's also very good in dark scenarios, because the LiDAR creates its own light, so you know exactly what's going on around you. But it's bad in other scenarios.
It's bad when there's a lot of fog, et cetera. Why is LiDAR bad in snow and fog? Because it's shooting out these little lasers, and snow and fog are both very reflective, and they basically screw with how those lasers come back. Got it. So it makes an imperfect model in that situation.
Exactly. An imperfect 3D model, or at least one with things you don't actually care about. So, for example, if there's a plume of smoke, LiDAR will catch the plume of smoke, but it's actually fine to drive through a plume of smoke. Right.
Yeah. Okay. And so, is machine learning now able to reconcile when both systems are on, effectively? To know, hey, the LiDAR has built this perfect model, but the LiDAR is hitting something that could be smoke, or it could be a brick wall that just suddenly appeared, and the camera is like, no, it's smoke. We can tell, because cameras are better at detecting smoke and snow.
Yeah, that's why fundamentally you want both, right? So both is the best system. Both is definitely the best system. Because the place where the camera stuff breaks down is that, right now, if you were to look at the state-of-the-art computer vision models that work on cameras, they're accurate maybe, like, 99% of the time. Which sounds like a lot, but not 99.99%.
99% sounds like a lot, until you drive 100 hours. Yeah, 100 hours. Exactly, and 100 hours is not a lot. And that 1% happens to be the second where a boulder rolls into the street. Exactly. Yeah. So it's pretty important that you get to these asymptotically difficult levels of quality. And you can actually do that if you have multiple sensors that have different strengths and different weaknesses and can sort of play off of one another when you need it. Which is why you want both. You really want both.
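The 99% versus 99.99% point lends itself to quick back-of-the-envelope arithmetic. The sketch below assumes, unrealistically, that the two sensors fail independently and uses made-up error rates; it is only meant to show why two imperfect sensors with different failure modes beat one.

```python
# Back-of-the-envelope arithmetic for the "99% vs 99.99%" point.
# Assumes (unrealistically) independent failures; numbers are illustrative,
# not real sensor specs.

camera_error = 0.01   # a 99%-accurate camera model is wrong 1 time in 100
lidar_error = 0.01    # suppose the LiDAR pipeline is also wrong 1 time in 100

# If the failures are independent and happen in different conditions,
# the chance that both are wrong on the same frame is the product:
both_wrong = camera_error * lidar_error   # ~1 in 10,000, the 99.99% regime

# At, say, 10 perception decisions per second, expected seconds
# between single-sensor failures versus fused failures:
decisions_per_second = 10
single_failure_seconds = 1 / (camera_error * decisions_per_second)
fused_failure_seconds = 1 / (both_wrong * decisions_per_second)

print(round(single_failure_seconds), round(fused_failure_seconds))  # 10 1000
```

The independence assumption is the whole game: fusion only buys you these orders of magnitude when the sensors fail in different conditions, which is exactly the camera/LiDAR complementarity being described.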
Is the software doing that today, currently, or are people just picking one system and going with it? No, no, they very much work together, like on a Waymo vehicle or on a Cruise vehicle or whatnot. They very much work together. So in particular scenarios you'll pay more attention to what the camera tells you, and in others you'll pay more attention to the LiDAR. But the camera is the default now, right?
I... that's not true. No? Okay. I think a lot of these cars still drive very much influenced by the LiDAR. Really? Yeah. It's a really good sensor. If we had LiDARs on our phones, life would be great. Why? It gives you, again, a very accurate 3D map of the world around you, so you can basically do a lot more with your surroundings.
And I thought Google was starting to put that kind of depth sensing in there. They're not doing it with LiDAR. They're doing that with some other sensor. Yeah, there's a structured light sensor on the front of your phone now, that most of these phones have, that does Face ID or whatnot. Ah, a structured light sensor. Yeah. And that can do depth. It knows the depth of my nose, my eyes, all that kind of stuff. So it knows it's me, not you. Exactly.
And that's why a flat photo doesn't work, because it would be pretty hard to fake. Yeah, exactly. Though I heard the original versions didn't work for Asian faces, or people who looked similar who were Asian. When the white guys created the algorithm, at Google or Apple, Asian people could unlock each other's phones. I saw the articles. Is that true or not?
This gets back to the core of the issue, which is that machine learning is really hard, because it's all about the data. So who knows what was going on in the underlying data that trained those algorithms? It was some white guy's camera roll. You said it, here we go. Well, either way, this is why it's really important to have really good data and algorithms, because otherwise they'll do weird things.
But it would make sense, right? Like, if the algorithms were built off of a database in China, let's say the Flickr of China, and 99.99% of the photos were of people of Chinese descent, you would be optimizing for that data set. And if you did it in America, and whatever percent was white, and the percentage of Asian might be, whatever, two, three, four percent, let's say, it's not going to be as refined.
That's exactly right. Yeah, this is why, when a lot of our customers, and a lot of companies doing machine learning today, think about it, it's really about: how do we constantly improve with more and more data that fills in the gaps and makes the whole system, holistically, more robust over time?
And you guys build which piece of this? The data storage, the algorithms? I'm still unclear, so which piece do you build?
Yeah, so what we do is: all this data comes in. Let's say it's camera images, just images, it's simpler to think about. We're talking about petabytes of data. Yeah, these petabytes of images come in, and at first you have no idea what's going on in these images. Right. And so you need to figure out: where are the people, where are the cars, where are the stop signs, where are the cats, et cetera. And you need to figure that out for every single one of these images, so that you can train a machine learning model on top of that.
Got it. So what we do is, we build this pipeline where most of the work is done by machines on our end. We have classical computer vision algorithms.
So it's like you do a first scrub of the data.
Exactly. So somebody like Waymo could say, hey, here's 10 million miles of driving, have at it. And you say, okay, here's what we think: these are all the minivans, these are all the pickup trucks, these are all the cats, these are the bouncing balls, et cetera.
Yep. And then we also have a large team of smart, well-trained humans who can basically go through and spot the errors that those models make.
And then, well, that's a second level of scrubbing, which is humans looking at things that computers have a low degree of certainty about.
Yes. So if the computer is 99% certain, you just go with the computer? Well, we have more sophisticated filters than that, but more or less, yeah.
And then basically this highly accurate data goes back to the customer, and they feel great about it. They retrain their machine learning models. It's a wonderful cycle. So they don't have to worry about building a team to do this basic level. It's almost like you're just really good at getting that data set scrubbed and cleaned for them, and normalizing it in some way, so that they can work on the higher-level stuff, like what to do with the minivan, or what to do when the minivan is turned on its side. Oh, it rolled over, something's going wrong here, right?
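The "humans look at what the computer is unsure about" step can be sketched as a simple confidence filter. The threshold and record shape here are illustrative assumptions; as noted, the real filters are more sophisticated than a single cutoff.

```python
# A minimal sketch of confidence-based human-in-the-loop routing:
# machine-generated labels above a threshold pass straight through,
# and the rest get queued for human review. Threshold and field names
# are hypothetical, not Scale's actual filters.

def route(predictions, threshold=0.99):
    """Split model predictions into auto-accepted and human-review queues."""
    auto, needs_review = [], []
    for pred in predictions:
        if pred["confidence"] >= threshold:
            auto.append(pred)
        else:
            needs_review.append(pred)
    return auto, needs_review

preds = [
    {"label": "car", "confidence": 0.997},
    {"label": "pedestrian", "confidence": 0.62},  # uncertain -> a human checks it
    {"label": "lane_line", "confidence": 0.999},
]

auto, review = route(preds)
print(len(auto), len(review))  # prints: 2 1
```

Routing like this is what lets machines do most of the volume while humans spend their time only on the hard, ambiguous cases that actually improve the end data.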
That's exactly right. And the way that we think about this in general is, we're really providing this infrastructure layer for machine learning globally, or AI globally. In AI in general, there's one big problem, which is getting all the data, and there's another big problem, which is: how do you build these models, and how do you improve them? Yeah.
And we're trying to take the first problem off of people's plates. And when we get back from this break, I want to know: when do you believe, based on your seat, which is very close to the data, in fact, you're sitting on top of the data, you're soaking in it. I want to know when you, Alexandr Wang, with no E, eight characters in that first name, I want to know when Alexandr Wang thinks we will not need a steering wheel on cars driving from Palo Alto to San Francisco. When do you think that'll be legal, without a steering wheel? When we get back, on This Week in Startups.
All right, listen. There are 600 million people on LinkedIn, including me, and you, and the person sitting next to you, and the three people you just emailed. And you have to hire people, but where are all those potential candidates? Well, they probably have a job right now, so you've got to get in front of them, because they're passive job searchers. If they see something interesting, they might just click it.
Well, how do you do that? Well, here's how you do it. Watch my associate: he's going to go on LinkedIn Talent Solutions, and he's going to post a new job for a client success manager in our Toronto office. He quickly selects the skills that are needed, writes a quick description, adds additional screening questions, which I love, and then he sets a daily budget that's, you know, reasonable, and he's on his way to finding the perfect candidate, whether they're looking for a job or not. LinkedIn is going to get you in front of hundreds of millions of candidates, including ones who are not actively seeking a job.
So, with LinkedIn Jobs, you can pay what you want, and the first $50 is on them. What? Yes, that's right. LinkedIn Talent Solutions is going to give a $50 credit to you. Right now, all you have to do is go to LinkedIn.com slash twist, LinkedIn.com slash twist. I don't know how long this offer will last, but it's $50 for you right now, for free, from our friends at LinkedIn. And by the way, a hire is made every eight seconds on LinkedIn, and you know that that's true, because how many people do you know who found a job or a candidate on LinkedIn? It is the central repository of talent, and now with LinkedIn Talent Solutions, you can leverage that massive network.
Okay, let's get back to this amazing episode. All right, Alexander Wang is here, scale.com, Scale AI. You heard what they do. And how many people have you got working with you now? I think now the team is about 150 folks, mostly here in San Francisco. Right? Yeah, exactly.
Hey, when I left our hero, that's you, Alexander, I was wondering, when do you think we'll have a self-driving car? In a major city like San Francisco, driving a major route, let's say, from Palo Alto onto the highway, off the highway, you know, into Kokkari, the great Greek restaurant here. Have you been? I have not. Kokkari, you can write that down. Kokkari is the best Greek restaurant. Some people say it's the best restaurant in San Francisco. Get the octopus and the saganaki. So, you get them there.
You leave your house in Palo Alto, you get out, you order the saganaki. What year would this be possible? Over or under 2030? When do you think we'd first see this? Ballpark. Would you pick under 2030 or over 2030? So this is the over-under. This is the million-dollar question, which is when. It's actually a trillion-dollar question. A trillion-dollar question, let's be real. But fundamentally, the technology is getting better and better every year.
And how much better, on a percentage? Do we double, triple? Well, it's hard to say, it depends on what metric you're measuring, but the algorithms that perceive the environment are getting a lot better, like asymptotically better every year. And then the algorithms that figure out what the car's supposed to do, the planning algorithms, et cetera, are also getting better. So it really is only a matter of time before we get to the point where these kinds of routes are possible, and we'll live in a safer world. Got it.
So it seems to me that you're thinking less than 10 years from now this will be happening, with regulation and all that counted in. Yeah, I don't think regulation's necessarily going to be that tricky. I think there is precedent, like if you think of when autopilot first came about, there's precedent for how to think about a lot of these things. You mean autopilot in the airplane sense or in the Elon sense? In the airplane sense, yeah. Got it. Sorry.
So the FAA and whatever regulatory bodies were like, okay, we get it. Autopilot works better than a human. Yeah, exactly. It's pretty obvious. Yeah. Plenty of room up there to operate when you're up at 30,000 feet. A lot less room to operate when you're going through the tunnel and there's six people in the middle of the street, though. The challenges are a bit different, but I think there's precedent for how to think about these things.
I think the technology, once it gets good enough, will be clearly extremely good. And so I don't think it will be that big of a deal. Under, so you think 10 years, maybe less? Yeah, I'm excited for this software in the future. Got it. Wow, look at that. You will make a commitment. I like it.
Why is that? You want to be neutral because of all your partners, your customers, all the driving companies? You don't want to speak on their behalf or something? Or you just don't like the idea of gambling? Well, I didn't put any money on this, but I'll put the Kokkari lunch on it if you want. I have no problems with gambling. I'm a risk-seeking guy. What do you like? You a blackjack guy? You a poker guy? What do you like? I play poker, yeah. Really? I would say I'm a little bit above average. So you play a mixed game? Yeah. I would love to play a regular game. Do you have a regular game? There are some games I play in, in San Francisco. Yeah. Yeah.
You probably have bigger games than my game. Why not play the bigger games? I don't have to play the bigger games. Maybe I'll watch you play. Yeah. One day. One day. Let's start. Post-IPO. Yeah. Well, I mean, there's always secondary shares, and you know, common shares play. That's what I always tell founders: common shares play. I would settle up a 50K balance with some common shares. In Scale, not in Calm.
So let me ask you about the race between China and America for this. China is really putting a lot of effort into this. Yeah. And do you have any customers in China currently? We don't have any customers in China. We work with some US arms of Chinese companies. Baidu, for example.
I don't have a full lay of the land in China, but they seem to be doing a Manhattan Project, back to your parents in Los Alamos, they're kind of doing a Manhattan Project. Does that mean they're going to get a lot further than us, you think? I think it's definitely true that China is innovating very much in machine learning and AI. And it's very clear that it's a very concerted effort on the part of a lot of these large tech companies in China. There's a lot of investment, where if you were to look five years back, it would be kind of shocking how much progress they've made today. So something is definitely happening.
And to be clear, I think a lot of the machine learning is definitely very good in the US, and for most things it's much better, but they are certainly making a lot of progress. And there are a lot of data sets that they have access to that we don't necessarily have access to. Like CCTV cameras, they have cameras everywhere. They can find somebody already with facial recognition. There was a 60 Minutes piece. I guess they found people in like less than five minutes. Yeah. Pretty scary. Yeah.
Yeah, it's very, oh, look at that. Cyan. Cyan, yeah. Oh, Kai-Fu Lee, I know Kai-Fu Lee. He likes to play poker too. Yeah. He does actually. But yeah, basically there's a lot of concerted effort. There are a lot fewer questions as this technology gets productionized in China versus in the US.
You mean moral, ethical, regulatory issues? They're going to just go faster there. As Cyan, the angel investor, points out on Twitter, Kai-Fu Lee said that China would get further because they don't have the same issues around human casualties. That's interesting. Yeah. And they're more accepting of risks, et cetera. And I do think move fast, break things lets you move faster. Right. So there is a real tension there between the two approaches. Yeah.
I've been pretty excited that in America we haven't had a panic over self-driving. So the horrible Uber accident in Arizona happened, where the safety driver wasn't paying attention. And I believe they were playing Candy Crush or on their SMS. Did you see the actual video of them? Yeah, they were on their phone, looking down at a video. Yes. They knew they were on camera. They were told they were being videotaped the whole time, and they still couldn't keep their eyes on the road. And they killed the poor homeless person who was walking across the street in the middle of a dark road. I believe it was a man with a bike, but yeah. Oh, it was a man with a bike. Yeah. Somebody said it was a homeless person at some point. It's interesting that somebody said that, almost as if, by the nature of being homeless, the person was doing something bad. But they were doing something: they were crossing, like, an eight-lane highway in the middle of the road, where they had absolutely no business being, which is what the algorithm's job is.
So how does an algorithm take that into account, a person who is blatantly going against all conception of what is normal, like walking across a six- or eight-lane boulevard, avenue, freeway? Well, I think this case is actually a great example of two things. First, where lidar is great. Didn't they have lidar on that car, or no? So the thing is, the model actually detected that person.
So in this case, the machine learning, and the lidar by the way, also detected the person. It was doing its job. It was more the higher-level processing that made the unfortunate decision to not brake. Yeah. And to brake at whatever, 45 or 60 miles an hour, on one of those freeways, that comes with consequences too, because there are people behind you. Yes, definitely. How is the computer supposed to make that decision?
Let's put aside this issue where obviously the decision tree didn't make the right decision on the data, which was clearly presented. So it wasn't a sensor issue. It wasn't a processing-of-the-data issue. It was a decision issue. It decided not to slam on the brakes. Is that right? That, I believe, if you read the NTSB report, is what happened. Yeah.
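The failure mode being described, where perception did its job but a higher-level rule overrode braking, can be sketched as a tiny two-stage pipeline. This is a hedged illustration of that general structure, not Uber's actual code:

```python
# Hedged illustration (not the actual system): perception detects the
# pedestrian, but a higher-level planning rule suppresses braking.

def perceive(scene):
    # Perception/lidar layer: returns detected obstacles of concern.
    return [obj for obj in scene if obj["kind"] in ("pedestrian", "vehicle", "bike")]

def plan(detections, braking_suppressed=False):
    # Planning layer: decides whether to brake. The bug in this sketch is
    # the suppression flag overriding a perfectly valid detection.
    if detections and not braking_suppressed:
        return "brake"
    return "continue"

scene = [{"kind": "pedestrian"}, {"kind": "sign"}]
# The sensors did their job:
detections = perceive(scene)
# ...but the decision layer made the unfortunate call:
decision = plan(detections, braking_suppressed=True)
```

Note where the failure lives: not in `perceive`, which returns the pedestrian, but in the decision logic downstream of it.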
So a developer who wrote the code didn't write it properly. Well, it's a complex code system that ultimately made the decision it did.
Do we even know what's going on with this machine learning? Because my understanding is, a lot of times we put a bunch of data in, the answer comes out, and you ask the person who set the whole system up, and they don't know how the answer was arrived at. Just that it came to that answer.
Well, you bring up an interesting topic, which is explainability. How do you actually know what these algorithms are doing? And this, again, I know I sound like a broken record, but it does come down to the data. And when you actually dig in, usually when the algorithm makes a weird decision, you can trace that back to something weird in the data that it was trained on.
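One common way to "trace it back to the data" is to pull up the training examples most similar to the input that triggered the weird prediction and eyeball their labels; a mislabeled neighbor is often the culprit. A minimal sketch with made-up data:

```python
# Minimal data-audit sketch: when a model makes a weird prediction, find
# the training examples nearest the offending input. Odd or mislabeled
# neighbors are usually the "something weird in the data".

def nearest_training_examples(train, query, k=2):
    # train: list of (feature_vector, label); query: feature_vector.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sorted(train, key=lambda ex: dist(ex[0], query))[:k]

train = [
    ([0.0, 0.0], "pedestrian"),
    ([0.1, 0.1], "pedestrian"),
    ([5.0, 5.0], "vehicle"),
    ([0.05, 0.0], "vehicle"),  # suspicious label sitting in the pedestrian cluster
]
neighbors = nearest_training_examples(train, [0.02, 0.01])
```

Here the audit surfaces a "vehicle" label sitting among pedestrians, exactly the kind of artifact that would explain a weird decision.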
Do you have any examples of that, without mentioning specific customers, but in your test lab, or your laboratory, or even in the real world, without mentioning who, what, when, where? Do you have an example where the data was confusing and the output was then, you know?
Yeah, I think. Cool. Do you bear the responsibility of that? We don't, but we do bear a quality responsibility with our customers. In fact, we sign up for the quality of the data we deliver to our customers. They have the ability to look through all the data, audit it, et cetera. That's kind of heavy, isn't it? It's a lot of responsibility.
Well, it's why I think, ultimately, if you think about it, if you really believe in machine learning, for a lot of the things you need to do machine learning, there needs to be stable infrastructure, just like running water, right? So if you think about it, for example, AWS has a pretty tough job.
They have to say that all these machines that they have up and running are going to be up 99.99 or 99.999 percent of the time, like a crazy high percentage of the time, and that all your queries will take less than X amount of time, et cetera.
But that infrastructure lets all these people build on top of it and build their systems, et cetera. So I do think, as a general rule, as an infrastructure provider, you need to provide infrastructure that's very reliable, that people can depend on.
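Those "crazy high percentages" translate into very concrete downtime budgets. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope: how much downtime per year each uptime SLA allows.

HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def allowed_downtime_hours(uptime_percent):
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

# "Three nines" (99.9%) allows about 8.76 hours of downtime a year...
three_nines = allowed_downtime_hours(99.9)
# ...while "four nines" (99.99%) allows under an hour, about 52.6 minutes.
four_nines = allowed_downtime_hours(99.99)
```

Each extra nine cuts the annual downtime budget by a factor of ten, which is why the difference between 99.9 and 99.99 is a much bigger engineering commitment than it looks.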
And you know, it's really interesting. AWS does such a good job that when AWS does go down, which is usually some portion of it, some region, like the Northeast or whatever, it's almost like people have a funny, joking, snow-day response to it for three or four hours.
As opposed to five or 10 years ago, when this would happen, people would get really bent out of shape. Things would go down for two days, or a day, or, I don't know what the longest Twitter outage was, but it would be for hours. It could be half a day of the Fail Whale. And before that, it was like a reason to not trust the internet. So we went from not trusting it, to being extremely frustrated, to now it's kind of a joke: oh, it's down, we know it's coming back. Yeah, no big deal. They're going to reboot the servers and figure it out.
You framed it very well. Right now we're in this period of high distrust of these systems. We see them do weird things, and we don't really know what's going on, and it just feels foreign and weird.
And then eventually everybody will learn more about the technology, learn more about what it's good and bad at. We'll get to a point where, when it does something unexpected or something bad, we'll just get really annoyed. Yeah, very frustrated, because it really impacts people's lives, or it impacts people's businesses, et cetera. And then in the long term, I think the systems will be extremely reliable, and certainly much more reliable than any human could ever be.
Yeah, I mean, we interact with lots of machine learning systems already. Google search is a large federated machine learning system today. It's very, very influenced by core deep learning, machine learning, et cetera. And it is extremely reliable. It works like running water. It's great.
Yeah, when's the last time you were like, I couldn't find that answer? Between Quora, YouTube videos, and Google acting as this glue and fabric between it all, surfacing stuff in the OneBox, you know, the little box that gives you a mini scraped answer.
But they're not allowed to scrape Quora. Quora doesn't let them index it, right?
I think they're still in a standoff. Actually, I don't know. Sometimes I see Quora results. Well, maybe this is old.
I think you see the link, but they won't let Google put it in the OneBox, where they kind of expose the answer. They make you click through, and they make you log in. This is why Quora is kind of brilliant. Quora. They won't let Google take their data. Quora. It's too valuable. It is a very brilliant company.
D'Angelo? Adam D'Angelo. Adam D'Angelo. You know, I could get him on the pod. Let's make a note. It's been like five years I've been trying to get him. He doesn't like to talk. He's an introvert, I think. He is. Well, he's on Twitter. So, Twitter.
Yeah, tweet at Adam. What was he like to work with? He was great. He's very, very smart, very thoughtful, very long-term focused. I think he put 30 million of his own Facebook money into it, right? Well, he was the first CTO of Facebook. So I think he put 30 million, this is what I heard, of his own money in, like, the Series B or something, and then let somebody else put in like 10 or 20. Talk about skin in the game. Yeah, I have no idea. It was bonkers. I was like, wow, that's a first.
But he's thinking 20, 30 years out. This is his first and last startup. I think, yeah, he thinks extremely long-term, which I think is a competitive advantage in today's ecosystem, where most people in general are very short-sighted.
They'll look at, oh, where's the quick way in, or where's the new hot thing. Right, and they don't have any revenue, right? They don't have ads. They do have ads, actually. They put ads on? Oh, I didn't know there are ads on there now. They're so subtle that you can't even notice them. Whoa, I'm going to go check those out.
Does it matter that we don't know how ML sometimes gives an answer? When we go look at the algorithm, why did this come up first? We say, well, there's a number of factors, but we don't actually know. Does it actually matter? I think non-explainable systems are already out in the wild and...
What's an example of one in the wild? Well, Google, like before there was machine learning Google, or your Facebook feed. Okay, so Facebook can't explain why a specific post went to the top. Well, they have some information, right? Right. And they can give you a little bit, they can give you some information. But this is a problem with code more than anything. Most code systems are so big and difficult to explain that it's already a big problem.
So the cost of doing a Google search and getting a non-perfect answer, or, let's say it makes a huge mistake, is very low, right? You just do the search again, or you change the search a little, or you pick the second answer. No problem. If Facebook puts something at the top of your feed that's not the most relevant, and the second and third ones are the most relevant, again, no problem.
However, if you do it with a self-driving car and it makes a decision, it could be somebody's life. And if you do it with some kind of system involved in justice, like talking about using ML for justice, should a computer be able to give an answer that somebody's guilty without being able to explain it?
So when we get back from this break, I want to know: would you trust, in the next 10 years, ML and AI to make decisions? Obviously we're fine with it making decisions about driving, and people's life and death there. Would you let it work on the justice system, versus the justice system in America, which has been proven to be biased against non-white people? When we get back on This Week in Startups.
Listen, you're running a small business, you're running a startup, you need money, and it shouldn't take all your time to get money to run your business. The modern way to do that, the simplest way to do that, is Kabbage. They allow you to access up to $250,000 in credit to run your business.
Kabbage's application process is online, and it takes just minutes to complete and get a decision. If you qualify, you can access the amount you need right away and withdraw more funds whenever you need extra capital. Kabbage has an A-plus rating with the Better Business Bureau and has already provided over 200,000 small businesses with access to funding.
Portfolio companies have used it to cover employees over the holidays when a large client missed an invoice. That was an amazing story I heard. So I want you to get the money you need to run your small business. Today, go to Kabbage.com and use the code TWIST to get a $100 credit on your first loan statement. That's K-A-B-B-A-G-E, Kabbage.com, and use the promo code TWIST.
This is an important disclaimer. You must take a minimum $5,000 loan to qualify. Credit lines are subject to review and change, and this offer ends November 30th, two days after my birthday. Individual requests for capital are separate installment loans issued by Celtic Bank, member FDIC.
Alright, let's get back to this amazing episode. Alright, Alexander, we're coming around the third ad break, which means I'm going to ask you the really hard questions. Like, you're warmed up, you're comfortable. Maybe let your guard down a little bit. The PR guy's, like, checking his Slack. He doesn't care anymore. He's out of the woods. So let's get into the tough stuff.
I'm joking. The PR person in the room was probably freaked out by that question, but they're not. We established that the explainability of your Twitter and Facebook feeds, or Google, we would all agree, doesn't matter.
We could argue Facebook maybe matters if it's pushing up stuff that is fake news and they're getting some heat about that but again nobody's dying hopefully.
But now we come to self-driving cars. Should you have to prove how these things made a decision, as opposed to having this inability to understand how the decision was made? Or does it not matter, if it gets the right answer 10 times better than a human, in your mind? So in the situation where the car, where your self-driving software, or one of your customers', is 10 times better than a human, it's proven: should they be able to explain the one-in-10 chance they have of an accident compared to a human? Should explainability be required, yes or no?
This is a really important question. Again, that's why I'm asking you, the founder of scale.com, who raised 100 million to empower all of this. That's actually right. I think as more and more systems are governed by machine learning, it's very natural to ask, okay, if we're trusting our lives to these systems. We are. How are we supposed to feel good about that? Am I supposed to just live with the one-in-whatever chance that one of these systems will just poop out and then my life will be at risk?
So I do think that ultimately, further explainability and a deep understanding of the performance of these machine learning systems is going to be needed. Now, that being said, there are plenty of mission-critical software systems in today's world that we depend on. For example, whatever systems we use to control the power grid, or whatever system we use to control the national missile system, et cetera. Those are all software systems, and sometimes they will just crap out.
Every once in a while, your database will go down, or every once in a while a system will go down. And whether or not you can explain those phenomena, the tail probability that that happens still causes real risk. So in a way, you're saying you're being held to a higher standard. Why do you think you're being held to a higher standard than the systems that came before?
Well, yeah. I have a theory, but I'm interested in your answer. What I'm saying is that we live in a world where randomness is a reality, right? And so, accepted. As these systems end up launching and being more and more important, I think it's important that we realize there are always tail probabilities that bad stuff happens.
Now, that being said, I do think it is the responsibility of the people who operate these systems, and the people who make these systems, to have an understanding of their performance, and also to ensure that they are doing everything they can to make sure these systems perform as well as they can. Which ultimately comes down to: okay, if I'm training this model, and I'm training it on so-and-so data set, how do I make sure that the data is unbiased? For example, we talked about the facial recognition cases. How do I make sure that it is properly representative? How do I make sure that there are no weird artifacts in the data that would cause something bad in the model?
Yeah. And then how do I trace back these issues, right? Yeah. And I think, very much so, the whole machine learning community is understanding these issues and building for them, but it's more about how do you build these systems to be robust, built on large data sets that are very diverse. And the more diverse the data set, the less bias there should be in it. Exactly.
Except if you said, okay, we've got a couple of states, let's make a justice system based on them. And then all of a sudden you're like, okay, let's do the whole United States, and then you find out, gosh, the whole United States justice system is biased. And if we were to build on that data set, people with African-American names or Latino names would be convicted more often, because we actually used a data set that had bias in it.
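That kind of check, comparing outcome rates across groups before training on historical labels, can be done mechanically. A toy sketch, with entirely made-up data:

```python
# Toy bias audit: compare outcome rates across groups in a historical
# dataset before training on it. A large gap between groups is a red
# flag that the labels encode bias the model would simply learn.

def outcome_rate_by_group(records):
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if outcome else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical records: (group, convicted?)
records = [
    ("A", True), ("A", True), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", True), ("B", False),
]
rates = outcome_rate_by_group(records)
# Group A converts at 0.5, group B at 0.75: a disparity worth
# investigating before any model is trained on these labels.
```

This is the simplest possible check; real fairness audits also control for legitimate explanatory variables, but the idea is the same: inspect the data before the model inherits it.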
Yes. But what you're saying is, the people who are in this are acutely aware of it. And they are good actors who want to get it right, or why else would they choose this as their profession? There's nobody on your team, or any ML team you've ever met, who said, you know what, let's put systematic bias into the system, as opposed to getting the right answer. Getting bias into the system would mean your business is going to go out of business, because you made a poor system.
So in fact, machine learning and the people working in AI are so acutely aware of this, their intentionality would be to remove bias and make the world better. So it's, again, this technophobia. And we're holding technologists and machine learning to a higher standard, I think in part because we as humans are so scared of being replaced that we're going to hold that which replaces us to a standard we would never hold a human to.
Yeah. And by the way, this is my opinion: there's this played-out narrative, right, that AI is this magical thing that will come in and just replace humans at all of these very important tasks. And I think that's the dominant belief that a lot of people hold. But the reality, in the actual nuts-and-bolts scenarios, is pretty far from that. We have a long way to go before machine learning will fully replace any jobs, right?
Yeah. And there is precedent for this, by the way. When ATMs were originally invented, built, and launched, one belief you might have had is that, as these ATMs were built, the number of bank teller jobs would start dropping pretty considerably. Yeah. But what actually happened is that the number of bank tellers in the United States actually grew pretty considerably.
And there are a number of economic reasons you could think of for why. The unbanked started to bank. Yeah, so one is that ATMs allowed for this huge growth in the banking industry, which means there's a lot more opportunity. Another is that all those people who wouldn't get a bank account, because they wouldn't have access to their money, who kept it on their couch, were like, oh, I can get money any time. Okay, my number one fear is gone. Exactly. I will put my money in the bank, because I don't have to go there between 8 a.m. and 3 p.m.
Yeah. So that's one. Another is that the bank tellers can now focus on higher-value tasks. For example, mortgages, lines of credit, starting bank accounts, et cetera. Yeah, credit cards, right. Which means the value of a bank teller goes up, which means it's more valuable to invest in more bank tellers. And so these second- and third-order effects usually mean that there's way more opportunity and way more growth as the automation slowly seeps in.
Yeah. And I mean, you think about it. We spent all this time creating these phone routing systems, remember, where you're like, press one to go here, press two to go here. And we did that for what, 30 years? It just sent people to phone jail; they used to call it voice jail instead of voice mail. And we invested for 30 years in voice mail systems. Then at some point we realized, wait a second, everybody gets to people over messaging, whatever. If somebody does pick up the phone and call, they probably have a very acute, important problem. And that's a chance for us to prove how great our brand is.
And to get to know our customer better. And then, as with our best customers, let's bring back people who pick up the phone and talk to you. It's a delightful, VIP-customer-type experience. And now you have people adding headcount back, and they just call it customer success. And people look at customer success not as a cost center anymore. It used to be a cost center: how do we reduce the number of calls? Now people look at customer success as an investment in them renewing.
So the SaaS people are like, yeah, if we get people to call in when they have a problem, maybe they won't churn. Maybe they'll use the product more. Maybe we'll upsell them. So sometimes we bring the jobs back. We got rid of phone operators and receptionists, and now we're bringing them back. We just call them customer success.
Yeah, exactly. These trends are happening over and over. I mean, helping people focus on higher and higher value work is really, that's sort of the core of human progress, in some sense. And I strongly believe that very much will be the actual story of AI and machine learning. And it'll have to happen more and more and more for us to be comfortable with it.
But a great example is truck driving. There are all these automated truck driving companies. Yeah, lots. We work with a lot of them, Embark, Ike, et cetera. And the naive view is that, hey, they're just going to automate truck drivers. And if you look at a map of the states, truck driving is a top profession in a lot of states. It seems really bad. But actually, if you think about the system as a whole, there's an actual shortage of truck drivers in the United States.
And the median age is like 50 or something crazy. Yeah, exactly. Millennials and Gen Z are not becoming truck drivers. So there's this kind of instability in the market because of all this stuff, right? And what the automated truck driving systems would actually do is automate the long-haul middles of these trips, the boring, arduous parts that displace people from wherever their homes are, et cetera.
Yeah. And allow the current truck drivers to focus on these higher-value trips that are, say, warehouse to a meeting point, or whatnot. Yeah, drayage to the factory, or even the last mile. I mean, who knows? Maybe these trucks will change their form factor and be half the size, be automated. And when the truck gets off the road, instead of using 18-wheelers, we might just use smaller, mid-sized trucks that'll be electric and solar-powered, so you have more of them. When they get off, they become the delivery truck.
Yeah. And they just automatically start delivering. Yeah. It can be a much better model. Yeah. So this introduction of machine learning to improve the efficiency of the economy, it'll be slow because of how free-market economics works in general. It'll take effect in areas where there's an acute problem today, right? It'll happen in those places first, and it'll allow the current jobs to become higher-value, more impactful, et cetera.
So we believe the true narrative will actually be extremely positive, versus the current narrative, which is that AI, AGI, et cetera, are going to take over the world. Yeah.
Silly. I mean, there is a possibility that AI could get out of control at a certain point with exponential computing. That's not far-fetched, that it could do something crazy and stupid. You only think that's not far-fetched because you've watched a lot of these sci-fi movies. That's the... wow.
I mean, listen, if you were to train an AI to work on a drug to kill cancer and you didn't program it properly, it could create a drug that was too aggressive, because you didn't tell it, well, in the process of killing cancer, please don't make the person blind, right? Or all these other things. So you could just forget some edge case, and some general AI might think, if you said to the general AI, you should work on things that make the human species better: okay, let's kill cancer. And then it's like, oh yeah, or let's cure this communicable disease. Great. I took "cure communicable diseases" to mean kill everybody who currently has the disease, so it can't be communicated. Like, this sounds far-fetched, but there will be instances where they make the wrong decision, right?
Or will it be just too slow of a ramp-up for us not to catch it, in your mind? Yeah. I mean, the thought experiments always go, oh, you'll make an errant command to an AI and all of a sudden it'll take over the world and do something that you really don't want it to do. But in reality, there's a lot of oversight over these machine learning systems right now. There are tens, hundreds of people who look at these models. They look at all the data that comes in and out, they analyze everything, and they try to figure out, okay, what is this model doing well, what is it doing poorly, and how do we adjust to that, et cetera.
So I think that could happen in a world where we have low oversight of these systems. So oversight is always important in any new technology, right? It's like when we started having airplane autopilot, for example, it would be crazy to just say, okay, we have airplane autopilot, just let it fly. Should we put any oversight over Facebook and the social media companies? To be clear, they didn't have oversight. Do we think the FTC giving a fine in the rearview mirror is oversight? It's not oversight. They had no oversight, and we lost our democracy over it. The Russians came in and spent rubles doing it.
So that is one take. They manipulated, they stole the Cambridge Analytica data, they went after voter rolls. Whether it actually caused the election to swing, we'll never know. Perhaps, but they definitely were able to swing some portion of it, they definitely were able to manipulate it successfully. So what regulation is there of AI right now? There's none. You're acting under a zero-regulation environment right now, and China's got a negative regulatory environment. It's true that... So you should be regulated, by your own admission. No, no, no, no, this is all I'm saying.
Well, wait a minute. You just said that you should be regulated so that we don't have problems. So which is it? Uh, I do think that there are a lot of important issues around how we deem which AI systems are appropriate, how we look at what they're supposed to be doing, et cetera. I do think governing bodies, the US government in particular, for example, have to take a deep look, understand the technology, determine what is reasonable and what is not reasonable, et cetera, and ultimately they're the... But even in their case, they're looking at the miles driven and the accidents. They're not looking at the code that you guys are writing. They're not looking at anybody's code. They're not looking at the AI systems. They don't even have anybody on staff who could even write an algorithm, right?
Well, that's also changing, to be clear. Is it? So in general... I don't think they're looking at any lines of code in any of these systems. I'm not sure about the answer to that, but I do think they look at a large amount of data.
So, okay, they do. Yeah. So in Europe, for example, there are these ADAS systems, right? These driver assistance programs, driver assistance systems, in a lot of high-end vehicles that you buy today, right? Keep in the lane. Yeah, exactly, keep in the lane. If you're stopped in traffic, you don't need to do anything. Yeah, adaptive cruise control. Yeah, and lane change warning. Yep. Standard on BMWs and Audis these days. Yeah, exactly. So all these systems exist, people buy these systems, people rely on these systems. And in the EU, for example, where a lot of these carmakers are, BMW, Audi, VW, et cetera, they have a responsibility to both have a large data set that they have collected themselves that validates that these systems are performing well, and to pass a series of actual trials. Really?
But there are a lot of different forms of data requirements that governing bodies place in front of them.
Well, that would be very interesting. Now that I think about it, we do crash tests for cars. You're required to give three cars or something to the government for them to just destroy in their crash tests. Yeah. But we don't require those cars to go into a lab, get taken over by the governing body, and be forced into real-world testing environments. Because there is some real-world testing environment where you do self-driving up north here, I think, some military base.
You're referring to... there's a military base that everybody uses for self-driving. It's like a town that was converted into a self-driving town. Yeah. Forgot the name of it. Yeah, a lot of these companies buy cheap real estate and build it into these mini towns so they can create these funny scenarios. Have you ever been to one of those? I've never been, but I've definitely seen the videos from there. I've seen the videos, yeah. Yeah, it's pretty cool. Like, cardboard cutouts of children come darting out to see if the car hits them. This way they can do that in private. Yeah.
But it is interesting that at some point the government is going to have to have people who are developers and coders actually getting into the data and understanding some portion of this, right? At the very least, they'll have to create, like, the driver's license test for a self-driving car. I mean, that will exist.
I mean, I believe in it in some sense. I believe in the sense that, for most technological things that humans can conceive of that aren't physically impossible, if humans are around, they'll happen at some point. I think humans are infinitely creative, infinitely ingenious, et cetera.
Sure. I think the timelines that people are generally talking about for general AI happening are very overblown. I could rant about this for a while. There are a lot of things that are wrong about the common arguments.
One of which is, people say that if Moore's Law keeps going, then we'll have all this exponential compute, and that's going to unlock it. Yeah, it's going to mean it's only a matter of time before we produce this general AI. Not to mention quantum computing. Yeah, exactly, that gets invoked too.
So yeah, I mean, Moore's Law is going to be dead. Yeah, exactly, it's going to be dead. And then quantum computing is very far away, despite recent press releases, et cetera. So I think that leg of the argument is not actually that strong.
And then I also think it's not even clear that if you have infinite compute, you'll be able to produce general AI. I think that's very unclear. Infinite compute helps narrow AI, because you're playing out every scenario, and Go, the stone-based game, has many more permutations than poker, many more permutations than chess, which is a finite data set. So yeah, more compute power on those things certainly gets you there quicker, definitely, or even just throwing agents into a random video game like OpenAI does. Sure, definitely. But general AI, taking somebody who's mastered chess and then saying, master Go, and then master, you know, Fortnite, or master impressionist painting, it's different. It's very different. So one of the arguments goes that once you have enough compute, you can basically create artificial life by simulating evolution.
So that's one of the more in-vogue arguments. Okay. Let me just see if I understand: you have so much compute power that you can, say, start with this tiny piece of bacteria, whatever, then grow it and grow it into an entire evolutionary system.
Yes. To the point at which there is a human-like species, and then grow that human-like species, in whatever number of scenarios, with a big brain, into whatever comes after us. Yeah. I mean, even if you just grow the human-like species that's as intelligent as us, that's kind of good. That would be general AI as well.
Yeah, exactly. Because most people would define general AI as not even being smarter than us, but being as smart. Yeah, exactly.
Huh. So that's an interesting approach. Yeah. But somebody would have to code that and program that and build the systems to do that. It's not just going to magically happen, right? Yeah, exactly. And it's very unclear if that's even possible. But that's an argument. I honestly think that's the most plausible argument, but it's very much science fiction in the sense that we're not close to being able to even validate the hypothesis. So yeah, I don't believe in general AI anytime soon, despite what the pundits will say.
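For what it's worth, the "simulate evolution" argument can be made concrete with a toy sketch. This is purely illustrative, not anything from the conversation: a tiny genetic algorithm that evolves random bit strings toward a fixed target "environment" by selection and mutation. Every name and parameter here is made up for the example, and simulating anything like real evolution would of course be unimaginably larger.

```python
import random

# Toy evolution loop: selection keeps the fitter half of the population,
# reproduction copies parents with random bit-flip mutations.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    """How many bits of the genome match the target environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=300, mutation_rate=0.05, seed=7):
    rng = random.Random(seed)
    population = [
        [rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)
    ]
    for _ in range(generations):
        # Selection: the fitter half survive and become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction: each parent yields one mutated child.
        children = [
            [(1 - g) if rng.random() < mutation_rate else g for g in parent]
            for parent in parents
        ]
        population = parents + children
    return max(population, key=fitness)
```

Because the parents survive unchanged each generation, the best fitness never decreases, and with enough generations the population climbs toward the target. That's the whole argument in miniature; the open question raised above is whether any amount of compute lets this scale to something intelligent.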
Yeah, what's the next big mind-blowing AI project, narrow AI project? Let's say the one that most people are considering right now after self-driving, which is the one that's captured our attention.
Well, I think there are a bunch of really boring ones. Okay. So the boring ones are like, hey, can you automate form processing really well? Like paper form processing. Oh my god, that is boring. It's super boring, but it'll be big. Just like, what, like I have to fill out a form to get my driver's license and, you know, use AI? Or... yeah, well, anyway, we'll move on.
But yeah, there are a lot of boring examples. There's replying to email. That's kind of the dope one in Gmail now. Have you seen that?
Yeah, it's great. It's pretty demented how fast it's getting good and it's personalized, right?
Uh, yeah, I would believe it's personalized. I think it's personalized too, because it's starting to use my lingo.
Yeah.
So I'm like, I would never use that. And then I'm finding myself like, wait a second, it's finishing the sentence in my voice. And then you're like, wait a second, my voice is pretty narrow. I'm a human. Yeah. So giving you suggested replies is actually kind of low-hanging fruit.
Yeah, it's pretty... and I would say that's kind of a boring one.
Yeah. But I think there are applications that have pretty large-scale economic impact. Okay. So, for example, all this automated radiology and automated medical imaging work is very impactful. And the core technology is good enough, given enough data, to actually make that possible. So I get my lung scan, because I was a smoker and they're doing lung cancer screening. Now they send those X-rays to India to be reviewed by technicians there who go through them, or even heart-rate monitors worn for 24 hours, because that's the cheapest labor with the highest, you know, ability. But all that data can just be done by a computer, better than a human could ever do it.
Yeah. I mean, in general, globally, there's a huge shortage of doctors. Ridiculous. I believe, yeah, the World Health Organization published something like a 10x shortage of doctors globally. Right. So there is this massive shortage, and if you can fulfill some of this demand with automated systems that are much more scalable, there's a huge amount of value.
And I think it's easy for us in the United States to think, hey, it's not clear what the lift would be, this seems like it'll just automate jobs in the US or whatnot. Yeah. But that's because you already have access, or not everybody, but a lot of people already have access, to the stable infrastructure that is healthcare, right?
Yeah, no, as bad as our health system is here in America, or flawed, probably the better word, it's not like somebody's not going to be taken to an emergency room, right? And there are other places where they're just never going to have access to a doctor, or maybe once a year.
And getting an x-ray might be out of the question because of the cost.
Yeah. Not just the cost of the x-ray, but the cost of actually interpreting the x-ray. Yeah.
So you think x-rays are the big one?
Well, it's all the forms of medical imaging, right? X-rays, ultrasounds, CAT scans. Are you working on that yet? We do work with a bunch of this data. Yeah.
Yeah, exactly. So I think a lot of those systems will be very impactful. I think there are a lot of other boring things that people don't think about. And then I think that, more and more, the market forces are such that the things there's either incredible demand for, or the things that people don't like doing, are going to be the clearest things to work on.
I think education is going to be a huge one. Like adaptive learning, where kids can sit in front of a computer and it starts to learn from looking at their facial expressions when they're frustrated, or on the cusp of being frustrated. I know this sounds really dystopian. But if the computer was watching the child, and the child is frustrated at a certain math problem, it takes them back 20% to an easier math problem, and it can tell from their facial expression that they're enjoying it and that they're feeling confident.
And then when they feel not confident, it can lean into that: hey, I know you're uncomfortable, let me walk you through this again, or, I get the sense that you might want me to work through this again with you. Can you imagine what a Khan Academy with machine learning and adaptive learning and AI could do? Yeah. I mean, any narrow AI project to teach people how to learn.
I haven't heard of a project like that before, but imagine, for literacy, there are still lots of people on the planet who can't read and write. Yeah. No, I think it's a clear application. I mean, it's fun to talk about healthcare, but there are separate problems where education systems are pretty broken and are not ripe places for a lot of economic opportunity. But a system like you're talking about is really... I mean, it would be easy to make, right? It's only a matter of time.
Yeah, yeah. I mean, I think it'd be easier than self-driving or similar challenges. It's very easy. So the part that's easier than self-driving, the challenge of figuring out when someone is frustrated based on a photo of their face, is very easy. That's done. Yeah. But combining that with some adaptive learning technology... well, then it's just about understanding what's a hard question and what's an easy question. Yeah, that should be super easy.
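The glue being described here really is simple to sketch. A minimal, purely hypothetical version of the loop, where a boolean frustration signal (standing in for a real facial-expression classifier) steps the difficulty back about 20%, the way described above, and a confident learner gets nudged forward:

```python
# Toy adaptive-learning loop: retreat ~20% on frustration, push gently
# otherwise. Function names and step sizes are illustrative, not a real API.

def next_difficulty(current, frustrated, step_back=0.2, step_up=0.05):
    """Return the next problem difficulty on a 0..1 scale."""
    if frustrated:
        return max(0.0, current * (1.0 - step_back))  # take them back ~20%
    return min(1.0, current + step_up)                # push a little further

def run_session(signals, start=0.5):
    """signals: one 'was the learner frustrated?' boolean per problem."""
    difficulty = start
    history = [difficulty]
    for frustrated in signals:
        difficulty = next_difficulty(difficulty, frustrated)
        history.append(difficulty)
    return history
```

So a session where the learner breezes through two problems and then hits a wall ramps 0.5 → 0.55 → 0.6, then drops back to 0.48. The hard part, as noted, isn't this loop; it's rating which questions are hard and reading frustration reliably.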
Yeah. Nobody's put that glue together. Isn't it amazing... I would bet you it probably exists somewhere. We've got to find that out. If somebody on the pod is listening and there's an adaptive learning system using AI and facial recognition to kind of understand where the student's at... See, that's where I think technology is really interesting, when you combine two things.
Like, for every dystopian, terrifying thing about facial recognition you can think of, there are 20 you could think of that would actually be amazing. Like, if somebody was walking on the Golden Gate Bridge and you knew they were despondent and considering suicide, you could literally know that a person walking across the bridge was doing so with the potential of jumping off. Yeah. I mean, there's a lot of boring machine learning that's really great, right?
Like Apple Watches, for example, or a lot of these wearables. Apple Watches are set up with an algorithm that basically looks at the accelerometer and how fast you're moving, et cetera. Yeah. And it can detect when you have a hard fall. Yeah. So if you fall with an Apple Watch, it'll detect that you've had a hard fall, and if you don't respond within some time period, it'll actually call an ambulance for you. Incredible.
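The real fall-detection algorithm is proprietary and far more sophisticated, but the idea described above can be sketched: look for a large impact spike in accelerometer magnitude followed by near-stillness. Everything here, the thresholds especially, is made up for illustration.

```python
import math

def magnitude(sample):
    """Total acceleration magnitude (in g) from an (x, y, z) reading."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_hard_fall(samples, spike_g=3.0, still_g=1.1, still_window=5):
    """Return True if an impact spike is followed by near-stillness.

    samples: list of (x, y, z) accelerometer readings, one per tick.
    spike_g: impact threshold; still_g/still_window define "lying still".
    All thresholds are illustrative, not the real product's values.
    """
    for i, s in enumerate(samples):
        if magnitude(s) >= spike_g:
            after = samples[i + 1 : i + 1 + still_window]
            if len(after) == still_window and all(
                magnitude(a) <= still_g for a in after
            ):
                return True
    return False

# Normal walking: gentle oscillation around 1 g, no spike.
walking = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.1), (0.0, 0.1, 0.9)] * 4
# A fall: a ~4 g impact spike, then the wearer lies still near 1 g.
fall = walking[:3] + [(3.5, 1.0, 1.5)] + [(0.0, 0.0, 1.0)] * 5
```

The "don't respond within some time period, call an ambulance" step would then just be a countdown timer armed whenever this returns true.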
Which is, like, super... now that's science fiction. Yeah, it's crazy. It's actually really crazy, right? And it exists today. It exists today. It's amazing. Buying an Apple Watch could save your life, right? Yeah. But there are a lot of boring uses of machine learning like that, and this is why, really, people should view it as this crazy, incredible thing.
I think a lot of people do, but adding to the fire, it enables all of these things that you couldn't have done before. And so it enhances the capability set of the technology that we've built pretty considerably.
Hmm, where will all this be in 20 years? When you're 42, what do you think the world's going to look like in AI and machine learning, if you had to describe it? Obviously your company will be a publicly traded trillion-dollar company, putting that aside.
You'll be the richest man on the planet, but putting that aside, what will the world of machine learning and AI look like? We'll wake up in the morning, and AI will interact with us how? It's a good question.
I mean, one thing is, it's very hard to conceive of, right? Because big ideas aren't big ideas from day one.
There are these slow ideas that snowball and snowball and snowball, and then eventually it's this huge thing that everybody thinks has kind of changed the world. So it's hard to conceive of how these things happen today. But ultimately, I think the dream end system is... I mean, a lot of people like this idea of the assistant, right? Like a machine learning assistant.
But ideally it's some system that you interface with through voice or through typing, basically through language. Yeah. And you're able to dictate questions or things that you want in the world, et cetera, and it's able to understand that, reason through it, and then understand what the result is.
And a lot of the AI we're building today, a lot of the machine learning we're building today, is core perception technology, just understanding what's happening in general, like knowing that there's a car there or knowing that there's a sofa there or whatnot. A lot of that will be a base layer of intelligence that powers the next layer.
And then there will be a base layer of reasoning or whatnot that powers the next layer, and so on. Yeah. It's very interesting. Do you believe in this brain interface stuff, Neuralink? Well, I don't have any...
I don't have any deep insight on it. I know one of the Neuralink founders; I think they're working on cool stuff. I don't know. With all these things, the question is, what's the killer app, right? What are you actually going to want to use that thing for, what's feasible, and where's the intersection of those things?
Yeah, ordering food without anything. You just think what you want, and a burger shows up. So Uber Eats plus Neuralink means you and I could be looking at each other, and I'd be like, cheeseburger, and you'd be like, cheeseburger, and I'd be like, bacon and blue cheese, and you'd be like, cheddar and turkey bacon, and then in 15 minutes or so it shows up.
But is that that much better than Uber Eats? Yeah, it'd be incredible, because you would literally not have to spend the 60 seconds to think about ordering it or pressing the button.
Of course, it's not much different, but it's going to be kind of mind-blowing. It is kind of mind-blowing today that you can just take out your phone and order with like three clicks and get food. It used to be, I don't know, five minutes on the phone, making four phone calls, seeing who's open. Yeah, I mean, by the way, this was the whole thing with chatbots.
Right? When there was the chatbot craze, everybody thought, oh, this is great, it's so much easier.
But then if you think about the actual number of clicks you have to make: you click like 60 times to get something done with a chatbot, versus like three or four times to get your hamburger from McDonald's or whatnot. What was the world like before the internet?
You don't know. Well, I can read books. You don't remember a time before the internet. When you were seven years old, it would have been 2005. You would have been on a broadband connection at home. Did you ever use a dial-up modem?
I did use a dial-up modem. Really? The dial-up tone. Yeah, really. I remember that. Yeah. Keep in mind, I'm from New Mexico, so there was, yeah, not a lot of broadband out there. A backcountry part of the world. Yeah. Well, I mean, are your parents still there? Yeah, my parents are still there. Are they still working there? My father's retired, my mom's still working. Wow. Yeah. Some Breaking Bad country, right? Yeah, I'm actually watching Breaking Bad for the first time now. Oh, really? What season are you on? Season five. Yeah. It gets better and better. The first season is so slow. I skipped the first two seasons, but yeah. It gets crazy. And I just watched El Camino. They made a movie that takes place after the end of Breaking Bad, and I actually enjoyed it very much. It's definitely top 10.
Have you watched The Sopranos yet? That ended before you were born, I think. I've watched a good chunk of it. Here's what you do. This Christmas break, you're going to be off. Take a break, is my advice to you as a 22-year-old. Actually take a break over the holidays and binge-watch The Sopranos. I mean, it was one of the first shows that had a lot of plot lines going at once. Not as many as Game of Thrones introduced, or some of these really high-density ones.
At some point, television writers realized if they made it more dense and harder to follow, you would actually get more out of it. And because DVDs existed, people could go back and watch the previous seasons on Netflix or buy the DVDs. So it was in the best interest of the people making the TV shows to make them more dense, to create more characters, to create more plot lines, and make it more complex, because that drove more DVD sales. Think about that as a system. The DVD was such a popular format and was so profitable that it impacted the art. Because before that, they said every episode needs to stand on its own. If you've never watched an episode of The Brady Bunch before this episode of The Brady Bunch, you don't need any prior knowledge. It was almost like Groundhog Day: you're waking up every day and you don't even know there's ever been another episode of Gilligan's Island, because they're all just singular pieces of work.
But The Sopranos, and another TV show called The Shield, were the first to make them very intense, with lots of characters, lots of themes, and then story arcs that would go over seasons. So you'd have multiple arcs over multiple seasons. And of course The Wire is considered the king of the genre. I've only gotten five episodes into The Wire, and I stopped because my wife wants to watch it with me. Okay.
We'll end here with our favorite new game show. That's the game show where we pull up your tweets and we say: like, retweet, or block. You're saying it; well, the audience is going to do it too. Here we go. We're going to pull up your first tweet. Now you're querying your mind going, what tweets have I done? Here's your first tweet: "Many folks believe there's a smooth continuum between good people and great people. In my experience, there's a huge band gap between good and great." What you're referring to here is like a band, like an electron band. Yeah. Yeah, the electrons, there are these gaps they can't cross. Exactly. "Great people are clever, determined, fight hard through toil and hardship, and are strategic. It's obvious when somebody's great." And the counter to this is, hey, it is obvious as well when they're just good. So the difference between good and great is exponential, in your mind. I would believe that. Yeah. To me, this is a like and a retweet. Woo! I like it. You get both.
Alright, here we go. Now, here's one that people said might have been a little callous. No, I'm joking. How to build something insanely great, according to Alexandr Wang. You can follow him, he's alexandr_wang. No "e", it's just a-n-d-r, underscore wang. One: build something you care about. Two: find users, listen to them. Three: improve it every single day. This is the one people leave out. Four: find people who inspire you and convince them to work with you. Ah, yes. Five: repeat two through four for years. Correct. Just five simple steps. Yeah, the devil's in the details there. It's really hard to improve every single day, isn't it? Super hard. Super hard. You get more people, it gets harder. Yeah. I mean, this is what somebody told me, so this is secondhand; their kung fu teacher told them this or something: every day you're either getting better or you're getting worse. There's no sideways, no staying the same. Yeah. And so your product is either getting better or getting worse. Right. Because if you're not improving it, it's degrading, a competitor is improving, and your users are getting used to this average product.
Okay, here we go. "Watching YC alumni demo days, if you thought..." One: none of these companies are as good as mine. That's not what it says. I'm joking, I'm joking. I'm like, what am I getting into here?
Okay, what it says: one, many ideas derivative of the hot financings of today, DoorDash, Compass, Checkr, Hims, etc. Two, lots of duplicates. Three, 175 is greater than the saturation point for startup ideas, in all likelihood. Four, more than one of the unhyped ideas will be the big company. So what you're saying here is, being a follower never becomes a big business, and maybe 175 is just too big. It's too many.
That's what, yeah, that's what I was saying. Did you go to YC? We did YC, yeah. Okay. So when you went, how many people were there, and what do you think about this ginormous class size? A hundred companies. Even then there were duplicates, there were people working on similar stuff. What do they do when there are people working on competitive ideas in the same class? It's one thing class to class; that creates enough tension around loyalty, and now you're on the cap table of two competing companies. But in the same class?
I think they're laissez-faire about it. I think they just let it go. They give a lot of love, they give everybody love, and they're like, go ahead. Yeah. Fight it out. Yeah. Interesting. What's the right number? I don't know, 50? I think a hundred is fine. I mean, again, you're black swan farming. So, you know, it takes as many as it takes.
Yeah. That's what people don't understand. Systems like Y Combinator, or just Silicon Valley in general, seem broken to a layperson who's not part of the system, because they look at it and say: there are too many failures here, and too many derivative ideas, and these people seem unqualified, and this company got too much money. What they don't realize is that the chaos we talked about at the beginning of the show can lead to people being given permission to try something outlandish that then, in fact, changes the world in a way nobody could have predicted. Which is the definition of a black swan: you could not have seen it coming until you've seen it.
Yeah. Because up until the point of the black swan, nobody ever believed there was anything other than white swans. Yeah, exactly. What a great book. Really good. Did you read Antifragile? Yeah. Great. That's my favorite.
Yeah. Building companies and systems that do better in chaos. Yeah. What a tremendous idea when you think about it: the world's getting more chaotic and you're doing better. Trump. Captain Chaos. Yeah. No... Taleb is really good. Oh, I thought you were saying Trump. No, no, no, no. I'm not. No opinion on Trump. But...
No, what I love about Taleb. Do you follow him on Twitter? He is brutal and awesome. He's crazy on Twitter. Yeah, yeah. He gets wild. He's like, Steven Pinker is an absolute moron. Yeah. I'm like, yeah, Steven Pinker's a moron. Really? Steven Pinker's brilliant. What are you talking about? I do think directness is good, but black-and-whiteness is bad. So, you know, I'm not totally going to endorse his Twitter activity.
Yeah. But I do think people need to be okay with disagreement. Yeah. It's so weird that you say that as the 22-year-old, because this whole generation that you're a part of, they're literally going to college — well, that's why you left after a year or thereabouts — and they're protesting when somebody is brought to campus who they don't agree with. Imagine that. You don't agree with a person. They're coming. You're diametrically opposed to their opinion. And then you protest having them there. It's like, well, you could go to the lecture and learn about the person you disagree with. Understanding the enemy, or the opposing side, or the other side of the argument, makes you so much richer. And they're like, no, you're platforming them.
Platforming. Since when does talking to somebody mean you're platforming them? People are like, you talked to Steve Bannon, you're platforming him. What does that even mean? Yeah, the guy put Trump in office, he ran Breitbart. These things are having a major impact on the world. You can't have a conversation with him even if you think he's evil? I think there's a big problem that's like a derivative of too much content being out there. Basically, anyone who has any belief can read enough content that reinforces that belief. All these people, for example, believe they have internally a very clear picture and a very high-confidence perspective on who these people are, what they believe, etc.
And by the way, this is just how human brains are wired. Human brains are wired to be like, oh, you have a couple of data points. Okay, you have to believe that. Yeah, tribalistic. Right. Because before now, it was actually difficult to get data points that all told consistent narratives, etc.
But now, because there's just so much content out there, you can read all of it and develop these very strong opinions. It's hard to think of other people as nuanced human beings, versus these very one-note figures.
Which is mind-boggling, since anybody need only look at their own life, whether it's 22 years on the planet or 48 or 98, and realize how many times they've changed their mind about an issue. You need only ask: has your favorite ice cream been the same your whole life? What's your favorite ice cream right now?
For me, it actually has been the same. Coffee ice cream? We just love that. No, no, no. Mint chocolate chip. Yeah. So you haven't had butter pecan in a while. You need to just try butter pecan one time. That might change everything for you. Well, yeah.
Well, I don't know. I mean, mint chocolate chip is kind of close to my heart, close to my identity. Have you been to Salt & Straw yet? Salt & Straw. Have you had the mint chocolate chip there? The mint is so fresh, it feels like you're chewing on mint leaves.