This isn't your average business podcast, and he's not your average host. This is The James Altucher Show. Today on The James Altucher Show: I was quoted as saying, and have been saying since then, your privacy is over. From 1995 on, just get used to it.
Because everyone, every service that you use, is highly incented to figure out everything else about your life, at all times. Now yes, the government has always known, for the last 50 years, when you bought an airplane ticket and when you flew somewhere. If they really wanted to know, the FBI could know in 10 minutes; it was not a big deal. So we granted the government that ability anyway. But now we've got private companies that for the last 20-plus years have known everything about you.
I've got Kevin Surace with me, one of, I will say, Kevin, one of the world's experts on AI, and we're going to discuss what to expect and what's going to surprise us in artificial intelligence over the next year, and what you should be thinking about in terms of artificial intelligence. And Kevin, my first question, and I do want to find out more about your background and all that later, is this.
Is China just going to destroy us because they have no ethical qualms about taking this to the edge? Well, look, China has some amazing programs around AI. They figured out, long before our government did, I think, that they could get a lead in this space. And China has led in other things. Solar panels, of course: they just decided they were going to own it. LED lighting: they decided they were going to own it. And when the government in China puts its head to something, they own it. They're going to own it, lock, stock, and barrel.
But look at genomics, for instance, right? So this is an area where we have ethical issues, correctly or incorrectly, but there's also science that needs to be innovated. And once you innovate that science, it's going to have enormous ramifications on the entire universe of healthcare. And China just doesn't give a shit. So they'll do whatever. They'll clone 5 million babies and then kill 4,900,000 of them and just keep the smartest. You know, I'm just making this up.
Yeah, you're right. Look, I have a view here. Once CRISPR was out, right? Once people understood it. And this was, you know, it's widely published how one does this, right? But once gene editing, for example, was out of the bag, it is a given that some countries or some labs somewhere will do things that we are not going to do in the United States, either ethically, morally, legally, whatever the case is, right? They're going to engineer babies. They're going to mix DNA from different species. They're going to do things just to figure out what happens. They are. In China, they're doing it.
Well, yes, China seems willing to do it. I am less worried about China. I mean, China's going to do some interesting things and we're going to learn from that, as we already have, right? They've engineered babies. They're very clear about that. But what I worry about are actually really bad actors. China is sort of a bad actor, but in the end, you know, they need our economy and they don't want to blow everything up, right? But you've got people like North Korea and Iran that are truly bad actors. Now, they may have, and probably do have, the capability in some labs to do some things with CRISPR that are really dangerous, like engineering, you know, a bug that you can never kill or that reproduces at an immense rate. I mean, they could do some really, really bad things.
And that's setting aside human genomics, right? So I'd be more concerned about the true bad actors, including Al Qaeda, ISIS, blah, blah, blah, as these technologies become more available. Look, when you look at nuclear weapons, we and just a few countries kind of kept nuclear weapons under wraps as a secret, you know, in terms of how to really make them, for more than 50 years. That's a hell of a problem. That's because the resources are rare too. Like uranium, you know, that kind of uranium is rare. It's rare, but still, you know, it turned out it was hard to make one as a third-world country until recently, until the last decade or two, when enough of the secrets got out.
In other words, you can't keep anything a secret forever. That's, right, that's the learning here. And you can't keep CRISPR a secret. It's been around for a decade. People know how to do it. Every lab's experimenting with it. They're doing gene editing. And, you know, the world is going to make some bad things, including some really gross, crazy things. And look, China and others are experimenting with, you know, could we make superhumans? You know, which genes do we edit to make superhumans?
And, you know, what's going to happen to the gene pool when you do that? You know, in the past, when we played these games, really bad things happened. We played it with animals, and bad things happened. You get a super animal, but it dies in a week, right? Something's wrong. But here's where AI comes in.
So you have, you know, the mapping of the human genome, and we understand, you know, single-gene mutations and how CRISPR, or enhanced CRISPR, can solve these single-gene diseases, like Tay-Sachs disease and so on.
But with deep learning and AI, you're going to be able to map all the permutations, multiple gene, you know, mutations, which ones cause intelligence, which ones cause all these weird diseases, whatever. And that's where AI kind of comes in, and again, China has no qualms about research at all.
Look, it's a big data problem, and any big data problem can be tackled, provided it's not noisy. And I think DNA is not. I think the genes are not. I think there's some noise, but I think it's very consistent, at least across the human gene pool and the human genome.
And I think that people are already using AI to figure out more and more, you know, which buttons do we push to get a super-intelligent, very strong, tall human, right? Or, you know, the perfect-looking human, whatever that means today, right? And people say, I need that one. Yeah, yeah, we all need that. Or a human that lives 200 years, right? Could we do that? Could we turn off the, you know, the telomere thing? Well, maybe. I don't know.
And is that good? Should humans live 200 years? Well, here's what's going to happen. I'll give you one prediction. Like every other technology, what's going to happen is the really, really good stuff that is useful to you will be available to those with money. And it's going to further separate those with money from those who don't have it, right? They will get access to things that they shouldn't have access to.
Now, I sort of agree. Yuval Harari, who wrote Sapiens and 21 Lessons for the 21st Century, feels the same way. But my feeling is, you know, it's just like computers. When you have an industry that grows exponentially, all right, yes, rich people had a supercomputer for a year or two years. But then those became smartphones three years later. Technology improved so fast. And then everybody gets it.
Yeah. Look, that's certainly been true with the rapid adoption of technology, which doesn't take decades anymore. Even the old rule was 11 years, right? It takes about 11 minutes now. However, things like CRISPR, very specific gene editing, have to be done in a specific kind of lab. While the cost may come down, just like gene sequencing, where the cost has come down and is heading to zero, there are just certain things that will be available only to those who have wealth, I suspect, for a period of time.
Because it involves a lot of steps. And it may involve, you know, labs overseas, can't do it in the US, blah, blah, blah, right? And it'll be interesting to see how it plays out. We're going to learn a lot from the experiments in China. I have a feeling not all of it will be kept under wraps, right? And by the way, we can go way back in history, right? You know, back even to Nazi Germany, which was doing this. It wasn't at this level because they didn't have CRISPR, but of course they were trying to make superhumans through very, very simple means. Nevertheless, they had multiple experiments doing it. And this has been going on for, you know, almost a hundred years.
So you know, the other thing, not an issue with AI exactly, and I sometimes think people bring up privacy too much, but definitely with advanced facial recognition, the whole privacy thing is just going to be a matter of policy rather than a matter of technology. You know, not only government policy, but Facebook's policy, Google's policy, and so on. And China, having no qualms, they're going to be like the best spies in the world. They're going to know where everybody is at every moment. Absolutely.
Look, in 1995, and you know, I speak around the world a lot on AI and its impact on society, but from 1995 on, basically when the web browser became popular, I was quoted as saying, and have been saying since then, your privacy is over. From 1995 on, just get used to it, because everyone, every service that you use, is highly incented to figure out everything else about your life at all times.
Now yes, the government has always known, for the last 50 years, when you bought an airplane ticket and when you flew somewhere. If they really wanted to know, the FBI could know in 10 minutes; it was not a big deal. So we granted the government that ability anyway. But now we've got private companies that for the last 20-plus years have known everything about you.
And I'll give you an example. And I think you know this, but maybe not all your listeners do. Try this at home sometime if you're married. You've got to be married to do it. Have the person you're married to start to look at something, say, trips to South Africa. They just go over there and look at trips to South Africa, look at a bunch of tour companies, scroll through a bunch of stuff on that. I can guarantee you that within an hour, and maybe within 10 minutes, on Facebook, you, not the person who did the searching but the person related to them, will start to see ads for trips to South Africa.
I can guarantee it because I've seen it over and over and over again. I believe you. Now, here's a question. And maybe I'm just, I forgot what it's called, some kind of bias. I just call it the Honda effect: once I buy a Honda, I start seeing Hondas. Sure. I feel like I'm talking about trips to South Africa with my wife and then I start to see the ads. How likely is it that that's already happening?
It's probably not happening. And that's because talking on your phone, you know, can be monitored, but it's really not being monitored. It's maybe monitored by the government, but it's not being monitored by anyone that is going to do anything to feed you ads.
Now, yes, Alexa listens, but no, they're not feeding that to some massive supercomputer that's analyzing the words and trying to feed you ads at this time. Could they? Sure. But they're not. That's the thing.
The technology's there. It would be sort of trivial with the technology we have. They'd just have to turn it on. I mean, you know, you may not know this, but long before there was Siri, I invented the first personal assistant, digital assistant. Her name was Mary. The project was called Portico.
It was at General Magic. It actually, yeah, you know, later became OnStar Virtual Advisor, owned by General Motors. But it was the first virtual assistant. I have all the original patents, and all those patents got licensed by Apple and others for Siri and other programs many years later. But we had teams of linguists that had to listen to what you were saying, not to sell you ads, but to make the service better, because they would codify what wasn't getting caught by the voice recognition, by the speech recognition, right? That's how you did it.
Now, about a year ago, this all came out about Google, about Apple, and about Amazon having people, banks of people in rooms, listening to what you said. And I just laughed and said, I invented that method. I hired actual linguists to listen to you and then codify. We gave them a language they could code in. So we would expect you to say, read my email.
And someone would say, get me my email. Oh, well, we've got to code in get me my email. The system doesn't know what get me my email is, right? We have to tell the system what it is. That's still done today, right? Because you've got to have some natural language understanding. And sometimes the system got the recognition right, but it doesn't know what to do with that sentence, right? Because it's kind of slang. Give me my email. Where's my email? Well, I think all of those mean read my email, right? So we've got to codify that.
So still there's banks of people in rooms listening to what you say. Turns out they don't really care. They're just doing their job. But again, your privacy was gone in 1995. The day the web browser came out and became popular, it was over. It had to be over. Yeah. And I'm okay with that.
So it's interesting, because you talk about 1995 and you can even go back earlier with, you know, the real beginnings of voice recognition. It feels like until recently, there haven't really been that many huge innovations in AI. Like, you know, Google with their DeepMind and the AlphaGo program: that, for me, seemed like the first real innovation in a long time in taking big data and converting it into actionable activity using AI.
Yeah. You're right. I look at AI as augmented intelligence. AI as artificial intelligence is more or less a marketing term. What we're really talking about is machine learning from large data sets, and finding things that would be hard for a human to grok because the data set is so large.
And it can keep learning as more data comes in, and you can apply those learnings to the new data. An obvious example is facial recognition at Facebook, you know, the first time they turned that on. They were the obvious people to do it because they had billions and billions of pictures of faces and names attached to those faces, mostly correct.
So you didn't have a very noisy database, but you had a huge database. And over some period of time, they could build a neural net that recognizes your face versus my face pretty much every time, better than 95% recognition. That's amazing, actually, when you think about it, right? It's such a huge data problem that a human could not look at billions of faces and call out the names. It's an impossible problem for our brain.
So we look at that and say, well, that seems artificially intelligent. Well, no, it's a very, very, very good data match, right? A very, very deep neural net. And it's going to make great decisions based on every shadow in your face, your glasses, your hair, and sort of everything else. But in the end, guys, it's just math. It's just math. All we're doing is trying to find the highest-scoring thing that we can match to.
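To make that concrete, here is a minimal sketch of what "find the highest-scoring match" can look like, assuming some face network has already turned each photo into an embedding vector; the names, vectors, and threshold are made up for illustration.

```python
# Face "recognition" as pure math: score a new embedding against every known
# embedding and return the best match, or "unknown" if nothing scores well.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical database: name -> embedding produced by some face network.
known_faces = {
    "james": np.array([0.9, 0.1, 0.3]),
    "kevin": np.array([0.2, 0.8, 0.5]),
}

def identify(new_embedding, threshold=0.8):
    scores = {name: cosine(new_embedding, emb) for name, emb in known_faces.items()}
    name, best = max(scores.items(), key=lambda kv: kv[1])
    # A chair's embedding would score poorly against every face and be rejected.
    return name if best >= threshold else "unknown"

print(identify(np.array([0.85, 0.15, 0.35])))   # -> "james"
```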
And by the way, if you take that same recognizer that recognizes faces and put a chair in there, it'll say, Jim, it doesn't know a chair, right? It didn't learn chairs. It learned faces. It only knows faces. So this isn't artificially intelligent, and this gets into where artificial intelligence is going, right? It isn't artificially intelligent like we see in the movies, like Ex Machina, which is an amazing film, or Her, an amazing film. We're so far away from an AI system having a general understanding.
Well, related to that, I agree with you on the facial recognition. It's similar to voice recognition. But then you have something like a game, like Go or chess, where the computer looked at, you know, a million games and figured out the rules, and within a few more hours was already the world champion of chess. And then there's kind of a hidden layer. So there's no hidden layer on the facial recognition; we're using a very standard 30-year-old statistical technique and, you know, matching faces to a data set we know. But with Go and chess, it's sort of doing several things.
It's kind of figuring out the metrics that are important. Like, it doesn't know in advance what metrics are important. It sort of figures it out. It's a reward system, right? It's a reward system. But think about it this way. In my talks I give some examples of a reward system, some games that the people at Unity shared with me, games they built to show how, given no rules other than to win, it will figure out how to play the game better.
And it does this by watching, either watching many games or playing many games, depending on whether it's a computer game or an offline game, right? But either way, think of it this way. If you tomorrow could play, you know, Go or chess or checkers, it doesn't really matter, let's say it's chess, but you could play chess a million times in an hour, and remember every move that took you closer to winning and every move that took you further away from winning, and what all the other circumstances were, if your brain could do that, within an hour you'd be an amazing player, because you could explore the outcome of those million things.
And if you could play a game on a computer, like an Atari game that we used to play or whatever, you know, a simple game where it's Pac-Man or you have to eat the thing, you could very quickly outrun any human in that game simply by playing it about 10,000 times. And a computer could play that in parallel 10,000 times, maybe in 10 minutes or an hour or a few hours. And this is a reward-based system.
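Here is a minimal sketch of that reward loop, assuming a toy one-dimensional "game" instead of chess or Pac-Man: start at position 0, move left or right, and win by reaching position 4. It is essentially tabular Q-learning, and all of the numbers are just illustrative.

```python
# Play the game thousands of times, remember which moves led to the reward,
# and the "policy" that falls out is simply the highest-scoring move per state.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                        # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def best_action(state):
    # Pick the move that has scored best so far, breaking ties randomly.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(10_000):             # "play the game 10,000 times"
    state = 0
    while state != GOAL:
        action = random.choice(ACTIONS) if random.random() < epsilon else best_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0      # reward only for winning
        # Remember how good this move turned out to be.
        target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state

# After enough tries, the learned policy is simply "always move right".
print([best_action(s) for s in range(GOAL)])   # -> [1, 1, 1, 1]
```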
And these are very simple because all you're doing is giving the darn thing a reward when it wins and giving it no reward when it loses and it doesn't want to lose. That's all it knows. So it learns how to win. No different, I'll give you an example of the mouse. We all know about the mouse in the lab and you put him in the maze, right? And first you ring the bell and the mouse just sits there going, you're an idiot, right? But over months and months and months of trial and error, it figures out that when you ring the bell there's food in the upper right corner and it runs the maze, it goes and gets the food, it comes back so the bell can ring again. It learns this.
Now, what's fascinating about that is it tried 10,000 times, but after a while it figured out exactly what route to take to get the food every time. Now put a second mouse in there.
The second mouse watches that first mouse go and get the food every time the bell rings, and thinks the first mouse is artificially intelligent, because clearly it's got a much bigger brain than the second mouse, which doesn't even know what the bell means, let alone where to find the food.
We are the second mouse. The first mouse is no smarter than we are, right? In fact, it's totally dumb, it just had thousands of tries at it and it could remember every try. Well, right now though, particularly with these game examples, there's three mice, there's the initial data set, okay, there's humans looking with wonder and there's now the ability to scale the data set because the AI will play itself to create more data.
With facial recognition, I can't create more variations. Well, actually, this is an interesting thing. I can take a picture of your face, and now with AI, I don't have to see other pictures of you; I can create other configurations that are probably your face and add them to my data set. So I think AI is also being applied to its own data to generate more data to learn from. Absolutely.
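As a rough illustration of that idea, here is a minimal sketch of generating extra training variants from a single image, assuming the photo is just a NumPy array of pixel values; the specific transforms and numbers are placeholders, not anyone's production pipeline.

```python
# Turn one face image into several plausible variants (mirrored, relit, noisy)
# and add them to the training set alongside the original.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, n_variants: int = 5):
    variants = []
    for _ in range(n_variants):
        img = image.astype(float).copy()
        if rng.random() < 0.5:
            img = np.fliplr(img)                     # mirror the face
        img = img * rng.uniform(0.8, 1.2)            # lighting change
        img = img + rng.normal(0, 5, img.shape)      # sensor noise
        variants.append(np.clip(img, 0, 255))
    return variants

face = rng.integers(0, 256, size=(64, 64))           # stand-in for a real photo
training_set = [face] + augment(face)
print(len(training_set), "images from one original")  # -> 6 images from one original
```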
In games, that's true, and it can be done in facial recognition. Everybody's seen what's happening with deepfakes. I think a great use of this technology, which the whole planet is seeing, is the Netflix film, right? Did you see The Irishman? Oh, no, I didn't see it, but they reversed the aging, the de-aging. The de-aging.
Now, it's a little freaky, because you go, that isn't exactly what they looked like when they were 20, because I knew what they looked like when they were 20, and Scorsese wanted them to look the way he wanted them to look, right? The fact of the matter is, aside from the fact that they walk like old men no matter what you do, it is fascinating how good that deepfake technology has gotten. Yeah, particularly how they make the aging disappear throughout the film. Yeah, it's a fact.
You can take them right out, get them into the wheelchair. No, it is fascinating what we're doing. And of course, that can be done with a number of AI methods today. GANs, of course, generative adversarial networks, are a great way to basically make up people that didn't exist at all, simply by comparing the forgery to real things and saying, I still think it's a forgery, and you keep getting better and better. Again, that's a reward system, right?
The system just wants to get better and get a better reward until the inspector is fooled again, right? So it's like an inspector, sort of. So it's really fascinating what we're able to do. That said, most of what we just talked about is fun and fancy. It's interesting, for games, but it didn't change too many people's lives.
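For readers who want to see the forger-and-inspector loop spelled out, here is a minimal GAN sketch on toy one-dimensional data, assuming PyTorch is available; the network sizes, learning rates, and the Gaussian "real data" are arbitrary choices for illustration.

```python
# The generator (forger) tries to produce samples that look like the real data;
# the discriminator (inspector) tries to tell real from fake. Each one's
# improvement is the other's training signal.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: samples from a Gaussian centered at 2.0.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the inspector: real samples labeled 1, forgeries labeled 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the forger: it is rewarded when the inspector calls its output real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("real mean ~2.0, generated mean:", generator(torch.randn(1000, 8)).mean().item())
```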
So what do you think, what's the next thing coming up that either you're worried about or that will change people's lives? So look, a lot is coming, and some people are really worried about AI taking over the world and, you know, becoming our overlords, etc., etc. I don't see that happening, probably, in our lifetime.
The reason is, as always, with these technologies. Remember, in AI in the 60s and 70s, we made huge leaps and then we made none. And then in the 90s, we made some leaps and then we made none. And then finally around 2012, we got neural nets to work, deeper neural nets, right?
And all of a sudden that math was out of the bag, and then we made big leaps again, and then it flattened out. And what happens with all these technologies is we far overestimate the short-term impact, but we underestimate the long-term impact.
So again with AI: two or three years ago, people were saying AI is about to take over the world, neural nets are the thing, you know, it's over. We overestimated the impact within a three-year timeframe, because the impact actually wasn't that much. I mean, it's interesting to you and me technically, and to some of your listeners technically, that, yeah, we can blow away Go now. That's amazing. But it didn't change anyone's life, right? Unless you're a professional Go player. All right.
So let's take one game that they haven't beaten. Like, how would you go about it? Because I get it, a Go board or a chess board is just essentially a vector of attributes, and boom. But how would you take a game like poker, which has a lot of hidden information and this human component? There's no computer that is a world champion poker player.
Well, I think that's beyond their grasp right now. Yeah, that's a complicated issue. So it depends on who you're playing, right? A real poker player. Look at what poker is. Poker is, did you get the right cards?
First of all, that's just luck of the draw, and there is luck of the draw. And that's just part of the game. That's different from Go, and different from chess, and different from checkers, right? It's luck of the draw. So that's the first problem.
The second problem is that the rest of it is human emotion inference, people learning to read other people's faces, and people learning how to hide that. So could we teach a computer to read your face and, over time, guess whether you're bluffing or not?
So the answer is absolutely yes, if you kept the same people around the table and they played 500 games. That's enough data to read all the faces to ultimately then figure out who did win, who was bluffing, who wasn't bluffing, right? Totally doable.
But if you changed up the people and you got someone who did not give the same facial expressions and maybe didn't give any hint at all, who knows their hints are different, right? Maybe they get fussy, maybe they drink more, maybe whatever.
Now once you change the person, we may not be able to win the game again until we watch that person play hundreds of times, then we can win the game. Unless the AI discovers there's subtle micro expressions that can't be controlled. It could be, we don't know, right?
I mean, I don't know, even professional poker players don't know what those are, but professional, really good poker players still lose. They just win more than they lose and I think they do that partially because they get used to watching the tells on this player, right?
The eyes get bigger, they get smaller. And they start to memorize a bunch of things that person did, watch them win or lose a few times, or watch them bluff a few times, and then finally go, I can read their bluff. Because that's the trick to the game. The trick to the game is reading someone's face, or noticing that they drink more when they're bluffing, or that you always order tea when you bluff. I don't know, right?
So yes, AI could do that, but no better than a human can do it, provided it's the same set of people. If you never change the people, it will figure things out. A human might go and scratch his leg, doesn't even know he scratches his leg, right? But he does it every time he's bluffing and holding a hand that's no good. Yeah, we could apply AI there for sure.
So it seems like 90% of the development in AI since the 1980s has just been increased processing speed of computers. So using roughly the same techniques, yes, we'll improve, we'll throw some more layers onto the neural networks, we'll play around with the statistics a little, but it's basically just we can now handle big data. And maybe there might be an innovation if you can use AI to increase your data set in interesting ways like, you know, imagine you have data about self-driving.
Now you start to imagine scenarios where the same car makes a left turn, and you have to kind of simulate that first with AI to create the data set. That seems interesting to me, but what else? That seems incremental. So what would be a big change?
Well, yeah, so the big change that everybody wants to figure out, and there is no breakthrough in sight, because I think we just don't understand it yet, is this. If you really want artificial intelligence, it isn't about processing big data. It's about two things.
General knowledge, which we're not good at. Remember, what we're doing today in AI is very vertical. You know, we're going to teach an AI algorithm to play this game. Okay, that AI algorithm can't recognize a dog that's sitting next to it. It doesn't know if the dog pooped on it; it has no clue. But it does know how to play Go brilliantly, and that's all it does, just like facial recognition systems can't recognize a chair. Why? It's not what they do. They do facial recognition. They're very attuned to that.
So professionals have been working on very, very vertical, business-oriented kinds of things: facial recognition, speech recognition, translation. Driving, driverless vehicles, has turned out to be a huge problem. We've thrown every neural net we've got at it, with everyone at Google, with 15 years of experience on the road now, and the problem is the following.
There may be an unlimited number of unusual events. Because if we only had to drive on a track and there were no humans allowed on the track, we could have done that 30 years ago, virtually, you know, with barely a vision recognition system. A vision recognition system can recognize a white line; we could do that. By the way, John Deere has had driverless tractors for the better part of a decade, because they're in farm fields. They can mark the boundaries of the farm field and the thing just goes up and down as long as no one gets in front of it. It does its thing, right? The tractor does its thing. Totally doable, by the way.
When you put cars on the road, you get an unlimited number of unusual events. You see something in front of you through a vision system or a lidar, and the system doesn't know what it is because it's never seen it before. It doesn't actually know how to respond. Is it a shadow? Is it something that's happening in the rain or the wind? Is it leaves just blowing by, and they're gone? Now we as humans have something unique. We have literally, you know, if you're 50 years old, you've got 50 years of taking information in through your eyes, and you can just recognize: that thing looks solid, I'd better stop. Or it is solid, but it's a bag and I can run over it.
Now think about that, right? You see a bag, but it's kind of blowing in the wind. You know it's not full of rocks, so you just run over it. It's fine. It might get caught under the car, but it's not going to hurt anyone, right? But an AI system goes, I don't know what that is. In fact, it might be a curled-up baby. Stop. Remember when we were kids, we'd look up at the clouds and our parents would say, oh, what does that look like? Oh, it looks like a dinosaur. It looks like this. It looks like that. The thing looks like a dinosaur to the AI system. It might be a dinosaur. It doesn't know, right? And then you've got unknown things.
You know, yes, it's granny crossing the road, but now you've got granny on a unicycle, and I've never seen a unicycle before. What do I do with that? Do I stop? Do I run it over? So the unlimited-events problem is a real problem for AI. It's a super problem for AI. But they have been driving on highways, right? And there haven't been major issues. I think that's fine because very little happens there. It's the streets in the city where you get an unlimited set of crazy events going on, right?
So, I mean, potentially you could just say, okay, it's legal on highways, and then for the last mile we need FedEx or whatever to meet your trucks. But what do you think of Andrew Yang's predictions about, you know, the elimination of millions of truck drivers, because highway driving is solved, highway driving is nearly solved?
Well look, first of all, there's a shortage of long-distance truck drivers in the United States, to the tune of 20 to 30%. That is, we really need 20 to 30% more than we have. So the first thing that will go is closing the gap on that shortage. That's number one. Not everything will be driverless on day one, but I think we will start to close that gap in long-distance driving. And then the truck will stop and a human will get in and do the last X miles, right? I think that's going to happen. And then right after that, just to be competitive, you have to do the rest of it, because long-haul trucking across the country is, let's call it, five or six thousand dollars to take, you know, a 40-foot truck across the country.
It's just what it is today. That's what it costs. Most of that is labor, followed by gasoline, followed by the write-off of the vehicle, right? Those are the three things. And so you've got to get labor out of it to take the $6,000 down to, say, $4,000 or $3,000. That's how you're going to do it. It's the only way to do it. So that labor is going to come out in the next five or six or seven years, and that will displace truck drivers. No question.
But let's talk about, I'm going to jump from that for a second because we're talking about jobs. There is going to be job loss in the world, certainly in the United States, from AI in the next decade. But it's not going to hit us as bad as it's going to hit other countries. And let me tell you my theory on why.
Because over the last 20 years, the U.S., and much of the West, but the U.S. specifically, has shed its lowest-end, mundane tasks to India, China, Mexico, and some other countries. And we did so because we could hire people over there to do the mundane tasks at under $1 an hour, when here they were $10 to $15 an hour. We've had 20 years of shedding as many mundane tasks as could practically be shed.
That includes customer support. Look, you call your bank, I don't care who you call, they answer in India. It's not right or wrong. They do because it's a dollar an hour versus whatever would have been $20 an hour here. That's the fact of the matter. So who's going to get hurt the worst in the first decade? China and India. And maybe Mexico. Why? Because it's factory work and it's customer support. And it's software QA and it's all of these mundane tasks that can be automated with AI at the earliest level.
We're seeing RPA companies like Automation Anywhere automate away customer support now. For 80% of the calls, not 100, but that 80%. It's the 80% that were sent to India. Microsoft got rid of those 20 years ago. So what happened to job loss then? Did we, did the economy suffer? I don't recall it suffering then. That's why I wonder how much of this is fear-mongering. It is fear-mongering. For now, it's fear-mongering. And the reason is that for the next decade, most of the jobs that would be lost to AI are offshore already.
We sent them offshore. And so they're the easiest to automate. And the first things you automate are the ones that are the easiest. Not the most expensive, just the easiest to automate. What about, like, middle management or white-collar-type jobs, or, you know, radiologists, lawyers? Sure, sure, sure. So there will be some things; radiologists, for instance, are first going to be augmented by AI.
That's already happening. You can send a lot of these pictures, these images, to the cloud, and the cloud will do a better analysis than the radiologist. But the FDA doesn't allow a system to diagnose today under any circumstances. So the system can only report to the person, who will review that data and diagnose. I'll give you an example. One of the heart monitor companies, the kind that sends you a little heart monitor at home, it replaced the Holter monitor, took all of their data from 53,000 patients and developed a neural net around it to try to identify specific anomalies in the heartbeat, right, in the EKG.
And they ended up identifying 12 different ones, and it ended up being more accurate than 12 of the best cardiologists in a room arguing over whether that person has this arrhythmia or not, right? The AI is already better than they are. However, the FDA will never allow the AI to diagnose directly to you, not in the next decade. It's going to go to the cardiologist, and the cardiologist will look at it and decide if he or she agrees with it, and then give you your diagnosis. That's the FDA's stand on this right now, for a lot of reasons, including that the doctors have said, you know, I'm not going to be replaced by some artificially intelligent thing, but it's fine if it augments my work, because I don't have any time during the day anyway.
And they haven't read a peer-reviewed paper in God knows how many years, right? So you want your doctor augmented by AI. You want to take your symptoms, put them in a computer, and there are already these systems now. When they put them in a computer, the computer comes back and says, run these five tests, three of which they might not have thought of. And they turn to you and say, let's run these five tests. That's fine. The computer augmented their work. Are those doctors going to go away in 10, 20 years? Are lawyers going to go away? Well, lawyers are already doing NDAs with AI now. Why do I want to review an NDA? It's the most mundane, stupid thing. You're paying me $350 to review an NDA; the machine can do it for a dollar. They don't want to do it anyway. So the mundane tasks are getting done.
Lastly, this country, right or wrong, and I'm not making a political statement here, is at pretty much full employment. And I know there are arguments about people working two jobs, and there are lousy jobs and this and that. But all up, we're at 3.5% unemployment, which means there's more demand, and in fact there are more job openings out there, in a variety of fields, than there have ever been. Right.
And the reality is, if you took 100 people, would you say 97.5% of them deserve to be employed? That's part of the problem right now. Right. That's exactly right. But they are employed, and that's because we've really got full employment. We've got the best employment picture this country has had, essentially since record-keeping began, right? And again, it's not even a political statement. It just is.
And so that means that as those truck jobs go away, as you lose a million truckers, yes, they may not be driving a truck long distance, but they may be driving more of them locally because more of them are coming in, right? It lowers the cost of transportation across the country. Do you know what that does? Lower consumer prices. Do you know what that does? Spur demand. You know what that does? Improve the economy.
Whenever you lower cost, the economy goes up, more money circulates around, and there are more jobs. Now, their job might be in a factory, their job might be in local deliveries, their job might be in something else. I don't know, it might not be driving a long-distance truck. Just as we used to have many more people in agriculture: 90% of the country was in ag, and today it's 1%. And yet that 1% feeds the entire country and then some. Why? Well, because we've got machines. And employment didn't go down, it's gone up. Why? Because it brought the cost of food way down. How is it that you can go to the store and buy an ear of corn for 20 cents? 20 cents.
There's got to be 20 cents of water in that corn. How do you do it? Well, you do it because there are a lot of machines and a lot of yield. We've learned how to get better yields from the crops. Everything has gotten better, right? So I think that's going to happen here too. I'm not worried about it, for the next 10 to 20 years.
So I feel like there are sort of three conclusions here. First, we're screwed, because North Korea is going to use AI, big data, and CRISPR to make the worst pandemic in the world, and there's no way to really avoid that. How does North Korea get the technical resources to do that? How do they get educated to do that? You know, the problem...
I don't want to say the problem is, but look, much of that work was not done by governments; it was done by academia. And they published their peer-reviewed papers, and they published exactly how to repeat it, because they want their experiments repeated. That's part of the whole goal of scientists, right? Please repeat my experiment to validate that it works for you in your lab if you follow all these steps. The steps are out there. And that's the problem. I don't want to say it's a problem. One of the wonderful things about academic research is that it's shared worldwide. One of the negative things about academic research is that it's shared worldwide. So everyone who has a reasonable lab can execute what the academics have done. No question. So that's on the bad side.
The second conclusion is more neutral, which is, as we were saying and as we've even discussed before, over the past 30, 35 years maybe there have been incremental improvements in AI, but the big advantage, it seems, has been that computers have gotten a lot faster. So whatever analysis you were doing on big data before is 20 million times faster than it was 20 years ago.
Yeah, 20 million times faster and better, and we're finding more things. We're finding more connections between the data and what's happening in the future. I mean, look, as you know, at Appvance we're using AI to automatically test software. And it doesn't completely eliminate the humans, but it changes their task from "I have to test or write scripts" to "I'm going to let the machine find the bugs for me." And frankly, the machine is way better at finding bugs than 200 humans are. No question.
But are you finding now that the companies that are your clients aren't firing QA people, they just have time to make more applications and bring more profits to the company? It turns out that's exactly right. They're not firing the QA people. What they're doing instead is saying, now that we get this barrage of bugs, let's prioritize them, let's work on production, let's work on developing more test data to put into the system, right? And by the way, let's increase the coverage.
So I'll give you an example, very, very typical. We'll have a client of Appvance that says, I've got 100 people in QA, let's say, on this big application. And we've been doing a release every four weeks; we now want to make it once a day. Four weeks to once a day. And we want to go from 20% code coverage to 100% code coverage.
Okay. Let's do the multiplication. So four weeks to one day is, call it, a 20x improvement in productivity you would need, right? Really you've got to go 22x, right? There are about 22 workdays in a month.
So let's call it a 20x improvement in productivity to do that with the same team, if you just left the team as is. Well, how are you going to make them 20 times more productive? But actually, it's more than that, because now I want to go from 20% code coverage to 100%.
So it's 20 times 5 on top of that. I now have to be 100 times more productive as a team to meet management's goal of four weeks down to one day and 20% coverage up to 100% coverage. I've got to be 100 times more productive. So either I'm going from 100 people to 10,000 people, that's one way to do it, I guess, right?
Or I'd better get AI to augment my 100-person team to make each person worth 100 people. And that's what we're doing. We're using AI to augment what they're doing, to make each person 100 times more productive than they were before. And that gets used up by shortening the cycle time and increasing the coverage, that is, finding more bugs. And so you actually keep the same size team to meet that. Right.
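The back-of-the-envelope math in that example, written out (the specific numbers, 22 workdays and 20% to 100% coverage, are just the ones from the conversation):

```python
# Release cadence: every four weeks (~22 workdays) down to every workday.
release_speedup = 22 / 1            # ~22x more releases to test

# Coverage: from 20% of the code exercised to 100%.
coverage_increase = 1.00 / 0.20     # 5x more code exercised per release

required_productivity = release_speedup * coverage_increase
print(f"Each QA engineer must become ~{required_productivity:.0f}x more productive")
# -> Each QA engineer must become ~110x more productive (roughly the "100x" above)
```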
So this leads to the third conclusion, which is that all the theories that, quote unquote, this time things are different because middle-class jobs are being outsourced to AI, are overblown, because we've been through this before historically, many times.
Yeah, look, there is a time in the future. And I'm going to say 100 years, right? It's further out than everybody thinks. Where virtually every job we could possibly imagine probably can be done better by a machine, including jobs that require a high EQ, right? And in AI today, we're not talking about high EQ. We've got essentially high IQ down one little pathway.
Again, processing big data and making judgment calls as new data comes in great. They're good at that, but that's about it. So we're a very, very long way from having empathy, real empathy, not programmed empathy, but real empathy.
In fact, all the people working in the labs on this, and even in academia, when you talk to the scientists about real empathy, they really look at you and go, you're kind of crazy, right? We don't know how humans have empathy. We don't even know why. You know, I worked on these types of problems when I was in graduate school for computer science and AI, and nothing has changed. Nothing has changed.
We're basically where we were in 1989. Right. We have no idea how to have empathy other than to program it, so that when you ask Alexa to marry you, she says, oh, I'm sorry, I can't do that, I'm a disembodied, you know, whatever. Right? I mean, it's cute, but that's just a programmatic response.
We had that at General Magic with Portico in the 90s, which was the first virtual assistant, with great programmatic responses that made people laugh. But after three or four of them, you'd realize it would recycle, right? They'd cycle back around and it was the same three or four that were programmed in. It's not true empathy.
So how do we get empathy? Well, we don't even understand why humans have empathy. Humans have empathy because at some point in our history we had to survive, and it took empathy to survive somehow, right? Probably to keep the group of people around you, and having that group gave you a higher chance of survival, and we're the offshoot of that, right?
I wonder if you can pose things like this, though, as a big data problem. Like, let's say you have a million transcripts between therapists and their patients, and you just pattern-match. Now I go into an AI therapist and I ask a question that's been asked before, or a similar question has been asked before, and here's the therapist's response.
Or here are a couple of therapist responses, and I'm allowed to respond to any of them. And that could be, again, it's not real empathy, it's again what we've been talking about, just pattern matching, but that's a fake solution to a real problem.
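What that pattern-matching would amount to, in a minimal sketch: retrieve the most similar past question and replay that therapist's response. This assumes scikit-learn for the text similarity, and the tiny made-up corpus stands in for the million transcripts.

```python
# Matching, not empathy: find the nearest past question and return its response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (patient question, therapist response) pairs.
corpus = [
    ("I can't sleep because I keep worrying about work",
     "Let's talk about what happens in those moments before bed."),
    ("I feel like nobody listens to me",
     "That sounds lonely. Can you tell me about a time you felt heard?"),
]

questions = [q for q, _ in corpus]
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def respond(new_question: str) -> str:
    sims = cosine_similarity(vectorizer.transform([new_question]), question_vectors)
    return corpus[sims.argmax()][1]

print(respond("I feel like no one really listens to me anymore"))
```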
All you would have done is program a psychologist, right? Is that someone you actually want to live with? But true empathy is when something happens in your life, James, and your partner looks at you and starts to tear up and says, I really, really feel for you. What can we do? Can I make you dinner? Can we watch a movie tonight? What can I do to help you feel better? And you go, wow, that's a whole shared empathy thing. That's real. It's real.
And it turns out that's very important for humans. And it's very important in our work. When we talk about going to work, or doing our work, or whatever it is, a lot of it is the interaction we have with other people. A lot of the reason we go every day is we love that interaction, right? Whatever that interaction is, we like it, right? If you don't like the people, I don't know why you go. So part of it is that, and part of it is a sense of purpose. A sense of purpose. A machine has no sense of purpose. It just executes its code, right?
So this is what I'm trying to say. We're so far away, just like you worked on this in '89 and I worked on it in '98, '99, and people have been working on it since then. And no matter how big and deep and smart we make these systems, we don't understand how to model empathy, because we don't even know why we're empathetic, other than that we did it to survive.
So really the only thing I get worried about is that, again, processing speeds get faster and faster, maybe some techniques improve, and we find more interesting data sets, now that the processing can handle it, more interesting data sets that are dangerous. So for instance, the human genome, and looking at permutations of genes instead of single-gene mutations: that's potentially helpful but also potentially dangerous. And maybe there are data sets that are bigger, that are more complicated, that right now we can't solve, that are even more dangerous. I don't know. There are always dangerous data sets, right?
There are things we're going to learn from large sets of data, and I suspect the US government is already doing this, right? You might be able to learn, from huge sets of genomic data, how to really create something that will wipe out all life on earth. I mean, it's not an impossibility that you could develop a virus or a bacterium or something that would literally invade everything, every plant, every human, everything, and wipe out life on earth. That's possible. It's not crazy, right? You could certainly look at data sets on the ocean and say, what could we do to reach a tipping point?
I mean, maybe we're going to do that with climate change anyway, the way things are going and wipe us all out. But you're right, there's dangerous data sets for sure. There's danger in cybersecurity because people are using AI to hack cybersecurity and then people are using AI on the other side of it to kind of keep the AI out, AI battling AI. We're seeing all those things happen.
But I think most people today that I talk to about AI say, how's it going to impact my life? What do I need to know? When's it going to impact my life? I've seen facial recognition and I've seen some cute stuff in movies, and, you know, other than that? I don't have a robot in the kitchen that cooks the food. I don't have a simple robot that only has to do one function, cook me a meal. That's actually a very valuable task. Cooking and cleaning, like cooking and cleaning in a house. If someone had a real robot that really did that, even though it's not empathetic, that's a powerful idea.
But people have tried. There are lots of cooking robots that people have toyed with, but they end up coming back down to basic, basic machinery and basic storage of certain things, and a refrigerated section, and certain ingredients have to be there, and then there's just an oven thing, so it has to be cooked in that. I mean, it's a robot that we probably could have built 30 years ago, right? It's just not that smart, actually. It's not that interesting, and it's very, very expensive. Well, that's not interesting. What I want is a thousand-dollar robot that cooks and cleans, and everybody wants that. That's a market, by the way. Go build that.
It turns out it's really hard. It's really, really hard to replicate all the little things that we do around a house. Think about cleaning in places that are hard to reach, or cleaning window sills, or cleaning things where you have to lift the blinds and then clean and put them down, or taking some books out and cleaning.
You know, people ask me, what's the last job that will ever be replaced by AI? I say a plumber and an HVAC repair person. That's fine. And Andrew Yang agrees with that too. Like, this is where we've come full circle; here he might be correct. Yeah, yeah, he is correct, because every house is different, every plumbing problem is different, where the pipes are is different. It would be so expensive to create a robot, and the database would be full of so much noise, that it's an impossible problem to solve.
Yet, you can send a human in who is a plumber and HVAC repair person and if they're any good, they will eventually find the problem. They will eventually fix it and they'll charge you a couple hundred dollars to do so.
Well for a couple hundred dollars, it's way cheaper to have those people do that work than it will ever be in our lifetime to build a robot that would come to your house and fix your plumbing.
It's not going to happen. It's too complicated. It's just plumbing, and it's too complicated. That should level-set everyone listening to this: plumbing is too complicated for a robot.
Well, Kevin Surace, this has been enlightening and informative, particularly on the economic stuff, which is really fascinating, and scary on the pandemic stuff. Although the flip side of that is that AI will get better at drug discovery too, for the viruses, for any AI-developed diseases as well. So it balances out.
But I really appreciate you coming on the podcast and giving us the state of the world in AI this year. Yeah, great conversation, we could talk for hours. I'm sure we'll do it again, but thank you so much for having me.