Thanks for joining us. A pleasure. Excited to chat. I wish we had days, but we have like 40 minutes, so we'll get through as much as we can in this time. This is a moment of a lot of public-facing progress, a lot of hype, a lot of concern. How would you describe this moment in AI? A combination of excitement and so many things happening that we can't follow everything. It's hard to keep up. It is, even for me. And a lot of, perhaps, ideological debates that are both scientific, technological, and even political. And even moral, in some ways. And moral. Yeah, that's right. Boy, I want to dig into that. But I want to do just a brief background on your journey to get here. Is it right that you got into this reading a book about the origins of language? Was that how it started? It was a debate between Noam Chomsky, the famous linguist, and Jean Piaget, the developmental psychologist, about whether language is learned or innate. So Chomsky on one side saying it's innate, and then Piaget on the other side saying, yeah, there is a need for structure, but it's mostly learned. And there were interesting articles by various people at this conference debate that took place in France.
And one of them was by Seymour Papert from MIT, who was describing the perceptron, which was one of the early machine learning models. And I read this. I was maybe 20 years old, or something. I was fascinated by the idea that a machine could learn, and that's what got me into it. And so you got interested in neural nets, but the broader community was not interested in neural nets. No, we're talking about 1980. So essentially, very, very, very few people were working on neural nets then. And that work was not really being published in mainstream venues or anything like that. There were a few cognitive scientists in San Diego, for example, working on this, David Rumelhart, Jay McClelland, and then Geoffrey Hinton, who I ended up working with after my PhD, who was interested in this. But it was really kind of a lonely effort. There were a few isolated people in Japan and Germany working on this kind of stuff. But it was not a field. It started being kind of a field again around 1986 or something like that. And then there was another big AI winter. And what's the phrase you used? You and Geoffrey Hinton and Yoshua Bengio had a type of conspiracy, you said, to bring neural nets back. It was that desperate? It was that hard to do this work at that point? OK, well, the notion of AI winter is complicated, because what's happened since the 50s is that there have been waves of interest in one particular technique, an excitement, and people working on it. And then people realize that this new set of techniques is limited, and then interest wanes, or people start using it for other things and lose the ambition of building intelligent machines.
And there have been a lot of waves like this, with the perceptron, things like that, with sort of more classical computer science, a lot of logic-based AI. And there was a big wave of excitement in the 80s about logic-based AI, what we call rule-based systems, expert systems. And then in the late 80s, about neural nets, and then that died in the mid-90s. So that's the winter when I was out in the cold, if you like. And so what happened in the early 2000s is that Geoff, Yoshua, and I kind of got together and said, we have to rekindle the interest of the community in those methods, because we know they work. We just have to show a bit more experimentally that they work, and perhaps come up with new techniques that are applicable to the new world. In the meantime, what happened is that the internet took off, and now we have sources of data that we didn't have before. And computers got much faster. And so all of that converged towards the end of the 2000s and early 2010s, when we started having real results in speech recognition, image recognition, and then a bit later, natural language understanding. And that really sparked a new wave of interest in machine learning-based AI.
So we call that deep learning. We didn't want to use the word neural nets because it had a bad reputation, so we changed the name to deep learning. It must be strange, I imagine, having been on the outside, even of just computer science, for decades, to now be at the center, not just of tech, but in some ways the global conversation. It's quite a journey. It is, but I would have expected the progress to be more continuous, if you want, instead of those waves. Yeah, I wasn't at all prepared for what happened there. Neither for the loss of interest in those methods by the community, nor for the incredibly fast explosion of the renewed field over the last 10, 12 years. And now there's been this huge, at least public-facing explosion in the last, whatever, 18 months, a couple of years. And there's been this big push for government regulation that you have had concerns about. What are your concerns? OK, so first of all, there's been a lot of progress in AI and deep learning applications over the last decade, a little more than a decade. But a lot of it has been a little behind the scenes. So on social networks, it's content moderation, protection against all kinds of attacks, things like that. That uses AI massively. When Facebook knows it's my friend in the photo, that's you. Yes, but no, not anymore. Oh, not anymore. There is no face recognition on Facebook anymore. Oh, isn't there? No, that was turned off several years ago. Oh my god, I feel so dated. But the point being that a lot of your work is integrated in different ways into these products. Oh, if you tried to rip deep learning out of Meta today, the entire company would crumble. It's literally built around it. So a lot of things behind the scenes, and things that are a little more visible, like translation, for example, which uses AI massively, obviously, or generating subtitles for videos, so you can watch them silently. That's speech recognition, and then it's translated. So that is visible, but most of it is behind the scenes.
And in the rest of society, it's also largely behind the scenes. You buy a car now, and most cars have a little camera looking out the windshield, and the car will brake automatically if there is an obstacle in front. That's called an automatic emergency braking system. It's actually a required feature in Europe. A car cannot be sold unless it has that. It's on nearly all new American cars as well. Yeah. And that uses deep learning. It uses convolutional nets, in fact, my invention. So that saves lives, same for medical applications and things like that. So that's a little more visible, but still kind of behind the scenes. What has changed in the last year or two is that now there are sort of AI-first products that are in the hands of the public. The fact that the public got so enthusiastic about it was a complete surprise to all of us, including OpenAI and Google and us.
OK, but let me get your take, though, on the regulation. Because there are even some big players, you've got Sam Altman at OpenAI, you've got everyone, at least saying publicly: regulation, we think it makes sense. OK, so there are several types of regulation. There is regulation of products. So if you put one of those emergency braking systems in your car, of course, it's been checked by a government agency that makes sure it's safe. I mean, it has to happen, right? So you need to regulate products, certainly the ones that are life-critical, in health care and transportation and things like that, and probably in other areas as well. The debate is about whether research and development should be regulated. And there, I'm clearly very strongly of the opinion that it should not. The people who believe it should are people who claim that there is an intrinsic danger in putting the technology in the hands of essentially everyone, or every technologist. And I think the exact opposite: that this actually has a hugely beneficial effect. What's the benefit? Well, the benefit is that we need to get AI technology to disseminate in all corners of society and the economy. Because it makes people smarter. It makes people more creative. It helps people who don't necessarily have the skills to write a nicely put-together piece of text or a picture or a video or a piece of music or whatever to be more creative, right?
The creation tools, essentially. Creation aids. It may facilitate a lot of business; a lot of boring jobs can be automated. And so it has a lot of beneficial effects on the economy, on entertainment, all kinds of things. Making people smarter is intrinsically good. You could think of it this way: it may have, in the long term, a similar effect to the invention of the printing press, which had the effect of making people literate and smarter and more informed. So some people tried to regulate that too. Well, that's true.
Actually, the printing press was banned in the Ottoman Empire, at least for Arabic. And some people, including the Minister of AI of the UAE, say that this contributed to the decline of the Ottoman Empire. So yeah, if you want to ban technological progress, you're taking a much bigger risk than if you favor it. You have to do it right, obviously. I mean, there are side effects of technology that you have to mitigate as much as you can. But the benefits far outweigh the dangers. The EU has some proposed regulation. Do you think that's the right kind? Well, so there are good things in the proposal for that regulation.
And there are things, again, when it comes to regulating research and development, and essentially making it very difficult for companies to open source their platforms, that I think are very counterproductive. And in fact, the French, German, and Italian governments have basically blocked the legislation in front of the EU parliament for that reason. They really want open source. And the reason they want open source is because you can imagine a future where everyone's interaction with the digital world is mediated by an AI system. That's where we're headed. That's where we're heading.
So every one of us will have an AI assistant. Within a few months, you will have that in your smart glasses. You can get smart glasses from Meta, and you can talk to them. There's an AI system behind them, and you can ask questions. Eventually, they will have displays. So these things will be able to do things like: I could speak French to you, and it would be automatically translated. In your glasses, you'd have subtitles.
Or you would hear my voice in English. So erasing language barriers and things like that. Or you would be in a place and it would indicate where you should go, or give information about the building you're looking at, or whatever. So we'll have intelligent assistants living with us at all times. These things will actually provide intelligence. It will be like having a human staff working for you, except they're not human. And they might be even smarter than you. But it's fine. I mean, I work with people who are smarter than me. So that's the future.
Now, if you imagine this kind of future, where all of our information diet is mediated by those AI systems, you do not want those things to be controlled by a small number of companies on the west coast of the US. It has to be an open platform, kind of like the internet. All the software infrastructure of the internet is completely open source. And it's not by design. It's just that it's the most efficient way to have a platform that is safe, customizable, et cetera.
And for the assistants we want, those systems will constitute the repository of all human knowledge and culture. You can't have that centralized. Everybody has to contribute to it. So it needs to be open. You said at the FAIR 10th anniversary event that you wouldn't work for a company that didn't do it the open way. Why is it so important to you? Two reasons. The first is that science and technology progress through the quick exchange of scientific information. The problem that we have to solve with AI is not just the technological problem of what product we have to build. That's, of course, a problem. But the main problem we have to solve is, how do we make machines more intelligent? That's a scientific question.
And we don't have a monopoly on good ideas. A lot of good ideas come from academia. They come from other research labs, public or private. And so if there is a faster exchange of information, the field progresses faster. And if you become secretive, you fall behind, because people don't want to talk to you anymore. Let's talk about what you see for the future. It seems like one of the big things you're trying to do is a shift from these large language models that are trained on text to looking much more at images.
Why is that so important? OK, so as you can tell from the question, we have those LLMs. It's amazing what they can do. They can pass the bar exam. But we still don't have self-driving cars. We still don't have domestic robots. Like, where is the domestic robot that can do what a 10-year-old can do, clear the dinner table and fill up the dishwasher? Where is the robot that can learn to do this in one shot, like a 10-year-old?
Where is the robot that can learn to drive a car in 20 hours of practice like a 17-year-old? We don't have that. That tells you we're missing something really big. We're training the wrong way? We're not training the wrong way, but we're missing essential components to reach human-level intelligence. So we have systems that can absorb an enormous amount of training data from text. And the problem with text is that text only represents a tiny portion of human knowledge. This sounds surprising. But in fact, most of human knowledge is things that we learn when we're babies, and that has nothing to do with language. We learn how the world works. We learn intuitive physics. We learn how people interact with each other. We learn all kinds of stuff. But it really doesn't have anything to do with language. And think about animals. A lot of animals are super smart. In some domains, they're actually smarter than humans, right? They don't have language. And they seem to do pretty well. So what type of learning is taking place in human babies and in animals that allows them to understand how the world works, become really smart, and have the common sense that no AI system today has?
So the joke I make very often is that the smartest AI systems we have today are stupider than a house cat. Because a cat can navigate the world in a way that a chatbot certainly can't. A cat understands how the world works, understands causality, understands that if it does something, something else will happen, right? And so it can plan sequences of actions. Have you ever seen a cat sitting at the bottom of a bunch of furniture, looking around, moving its head, and then going jump, jump, jump, jump, jump? That's amazing planning. No robot can do this today. And so we have a lot of work to do. It's not a solved problem. We're not going to get human-level AI systems before we get significant progress in being able to train systems to understand the world, basically by watching video and acting in the world. Another thing you're focused on is, I think, what you call an objective-based model. Objective-driven. Objective-driven. Yeah. Explain why you think that is important. And I haven't been clear, just in hearing you talk about it, whether safety is an important component of that, or safety is kind of separate or alongside that. It's part of it. So the idea of objective-driven, OK, let me tell you, first of all, what a current LLM does. Define the problem. Right. So LLMs really should be called auto-regressive LLMs. The reason we should call them this way is that they just produce one word, or one token, which is a sub-word unit, it doesn't matter, one word after the other, without really planning what they're going to say.
So you give them a prompt, and then you ask, what word comes next? And they produce one word. And then you shift that word into their input and say, what word comes next now, et cetera, right? That's called auto-regressive prediction. It's a very old concept. But that's how it works now. Geoff did it like 30 years ago or something? Actually, Geoff had some work on this with a student a while back, but that wasn't very long ago. Yoshua Bengio had a pioneering paper on this in the 2000s, using neural nets to do this, actually. It was probably one of the first. Anyway, I've gotten you distracted here. So you can get to what's next. Right, OK. So it produces words one after the other without really thinking about it beforehand. The system doesn't know in advance what it's going to say, right? It just produces those words.
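To make the loop he is describing concrete, here is a minimal sketch of auto-regressive generation. It is an illustration only, not Meta's or any production system's code; the model here is a hypothetical stand-in that simply scores each candidate next token.

```python
# Minimal sketch of auto-regressive (next-token) generation, as described above.
# `logits_fn` is a hypothetical stand-in for any trained language model that
# maps the token sequence so far to a score for each possible next token.
import numpy as np

def generate(logits_fn, prompt_tokens, n_steps, vocab_size, temperature=1.0):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        logits = logits_fn(tokens)                 # score every candidate next token
        probs = np.exp(logits / temperature)
        probs /= probs.sum()                       # softmax over the vocabulary
        next_token = int(np.random.choice(vocab_size, p=probs))  # pick one token
        tokens.append(next_token)                  # shift it into the input, repeat
    return tokens

# Toy stand-in model: slightly prefers the token that follows the last one.
def toy_logits(tokens, vocab_size=10):
    logits = np.zeros(vocab_size)
    logits[(tokens[-1] + 1) % vocab_size] = 2.0
    return logits

print(generate(toy_logits, prompt_tokens=[0], n_steps=5, vocab_size=10))
```

The key point of the sketch is that each token is committed to as soon as it is sampled; there is no step where the system looks ahead at the whole answer before producing it.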
And the problem with this is that it can hallucinate, in the sense that sometimes it will produce a word that is really not part of a correct answer, and then that's it. The second problem is that you can't control it. So you can't tell it, OK, you're talking to a 12-year-old, so only produce words that are understandable by a 12-year-old. You can put this in a prompt, but that has kind of limited effect unless the system has been fine-tuned for that. So it's very difficult, in fact, to control those systems. And you can never guarantee that whatever they're going to produce is not going to escape the conditioning, if you want, the training that they've gone through to produce not just useful answers, but answers that are non-toxic and non-biased and everything.
So right now, that's done by kind of fine-tuning the system, training it on lots of people answering questions and rating answers. That's called human feedback. There's an alternative to this. And the alternative is you give the system an objective. The objective is a mathematical function that measures to what extent the answer produced by the system conforms to a bunch of constraints that you want it to satisfy. Is this understandable by a 12-year-old? Is this toxic in this particular culture? Does this answer the question in the way that I want? Is it consistent with what my favorite newspaper was saying yesterday, or whatever? So a bunch of constraints like this, which could be safety guardrails or just the task. And then what the system does, instead of just blindly producing one word after the other, is plan an answer that satisfies all of those criteria, and then produce that answer. That's objective-driven AI. That's the future, in my opinion. We haven't made this work yet, or at least not in the situations that we want. People have been working on this kind of stuff in robotics for a long time. It's called model predictive control or motion planning.
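As a rough, heavily simplified illustration of the objective-driven idea (not Meta's actual system), the sketch below scores whole candidate answers against explicit objectives, a task-fit term plus guardrail penalties, and keeps the one that best satisfies them. Every function, weight, and candidate sentence here is a hypothetical stand-in.

```python
# Toy illustration of objective-driven answering: instead of emitting tokens
# blindly, score complete candidate answers against explicit objectives
# (task fit, guardrails) and produce the one that satisfies them best.
# All functions and numbers below are hypothetical stand-ins.

def task_score(answer, question):
    # Stand-in for "does this answer the question": count question words reused.
    return sum(w in answer.lower() for w in question.lower().split())

def readability_penalty(answer, max_word_len=8):
    # Stand-in guardrail: penalize long words a 12-year-old might not know.
    return sum(len(w) > max_word_len for w in answer.split())

def toxicity_penalty(answer, banned=("idiot", "stupid")):
    # Stand-in guardrail: penalize words on a small block list.
    return sum(b in answer.lower() for b in banned)

def objective(answer, question):
    # Combine the constraints into a single score to maximize.
    return (task_score(answer, question)
            - 2.0 * readability_penalty(answer)
            - 5.0 * toxicity_penalty(answer))

def plan_answer(candidates, question):
    # "Planning" here is just a search over candidates; a real objective-driven
    # system would optimize in a learned representation space before wording.
    return max(candidates, key=lambda a: objective(a, question))

question = "Why does the moon change shape?"
candidates = [
    "The moon looks different because sunlight hits it from changing angles.",
    "Lunar phases arise from heliocentric illumination geometry and libration.",
    "That is a stupid question.",
]
print(plan_answer(candidates, question))
```

The point of the toy is only the structure: the constraints are explicit, checkable functions, so the system can reject an answer before producing it rather than hoping fine-tuning has conditioned it well enough.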
There's obviously been so much attention on Geoffrey Hinton and Yoshua Bengio having these concerns about what the technology could do. How do you explain the three of you reaching these different conclusions? OK, so it's a bit difficult to explain for Geoff. He had a bit of an epiphany in April, where he realized that the systems we have now are a lot smarter than he expected them to be. And he realized, oh my god, we're kind of close to having systems with human-level ability. I disagree with this completely. They're not as smart as he thinks they are. Right. Yeah, right. And he's thinking in very long-term, abstract terms. So I can understand why he's saying what he's saying, but I just think he's wrong. And we've disagreed on things before. We're good friends. But we've disagreed on these kinds of questions before, on technical questions, among other things. So I don't think he had thought about the problem of existential risk and stuff like that for very long, basically since April. I've been thinking about this from a philosophical, moral point of view for a long time. For Yoshua, I think he's more concerned about short-term risks that would be due to misuse of the technology by terrorist groups or people with bad intentions, and also about the motivation of the industry developing AI, which he sees as not necessarily aligned with the common good, because he claims it's motivated by profit. So there may be a bit of a political difference there, in that perhaps he has less trust in democratic institutions to do the right thing than I have. I've heard you say that that is the distinction, that you have more faith in democracy and in institutions than they do. I think that's the case.
I don't want to put words in their mouths, and I don't want to misrepresent them. Ultimately, I think we have the same goal. We know that there are going to be a lot of benefits to AI technology. Otherwise, we wouldn't be working on this. And the question is how you do it right. Do we have to have, as Yoshua advocates, some overarching multinational regulatory agency to make sure everything is safe? Should we ban open-sourcing models that are potentially dangerous, but run the risk of basically slowing down progress, slowing the dissemination of the technology in the economy and society? Those are trade-offs, and reasonable people can disagree on this. Yeah. In my opinion, the criterion, the reason really that I'm very much in favor of open platforms, is the fact that AI systems are going to constitute a very basic infrastructure in the future. And there has to be some way of ensuring that, culturally and in terms of knowledge, those things are diverse. A bit like Wikipedia, right? You can't have Wikipedia in just one language. It has to cover all languages, all cultures, everything. Same story.
There has been, and it's obviously not just the two of them, a growing number of people who say, not that it's likely, but that there's a real chance, like a 10, 20, 30, 40% chance, of literally wiping out humanity, which is kind of terrifying. Why are so many, in your view, getting it wrong? It's a tiny, tiny number of people. Ask the vast majority... It's like 40% of researchers in one poll. No, but that's a self-selected poll online. People select themselves to answer those polls. No. The vast majority of people in AI research, particularly in academia or in startups, but also in large labs like ours, don't believe in this at all. They don't believe there is a significant existential risk to humanity. All of us believe that there are proper ways to deploy the technology and bad ways to deploy it, and that we need to work on the proper way to do it. OK. And the analogy I draw is that the people who are really afraid of this today would be a bit like people in 1920 or 1925 saying, oh, we have to ban airplanes because they can be misused. Someone can fly over a city and drop a bomb. And they can be dangerous because they can crash. So we're never going to have planes that cross the Atlantic because it's just too dangerous. A lot of people will die from this, right?
And then they would have asked to regulate the technology, like, you know, ban the invention of the turbojet, OK, or regulate turbojets. In 1920, turbojets hadn't been invented yet. In 2023, human-level AI has not been invented yet. So discussing how to make this technology safe, how to make superhuman intelligence safe, is the same as asking a 1920 engineer, you know, how you can make turbojets safe. They're not invented yet, right? And the way to make them safe is going to be like the turbojet: years and decades of iterative refinements and careful engineering to make those things proper, and they're not going to be deployed unless they're safe. So again, you have to trust the institutions of society to make that happen. And just so I understand your view on the existential risk, I don't think you're saying it's zero, but you're saying it's quite small, like below 1%? You know, it's below the chances of an asteroid hitting the Earth or global nuclear war and things of that type. I mean, it's on the same order. I mean, there are things that you should worry about, and there are things that you can't do anything about.
Those are natural phenomena, right? There's not much you can do about them. But for things like deploying an AI, we have agency. We can decide not to deploy it if we think there is a danger, right? So attributing a probability to this makes no sense, because we have agency. Last thing on this topic: autonomous weapons. How will we make those safe and not have at least the possibility of really bad outcomes with them? So autonomous weapons already exist. But not in the form that they will in the future. We're talking about missiles that are self-guided, but that's a lot different from a soldier that's sent into battle. OK, the first example of an autonomous weapon is the land mine. And some countries, not the US, but some countries, have banned its use. There are international agreements about this, which neither the US nor Russia nor China has signed. And the reason for banning them is not because they're smart; it's because they're stupid. They're autonomous and stupid. And so they'll kill anybody, right? With a guided missile, the more guided the missile is, the less collateral damage it makes.
So then there is a moral debate. Is it better to actually have smarter weapons that only destroy what you need to and don't kill hundreds of civilians next to it? Can that technology be used to protect democracy? In Ukraine, they make massive use of drones, and they're starting to put AI into them. Is it good or is it bad? I think it's necessary, regardless of whether you think it's good or bad. Autonomous weapons are necessary. Well, for the protection of democracy in that case, right? But obviously, the concern is, what if it's Hitler who has them rather than Roosevelt? Well, then, that's the history of the world.
Who has the better technology, the good guys or the bad guys? So the good guys should be doing everything they can. It's, again, a complicated moral issue. It's not my specialty. I don't work on weapons. But you're a prominent voice saying, hey, guys, don't be worried, let's go forward. And this is, I think, one of the main concerns people have. OK, so I'm not a pacifist, unlike some of my colleagues. And I think you have to be realistic about the fact that this technology is being deployed in defense, and for good things. The Ukrainian conflict has actually made this quite obvious, that progress in technology can actually help protect democracy.
We talk generally about all the good things AI can do. I'd love, to the extent you can, for you to talk really specifically about things that people, let's say people who are middle-aged or younger, can hope in their lifetime that AI will do to make their lives better. So in the short term, things like safety systems for transportation, and for medical diagnosis, detecting tumors and stuff like that, which is improved with AI. And then in the medium term, understanding more about how life works, which would allow us to do things like drug design more efficiently, like all the work on protein folding and the design of proteins, the synthesis of new chemical compounds, and things like that.
So there's a lot of activity on this. There's not been a huge revolutionary outcome from this yet. But there are a few techniques that have been developed with the help of AI to treat rare genetic diseases, for example, and things of that type. So this is going to make a lot of progress over the next few years, make people's lives more enjoyable and longer, perhaps, et cetera. And then beyond that, again, imagine that each of us will be like a leader, in either science, business, politics, or whatever it is. And we'll have a staff of people assisting us. But they won't be people. They will be virtual people working for us.
Everybody is going to be a boss, essentially. And everybody is going to be smarter as a consequence. Not individually smarter, perhaps, although they will learn from those things. But smarter in the sense that they will have a system that makes them smarter, right? Makes it easier for them to learn the right thing, to access the right knowledge, to make the proper decisions. So we'll be in charge of AI systems. We'll control them. They'll be subservient to us. We set their goals. But they can be very smart in fulfilling those goals.
As the leader of a research lab, a lot of people at FAIR are smarter than me. And that's why we hire them. And there is kind of an interesting interaction between people, particularly in politics, right? The politician, the sort of visible persona, makes a decision. And that sets goals, essentially, for other people to fulfill. So that's the interaction we'll have with AI systems. We set goals for them, and they fulfill them.
I think you've said AGI is at least a decade away, maybe farther. Is this something you guys are working toward? Or are you leaving that kind of to the other guys? Or is that your goal? Oh, it's our goal. Of course, it's always been our goal. But I guess in the last 10 years, there were so many useful things we could do in the short term that a part of the lab ended up being devoted to those useful things, like content moderation, translation, computer vision, robotics. A lot of things that are kind of application areas of this type.
What has changed in the last year or two is that now we have products that are AI-first, right? Assistants that are built on top of Llama and things like that. So the services that Meta is deploying, or will be deploying, not just on mobile devices but also on smart glasses and AR/VR devices and things like that, are AI-first. So now there is a product pipeline where there is a need for a system that has essentially human-level AI. We don't call this AGI, because human intelligence is actually very specialized. It's not general.
So we call this AMI, advanced machine intelligence. But when you say AMI, you're basically meaning AGI. Basically, it's the same as what people mean by AGI. We like it because we speak French, and ami means friend in French. Yes, mon ami, my friend. So yeah, we're totally focused on that. That's the mission of FAIR, really. Whenever AGI happens, it's going to change the relationship between people and machines. Do you worry at all about us having to hand over control, whether it's individuals, corporations, or governments, to these smarter entities?
We don't hand over control. We hand over the execution. We control. We set the goals, as I said before, and they execute the goals. It's very much like being the leader of a team of people. You set the goal. This is a wild one, but I find it fascinating. There are some people who think that even if humanity got wiped out by these machines, it would not be a bad outcome, because it would just be the natural progression of intelligence. Larry Page is apparently a famous proponent of this, according to Elon Musk. Would it be terrible if we got wiped out, or would there be some benefits because it's a form of progress?
I don't think this is something that we should think about right now, because predictions of this type that are more than, let's say, 10 years ahead are complete speculation. How our descendants will see progress or their future is not for us to decide. We have to give them the tools to do whatever they want. But I don't think it's for us to decide. We don't have the legitimacy for that. We don't know what it's going to be. That's so interesting, though. You don't think necessarily humans should worry about humanity continuing?
I don't think it's a worry that people should have at the moment. I mean, OK, you can put it in perspective. How long has humanity existed? About 300,000 years. It's very short. So if you project 300,000 years into the future, what will humans look like then, given the progress of technology? We can't figure it out. And probably the biggest changes will not come through AI. They'll probably come through genetic engineering or something like that, which is currently banned, probably because we don't know its potential dangers. Last thing, because I know our time is running out. Do you see a middle path that acknowledges more of the concerns, at least considers that maybe you're wrong and to an extent this other group is right, and still maintains the things that are important to you around open use of AI? Is there kind of a compromise?
So there are certainly potential dangers in the medium term that are essentially due to potential misuse of the technology. And the more available you make the technology, the more people you make it accessible to, so you have a higher chance of people with bad intentions being able to use it. So the question is, what countermeasures do you use for that? Some people are worried about things like a massive flood of misinformation, for example, that is generated by AI. What measures can you take against that? So what we're working on is things like watermarking, so that you know when a piece of data has been generated by a system. Another thing that we're extremely familiar with at Meta is detecting false accounts. And divisive speech that is sometimes generated, sometimes just typed, by people with bad intentions, hate speech, dangerous misinformation: we already have systems in place to protect against this on social networks. And the thing that people should understand is that those systems make massive use of AI.
So hate speech detection and takedown, in all the languages of the world, was not possible five years ago, because the technology was just not there. And now it's much, much better because of the progress in AI. Same for cybersecurity. You can use AI systems to try to attack computer systems. But that means you can also use them to protect. So every attack has a countermeasure, and they both make use of AI. So it's a cat-and-mouse game, as it's always been. Nothing new there. So that's for the short- to medium-term dangers. And then there is the long-term danger of existential risk. And I just do not believe in this at all, because we have agency. It's not a natural phenomenon that we can't stop.
This is something that we do; we're not going to extinguish ourselves by accident. The reason why people think this, among other things, is because of a scenario that has been popularized by science fiction, which has received the name "foom." And what that means is that one day someone is going to discover the secret of AGI, or whatever you want to call it, superhuman intelligence, and is going to turn on the system. And two minutes later, that system will take over the entire world, destroy humanity, and make such fast progress in technology and science that we're all dead. Some people actually are predicting this in the next three months, which is insane. So this is not happening. This scenario is completely unrealistic.
This is not the way things work. The progress towards human-level AI is going to be slow and incremental. And we're going to start by having systems that may have the ability to potentially reach human-level AI, but at first, they're going to be as smart as a rat or a cat, something like that. And then we're going to crank them up, put in some more guardrails to make sure they're safe, and work our way through smarter and smarter systems that are more and more controllable, et cetera. It's going to be the same process we used to make turbojets safe. It took decades. And now you can fly across the Pacific on a two-engine airplane.
You couldn't do this 10 years ago. You had to have three engines before, because the reliability of turbo jets wasn't that high. So it's going to be the same thing, a lot of engineering, a lot of really complicated engineering. We're out of time for today. But if we're all still here in three months, maybe we'll do it again. My pleasure. Thanks a lot.