It is my pleasure to welcome Dr. Andrew Ng tonight. Andrew is the managing general partner of AI Fund, founder of DeepLearning.AI and Landing AI, chairman and co-founder of Coursera, and an adjunct professor of computer science here at Stanford. Previously he started and led the Google Brain team, which helped Google adopt modern AI, and he was also director of the Stanford AI Lab. About eight million people, one in 1,000 persons on the planet, have taken an AI class from him, and through both his education work and his AI work he has changed numerous lives.
Thank you, Lisa. It's good to see everyone. So what I want to do today is chat with you about some opportunities in AI. I've been saying AI is the new electricity. One of the difficult things to understand about AI is that it is a general purpose technology, meaning that it's not useful only for one thing but for lots of different applications, kind of like electricity. If I were to ask you what electricity is good for, it's not any one thing, it's a lot of things. So what I'd like to do is start off sharing with you how I view the technology landscape, and this will lead into the set of opportunities.
So there's a lot of excitement about AI, and I think a good way to think about AI is as a collection of tools. This includes a technique called supervised learning, which is very good at recognizing or labeling things, and generative AI, which is a relatively new and exciting development. If you're familiar with AI, you may have heard of other tools, but I'm going to talk less about those additional tools and focus today on what I think are currently the two most important: supervised learning and generative AI.
So supervised learning is very good at labeling things, or at computing input-to-output, or A-to-B, mappings: given an input A, produce an output B. For example, given an email, we can use supervised learning to label it as spam or not spam. The most lucrative application of this that I've ever worked on is probably online advertising, where given an ad, we can label whether a user is likely to click on it and therefore show more relevant ads. For self-driving cars, given the sensor readings of a car, we can label where the other cars are. One project that my team at AI Fund worked on was ship route optimization, where given a route the ship is taking or considering taking, we can label that with how much fuel we think it will consume, and use this to make ships more fuel efficient. There's also a lot of work in automated visual inspection in factories: you can take a picture of a smartphone that was just manufactured and label whether there is a scratch or any other defect in it. Or if you want to build a restaurant review reputation monitoring system, you can have a little piece of software that looks at online restaurant reviews and labels them as positive or negative sentiment.
So one nice thing, one cool thing, about supervised learning is that it's not useful for just one thing; it's useful for all of these different applications and many more besides.
Let me just walk through concretely the workflow of one example of a supervised learning, labeling-things kind of project. If you want to build a system to label restaurant reviews, you collect a dataset where, say, "the pastrami sandwich was great", you label that as positive; "service was slow", that's negative; "my favorite chicken curry", that's positive. Here I've shown three data points, but if you're building this you may get thousands of data points like this, or thousands of training examples, as we call them. And the workflow of a machine learning project, an AI project, is: you get labeled data, maybe thousands of data points; then you have an AI engineering team train an AI model to learn from this data; and then finally you find maybe a cloud service to run the trained AI model, and you can feed it a new review, like "best quality I've ever had", and it says that's positive sentiment.
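To make that workflow concrete, here is a minimal sketch in Python. It is not the tooling from the talk; scikit-learn is just one convenient, illustrative choice, and a real project would use thousands of labeled examples rather than three.

```python
# Minimal sketch of the supervised-learning workflow described above:
# collect labeled examples, train a model, then run it on new reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: labeled data (A -> B pairs). A real project would have thousands of examples.
reviews = [
    "The pastrami sandwich was great",  # positive
    "Service was slow",                 # negative
    "My favorite chicken curry",        # positive
]
labels = ["positive", "negative", "positive"]

# Step 2: have an AI team train a model on the labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# Step 3: deploy (e.g., behind a cloud service) and feed it new reviews.
# With only three training examples the prediction is not reliable; this just shows the flow.
print(model.predict(["Best quality I've ever had"]))
```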
And so I think the last decade was maybe the decade of large-scale supervised learning. What we found starting about 10 or 15 years ago was that if you were to train a small AI model, a small neural network or small deep learning algorithm basically, maybe on a not very powerful computer, then as you fed it more data, its performance would get better for a little bit, but then it would flatten out, it would plateau, and it would stop being able to use the data to get better and better.
But if you were to train a very large AI model, with lots of compute on maybe powerful GPUs, then as we scaled up the amount of data we gave the machine learning model, its performance would keep on getting better and better. So this is why, when I started and led the Google Brain team, the primary mission I directed the team toward at the time was: let's just build really, really large neural networks that we then feed a lot of data to. And that recipe fortunately worked. I think this recipe of driving large-scale compute and large-scale data has driven a lot of AI progress over the last decade.
So if that was the last decade of AI, I think this decade is turning out to be about doing everything we had in supervised learning, but adding to it the exciting tool of generative AI. Many of you, maybe all of you, have played with ChatGPT and Bard and so on. Given a piece of text, which we call a prompt, like "I love eating", if you run this multiple times, maybe you get "bagels with cream cheese", or "my mother's meatloaf", or "out with friends"; the AI system can generate output like that.
Given the amount of buzz and excitement about generative AI, I thought I'd take just half a slide to say a little bit about how it works. It turns out that generative AI, at least this type of text generation, at its core uses supervised learning, that input-to-output mapping, to repeatedly predict the next word. So if your system reads on the internet a sentence like "my favorite food is a bagel with cream cheese and lox", then this is translated into a few data points where, if it sees "my favorite food is a", it tries to guess that the right next word is "bagel"; given "my favorite food is a bagel", it tries to guess that the next word is "with"; and similarly, if it sees "my favorite food is a bagel with", the right guess for the next word would have been "cream". By taking text that you find on the internet or other sources, and using this input-output supervised learning to repeatedly predict the next word, if you train a very large AI system on hundreds of billions of words, or in the case of the largest models now more than a trillion words, you get a large language model like ChatGPT.
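As a small illustration of that idea, here is a sketch, not code from the talk, showing how one sentence turns into several "predict the next word" training examples:

```python
# How one sentence becomes several (context -> next word) training pairs.
sentence = "my favorite food is a bagel with cream cheese and lox"
words = sentence.split()

# Each prefix of the sentence is an input; the word that follows is the label.
training_pairs = [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

for context, next_word in training_pairs[3:6]:
    print(f"input: {context!r:40} -> next word: {next_word!r}")
# input: 'my favorite food is'          -> next word: 'a'
# input: 'my favorite food is a'        -> next word: 'bagel'
# input: 'my favorite food is a bagel'  -> next word: 'with'
```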
And there are other important technical details beyond predicting the next word. Technically, these systems predict the next sub-word, a part of a word called a token. And then there are other techniques, like RLHF, for further tuning the AI's output to be more helpful, honest, and harmless. But at the heart of it is this use of supervised learning to repeatedly predict the next word; that is really what's enabling the exciting, really fantastic progress on large language models.
So while many people have experienced large language models as a fantastic consumer tool, you can go to a website like ChatGPT's or Bard's or other large language models and use it that way, one trend I think is still underappreciated is the power of large language models not just as a consumer tool, but as a developer tool. It turns out that there are applications that used to take me months to build that a lot of people can now build much faster by using a large language model.
So specifically, the workflow for supervised learning, building the restaurant review system, say, would be that you need to get a bunch of labeled data, and maybe that takes a month to get a few thousand data points. Then have an AI team train and tune and really get optimized performance out of your AI model; maybe that takes three months. Then find a cloud service to run it and make sure it's running robustly; maybe that takes another three months. So a pretty realistic timeline for building a commercial-grade machine learning system is like six to twelve months, right?
So teams I've led have often taken roughly six to twelve months to build and deploy these systems, and some of them turned out to be really valuable, but that is a realistic timeline for building and deploying a commercial-grade AI system. In contrast, with prompt-based AI, where you write a prompt, this is what the workflow looks like: you can specify a prompt, which takes maybe minutes or hours, and then you can deploy it to the cloud, which takes maybe hours or days.
So there are now certain AI applications that used to take me, you know, literally six months, maybe a year to build that many teams around the world can now build in maybe a week. And I think this is already starting, but the best is still yet to come. This is starting to open up a flood of a lot more AI applications that can be built by a lot of people. So I think many people still underestimate the magnitude of the flood of custom AI applications that I think is going to come down the pipe.
Now, I know you probably were not expecting me to write code in this presentation, but that's what I'm going to do. So it turns out this is all the code I need in order to write a sentiment classifier. Some of you will know Python, I guess: I import some tools from OpenAI, and then I have this prompt that says, classify the text below, delimited by three dashes, as having either a positive or negative sentiment, followed by the text: I had a fantastic time at Stanford GSB and also made great new friends. All right, so that's my prompt, and now I'm just going to run it. And I've never run it before, so I really hope... thank goodness, we got the right answer. And this is literally all the code it takes to build a sentiment classifier. So today, developers around the world can take literally maybe 10 minutes to build a system like this, and that's a very exciting development.
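The slide's exact code isn't reproduced here, but a minimal sketch of what that kind of prompt-based classifier might look like, using OpenAI's current Python client, is below. The model name and exact prompt wording are illustrative assumptions, not what was shown in the talk.

```python
# Rough sketch of a prompt-based sentiment classifier.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = """Classify the text below, delimited by three dashes,
as having either a positive or negative sentiment.
---
I had a fantastic time at Stanford GSB and also made great new friends.
---"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. "Positive"
```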
So one of the things I've been working on is teaching online classes about how to use prompting, not just as a consumer tool, but as a developer tool. So that's the technology landscape; let me now share my thoughts on some of the AI opportunities I see. This shows what I think is the value of different AI technologies today, and I'll also talk about three years from now. The vast majority of financial value from AI today is, I think, supervised learning, which for a single company like Google can be worth more than 100 billion US dollars a year. And there are millions of developers building supervised learning applications.
So it's already massively valuable, and there's tremendous momentum behind it, just because of the sheer effort going into finding and building applications. And generative AI is the really exciting new entrant, which is much smaller right now. Then there are the other tools I'm including for completeness; the sizes of these circles represent the value today. This is what I think they might grow to in three years. Supervised learning, already really massive, may double, say, in the next three years, from truly massive to even more massive. And generative AI, which is much smaller today, I think will much more than double in the next three years because of the amount of developer interest, the amount of venture capital investment, and the number of large corporations exploring applications.
And I'll also just point out that three years is a very short time horizon. If it continues to compound at anything near this rate, then in six years it'll be vastly larger still. This light-shaded region, in green or orange, is where the opportunity is for either new startups or large incumbent companies to create and capture value. But one thing I hope you take away from this slide is that all of these technologies are general purpose technologies. In the case of supervised learning, a lot of the work that had to be done over the last decade, and that continues into the next decade, is to identify and execute on the concrete use cases. And that process is also just kicking off for generative AI.
So for this part of the presentation, I hope you take away that general purpose technologies are useful for many different tasks. A lot of value remains to be created using supervised learning, and even though we're nowhere near finished figuring out the exciting use cases of supervised learning, we have this other fantastic tool of generative AI, which further expands the set of things we can now do with AI. But one caveat: there will be short-term fads along the way.
So I don't know if some of you remember the app called Lensa. This was the app that would let you upload pictures of yourself and it would render a cool picture of you as an astronaut or a scientist or something. It was a good idea and people liked it, and it just took off like crazy through last December, and then it came back down just as fast. And that's because Lensa, it was a good idea, people liked it, but it was a relatively thin software layer on top of someone else's really powerful APIs. So even though it was a useful product, it wasn't a defensible business.
And when I think about apps like Lensa, I'm actually reminded of when Steve Jobs gave us the iPhone. Shortly after, someone wrote an app that I paid $1.94 for to turn on the LED light, to turn the phone into a flashlight. And that was also a good idea, writing an app to turn on the LED light. But it also wasn't defensible long term; it didn't create very long-term value, because it was easily replicated and eventually incorporated into iOS. But with the rise of the iPhone and iOS, someone also figured out how to build things like Uber and Airbnb and Tinder, very long-term, very defensible businesses that created sustaining value. And I think with the rise of generative AI, the rise of new AI tools, what really excites me is the opportunity to create those really deep, really hard applications that hopefully can create very long-term value.
So the first trend I want to share is AI as a general purpose technology. And a lot of the work that lies ahead of us is to find the very diverse use cases and to build them.
There's a second trend I want to share with you, which relates to why AI isn't more widely adopted yet. It feels like a bunch of us have been talking about AI for like 15 years or something. But if you look at where the value of AI is today, a lot of it is still very concentrated in consumer software internet. Once you go outside tech, outside consumer software internet, there's some adoption, but a lot of it feels very early. So why is that?
It turns out that if you were to take all current and potential AI projects and sort them in decreasing order of value, then to the left of this curve, the head of this curve, are the multi-billion-dollar projects like advertising, or web search, or, for e-commerce, product recommendations at a company like Amazon. And it turns out that about 10 or 15 years ago, various of my friends and I figured out a recipe for how to hire, say, 100 engineers to write one piece of software to serve more relevant ads, apply that one piece of software to a billion users, and generate massive financial value. So that works. But once you go outside consumer software internet, hardly anyone has 100 million or a billion users they can apply one piece of software to. So once you go to other industries, as we go from the head of this curve on the left over to the long tail, these are some of the projects I see and am excited about.
I was working with a pizza maker that was taking pictures of the pizza they were making, because they needed to do things like make sure the cheese is spread evenly. This is about a $5 million project, but that recipe of hiring 100 engineers, or dozens of engineers, to work on a $5 million project doesn't make sense. Or another example: working with an agricultural company, we figured out that we could use cameras to see how tall the wheat is, wheat is often bent over because of wind or rain, and if we chop off the wheat at the right height, that results in more food for the farmer to sell and is also better for the environment. But this is another $5 million project, and that old recipe of having a large group of highly skilled engineers work on this one project doesn't make sense. And similarly materials grading, cloth grading, sheet metal grading, many projects like this.
So whereas to the left, in the head of this curve, there's a small number of, let's say, multi-billion-dollar projects, and we know how to execute on those and deliver value.
In other industries, I'm seeing a very long tail of tens of thousands of, let's call them, $5 million projects that until now have been very difficult to execute on because of the high cost of customization. The trend that I think is exciting is that the AI community has been building better tools that let us aggregate these use cases and make it easy for the end user to do the customization.
So specifically, I'm seeing a lot of exciting low-code and no-code tools that enable the user to customize the AI system. What this means is, instead of me needing to worry that much about pictures of pizza, we can build tools that enable the IT department of the pizza-making factory to train an AI system on their own pictures of pizza and realize this $5 million worth of value. And by the way, those pictures of pizza don't exist on the internet, so Google and Bing don't have access to them. We need tools that can be used by the pizza factory themselves to build, deploy, and maintain their own custom AI system that works on their own pictures of pizza.
And broadly, the technology for enabling this, some of it is prompting, text prompting, visual prompting, really large language models and tools like that, or a technology called data-centric AI, whereby instead of asking the pizza factory to write a lot of code, which is challenging, we can ask them to provide data, which turns out to be more feasible. I think this second trend is important because it is a key part of the recipe for taking the value of AI, which so far still feels very concentrated in the tech world and the consumer software internet world, and pushing it out to all industries, really to the rest of the economy, which, it's sometimes easy to forget, is much bigger than the tech world.
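As a rough, hypothetical illustration of the data-centric idea, not code from the talk: the training code stays fixed while the factory's own team supplies and improves the labeled data. The feature layout and labels below are made up for the sketch.

```python
# Minimal data-centric sketch: the training routine is fixed (provided by a tool vendor),
# and the pizza factory's IT team improves results by improving only their own data.
from sklearn.linear_model import LogisticRegression

def train_defect_model(features, labels):
    """Fixed training code -- the factory never edits this, only the data it is fed."""
    model = LogisticRegression()
    model.fit(features, labels)
    return model

# What the factory's team actually works on: their own labeled examples.
# Each row is a made-up feature vector for one pizza image: [cheese_coverage, topping_count].
features = [
    [0.95, 8],  # evenly spread cheese -> "ok"
    [0.40, 7],  # sparse cheese        -> "defect"
    [0.90, 9],  # evenly spread cheese -> "ok"
    [0.35, 6],  # sparse cheese        -> "defect"
]
labels = ["ok", "defect", "ok", "defect"]

model = train_defect_model(features, labels)
print(model.predict([[0.50, 7]]))  # likely ['defect'] for sparse cheese
```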
So, those are the two trends I shared: AI as a general purpose technology, with lots of concrete use cases to be realized, and low-code, no-code, easy-to-use tools enabling AI to be deployed in more industries.
How do we go after these opportunities? About five years ago, there was a puzzle I wanted to solve: I felt that many valuable AI projects were now possible, and I was thinking, how do we get them done? Having led AI teams at Google and Baidu, at big tech companies, I had a hard time figuring out how I could operate a team in a big tech company to go after a very diverse set of opportunities, everything from maritime shipping to education to financial services to healthcare and on and on. These are very diverse use cases, very diverse go-to-markets, and really very diverse customer bases and applications.
And I felt that the most efficient way to do this would be if we could start a lot of different companies to pursue these very diverse opportunities. So that's why I ended up starting AI Fund, which is a venture studio that builds startups to pursue a diverse set of AI opportunities. And of course, in addition to startups, incumbent companies also have a lot of opportunities to integrate AI into existing businesses. In fact, one pattern I'm seeing is that distribution is often one of the big advantages of incumbent companies, and if they play their cards right, it can allow them to integrate AI into their products quite efficiently.
But just to be concrete, where are the opportunities? So this is what I think of as the AI stack. At the bottom is the hardware, semiconductor layer. There are fantastic opportunities there, but it's very capital intensive and very concentrated: it takes a lot of resources, and there are relatively few winners. Some people can and should play there; I personally don't like to play there myself.
There's also the infrastructure layer, also fantastic opportunities, but very capital intensive and very concentrated, so I tend not to play there myself either. And then there's the developer tool layer. What I showed you just now was me actually using OpenAI's API as a developer tool. I think the developer tool sector is hyper-competitive, look at all the startups chasing OpenAI right now, but there will be some mega winners. So I sometimes play here, but primarily when I think there is a meaningful technology advantage, because I think that earns you the right, or earns you a better shot, at being one of the mega winners.
And then lastly, even though a lot of the media attention and buzz is on the infrastructure and developer tooling layers, it turns out that those layers can be successful only if the application layer is even more successful. We saw this with the rise of SaaS as well: a lot of the buzz and excitement was on the technology, the tooling layer, which is fine, nothing wrong with that. But the only way for that to be successful is for the application layer to be even more successful, so that, frankly, it can generate enough revenue to pay the infrastructure and tooling layers.
So actually, let me mention one example: Amorai. I was actually just texting the CEO yesterday. Amorai is a company we built that uses AI for romantic relationship coaching. And just to point out, I'm an AI guy, and I feel like I know nothing about romance. If you don't believe me, you can ask my wife; she will confirm that I know nothing about romance. But we wanted to build this, so we got together with the former CEO of Tinder, Renate Nyborg. With my team's expertise in AI and her expertise in relationships, yes, she ran Tinder, she knows more about relationships than anyone I know, we were able to build something pretty unique using AI for romantic relationship coaching.
And the interesting thing about applications like these is, when we look around, how many teams in the world are simultaneously expert in AI and in relationships? So at the application layer, I'm seeing a lot of exciting opportunities that seem to have a very large market but where the competitive set is very light relative to the magnitude of the opportunity. It's not that there are no competitors; it's just much less intense compared to the developer tool or infrastructure layers.
And so, because I've spent a lot of time iterating on a process of building startups, what I'm going to do is just very transparently tell you the recipe we've developed. After many years of iteration and improvement, this is how we now build startups. My teams always have access to a lot of different ideas, internally generated ideas and ideas from partners. And I want to walk through this with one example of something we did, which is a company called Bearing AI that uses AI to make ships more fuel efficient.
So this idea came to me a few years ago, when a large Japanese conglomerate called Mitsui, which is a major shareholder in, and sort of operates, major shipping lines, came to me and said, hey, Andrew, you should build a business that uses AI to make ships more fuel efficient. And the specific idea was, think of it as Google Maps for ships: we can suggest to a ship, or tell a ship, how to steer so that it still gets to its destination on time while using, it turns out, about 10% less fuel.
And so what we now do is spend about a month validating the idea. We double-check: is this idea even technically feasible? And we talk with prospective customers to make sure there's a market for it. We spend up to about a month doing that, and if it passes this stage, then we go and recruit a CEO to work with us on the project.
When I was starting out, I used to spend a long time working on the project myself before bringing on the CEO. But after iterating, we realized that bringing on the leader at the very beginning to work with us reduces a lot of the burden of having to transfer knowledge, or of having a CEO come in and revalidate what we'd discovered. So the process we've learned is much more efficient: just bring in the leader at the very start.
And so in the case of Bearing AI, we found a fantastic CEO, Dylan Keil, who's a repeat entrepreneur with one successful exit before. Then we spent three months, six two-week sprints, working with him to build a prototype as well as do deep customer validation. If it survives this stage, and we have about a two-thirds, 66%, survival rate, we then write the first check in, which gives the company resources to hire the executive team, build the key team, get the MVP, the minimum viable product, working, and get some real customers.
And then after that, hopefully the company successfully raises additional external rounds of funding and can keep on growing and scaling. So I'm really proud of the work that my team was able to do to support Mitsui's idea and Dylan Keil as CEO. Today, there are hundreds of ships on the high seas right now that are steering themselves differently because of Bearing AI. And 10% fuel savings translates to, rough order of magnitude, maybe $450,000 in fuel savings per year. And of course, it's also, frankly, quite a bit better for the environment.
And I think this company would not have existed if not for Dylan's fantastic work, and also Mitsui bringing this idea to me. I like this example because this is a startup idea that, just to point out, I would never have come up with myself, right? Because, you know, I've been on a boat, but what do I know about maritime shipping? It's the deep subject matter expertise and insight of Mitsui, together with Dylan, and then my team's expertise in AI, that made this possible.
And so as I operate in AI, one thing I've learned is that my swim lane is AI, and that's it, because I don't have time, and it would be very difficult for me, to become an expert in maritime shipping and romantic relationships and healthcare and financial services and on and on. So I've learned that if I can just help get accurate technical validation, and then use AI resources to make sure the AI tech is built quickly and well, and we've always managed to help the companies build a strong technical team, then partnering with subject matter experts often results in exciting new opportunities.
And I want to share with you one other weird lesson I've learned about building startups, which is that I like to engage only when there's a concrete idea. This runs counter to a lot of the advice you hear from the design thinking methodology, which often says don't rush to a solution, right? Explore a lot of alternative solutions. Honestly, we tried that. It was very slow.
But what we've learned is that at the ideation stage, if someone comes to me and says, hey, you should apply AI to financial services, then because I'm not a subject matter expert in financial services, it's very slow for me to go and learn enough about financial services to even figure out what to do. Eventually you could get a good outcome, but it's a very labor-intensive, very slow, very expensive process for me to try to learn industry after industry.
In contrast, one of my partners wrote this idea tongue-in-cheek, not really seriously. But let's say the concrete idea is BuyGPT, which eliminates commercials by automatically buying every product advertised, in exchange for not having to see the ads. It's not a good idea, but it is a concrete idea. And it turns out concrete ideas can be validated or falsified efficiently. They also give a team a clear direction to execute.
And I've learned that in today's world, especially with the excitement, the buzz, and the exposure to AI that a lot of people have, it turns out there are a lot of subject matter experts who have deeply thought about a problem for months, sometimes even one or two years, but who have not yet had a partner to build it with. And when we get together with them and they share the idea with us, it allows us to work with them to very quickly go into validation and building.
And I find that this works because there are a lot of people that have already done the design thinking of exploring a lot of ideas and whittling down to the really good ones. And I find that there are so many good ideas sitting out there that no one is working on, that finding those good ideas someone has already had, where they want to share them with us and build together with us, turns out to be a much more efficient engine.
So before I wrap up, and we'll go to questions in a second, just a few slides to talk about risks and social impact. AI is a very powerful technology, and to state something you can probably guess: my teams and I only work on projects that move humanity forward. We have multiple times killed projects that we assessed to be financially sound, on ethical grounds. It turns out I've been surprised, sometimes dismayed, at the creativity of people in coming up with good ideas, sorry, in coming up with really bad ideas that seem profitable but really should not be built; we've killed a few projects on those grounds.
And then I think we have to acknowledge that AI today does have problems with bias, fairness, and accuracy, but the technology is improving quickly. I see that AI systems today are less biased and more fair than they were six months ago, which is not to dismiss the importance of these problems. They are problems we should continue to work on, but I'm also gratified at the number of AI teams working hard on these issues to make things much better.
When I think of the biggest risks of AI, I think one of the biggest is the disruption to jobs. This is a diagram from a paper by our friends at the University of Pennsylvania and some folks at OpenAI analyzing the exposure of different jobs to AI automation.
And it turns out that whereas in the previous wave of automation the most exposed jobs were often the lower-wage jobs, such as when we put robots into factories, in this current wave of automation it's actually the higher-wage jobs, toward the right of this axis, that seem to have more of their tasks exposed to AI automation.
So even as we create tremendous value using AI, I feel that as citizens, corporations, governments, and really as a society, we have a strong obligation to make sure that people, especially people whose livelihoods are disrupted, are still well taken care of, are still treated well.
And then lastly, it feels like every time there's a big wave of progress in AI, there's a big wave of hype about artificial general intelligence as well. When deep learning started to work really well 10 years ago, there was a lot of hype about AGI. Now that generative AI is working really well, there's another wave of hype about AGI.
But I think that artificial general intelligence, AI that can do anything a human can do, is still decades away. Maybe 30 to 50 years, maybe even longer. I hope we'll see it in our lifetimes, but I don't think it's coming anytime soon. One of the challenges is that the biological path to intelligence, meaning humans, and the digital path to intelligence, meaning AI, have taken very different paths. And the funny thing about the definition of AGI is that you're benchmarking this very different digital path to intelligence against the biological path to intelligence.
So I think large language models are smarter than any of us in certain key dimensions, but much dumber than any of us in other dimensions. And so asking them to do everything a human can do is a funny comparison. But I hope we'll get there, hopefully within our lifetimes.
And then there's also a lot of, I think, overblown hype about AI creating extinction risk for humanity. Candidly, I don't see it. I just don't see how AI creates any meaningful extinction risk for humanity. People worry that we can't control AI, that AI will be more powerful than any person. But we have lots of experience steering very powerful entities, such as corporations or nation-states, that are far more powerful than any single person, and making sure they, for the most part, benefit humanity. Also, technology develops gradually. The so-called hard takeoff scenario, where AI isn't really working today and then suddenly one day, overnight, it works brilliantly, we achieve superintelligence, and it takes over the world, that's just not realistic. AI technology will develop gradually, like other technologies, and that gives us plenty of time to make sure we provide oversight and can manage it to be safe.
And lastly, if you look at the real extinction risks to humanity, such as, fingers crossed, the next pandemic, or climate change leading to massive depopulation of parts of the planet, or, at much lower odds, maybe someday an asteroid doing to us what it did to the dinosaurs, if we look at the actual real extinction risks to humanity, having more intelligence in the world, even artificial intelligence, would be a key part of the solution.
So I feel like if you want humanity to survive and thrive for the next thousand years rather than slowing AI down, which some people propose, I would rather make AI go as fast as possible.
So with that, just to summarize, and this is my last slide, I think that AI as a general purpose technology creates a lot of new opportunities for everyone. A lot of the exciting and important work that lies ahead of us all is to go and build those concrete use cases, and hopefully in the future I'll have opportunities to engage with more of you on those opportunities as well.