This was our first ever experience talking to this God-like-feeling AI that was all of a sudden doing these tasks that would take me, when I practiced, a whole day, and it's being done in a minute and a half. The whole company, all 120 of us, did not sleep for those months before GPT-4. We felt like we had this amazing opportunity to run far ahead of the market. That's why you're the first man on the moon. Yeah. Welcome back to another episode of The Lightcone. I'm Gary. This is Jared and Diana. Harj is out, but he'll be back on the next one. And today we have a very special guest, Jake Heller of Casetext. I think of Jake as a little bit like one of the first people on the surface of the moon. He created Casetext more than, I think, 11 or 12 years ago, actually. And in the first 10 years, you went from zero to a hundred-million-dollar valuation. And then, in a matter of two months after the release of GPT-4, that valuation went to a liquid exit to Thomson Reuters for $650 million. So you have a lot of lessons about how to create real value from large language models. I think you were, of our friends in YC, one of the first people to actually realize this is a sea change of a revolution, and not only that, to bet the company on it. And you were super right. So welcome, Jake. Happy to be here.
One of the cool things, I think, about Jake's story, and the reason why we wanted to bring him on today, is that if you just look at the companies that good founders are starting now, it's a lot of vertical AI agents. I mean, I was trying to count the ones in S24. We have literally dozens of YC companies in the last batch building vertical-specific AI agents. And I think Jake is the founder who is currently running the most successful vertical AI agent. It's by far the largest acquisition, and it's actually deployed at scale in a lot of mission-critical situations. And the inspiration for this was, we hosted this retreat a few months ago, and Jake gave an incredible talk about how he built it. And we thought that it would be super useful for people who watch The Lightcone who are interested in this area to hear directly from one of the most successful builders in this area how he did it.
So how did you do it? Well, first of all, like a lot of these things, there's a certain amount of luck. Over the course of our decade-long journey, we started investing very deeply in AI and natural language processing, and we became close with a number of different research labs, including some of the folks at OpenAI. And when it came time for them to start testing early versions, we didn't realize it was GPT-4 at the time, we got a very early kind of view of it. And so, you know, months before the public release of GPT-4, we as a company were all under NDA, all working on this thing. And I'll never forget the first time I saw it. It took maybe 48 hours for us to decide to take every single person at the company and shift what they were working on, from the projects we were working on at the time, to 100 percent of the company all working on building this new product we called CoCounsel, based on the GPT-4 technology. How many people was that? We were about 120 people at the time. So you took 120 people and completely changed what they were all working on? Yes, yes, yes. In 48 hours. Yes. And for the people watching: Casetext originally, I mean, had always been in the legal space. You're a lawyer and you built something for yourself. And, you know, the first versions of it were actually annotated versions of case law.
Yeah, that's exactly right. So from the very early origins of the company, the mission of the company, what we were always focused on, is how can we bring the best of technology to the legal space. As a lawyer, I actually liked the job a lot. The parts of my job that I hated the most were when I had to interact with the technology that lawyers have to use regularly to get the job done. I remember thinking, and this is like 2012, when I was at a law firm: if I wanted to do something really trivial, I had a new iPhone at the time, I could go on Google and find movie times or the closest open Thai restaurant with vegetarian options. That was super easy. But if I wanted to find the piece of evidence that was going to exonerate my client and make it so he doesn't have to go to jail for the rest of his life, or the key legal case that'll help me win a billion-dollar lawsuit? Well, that's going to be like five days in the office until 5 a.m. every day. It's like, there's got to be a better way. What is the process as a lawyer? You would have to read the stacks and stacks of documents? Pretty much, yeah. Right before I started practicing, before everything went virtual or online, you would literally be in a basement with bankers boxes full of documents, reading them one by one by one, to try to find all the emails in a company like Pfizer or Google to see if there was potential fraud.
And then if you wanted to find case law, slightly before my time, you'd literally go to the library and open up books and just start reading. And new products were coming out that were some of the first web-based research tools, but they were pretty clunky. It was just hard to find the relevant information. You couldn't do Ctrl-F or anything on this stuff? Basically not, yeah. And what was interesting about your background is you also happen to be the rare breed who also had computer science training. So this must have driven you nuts. Yeah, exactly. I mean, at the law firm, I'll never forget, I was building browser plugins to go on top of the tools I was using, just to make my life more efficient and effective. And actually, one of the reasons I left the law firm to start a company and apply to YC was that I got in trouble with the general counsel, who thought, like, hey, why are you spending all your time doing this tech stuff? And he also made it very clear at the time that my law firm owned all that technology. So I decided to do something different.
So do you want to tell us a little bit about the first 10 years of Casetext, the sort of long slog in the pre-LLM era? One of the lessons, I think, that I took away from that time period is that when you start a company, you may not get the solution exactly right. You may have the right general direction, you know there's a problem, you're trying to solve it, but it could take a very long time to figure out what the solution is. For us, for example, we saw that there was this kind of combined issue of bad technology in the legal sphere, but also, a lot of lawyers rely on content to do things like research and understand what the law is. And so we thought, OK, well, we can do the technology better, but how are we going to get this content? And we spent a couple of years trying to get, as Gary said, lawyers to annotate case law and to provide information.
So it's like a UGC site, like user-generated. Yeah, that was a big focus of ours, the kind of one-two punch of better technology but also better content. You know, at the time, our heroes were Stack Overflow and Wikipedia and GitHub and other kind of open-source or UGC websites. And it was a total failure. We could not get lawyers to contribute their time and information. And I think these are just different populations. The typical Wikipedia editor has more time on their hands than they know what to do with, and so, not all, but many of them add content for free and altruistically. Lawyers bill by the hour. Their time is incredibly valuable. They're always running out of time. They had no time to contribute to a UGC site.
So we had to pivot, and we started investing very deeply in, at the time it was not called AI, it was just natural language processing and machine learning. And we saw, first of all, that we didn't need to create all this UGC to replicate some of the best benefits of what our competitors had in these big content databases. Some of it you could basically do, even then, on an automated basis. And then also, we were starting to create user experiences that were a lot better than what our competitors could offer, based on what now seems kind of quaint AI stuff, like, you know, the same recommendation algorithm that powers Pandora's and Spotify's recommended music. You basically look at: this song relates to that song, people who listen to this also listen to this and this. Right.
Similarly, we looked at, OK, cases cite other cases, they all reference earlier opinions, so they kind of build out this network of citations, and we found ways that we could check a lawyer's work. They'd upload their work so far, and we'd be like, well, everybody who cites these cases also talks about this case, and you missed it. So, cool experiences like that. But the truth is, until the very end, until CoCounsel, a lot of what we did were, relatively speaking, kind of incremental improvements on the legal workflow. And one of the things that's kind of weird about this is, when there's just an incremental improvement, it's actually pretty easy to ignore.
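A minimal sketch of that co-citation check, assuming a precomputed citation graph; the graph data, function names, and threshold here are illustrative, not Casetext's actual system:

```python
# Sketch of the "everybody who cites these cases also cites that one" check.
# CITATION_GRAPH maps a case ID to the set of cases it cites (hypothetical data).
from collections import Counter

CITATION_GRAPH: dict[str, set[str]] = {
    "opinion_a": {"landmark_1", "landmark_2"},
    "opinion_b": {"landmark_1", "landmark_3"},
    "opinion_c": {"landmark_1", "landmark_3"},
    # ...millions of entries in a real system
}

def suggest_missing_cases(brief_citations: set[str], min_cocites: int = 2) -> list[str]:
    """Cases frequently cited alongside the brief's citations but absent from it."""
    cocited: Counter[str] = Counter()
    for cited in CITATION_GRAPH.values():
        # Each opinion that shares a citation with the brief "votes" for
        # the other cases it cites.
        if cited & brief_citations:
            cocited.update(cited - brief_citations)
    return [case for case, n in cocited.most_common() if n >= min_cocites]

# A brief citing only landmark_1 gets landmark_3 suggested back:
# "everyone who cites this also cites that, and you missed it."
print(suggest_missing_cases({"landmark_1"}))
```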
A lot of our clients, they would never say this literally, but you kind of got this impression: you walk into their office, and you try to pitch them a product and say, this is going to change everything about the way you practice. And they go, well, I make five million dollars a year. I don't want anything to change. I do not want to introduce anything that has the opportunity to make my life at all worse, or even potentially more efficient, because they bill by the hour. It was really only much later, when ChatGPT came out, you know, at the time we were privately and secretly working on GPT-4, ChatGPT came out, and all of a sudden every lawyer in America, probably in the world, saw: oh my God, I don't know exactly how this is going to change my work, but it's going to change it very substantially. Like, they could feel it.
And the same guys and gals who were telling us, I make five million dollars a year, why would I change anything about my life? They're now like, I make five million dollars a year. This is going to change something. I need to be ahead of this. The technology itself, and we'll get into this in a second, really changed what we could build for lawyers, but the market's perception of what was necessary really changed as well. And for the first time in our 10 years, even before we launched CoCounsel publicly based on GPT-4, they were calling us, like, you know, we know you work on AI. We need to get on top of this. What can you show us? What can we work on? And I think it's because the change wasn't incremental anymore. It was fundamental. And all of a sudden they had to pay attention. They could not ignore it.
I guess the mental model I have for you is, there's this concept of the idea maze. The founder goes in at the beginning of the maze and they're just feeling around, actually in the arena talking to customers, learning where the walls are, which path to take: should I go left or right? And then, as is actually common for startup founders in the idea maze, you will reach a dead end, and then usually you have to pivot. Yeah. And then I think you have a very interesting story, because you were sort of towards the end of maybe one of the paths that weren't going to get you all the way to product-market fit, but then LLMs dropped, and it's like the maze got shaken up. Yeah. And then you're actually much closer to product-market fit than absolutely anyone else. And so that's when it became a crazy time. Yeah, that's exactly right. That's why you're the first man on the moon. Yeah.
Yeah. I think there's something to that. And the thing is, you know, each time we progressed through that maze, it felt like maybe now we have product-market fit. You know, we were making real revenue before we launched CoCounsel, and we had real customers, and they said really great things about us. I keep thinking about this article written by Marc Andreessen in like the early 2000s. I think it's called "The Only Thing That Matters." And in it, he describes what it feels like to have product-market fit. He lists things like: your servers will go down, you can't hire support people and salespeople fast enough, you're going to eat for a year for free at Buck's, the kind of famous Woodside diner where a lot of VCs will take you during the process. And I read that really early on in my career, and I was like, OK, well, that's hyperbolic. But when we launched CoCounsel, it was literally exactly that. Our servers were going down. We could not hire support people fast enough. We could not hire salespeople fast enough. I ate a lot at Buck's. You know, before, it was a really big day if we were in the ABA Journal or some other legal-specific publication; now we were on CNN and MSNBC. Like, all of a sudden, everything changed. And that's what real product-market fit looks like. I think Marc Andreessen, even in like 2005 or whenever the article came out, was exactly right about 2023. Can you talk about that crazy time? Because it was only two months from when you launched CoCounsel to getting bought for $650 million. So what happened in those two months? Well, to be clear, the transaction only closed six months after we launched, but it was two months in when the conversations started. And so we started building CoCounsel. And just for background purposes, the idea we came up with, again, within like 48 hours, like a weekend after seeing GPT-4, is something that doesn't sound crazy today, but it felt crazy at the time, which is this AI legal assistant, by which we mean it's almost like a new member of the firm. You can just talk to it, not unlike how you might talk to something like ChatGPT today, and give it tasks like: I need you to read these million documents for me and tell me if there's any evidence of fraud happening in this company. And then within a couple of hours, it's like, I've read all the documents, here's the summary. Or summarize documents, or do legal research and put together a whole memo, after researching hundreds of thousands of cases, answering the lawyer's initial research question. And so in that sense, it was this really powerful extension of the workforce of these law firms. That was the concept from the beginning. And we made a very early initial version of it. And because, under our agreement with OpenAI, we could not be public about this product, but they did let us extend the NDA to a handful of our customers, we started having our customers use it. And so for months before GPT-4 was launched publicly, we had a number of law firms using it, and they had no idea they were using GPT-4, but they were like, this is something really special, right? This is actually even before ChatGPT. So this was their first ever experience talking to this God-like-feeling AI that was all of a sudden doing these tasks that would take me, when I practiced, a whole day, and it's being done in a minute and a half. Right. And so, as you might imagine, it was nuts.
I mean, first of all, the whole company, all 120 of us, did not sleep for those months before the public launch of GPT-4 and the public launch of the product. We felt like we had this amazing opportunity to run far ahead of the market. Something really beautiful happens when everybody's working super, super hard, which is you iterate past problems so quickly.
And actually, I still see some companies out there that are stuck where we were in the first month of seeing GPT-4, right? And I think it's because they're just not as intensely focused and engaged as we were able to be during those, like, six months or so before the public launch of GPT-4. To do this transition, you kind of had to shake the company. You kind of went into deep founder mode, because there was a lot of pushback from employees, who were like, oh, this thing was working, why should we throw ourselves into the deep end of AI? Tell us about that founder-mode moment for you.
So first of all, this is especially true when you've been running a business for 10 years, because people have seen you wander through that maze and bump into dead ends. And a lot of those folks had been there for most or all of that time, watching, you know, me as the founder saying, we're definitely going this direction, it's definitely going to work, and sometimes it doesn't. And you only get so many of those with employees, right? So this was maybe my last one with some of these folks. And they're like, here Jake goes again with some crazy new technology and some idea we're going to invest deeply in. And yeah, it took some work to convince people.
And imagine what some of the different roles are like. If you're in a go-to-market role, if you're selling or marketing the product, and we're growing 70, 80 percent year over year, we're between 15 and 20 million dollars in ARR, things weren't terrible, right? That's great. Yeah, we're doing great. So they're like, why are we even doing this? And the board, you know, some of the members were like, I get this immediately, and some of them had to be persuaded, right? And about the founder-mode moment: one thing that really worked for me is I led the way by example. I built the first version of it myself. Wow. Even with a 120-person company, with a whole bunch of engineers and lawyers and stuff. Before that, you, like, opened up your IDE and actually built the thing yourself? Oh, yeah.
And part of it was the NDA: it only extended at first to me and my co-founder. That was it. That was a blessing, actually. It turned out to be, like, perfect. And even after it got extended a little bit, we kept it pretty small at first, for the first little bit of time. I made up my mind within 48 hours: the whole company needs to do this. But we actually only told the company, I think, a week and a half after we first got access. During that week and a half, we built the very first version, like a prototype version of this. And again, I'll never forget this, the timing is just so funny. We saw it on a Friday, we had it all weekend long, we were working with it. And then Monday was an executive offsite where everybody came. All my executives came, and they expected, we're going to be talking about how we're going to hit our sales target for the next quarter. However, we talked about none of that: you know, we are talking about something totally different right now, let me show you something on my laptop. You know, so yeah, I built the first version myself.
But going through that process, me and then a handful of other people, I think, was really helpful. And we also brought in customers early, and that helped convince a lot of people. As soon as a skeptical sales or marketing or whatever person, or even engineer, was on the other end of a Zoom call where a customer was reacting to the product in real time and giving us their honest reactions, they could see the look on their face. And you have to remember, it's almost hard to imagine, that the world was pre-ChatGPT. Some of these customers were seeing that exact idea for the first time, and they were just blown away. And that really changed minds quickly. I mean, we saw people go through existential crises live, you know, on Zoom calls. We could see their expressions change.
Exactly. In all kinds of ways. It's like, what am I going to do? A very common reaction amongst the senior attorneys we showed it to was like, well, I guess I've got to retire soon, you know, so I don't have to deal with this. And some of this was really driven by GPT-4 coming out. Like, you had access to 3. You had access even to 2, I think. Yeah. We were in a close relationship with a lot of the labs, including OpenAI. They kept showing us stuff kind of early on in its development, and they're like, well, can you build something with this for legal? And every time we're like, no, this sucks. Like, you know, by the time you got to 3 and 3.5, it was like, OK, well, this is plausible-sounding English, and it sounds kind of like a lawyer, so kudos to that. But it's just making stuff up wildly.
It was just very hard to connect it to a real use case, especially in legal, where it's so important that you actually get the facts right: you can't hallucinate, you can't even make the wrong kinds of assumptions. And we had to do a lot of work with those earlier models to even get them close to usable, and they just weren't. One totem, or one example along the way: when GPT-3.5 came out, a study was run, and it showed that GPT-3.5 scored in the 10th percentile on the bar exam. So it did better than some people, actually, but only 10 percent of them. Yeah, probably the ones just filling it out randomly, basically.
When we got early access to GPT-4, we're like, let's run the study again. And we worked with OpenAI to confirm this test was not in the training set and the model wasn't trained on it. And on the test we ran, it did better than 90 percent of test takers, right? So this is a big difference. And also we started running some tests like, OK, here are four or five cases to read; using those cases, write a memo responding to this question. And we did a lot of prompt work to get it to do it accurately, to cite the actual things in the context that we gave it and not make things up. And we're like, OK, well, this is very different from what we saw before.
So that was a big moment for us. And honestly, I'm not sure what the mindset was of the researchers we were working with, but it almost felt like, by the time we were having that meeting, it felt like one of those other meetings we'd had in the past, where we were getting ready to say, this is not going to work for legal. And I think they saw us go through maybe some form of the existential crisis on that call that our customers did. You're like, oh wait, this is super, super, super different. I guess, you know, today we have o1, we have chain-of-thought reasoning. I think a lot of people look at it as: it's not merely the text itself, but also the instructions that lead up to the workflow.
But, you know, way back at the beginning, nobody knew any of this stuff. How did you start? You had your sort of tests that you had written for previous versions of the model, and it outperformed them. But then there's this moment where you say, OK, well, now it's something, but what do we do next, and how do we do it? So the process that we started with then, and it's actually not too dissimilar to what we're doing today, started with a question of, OK, well, what problem are we trying to solve for the user? Right. The user wants to do research, legal research, and they want, like, a memo answering their question with citations to the original sources. So that's the end result. And then we're like, OK, well, how do we go from that end result, working backwards almost, what would it take to get there? And what ends up happening a lot with the things that we built for CoCounsel, we called them skills, which felt very unique at the time. I think a lot of companies now call their AI capabilities skills.
So when you're building these skills, it turns out it usually takes a lot of work to go from, say, the customer inputting something, say a set of documents or a question or what have you, to the end result that they're looking for. And the way that we thought about it was: how would the best attorney in the world approach this problem? And so in the case of research, for example, the best attorney would get the request, say, from a partner, and then break that request down into actual search queries to run against these platforms. And sometimes they use special search syntax. It actually looks almost like SQL, right? So from the English-language query, you have to break it down into these different search queries, maybe a dozen different search queries if you were being really diligent. And then they'd execute the search queries against these databases of law, and they'd come back with, say, a hundred results each.
And then, you know, the most diligent, best attorney would sit down and just read every single one of those results that came back: all the case law, statutes, regulations. And you start to do things like make notes and summarize and compile an outline of what your response might be. Like line by line, paragraph by paragraph, actually. Yeah, 100 percent. And you start pulling out those insights you're getting from what you're reading, and then finally, based on all of that work and all those citations you've gathered, et cetera, you put together your, you know, research memo. And so we're like, OK, well, each one of those steps along the way, for the vast majority of them, those were impossible to accomplish with previous technology, but now they're prompts.
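As a rough illustration of that working-backwards decomposition, here is a minimal sketch of a research "skill" as a chain of small prompts; `call_llm` and `search_case_law` are hypothetical stand-ins for a model API and a legal search backend, and the prompts are heavily abbreviated:

```python
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for your model API

def search_case_law(query: str) -> list[dict]:
    raise NotImplementedError  # stand-in for a legal search backend

def research_memo(question: str) -> str:
    # Step 1: break the English question into precise boolean search queries.
    queries = json.loads(call_llm(
        "Rewrite this legal research question as a JSON list of up to 12 "
        f"boolean search queries:\n{question}"
    ))
    # Step 2: run every query and pool the results.
    results = [r for q in queries for r in search_case_law(q)]
    # Step 3: read each result individually; keep only relevant notes.
    notes = []
    for r in results:
        note = call_llm(
            f"Question: {question}\nAuthority: {r['text']}\n"
            "Summarize only what bears on the question, with pin cites. "
            "If nothing does, answer IRRELEVANT."
        )
        if note.strip() != "IRRELEVANT":
            notes.append(note)
    # Step 4: compile the notes into the memo, citing only gathered material.
    return call_llm(
        f"Question: {question}\n\nNotes:\n" + "\n\n".join(notes) +
        "\n\nWrite a research memo answering the question. "
        "Cite only the authorities in the notes above."
    )
```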
Think step by step. Yeah, think step by step, exactly. But we actually broke it down: getting to the final result may take a dozen or two dozen different individual prompts, each of which might, by the way, be thinking step by step itself. And then for each of those prompts, you know, as part of this chain of actions you take to get to the final result, we get a very clear sense of what good looks like. We had a battery of tests before, but this got way more intense, where we'd write at first maybe a few dozen tests, and then a few hundred, and then a few thousand, for every single one of those prompts.
So, you know, if the job to be done at the very beginning of this research process, for example, is taking the English-language query and breaking it down into search queries, we had a very clear sense of what good search queries look like, and we wrote gold-standard answers: given this input, this is what the output looks like, right? And so our prompt engineers, and I was one of them at the very beginning, we were all just kind of in it together, we'd write the tests first, basically, and then write these English-language prompts to try to get it so that, out of 1,200 runs, it got the right answer 1,199 times, or what have you.
So sort of like taking the test-driven development approach from software engineering to prompting. That's exactly right. And the funny thing is, I never really believed in test-driven development before prompting. I was like, oh, the code works or it doesn't, it's fine, you'll see. But with prompting, I actually think it becomes even more important, because of the nature of these LLMs: they might go in crazy directions unexpectedly. And so, you know, you might very easily add an additional instruction to solve one problem you're seeing with one set of tests, and then break something in another set of tests. And so that exact theory of test-driven development applies, you know, 10x more, I'd say, in the world of prompting.
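A minimal sketch of that test-driven prompting loop, assuming gold-standard answers written by domain experts; the grading here is naive exact-match, where a real harness might use rubrics or a model-graded comparison:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GoldCase:
    input_text: str
    expected: str  # gold-standard answer written by a domain expert

def grade(output: str, expected: str) -> bool:
    # Naive exact-match grader; swap in a rubric or LLM judge as needed.
    return output.strip().lower() == expected.strip().lower()

def run_eval(prompt_template: str, cases: list[GoldCase],
             llm: Callable[[str], str]) -> float:
    """Pass rate of one prompt over its battery of gold-standard tests."""
    passed = sum(
        grade(llm(prompt_template.format(input=case.input_text)), case.expected)
        for case in cases
    )
    return passed / len(cases)

# Re-run after every prompt edit, so fixing one failure can't silently
# break another test: assert run_eval(new_prompt, cases, llm) >= 0.999
```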
There are a lot of naysayers saying that a lot of companies are just building GPT wrappers and there's not a lot of IP getting built. But actually, there's a lot of finesse to how you do all of this. Tell us about all of that and how much more there is to be built. Oh, yeah. I mean, I think the thing is, we're actually trying to solve a problem for a customer, and actually doing the job, in our case, of what a young associate might do, and doing it really well. There are many layers of things you have to add in to actually get the job done. And by the time you add that all up, you're not a GPT wrapper; you're a full application that may include, in our case, proprietary data sets, like the law itself and our annotations to the law that we added automatically. It may include connections into customer databases.
In our case, in legal, they have these very specific legal document management systems, you know, so connecting into those is very important. It may include something as subtle as how well you OCR: what OCR programs you use and how you set those up. One of the tasks that CoCounsel does, for example, is reviewing large sets of documents. Once you start working with a lot of documents, you see stuff with handwriting all over it, pages tilted in the scan. And there's this crazy thing they do in law where they print four pages on one page to save room, and all of the OCRs read it straight across, but it actually goes, you know, one, two, three, four.
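For instance, a minimal sketch of handling that 4-up printing edge case before OCR, using Pillow; the left-to-right, top-to-bottom quadrant order is an assumption about the scan layout:

```python
from PIL import Image

def split_four_up(sheet: Image.Image) -> list[Image.Image]:
    """Split a scanned 4-up sheet into its four pages in reading order."""
    w, h = sheet.size
    return [
        sheet.crop((0, 0, w // 2, h // 2)),      # page 1: top-left
        sheet.crop((w // 2, 0, w, h // 2)),      # page 2: top-right
        sheet.crop((0, h // 2, w // 2, h)),      # page 3: bottom-left
        sheet.crop((w // 2, h // 2, w, h)),      # page 4: bottom-right
    ]

# Each quadrant is then deskewed and OCR'd as its own page, so the text
# isn't read straight across two side-by-side pages at once.
```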
So by the time you've dealt with all the edge cases, frankly, even before you hit the large language model, like everything else up to the large language model, there might be dozens of things you've built into your application to actually make it work, and work well. And then you get to the prompting piece: writing out tests and very specific prompts, the strategy for how you break down a big problem into step-by-step thinking, how you feed in the information, how you format the information in the right way. All of that also becomes, you know, your IP, and it's very hard to build and therefore very hard to replicate.
It's all the business logic, which is how even all the very successful SaaS companies in very specific domains work. You need very, very custom, esoteric, niche integrations, like plugging into this esoteric law database. Yeah, absolutely. Two things that I think about all the time. One is that basically all SaaS for a while was just a SQL wrapper, right? If you think about various companies like Salesforce, they've built their business logic around basically just databases and connections between tables in a database, bridging that gap between something that a very technical person can do but most people can't, and making it accessible. The other is bridging the gap from something that almost works: you can do a lot of cool demos in ChatGPT without writing a line of code, and that almost works, works, you know, 70 percent of the time. But going to 100 percent of the time is a very different kind of task, and people pay $20 a month for the 70 percent and maybe $500 or $1,000 a month for something that actually works, depending on the use case, right? So there's a lot of value gained going that last mile, or last 100 miles, whatever it is.
Yeah. Can you talk about how you went from 70 percent to 100 percent? Because I think the other knock on this technology that we hear a lot is, oh, these LLMs hallucinate too much, they're not accurate enough for real-world use. But as you said earlier, the use case that you're working on is a mission-critical use case. There's a lot at stake if the agent gives bad information to lawyers who are working on important court cases. How did you make it accurate enough for lawyers, who are conservative by nature, to trust it? This test-driven development framework, first of all, goes a long way, because you can start seeing patterns in why it's making a mistake, and then you add instructions against that pattern. And then sometimes it still doesn't do the right thing, and then you really ask yourself: OK, well, was I being super clear in my instructions? Am I including information it shouldn't see, or too much or too little information for it to really get the full context?
And usually these things are pretty intelligent, so usually you can kind of root-cause why you're failing certain tests, and then build to a place where you're actually passing those tests and just getting it right. And one of the things we learned is, after it passes, frankly, like a hundred tests, the odds that it will handle any random distribution of user inputs 100 percent accurately are very high.
One of the things that strikes me as tricky: many founders we work with are very tempted to just raw-dog it. Yeah. Like, no evals, no test-driven development, just vibes-only prompt engineering. And, I mean, you switched over to this very quickly. Was it just obvious from the beginning? You're like, we just can't do it that other way, we should not raw-dog any of these prompts.
Yeah, I think the biggest thing, first of all, is it depends on the use case. For a lot of the things we were working on, for better or for worse, there was a right answer. And if you get the wrong answer, lawyers are not going to be happy about it. You know, I had been a lawyer myself, but I'd also been selling to lawyers for a decade. Every time we made the smallest mistake in anything we did, we heard about it immediately, right? And so I had that voice in my head, maybe, as I was going through this process. And that was the learning from the 10 years of slogging through, pre-LLMs: you're like, no, it has to be a hundred percent.
Oh, yeah. Oh, yeah. That's probably true of way more domains than we realize, actually. It could be, because the thing that we thought about a lot is, you can lose faith in these things really quickly, right? You have one bad experience, especially if your first experience is bad, and you're like, you know, maybe I'll check in on this AI stuff a year from now, especially if you're a busy lawyer, not a technologist.
So we knew we had to make that first encounter, that first week, really, really work for the lawyer, or else they're not going to invest in it deeply. So let's talk a bit about OpenAI o1, because it is a very different model. I mean, up to this point, with GPT-4 and all that previous generation, the analogy in terms of intelligence is sort of System 1 thinking, in the Daniel Kahneman sense, right? Yes, his whole economic theory, he won the Nobel Prize around this. System 1 thinking is just very fast: the kind of decisions that humans make very intuitively, based on patterns, and LLMs are fantastic at that. But they're terrible at executive function, because what I'm hearing in all the stuff you're describing is, you're giving the LLM executive function: how do you think about it, how do I manage it, really that slower thinking. And I think o1 is exciting. We haven't seen things built with it yet, because it just got announced a few days ago, right?
I think it's getting to that System 2 thinking. And I think this has been a big area of research, which I saw a lot at NeurIPS a year ago, where a lot of the researchers were excited to unlock this, because this is the missing piece on the way to AGI. So what are your thoughts on o1 and how this changes things? First of all, I think o1 is a very impressive model. Like with other things, we gave it the kinds of tests that we knew models were failing, and, it's not just math, the degree of thoroughness, precision, and intelligence applied to some of these questions is striking. And sometimes it's the stuff where you wouldn't expect you'd need a super smart model.
Like, in one of the tests that we run, we give it a lawyer's real legal brief, but we've edited, very slightly, some of that lawyer's quotations of a case, to make it a wrong quotation or a wrong kind of summarization of the case. It's like a 40-page legal brief, and you alter things where just adding a word like "not" can change the meaning of something entirely, right? And then we give the full text of the case as well to the AI. And we say, well, what, you know, what did the lawyer get wrong about this case, if anything? And literally every LLM before would be like: nothing, it's perfectly right. It's just not a precise enough thinker to catch some of the very nuanced things we altered about the brief to make it slightly wrong.
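A minimal sketch of how such a test case might be built; the one-word perturbation and prompt wording are illustrative, not Casetext's exact eval:

```python
def make_quote_check_case(brief: str, quotation: str) -> str:
    """Flip the meaning of one quotation in an otherwise-real brief."""
    altered = quotation.replace(" is ", " is not ", 1)  # subtle meaning flip
    return brief.replace(quotation, altered)

def quote_check_prompt(altered_brief: str, case_text: str) -> str:
    return (
        "Below is a legal brief and the full text of a case it cites.\n\n"
        f"BRIEF:\n{altered_brief}\n\nCASE:\n{case_text}\n\n"
        "What, if anything, does the brief misquote or mischaracterize "
        "about this case? Quote the exact discrepancy."
    )

# Pass criterion: the model's answer points at the altered quotation;
# "nothing is wrong" is a failure.
```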
And o1 got it, like, immediately. Like you said, it actually thinks for a while: it sits there for a minute, and you wonder what it's thinking on, you know. But then it starts answering, and it's like, oh, well, you know, you changed an "and" to a "neither nor." So those are the kinds of tests that you'd kind of expect even, frankly, earlier LLMs to be able to pass, but they just could not. And all of a sudden, o1 is even doing these things that take precise, detailed thinking. Obviously, we don't have the internals on how o1 really works. We have, you know, this broad idea of chain of thought.
Seemingly, we know that if OpenAI had a giant corpus of internal monologue of people thinking through doing things step by step, o1 would be even a lot better. It sort of rhymes with the thing you did to put your first step on the moon, right? Like, yeah, it rhymes with breaking it down into, you know, chunks where you can get to a hundred percent accuracy, instead of just throwing it all in the context window and, you know, maybe magically it will work. Yeah. Do you think that's what's happening? There's a chance that they've, you know, maybe changed what their contractors are doing: instead of just doing, you know, input in, answer out, they're doing input in, how would I think about solving this problem, and then answer out. But then, you know, the interesting thing is, it's kind of limited by the intelligence of the people writing those instructions.
And one of the things that we're investigating, for what it's worth, with o1 is: can we prompt it to tell it what to think about during its thinking process, and inject, again, we've hired some of the best lawyers in the country, how the best lawyers in the country would think about solving this problem? And maybe, you know, we have no conclusive evidence one way or the other yet that this dramatically improves things. It's so early, and just not enough time has passed. But there's a chance that one of the new prompting techniques with o1 is teaching it not just how to answer the question and what the answer can look like, but how to think.
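A minimal sketch of that idea, prepending expert thinking guidance rather than just output formatting; the guide's content is a hypothetical example, and, as noted above, whether this reliably helps is still an open question:

```python
# Describe *how* a top lawyer would reason, not just what the answer
# should look like, and prepend it to the task for the reasoning model.
EXPERT_THINKING_GUIDE = """\
Before answering, reason the way a senior litigator would:
1. Identify the precise legal question and the governing jurisdiction.
2. Separate binding authority from merely persuasive authority.
3. Check every quotation against the source text word by word.
4. Weigh the strongest counterargument before committing to a conclusion.
"""

def build_o1_prompt(task: str) -> str:
    return f"{EXPERT_THINKING_GUIDE}\nTask: {task}"
```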
And I think that's another really interesting opportunity here: injecting domain expertise, or just your own intelligence. I'm just so thankful, because I think you're sort of sharing the breadcrumbs, and there are a great many other spaces where this technology is just beginning. I mean, you go to pretty much any company: people have no concept of what's just happened. Yeah, they actually literally still repeat all of those sort of tired tropes of, oh, you'd better be fine-tuning, or all of these things that are just not connected to what we're seeing day to day with startups and founders trying to create things for users.
What I'm kind of glad for is that we get to actually share this news, this knowledge, because even the things we talked about, you know, hey, you should probably do evals, there's a lot of alpha in getting to 100 percent, not just 70 percent. These are sort of the breadcrumbs that will actually go on to create all of the billion-dollar companies, maybe thousands of them, actually. We hope so. I mean, I think you're about to see a lot of other fields, like law, really level up, when you don't have to spend, you know, millions of dollars and six months literally in a basement reading document by document by document.
Right. When you can actually just get past that and just get the results. Now you're thinking strategically and intelligently. And the unlock for these companies, I mean, they currently pay, again, millions of dollars in salaries for these jobs to be done, each of them, right? So for any company that comes out with an AI that can do even 80 percent of that, the value is really there. And I just want to encourage people not to give up based on those tropes, right? Like, oh, it hallucinates too much, it's too inaccurate, it's too whatever.
If anything, there's a path, and you can do it. And there's some good news in that, you know: the jobs aren't going to go away. They'll just be more interesting. That's what I think. Yeah. Well, with that, we're out of time. Jake, thank you so much for being with us. Thanks for having me. See you guys next time.