Consumer activity typically lags six to 12 months behind what's happening on the research side. I think compared to where we're going to be, we're still incredibly early. So many of these assumptions, and that's why they're assumptions, seem intuitively correct but are going to turn out to be incorrect. We are finally on the verge of AI video starting to work, to really work. It sort of follows the trend of AI decreasing the cost of creation in every way. 95% of YC companies are now building using those tools. The app store is going to be chaos.
Yeah, I guess we're back for the fourth edition of the Gen AI 100 list. You guys have been working hard and tracking the consumer landscape for years now, but specifically for the last two and a half years since we really had that ChatGPT moment. Tell me more about how you're tracking that ecosystem and how that comes through in this list.
Yeah, it's super fun. This is one of my favorite reports that we put together a couple times a year. We track the consumer AI landscape through what we do every day, which is like meeting with consumer AI startups that come to pitch us, seeing what goes viral on Twitter. But actually, there's a whole separate set of companies and products that might be reaching the true mainstream consumer that might not even be marketing themselves as AI products, but they're kind of powered by and made possible by AI.
And so the whole original purpose of this report was to see how much overlap is there between those two categories and what is the actual everyday person who might not know that they care about AI using in their day to day. That's great. And so talk about the methodology, like what makes it onto this list or not? Because to your point, there's certain household names that you might see on Twitter or have that viral moment. But I think some people might be surprised to see what made it onto this list.
So let's start with the methodology and what it requires. Yeah, so it's entirely based on data. We have two lists here, the top 50 on web and the top 50 on mobile. For the top 50 on web, we use a data provider called Similarweb, which tracks every single website globally. And we essentially go down in descending order of how many visits they get each month. For this report, it was January 2025. And then we pick the first 50 of those with the most monthly visits that are Gen AI-first products.
We do something similar on mobile, but with a different dataset from Sensor Tower. We look at monthly active users on the app, and then again, we pick the top 50 that are Gen AI products. And then for the first time ever, we actually looked at the top 50 on mobile by revenue, which we hadn't done before. And it was a really interesting experiment because the lists were pretty non-overlapping.
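The ranking methodology described above is essentially sort, filter, cut. Here is a minimal sketch of that logic; the function name, sample products, and traffic figures are all invented for illustration, and only the sort-descending, keep-Gen-AI-first, take-50 procedure comes from the description:

```python
# Hypothetical sketch of the report's ranking methodology: take per-product
# traffic numbers (sourced in the real report from Similarweb / Sensor Tower),
# sort descending, keep only products flagged as Gen AI first, cut at 50.
# All names and figures below are made up.

def top_gen_ai(products, n=50):
    """products: list of (name, monthly_visits, is_gen_ai_first) tuples."""
    ranked = sorted(products, key=lambda p: p[1], reverse=True)
    return [name for name, visits, is_ai in ranked if is_ai][:n]

sample = [
    ("chat-assistant", 3_800_000_000, True),
    ("legacy-portal", 900_000_000, False),  # high traffic, but not Gen AI first
    ("video-model", 120_000_000, True),
    ("image-model", 95_000_000, True),
]
print(top_gen_ai(sample, n=2))  # → ['chat-assistant', 'video-model']
```

The same function covers both lists: swap monthly visits for monthly active users (or in-app revenue) and the cut works identically.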
We've been in this AI ecosystem for a few years now. In your eyes, what were the pivotal moments that led up to this point in time, where we have, like you said, 50 on mobile, 50 on desktop, and a whole lot more in the wider ecosystem? You often say that it's usually like the papers are written, and then the models are developed, and then applications are built on top of them.
So the consumer activity typically lags six to nine to 12 months behind what's happening on the research side. So maybe just from a consumer awareness or behavioral perspective, there were a couple moments for me. Actually, Midjourney and Character AI both came out before ChatGPT, which I think a lot of people don't know, but there were maybe these early niche communities of early adopters that were using both of those products in the summer and the fall of 2022 leading up to ChatGPT.
And then post-ChatGPT, there were things that just brought AI to consumer consciousness. Even remember Snapchat's My AI, that little bot that appeared at the very top of your feed? Like 150 million people used it. And for a lot of younger consumers, that was actually probably their first real chance to have a conversation with an LLM.
On the image side, I think of the Balenciaga Pope, which was I think spring 2023. Such a cultural moment. It was. And I think it made a lot of people realize for the first time that they should even be interested in AI images, because they could be that good and that convincing. The first big AI music moment for me was the BBL Drizzy song, which I think was spring of 2024.
And that also went mega viral. I think one of the moments where creative AI really shifted into almost enterprise consciousness was the end of last year, when Coke did their Christmas ad and a lot of it was generated by AI. And then of course the DeepSeek launch earlier this year. DeepSeek was so interesting because I think it had sort of become settled wisdom that it would be very hard for a horizontal model to get to mass consumer scale quickly again.
Like ChatGPT had kind of done it, ChatGPT had become a verb, and that opportunity had already been explored. And now we see DeepSeek growing as quickly as it did. And there are actually a couple of interesting nuances to DeepSeek. One important nuance is the fact that they released their reasoning model for free at scale. Previously you had to use o1 Pro, and you had to pay for ChatGPT's premium subscription to get access to it. The other thing was just the product execution around chain of thought, which we've talked about a lot and I think is pretty well understood. But the fact that it showed you its thought process in real time was just super captivating, and now that's become a step that every model takes.
So I think it just really illustrates how early we are. You know, we as sophisticated users and investors are looking for further and further refinements. And once in a while something like DeepSeek comes out of the clear blue sky and just blows away all assumptions. That word specifically, assumptions, I think is so key. When you talk about these pivotal moments, I feel like you could actually match each pivotal moment with an assumption. An assumption being, oh, well, AI could never trick me into thinking a picture is real when it's not, right? Or, I would never actually listen to a top 100 song that's generated by AI. Or, you know, ChatGPT has cornered the market, no one else can penetrate it, right? All of these assumptions where people are like, okay, sure, I was wrong about that prior one. But this one I'm pretty sure about.
We're seeing just months, you know, being the delta between assumptions being broken. And so to your point on the arc of the market or the industry, I could see an argument where people are like, oh, we're actually pretty far along because we've already slashed all of those assumptions. But on the other hand, I'm hearing we still have a long way to go. So maybe place us along that arc if we were to compare to the mobile era or the cloud era or previous technology eras. Are we still in that early innovator stage, or are we somewhere else?
Yeah. I think we're still very much in the early adopter phase in many of these categories. We're arguably still in the infrastructure building era and kind of moving into the application building era. It depends on the modality. Like with LLMs, maybe people thought that was a solved problem, but then again, DeepSeek came in and upended all of that. There are a lot of things that are definitely not fully solved. Like AI video right now can generate great three or five or six second clips. But hopefully years from now, we have AI video that can generate minutes-long or even hours-long movies.
And so I think compared to where we're going to be, we're still incredibly early. Here are two assumptions that I think are interesting, because it may turn out the reality is the exact opposite. One is that AI will be very good at transactional interactions, but humans will still be the ones to build relationships and connection. So an example of that would be, what kinds of phone calls are AIs going to be best at? And I think the assumption was, well, they'll be great at sort of scheduling and logistics and the exchanging of information and facts. But we've heard over and over that in many cases, the AIs are more human than humans. They just have more patience, more nuance. They're never having a bad day. They're never hung over. So that's an interesting area of exploration.
You know, the other one that I think is interesting is the idea that humans will delegate work to the AIs and the AIs will do it. Like, what if the AIs are the ones delegating the work to us? Perhaps AI is really good at organizing work, and we're really good at doing it and also get a lot of joy out of it. So many of these assumptions, and that's why they're assumptions, seem intuitively correct but are going to turn out to be incorrect. Totally. And if we think about the report, maybe one important data point is the fact that we see so many newcomers still, right? If we were in that later part of the innovation curve, you might expect more stagnancy. You might expect to see the same players.
But every time you guys build out this report, we're seeing all of these newcomers. In this particular one, the fourth report, we saw 17 new companies on the web rankings in particular. And you actually have this quote where you say a few unexpected players rewrote the leaderboard overnight. So can you just speak to that and the movement that we're seeing? One of the biggest trends among the newcomers is we are finally on the verge of AI video starting to work, to really work. So we had three new video models on the list this time: Hailuo and Kling, which are both Chinese models, and then Sora, which was OpenAI's model that was announced, I guess, more than a year ago at this point and finally was released. I think we'll see even more of a shakeup here, because Veo 2 is the new Google model that is even next-level beyond that from what we've seen in testing. And that is probably, hopefully, finally going to come out in the next three or six months.
The other big category of newcomers was these vibe coding products. Cursor made the list. It's more of an agentic IDE for a technical audience. And then Bolt made the list, which is for a non-technical audience, where you basically go from a text prompt to a fully functioning web app. Even though they made the list, I think we've still seen there's a really significant portion of their users who are people who are in tech and are actually technical. But they might be using something like a Bolt or a Lovable, which made our Brink list, which we can talk about, to maybe prototype something easier and then export the code and go and play with it themselves.
So I think we haven't quite seen the vibe coding products hit the true mainstream user, in terms of someone who's never worked in tech or developed an app. I love this category. It's so fun, and it's so satisfying to actually see your ideas come to life. I mean, in the case of Bolt and Lovable, sometimes they are just sort of compelling interactive prototypes more than they are full-fledged products. But that's usually enough to get a feel for whether this is something you want to invest deeper in. It sort of follows the trend of AI decreasing the cost of creation in every way, and people just trying more ideas. Just think about what that says about the untapped market of people who want to build things with code, that this is on the top 50 list.
And I think honestly, both of them haven't had many apps built on them yet that have gone super viral. Though when that happens, and I'm sure it will, those will become stories of their own, which will then increase awareness of the products with the true mainstream audience. I think we're going to see a really interesting diversity or range of products built on these. It might be, this is my app that I just use for my very specific niche pain point, or there might be people who never learned how to code who want to build a venture-scale product on something like a Bolt or a Lovable. And so seeing how that plays out will be very cool.
Yeah, I think there are two phrases I've heard that I like. One is sort of DIY or personal software. Yeah, you know, it never made sense, economic sense, to design software for one. Really. The other is disposable software. Just as Suno and Udio made it possible to make a song just to capture a joke that would be irrelevant the next day, these products make it possible to create a product or an experience that may have an extremely short shelf life, like 20 minutes or a week or any other time period.
Let's talk about the Brink list, because that's completely new to this year's fourth edition. So what is the Brink list and why add it? Yeah. So the Brink list is essentially the five companies that almost made the list and were right below the cutoff, again purely based on the data. So we pulled the five websites and the five mobile apps. And I think honestly, we were just curious to see what it would capture. We didn't quite know. The takeaway for me is it does reflect how fast things are changing, because there were a couple companies on the list, like Runway, Otter, and Umax across web and mobile, that have been on the core top 50 ranks in the past.
But maybe they got just edged out by, like, DeepSeek launching this time. And so they lost their spot for this ranking, but might be on there the next one, and they still have massive usage. And then the other trend that it caught was a rise in more recent products. Like, Krea made the list and Lovable made the list, which are very much on that kind of consistent upswing. And if it continues, we might see them on the main ranks, and they haven't made the main ranks before. What did you predict that you would see on the list that you didn't really see there? Were there any surprises on that end?
So one thing I thought we'd see more of is style transfer as an approach to scalable video, because style transfer is just a much more tractable problem and has a lot lower cost of inference versus raw text-to-video. But researchers and product developers seem to be really going for it on text-to-video, and we've seen more of that than I would have expected. I think the other things that we didn't see on this list are things we have seen at the model level, so maybe they'll be on the next list. Like consumer voice products. There are a few of them, but not a ton of them.
Or some of the new models, like the Gemini Flash model that can see what's going on on your screen and interact with you. Like, I built something to yell at me if I go on Netflix or something. It's time to get back to work now. You've got this and can accomplish all of your goals. Or the new OpenAI Operator model, which can actually interact with things at the browser level on your computer and get tasks done for you, like pay a bill, or make a graphic design, or hire someone to landscape your yard, something like that.
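The "yell at me if I go on Netflix" idea above can be sketched roughly like this. This is a hypothetical toy, not how the speaker's actual tool was built: a real version would send screenshots to a multimodal model (a Gemini-style API), whereas here the classification is faked with a keyword check on the window title, and `get_active_window_title` stands in for a platform-specific call:

```python
import time

# Hypothetical distraction keywords; a real version would let the model judge.
DISTRACTIONS = {"netflix", "youtube", "twitter"}

def looks_distracting(window_title):
    # Stand-in for the model call: in the real thing this would be a
    # screenshot sent to a multimodal model for classification.
    title = window_title.lower()
    return any(site in title for site in DISTRACTIONS)

def nudge(window_title):
    """Return an encouragement message if the screen looks like a distraction."""
    if looks_distracting(window_title):
        return "It's time to get back to work. You've got this!"
    return None

def watch(get_active_window_title, interval_s=30, max_checks=None):
    """Poll the active window every interval_s seconds and print any nudge."""
    checks = 0
    while max_checks is None or checks < max_checks:
        message = nudge(get_active_window_title())
        if message:
            print(message)
        checks += 1
        time.sleep(interval_s)
```

The polling-loop shape is the interesting part: the same skeleton works whether the classifier is a keyword list, a local model, or a hosted multimodal API.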
I think there's always a lag, because the models have to be released to developers and they have to be tuned by the developers, and so it takes a while. But I would expect to see maybe an explosion of fun and unique and interesting products built on models like that on hopefully the next list or two, because it feels like we really have seen an explosion on the model side, and it is right there in terms of manifesting at the app level too. So one of the examples of this is Deep Research, which, if you've played with it, is completely magical. But it's a primitive, right? It's not a product. It's something to build other things with.
So it's really unclear if Deep Research is going to be used to, you know, write college theses, or if it can be used to find the perfect meme to match a joke you want to make. And that's all going to be up to the app developers. And just to double-click on that, because you could see maybe a world where Deep Research is just this more broad horizontal application, or you could see what you just described, where developers are tailoring that to specific end use cases. Are you basically saying that you think the latter is more likely in terms of the progression of these models and apps? Not more likely, but I think it's underexplored.
Yeah. If you come to Deep Research today, you kind of have the blank page problem. Yeah. And I'd love to see developers create some constraints that lead to unexpected outcomes. Yeah. Like, the known or the prescribed use of Deep Research right now is basically market research reports, and it's amazing for that. I've used it for that a lot of times. But if you try other things... like one day we were trying to trace the origin of a meme, and Deep Research is like a 100x better version of that Know Your Meme website that kind of goes through the history.
Yes. And the etymology, or however you'd describe it. I love that. I mean, that should be an app. Yeah. So there are lots of other use cases that aren't market research reports that could really benefit from an incredibly obsessive, compelling model that will go and kind of read every website on the internet until it finds the answer. So those are the things that you thought might be on the list but didn't actually see there.
What about the opposite? I mean, I think the fact that the vibe coding products, like the Bolts and the Cursors and Lovables, made the mainstream consumer list is just a testament to how widely they're used by the technical audience. Like, they have gotten to saturation so quickly. I think Garry Tan had some tweet that like 95% of YC companies or something are now building using those tools. Something that nearly every developer now is probably using, which was maybe a surprise, and how quickly we reached saturation.
We've talked about this, but a continuing surprise, so I don't know if it counts as one, but I still am surprised every time, is how many companion products are on the list, and also how many of them rank so high. I think we had three companion products in the top 10. Two of them were NSFW-oriented. Maybe not surprising when you think about traffic on the internet in general outside of AI, but a lot of people are even using them as interactive fanfiction, and some of the biggest fanfiction sites in the world are also, you know, top 100, top 200 global sites, so it makes sense in that way.
And then I guess my last surprise would be there's actually quite a bit of consistency in the list over the past four versions. There are always new entrants, which is really exciting, but across the four lists there are now 16 companies on the web ranks that have made it every single edition and have kept the streak going, which is pretty remarkable when you think of how early we are in AI. But I think it's a testament to how those companies have cemented their brands, their products, their kind of status in consumer consciousness, and a testament to the fact that real businesses have been built in consumer AI already.
To add to that, one of the surprises for me on companion was not seeing more multimodality. Yes. The first glimmers of that at scale were Grok. You know, Grok added a bunch of voices with some real aesthetics and points of view. Yeah. And of course Character got voice mode and more, but it's just interesting. Companionship is such a horizontal category, there's so much latent demand, and it'll really increase once you have multimodality.
You know, the other interesting thing is that with a lot of the text-to-code work, my assumption was that there was a small number of people creating sites that were heavily trafficked, and that explained the rise of them. But actually, the majority of the traffic, correct me if I'm wrong here, Olivia, is from people doing creation, not just consuming other people's creations. So it really shows how much demand there is to make things, even if people are not that interested in consuming them.
Yeah, you can track the traffic of apps that people have launched on lovable.app versus visits to lovable.dev, which is where people go to make a Lovable product, and lovable.dev has more usage. Significant. Significant. Than traffic to lovable.app, which gets back to what I was saying before. We have not even seen the first wave of viral products built on top of Lovable and Bolt, and so when that happens, I think the awareness of these types of platforms is going to go up significantly.
I mean, the app store is going to be chaos. Yeah, it is going to be chaos. We'll need an AI just to solve that AI app management problem. Completely. And to that point, you talked about the fact that there are some consistent players. Yeah, one of those players is ChatGPT, which, we've talked about it, was kind of like the starting gun of some of this application development.
ChatGPT has been at the very top of the list. It has been that way for every single iteration of web and mobile. But maybe what would surprise people is that the traffic to ChatGPT hasn't always been on the same trajectory. So maybe can you talk about that, and what did we see this time around?
Yeah, so it was basically flat for a while, which I think was surprising to a lot of people. Between February 2023, basically for a whole year, through February 2024, it was essentially flat in monthly visits to the website. And I think at that point, from the data that I've seen, basically 50%-plus of the traffic was students who were using it for essays or homework problems, but the vast majority of other people, me included, to be honest, had not maybe found a daily active use case for ChatGPT yet. And that's completely reversed more recently.
So they 2x'd the number of visits on web since then. They actually made their own announcement, too, where they counted across web and mobile, and in the past six months they grew from 200 million to 400 million weekly active users. Wow. Which is especially surprising because it took them nine months to double before that, and it usually gets way harder to double at scale, not easier.
I think from our perspective, if you plot it on a graph, you can kind of track the increases to the release of new models that unlock new use cases. So like the new o1 reasoning models, the 4o models, which were kind of multimodal for the first time, and then advanced voice mode. And then they've also launched new products, like Operator, which can perform tasks on your computer, and Canvas, where you can write more naturally.
So it's both bringing in new users who never tried it, and then taking people like me, who honestly was maybe a weekly, if not less than weekly, active user. And now I'm a daily active, but across several use cases now. Like, some days I'm driving and talking to voice mode. Some days I'm working on a memo and I'm generating something with Deep Research. Some days I'm doing some random other project and I'm brainstorming ideas with it.
So I would expect that to continue as they release new models. And have you heard from the ecosystem in terms of what more frequent use cases have emerged, kind of like yours? If before it was a lot of students writing research reports, is there now a sense of what those newer use cases are? Yeah, I think it's gotten better at some things related to coding. It's gotten better at data analysis. And then, I mean, the reasoning models, it's hard to overstate, because in the past you couldn't even rely on ChatGPT to tell you how many R's were in strawberry. Right. Accurately. So it was hard to feel good about really tasking any sort of delicate or serious work to it. And so I think there's probably a long tail of use cases that people have just migrated over now that they have more confidence in the models.
You know, what's interesting to add to that is that Claude is not a traditional number-two player. Typically the number-two player has 10% of the market share and sort of 10% of the product quality. And instead, Claude sits in this very interesting place where it seems like it's more beloved by a smaller number of people. It's better at creative writing. It seems to have more of a personality, which is interesting because, at least I think, it's designed to be more constrained. Yeah. And then it's also strangely much, much better at coding. Yes.
Yeah. Why? I don't know, but it's very interesting to see there's a place for ChatGPT and Claude and Mistral and potentially other models, all to sort of augment each other. To me, the really interesting thing about this list when it came to general LLM assistant usage was that we only had 10 days of data for DeepSeek for January, because it launched at the end of the month, and it shot up from literally nothing to number two on the list. You know, 10% of ChatGPT's scale on web within a week, a little bit more than a week. On mobile, it had even less than that, five days, and it was number 14, and if it had had five more days, it would have been number two.
And the gap is even narrower there between DeepSeek and ChatGPT. So again, to Anish's point, that was a surprise, in that we could still see kind of a broad-based LLM product go so viral and capture so many users. Yep. And DeepSeek was obviously the story when it came out. What have we learned about retention since then? And is that learning specific to DeepSeek, or are we seeing that learning applied across the ecosystem? On a retention basis, like how many users are coming back to the app at exactly 30 days, exactly seven days, exactly 60 days, it's just slightly below ChatGPT.
So we're looking at 7% day 30 for DeepSeek and 9% day 30 for ChatGPT. It's too early to call on web, because it's kind of hard to track usage. Part of my theory here is if you look at DeepSeek usage, a lot of it is the US, but a lot of it is China and other countries where ChatGPT either basically can't be used or they try to make you not use it, and you can only get to it with a VPN. And so in those markets, it's not ChatGPT versus DeepSeek versus Perplexity, it's DeepSeek versus nothing. And so in those markets, I think they have a structural advantage on the retention side that might skew the overall sample.
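For context on the metric being quoted: a day-N retention figure like the 7% and 9% above is typically computed per install cohort, as the share of users from a given install date who are active exactly N days later. A minimal sketch, with invented install and activity data:

```python
from datetime import date

# Sketch of day-N retention: of the users who installed on a given day,
# what share was active exactly N days later. Data below is made up.

def day_n_retention(installs, activity, n):
    """installs: {user: install_date}; activity: {user: set of active dates}."""
    cohort = list(installs)
    returned = sum(
        1 for user in cohort
        if date.fromordinal(installs[user].toordinal() + n) in activity.get(user, set())
    )
    return returned / len(cohort)

installs = {
    "a": date(2025, 1, 1),
    "b": date(2025, 1, 1),
    "c": date(2025, 1, 1),
    "d": date(2025, 1, 1),
}
activity = {
    "a": {date(2025, 1, 31)},  # back exactly on day 30
    "b": {date(2025, 1, 15)},  # back on day 14, not day 30
}
print(day_n_retention(installs, activity, 30))  # → 0.25
```

Providers differ on whether "day 30" means exactly day 30 or any activity in a window around it; the "exactly 30 days" phrasing above corresponds to the strict version sketched here.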
Totally. Next time we should add a different cut for DeepSeek USA. Yes, exactly. Yeah, a geographic breakdown. Talking about trends that we're seeing on the list, you mentioned AI video before, but is there anything else you want to call out there in terms of its presence in the report? Yeah, I mean, two of the video models were Chinese video models, which is super interesting. The models are less copyright-sensitive in their training data.
That's a great euphemism. Yeah, they are maybe more realistic and more prompt-adherent in the outputs as a result. But also, in China it's just easier to hire people to kind of capture videos, and they maybe have a greater volume of researchers doing image and video stuff versus other stuff. So that makes sense, and I think we might expect that to continue. I think Sora in some ways was a little bit disappointing for some people, whereas the Chinese video models were maybe better than a lot of people expected given the relative lack of capital that they've raised.
That's right. I think an interesting trend is just seeing Krea in the Brink list. Yes. Krea is the single best place to access all the models and all the tools, and the nice thing that they do is stitch all of these things together to make them greater than the sum of their parts. So insofar as we live in this sort of multipolar world of models, image models, video models, language models, there will be a role for aggregators like Krea to pull the models together in one spot. Totally. Especially because people who are deep in AI video understand this, but each model is kind of known for being good at specific things, like shots of people, shots of landscapes, anime, hyperrealistic. And so it can rack up very quickly on $20-a-month subscriptions if you're paying for 10 or 15 different models independently, versus having one canvas to work with all of them. You also typically use the products together.
Yes. You know, you usually generate an image in Midjourney or Flux, and then you take that image and upscale it, and then you put it as the beginning frame in a video. So you really want to not have seams between all those products. Completely. Are we seeing these video models in particular become more opinionated? And what I mean by that is, we see that in image models, right, where Midjourney might be good at this, and then you might see another model better at something else, and the users will gravitate towards either models or applications that provide them with that specificity or point of view. Are we seeing that in video?
Yeah. I'd say they're both becoming more opinionated at the model level, but also the applications. The application choices that even the model companies are making are becoming more opinionated. If you've used a Runway or a Kling or something, you can now prompt basically the camera angles, or the wideness of the shot, or all of these things a human cinematographer would do. You can prompt how the video kind of sweeps over the surface of the screen. And so that's also kind of a big factor in what you use for maybe even different parts of one video, which is interesting. On that specifically, I still think Ideogram is one of the most unique models for what it does, what it's great at, which is text generation.
It has this really distinct aesthetic. It just sits in a very unique place in the ecosystem. Yeah. We did an internal competition where we had to generate a 30-second video, and Ideogram was amazing for that, because you could not get that layer of specificity anywhere else. And then you could take what was generated in Ideogram and put it into another model to animate it or to do whatever you needed to do. Well, they also have a fun feature which is essentially image-to-text, so if you have a meme or a copyrighted image that you want to replicate, or at least be inspired by.
Yeah. You can use their image-to-text and then use that text as the prompt to create an image. I also found that fascinating, because I would prompt something, and as you learn when you're prompting with AI in general, you learn that you don't know what you're looking for. Yes. And so when I was prompting, Ideogram would modify your prompt before generating the image, and then you could actually go and interrogate that and be like, oh, that's why I'm getting X, Y, or Z. Video actually, in general, tends to be more of a mobile-first phenomenon, right? We saw, even before AI, tons of applications that focus on creators being able to edit and splice video.
What are we seeing in terms of the difference between what's working on mobile and what's working on desktop? Yeah. I mean, it's somewhat obvious, but a lot of the things that are working on mobile are either things you want to use on the go, or where the underlying asset you're working with is easily captured by the phone. The avatar apps blew up on mobile because you have, you know, 10 selfies of yourself sitting on your phone. A lot of the voice-first consumer products that we're seeing working are actually on mobile versus web, because it's easier and more natural to talk into your phone for language learning or for companionship or other use cases than it is to talk into your laptop. And same with homework helper apps; those have really blown up on mobile as compared to web.
So maybe another interesting breakdown, one that kind of represents where we are in the innovation curve, is not just what is getting views but what's actually making money, and how those aren't always mirrored one to one. What is making money today? Yeah, we're really learning there: is that the same as what's getting traffic? So for the first time, we actually ranked the top 50 by what Sensor Tower can measure as mobile revenue, which is typically in-app purchases and subscriptions, so probably not ads, and we ranked those separately from what has the most monthly active users. There was only 40% overlap between the two lists. So a lot of difference.
The surprise to me, actually, was that the main categories are the same in terms of what's making money versus what people are using. Photo and video generators; photo and video editors; beauty filters and beauty enhancers, which are a massive standalone category; and then the realm of ChatGPT copycat apps, which are both making a ton of money and getting a lot of users. But the companies within those categories are very different in terms of who's making money and who has the most usage.
We actually plotted revenue per user versus number of users, and we found the apps with smaller user bases out of the sample set were much more likely to be making significantly more money on a per-user basis. Apps like Speak, apps doing auto-captions and video editing. There are a lot of reasons for this. One is that if you are making a lot of money per user, you're probably more of a serious prosumer app, and so you've probably gated the usage pretty significantly, like you have to subscribe to use the product. So there are companies on here that might be making $50 to $100 million in ARR off of only one or two million users. They wouldn't make the ranks, ironically enough, for monthly active users, but they rank really, really high on a revenue basis, which is exciting.
And then, as anyone who looks at the mobile list knows, there are a lot of, at least for the tech audience, seemingly random products on there, the kind where you go, I've never heard of this, is this from a startup? On mobile especially, there is a very precise game you can play with app store ads and other paid but fairly low-cost acquisition channels. If you're doing this as an indie developer, or maybe an app studio operating internationally, you're not looking for the 10x payback on acquisition costs that we might be looking for as venture investors. If you make back one or two x your money on a user, that's amazing. So you can get to 10 million users mostly by paying for them, but you're probably not going to make as much revenue, or ultimately as much profit, as some of the companies that are lower usage but higher revenue. And is there a learning there? You mentioned how, by nature, if you start gating certain features, or an application entirely, you are potentially stifling growth of the overall user base. Is there a learning in terms of how AI founders should be thinking about that trade-off today?
I mean, I think it depends. Some of these markets are naturally maybe not mainstream behavior. One example of a category that did appear on the mobile revenue list but not on mobile usage was plant identification apps, several of them. I love those. Yeah, you take a picture of the plant and it tells you exactly what it is, and whether you've seen that plant before. Is that an app that a hundred million people will have on their phones? Maybe, maybe not. But if you're one of the, like, I can think of a few relatives who love plants or love birds, and totally, they'll pay a hundred dollars a year for that, and they'll use it every day or every other day. So I think it's more about founders optimizing for the type of product you have and how mainstream it can be.
All right, so there's a lot of information here that we've covered. We've covered desktop, we've covered mobile, we've covered revenue versus users. Yeah. And then we've also talked about the stickiness of some of these players, right? You said there were, what was it, 16 that have shown up on every single list. So what can we learn from the last few lists? I feel like the biggest thing, having now been a consumer investor for close to a decade, is that it's almost like the more you know, the less you know, in some cases, because it all just comes back to the product at the end of the day. Technologists or investors can have opinions on the best monetization strategy or the best growth hacks, but in the end, if the product isn't capturing users' attention and isn't retaining them, the business is just going to be a completely leaky bucket of users in and users out.
So I would say the biggest thing: often we meet with these amazing PhD researchers, best in class in the whole world in terms of their technical understanding of a model or capability, and they can struggle building in consumer sometimes, because often the more complicated thing is not actually the thing that is highest utility, most delightful, most helpful to a consumer user. We never like to be prescriptive on consumer products, but in general, we see it work when teams focus on either the pain point they're trying to solve or the unique experience they're trying to create, and build toward that. And if that means the old model is actually better than the new model, use that. If that means it's just one AI feature instead of the whole product being built on AI, because it's not stable enough, do that.
I think in consumer you really have to let the data be your guide there. Thank you so much for listening to the A16Z podcast. If you've made it this far, don't forget to subscribe so that you are the first to get our exclusive video content, or you can check out this video that we've selected for you.