I feel like agents for consumers are fairly hyped. Right. Here we go. Hot take. Trying to have an agent fully book a vacation for you is almost just as hard as just going and booking it yourself. Today we're going behind the scenes on one of our recent blog posts, Building Effective Agents. I'm Alex. I lead Claude Relations here at Anthropic. I'm Eric. I'm on the research team at Anthropic. I'm Barry. I'm on the Applied AI team. I'm going to kick us off here for viewers just jumping in.
What's the quick version of what an agent actually is? I mean, there's a million definitions of it. And why should a developer or somebody that's actually building with AI care about these things? Eric, maybe we can start with you. Sure. Yeah. So I think something we explored in the blog post is that, first of all, a lot of people have been saying everything is an agent, referring to almost anything more than a single LLM call. One of the things we tried to do in the blog post is really separate this out: hey, there's workflows, which is where you have a few LLM calls chained together.
And really, what we think an agent is is where you're letting the LLM decide sort of how many times to run. You're having it continue to loop until it's found a resolution. And that could be talking to a customer for customer support. That could be iterating on code changes. But something where you don't know how many steps it's going to take to complete, that's really what we consider an agent. Interesting. So in the definition of an agent, we are letting the LLM kind of pick its own fate and decide what it wants to do, what actions to take, instead of us predefining a path for it.
Exactly. It's more autonomous. Whereas with a workflow, you can kind of think of it as being on rails through a fixed number of steps. I see. So this distinction, I assume this was the result of many, many conversations with customers and working with different teams and even trying things yourselves. Barry, can you speak more to maybe what that looked like as we got to create this divide between a workflow and an agent, and what sort of patterns surprised you the most as you were going through this? Sure. Honestly, I think all of this kind of evolved as models got better and teams got more sophisticated.
We've both worked with a large number of sophisticated customers. And we kind of went from having a single LLM to having a lot of LLMs and eventually having LLMs orchestrating themselves. So one of the reasons why we decided to create this distinction is because we started to see these two distinct patterns: you have workflows, which are pretty much orchestrated by code, and then you have agents, which are simpler in one sense but complex in others, a different shape that we're starting to see. Really, I think as the models and all of the tools start to get better, agents are becoming more and more prevalent and more and more capable.
And that's when we decided, hey, this is probably the time for us to give a formal definition. So in practice, if you're a developer implementing one of these things, what would that actually look like in your code as you're starting to build? What are the differences? Maybe we actually go down to the prompt level here. What does an agent prompt or flow look like, and what does a workflow look like? Yeah. So I think a workflow looks like you have one prompt. You take the output of it. You feed it into prompt B. Take the output of that. Feed it into prompt C. And then you're done.
There's this straight line, fixed number of steps. You know exactly what's going to happen. And maybe you have some extra code that sort of checks the intermediate results of these and makes sure they're OK. But you kind of know exactly what's going to happen in one of these paths. And each of those prompts is sort of a very specific prompt, just sort of taking one input and transforming it into another output. For instance, maybe one of these prompts is taking in the user question and categorizing it into one of five categories so that then the next prompt can be more specific for that.
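The chained-prompt workflow described here can be sketched in a few lines of Python. Everything below is illustrative: `call_llm` is a stand-in for a real model call, wired with canned responses just so the flow is runnable.

```python
# Sketch of the workflow pattern: a fixed chain of prompts, each output fed
# into the next, with plain code checking intermediate results in between.
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned text for the demo."""
    if prompt.startswith("Categorize"):
        return "billing"
    return f"[stub answer to: {prompt}]"

CATEGORIES = {"billing", "shipping", "returns", "technical", "other"}

def workflow(user_question: str) -> str:
    # Prompt A: classify the question into one of five fixed categories.
    category = call_llm(f"Categorize into {sorted(CATEGORIES)}: {user_question}")
    # Gate: ordinary code validates the intermediate result before moving on.
    if category not in CATEGORIES:
        category = "other"
    # Prompt B: a category-specific prompt produces the final answer.
    return call_llm(f"Answer this {category} question: {user_question}")
```

The key property is that the number of steps is fixed in code, no matter what the model says.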
In contrast, an agent prompt will be sort of much more open-ended and usually give the model tools or multiple things to check and say, hey, here's the question. And you can do web searches or you can edit these code files or run code and keep doing this until you have the answer. I see. So there's a few different use cases there. That makes sense as we start to arrive at these different conclusions. I'm curious, as we've now kind of covered at a high level how we're thinking about these workflows and agents and talking about the blog post, I want to dive even further behind the scenes.
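The agent pattern, by contrast, is a loop where the model picks the next action. Another hedged sketch: `decide_next_action` stands in for the model's choice and is scripted here; only the loop shape is the point.

```python
# Sketch of the agent pattern: one open-ended goal, a set of tools, and a
# loop whose length the model decides (with a safety cap in code).
def decide_next_action(goal: str, history: list) -> dict:
    """Stand-in for the model choosing its next step; scripted for the demo."""
    if not history:                      # nothing gathered yet: go search
        return {"tool": "web_search", "input": goal}
    return {"tool": "finish", "answer": f"Resolved: {goal}"}

TOOLS = {"web_search": lambda query: f"results for {query!r}"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = []
    for _ in range(max_steps):           # cap in code; the model decides when to stop
        action = decide_next_action(goal, history)
        if action["tool"] == "finish":
            return action["answer"]
        history.append(TOOLS[action["tool"]](action["input"]))
    return "step budget exhausted"
```

Here the code doesn't know in advance whether the run takes one tool call or ten; it only enforces an upper bound.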
Were there any funny stories, Barry, of wild things that you saw from customers that were interesting or are just kind of far out there in terms of how people are starting to actually use these things in production? Yeah, this is actually from my own experience of watching agents. I joined about a month before the Sonnet 3.5 v2 refresh. And one of my onboarding tasks was to run OSWorld, which is a computer-use benchmark. And for a whole week, me and this other engineer, we were just staring at these agent trajectories that were counterintuitive to us. And we weren't sure why the model was making the decisions it was, given the instructions that we gave it. And so we decided we were going to act like Claude and put ourselves in that environment. So we would do this really silly thing, where we close our eyes for a whole minute. And then we blink at the screen for a second. We close our eyes again and just think, well, I have to write Python code to operate in this environment. What would I do?
It suddenly made a lot more sense. And I feel like a lot of agent design comes down to that. There's a lot of context and a lot of knowledge that the model maybe does not have. And we have to be empathetic to the model. We have to make a lot of that clear in the prompt, in the tool descriptions, in the environment. I see. So a tip here for developers is almost to act as if you are looking through the lens of the model itself, in terms of what would be the most applicable instructions here. It's the model seeing the world, which is very different from how we operate as humans with additional context. Eric, I'm curious if you have any other stories that you've seen.
Yeah. I think actually, in a very similar vein, a lot of people really forget to do this. And maybe the funniest thing I see is that people will put a lot of effort into creating these really beautiful, detailed prompts. And then the tools that they give the model are these incredibly bare-bones things: no documentation, the parameters are named A and B. An engineer wouldn't be able to work with this if this was a function they had to use, because there's no documentation.
How can you expect the model to use it well? So it's that lack of putting yourself in the model's shoes. And I think a lot of people, when they start trying to use tool use and function calling, they kind of forget that they have to prompt there as well. They think about the model as just a more classical programming system. But it is still a model. And you need to be prompt engineering the descriptions of your tools themselves. Yeah, I've noticed that. It's like people forget that it's all part of the same prompt. It's all getting fed into the same context window. And writing a good tool description influences other parts of the prompt as well.
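This point about tool documentation can be made concrete with two versions of the same tool definition, written in the JSON-Schema style that tool-calling APIs generally use. The field names and the `lookup_order` tool itself are illustrative, not a specific API.

```python
# Two definitions of the same tool. The first is the bare-bones version
# described above; the second gives the model the documentation an engineer
# would want before calling an unfamiliar function.
bare_tool = {
    "name": "lookup",
    "input_schema": {
        "type": "object",
        "properties": {"a": {"type": "string"}, "b": {"type": "string"}},
    },
}

documented_tool = {
    "name": "lookup_order",
    "description": (
        "Look up a customer order by its ID. Returns status, items, and "
        "shipping details. Call this before answering questions about a "
        "specific order."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order identifier, e.g. 'ORD-12345'.",
            },
            "include_history": {
                "type": "boolean",
                "description": "If true, also return past status changes.",
            },
        },
        "required": ["order_id"],
    },
}
```

The description and per-parameter docs are part of the prompt the model sees, so they deserve the same care as the rest of it.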
So that is one aspect to consider. Agents is kind of the hype term right now. A lot of people are talking about it. And there's been plenty of articles written and videos made on the subject. What made you think that now is the right time for us to write something ourselves and talk a little bit more about the details of agents? Sure, yeah. I think one of the most important things for us is just to be able to explain things well. I think that's a big part of our motivation, which is we walk into customer meetings, and everything is referred to by a different term, even though they share the same shape.
So we thought it'd be really useful if we could just have a set of definitions and a set of diagrams and code to explain these things to our customers. And we are getting to the point where the model is capable of doing a lot of the agentic workflows that we're seeing. And that seemed like the right time for us to have some definitions, just to make these conversations easier. I think for me, I saw that there was a lot of excitement around agents, but also a lot of people really didn't know what it meant in practice. And so they were trying to bring agents to any problem they had, even when much simpler systems would work.
And so I saw that as one of the reasons that we should write this: to guide people about how to do agents, but also where agents are appropriate, and that you shouldn't go after a fly with a bazooka. I see. I see. That's a perfect segue into my next question here. There's a lot of talk about the potential of agents. And every developer out there, in every startup and business, is trying to think about how they can build their own version of an agent for their company or product. But you guys are starting to see what actually works in production.
So we're going to play a little game here. I want to know one thing that's overhyped about Agents right now, and also one thing that's underhyped, just in terms of implementations or actual uses in production or potentials here as well. So Eric, let's start with you first. I feel like underhyped is like things that save people time, even if it's a very small amount of time. I think a lot of times if you just look at that on the surface, it's like, oh, this is something that takes me a minute. And even if you can fully automate it, it's only a minute. Like, what help is that?
But really, that changes the dynamics: now you can do that thing 100 times more than you previously would. So I think I'm most excited about things that, if they were easier, could be really scaled up. Yeah, I don't know if this is necessarily related to hype, but I think it's really difficult to calibrate right now where agents are really needed. I think there's this intersection that's a sweet spot for using agents, and it's the set of tasks that are valuable and complex, but where the cost of error, or the cost of monitoring for errors, is relatively low.
That set of tasks is not super clear and obvious unless we actually look into the existing processes. I think coding and search are two pretty canonical examples where agents are very useful. Take search as an example. It's a really valuable task. It's very hard to do deep iterative search, but you can always trade off some precision for recall and just get a little more documents, a little more information than is needed, and filter it down.
So we've seen a lot of success there with agents. So what does a coding agent look like right now? Coding agents, I think, are super exciting because they are verifiable, at least partially. Code has this great property that you can write tests for it, and then you edit the code and either the tests pass or they don't pass. Now, that assumes that you have good unit tests, which I think every engineer in the world can say, like, we don't. But at least it's better than a lot of things.
There's no equivalent way to do that for many other fields. So this at least gives a coding agent some way that it can get more signal every time it goes through a loop. So if every time it's running the tests again, it's seeing what the error of the output is, that makes me think that the model can converge on the right answer by getting this feedback. And if you don't have some mechanism to get feedback as you're iterating, you're not injecting any more signal. You're just going to have noise.
And so there's no reason, without something like this, that an agent will converge to the right answer. I see. So what's the biggest blocker then in terms of improving agent performance on coding at the moment? Yeah. So I think for coding, we've seen over the last year, on SWE-bench, results have gone from very, very low to, I think, over 50% now, which is really incredible. So the models are getting really good at writing code to solve these issues.
I feel like I have a slightly controversial take here that I think the next limiting factor is going to come back to that verification. Like it's great for these cases where we do have perfect unit tests. And that's starting to work. But for the real world cases, we usually don't have perfect unit tests for them. And so I'm thinking now, like, finding ways that we can verify and we can add tests for the things that you really care about so that the model itself can test this and know whether it's right or wrong before it goes back to the human.
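The test-driven feedback loop described in this exchange can be sketched as follows. `propose_fix` stands in for the model edit and is scripted so the loop is runnable; the point is that the failing-test output is the signal injected on each iteration.

```python
# Sketch of the coding-agent loop: run the tests, feed the failure back,
# edit, and repeat until the tests pass or the budget runs out.
def run_tests(code):
    """Return an error message, or None if the toy test passes."""
    namespace = {}
    exec(code, namespace)
    return None if namespace["add"](2, 3) == 5 else "add(2, 3) != 5"

def propose_fix(code, error):
    """Stand-in for a model edit conditioned on the failing-test output."""
    return code.replace("a - b", "a + b")  # scripted 'fix' for the demo

def coding_agent(code, max_iters=5):
    for _ in range(max_iters):
        error = run_tests(code)
        if error is None:                  # tests pass: the agent converged
            return code
        code = propose_fix(code, error)    # each loop injects new signal
    raise RuntimeError("did not converge")

buggy = "def add(a, b):\n    return a - b\n"
```

Without the `run_tests` step, each iteration would add noise rather than signal, which is the convergence argument made above.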
I see. Making sure that we can embed some sort of feedback loop into the process that tells it whether it's right or wrong. OK. What does the future of agents look like in 2025? Barry, we're going to start with you. Yeah, I think that's a really difficult question. This is probably not a practical thing. But one thing I've been really interested in is just how a multi-agent environment will look.
I think I've already shown Eric: I've been building an environment where a bunch of Claudes can spin up other Claudes and play Werewolf together. And it's completely... What is Werewolf? Werewolf is a social deduction game where all of the players are trying to figure out what each other's role is. It's very similar to Mafia. It's entirely text-based, which is great for Claude to play in.
I see. So we have multiple different Claudes playing different roles within this game, all communicating with each other. Yeah, exactly. And then you see a lot of interesting interactions in there that you just haven't seen before. And that's something I'm really excited about. It's very similar to how we went from single LLM to multi-LLM. I think by the end of the year, we could potentially see us going from agent to multi-agent. And there are some interesting research questions to figure out in that domain. In terms of how the agents interact with each other, what does emergent behavior look like as you coordinate between agents doing different things? Exactly. And just whether this is actually going to be useful or better than a single agent with access to a lot more resources.
Do we see any multi-agent approaches right now that are actually working in production? I feel like in production, we haven't even seen a lot of successful single agents. OK, interesting. But this is kind of a potential extension of successful agents with the improved capabilities of the next couple of generations of models. Yeah, so this is not advice that everyone should go explore multi-agent environments. It's just that, I think, this provides us with a better way to understand model behaviors.
I see. OK, Eric, what's the future of agents in 2025? Yeah, I feel like in 2025, we're going to see a lot of business adoption of agents, starting to automate a lot of repetitive tasks and really scale up a lot of things that people wanted to do more of before but were too expensive. You can now 10x or 100x how much you do these things. I'm imagining things like every single pull request triggering a coding agent to come and update all of your documentation. Things like that would have been cost-prohibitive to do before. But once you think of agents as almost free, you can start adding these bells and whistles everywhere.
I think maybe something that's not going to happen yet, going back to what's overhyped: I feel like agents for consumers are fairly hyped right now. OK, here we go. Hot take. Because, going back to what we said about verifiability, I think that for a lot of consumer tasks, it's almost as much work to fully specify your preferences and what the task is as to just do it yourself. And it's very expensive to verify. So trying to have an agent fully book a vacation for you, describing exactly what you want your vacation to be and your preferences, is almost just as hard as just going and booking it yourself.
Interesting. And it's very high risk. You don't want the agent to actually go book a plane flight. Interesting. Without you first accepting it. Is there a matter of maybe context that we're missing here, too, from the models being able to infer this information about somebody without having to explicitly go ask and learn the preference over time? Yeah, so I think that these things will get there. But first, you need to build up this context so that the model already knows your preferences and things. And I think that takes time. I see. And we'll need some stepping stones to get to bigger tasks like planning a whole vacation.
I see. OK, very interesting. Last question. Any advice that you'd give to a developer that's exploring this right now, in terms of starting to build, or just thinking about it from a general future-proofing perspective? I feel like my best advice is make sure that you have a way to measure your results. Because I've seen a lot of people go and build in a vacuum without any way to get feedback about whether what they're building is working or not. And you can end up building a lot without realizing that either it's not working, or maybe something much simpler would have actually done just as good a job.
Yeah, I think very similarly, starting as simple as possible and having that measurable result as you are building more complexity into it. One thing I've been really impressed by is I work with some really resourceful startups, and they can do everything within one LLM call. And the orchestration code around it, which will persist even as the model gets better, is their niche. And I always get very happy when I see one of those, because I think they reap the benefit of future capability improvements.
And realistically, we don't know what use case will be great for agents. And the landscape is going to shift. But it's probably a good time to start building up some of that muscle to think in the agent land just to understand that capability a little bit better.
Yeah, I want to double-click on something you said about being excited for the models to get better. I think that if you look at your startup or your product and think, oh man, if the models get smarter, all of our moat's going to disappear, that means you're building the wrong thing.
Instead, you should be building something so that as the models get smarter, your product gets better and better. Right. That's great advice. Eric, Barry, thank you guys. This is Building Effective Agents. Thank you. Thanks.