Dario Amodei — "We are near the end of the exponential"

Published 2026-02-13 16:46:36    Source

Summary

Dario Amodei thinks we are just a few years away from AGI — or as he puts it, from having “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, why task-specific RL might lead to generalization, and how AI will diffuse throughout the economy. We also dive into Anthropic’s revenue projections, compute commitments, path to profitability, and more.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at labelbox.com/dwarkesh.

* Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at janestreet.com/dwarkesh.

* Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at mercury.com/personal-banking.

Timestamps

(00:00:00) - What exactly are we scaling?
(00:12:36) - Is diffusion cope?
(00:29:42) - Is continual learning necessary?
(00:46:20) - If AGI is imminent, why not buy more compute?
(00:58:49) - How will AI labs actually make profit?
(01:31:19) - Will regulations destroy the boons of AGI?
(01:47:41) - Why can’t China and America both have a country of geniuses in a datacenter?

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Chinese and English Transcript