Eric Jang – Building AlphaGo from scratch

Summary

Eric Jang walks through how to build AlphaGo from scratch, but with modern AI tools.

Sometimes you understand the future better by stepping backward. AlphaGo is still the cleanest worked example of the primitives of intelligence: search, learning from experience, and self-play. You have to go back to 2017 to get insight into how the more general AIs of the future might learn.

Once he explained how AlphaGo works, it gave us the context to discuss how RL works in LLMs and how it could work better: naive policy gradient RL has to figure out which of the 100k+ tokens in your trajectory actually got you the right answer, while AlphaGo’s MCTS suggests a strictly better action at every single move, giving you a training target that sidesteps the credit assignment problem. The way humans learn is surely closer to the second.

Eric also kickstarted an Autoresearch loop on his project. It was very interesting to discuss which parts of AI research LLMs can already automate pretty well (implementing and running experiments, optimizing hyperparameters) and which they still struggle with (choosing the right question to investigate next, escaping research dead ends). Informative to all the recent discussion about when we should expect an intelligence explosion, and what it would look like from the inside.

Watch on YouTube. Read the transcript.

And check out the flashcards I wrote to retain the insights.

Sponsors

* Cursor‘s agent SDK let me build a pipeline to generate flashcards for this episode. For each card, I had an agent read the transcript, ingest blackboard screenshots, generate an SVG visual, and run everything through a critic. A durable agent is much better at this kind of work than a chain of LLM calls, and Cursor’s SDK made it easy. Check out the cards at flashcards.dwarkesh.com and get started with the SDK at cursor.com/dwarkesh

* Jane Street gave me a real deep-dive tour of one of their datacenters.
I got to ask a bunch of questions to Ron Minsky, who co-leads Jane Street’s tech group, and Dan Pontecorvo, who runs Jane Street’s physical engineering team. They were willing to literally pull up the floorboards and take out racks to explain how everything works. Check out the full tour at janestreet.com/dwarkesh

Timestamps

(00:00:00) – Basics of Go
(00:08:17) – Monte Carlo Tree Search
(00:32:04) – What the neural network does
(01:00:33) – Self-play
(01:25:38) – Alternative RL approaches
(01:45:47) – Why doesn't MCTS work for LLMs
(02:01:09) – Off-policy training
(02:12:02) – RL is even more information inefficient than you thought
(02:22:16) – Automated AI researchers

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
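The policy-gradient vs MCTS contrast in the summary can be illustrated with a minimal toy sketch. This is not AlphaGo's actual code, and every name in it (the functions, the example moves, the visit counts) is a hypothetical illustration: REINFORCE smears one scalar outcome across every step of a trajectory, while an MCTS-style target converts visit counts at a single position into a per-move improved policy.

```python
# Toy contrast between the two training signals discussed in the episode.
# All names and numbers are hypothetical illustrations.

def policy_gradient_targets(trajectory_actions, final_reward):
    """Naive policy gradient (REINFORCE): every action in the trajectory
    is reinforced by the same scalar return, so credit for the final
    outcome is smeared across all steps -- the credit assignment problem."""
    return [(action, final_reward) for action in trajectory_actions]

def mcts_improved_target(visit_counts):
    """AlphaGo-style target: MCTS visit counts at one position are
    normalized into an improved policy, giving the network a per-move
    training target that is better than its own prior at that move."""
    total = sum(visit_counts.values())
    return {move: n / total for move, n in visit_counts.items()}

# Policy gradient: five moves, one shared win signal for all of them.
pg = policy_gradient_targets(["m1", "m2", "m3", "m4", "m5"], final_reward=1.0)

# MCTS: search at one position spent 80 of 100 simulations on move "a",
# so the improved per-move target concentrates on it immediately.
mcts = mcts_improved_target({"a": 80, "b": 15, "c": 5})
```

The point of the contrast: `pg` carries one bit of signal stretched over the whole trajectory, while `mcts` is a dense supervised target available at every single move.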

