AI is Already Building AI — Google DeepMind’s Mostafa Dehghani
Summary
Are we truly on the verge of AI automating its own research and development? In this deep-dive episode of the MAD Podcast, Matt Turck sits down with Mostafa Dehghani, a pioneering AI researcher at Google DeepMind whose work on Universal Transformers and Vision Transformers (ViT) helped lay the groundwork for today's frontier models.
Moving past the hype, Mostafa breaks down the actual mechanics of "thinking in loops" and Recursive Self-Improvement (RSI). He explores the critical bottlenecks holding back true AGI—from evaluation limits and formal verification to the brutal math of long-horizon reliability.
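To make the long-horizon reliability point concrete, here is a simple back-of-envelope illustration (not a derivation from the episode), assuming an agent completes each step of a task independently with a constant per-step success probability p. The chance of finishing an n-step task without error compounds multiplicatively:

P(\text{success over } n \text{ steps}) = p^{n}, \qquad \text{e.g. } 0.99^{100} \approx 0.37, \quad 0.99^{1000} \approx 4 \times 10^{-5}

So even 99% per-step reliability collapses over long horizons, which is why small gains in per-step accuracy matter so much for agentic workflows.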
Mostafa and Matt also discuss the shift from pre-training to post-training, how Gemini's Nano Banana 2 processes pixels and text simultaneously, and why the "frozen" nature of today's models means continual learning is the next massive frontier for enterprise AI and data pipelines.
Mostafa Dehghani
LinkedIn - https://www.linkedin.com/in/dehghani-mostafa
X/Twitter - https://x.com/m__dehghani
Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://x.com/mattturck
FirstMark
Website - https://firstmark.com
X/Twitter - https://x.com/FirstMarkCap
Listen on:
Spotify - https://open.spotify.com/show/7yLATDSaFvgJG80ACcRJtq
Apple - https://podcasts.apple.com/us/podcast/the-mad-podcast-with-matt-turck/id1686238724
00:00 Intro
01:17 What “loops” in AI actually mean
05:04 Self-improvement as the next chapter of machine learning
07:32 Are Karpathy’s autoresearch agents an early form of AI self-improvement?
08:56 AI building AI: how close are we?
10:02 The biggest bottlenecks: evals, automation, and long horizons
12:36 Can formal verification unlock recursive self-improvement?
14:06 What is model collapse?
15:33 Generalization vs specialization in AI
18:04 What is a specialized model today?
20:57 Could top AI researchers themselves be automated?
24:02 If AI builds AI, does data matter less than compute?
26:22 Post-training vs pre-training: where will progress come from?
28:14 Why pre-training is not dead
29:45 What is continual learning?
31:53 How real is continual learning today?
33:43 Mostafa Dehghani’s background and path into AI
36:13 The story behind Universal Transformers
39:56 How Vision Transformers changed AI
43:47 Gemini, multimodality, and Nano Banana
47:46 Why multimodality helps build a world model
52:44 Why image generation is getting faster and more efficient
54:44 Hot takes
54:53 What the AI field is getting wrong
56:17 Why continual learning is underrated
57:26 Does RAG go away over time?
58:21 What people are too confident about in AI
59:56 If he were starting from scratch today
