What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
Published 2026-03-17 11:05:23
Summary
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information; explains why this still doesn't mean they're conscious; and describes what's actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.
