Can AI Match the Human Brain? | Surya Ganguli | TED

Published 2025-02-21 12:00:56

This presentation outlines the speaker's vision for a new "science of intelligence," a multidisciplinary field combining physics, math, neuroscience, psychology, and computer science to understand both biological and artificial intelligence. The core argument is that AI engineering has outstripped our scientific understanding, producing systems that are powerful but opaque and inefficient. The talk addresses five areas where AI needs improvement: data efficiency, energy efficiency, going beyond evolution, explainability, and mind-machine melding.

**Data Efficiency:** AI models require vastly more data than humans to learn effectively. The speaker contrasts the trillion words used to train language models with the comparatively small amount of information available to a human: roughly 700 megabytes encoded in DNA and about 100 million words heard by adulthood. This points to the inherent efficiency of human learning. The solution lies in moving beyond brute-force training toward "non-redundant" datasets. The speaker's research demonstrates that by strategically selecting data points to maximize new information, the scaling laws governing error reduction can be significantly improved, so that far less data is needed (a toy selection heuristic is sketched below). Moreover, machine learning needs to evolve into a "science of machine teaching," drawing inspiration from how humans, particularly children, are taught, and emphasizing algorithmic understanding over mere exposure to vast datasets.

**Energy Efficiency:** The human brain consumes a mere 20 watts, compared with the millions of watts required to train large AI models. The discrepancy stems from digital computation's reliance on fast, reliable bit flips, which are thermodynamically expensive. Biology, by contrast, uses slow, unreliable intermediate steps and matches computation to the native physics of the universe: neurons add voltage inputs directly, following Maxwell's laws, whereas computers need complex transistor circuits just to add two numbers (a contrast sketched below). Achieving comparable efficiency in AI requires rethinking the entire technology stack, from the underlying electronics to the algorithms. The talk cites research on the fundamental limits of computational speed and accuracy under a fixed energy budget, and the finding that chemical computers resembling the G-protein coupled receptors in neurons operate close to those limits. Neuroscience also reveals predictive energy allocation in the brain: ATP levels rise in anticipation of neural activity, suggesting an optimized energy delivery system.

**Going Beyond Evolution:** The limitations of evolution can be bypassed by implementing neural algorithms, discovered by evolution, on quantum hardware. Replacing neurons with atoms and synapses with photons allows the construction of quantum associative memories and quantum optimizers. Leveraging the unique properties of quantum mechanics, these systems offer enhanced memory capacity, robustness, and recall, along with novel optimization approaches, forming the basis of "quantum neuromorphic computing" (the classical algorithm they build on is sketched below).

**Explainability:** While AI can create highly accurate models of the brain, these models are often complex and difficult to understand. The goal is to move beyond mere replication toward a conceptual understanding of brain function. The speaker illustrates this with explainable AI applied to a detailed model of the retina, which accurately reproduces a wide range of experimental results, including the retina's detection of violations of Newton's first law. By developing methods that identify the essential sub-circuits responsible for specific neural responses, researchers can understand how the model, and by extension the biological retina, processes information (a toy ablation-based version appears below). This approach can accelerate neuroscience discovery: build a digital twin of a brain region, then analyze it with explainable AI.

**Melding Minds and Machines:** Digital twins of the brain can also enable bidirectional communication between brains and machines. Control theory can be used to learn neural activity patterns that steer the digital twin, and those patterns can then be used to stimulate the real brain (a linear-control toy version is sketched below). In mice, AI has been used to decode visual percepts from brain activity and, conversely, to write specific activity patterns into the brain to induce hallucinations. By controlling a mere 20 neurons, researchers could reliably control the mouse's perception. The potential of bidirectional brain-machine communication for understanding, treating, and augmenting the brain is immense.
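To make the data-selection idea concrete, here is a minimal sketch of one way to build a non-redundant subset: greedy farthest-point sampling in an embedding space, where each new pick is the example that adds the most new information relative to what is already chosen. The heuristic and all names here are illustrative stand-ins, not the speaker's actual selection metric.

```python
import numpy as np

def select_nonredundant(features: np.ndarray, k: int) -> list[int]:
    """Greedy farthest-point sampling: each new pick is the example
    farthest from everything chosen so far, a crude proxy for
    'maximize new information per data point'."""
    chosen = [0]                                   # seed with an arbitrary example
    dist = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))                 # most novel remaining example
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(features - features[idx], axis=1))
    return chosen

# Toy usage: keep the 1,000 least redundant of 10,000 embedded examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 32))
subset = select_nonredundant(X, k=1_000)
```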
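The digital-versus-physical contrast from the Energy Efficiency section can be made concrete by counting the logic-gate evaluations a ripple-carry adder performs for one 32-bit addition, work that a neuron's membrane gets for free by physically superposing currents. A toy sketch (the gate tally is illustrative, not a hardware model):

```python
def ripple_carry_add(a: int, b: int, width: int = 32) -> tuple[int, int]:
    """Add two non-negative ints bit by bit, tallying the logic-gate
    evaluations a digital ripple-carry adder performs."""
    result, carry, gate_ops = 0, 0, 0
    for i in range(width):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry                        # sum bit: 2 XOR gates
        carry = (x & y) | (carry & (x ^ y))      # carry-out: 2 AND + 1 OR
        result |= s << i
        gate_ops += 5                            # gates touched at this bit
    return result, gate_ops

total, ops = ripple_carry_add(1234, 5678)
print(total, ops)   # 6912, 160: one small addition costs 160 gate events

# A neuron summing synaptic inputs pays none of this; Kirchhoff's current
# law performs the addition in the physics of the membrane itself.
```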
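The quantum associative memories mentioned under Going Beyond Evolution build on the classical Hopfield network, in which stored patterns become attractors and a corrupted cue settles back to the nearest memory. Below is a minimal sketch of that classical algorithm; the quantum implementation with atoms and photons described in the talk is not reproduced here.

```python
import numpy as np

def hopfield_store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: W is the sum of outer products of the stored
    +/-1 patterns (diagonal zeroed), making each pattern an attractor."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W: np.ndarray, probe: np.ndarray, sweeps: int = 5) -> np.ndarray:
    """Asynchronous updates s_i <- sign(W_i . s) descend the network's
    energy until the state settles into the nearest stored memory."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in range(s.size):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(1)
memories = rng.choice([-1, 1], size=(3, 64))      # three 64-"neuron" patterns
W = hopfield_store(memories)
cue = memories[0].copy()
cue[:16] *= -1                                    # corrupt a quarter of the bits
print(np.array_equal(hopfield_recall(W, cue), memories[0]))  # typically True
```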
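For the Explainability section, here is a toy version of sub-circuit identification: ablate a model's hidden units one at a time and rank them by how much a chosen response changes. The tiny random network stands in for a trained digital twin; real attribution methods for retina models are more sophisticated, but the ablation logic is similar in spirit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a trained digital twin: a tiny two-layer net
# whose scalar output plays the role of one recorded neuron's response.
W1 = rng.normal(size=(16, 8))    # stimulus (16-d) -> hidden units (8)
W2 = rng.normal(size=(8,))       # hidden units -> model neuron response

def response(x: np.ndarray, keep: np.ndarray) -> float:
    """Model response under a binary ablation mask over hidden units."""
    h = np.maximum(W1.T @ x, 0.0) * keep          # ReLU hidden layer
    return float(W2 @ h)

def rank_units(x: np.ndarray) -> np.ndarray:
    """Ablate each hidden unit in turn; the units whose removal changes
    the response most are the candidate essential sub-circuit."""
    full = response(x, np.ones(8))
    scores = []
    for i in range(8):
        mask = np.ones(8)
        mask[i] = 0.0
        scores.append(abs(full - response(x, mask)))
    return np.argsort(scores)[::-1]               # most essential first

x = rng.normal(size=16)
print(rank_units(x)[:3])                          # top-3 sub-circuit candidates
```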
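Finally, a toy linear-systems rendering of the control-theory idea from the Melding section: given a hypothetical linear digital twin, solve for a stimulation pattern whose steady-state activity best matches a target pattern across 20 model neurons. The dynamics matrices are invented for illustration and are not from the speaker's work.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear digital twin of 20 neurons driven by 5 stimulation
# channels: x_{t+1} = A x_t + B u, with A chosen to be stable.
n_neurons, n_stim = 20, 5
A = 0.9 * np.eye(n_neurons) + 0.01 * rng.normal(size=(n_neurons, n_neurons))
B = rng.normal(size=(n_neurons, n_stim))
target = rng.normal(size=n_neurons)               # desired activity pattern

def rollout(u: np.ndarray, steps: int = 200) -> np.ndarray:
    """Run the twin from rest under a constant stimulation pattern u."""
    x = np.zeros(n_neurons)
    for _ in range(steps):
        x = A @ x + B @ u
    return x

# The steady state of x = A x + B u is x* = (I - A)^{-1} B u, so choose u
# by least squares to bring x* as close to the target as possible.
G = np.linalg.solve(np.eye(n_neurons) - A, B)     # stimulation -> steady state
u_star, *_ = np.linalg.lstsq(G, target, rcond=None)
residual = np.linalg.norm(rollout(u_star) - target)
print(residual)  # distance to the closest pattern 5 channels can reach
```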
The speaker concludes by emphasizing that this unified science of intelligence must be pursued openly and with a long-term vision. Academia, free from short-term financial pressures and corporate censorship, is ideally suited to the endeavor. To that end, a new center for the science of intelligence is being established at Stanford, aiming to drive fundamental advances and share knowledge globally. The speaker positions this effort as a shift from exploring the external universe to delving into the inner workings of intelligence, both biological and artificial.

Summary

AI is evolving into a mysterious new form of intelligence — powerful yet flawed, capable of remarkable feats but still far from ...


Transcript (Chinese and English)