
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Published: 2023-04-06 13:57:35

Summary

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more. If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54, where we go through and debate the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9
Follow me on Twitter: https://twitter.com/dwarkesh_sp

Timestamps:
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society's response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win


Transcript (Chinese & English)