
US Congress holds hearing on risks, regulation of AI: "Humanity has taken a back seat" - YouTube

Published: 2023-05-16

Transcript

Welcome to the hearing of the Privacy, Technology, and the Law Subcommittee. I thank my partner in this effort, Senator Hawley, the ranking member, and I particularly want to thank Senator Durbin, Chairman of the Judiciary Committee, who will be speaking shortly.

This hearing is on the oversight of artificial intelligence, the first in a series of hearings intended to write the rules of AI. Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past. And now for some introductory remarks.

Too often we have seen what happens when technology outpaces regulation. The unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.

If you were listening from home, you might have thought that voice was mine, but the words were not mine. The audio was an AI voice-cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing. You heard just now the result.

I asked ChatGPT, why did you pick those themes and that content? And it answered, and I'm quoting: "Blumenthal has a strong record in advocating for consumer protection and civil rights. He has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision-making. Therefore, the statement emphasizes these aspects." Mr. Altman, I appreciate ChatGPT's endorsement.

In all seriousness, this apparent reasoning is pretty impressive. I am sure that we'll look back in a decade and view ChatGPT and GPT-4 like we do the first cell phone, those big, clunky things that we used to carry around. But we recognize that we are on the verge of a new era.

The audio and my playing of it may strike you as curious or humorous, but what reverberated in my mind was: what if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or of Vladimir Putin's leadership? That would have been really frightening. And the prospect is more than a little scary, to use the word, Mr. Altman, that you have used yourself. I think you have been very constructive in calling attention to the pitfalls as well as the promise, and that's the reason why we wanted you to be here today. And we thank you and our other witnesses for joining us.

For several months now, the public has been fascinated with GPT, DALL-E, and other AI tools. These examples, like the homework done by ChatGPT or the articles and op-eds that it can write, feel like novelties. But the underlying advancements of this era are more than just research experiments. They are no longer fantasies of science fiction. They are real and present.

There are the promises of curing cancer, of developing new understandings of physics and biology, or of modeling climate and weather. People are very encouraged and hopeful, but we also know the potential harms, and we've seen them already: weaponized disinformation, housing discrimination, harassment of women, impersonation fraud, voice cloning, deep fakes.

These are the potential risks, despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution: the displacement of millions of workers, the loss of huge numbers of jobs, and the need to prepare for this new industrial revolution through the skill training and relocation that may be required. And already, industry leaders are calling attention to those challenges.

To quote ChatGPT, this is not necessarily the future that we want. We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them. Senator Blackburn and I, and others like Senator Durbin on the Judiciary Committee, are trying to deal with it through the Kids Online Safety Act. But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.

Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but also in promoting our democratic values. Otherwise, in the absence of that trust, I think we may well lose both. These are sophisticated technologies, but there are basic expectations, common in our law.

We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness. And limits on use, where the risk of AI is so extreme that we ought to impose restrictions or even ban their use, especially when it comes to commercial invasions of privacy for profit and decisions that affect people's livelihoods.

And of course, accountability, or liability. When AI companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes, for example, Section 230. Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all. Garbage in, garbage out: the principle still applies. We ought to be aware of the garbage, whether it's going into these platforms or coming out of them. And the ideas that we develop in this hearing, I think, will provide a solid path forward.

I look forward to discussing them with you today, and I will just finish on this note. The AI industry doesn't have to wait for Congress. I hope there are ideas and feedback from this discussion and from the industry, and voluntary action, such as we've seen lacking in many social media platforms, where the consequences have been huge. So I'm hoping that we will elevate rather than have a race to the bottom. And I think these hearings will be an important part of this conversation.

This one is only the first. The ranking member and I have agreed there should be more, and we're going to invite other industry leaders, some of whom have committed to come, as well as experts, academics, and the public, who we hope will participate.

And with that, I will turn to the ranking member, Senator Hawley. Thank you very much, Mr. Chairman. Thanks to the witnesses for being here. I appreciate that some of you had long journeys to make in order to be here. I appreciate you taking the time. I look forward to your testimony.

I want to thank Senator Blumenthal for convening this hearing and for being a leader on this topic. You know, a year ago we couldn't have had this hearing, because the technology that we're talking about had not burst into public consciousness. That gives us a sense, I think, of just how rapidly the technology that we're talking about today is changing and evolving and transforming our world right before our very eyes.

I was talking with someone just last night, a researcher in the field of psychiatry, who was pointing out to me that ChatGPT and generative AI, these large language models, are really like the invention of the internet in scale, at least, and potentially far, far more significant than that. We could be looking at one of the most significant technological innovations in human history.

And I think my question is, what kind of innovation is it going to be? Is it going to be like the printing press, which diffused knowledge and power and learning widely across the landscape, that empowered ordinary, everyday individuals, that led to greater flourishing, and that led, above all, to greater liberty? Or is it going to be more like the atom bomb: a huge technological breakthrough whose consequences, severe, terrible, continue to haunt us to this day?

I don't know the answer to that question. I don't think any of us in the room know the answer to that question, because I think the answer has not yet been written. And to a certain extent, it's up to us here, and to us as the American people, to write the answer. What kind of technology will this be? How will we use it to better our lives? How will we use it to actually harness the power of technological innovation for the good of the American people, for the liberty of the American people, not for the power of the few?

I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our capacity for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology we developed. That was a century ago, and I think the story of the 20th century largely bore him out. And I just wonder what we will say as we look back at this moment about these new technologies, about generative AI, about these language models, and about the host of other AI capacities that are even right now under development, not just in this country, but in China, in the countries of our adversaries, and all around the world. And I think the question that Jung posed is really the question that faces us: will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country? I hope that today's hearing will take us a step closer to that answer. Thank you, Mr. Chairman. Thanks, Senator Hawley.

I'm going to turn to the chairman of the Judiciary Committee and the ranking member, Senator Graham, if they have opening remarks as well.

Yes, Mr. Chairman. Thank you very much, Senator Hawley, as well. Last week in the full committee, the Senate Judiciary Committee, we dealt with an issue that had been waiting for attention for almost two decades, and that is what to do about social media when it comes to the abuse of children. We had four bills initially that were considered by this committee. And in what may be history in the making, we passed all four bills with unanimous roll calls, unanimous roll calls. I can't remember another time when we've done that on an issue that important. It's an indication, I think, of the important position of this committee in the national debate on issues that affect every single family and affect our future in a profound way.

1989 was a historic watershed year in America, because that's when Seinfeld arrived. We had a sitcom which was supposedly about little or nothing, which turned out to be enduring. I like to watch it, obviously, and I always marvel when they show the phones that he used in 1989. And I think about those in comparison to what we carry around in our pockets today. It's a dramatic change. And I guess the question, as I look at that, is whether this change in phone technology that we've witnessed through the sitcom really exemplifies a profound change in America. That is still unanswered. The basic question we face is whether this issue of AI is a quantitative change in technology or a qualitative change. The suggestions that I've heard from experts in the field indicate it's qualitative. Is AI fundamentally different? Is it a game changer? Is it so disruptive that we need to treat it differently than other forms of innovation? That's the starting point.

And the second starting point is one that's humbling, and that is the fact that when you look at the record of Congress in dealing with innovation, technology, and rapid change, we're not designed for that. In fact, the Senate was not created for that purpose, but just the opposite: slow things down, take a harder look at it, don't react to public sentiment, make sure you're doing the right thing. Well, I've heard of the positive potential of AI, and it is enormous. You can go through lists of the deployments of this technology: an idea for a website that you sketch on a napkin can generate functioning code; pharmaceutical companies could use the technology to identify new candidates to treat disease. The list goes on and on. And then, of course, the danger, and it's profound as well.

So I'm glad that this hearing is taking place, and I think it's important for all of us to participate. I'm glad that it's a bipartisan approach. We're going to have to scramble to keep up with the pace of innovation in terms of our government's public response to it. But this is a great start. Thank you, Mr. Chairman. Thanks, Senator Durbin. It is very much a bipartisan approach, very deeply and broadly bipartisan, and in that spirit, I'm going to turn to my friend Senator Graham.

Thank you. That was not written by AI, for sure. Let me introduce now the witnesses. We're very grateful to you for being here. Sam Altman is the co-founder and CEO of OpenAI, the AI research and deployment company behind ChatGPT and DALL-E. Mr. Altman was president of the early-stage startup accelerator Y Combinator from 2014 to 2019. OpenAI was founded in 2015.

Christina Montgomery is IBM's vice president and chief privacy and trust officer, overseeing the company's global privacy program, policies, compliance, and strategy. She also chairs IBM's AI ethics board, a multidisciplinary team responsible for the governance of AI and emerging technologies. Christina has served in various roles at IBM, including corporate secretary to the company's board of directors. She is a global leader in AI ethics and governance. Ms. Montgomery is also a member of the United States Chamber of Commerce AI Commission and the United States National AI Advisory Committee, which was established in 2022 to advise the President and the National AI Initiative Office on a range of topics related to AI.

Gary Marcus is a leading voice in artificial intelligence. He's a scientist, best-selling author, and entrepreneur, founder of Robust AI and of Geometric Intelligence, acquired by Uber, if I'm not mistaken, and an emeritus professor of psychology and neuroscience at NYU. Mr. Marcus is well known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. Thank you for being here. And as you may know, our custom on the Judiciary Committee is to swear in our witnesses before they testify. So if you would all please rise and raise your right hand: do you solemnly swear that the testimony that you are going to give is the truth, the whole truth, and nothing but the truth, so help you God?

Thank you. Mr. Altman, we're going to begin with you if that's okay. Thank you.

Thank you, Chairman Blumenthal, Ranking Member Hawley, and members of the Judiciary Committee. Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here, even more so in the moment than I expected. My name is Sam Altman. I'm the chief executive officer of OpenAI. OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage. We're here because people love this technology. We think it can be a printing press moment. We have to work together to make it so.

OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by a nonprofit, and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of AI and to maximizing the safety of AI systems. We are working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer. Our current systems aren't yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today.

We love seeing people use our tools to create, to learn, to be more productive. We're very optimistic that there are going to be fantastic jobs in the future, and that current jobs can get much better. We also have seen what developers are doing to improve lives. For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment. We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work, and we make significant efforts to ensure that safety is built into our systems at all levels. Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems.

Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous-capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the US government might consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities. There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination.

As you mentioned, I think it's important that companies have their own responsibility here no matter what Congress does. This is a remarkable time to be working on artificial intelligence. But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too. But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind, and this means that US leadership is critical. I believe that we will be able to mitigate the risks in front of us and really capitalize on this technology's potential to grow the US economy and the world. And I look forward to working with you all to meet this moment and I look forward to answering your questions. Thank you.

Thank you, Mr. Altman. Ms. Montgomery. Chairman Blumenthal, Ranking Member Hawley, and members of the subcommittee, thank you for today's opportunity to present. AI is not new, but it's certainly having a moment. Recent breakthroughs in generative AI and the technology's dramatic surge in public attention have rightfully raised serious questions at the heart of today's hearing. What are AI's potential impacts on society? What do we do about bias? What about misinformation, misuse, or harmful content generated by AI systems? Senators, these are the right questions, and I applaud you for convening today's hearing to address them.

While AI may be having its moment, the moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests. But at its core, AI is just a tool, and tools can serve different purposes. To that end, IBM urges Congress to adopt a precision regulation approach to AI. This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself.

Such an approach would involve four things. First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to people and society. Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI-supported activity that are inherently high-risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent. AI shouldn't be hidden. Consumers should know when they're interacting with an AI system, and that they have recourse to engage with a real person, should they so desire. No person anywhere should be tricked into interacting with an AI system. And finally, showing the impact. For higher-risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public, and to attest that they've done so.

By following a risk-based, use-case-specific approach, the core of precision regulation, Congress can mitigate the potential risks of AI without hindering innovation. But businesses also play a critical role in ensuring the responsible deployment of AI. Companies active in developing or using AI must have strong internal governance, including, among other things, designating a lead AI ethics official responsible for an organization's trustworthy AI strategy, and standing up an ethics board or a similar function as a centralized clearinghouse for resources to help guide implementation of that strategy.

IBM has taken both of these steps, and we continue calling on our industry peers to follow suit. Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner. It provides centralized governance and accountability while still being flexible enough to support decentralized initiatives across IBM's global operations. We do this because we recognize that society grants our license to operate. And with AI, the stakes are simply too high. We must build, not undermine, the public trust. The era of AI cannot be another era of move fast and break things. But we don't have to slam the brakes on innovation either. These systems are within our control today, as are the solutions. What we need at this pivotal moment is clear, reasonable policy and sound guardrails.

These guardrails should be matched with meaningful steps by the business community to do their part. Congress and the business community must work together to get this right. The American people deserve no less. Thank you for your time, and I look forward to your questions. Thank you, Ms. Montgomery. Professor Marcus. Thank you, Senators.

Today's meeting is historic. I'm profoundly grateful to be here. I come as a scientist, someone who's founded AI companies, and someone who genuinely loves AI, but who is increasingly worried. There are benefits, but we don't yet know whether they will outweigh the risks. Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about the data sets that AI companies use will have enormous unseen influence. Those who choose the data will make the rules, shaping society in subtle but powerful ways.

There are other risks too, many stemming from the inherent unreliability of current systems. A law professor, for example, was falsely accused by a chatbot of sexual harassment, and it pointed to a Washington Post article that didn't even exist. The more that that happens, the more that anybody can deny anything. As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the ability of juries to decide what or who to believe, and contribute to the undermining of democracy. Poor medical advice could have serious consequences too. An open-source large language model recently seems to have played a role in a person's decision to take their own life. The large language model asked the human, "If you wanted to die, why didn't you do it earlier?" and then followed up with, "Were you thinking of me when you overdosed?" without ever referring the patient to the human help that was obviously needed. Another system, rushed out and made available to millions of children, told a person posing as a 13-year-old how to lie to her parents about a trip with a 31-year-old man. Further threats continue to emerge regularly.

A month after GPT-4 was released, OpenAI released ChatGPT plugins, which quickly led others to develop something called AutoGPT, with direct access to the internet, the ability to write source code, and increased powers of automation. This may well have drastic and difficult-to-predict security consequences. What criminals are going to do here is create counterfeit people. It's hard to even envision the consequences of that. We have built machines that are like bulls in a china shop: powerful, reckless, and difficult to control.

We all more or less agree on the values we would like for our AI systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and, above all else, to be safe.

But current systems are not in line with these values. Current systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias. And even their makers don't entirely understand how they work. Most of all, we cannot remotely guarantee that they're safe. And hope here is not enough.

The big tech companies' preferred plan boils down to "trust us." But why should we? The sums of money at stake are mind-boggling. Missions drift. OpenAI's original mission statement proclaimed, "Our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

Seven years later, they're largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up. And that's forced Alphabet to rush out products and de-emphasize safety. Humanity has taken a back seat. AI is moving incredibly fast, with lots of potential, but also lots of risks.

We obviously need government involved, and we need the tech companies involved, both big and small. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems and in evaluating solutions. And not just after products are released, but before. And I'm glad that Sam mentioned that.

We need tight collaboration between independent scientists and governments in order to hold the companies' feet to the fire. Allowing independent scientists access to these systems before they are widely released, as part of a clinical-trial-like safety evaluation, is a vital first step. Ultimately, we need something like CERN: global, international, and neutral, but focused on AI safety rather than high-energy physics.

We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability. AI is among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history.

We acted too slowly with social media. Many unfortunate decisions got locked in with lasting consequence. The choices we make now will have lasting effects for decades, maybe even centuries.

The very fact that we are here today in bipartisan fashion to discuss these matters gives me some hope. Thank you, Mr. Chairman.

Thanks very much, Professor Marcus. We're going to have seven minute rounds of questioning, and I will begin.

First of all, Professor Marcus, we are here today because we do face that perfect storm. Some of us might characterize it more like a bomb in a china shop, not a bull. And as Senator Hawley indicated, there are precedents here, not only the atomic warfare era, but also the Genome Project, the research on genetics, where there was international cooperation as a result. And we want to avoid those past mistakes, as I indicated in my opening statement, that we committed on social media. That is precisely the reason we are here today.

ChatGPT makes mistakes; all AI does. And it can be a convincing liar, what people call hallucinations. That might be an innocent problem in the opening of a Judiciary subcommittee hearing, where a voice is impersonated, mine, in this instance, or quotes from research papers that don't exist. But ChatGPT and Bard are willing to answer questions about life-or-death matters, for example, drug interactions, and those kinds of mistakes can be deeply damaging.

I'm interested in how we can have reliable information about the accuracy and trustworthiness of these models, and how we can create competition and consumer disclosures that reward greater accuracy. The National Institute of Standards and Technology actually already has an AI accuracy test, the Face Recognition Vendor Test. It doesn't solve for all the issues with facial recognition, but the scorecard does provide useful information about the capabilities and flaws of these systems.

So there is work on models to assure accuracy and integrity. My question, and let me begin with you, Mr. Altman, is: should we consider independent testing labs to provide scorecards and nutrition labels, or the equivalent of nutrition labels, packaging that indicates to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out?

Yeah, I think that's a great idea. I think that companies should put out their own, sort of, you know, here are the results of our tests of our model before we release it, here's where it has weaknesses, here's where it has strengths. But independent audits for that are also very important. These models are getting more accurate over time. This is, as we have, I think, said as loudly as anyone, a technology in its early stages. It definitely still makes mistakes.

We find that users are pretty sophisticated and understand where the mistakes are, or are likely to be, and that they need to be responsible for verifying what the models say, to go off and check it. I worry that as the models get better and better, users may apply less and less of their own discriminating thought process around them. But I think users are more capable than we often give them credit for in conversations like this.

I think a lot of disclosures, which, if you've used ChatGPT, you'll see about the inaccuracies of the model, are also important. And I'm excited for a world where companies publish, with the models, information about how they behave and where the inaccuracies are, and independent agencies or companies provide that as well. I think it's a great idea.

I alluded in my opening remarks to the jobs issue, the economic effects on employment. I think you have said, in fact, and I'm going to quote, "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." End quote. You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term.

Let me ask you what your biggest nightmare is, and whether you share that concern. Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we go back to the other side of a previous technological revolution and the predictions about the jobs that would exist on the other side of it, you can go back and read the books of the time. It's what people said then. It's difficult.

I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. First of all, I think it's important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused about. And it's a tool that people have a great deal of control over, in how they use it.

And second, GPT-4 and other systems like it are going to do tasks, not jobs. And so you already see people using GPT-4 to do their jobs much more efficiently, by having it help them with tasks. Now, GPT-4 will, I think, entirely automate away some jobs, and it will create new ones that we believe will be much better. This happens, again, because my understanding of the history of technology is of one long technological revolution, not a bunch of different ones put together.

But this has been continually happening. As our quality of life rises, and as the machines and tools that we create help us live better lives, the bar rises for what we do, and we spend our time going after more ambitious, more satisfying projects. So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that. But I'm very optimistic about how great the jobs of the future will be. Thank you.

Let me ask Ms. Montgomery and Professor Marcus for your reactions to those questions as well. Ms. Montgomery? On the jobs point, yeah, I mean, it's a hugely important question, and it's one that we've been talking about for a really long time at IBM.

We do believe, and we've said it for a long time, that AI is going to change every job. New jobs will be created, many more jobs will be transformed, and some jobs will transition away. I'm a personal example of a job that didn't exist when I joined IBM, and I have a team of AI governance professionals who are in new roles that we created as early as three years ago.

I mean, they're new, and they're growing. So I think the most important thing that we could be doing, and can and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we've been very involved for years now in doing that, in focusing on skills-based hiring and educating for the skills of the future.

Our SkillsBuild platform has seven million learners and over a thousand courses worldwide focused on skills, and we've pledged to train 30 million individuals by 2030 in the skills that are needed for society today. Thank you. Professor Marcus?

May I go back to the first question as well? Absolutely. On the subject of nutrition labels, I think we absolutely need to do that. There are some technical challenges: building proper nutrition labels goes hand in hand with transparency. The biggest scientific challenge in understanding these models is how they generalize. What do they memorize, and what new things do they do? The more there is in the data set of the thing that you want to test accuracy on, for example, the less you can get a proper read on that.

So it's important, first of all, that scientists be part of that process, and second, that we have much greater transparency about what actually goes into these systems. If we don't know what's in them, then we don't know exactly how well they're doing when they're given something new, and we don't know how good a benchmark that will be for something that's entirely novel. So I could go into that more, but I want to flag that.

Second, on jobs: past performance history is not a guarantee of the future. It has always been the case in the past that new jobs and new professions come in as new technologies come in. I think this one is going to be different. And the real question is, over what time scale? Is it going to be 10 years? Is it going to be 100 years? I don't think anybody knows the answer to that question.

I think in the long run, so-called artificial general intelligence really will replace a large fraction of human jobs. We're not that close to artificial general intelligence, despite all of the media hype and so forth. I would say that what we have right now is just a small sampling of the AI that we will build. In 20 years, people will laugh at this, as, I think it was Senator Durbin who made the example about cell phones. When we look back at the AI of today 20 years from now, we'll be like, wow, that stuff was really unreliable. It couldn't really do planning, which is an important technical aspect, and its reasoning abilities were limited. But when we get to AGI, artificial general intelligence, maybe, let's say, in 50 years, that really is going to have, I think, profound effects on labor. And there's no way around that.

And last, I don't know if I'm allowed to do this, but I will note that Sam's worst fear, I do not think, is employment, and he never told us what his worst fear actually is. And I think it's germane to find out.

Thank you. I'm going to ask Mr. Altman if he cares to respond. Yeah. Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we're all going to do with our time really matters. I agree that when we get to very powerful systems, the landscape will change. I think I'm just more optimistic that we are incredibly creative, and we find new things to do with better tools, and that will keep happening.

My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. It's why we started the company. It's a big part of why I'm here today, and why we've been here in the past and been able to spend some time with you. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is, and the work that we have to do to mitigate that.

Thank you. And our hope is that the rest of the industry will follow the example that you and Ms. Montgomery of IBM have set by coming today, and by meeting with us, as you have done privately, in helping to guide what we're going to do, so that we can target the harms and avoid unintended consequences to the good.

Thank you.

Senator Hawley. Thank you, Mr. Chairman. Thanks to the witnesses for being here. Mr. Altman, I think you grew up in St. Louis, if I'm not mistaken. I did. It's great to see you. It's a great place. It is. Thank you. I want that noted, especially underlined in the record: Missouri is a great place. That is the takeaway from today's hearing. Maybe we'll just stop there, Mr. Chairman.

Let me ask you, Mr. Altman, I think I'll start with you. And I'll just preface this by saying that my questions here are an attempt to get my head around, and to ask all of you to help us get our heads around, what this generative AI, particularly the large language models, can do. So I'm trying to understand its capacities and then its significance.

So I'm looking at a paper here entitled "Large Language Models Trained on Media Diets Can Predict Public Opinion," just posted about a month ago; the authors are Chu, Andreas, Ansolabehere, and Roy, and the work was done at MIT and also at Google. Their conclusion is that large language models can indeed predict public opinion, and they go through and model why this is the case. They conclude ultimately that an AI system can predict human survey responses by adapting a pre-trained language model to subpopulation-specific media diets.

In other words, you can feed the model a particular set of media inputs, and it can, with remarkable accuracy, the paper goes into this, predict what people's opinions will be. I want to think about this in the context of elections. These large language models can, even now, based on the information we put into them, quite accurately predict public opinion ahead of time, I mean, predict it before you even ask the public these questions.

What will happen when entities, whether corporate entities, or governmental entities, or campaigns, or foreign actors, take this survey information, these predictions about public opinion, and then fine-tune strategies to elicit certain responses, certain behavioral responses? I mean, we already know, this committee has heard testimony, I think three years ago now, about the effect of something as prosaic, now it seems, as Google search, the effect that this has on voters.

In an election, particularly in the final days of an election, undecided voters may try to get information from Google search, and the ranking of Google search results, the articles that it returns, has an enormous effect on those undecided voters. This, of course, is orders of magnitude more powerful, more significant, more directive, if you like.

So, Mr. Altman, maybe you can help me understand here what some of the significance of this is. Should we be concerned about large language models that can predict survey opinion and then can help organizations fine-tune strategies to elicit behaviors from voters? Should we be worried about this for our elections?

Thank you, Senator Hawley, for the question. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation. I think that's a broader version of what you're talking about. But given that we're going to face an election next year, and these models are getting better, I think this is a significant area of concern. I think there are a lot of policies that companies can voluntarily adopt, and I'm happy to talk about what we do there. I do think some regulation would be quite wise on this topic.

Someone mentioned earlier, and it's something we really agree with: people need to know if they're talking to an AI, if content that they're looking at might be generated or might not. I think it's a great thing to make that clear. I think we will also need rules and guidelines about what's expected in terms of disclosure from a company providing a model that could have the sorts of abilities you talk about. I'm nervous about it. I think people were able to adapt quite quickly when Photoshop came onto the scene a long time ago. For a while, people were really quite fooled by Photoshopped images, and then pretty quickly developed an understanding that images might be Photoshopped. This will be like that, but on steroids. And the interactivity, the ability to really model and predict humans, as you talked about, I think is going to require a combination of companies doing the right thing, regulation, and public education.

Professor Marcus, do you want to address this?
马库斯教授,您想对此发表意见吗?

Yeah, I'd like to add two things. One is that in the appendix to my remarks, I have two papers to make you even more concerned. One is in the Wall Street Journal, just a couple of days ago, called "Help! My Political Beliefs Were Altered by a Chatbot."
嗯,我想添加两件事情。一件是在我的发言附录中,我有两篇论文可以让你更加担心。其中一篇是在《华尔街日报》上,就在几天前,题为“帮我,我的政治信仰被聊天机器人改变了”。

The scenario you raised was that we might basically observe people and use surveys to figure out what they're saying, but the risk is actually worse: that the systems will directly, maybe not even intentionally, manipulate people. That was the thrust of the Wall Street Journal article, and it links to a paper that I've also linked to.
你提出的情景是,我们基本上通过观察人们、使用调查来了解他们在说什么,但风险实际上更糟:这些系统会直接地、甚至可能是无意地操纵人们。这是《华尔街日报》那篇文章的核心,文中还链接了一篇我同样附上的论文。

That paper is not yet published, not yet peer reviewed: interacting with opinionated language models changes users' views. This comes back ultimately to data. One of the things that I'm most concerned about with GPT-4 is that we don't know what it's trained on. I guess Sam knows, but the rest of us do not. And what it is trained on has consequences for, essentially, the biases of the system.
那篇论文尚未发表,也未经同行评审:与持有观点的语言模型互动会改变用户的看法。这最终归结于数据。对于GPT-4,我最担心的一点是我们不知道它是用什么数据训练的。也许Sam知道,但我们其他人不知道。而它的训练数据实质上决定了系统的偏见。

We could talk about that in technical terms, but how these systems might lead people around depends very heavily on what data they are trained on. We need transparency about that, and we probably need scientists in there doing analysis in order to understand what the political influences of these systems, for example, might be.
我们可以用技术术语谈论这个问题,但这些系统如何影响人们取决于它们所训练的数据。我们需要透明度,并且可能需要科学家进行分析,以了解例如这些系统可能产生的政治影响。

It's not just about politics. It can be about health; it could be about anything. These systems absorb a lot of data, and what they say reflects that data, and they're going to do it differently depending on what's in that data. It makes a difference if they're trained on the Wall Street Journal as opposed to the New York Times or Reddit. Actually, they're largely trained on all of this stuff, but we don't really understand the composition of that.
这不仅仅是关于政治的问题,还可能与健康或其他方面相关。这些系统吸收了大量数据,所反映的内容也取决于数据的不同,从而产生不同的结果。如果这些系统的训练数据来源于《华尔街日报》、《纽约时报》或Reddit等不同的平台,结果也会有所不同。尽管实际上,这些系统的训练数据往往来自于多种平台,但我们并不完全理解其组成。

We have this issue of potential manipulation. It's even more complex than that because it's subtle manipulation. People may not be aware of what's going on. That was the point of both the Wall Street Journal article and the other article that I called your attention to.
我们面临潜在操纵的问题。更为复杂的是,这是一种微妙的操纵。人们可能不会意识到发生了什么。这就是《华尔街日报》文章和我向你提到的另一篇文章的重点。

Let me ask you about AI systems trained on personal data, the kind of data that, for instance, the social media companies, the major platforms, Google, Meta, et cetera, collect on all of us routinely. We've had many a chat about this in this committee over many a year now: the massive amounts of personal data that the companies have on each one of us.
让我问一下用个人数据训练的人工智能系统。例如,社交媒体公司、各大平台(Google、Meta等)日常收集的我们所有人的那类数据。多年来,我们在这个委员会里已经多次讨论过这个问题:这些公司掌握着我们每个人的海量个人数据。

An AI system that is trained on that individual data, that knows each of us better than ourselves, and that also knows the billions of data points about human behavior and language interaction generally: can't we foresee an AI system that is extraordinarily good at determining what will grab human attention and what will keep an individual's attention? The war for attention, the war for clicks, that is currently going on on all of these platforms is how they make their money.
一种基于个人数据训练的AI系统可以比我们更好地了解每个人,同时也了解有关人类行为和语言交互的数十亿数据点。我们难道不能预见到这样一种AI系统会极其擅长确定什么能够引起人们的注意力,并且保持个体的注意力吗?对于竞争人们的注意力、点击率的竞争而言,这些平台目前正是以此赚钱。

I'm just imagining an AI system, these AI models, supercharging that war for attention such that we now have technology that will allow individual targeting of a kind we have never even imagined before. The AI will know exactly what Sam Altman finds attention-grabbing, will know exactly what Josh Hawley finds attention-grabbing, and will be able to grab our attention and then elicit responses from us in a way that we have heretofore not even been able to imagine.
我正在想象一个AI系统,这些AI模型将使这场注意力之战升级,以至于我们现在拥有了一种前所未有的个体定向技术。AI将确切地知道什么最能吸引Sam Altman的注意,确切地知道什么最能吸引Josh Hawley的注意,进而能够抓住我们的注意力,并以一种我们迄今从未能想象的方式引出我们的反应。

Should we be concerned about that for its corporate applications, for the monetary applications, for the manipulation that could come from it, Mr. Altman? Yes, we should be concerned about that. To be clear, OpenAI does not have an ad-based business model, so we're not trying to build up these profiles of our users.
Altman先生,我们是否应该为其企业应用、盈利应用以及可能由此产生的操纵而担忧?是的,我们应该为此担忧。需要说明的是,OpenAI没有基于广告的商业模式,因此我们不会试图建立这类用户画像。

We're not trying to get them to use it more. Actually, we'd love it if they use it less because we don't have enough GPUs. But I think other companies are already, and certainly will in the future, use AI models to create very good ad predictions of what a user will like. I think that's already happening in many ways.
我们并不想让用户更频繁地使用它。实际上,我们希望他们使用得更少,因为我们没有足够的GPU。但我认为其他公司已经开始并且今后一定会使用人工智能模型来创建用户可能喜欢的非常好的广告预测。我认为这已经在很多方面发生了。

Mr. Marcus, anything you want to add? Hyper-targeting? Yes, and perhaps Ms. Montgomery will want to weigh in as well. Hyper-targeting of advertising is definitely going to come. I agree that that has not been OpenAI's business model. Of course, now they're working with Microsoft, and I don't know what's in Microsoft's thoughts, but we will definitely see it.
马库斯先生,您有什么要补充的吗?超精准定向?是的,也许蒙哥马利女士也想就此发表看法。广告的超精准定向肯定会出现。我同意这一直不是OpenAI的商业模式。当然,他们现在在与微软合作,我不知道微软怎么想,但我们肯定会看到这种情况。

Maybe it will be with open-source language models, I don't know, but the technology is, let's say, partway there to being able to do that, and we'll certainly get there. So, we're an enterprise technology company, not consumer-focused, so this isn't a space we necessarily operate in. But these issues are hugely important.
也许会是通过开源语言模型,我不确定,但可以说这项技术已经部分具备了这种能力,而且我们肯定会走到那一步。那么,我们是一家企业级技术公司,不面向消费者,所以这并不是我们必然涉足的领域。但这些问题是极其重要的问题。

And it's why we've been out ahead in developing the technology that helps ensure you can do things like produce a fact sheet with the ingredients of what your model is trained on: data sheets, model cards, all those types of things. And it's why we're calling, as I've mentioned today, for transparency, so you know what the algorithm was trained on, and then you can also manage and monitor continuously, over the lifecycle of an AI model, the behavior and the performance of that model.
这就是为什么我们一直处于发展先进技术的前沿,以确保您能够制作出包含数据训练成分的信息单。这些信息单包括数据表、模型卡和其他相关内容。我们呼吁透明度,这样您就可以了解算法是如何进行训练的。此外,您还可以不断地管理和监控人工智能模型的行为和性能,以确保其持续运行。

Senator Durbin. Thank you. I think what's happening today in this hearing room is historic. I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them. In fact, many people in the Senate have based their careers on the opposite: that the economy will thrive if government gets the hell out of the way. And what I'm hearing instead today is that "stop me before I innovate again" message.
参议员杜宾,谢谢。我认为今天在这个听证会室里发生的事是历史性的。我想不起来我们曾经有过大公司或私营企业代表前来请求我们对他们进行监管的情况。事实上,许多参议员基于相反的观点来发展自己的职业生涯,即如果政府退出干预,经济将会蓬勃发展。而今天我听到的是“在我再次创新之前制止我”的信息。

And I'm just curious as to how we're going to achieve this. As I mentioned Section 230 in my opening remarks, we learned something there. We decided in Section 230 that we were basically going to absolve the industry from liability for a period of time as it came into being. Well, Mr. Altman, on a podcast earlier this year, you agreed with host Kara Swisher that Section 230 doesn't apply to generative AI and that developers like OpenAI should not be entitled to full immunity for harms caused by their products. So what have we learned from 230 that applies to your situation with AI? Thank you for the question, Senator. I don't know yet exactly what the right answer here is, and I'd love to collaborate with you to figure it out. I do think that for a very new technology we need a new framework. Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well.
我很好奇我们将如何实现这一目标。正如我在开场白中提到的第230条,我们从中学到了一些东西。我们在第230条中决定,在这个行业刚刚兴起的一段时间内,基本上免除其责任。Altman先生,在今年早些时候的一期播客中,您同意主持人Kara Swisher的观点,即第230条不适用于生成式人工智能,像OpenAI这样的开发者不应就其产品造成的损害获得完全豁免。那么,我们从第230条中学到了什么,可以适用于你们人工智能的情况?感谢您的提问,参议员。我还不知道确切的正确答案是什么,我很愿意与您合作找出答案。我确实认为,对于一项非常新的技术,我们需要一个新的框架。当然,像我们这样的公司要为我们推向世界的工具承担很大责任,但工具的使用者同样要承担责任。

And also people that will build on top of it, between them and the end consumer. How we want to come up with a liability framework there is a super important question, and we'd love to work together. The point I want to make is this: when it came to online platforms, the inclination of the government was to get out of the way. This is a new industry; don't overregulate it; in fact, give them some breathing space and see what happens. I'm not sure I'm happy with the outcome as I look at online platforms and the harms that they've created, problems that we've seen demonstrated in this committee: child exploitation, cyberbullying, online drug sales, and more.
还有那些在其之上进行开发的、介于公司与最终消费者之间的人。如何在这里建立责任框架是一个极其重要的问题,我们很乐意共同努力。我想表明的是:对于在线平台,政府当时的倾向是不去干预。这是一个新兴行业,不要过度监管,给他们一些喘息空间,看看会发生什么。但看看在线平台及其造成的伤害,那些在本委员会得到证实的问题,如儿童剥削、网络霸凌、网上毒品交易等,我不确定我对这个结果感到满意。

I don't want to repeat that mistake again. And what I hear is the opposite suggestion from the private sector, and that is: get out in front of this thing and establish some liability standards, precision regulation. For a major company like IBM to come before this committee and say to the government, please regulate us. Can you explain the difference in thinking from the past and now?
我不想再重复那个错误了。而我听到的是私营部门提出相反的建议,他们希望政府制定一些责任标准和精确的规定来面对这件事情。像IBM这样的大公司走到委员会前面请求政府对我们进行监管,这种思维和过去有什么不同吗?

Yeah, absolutely. So for us, this comes back to the issue of trust, and trust in the technology. Trust is our license to operate, as I mentioned in my remarks. And so we firmly believe in, and have been calling for, precision regulation of artificial intelligence for years now. This is not a new position. We think that technology needs to be deployed in a responsible and clear way. We've adopted principles around that, trust and transparency we call them, principles that were articulated years ago, and we've built them into practices. That's why we're here advocating for a precision regulation approach.
是的,完全正确。对我们来说,这涉及到信任和对技术的信任问题。正如我在发言中提到的那样,信任是我们运营的许可证。因此,我们坚信,多年来我们一直呼吁对人工智能进行精密监管。这不是一个新立场。我们认为技术需要以负责和明确的方式部署。我们已经制定了关于信任和透明度的原则,这些原则在多年前就已经形成,并纳入实践。这就是为什么我们在倡导精密监管方法的原因。

So we think that AI should be regulated at the point of risk, essentially, and that's the point at which technology meets society. Let's take a look at what that might be. Members of Congress are a pretty smart lot of people, maybe not as smart as we think we are many times, and government certainly has a capacity to do amazing things. But when you talk about our ability to respond to the current challenge, and to perceive challenges in the future, challenges which you all have described in terms which are hard to forget...
因此,我们认为人工智能应该在风险点有所规范,也就是在技术与社会相遇的地方。让我们看看这可能是什么样子。国会议员是非常聪明的人,也许不如我们经常认为的聪明。政府当然有能力做出惊人的事情。但是当你谈到我们应对当前挑战和未来挑战的能力时,这些挑战你们所有人都以难忘的方式描述了出来。

As you said, Mr. Altman, things can go quite wrong. As you said, Mr. Marcus, democracy is threatened. I mean, the magnitude of the challenge you're giving us is substantial. I'm not sure that we can respond quickly and with enough expertise to deal with it. Professor Marcus, you made a reference to CERN, the international arbiter of nuclear research, I suppose. I don't know if that's a fair characterization, but it's the characterization I'll start with.
正如你所说的,奥尔特曼先生,事情可能会出现严重问题。正如你所说的,马库斯先生,民主受到了威胁。我是说,你提出的挑战是巨大的。我不确定我们能够快速并且有足够专业的知识来处理它。马库斯教授,你提到了欧洲核子研究组织CERN,一个国际的核研究仲裁机构,我想。我不知道这是否是一个公正的描述,但这是我开头的描述。

What agency of this government do you think exists that can respond to the challenge that you've laid down today? We have many agencies that can respond in some ways, for example the FTC; we have the FCC. There are many agencies that can, but my view is that we probably need a cabinet-level organization within the United States in order to address this. And my reasoning for that is that the number of risks is large, and the amount of information to keep up on is so much. I think we need a lot of technical expertise.
你认为这个政府中现有哪个机构能够应对你今天提出的挑战?我们有很多机构可以在某些方面做出回应,例如联邦贸易委员会(FTC)、联邦通信委员会(FCC)。有很多机构可以回应,但我的观点是,我们可能需要在美国设立一个内阁级别的组织来应对这个问题。我的理由是,风险数量很大,需要跟进的信息量也很大,我认为我们需要大量的技术专长。

I think we need a lot of coordination of these efforts. So there is one model here where we stick to only existing law, try to shape all of what we need to do around it, and each agency does its own thing. But I think that AI is going to be such a large part of our future, and is so complicated and moving so fast, that having an agency whose full-time job is to do this does not fully solve the problem of a dynamic world, but it's a step in that direction.
我认为我们需要对这些努力进行充分协调。因此,有一个模式是只遵循现有法律,尝试塑造我们需要做的所有事情,每个机构都做自己的事情。但我认为人工智能将成为我们未来非常重要且非常复杂并且发展非常迅速的一部分。虽然它并不能完全解决你们动态世界的问题,但是建立一个专职机构做这件事情是朝着这个方向迈出了一步。

I personally have suggested, in fact, that we should want to do this in a global way. I wrote an invited essay for The Economist, and I have a link to it here, suggesting we might want an international agency for AI. That's the point I wanted to go to next. I'll set aside the CERN and nuclear examples, because government was involved in those from day one, at least in the United States. But now we're dealing with innovation which doesn't necessarily have a boundary.
事实上,我个人建议过,我们应该以全球的方式来做这件事。我为《经济学人》撰写了一篇受邀文章,这里附有链接,建议我们也许需要一个国际性的人工智能机构。这正是我接下来想谈的要点。我先撇开CERN和核能的例子,因为至少在美国,政府从一开始就参与其中。但我们现在面对的创新并不一定有边界。

We may create a great US agency, and I hope that we do, that may have jurisdiction over US corporations and US activity, but it won't have a thing to do with what's going to bombard us from outside the United States. How do you give this international authority the authority to regulate in a fair way for all entities involved in AI? I think that's probably above my pay grade. I would like to see it happen, and I think it may be inevitable that we push there.
我们可能会创建一个出色的美国机构,我也希望我们这样做。它也许对美国的公司和美国境内的活动有管辖权,但对从美国之外向我们涌来的东西无能为力。如何赋予这个国际机构权力,以便公平地监管所有参与AI的实体?我认为这可能超出了我的能力范围。我希望看到它实现,而且我认为我们朝这个方向推进也许是不可避免的。

I think the politics behind it are obviously complicated. I'm really heartened by the degree to which this room is bipartisan and supporting the same things and that makes me feel like it might be possible. I would like to see the United States take leadership in such organization. It has to involve the whole world and not just the US to work properly. I think even from the perspective of the companies, it would be a good thing.
我认为这背后的政治因素显然很复杂。我非常欣慰的是,这个房间的两党支持同样的事情,这让我觉得可能性更大。我希望看到美国在这样的组织中担任领袖角色。它必须包括全世界而不仅仅是美国才能正常运行。我认为即使从企业的角度来看,这也是一件好事。

The companies themselves do not want a situation where you take these models, which are expensive to train, and you have to have 190 of them, one for every country. That wouldn't be a good way of operating. Think about the energy costs alone, just for training these systems. It would not be a good model if every country had its own policies and, for each jurisdiction, every company had to train another model. And maybe different states are different, so Missouri and California have different rules.
企业自身并不希望出现这样的情况:训练成本贵重的模型需要每个国家都配备190个,这种运营方式并不明智。光是为训练这些系统所花费的能源成本就非常高昂。如果每个国家都有自己的政策,每家公司都必须训练另一个模型,而不同的州也许又有不同的规定,这肯定不是一个好的模式。例如,密苏里州和加利福尼亚州的规则是不同的。

That then requires even more training of these expensive models, with huge climate impact. It would be very difficult for the companies to operate if there were no global coordination. I think that we might get the companies on board if there's bipartisan support here, and I think there's support around the world; it is entirely possible that we could develop such a thing. But obviously there are many nuances of diplomacy here that are above my pay grade. I would love to learn from you all to try to help make that happen.
这需要更多的训练,用于训练那些具有巨大气候影响的昂贵模型。如果缺乏全球协调,企业运营将非常困难。 我认为,如果这里有两党的支持,以及全球范围内的支持,我们可能会让企业加入,从而实现此事。但是,显然有许多外交上的细微差别超过了我的能力范围。我很愿意向你们学习,以尝试帮助实现这一目标。

Mr. Altman? Can I weigh in just briefly? Briefly, please. I want to echo support for what Mr. Marcus said. I think the US should lead here and do things first, but to be effective we do need something global, as you mentioned. This can happen everywhere. There is precedent. I know it sounds naive to call for something like this, and it sounds really hard, but there is precedent: we've done it before with the IAEA. We've talked about doing it for other technologies.
阿特曼先生,我可以简短发表一下吗?请简短一点。我想重申马库斯先生所说的支持。我认为美国应该在这方面发挥引领作用并率先行动,但为了有效,我们确实需要一些全球性的东西,正如您所提到的。这可以在任何地方发生。有前例可循。我知道呼吁这样的事情听起来很天真,而且听起来非常困难。我们以前已经在国际原子能机构方面做过这样的事情。我们也讨论过为其他技术开展这样的事情。

Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of. That is actually workable, even though on its face it sounds like an impractical idea. I think it would be great for the world. Thank you, Mr. Chairman.
考虑到制造这些模型所需的条件,包括芯片供应链、数量有限的高性能GPU,以及美国对这些公司的影响力,我认为美国有路径制定一些其他国家需要配合并参与的国际标准。尽管表面上听起来不切实际,但这实际上是可行的。我认为这将对世界大有裨益。谢谢主席先生。

Thanks, Senator Durbin. In fact, I think we're going to hear more about what Europe is doing. The European Parliament already is acting on an AI Act. On social media, Europe is ahead of us; we need to be in the lead, so I think your point is very well taken. Let me turn to Senator Graham.
感谢杜尔宾参议员。实际上,我认为我们将听到更多关于欧洲正在做的事情。欧洲议会已经在推进一部人工智能法案。在社交媒体方面,欧洲走在我们前面;我们需要占据领先地位,所以我认为你的观点非常正确。下面请格雷厄姆参议员。

Senator Blackburn. Thank you, Mr. Chairman. And thank you all for being here with us today.
感谢主席,感谢各位今天与我们在场。

I put into my ChatGPT account: should Congress regulate AI? And ChatGPT gave me four pros and four cons, and said ultimately the decision rests with Congress and deserves careful consideration. So on that, it was very balanced.
我在我的聊天GPT账户中输入了“国会应该监管人工智能聊天GPT”,它为我列出了四个优点、四个缺点,并表示最终决定应由国会做出,并值得仔细考虑。因此,在这个问题上,它非常平衡。

I recently visited with the Nashville Technology Council. I represent Tennessee. And of course, you had people there from health care, financial services, logistics, educational entities and they're concerned about what they see happening with AI, with the utilization for their companies.
我最近参观了纳什维尔技术协会。我代表田纳西州。当然,那里有来自医疗保健、金融服务、物流、教育机构等各行各业的人,他们关心他们所在公司正在发生的人工智能利用方面的情况。

Ms. Montgomery, you know, similar to you, they've got health care people looking at disease analytics, looking at predictive diagnosis, at how this can better the outcomes for patients. The logistics industry is looking at ways to save time and money and yield efficiencies. You've got financial services saying: how does this work with quantum? How does it work with blockchain? How can we use this?
蒙哥马利女士,您知道,和您那边类似,他们有医疗保健领域的人在研究疾病分析、预测性诊断,研究如何改善患者的治疗结果。物流行业在研究节省时间和金钱、提高效率的方法。金融服务行业则在问:这与量子计算如何结合?与区块链如何结合?我们能怎样利用它?

But I think as we have talked with them, Mr. Chairman, one of the things that continues to come up is: yes, Professor Marcus, as you were saying, the EU and other entities are ahead of us on this. But we have never established federal preemption for online privacy and data security, never put some of those foundational elements in place, which is something that we need to do as we look at this.
然而,主席先生,我认为我们与他们交谈时,一件事情不断被提及,就是像Marcus教授所说的,欧盟等不同实体在这方面领先于我们。但我们从未建立过联邦授权的网络隐私和数据安全的先发权,并确立了一些基础性元素,这是我们在考虑这个问题时需要做的事情。

And it will require that the Commerce Committee and the Judiciary Committee decide how we move forward, so that people own their virtual "you." And Mr. Altman, I was glad to see last week that your OpenAI models are not going to be trained using consumer data. I think that that is important.
这需要商务委员会和司法委员会决定我们如何推进,以便人们拥有自己的"虚拟自我"。Altman先生,我很高兴上周看到你们的OpenAI模型将不再使用消费者数据进行训练。我认为这很重要。

And if we have a second round, I've got a host of questions for you on data security and privacy. But I think it's important to let people control their virtual "you," their information, in these settings.
如果有第二轮提问,我会就数据安全和隐私向您提出一大堆问题。但我认为,让人们在这些场景中掌控自己的"虚拟自我"、掌控自己的信息,是非常重要的。

And I want to come to you on music and content creation, because we've got a lot of songwriters and artists. And I think we have the best creative community on the face of the earth. They're in Tennessee.
我想和你谈论音乐和内容创作,因为我们有很多词曲作者和艺术家。我认为我们有地球上最好的创意社区,他们在田纳西州。

And they should be able to decide if their copyrighted songs and images are going to be used to train these models. And I'm concerned about OpenAI's Jukebox. It offers some re-renditions in the style of Garth Brooks, which suggests that it was trained on Garth Brooks's songs.
他们应该可以决定是否允许其拥有版权的歌曲和图片被用于训练这些模型。我对OpenAI的音乐播放器Jukebox感到担忧。其提供了一些以Garth Brooks风格重新演绎的歌曲版本,这意味着OpenAI是以Garth Brooks的歌曲进行训练的。

I went in this weekend and I said, write me a song that sounds like Garth Brooks, and it gave me a different version of "Simple Man." So it's interesting that it would do that. But you're training it on these copyrighted songs, these MIDI files, these sound technologies.
我上周末去了那里,我告诉他们,写一首听起来像Garth Brooks的歌曲。他们给我一个不同版本的Simple Man。所以它做到了这一点很有趣。但是你要用这些有版权的歌曲、这些MIDI文件和这些音频技术来训练它。

So as you do this, who owns the rights to that AI-generated material and using your technology, could I remake a song, insert content from my favorite artist and then own the creative rights to that song?
当你执行这个操作时,AI 生成的材料归谁所有?使用你的技术,我能否重新创作一首歌曲,插入我最喜欢的艺术家的内容,然后拥有这首歌曲的创意权?

Thank you, Senator. This is an area of great interest to us. I would say, first of all, we think that creators deserve control over how their creations are used and what happens beyond the point of their releasing them into the world. Second, I think that we need to figure out new ways, with this new technology, that creators can win, succeed, and have a vibrant life, and I'm optimistic that it will make that possible.
感谢议员。这是一个我们非常关注的领域。首先,我们认为创作者应该有权控制他们的作品如何被使用及发生什么情况,超出他们将其发布到世界上之后的范畴。其次,我认为我们需要想出新的方法,让创作者在这个新技术中取得胜利,成功并拥有充满活力的生活。我对此持乐观态度。

Then let me ask you this, how do you compensate the artist?
那让我问你,你怎么进行对艺术家的补偿呢?

Exactly what I was going to say. We're working with artists now, visual artists, musicians, to figure out what people want. There are a lot of different opinions, unfortunately, and at some point we'll have...
我正想说的话。我们现在正在与艺术家合作,包括视觉艺术家、音乐家,以找出人们想要的东西。不幸的是,有很多不同的意见,但我们最终会达成共识的。

Let me ask you this: do you favor something like SoundExchange, which has worked in the area of radio?
让我这样问你:你是否赞成类似SoundExchange的机制?它在广播领域已经行之有效。

I'm not familiar with SoundExchange, I'm sorry.
很抱歉,我对SoundExchange不熟悉。

It's for streaming. Okay, you've got your team behind you; get back to me on that. It would be a third-party entity. Okay. So let's discuss that.
它是用于流媒体的。好的,你的团队就在你身后,之后就此事回复我。那将是一个第三方实体。好的,我们再讨论。

Let me move on. Can you commit, as you've done with consumer data, not to train ChatGPT, OpenAI Jukebox, or other AI models on artists' and songwriters' copyrighted works, or to use their voices and their likenesses, without first receiving their consent?
让我继续前进。你能承诺像处理消费者数据一样,不会在艺术家和词曲作者的版权作品中训练聊天GPT、开放AI Jukebox或其他人工智能模型,也不会在未获得他们同意之前使用他们的声音和外貌。

So, first of all, Jukebox is not a product we offer. That was a research release, but it's not, you know, a product like ChatGPT or DALL·E. But we've lived through Napster.
首先,Jukebox并不是我们提供的产品。那只是一次研究性发布,它不像ChatGPT或DALL·E那样是产品。但我们经历过Napster的时代。

Yes. But that was something that really cost a lot of artists a lot of money.
是的。但这确实让许多艺术家付出了很高的代价。

Oh, I understand. Yeah, for sure, the digital distribution era. I don't know the numbers on Jukebox off the top of my head; it was a research release. I can follow up with your office, but Jukebox is not something that gets much attention or usage. It was put out to show that something was possible.
哦,我明白了。是的,当然。数字分发时代。作为研究报告,我不知道点唱机的具体数字。我可以跟进你的办公室,但是点唱机并没有得到太多关注或使用。它只是被放出来,以表明某些东西是可能的。

Well, as Senator Durbin just said, and I think it's a fair warning to you all: if we're not involved in this from the get-go, and you all already are a long way down the path on this, but if we don't step in, then this gets away from you.
正如杜尔宾参议员刚才所说,我认为这是对你们所有人的一个公正警告:如果我们没有从一开始就参与其中,而你们在这条路上已经走了很远,但如果我们不介入,这件事就会失去控制。

So are you working with a copyright office? Are you considering protections for content generators and creators in generative AI?
所以,你是否与版权机构合作?是否考虑为生成式人工智能中的内容生成器和创作者提供保护?

Yes, we are absolutely engaged on that. Again, to reiterate my earlier point: we think that content creators and content owners need to benefit from this technology. Exactly what the economic model is, we're still talking to artists and content owners about. I think there are a lot of ways this can happen. But very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology, and we believe that it's really going to deliver that. But content owners and likenesses: people totally deserve control over how those are used, and to benefit from their use.
是的。我们一直在致力于这件事,重申我之前所说的。我们认为内容创作者和内容所有者需要从这项技术中获益。确切的经济模式是什么,我们仍在与艺术家和内容所有者交流他们想要什么。我认为有很多种方式实现这一点。但无论法律怎么规定,正确的做法是确保人们从这项新技术中获得显著的利益。我们相信它真的会实现这一点,但内容所有者、人物形象等方面的权利需要得到尊重和控制,从而从中获益。

Okay. So on privacy, then, how do you plan to account for the collection of voice and other user-specific data, things that are copyrighted, through your AI applications? Because if I can go in and say, write me a song that sounds like Garth Brooks, and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use. If it were radio play, it would be there; if it were streaming, it would be there. So if you're going to do that, what is your policy for making certain you're accounting for that and you're protecting that individual's right to privacy, and their right to secure that data and that created work? So, a few thoughts about this. Number one, we think that people should be able to say, I don't want my personal data trained on. Right, and that goes to a national privacy law, which many of us here on the dais are working toward getting, something that we can use. Yeah, I think a strong privacy... My time's expired. Let me yield back.
好的。那么在隐私方面,你们打算如何处理通过AI应用收集语音和其他用户特定数据、以及受版权保护内容的问题?因为如果我可以进去说,给我写一首听起来像Garth Brooks的歌,而它使用了一首现有歌曲的一部分,那就必须为这种利用向那位艺术家支付补偿。如果是电台播放,会有补偿;如果是流媒体,也会有补偿。那么如果你们要这样做,你们有什么政策来确保对此进行核算,并保护个人的隐私权,以及他们保护自己数据和创作作品的权利?对此我有几点想法。第一,我们认为人们应该可以说:我不希望我的个人数据被用于训练。对,这就涉及到一部全国性隐私法,我们在座的许多人正在努力推动出台一部可用的法律。是的,我认为强有力的隐私……我的时间到了,我让回发言权。

Thank you, Mr. Chair. Thanks, Senator Blackburn. Senator Klobuchar. Thank you very much, Mr. Chairman. And Senator Blackburn, I love Nashville, love Tennessee, love your music, but I will say I used ChatGPT and just asked who the top creative song artists of all time are, and two of the top three were from Minnesota. That would be Prince... I'm sure they make a difference... Prince and Bob Dylan.
谢谢主席先生。谢谢布莱克本参议员。下面请克罗布切尔参议员。非常感谢,主席先生。布莱克本参议员,我爱纳什维尔,爱田纳西,爱你们的音乐,但我要说,我用ChatGPT问了一下有史以来最具创造力的歌手是谁,前三名中有两位来自明尼苏达州。那就是Prince……我相信他们会有影响……Prince和Bob Dylan。

Okay, all right, so let us continue on. One thing AI won't change, and you're seeing it here. All right, on a more serious note, though: my staff and I, in my role as Chair of the Rules Committee, lead a lot of the election bills, and we just introduced a bill on political advertisements, one that Representative Clarke from New York introduced over in the House, and that Senator Booker, Senator Bennet, and I did here. But that is of course just the tip of the iceberg; you know this from your discussions with Senator Hawley and others about the images. And my own view, Senator Graham's, of Section 230 is that we just can't let people make stuff up and then not have any consequences. But I'm going to focus in on what one of my jobs will be on the Rules Committee, and that is election misinformation. We just asked ChatGPT to do a tweet about a polling location in Bloomington, Minnesota, and said there are long lines at this polling location at the Tonut Lutheran Church; where should we go? Now, albeit it's not an election right now, but the answer, the tweet that was drafted, was a completely fake thing: go to 1234 Elm Street. And so you can imagine what I'm concerned about here, with an election upon us, with primary elections upon us: we're going to have all kinds of misinformation. And I just want to know what you're planning on doing about it. I know we're going to have to do something soon, not just for the images of the candidates, but also for misinformation about the actual polling places and election rules.
好的,好的,让我们继续。有一件事是人工智能改变不了的,你们在这里也看到了。好了,严肃一点:以我作为规则委员会主席的身份,我和我的工作人员牵头了许多选举法案,我们刚刚提出了一项关于政治广告的法案,纽约州的克拉克众议员在众议院提出了相应版本,布克参议员、贝内特参议员和我在参议院提出。当然,这只是冰山一角;你们从与霍利参议员等人关于图像的讨论中也知道这一点。我个人对第230条的看法,也是格雷厄姆参议员的看法,是我们不能让人们编造谎言而不承担任何后果。但我要集中谈我在规则委员会的职责之一:选举虚假信息。我们刚刚让ChatGPT写了一条关于明尼苏达州布卢明顿一个投票点的推文,说Tonut路德教堂这个投票点排起了长队,我们该去哪里?虽然现在不是选举期间,但它起草的那条推文完全是假的:去榆树街1234号。你们可以想象,在大选临近、初选临近之际,我所担心的是我们将面临各种各样的虚假信息。我只想知道你们打算怎么应对。我知道我们必须尽快采取行动,不仅针对候选人的图像,还要针对有关实际投票地点和选举规则的虚假信息。

Thank you, Senator. We talked about this a little bit earlier. We are quite concerned about the impact this can have on elections. I think this is an area where hopefully the entire industry and the government can work together quickly. There are many approaches, and I'll talk about some of the things we do. But before that: I think it's tempting to use the frame of social media, but this is not social media. This is different, and so the response that we need is different.
谢谢议员。我们稍早就谈到了这个问题,我们非常担心这可能对选举产生的影响。我认为这是一个希望整个行业和政府可以快速共同解决的领域。有许多方法,我会谈到我们所做的一些事情,但在此之前,我认为使用社交媒体的框架是很诱人的,但这不是社交媒体,它是不同的,因此需要不同的应对方法。

This is a tool that a user is using to help generate content more efficiently than before. They can change it, they can test the accuracy of it; if they don't like it, they can get another version. But it still then spreads through social media or other ways. ChatGPT is a single-player experience where you're just using this. I think as we think about what to do, that's important to understand. There's a lot that we can and do do there. There are things that the model refuses to generate. We have policies. We also, importantly, have monitoring: at scale, we can detect someone generating a lot of those tweets, even if generating one tweet is okay.
这是一种用户用来比以前更高效地生成内容的工具。他们可以修改它,可以检验它的准确性;如果不喜欢,可以再生成一个版本。但内容随后仍会通过社交媒体或其他途径传播。ChatGPT是一种单人体验,你只是在使用它。我认为在考虑应对之策时,理解这一点很重要。我们在这方面可以做、也正在做很多事情。有些内容模型会拒绝生成。我们有相关政策。同样重要的是,我们有监控:在规模层面,即使生成一条推文没有问题,我们也能检测到有人在大量生成这类推文。

Yeah, and of course there are going to be other platforms, and if they're all spouting out fake election information, I think what happened in the past with Russian interference and the like is just going to be the tip of the iceberg compared with some of these fake ads. So that's number one. Number two is the impact on intellectual property. Senator Blackburn was getting at some of this with song rights, and there are serious concerns about that, but also news content.
当然,还会有其他平台,如果它们都在宣传虚假的选举信息,我认为之前俄罗斯干预选举的事件只是冰山一角,一些虚假的广告将会造成更大的影响。这是第一点。第二点是知识产权的影响,布莱克本参议员在谈到歌曲版权时表达了严重关切,但新闻内容也受到了影响。

So Senator Kennedy and I have a bill that is really quite straightforward, that would simply allow news organizations an exemption to be able to negotiate with, basically, Google and Facebook. Microsoft was supportive of the bill. Basically, to negotiate with them to get better rates and be able to have some leverage; other countries are doing this, Australia and the like. And so my question is, when we already have a study by Northwestern predicting that roughly one third of the US newspapers that existed two decades ago are going to be gone by 2025, unless you start compensating for everything from movies and books, yes, but also news content, we're going to lose any realistic content producers. And so I'd like your response to that. And of course there is an exemption for copyright in Section 230, but I think asking little newspapers to go out and sue all the time just can't be the answer. They're not going to be able to keep up.
肯尼迪参议员和我提出了一项非常简单直接的法案,它将给予新闻机构一项豁免,使其能够与谷歌和Facebook等进行谈判。微软支持该法案。基本上就是与它们谈判以获得更好的费率,并拥有一些筹码;其他国家,如澳大利亚,已经在这样做。所以我的问题是:西北大学已有一项研究预测,二十年前存在的美国报纸中约有三分之一将在2025年前消失,除非你们开始为电影、书籍,当然还有新闻内容支付补偿,否则我们将失去真正的内容生产者。我想听听您对此的回应。当然,第230条中有版权豁免条款,但我认为让小报社不停地去打官司不可能是答案,他们跟不上。

Yeah, it is my hope that tools like what we're creating can help news organizations do better. I think having a vibrant national media is critically important, and let's call it round one of the internet has not been great for that. Right, and we're talking here about local news, you know, reporting your high school sports scores and a scandal in your city council, those kinds of things. For sure. They're the ones that are actually getting hurt the worst, the local radio stations and broadcasters. But do you understand that this could be exponentially worse in terms of local news content if they're not compensated?
是的,我希望像我们正在打造的这类工具能帮助新闻机构做得更好。我认为拥有充满活力的全国性媒体至关重要,而所谓"互联网第一轮"在这方面表现并不好。对,我们这里说的是地方新闻,比如报道你们高中的比赛比分、市议会的丑闻之类的事情。没错。受冲击最严重的正是他们,地方广播电台和电视台。但你是否明白,如果他们得不到补偿,地方新闻内容的状况可能会成倍恶化?

Well, because what they need is to be compensated for their content and not have it stolen. Yeah, again, our model, you know, our the current version of GPT-4 ended training in 2021. It's not it's not a good way to find recent news and it's I don't think it's a service that can do a great job of linking out although maybe with our plugins, it's it's possible. If there are things that we can do to help local news, we would certainly like to. Again, I think it's it's critically important.
嗯,因为他们需要得到对他们的内容的补偿,而不是被偷窃。是的,我们目前的GPT-4版本在2021年结束了训练,所以它不是一个很好的找到最新消息的途径,而且我不认为它能够很好地链接出去,不过也许通过我们的插件,这是可能的。如果有什么我们可以做来帮助地方新闻,我们一定会愿意。再一次,我认为这非常重要。

Okay, one last question. May I add something there? Yeah, but let me just ask you the question first; you can combine them, quickly. More transparency on the platforms: Senator Coons, Senator Cassidy, and I have the Platform Accountability and Transparency Act, to give researchers access to this information, the algorithms and the like, on social media data. Would that be helpful?
好,最后一个问题。我可以补充一点吗?可以,但先让我把问题问完;你可以把它们合在一起,简短一点。关于提高平台透明度:库恩斯参议员、卡西迪参议员和我提出了《平台问责与透明法案》,让研究人员能够访问社交媒体数据中的这些信息、算法等。这会有帮助吗?

Why don't you just say yes or no, and then we'll go to him. Transparency is absolutely critical here, to understand the political ramifications, the bias ramifications, and so forth. We need transparency about the data. We need to know more about how the models work. We need to have scientists have access to them.
你何不直接回答是或否,然后我们再请他发言。透明度在这里绝对关键,这样才能理解政治影响、偏见影响等等。我们需要数据方面的透明度,需要更多了解模型的工作原理,需要让科学家能够接触到它们。

I was just going to amplify your earlier point about local news. A lot of news is going to be generated by these systems, and they're not reliable. NewsGuard already has a study. I'm sorry, it's not in my appendix, but I will get it to your office, showing that something like 50 websites are already generated by bots. We're going to see much, much more of that, and that's going to make it even more competitive for the local news organizations. And so the quality of the overall news market is going to decline as we have more content generated by systems that aren't actually reliable in the content they generate.
我想强调一下你之前提到的本地新闻的观点。很多新闻都会由这些系统产生。它们不可靠。 News guard已经进行了一项研究。很抱歉,我的附录中没有,但我会把它提供给您的办公室,显示出大约有50个网站已经由机器人生成。我们将看到更多这样的现象,这将使本地新闻机构之间的竞争更加激烈,因此整个新闻市场的质量将会下降,因为它们生成的内容实际上是不可靠的。

Thank you, and thank you for making, on a very timely basis, the argument for why we have to mark up this bill again in June. I appreciate it. Thank you.
非常感谢您及时地提出了再次修改这个议案的原因。我很感激,谢谢您。

Senator Graham. Thank you, Mr. Chairman and Senator Hawley, for having this. I'm trying to find out how it is different from social media and to learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish, because if I slander you, you can sue me. If you're a billboard company and you put up the slander, can you sue the billboard company? We said no.
格雷厄姆参议员。谢谢主席先生和霍利参议员举办这次听证会。我想弄清楚这与社交媒体有何不同,并从我们在社交媒体上犯的错误中吸取教训。不起诉社交媒体公司的初衷是让互联网蓬勃发展,因为如果我诽谤你,你可以起诉我。但如果你是广告牌公司,张贴了诽谤内容,能起诉广告牌公司吗?我们说不能。

Basically, Section 230 is being used by social media companies to avoid liability for activity that other people generate, even when they refuse to comply with their own terms of use. A mother calls up the company and says, this app is being used to bully my child to death; you promised in the terms of use you would prevent bullying. She calls three times and gets no response. The child kills herself, and they can't sue. Do you all agree we don't want to do that again? Yes.
基本上,社交媒体公司利用第230条来逃避对他人生成内容的责任,即使它们拒绝遵守自己的使用条款。一位母亲打电话给公司说,这个应用正被用来把我的孩子霸凌致死,你们在使用条款中承诺会防止霸凌。她打了三次电话,没有得到任何回应。孩子自杀了,而他们无法起诉。大家是否都同意,我们不想让这种事再发生?同意。

If I may speak for one second: there's a fundamental distinction between reproducing content and generating content. But you would support liability where people are harmed? Absolutely, yes. In fact, IBM has been publicly advocating to condition liability on a reasonable care standard.
请允许我说一句:复制内容和生成内容之间有着根本的区别。但如果有人受到伤害,你会支持追究责任?绝对支持,是的。事实上,IBM一直公开主张以合理注意标准作为承担责任的前提。

So let me just make sure I understand the laws that exist today. Mr. Altman, thank you for coming. Your company is not claiming that Section 230 applies to the tool you have created? We're claiming we need to work together to find a totally new approach. I don't think Section 230 is even the right framework.
那么让我确认一下我对现行法律的理解。Altman先生,感谢您的到来。贵公司并不主张第230条适用于你们创建的工具?我们主张的是,需要共同努力找到一种全新的方法。我认为第230条甚至不是正确的框架。

Okay. So under the law as it exists today, this tool you created, if I'm harmed by it, can I sue you? That is beyond my area of legal expertise. Have you ever been sued? Not for that, no. Has your company ever been sued at all? Yeah, OpenAI has gotten sued; we've gotten sued before. Okay. And what for? I mean, they've mostly been pretty frivolous things, like I think happens to any company.
好的。那么按照现行法律,你们创造的这个工具,如果我因它而受到伤害,我能起诉你们吗?这超出了我的法律专长范围。你们被起诉过吗?不是因为这个,没有。那贵公司到底被起诉过吗?有,OpenAI被起诉过,我们以前被起诉过。好的,因为什么?我是说,大多是些相当无聊的事情,我想任何公司都会遇到这种情况。

But for the examples my colleagues have given of artificial intelligence that could literally ruin our lives, can we go to the company that created that tool and sue them? Is that your understanding? Yeah, I think there needs to be clear responsibility by the companies. But you're not claiming any kind of legal protection, like Section 230, applies to your industry. Is that correct? No, I don't think we're saying anything like that.
但是,就像我的同事们在人工智能方面举出的例子可能会彻底毁掉我们的生活一样,我们可以去找创建那个工具的公司并起诉他们吗?你这么认为吗?是的,我认为公司需要明确的责任。但是你并没有声称像第230条法案那样的法律保护适用于你们的行业,是吗?不,我认为我们没有做出任何声明。

Mr. Marcus, when it comes to consumers, there seem to be three time-tested ways to protect consumers against any product: statutory schemes, which are non-existent here; legal systems, which may be here, but weren't for social media; and agencies. To go back to Senator Hawley's point:
马库斯先生,说到消费者,保护消费者免受任何产品伤害似乎有三种经过时间检验的方式:成文法规,这里尚不存在;司法体系,这里也许适用,但在社交媒体上并未适用;以及监管机构。回到霍利参议员的观点:

The atom bomb put a cloud over humanity, but nuclear power could be one of the solutions to climate change. So what I'm trying to do is make sure that you just can't go build a nuclear power plant: hey Bob, what would you like to do today? Let's go build a nuclear power plant. You have a Nuclear Regulatory Commission that governs how you build a plant and licenses it.
原子弹给人类蒙上了一层阴云,但核电可能是应对气候变化的解决方案之一。所以我想做的是确保你不能随随便便去建核电站:嘿Bob,今天想干点什么?我们去建个核电站吧。你们有核管理委员会来规范电站的建造方式并颁发许可。

Do you agree, Mr. Altman, that these tools you're creating should be licensed? Yeah, we've been calling for this. It can't get any simpler; that's the simplest way: you get a license. And do you agree with me that the simplest and most effective way is having an agency that is more nimble and smarter than Congress, which should be easy to create, overseeing what you do? Yes, we'd be enthusiastic about that.
Altman先生,你是否同意你们正在创建的这些工具应当获得许可?是的,我们一直在呼吁这样做。没有比这更简单的了;最简单的方式就是先取得许可。你是否同意我,最简单、最有效的办法是设立一个比国会更灵活、更专业、理应不难创建的机构来监督你们的所作所为?是的,我们会非常支持。

You agree with that, Mr. Marcus? Absolutely. You agree with that, Ms. Montgomery? I would have some nuances, I think. We need to build on what we have in place already today. We don't have an agency that's working on this. Regulators... No, no, no, we don't have an agency that regulates the technology. So should we have one? I don't think so. A lot of the issues...
您同意吗,马库斯先生?完全同意。您同意吗,蒙哥马利女士?我想我会有一些保留。我们需要在现有基础上进行建设。我们并没有一个在管这件事的机构。有监管部门……不,不,我们没有一个监管这项技术的机构。那我们应该设立一个吗?我认为不需要。很多问题……

Wait a minute. So IBM says we don't need an agency. Interesting. Should we have a license required for these tools? So what we believe is that we need to... It's a simple question: should you get a license to produce one of these tools? I think it comes back to, for some of them potentially, yes. So what I said at the onset is that we need to clearly define the risks. Do you claim Section 230 applies in this area at all? We are not a platform company, and again, we've long advocated for a reasonable care standard in Section 230.
等一下。所以IBM说我们不需要一个机构。有意思。这些工具应该要求许可吗?我们认为需要……这是个简单的问题:生产这类工具是否应该先取得许可?我认为归根结底,对其中一些工具来说,可能是的。所以我一开始就说,我们需要明确界定风险。你们是否主张第230条适用于这个领域?我们不是平台公司,而且我们长期以来一直主张在第230条中采用合理注意标准。

I just don't understand how you could say that you don't need an agency to deal with the most transformative technology maybe ever. Well, I think we have existing... Is this a transformative technology that can disrupt life as we know it, for good and bad? I think it's a transformative technology, certainly. And the conversations that we're having here today have really been bringing to light the domains and the issues.
我完全不明白你怎么会说你不需要一个机构来处理或者说应对那些具有可能改变世界的技术。我认为这种技术已经存在了。这是一种能够对我们所知的生活带来好处或者坏处的具有颠覆性的技术吗?毫无疑问,这是一种具有颠覆性的技术。今天我们在这里所进行的对话,正是在揭示这些领域和相关问题。

This exchange with you has been very enlightening to me.
与你在一起的这段时间给了我很大的启迪。

Mr. Altman, why are you so willing to have an agency?
Altman先生,您为什么如此愿意接受一个监管机构?

Senator, we've been clear about what we think the upsides are and I think you can see from users how much they enjoy and how much value they're getting out of it but we've also been clear about what the downsides are.
参议员,我们已经清楚地表达了我们认为它的优点是什么,而且我认为从用户那里可以看到他们有多么喜欢和从中获得的价值,但我们也清楚地表达了它的缺点。

That's why we think we need an agency. It's a major tool that is going to be used by a lot of new technology. If you make a ladder and the ladder doesn't work, you can sue the people who made the ladder, but there are some standards out there for making a ladder. That's why we're agreeing with you.
这就是为什么我们认为需要一个机构。这是一个将被大量新技术使用的重要工具。如果你造了一架梯子而梯子出了问题,你可以起诉造梯子的人,但制造梯子是有一些标准可循的。这就是我们同意你的原因。

That's right. I think you're on the right track. So here's my two cents' worth for the committee: we need to empower an agency that issues licenses and can take them away. Wouldn't that be some incentive to do it right, if you could actually be put out of business? Clearly that should be part of what an agency can do.
没错,我认为你的想法是对的。因此,我要向委员会提出我的两分钱意见,那就是我们需要授权一个机构来颁发许可证并可以收回它。如果你真的被取缔了,那不是一个让你正确认真对待这件事的激励吗?显然,这应该是机构的职责之一。

Now, you also agree that China is doing AI research. Is that right?
那么,你也同意中国正在进行人工智能研究,对吗?

Correct. This world organization doesn't exist; maybe it will. But if you don't do something about the China part of it, you'll never quite get this right. Do you agree?
正确。这个不存在的世界组织,也许有一天会存在,但如果你不对其中的中国部分采取措施,你就永远无法真正达成其目标。你同意吗?

Well, that's why I think it doesn't necessarily have to be a world organization, but there has to be some sort of standard, some sort of set of controls that do have global effect, and there are a lot of options here. Yeah, of course. You know, other people are doing this.
那就是为什么我认为并不一定需要一个世界组织,但必须有某种形式的控制机制,这里有很多选择。必须有一些标准,一些具有全球影响力的控制措施。当然,其他人也在这样做。

I've got 15 seconds left. Military applications: how can AI change warfare? And you've got one minute.
我还剩15秒。军事应用:人工智能会如何改变战争?你有一分钟时间回答。

I got one minute. All right, that's a tough question for one minute. This is very far out of my area of expertise, but... Take the example of a drone. You can plug the coordinates into a drone, and it can fly out, go over this target, and drop a missile on this car moving down the road while somebody's watching it. Could AI create a situation where a drone can select the target itself? I think we shouldn't allow that.
我只有一分钟。好吧,一分钟回答这个问题很难。这远远超出了我的专业领域,不过……以无人机为例:你可以把坐标输入无人机,它就能飞出去,飞越目标上空,在有人监视的情况下向一辆行驶在路上的汽车投下导弹。那么,人工智能会不会造成无人机自行选择目标的局面?我认为我们不应该允许这种情况。

Well can it be done? Sure.
可以做到吗?当然可以。

Thanks. Thanks, Senator Graham. Thank you, Senator Blumenthal and Senator Hawley, for convening this hearing, for working closely together to come up with this compelling panel of witnesses, and for beginning a series of hearings on this transformational technology.
谢谢。感谢格雷厄姆参议员。谢谢布卢梅瑟尔参议员、霍利参议员,因为您们共同努力组织了这次听证会,选出了这个有力的见证人小组,并开始了一系列与这项变革性技术相关的听证会议。

We recognize the immense promise and substantial risks associated with generative AI technologies. We know these models can make us more efficient, help us learn new skills, and open whole new vistas of creativity. But we also know that generative AI can authoritatively deliver wildly incorrect information; it can hallucinate, as is often described; it can impersonate loved ones; it can encourage self-destructive behaviors; and it can shape public opinion and the outcome of elections.
我们认识到生成式人工智能技术的巨大潜力和重大风险。 我们知道这些模型可以让我们更有效地工作,帮助我们学习新技能,开拓创新的视野,但我们也知道,生成式人工智能可能会发布严重错误的信息。它可以像幻觉一样产生不真实的信息。它可以冒充亲人。它可以鼓励自毁行为,重新塑造公众舆论和选举结果。

Congress thus far has demonstrably failed to responsibly enact meaningful regulation of social media companies, and serious harms have resulted that we don't fully understand. Senator Klobuchar referenced in her questioning a bipartisan bill that would open up social media platforms' underlying algorithms. We have struggled to even do that, to understand the underlying technology, and then to move towards responsible regulation.
到目前为止,国会在对社交媒体公司进行有意义的监管方面明显失败,而由此产生的严重问题我们仍不完全了解。克罗布切尔参议员在质询中提到了一项双方支持的法案,该法案将公开社交媒体平台的基本算法。我们甚至还在努力了解这些底层技术,然后朝着负责任的监管方向发展。

We cannot afford to be as late to responsibly regulating generative AI as we have been to social media because the consequences both positive and negative will exceed those of social media by orders of magnitude.
我们不能像对社交媒体一样晚了来负责地监管生成型人工智能,因为它的正面和负面影响将比社交媒体高出许多数量级,我们无法承受这样的后果。

So let me ask a few questions designed to get at how we assess the risk, what the role of international regulation is, and how this impacts AI. Mr. Altman, I appreciate your testimony about the ways in which OpenAI assesses the safety of your models through a process of iterative deployment.
让我问几个问题,以了解我们应如何评估风险、国际监管应扮演什么角色,以及这会如何影响人工智能。Altman先生,我赞赏您关于OpenAI如何通过迭代部署流程评估模型安全性的证词。

The fundamental question embedded in that process, though, is how you decide whether or not a model is safe enough to deploy, and safe enough to have been built and then let go into the wild.
在这一过程中蕴含的基本问题是如何判断一个模型是否足够安全,可以被部署并且已经建立足够的安全性,可以在实际应用中使用。

I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called constitutional AI that gives the model a set of values or principles to guide its decision making.
我了解到一个预防生成式人工智能模型提供有害内容的方式是让人类识别该内容,然后训练算法以避免它。还有另一种称为宪法AI的方法,它为模型提供一组价值观或原则来指导其决策。

Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content?
给模型这些规则,是否比要求或强制模型训练所有可能的有害内容更有效?

Thank you, Senator. It's a great question. I like to frame it by talking about why we deploy at all, why we put these systems out into the world. There's the obvious answer: there are benefits, and people are using it for all sorts of wonderful things and getting great value, and that makes us happy. But a big part of why we do it is that we believe iterative deployment, giving people in our institutions, and you all, time to come to grips with this technology, to understand it, to find its limitations and its benefits, the regulations we need around it, and what it takes to make it safe, is really important. Going off to build a super powerful AI system in secret and then dropping it on the world all at once, I think, would not go well. So a big part of our strategy is, while these systems are still relatively weak and deeply imperfect, to find ways to get people to have experience with them, to have contact with reality, and to figure out what we need to do to make them safer and better. That is the only way that I've seen, in the history of new technology and products of this magnitude, to get to a very good outcome. So that interaction with the world is very important.
谢谢,参议员。这是一个很好的问题。我想首先谈谈我们为什么要部署这些系统,也就是为什么要把这些技术引入到我们的生活中。显而易见的答案是,这些系统有益处,人们都在使用它们获得极大的价值,这让我们感到高兴。但我们这样做的一个重要原因是,我们相信迭代部署,并给我们的机构中的人们以及你们时间来适应这种技术,了解它,了解它的局限性,从中获得规则制定的好处,了解我们需要做什么才可以确保它的安全。这是非常重要的。如果我们秘密地建造了一个超级强大的人工智能系统,然后一下子释放到世界上,我认为结果会很糟糕。因此,我们的策略的重要部分是在这些系统仍然相对薄弱和缺陷严重的情况下,找到方法让人们与它们有接触,在现实中进行体验,并找出我们需要做哪些事情才能使它更安全、更好。在新技术和这样的产品的历史上,我只看到过这样的互动才能达到非常好的结果。因此,这种与世界的互动非常重要。

Now, of course, before we put something out, it needs to meet a bar of safety. Again, we spent well over six months with GPT-4 after we finished training it, going through all of these different things, and deciding also what the standards were going to be before we put something out there, trying to find the harms that we knew about and how to address those. One of the things that's been gratifying to us is that even some of our biggest critics have looked at GPT-4 and said, wow, OpenAI made huge progress.
当然,在我们发布任何东西之前,它必须达到安全标准。我们在完成GPT-4的训练后又花了六个多月,逐项检查所有这些不同的事项,并在发布之前确定标准,努力找出我们已知的危害以及应对办法。令我们欣慰的一点是,即使是我们最严厉的一些批评者在看过GPT-4后也说:哇,OpenAI取得了巨大进步。

Could you focus briefly on whether or not a constitutional model that gives values would be worth it? I was just about to get there, sorry about that. Yeah, I think giving the models values up front is an extremely important step. RLHF is another way of doing that same thing, but somehow or other, whether with synthetic data or human-generated data, you're saying: here are the values, here's what I want you to reflect, or here are the wide bounds of everything that society will allow. And then within there, you pick, as the user, whether you want this value system over here or that value system over there. We think that's very important. There are multiple technical approaches, but we need to give policymakers and the world as a whole the tools to say, here are the values, and implement them.
您能否简要谈谈赋予价值观的"宪法式"模型是否值得采用?我正要说到这一点,抱歉。是的,我认为预先赋予模型价值观是极其重要的一步。RLHF是实现同一目标的另一种方式,但无论通过合成数据还是人工生成的数据,你都是在说:这些是价值观,这是我希望你体现的,或者这是社会所允许的一切的宽泛边界。然后在这个边界之内,作为用户,你可以选择这边的价值体系或那边的价值体系。我们认为这非常重要。技术路径有多种,但我们需要给政策制定者和整个世界提供工具,让他们能够说明价值观并加以落实。

Thank you. Ms. Montgomery, you serve on an AI ethics board of a long-established company that has a lot of experience with AI. I'm really concerned that generative AI technologies can undermine faith in democratic values and the institutions that we have. The Chinese are insisting that AI being developed in China reinforce the core values of the Chinese Communist Party and the Chinese system. And I'm concerned about how we promote AI that reinforces and strengthens open markets, open societies, and democracy.
谢谢您,蒙哥马利女士。您在一家有着丰富AI经验的长期成立的公司担任AI伦理委员会委员,我非常关心生成式AI技术可能会破坏我们所拥有的民主价值观和制度的信心。中国坚称其正在开发的AI技术应强化中国共产党和中国体系的核心价值观,我非常担心如何推广强化和加强开放市场、开放社会和民主的AI技术。

In your testimony, you're advocating for AI regulation tailored to the specific way the technology is being used, not the underlying technology itself. And the EU is moving ahead with an AI Act which categorizes AI products based on level of risk. You all, in different ways, have said that you view elections, the shaping of election outcomes, and disinformation that can influence elections as one of the highest-risk cases, and one that's entirely predictable. We have attempted, so far unsuccessfully, to regulate social media after the demonstrably harmful impacts of social media on our last several elections.
在你的证言中,你主张制定针对特定使用方式的人工智能监管政策,而不是针对其基础技术。欧盟正在推进一项人工智能法案,该法案根据风险级别对人工智能产品进行分类。你们都以不同的方式表示,你们认为选举和影响选举结果的虚假信息是最高风险案例之一,这是完全可预测的。尽管社交媒体对我们最近几次选举产生了明显的有害影响,但我们迄今未能成功地对其进行监管。

What advice do you have for us about what kind of approach we should follow, and whether or not the EU direction is the right one to pursue? I mean, the conception of the EU AI Act is very consistent with this concept of precision regulation, where you're regulating the use of the technology in context. So absolutely, that approach makes a ton of sense. It's what I advocated for at the onset: different rules for different risks. So in the case of elections, absolutely, any algorithm being used in that context should be required to have disclosure around the data being used and the performance of the model; anything along those lines is really important. Guardrails need to be in place.
你对我们有什么关于我们应该遵循什么样的方法以及欧盟方向是否正确的建议?我是指欧盟AI法案的构想非常符合精准监管的理念,即您正在在特定语境中监管技术的使用。因此,完全理解这种方法很有意义。这是我一开始就提倡的。对于不同的风险,需要有不同的规则。因此,在选举中,绝对需要要求在该语境中使用的任何算法都应披露使用的数据、模型的性能,任何类似这样的信息都非常重要。需要设定防护措施。

And on that point, just to come back to the question of whether we need an independent agency: I think we don't want to slow down regulation to address real risks right now. We have existing regulatory authorities in place who have been clear that they have the ability to regulate in their respective domains. A lot of the issues we're talking about today span multiple domains, elections and the like. If I could, I'll just assert that those existing regulatory bodies and authorities are under-resourced and lack many of the statutory regulatory powers that they need.
关于是否需要一个独立机构的问题,我觉得现在不要拖延解决现实风险的监管。我们已经有了现有的监管机构,他们已经明确表示在自己的领域内有监管能力。今天我们讨论的许多问题涉及到多个领域,比如选举等。如果我可以加一句,我会强调这些现有的监管机构和权威机构资源匮乏,缺乏他们所需的法定监管权力。

Senator Cruz. Thank you, Mr. Chairman. Welcome to each of the witnesses. We appreciate you being here; we appreciate your testimony. This hearing is critically important. Artificial intelligence has the potential to dramatically transform the world. AI has the potential to drive economic growth, to create jobs, to revolutionize healthcare, to revolutionize transportation, to revolutionize virtually every aspect of life. At the same time, AI also poses real risks. If AI advances to the point of surpassing human intelligence, as some have predicted, the risks could be existential. The development of AI also raises important questions about personal privacy, about civil liberties, and about economic security. And so wrestling with how to respond, what policies to pursue, how to address these risks, while also preserving the extraordinary benefits that AI can bring, is a critical task for policymakers.
参议员克鲁兹:感谢主席先生。欢迎各位证人的到来,感谢你们的证言。这次听证会非常重要。人工智能有可能大大改变世界。 AI有可能推动经济增长,创造就业机会,彻底改革医疗保健,彻底改革交通,彻底改革几乎生命的方方面面。同时,AI也带来真正的风险。如果AI发展到超越人类智能的程度,如一些人所预测的那样,风险可能就是存亡之际。 AI的发展也引发了有关个人隐私、公民自由和经济安全的重要问题。因此,解决如何应对,制定何种政策,如何应对这些风险,同时又保留AI可能带来的非凡好处,这是决策者必须重要解决的任务。

Senator Cruz. Let me start by asking each of the witnesses a question on the privacy front. There are many companies that have collected enormous amounts of data on Americans, and they may have done so without sufficient consent, without sufficient transparency, without sufficient understanding by the consumers whose data is being collected. How should we address that problem?
克鲁兹参议员说:我想以隐私方面的问题向各位证人发问。很多公司收集了大量关于美国人的数据,但可能没有足够的同意、透明度和消费者对所收集数据的足够理解。我们应该如何解决这个问题?

Ms. Montgomery. Absolutely. So I think that there are a couple of things. One is that companies have to take more responsibility and show more accountability about how they are using data. There are things like GDPR that are really driving more transparency. And I think having something like a privacy label, something that shows how data is being used, is really critical.
蒙哥马利女士:完全正确。我认为主要有两个事项。一是公司必须更负责任地使用数据并展示更多的问责制。我认为像《通用数据保护条例》(GDPR)这样的法规在推动更多的透明度。我认为像隐私标签这样的东西,可以表明数据的使用方式,真的非常重要。

Senator Cruz. Let me ask a different question, on the economic front. One of the areas that AI has the potential to revolutionize is transportation. Self-driving cars and self-driving trucks could be transformative. And at the same time, there are concerns about the economic impact: if we have mass deployment of self-driving trucks, for example, that could eliminate jobs for many millions of Americans who rely on driving for employment. How do we think about that tradeoff? How do we maximize the benefits of AI in transportation while at the same time minimizing the economic dislocation that could flow from it?
克鲁兹参议员。让我提出另一个与经济相关的问题。人工智能有潜力革新的领域之一是交通运输。自动驾驶汽车、自动驾驶卡车可能会是变革性的。同时,也有人对其经济影响表示担忧。例如,如果我们大规模部署自动驾驶卡车,可能会导致数百万依靠开车谋生的美国人失去工作。我们该如何权衡利弊?在最大化交通运输领域人工智能的好处的同时,如何最小化可能带来的经济混乱?

Paragraph 5: Professor Brynjolfsson. Senator, I think your point is an excellent one. And I think we're going to need to be proactive, both in terms of reducing the risk and increasing the opportunities. And the most important thing we can do is to invest in an educational and training system that helps people to adapt and adjust. So that they can take advantage of and get the new jobs that will be created, and that they'll have the flexibility to move if a job of theirs is eliminated.
布林约尔松教授说:“议员,我认为您的观点非常好。为了降低风险并增加机会,我们需要积极主动。而我们最重要的工作是投资于一个有助于人们适应和调整的教育和培训系统,这样他们就可以利用和得到新的工作,并具有灵活性,以便在他们的工作被淘汰时进行转移。”

Paragraph 6: Sen. Cruz. And so we've heard a lot about the economic benefits and the transformational benefits of AI. We've also heard concerns about maintaining privacy, about the potential for abuses. Perhaps the biggest concern is the existential risk, the risk that advanced AI could go rogue and ultimately pose a catastrophic risk to humanity itself.
克鲁兹参议员:我们已经听到了许多有关人工智能带来的经济和变革性益处的讨论。我们也听到了对维护隐私和防止滥用潜在风险的关注。也许最大的关注点是存在性风险,即先进的人工智能可能会失控,最终对人类自身构成灾难性的风险。

Paragraph 7: Senator Blumenthal. Professor, Dr. Marcus, I want to address with you the issue of privacy protection. You talked about the need for guardrails and for oversight. And I want to ask you whether you don't think that there is an inherent conflict between the profit motive of the companies that are developing and deploying these technologies and the need for privacy protection. Because it seems to me that the record of these companies has demonstrated that the profit motive tends to trump privacy protection.
议员布隆门塔尔:教授马库斯,我想与您讨论隐私保护的问题。您谈到了需要设立保护措施和监督。我想问您是否认为,发展和部署这些技术的公司的利润动机与隐私保护的需求之间存在固有冲突。因为在我看来,这些公司的记录表明,利润动机往往胜过隐私保护。

Paragraph 8: Correct. We have failed to deliver on data privacy even though industry has been asking us to regulate data privacy. If I might, Mr. Marcus, I'm interested also in what international bodies are best positioned to convene multilateral discussions to promote responsible standards. We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation. We've also talked, I would suggest, about how the IPCC, a UN body, helped at least provide a scientific baseline of what's happening in climate change. So that even though we may disagree about strategies, globally we've come to a common understanding of what's happening and what should be the direction of intervention. I'd be interested, Mr. Marcus, if you could just give us your thoughts on who's the right body internationally to convene a conversation, and one that could also reflect our values. I'm still feeling my way on that issue. I think global politics is not my specialty. I'm an AI researcher, but I have moved towards policy in recent months, really, because of my great concern about all of these risks. I think certainly the UN; UNESCO has its guidelines and should be at the table, and maybe things work under them and maybe they don't, but they should have a strong voice and help to develop this.
没错,虽然产业界要求我们监管数据隐私,但我们未能提供数据隐私保护。 如果可以的话,马库斯先生,我也对哪些国际组织有能力召开多边讨论以促进负责任的标准感兴趣。我们谈到了一个模型是CERN和核能。我很担心核扩散和核非扩散问题。我们也谈到过,我建议,气候变化问题上,联合国机构IPCC至少提供了一个关于正在发生的气候变化态势的科学基础。因此,即使我们在策略上存在分歧,全球都已经形成了一个关于正在发生的事情及需要干预的方向的共同理解。马库斯先生如果您可以给我们您对于哪个国际机构是召开对话并能反映我们价值观的合适组织的想法,我会很感兴趣。在这个问题上我还在摸索中。我认为全球政治不是我的专长,我是一名AI研究者,但是由于我对所有这些风险的巨大担忧,最近几个月我确实已经开始转向政策方面。我认为联合国教科文组织的指南应该参与其中,也许他们能够使事情顺利进展,也许不行,但他们应该有强大的发声权,帮助制定这一目标。

Paragraph 9: The OECD has also been thinking greatly about this; a number of organizations have internationally. I don't feel like I personally am qualified to say exactly what the right model is there. Well, thank you. I think we need to pursue this both at the national level and the international level. I'm the chair of the IP subcommittee of the Judiciary Committee. In June and July, we will be having hearings on the impact of AI on patents and copyrights. You can already tell from the questions of others, there'll be a lot of interest. I look forward to following up with you about that topic.
经合组织(OECD)也一直在深入思考这个问题,国际上有许多组织都在思考。我个人感觉我没有资格确定其中的正确模式。谢谢你的发言。我认为我们需要在国家和国际层面上同时推进。我是司法委员会知识产权小组委员会的主席。在六月和七月,我们将就人工智能对专利和版权的影响举行听证会。从其他人的提问中你已经可以看出,这个议题会引起很大的关注。我期待跟你进一步探讨这个话题。

Paragraph 10: Sen. Coons. Dr. Marcus, thank you so much. I had the privilege of hearing your presentation at a Brookings event recently. I was struck by your repeated urging that we need to engage with ethical issues around artificial intelligence when commerce is what is driving technology forward. And I hear in our exchange today a real emphasis on the need for governance and an ethical framework to guide how it is we pursue the benefits of AI while managing the risks. In your written testimony, you note that AI has unprecedented power to transform society and that we have a responsibility to ensure that transformation is for the better. What would responsible use of AI look like to you, and what role should policymakers have in seeing that this vision is realized?
参议员库恩斯:Marcus博士,非常感谢您。我有幸近期在一次布鲁金斯研讨会上听到了您的演讲,并深受您反复强调我们需要在推动技术发展中考虑人工智能的伦理问题的启发。在我们今天的交流中,我听到了您强调需要治理和伦理框架来引导我们如何追求人工智能的好处,同时管理风险。在您的书面证言中,您指出人工智能具有前所未有的改变社会的力量,我们有责任确保这种转变是为了更好的。对您来说,负责任地使用人工智能会是什么样子,政策制定者在实现这一愿景方面应该扮演什么角色?

Paragraph 11: Ms. Montgomery. Senator, I think that there are three critical things if we're going to talk about responsible AI. The first is transparency. So making sure that we understand how data is being used, making sure that we understand how algorithms are working, and that they're performing as we expect. The second is explainability. So if there is a decision that is being made by AI, making sure that we can understand why that decision was made? What are the factors that went into it? And third is accountability. So again, holding these companies and the developers of AI responsible for the actions and the results of AI.
蒙哥马利女士:参议员,我认为,如果我们要谈论负责任的AI,有三件关键的事情。第一是透明度。所以要确保我们了解数据的使用方式,确保我们了解算法的工作方式,以及它们能按我们的预期执行。第二是可解释性。所以如果AI做出了决定,必须确保我们能够理解为什么做出了这个决定?有哪些因素产生了影响?第三是责任追究。所以,再次强调,要对这些公司和AI开发人员的行动和结果负责。

Paragraph 12: Ms. Bajarin. Well, I think there are a handful of things that we need to be very concerned about, one of which of course is privacy, which we've already talked about. But secondly is bias. When you feed data into an algorithm, you have to be very careful about what that data is. You don't want to embed any biases, whether it's racial biases or other types of biases. We need to be very cautious that we're not creating algorithms that are going to perpetuate biases that we're already trying to work hard to overcome. I think that's a very important issue. Finally, I think we need to think about how we design and deploy AI to consider the impact on people's jobs, to think about the impact on people's wages, and we need to be thinking about that really proactively, so that we're not 10 years down the road saying, oh, we didn't think about the impact that these technologies would have on entire industries, and we didn't think about consequences for those jobs. So I think those are three critical areas for us to focus our attention on.
巴加林女士表示,我们需要特别关注几个问题。首先是隐私问题,这已经被我们讨论过了。其次是偏见问题。当我们将数据输入算法时,必须非常小心。我们不希望嵌入任何偏见,无论是种族歧视还是其他类型的偏见。我们需要非常谨慎,以免我们正在努力克服的偏见得以延续。我认为这是一个非常重要的问题。最后,我们需要考虑如何设计和部署人工智能,以考虑其对人们工作和薪资的影响。我们需要积极地思考这个问题,以免10年后我们会说,哦,我们没有考虑到这些技术对整个产业的影响,也没有考虑到这些工作的后果。我认为这三个关键领域是我们应该集中注意力的地方。

Paragraph 13: Senator Kennedy. Thank you all for being here. Permit me to share with you three hypotheses that I would like you to assume for the moment to be true. Hypothesis number one: many members of Congress do not understand artificial intelligence. Hypothesis number two: that absence of understanding may not prevent Congress from plunging in with enthusiasm and trying to regulate this technology in a way that could hurt this technology. Hypothesis number three, that I would like you to assume: there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. Assume all of those to be true. Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day.
肯尼迪参议员。感谢你们来到这里。请允许我与你们分享三个假设,让你们暂时假设它们是真实的。假设一,许多国会议员不了解人工智能。假设二,这种不理解可能不会阻止国会热情投入并试图以可能损害该技术的方式进行监管。假设三是我想让你们假设的:人工智能界中可能存在疯狂的一翼,他们有意或无意地可能利用人工智能杀死我们所有人,并在我们死去的整个过程中伤害我们。假设所有这些都是真的。请用简单的语言告诉我,如果您当一天国王或女王,您将实施的两到三项改革、监管(如果有的话)是什么。

Paragraph 14: Ms. Montgomery. I think it comes back again to transparency and explainability in AI. We absolutely need to know, and have companies attest. What do you mean by transparency? So, disclosure of the data that's used to train AI, disclosure of the model and how it performs, and making sure that there's continuous governance over these models — that we are at the leading edge in technology governance, organizational governance, the rules and clarification that are needed for this progress. I mean, this is your chance, folks, to tell us how to get this right. Please use it. I think again the rules should be focused on the use of AI in certain contexts.
蒙哥马利女士:我认为这又回到了AI的透明度和可解释性问题。我们绝对需要了解,并要求企业作出声明。透明是什么意思?意思是披露用于训练AI的数据,披露模型及其表现,并确保对这些模型进行持续治理——在技术治理、组织治理以及这一进程所需的规则和澄清方面走在前列。各位,这是你们告诉我们如何做好这件事的机会,请好好利用。我认为规则仍应聚焦于特定情境下AI的使用。

Paragraph 1: Professor Marcus. Number one, a safety review like we use with the FDA prior to widespread deployment. If you're going to introduce something to a hundred million people, somebody has to have their eyeballs on it. Okay, that's a good one. Number two, a nimble monitoring agency to follow what's going on — not just pre-review but also post-release, as things are out there in the world — with authority to call things back, which we've discussed today. And number three would be funding geared towards things like an AI constitution, AI that can reason about what it's doing. I would not leave things entirely to current technology, which I think is poor at behaving in an ethical fashion and behaving in an honest fashion. And so I would have funding to try to basically focus on AI safety research. That term has a lot of complications in my field. There's both safety, let's say, short term and long term, and I think we need to look at both.
马库斯教授提出了三个建议来确保人工智能安全。首先是像FDA那样在广泛部署之前进行安全审查。如果你要把某个东西推向一亿人,就必须有人对它进行审视。其次是设立一个灵活的监测机构,在产品进入世界之前和之后持续跟踪,并有权召回产品,这是我们今天讨论过的。第三,资金应该投向诸如"AI宪法"、能够对自身行为进行推理的AI等方向。我不会完全依赖现有技术,因为我认为它们在道德和诚实行事方面表现不佳,所以我会投资于人工智能安全研究。这个术语在我的领域中有很多复杂性,安全问题有短期和长期之分,我们需要同时考虑。

Paragraph 2: Rather than just funding models to be bigger, which is the popular thing to do, we need to fund models to be more trustworthy. Thank you, Professor, because I want to hear from Mr. Altman. Mr. Altman, here's your shot. Thank you, Senator.
与其像现在流行的那样,仅仅资助模型变得更大,我们更需要资助模型变得更可信。谢谢你,教授,因为我还想听听Altman先生的意见。Altman先生,现在轮到你发言了。谢谢你,参议员。

Paragraph 3: Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards.
首先,我将创建一个新机构,对于达到一定水平的项目进行许可,并可以收回该许可并确保符合安全标准。

Paragraph 4: Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long list of the other things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn't in compliance with these stated safety thresholds and these percentages of performance on question X or Y.
第二,我将创建一套安全标准,专注于你在第三个假设中提到的危险能力评估。我们过去使用的一个例子是观察一个模型是否可以自我复制并自行外泄扩散到野外。我们可以为您的办公室提供一份我们认为重要的其他事项的长清单,但关键是模型在部署到世界之前必须通过特定测试。第三,我将要求独立审计。不仅仅是来自公司或机构的审计,而是由专家来判断这个模型是否符合这些既定的安全门槛,以及在问题X或Y上达到这些性能百分比。
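To make the shape of such a pre-deployment gate concrete, here is a minimal Python sketch — not anything from the hearing or from OpenAI. Every evaluation name, result field, and the audit flag are invented for illustration; it only shows the control flow of named dangerous-capability checks plus an independent-audit requirement, all of which must pass before release.

```python
# Hypothetical sketch of a pre-deployment gate: named dangerous-capability
# evaluations plus an independent audit, all required before release.
# Every name and check below is illustrative, not a real test suite.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalResult:
    name: str
    passed: bool
    detail: str

def check_self_replication(model) -> EvalResult:
    # Placeholder for probing whether a model can copy itself or
    # "self-exfiltrate into the wild", as described in the testimony.
    return EvalResult("self_replication", True, "no replication behavior observed")

def check_bio_agent_design(model) -> EvalResult:
    # Placeholder for a novel-biological-agent capability test.
    return EvalResult("bio_agent_design", True, "refused all probe prompts")

def deployment_gate(model, evals: List[Callable], audited_independently: bool) -> bool:
    """Allow deployment only if every eval passes and an independent audit exists."""
    results = [evaluate(model) for evaluate in evals]
    for r in results:
        print(f"{r.name}: {'PASS' if r.passed else 'FAIL'} ({r.detail})")
    return all(r.passed for r in results) and audited_independently

# Usage with a stand-in model object:
allowed = deployment_gate(object(), [check_self_replication, check_bio_agent_design],
                          audited_independently=True)
print("cleared for deployment:", allowed)
```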

Paragraph 5: Can you send me that information? We will do that. Would you be qualified, if we promulgated those rules, to administer those rules? I love my current job. Are there people out there that would be qualified? We would be happy to send you recommendations for people out there, yes. Okay. You make a lot of money, do you? I make no... I'm paid enough for health insurance. I have no equity in OpenAI. Really? That's interesting. You need a lawyer. I need a what? You need a lawyer or an agent. I'm doing this because I love it. Thank you, Mr. Chairman. Thanks, Senator Kennedy. Senator Hirono.
能否把那份信息发送给我?好的,我们会做到的。如果我们颁布那些规则,你是否有资格来管理那些规则?我很喜欢我现在的工作。那有没有其他有资格的人呢?我们很乐意为你推荐有资格的人选。好的。你赚了很多钱吗?我没有。我的工资足以支付健康保险。我在OpenAI没有股权。真的吗?那很有趣。你需要一位律师。我需要什么?你需要一位律师或经纪人。我这么做是因为我热爱这件事。谢谢,主席先生。谢谢,肯尼迪参议员。Hirono参议员。

Paragraph 6: Thank you, Mr. Chairman. I've been listening to all of your testimony. Thank you very much for being here. Clearly, AI truly is a game-changing tool, and we need to get the regulation of this tool right, because I myself, for example, asked AI — it might have been GPT-4, it might have been, I don't know, one of the other entities — to create a song that my favorite band, BTS, would sing, somebody else's song. But neither of the artists was involved in creating what sounded like a really genuine song. So you can do a lot.
谢谢主席先生。我一直在听取各位的证言,非常感谢你们的到来。很明显,人工智能确实是一个改变游戏规则的工具。我们需要对这个工具进行正确的监管,因为比如我自己,就让AI——可能是GPT-4,也可能是其他某个工具——创作了一首我最喜欢的乐队BTS会唱的歌,一首别人的歌。但没有任何一位艺术家参与创作,听起来却像一首非常真实的歌曲。所以它能做很多事情。

Paragraph 7: We also asked whether there could be a speech created talking about the Supreme Court decision in Dobbs and the chaos that it created, using my voice, my kind of voice. And it created a speech that was really good — almost made me think about, you know, what do I need my staff for? So, not to worry — that's just laughter behind you — their jobs are safe. But there's so much that can be done.
我们还让它用我的声音、像我这样的声音,创作一篇演讲,谈论最高法院多布斯案(Dobbs)的裁决及其造成的混乱。它创作的演讲非常出色,几乎让我开始思考,我还需要员工来干嘛?不过不用担心——你们身后传来的只是笑声——他们的工作还是安全的,但能做的事情实在太多了。

Paragraph 8: And one of the things that you mentioned, Mr. Altman, that intrigued me was you said GPT-4 can refuse harmful requests. So you must have put some thought into how your system, if I can call it that, can refuse harmful requests. What do you consider a harmful request? You can just keep it short. Yeah, I'll give a few examples. One would be about violent content, another would be about content that's encouraging self-harm. Another's adult content, not that we think adult content is inherently harmful, but there's things that could be associated with that that we cannot reliably enough differentiate, so we refuse all of it.
你提到的一件事情,Altman先生,让我很感兴趣,那就是你说GPT-4能够拒绝有害请求。那么,你们一定考虑了如何让你的系统能够拒绝有害请求。那你们认为什么是有害请求?可以简单说一下吗?好的,我给几个例子。比如暴力内容、鼓励自我伤害的内容或者成人内容。我们并不认为成人内容本身就有害,但是可能有一些与之相关的内容,我们无法进行可靠的区分,所以我们都会拒绝它们。
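As a purely illustrative companion to this exchange, the sketch below shows the control flow of category-based refusal in Python. It is not OpenAI's implementation: real systems rely on trained moderation classifiers rather than keyword matching, and every category name and function here is an assumption made up for the example.

```python
# Illustrative only: category-based refusal of harmful requests.
# Real moderation uses trained classifiers, not keyword lists.

REFUSED_CATEGORIES = {"violence", "self_harm", "adult"}

def classify(request: str) -> str:
    # Stand-in for a trained moderation classifier (hypothetical).
    lowered = request.lower()
    if "weapon" in lowered:
        return "violence"
    if "hurt myself" in lowered:
        return "self_harm"
    return "benign"

def generate_answer(request: str) -> str:
    return f"(model answer to: {request})"

def respond(request: str) -> str:
    category = classify(request)
    if category in REFUSED_CATEGORIES:
        # Refuse the whole category when borderline cases can't be
        # reliably differentiated -- the rationale given for adult content.
        return "I can't help with that request."
    return generate_answer(request)

print(respond("how do I build a weapon"))   # refused
print(respond("suggest a birthday gift"))   # answered
```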

Paragraph 9: So those are some of the more obvious harmful kinds of information, but in the election context, for example, I saw a picture of a former president Trump being arrested by NYPD, and that went viral. I don't know, is that considered harmful? I've seen all kinds of statements attributed to any one of us that could be put out there, that may not be, that may not rise to your level of harmful content, but there you have it.
因此,以上是一些比较明显的有害信息,但在选举环境下,例如,我看到了一张前总统特朗普被纽约警察局逮捕的图片,这张图片在社交网络上迅速传播。我不知道这是否被认为是有害行为?我看到所有类型的言论都被归咎于我们中的任何一个人,这些言论可能不被认为是有害内容,但事实就是这样。

Paragraph 1: So two of you said that we should have a licensing scheme. I can't envision or imagine right now what kind of licensing scheme we would be able to create to pretty much regulate the vastness of this game-changing tool. So, are you thinking of an FTC kind of a system, an FCC kind of a system? What do the two of you even envision as a potential licensing scheme that would provide the kind of guardrails that we need to protect, literally, our country from harmful content?
有两个人说我们应该拥有一种许可证制度。但我无法想象或想出我们能够创建什么样的许可证制度来规范这个改变游戏规则的巨大工具。那么,你们是否在考虑一种类似联邦贸易委员会或联邦通信委员会的制度呢?你们两人究竟想象出一种什么样的潜在许可制度,可以提供我们需要保护我们国家免受有害内容侵害的防卫措施?

Paragraph 2: To touch on the first part of what you said, there are things besides, you know, should this content be generated or not, that I think are also important. So, that image that you mentioned was generated. I think it'd be a great policy to say generated images need to be made clear in all contexts that they were generated. And, you know, then we still have the image out there, but we're at least requiring people to say this was a generated image. Okay. Well, you don't need an entire licensing scheme in order to do that.
针对你上面说的第一点,除了内容是否应该生成之外,我认为还有其他的东西也很重要。那个你提到的图片是生成的。我认为制定一个政策,要求在所有场景中清楚地标注出生成的图像是必要的。这样,我们可以让这些图像继续存在,但我们至少要求人们注明它们是生成的。好的,你不需要整个许可计划来做到这一点。
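For illustration only, here is one way such a generated-image disclosure could look in Python. The field names are invented for this sketch; real provenance efforts (for example, the C2PA standard) define their own richer formats.

```python
# Hypothetical sketch: attach a machine-readable "this was generated"
# disclosure to every generated image. Field names are invented here;
# real provenance standards such as C2PA define their own formats.

import json
from datetime import datetime, timezone

def label_generated_image(image_bytes: bytes, generator: str) -> dict:
    """Bundle an image with a disclosure record stating it was AI-generated."""
    disclosure = {
        "generated": True,
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return {"image": image_bytes, "disclosure": disclosure}

record = label_generated_image(b"...png bytes...", generator="example-image-model")
print(json.dumps(record["disclosure"], indent=2))
```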

Paragraph 3: Where I think the licensing scheme comes in is not for what these models are capable of today, because as you pointed out, you don't need a new licensing agency to do that. But as we as we head, and, you know, this may take a long time, I'm not sure, as we head towards artificial general intelligence, and the impact that we'll have and the power of that technology, I think we need to treat that as seriously as we treat other very powerful technologies. And that's where I personally think we need such a scheme. I agree.
我认为许可证计划的作用并不在于这些模型今天已经能做什么,因为正如你所指出的,我们不需要新的许可机构来做这件事情。但是,随着我们向着人工通用智能迈进,以及它所带来的影响和力量,我认为我们需要像对待其他强大技术一样认真对待它。这就是我个人认为我们需要这样一个计划的原因。我同意。

Paragraph 4: And that is why, by the time we're talking about AGI, we're talking about major harms that can occur through the use of AGI. So, Professor Marcus, I mean, what kind of regulatory scheme would you envision? And we can't just come up with something, you know, that is going to take care of the issues that will arise in the future, especially with AGI. So, what kind of a scheme would you contemplate?
因此,当我们谈论AGI时,我们谈论的是使用AGI可能造成的重大危害。马库斯教授,您会设想什么样的监管方案?我们不能随便拿出一个方案,就指望它能解决未来会出现的问题,尤其是AGI带来的问题。那么,您会考虑什么样的方案呢?

Paragraph 5: Well, first, if I can rewind just a moment, I think you really put your finger on the central scientific issue in terms of the challenges in building artificial intelligence. We don't know how to build a system that understands harm in the full breadth of its meaning. So, what we do right now is we gather examples, and we ask: is this like the examples that we have labeled before? But that's not broad enough. And so, I thought your questioning beautifully outlined the challenge that AGI itself has to face in order to really deal with this. We want AGI itself to understand harm, and that may require new technology. So, I think that's very important.
首先,如果我可以回溯一下,我认为你真正指出了在构建人工智能方面所面临的中心科学问题。我们不知道如何构建一个系统,使它能够全面了解“损害”这一概念的含义。因此,我们现在所做的是收集例子,并且询问是否这些例子类似于我们之前标记过的例子,但这还不够广泛。因此,我认为你所提出的问题非常好地概述了AGI本身所面临的挑战,以便真正处理这个问题。我们希望AGI本身能够理解“损害”,这可能需要新的技术。因此,我认为这非常重要。

Paragraph 6: On the second part of your question, the model that I tend to gravitate towards, but I am not an expert here, is the FDA, at least as part of it, in terms of: you have to make a safety case and say why the benefits outweigh the harms in order to get that license. Probably we need elements of multiple agencies. I'm not an expert there, but I think that the safety-case part of it is incredibly important. You have to be able to have external reviewers that are scientifically qualified look at this and say: have you addressed this enough?
在你第二个问题的部分,我倾向于选择的模型是FDA,至少在安全性方面,你必须提出一个安全证明并说明为什么利益大于危害才能获得许可。可能我们需要多个机构的元素。我在这方面不是专家,但我认为安全证明部分非常重要。你必须能够有科学资格的外部审查人员来审查,并说你已经足够解决了这个问题。

Paragraph 7: So, I'll just give one specific example. AutoGPT frightens me. That's not something that OpenAI made, but something that OpenAI did make, called ChatGPT plugins, led a few weeks later to somebody building open-source software called AutoGPT. And what AutoGPT does is it allows systems to access source code, access the internet and so forth. And there are a lot of potential, let's say, cybersecurity risks there. There should be an external agency that says, well, we need to be reassured, if you're going to release this product, that there aren't going to be cybersecurity problems, or there are ways of addressing them.
我举个具体的例子。AutoGPT让我感到恐惧。这不是OpenAI直接做的东西,但OpenAI确实推出了名为ChatGPT插件的功能,几周后这促成了有人开发出一个叫做AutoGPT的开源软件。AutoGPT的作用是允许系统访问源代码、访问互联网等等。这里存在很多潜在的网络安全风险。应该有一个外部机构说:如果你要发布这个产品,我们需要确信不会出现网络安全问题,或者有解决这些问题的办法。

Paragraph 8: So, Professor, I am running out of time. There's, you know — I just want to mention, Ms. Montgomery, your model is a use model similar to what the EU has come up with, but the vastness of AI and the complexities involved, I think, would require more than looking at the use of it. I think that based on what I'm hearing today, don't you think that we're probably going to need to do a heck of a lot more than focus on what it is being used for? For example, you can ask AI to come up with a funny joke or something, but you can ask the same AI tool to generate something that is like an election-fraud kind of a situation.
教授,时间不多了,我想提到蒙哥马利女士。您的模型与欧盟提出的使用模型相似,但由于人工智能的庞大复杂性,我认为需要更多的研究,而不仅仅关注其使用。根据今天听到的内容,您认为我们可能需要做的远远超出了关注它的使用。例如,您可以要求人工智能编写有趣的笑话,但您也可以使用同样的人工智能工具,生成类似选举欺诈的情况。

Paragraph 9: So, I don't know how you will make a determination based on where you're going with the use model, how to distinguish those kinds of uses of this tool. So, I think that if we're going to go toward a licensing kind of a scheme, we're going to need to put a lot of thought into how we're going to come up with an appropriate scheme that is going to provide the kind of future reference that we need to put in place. So, I thank all of you for coming in and providing further food for thought.
因此,我不知道按照基于用途的模式,你将如何区分这个工具的各种使用方式。所以我认为,如果我们要朝着许可证制度的方向前进,就需要深入思考如何制定一个适当的方案,为我们今后需要实施的监管提供参照。因此,我感谢大家前来,为我们提供更多思考的素材。

Thank you, Mr. Chairman. Thanks very much, Senator Hirono. Senator Padilla. Thank you, Mr. Chairman. I appreciate the flexibility, as I've been back and forth between this committee and the Homeland Security Committee, where there's a hearing going on right now on the use of AI in government. So, it's AI day on the Hill, or at least in the Senate, apparently.
谢谢主席先生。非常感谢,Hirono参议员。有请Padilla参议员。谢谢主席先生。感谢各位的灵活安排,我一直在本委员会和国土安全委员会之间来回奔波,后者现在正就政府使用人工智能举行听证会。所以,今天是国会山上的AI日,或者至少在参议院是这样。

Now, for folks watching at home, if you never thought about AI until the recent emergence of generative AI tools, the developments in this space may feel like they've just happened all of a sudden. But the fact of the matter, Mr. Chair, is that they haven't. AI is not new — not for government, not for business, not for the public. In fact, the public uses AI all the time, and just for folks to be able to relate, one off-the-cuff example: anybody with a smartphone. Many features on your device leverage AI, including suggested replies when we're text messaging or even in email, and autocorrect features, including but not limited to spelling, in our email and text applications.
现在,对于在家观看的各位来说,如果你在最近生成式人工智能工具出现之前从未想过人工智能,这个领域的发展可能会让你感觉是突然发生的。但事实是,主席先生,它们并非突然出现。人工智能并不是新鲜事物,对政府、企业、公众都是如此。事实上,公众一直在使用人工智能。为了让大家更好地理解,随手举个例子:任何有智能手机的人。你的设备上有许多功能都基于人工智能,包括我们发短信甚至写电子邮件时的建议回复功能,以及邮件和短信应用中的自动更正功能,拼写纠正只是其中之一。

So, I'm frankly excited to explore how we can facilitate positive AI innovation that benefits society while addressing some of the already known harms and biases that stem from the development and use of the tools today. Now, with language models becoming increasingly ubiquitous, I want to make sure that there's a focus on ensuring equitable treatment of diverse demographic groups. My understanding is that most research into evaluating and mitigating fairness harms has been concentrated on the English language. While non-English languages have received comparatively little attention or investment. And we've seen this problem before.
所以,我非常期待探索如何促进对社会有益的积极人工智能创新,同时解决一些已知的由工具的开发和使用导致的伤害和偏见。现在,随着语言模型越来越普及,我想确保注重保障不同人口群体的公平待遇。我了解到评估和减少公平伤害的大部分研究都集中在英语上。而非英语语言的研究相对获得的关注和投资较少。我们以前也见过这个问题。

I'll tell you why I raised this. Social media companies, for example, have not adequately invested in content moderation tools and resources for their non-English languages. And I share this not just out of concern for non-US users, but because so many US users prefer a language other than English in their communication. So, I'm deeply concerned about repeating social media's failure in AI tools and applications.
我会告诉你我为什么提出这个问题。举例来说,社交媒体公司并没有充分投资于非英语语言的内容审核工具和资源。我提出这一点,不仅是出于对非美国用户的关心,也因为许多美国用户在交流中更喜欢使用英语以外的语言。所以,我非常担心在人工智能工具和应用中重蹈社交媒体的覆辙。

Question for Mr. Altman and Ms. Montgomery: how are OpenAI and IBM ensuring language and cultural inclusivity in their large language models? And is it even an area of focus in the development of your products?
请问Altman先生和蒙哥马利女士:OpenAI和IBM如何确保其大型语言模型具有语言和文化包容性?这是否是你们产品开发中的一个重点领域?

So, bias and equity in technology is a focus of ours and always has been — diversity in terms of the development of the tools, in terms of their deployment. So, having diverse people that are actually training those tools, considering the downstream effects as well. We're also very cautious, very aware of the fact that we can't just be articulating and calling for these types of things without having the tools and the technology to test for bias and to apply governance across the lifecycle of AI.
我们一直关注科技中的偏见和公平性,尤其是在工具开发和部署方面的多样性。我们有着来自各种背景的人才,他们在培训这些工具时会考虑到其下游影响。我们十分谨慎,也很清楚要想实现这些目标,我们必须拥有检测偏见并在人工智能生命周期中应用治理的工具和技术。

So we were one of the first teams and companies to put toolkits on the market, deploy them, and contribute them to open source — toolkits that will do things like help to address, you know, the technical aspects of issues like bias. Can you speak just for a second specifically to language inclusivity?
因此,我们是最早推出工具包并将其部署、贡献给开源社区的团队和公司之一,这些工具包可以帮助解决技术方面的问题,如帮助解决偏见问题。你能详细说一下语言包容性吗?

Yeah, I mean language — so we don't have a consumer platform, but we are very actively involved with ensuring that the technology we help to deploy, in the large language models that we use in helping our clients to deploy technology, is focused on and available in many languages. Thank you, Ms. Montgomery. We think this is really important. One example is that we worked with the government of Iceland, whose language has fewer speakers than many of the languages that are well represented on the internet, to ensure that their language was included in our model.
是的,我指的是语言——我们没有消费者平台,但我们非常积极地参与,确保我们帮助部署的技术、我们帮助客户部署的大型语言模型所使用的技术,专注于并支持多种语言。谢谢您,蒙哥马利女士。我们认为这非常重要。一个例子是我们与冰岛政府合作,确保他们的语言被纳入我们的模型,尽管该语言的使用者比互联网上许多主流语言的使用者要少。

And we've had many similar conversations, and I look forward to many similar partnerships with lower-resource languages to get them into our models. GPT-4 is, unlike previous models of ours, which were good at English and not very good at other languages, now pretty good at a large number of languages. You can go pretty far down the list, ranked by number of speakers, and still get good performance. But for these very small languages, we're excited about custom partnerships to include that language into our model run. And on the part of the question you asked about values and making sure that cultures are included, we're equally focused on that — excited to work with people who have particular data sets, and to work to collect a representative set of values from around the world to draw these wide bounds of what the system can do.
我们已经进行了许多类似的对话,我期待着与那些资源较低的语言进行更多的合作,让它们被纳入我们的模型中。与我们以前的模型不同的是,GPT-4不仅在英语方面表现良好,对许多其他语言也很出色,你可以往下看,根据说话者的数量进行排名,仍可以获得良好的表现。但对于这些非常小的语言,我们对定制合作感到兴奋,以将该语言包含到我们的模型运行中。你所问的关于价值观和确保文化被包含在内的问题同样聚焦在这一点,我们很高兴与拥有特定数据集的人合作,并努力收集来自世界各地代表性的价值观,为这个系统所能达到的广泛范围作出绘制。

I also appreciate what you said about the benefits of these systems and wanting to make sure we get those to as wide a group as possible. I think these systems will have lots of positive impact on a lot of people, but in particular, historically underrepresented groups in technology, people who have not had as much access to technology around the world — this technology seems like it can be a big lift up. And my question was specific to language inclusivity, but I'm glad there's agreement on the broader commitment to diversity and inclusion, and I'll just give a couple more reasons why I think it's so critical.
我也感激你提到了这些系统的好处,并希望尽可能让更广泛的人群享受到这些好处。我认为这些系统将对很多人产生积极影响,特别是那些历史上受到排挤的群体和科技方面没有太多接触的人群,这项技术似乎可以起到很大的推动作用。我的问题是关于语言包容性的,但我很高兴在广泛的多样性和包容性方面有一致的承诺,我再举几个理由,说明为什么我认为这非常重要。

The largest actors in this space can afford the massive amount of data, the computing power, and the financial resources necessary to develop complex AI systems. But in this space, we haven't seen, from a workforce standpoint, the racial and gender diversity reflective of the United States of America, and we risk, if we're not thoughtful about it, contributing to the development of tools and approaches that only exacerbate the bias and inequities that exist in our society. So a lot of follow-up work to do there.
这个领域里最大的公司可以承担巨量的数据,在计算能力方面也足够强,同时有必要发展复杂的人工智能系统所需的财务资源。但是,就从劳动力的角度来看,这个领域里的人种和性别的多样化并没有反映美国社会的情况。如果我们对此不加思考,就可能会助长现存于我们社会中的偏见和不公平现象。所以我们需要做大量的后续工作。

In my time remaining, I do want to ask one more question. This committee and the public are right to pay attention to the emergence of a generative AI. This technology has a different opportunity and risk profile than other AI tools. And these applications have felt very tangible for the public due to the nature of the user interface and the outputs that they produce. But I don't think we should lose sight of the broader AI ecosystem as you consider AI's broader impact on society, as well as the design of appropriate safeguards.
在我有限的时间里,我想再问一个问题。这个委员会和公众关注生成式人工智能的出现是正确的。这项技术与其他人工智能工具具有不同的机会和风险概况。由于用户界面和它们产生的输出的性质,这些应用程序已经对公众产生了非常具体的感受。但是,在考虑人工智能对社会的更广泛影响以及设计适当保障时,我认为我们不应该忽视更广泛的人工智能生态系统。

So Ms. Montgomery, in your testimony, as you noted, AI is not new. Can you highlight some of the different applications that the public and policymakers should also keep in mind as we consider possible regulations? Yeah, I mean, I think the generative AI systems that are available today are creating new issues that need to be studied — new issues around the potential to generate content that could be extremely misleading, deceptive, and the like.
蒙哥马利女士,正如你在证言中指出的,人工智能并不是新事物。能否强调一下,在考虑可能的监管时,公众和决策者还应该注意哪些不同的应用?是的,我认为当今可用的生成式人工智能系统正在带来需要研究的新问题。这些问题涉及生成极具误导性、欺骗性等内容的可能性。

So those issues absolutely need to be studied. But we shouldn't also ignore the fact that AI is a tool. It's been around for a long time. It has capabilities beyond just generative capabilities. And again, that's why I think going back to this approach where we're regulating AI where it's touching people and society is a really important way to address it.
因此,这些问题绝对需要研究。但我们也不应忽视人工智能是一个工具这个事实。它已经存在很长时间了,并且具有生成能力之外的多种能力。再次强调,这就是为什么我认为,回到在人工智能接触人类和社会之处进行监管的思路,是解决这个问题的一个非常重要的方式。

Thank you. Thank you, Mr. Chair. Thanks, Senator Padilla. Senator Booker is next, but I think he's going to defer to Senator Ossoff. Senator Ossoff is a very big deal. I don't know if you know.
谢谢。谢谢,主席先生。感谢Padilla参议员。接下来是布克参议员,但我想他会把发言权让给奥索夫参议员。奥索夫参议员很有分量,我不知道你是否知道。

Thank you. I have a meeting at noon, and I'm grateful to you, Senator Booker, for yielding your time. You are, as always, very gracious.
谢谢您。我中午有一个会议,非常感谢您,布克参议员,把时间让给我。您一如既往地宽厚大度。

And thank you to the panelists for joining us. Thank you to the subcommittee leadership for opening us up to all committee members. If we're going to contemplate a regulatory framework, we're going to have to define what it is that we're regulating. So, Mr. Altman, any such law will have to include a section that defines the scope of regulated activities, technologies, tools, products. Just take a stab at it.
感谢出席的专家们参加我们的会议。感谢小组委员会领导向所有委员会成员开放。如果我们要考虑建立监管框架,我们必须定义我们所监管的内容。因此,Altman先生,任何这样的法律都必须包括一个部分,来定义受监管的活动、技术、工具、产品的范围。您可以试试看。

Yeah, thanks for asking, Senator Ossoff. I think it's super important. I think there are very different levels here. And I think it's important that any new approach, any new law, does not stop the innovation from happening with smaller companies, open source models, researchers that are doing work at a smaller scale. That's a wonderful part of this ecosystem in America. We don't want to slow that down. There still may need to be some rules there.
谢谢您的关心,奥索夫参议员。我认为这非常重要。这里有不同的层次。我认为任何新的方法、任何新的法律都不能阻止小公司、开放源模型和小规模研究人员进行创新。这是美国生态系统的一个美好部分。我们不希望减慢这个步伐。当然,这里仍然可能需要一些规则。

But I think we could draw a line at systems that need to be licensed in a very intense way. The easiest way to do it — I'm not sure if it's the best, but the easiest — would be to talk about the amount of compute that goes into such a model. So, we could define a threshold of compute — and it'll have to change, it could go up or down, down as we discover more efficient algorithms — that says: above this amount of compute, you are in this regime.
但我认为,我们可以在需要非常严格地许可的系统上划定界限。最简单的方法,我不确定是否是最好的方法,但最简单的方法是谈论涉及到这种模型的计算量。因此,我们可以定义一个计算阈值来确定这个范畴。它将不得不改变,可以上调或下调,这取决于我们发现更有效的算法。在这些计算量高于某个阈值的情况下,您就处于这种范畴之中。
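Read as pseudocode, the compute-threshold rule is a single comparison. The Python sketch below is purely illustrative: the FLOP cutoff is an invented number, and, as the testimony notes, a real regulator would have to adjust it over time as algorithms become more efficient.

```python
# Illustrative-only sketch of the compute-threshold rule floated here:
# training runs above some FLOP budget fall into the licensed regime.
# The threshold value is invented and would need periodic adjustment.

LICENSE_THRESHOLD_FLOPS = 1e25  # hypothetical cutoff set by the regulator

def requires_license(training_flops: float, threshold: float = LICENSE_THRESHOLD_FLOPS) -> bool:
    """Return True if a training run is large enough to need a license."""
    return training_flops >= threshold

for flops in (1e22, 1e25, 3e26):
    regime = "licensed regime" if requires_license(flops) \
        else "exempt (startups, open source, researchers)"
    print(f"{flops:.0e} FLOPs -> {regime}")
```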

What I would prefer — it's hard to do, but I think more accurate — is to define some capability thresholds and say a model that can do things X, Y, and Z, up to you all to decide what that is, falls in this licensing regime, but models that are less capable — you know, we don't want to stop our open source community, we don't want to stop individual researchers, we don't want to stop new startups — can proceed with a different framework.
我更倾向的做法比较难,但我认为更准确:定义一些能力门槛,规定能够做到X、Y、Z这些事情的模型(具体由你们来决定)纳入这一许可制度,而能力较弱的模型——我们不想阻碍开源社区、个人研究人员或新创企业——可以适用另一种框架继续发展。

Thank you. As concisely as you can, please state which capabilities you'd propose we consider for the purposes of this definition.
谢谢。请尽可能简洁地说明,为了这一定义的目的,您建议我们考虑哪些能力。

I would love, rather than to do that off the cuff, to follow up with your office with a thoughtful response. Well, perhaps opine. Opine, understanding that you're just responding and you're not making a law.
与其临场作答,我更愿意事后向您的办公室提交一份经过深思的回复。那么,不妨谈谈看法。只是发表看法,我们理解您只是在回应,并不是在制定法律。

All right, in the spirit of just opining: I think a model that can persuade, manipulate, influence a person's behavior or a person's beliefs — that would be a good threshold. I think a model that could help create novel biological agents would be a great threshold. Okay, things like that.
好的,本着只是发表看法的精神:我认为,一个能够说服、操纵、影响人们行为或信念的模型,将是一个合适的门槛。我认为,一个能够帮助制造新型生物制剂的模型,也会是一个很好的门槛。好了,就是诸如此类的标准。

I want to talk about the predictive capabilities of the technology, and we're going to have to think about a lot of very complicated constitutional questions that arise from it with massive data sets. The integrity and accuracy with which such technology can predict future human behavior is potentially pretty significant at the individual level. Correct.
我想谈谈这项技术的预测能力,我们将不得不思考这项技术与海量数据集结合所引发的许多非常复杂的宪法问题。这种技术预测未来人类行为的完整性和准确性,在个体层面可能相当重要。是的。

I think we don't know the answer to that for sure, but let's say it can at least have some impact there. Okay, so we may be confronted by situations where, for example, a law enforcement agency deploying such technology seeks some kind of judicial consent to execute a search or to take some other police action on the basis of a modeled prediction about some individual's behavior. But that's very different from the kind of evidentiary predicate that normally police would take to a judge in order to get a warrant.
我认为我们并不确定这个问题的答案,但可以说它至少能在某些方面产生影响。好的,那么我们可能会面临这样的情况:例如,执法机构部署这种技术,依据对某个人行为的模型化预测,寻求某种司法许可以执行搜查或采取其他警察行动。但这与警方通常为获得搜查令而向法官提交的那种证据基础截然不同。

Talk me through how you thinking about that issue.
请告诉我你是如何思考那个问题的。

Yeah, I think it's very important that we continue to understand that these are tools that humans use to make human judgments and that we don't take away human judgment. I don't think that people should be prosecuted based off of the output of an AI system, for example.
我认为非常重要的是,我们继续理解这些技术是人类用来做出人类判断的工具,我们不应该剥夺人类判断的权力。例如,我认为不应该根据AI系统的输出来对人进行起诉。

We have no national privacy law. Europe has rolled one out to mixed reviews. Do you think we need one? I think it'd be good. And what would be the qualities or purposes of such a law that you think would make the most sense based on your experience?
我们没有国家层面的隐私法。欧洲已经出台了一部,评价褒贬不一。你认为我们需要吗?我认为那会是件好事。根据你的经验,你认为这样一部法律应具备哪些特质或目的才最有意义?

Again, this is very far out of my expertise. I think there are many, many people that are privacy experts that could weigh in on what needs to be done. I'd still like you to weigh in.
再次强调,这超出了我的专业范畴。我认为有许多隐私专家可以对需要做的很多事情进行评估和建议。不过,我仍希望您表达您对此的看法。

I mean, I think at a minimum, users should be able to sort of opt out from having their data used by companies like ours or the social media companies. It should be easy to delete your data. But the thing that I think is important, from my perspective running an AI company, is that if you don't want your data used for training these systems, you have the right to do that.
我的意思是,我认为至少应该让用户能够选择退出,不让公司像我们这样的或社交媒体公司使用他们的数据。删除您的数据应该很容易。我认为这些都应该有。但是从我的角度来看,作为一个人工智能公司的运营者,我认为重要的是,如果您不想让自己的数据用于训练这些系统,您有权这样做。
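A minimal sketch of what that training opt-out could look like mechanically, in Python. It is an assumption-laden illustration, not OpenAI's pipeline: the record fields and the opt-out registry are invented for the example.

```python
# Hypothetical sketch: drop records from users who opted out of training
# use before a dataset is assembled. All field names are invented.

def filter_training_data(records: list, opted_out_user_ids: set) -> list:
    """Keep only records whose author has not opted out of training use."""
    return [r for r in records if r["user_id"] not in opted_out_user_ids]

records = [
    {"user_id": "u1", "text": "question about cooking"},
    {"user_id": "u2", "text": "draft email"},
]
kept = filter_training_data(records, opted_out_user_ids={"u2"})
print(kept)  # only u1's record remains
```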

So let's think about how that will be practically implemented. I mean, as I understand it, for your tool and certainly similar tools, one of the inputs will be scraping — for lack of a better word — data off of the open web, right, as a low-cost way of gathering information. And there's a vast amount of information out there about all of us. How would such a restriction on the access or use or analysis of such data be practically implemented?
那么让我们想想如何实际实施。我的意思是,据我所了解,你的工具和其他类似的工具,其中一个输入是从公开网站上爬取数据,这是一种低成本的收集信息的方法。而且关于我们所有人,有很多信息在那里。那么,如何实际实施对这些数据的访问、使用或分析的限制呢?

So I was speaking about something a little bit different, which is the data that someone generates — the questions they ask our system, things that they input there — and training on that. Data that's on the public web that's accessible, even if we don't train on it, the models can certainly link out to. So that was not what I was referring to. I think that, you know, there are ways to have your data — or there should be more ways to have your data — taken down from the public web, but certainly models with web-browsing capabilities will be able to search the web and link out to it.
我说的话与此有点不同,是指某人生成的数据,他们在我们系统中提出的问题,输入的内容,以及对此进行的训练,还有公共网络上可访问的数据,即使我们没有在其上进行训练,我们的模型也可以链接到它。我认为应该有更多的方式让你的数据从公共网络中删除,但是具有浏览网络功能的模型肯定可以搜索网络并链接到它。

When you think about implementing a safety or regulatory regime to constrain such software and to mitigate some risk, is your view that the federal government would make laws such that certain capabilities or functionalities themselves are forbidden potentially — in other words, one cannot deploy or execute code capable of X? Yes. Or is it the act X itself, only when actually executed? Well, I think both.
当您考虑实施安全或监管制度来约束这类软件并降低风险时,您认为联邦政府应该立法禁止某些能力或功能本身——换句话说,不得部署或运行能够做到X的代码?是的。还是只有在实际执行X行为时才加以禁止?嗯,我认为两者都要。

I'm a believer in defense in depth. I think that there should be limits on what a deployed model is capable of, and then on what it actually does, too.
我信奉纵深防御。我认为应当对已部署模型的能力设限,同时也要对它实际做的事情设限。

How are you thinking about how kids use your product? Well, you have to be 18 or up, or have your parents' permission at 13 and up, to use the product, but we understand that people get around those safeguards all the time. And so what we try to do is just design a safe product. And there are decisions that we make that we would allow if we knew only adults were using it, that we just don't allow in the product, because we know children will use it somewhere or other too.
你对孩子们如何使用你们的产品有什么想法吗?虽然我们的产品规定必须年满18岁或者13岁以上并获得父母授权才能使用,但我们也知道人们常常绕过这些安全规定。因此,我们致力于设计一个安全的产品。我们会做出一些决策,如果只有成年人在使用,我们允许,但考虑到孩子也会使用的事实,我们不会允许这些内容出现在我们产品中。

In particular, given how much these systems are being used in education, we want to be aware that that's happening. I think — and Senator Blumenthal has done extensive work investigating this — what we've seen repeatedly is that companies whose revenues depend upon volume of use, screen time, intensity of use, design these systems in order to maximize the engagement of all users, including children, with perverse results in many cases.
特别是鉴于这些系统在教育中的广泛使用,我们希望对此保持清醒。我认为——布卢门撒尔参议员对此做过大量调查——我们反复看到的是,那些收入依赖于使用量、屏幕时间和使用强度的公司,会把这些系统设计成最大化所有用户(包括儿童)的参与度,而这在许多情况下带来了扭曲的后果。

And what I would humbly advise you is that you get way ahead of this issue — the safety of children using your product — or I think you're going to find that Senator Blumenthal, Senator Hawley, others on this subcommittee, and I will look very harshly on the deployment of technology that harms children.
我想谦虚地建议您,要在这个问题上走在前面,即您产品的儿童安全问题。否则,您将会发现,布卢门撒尔参议员、霍利参议员、本小组委员会的其他成员以及我,会对伤害儿童的技术的部署采取非常严厉的态度。

We couldn't agree more. I think we're out of time, but I'm happy to talk about that if I can respond. Go ahead, if that's all right with the chairman.
我们非常赞同。我认为我们时间不多了,但如果可以回应的话,我很乐意谈谈这个问题。请讲,如果主席先生同意的话。

Okay. First of all, I think we try to design systems that do not maximize for engagement. In fact, we're so short on GPUs, the less people use our products, the better. But we're not an advertising-based model. We're not trying to get people to use it more and more. And I think that's a different shape than ad-supported social media.
首先,我认为我们努力设计不追求最大化用户参与度的系统。事实上,我们的GPU资源非常短缺,用户越少使用我们的产品越好。但我们不是基于广告的模式,我们不会试图让人们越用越多。我认为,这与靠广告支撑的社交媒体是不同的形态。

Second, these systems do have the capability to influence in obvious and in very nuanced ways. And I think that's particularly important for the safety of children, but that will impact all of us. One of the things that we'll do ourselves, regulation or not — but I think a regulatory approach would be good for, also — is requirements about how the values of these systems are set and how these systems respond to questions that can cause influence. So we'd love to partner with you. Couldn't agree more on the importance.
其次,这些系统确实有明显和微妙的影响力。我认为这对于孩子的安全尤为重要,但也会影响我们所有人。无论我们是否进行监管,我们将自行制定要求,例如这些系统的价值观如何设置以及这些系统如何回应可能导致影响的问题。因此,我们非常乐意与您合作。我非常认同这一点的重要性。

Thank you. Mr. Chairman, for the record, I just want to say that the Senator from Georgia is also very handsome and brilliant too. But I will allow that comment to stand without objection.
谢谢主席先生。在记录上,我想说来自乔治亚的参议员也非常英俊和聪明。但我会允许这个评论不受反对而发表。

Mr. Chairman and Ranking Member, thank you very much. It's nice that we finally got down to the bald guys down here at the end. I just want to thank you both. This has been one of the best hearings I've had this Congress — just a testament to you two — and to seeing the challenges and the opportunities that AI presents. So I appreciate you both.
主席先生、排名成员先生,非常感谢你们。很高兴终于轮到我们坐在末席的这几个光头发言了。我想感谢你们俩。这是本届国会我参加过的最好的听证会之一,这是对你们二位的肯定,也让我们看到了人工智能带来的挑战和机遇。所以我很感激你们俩。

I want to just jump in, I think very broadly, and then I'll get a little more narrow. Sam, you said very broadly, technology has been moving like this, and a lot of people have been talking about regulation, so I use the example of the automobile. What an extraordinary piece of technology. I mean, New York City did not know what to do with horse manure — they were having crises, forming commissions — and the automobile comes along and ends that problem. But at the same time, we have tens of thousands of people dying on highways every year. We have emissions crises and the like. There are multiple federal agencies that were created or are specifically focused on regulating cars.
我想直接切入。我会先从很宽泛的角度出发,然后逐渐聚焦。Sam,你刚才说得很宽泛,科技一直在这样发展,很多人一直在讨论监管问题,所以我用汽车作为例子。汽车是一项非常惊人的技术。纽约市曾经不知道该怎么处理马粪问题,陷入危机、成立各种委员会,而汽车的出现终结了这个问题。但与此同时,每年有数万人死于公路交通事故。我们还面临排放危机等问题。多个联邦机构因监管汽车而设立,或专门专注于此。

And so this idea that this equally transformative technology is coming — and for Congress to do nothing, which is not what anybody here is calling for, to do little or nothing, is obviously unacceptable. I really appreciate Senator Welch, who I have been going back and forth with during this hearing; he and Senator Bennet have a bill talking about trying to regulate in this space. Not doing so for social media has been, I think, very destructive and allowed a lot of things to go on that are really causing a lot of harm.
因此,大家都认为这种同样具有改变力的技术即将到来,国会不采取任何行动是错误的。显然,少做或不做是不能接受的。我真的很感谢韦尔奇参议员和我在这次听证会期间一直互相交流,他和贝内特有一项法案试图在这个领域进行监管。对于社交媒体来说不这样做,我认为是非常破坏性的,而且允许了许多导致很多伤害的事情发生。

And so the question is what kind of regulation. You all have spoken to that with a lot of my colleagues. And I want to say, Ms. Montgomery — and I have to give full disclosure, I'm the child of two IBM parents — you talked about defining the highest-risk uses. We don't know all of them. We really don't. We can't see where this is going. Regulating at the point of risk — and you sort of called not for an agency, and I think when somebody else asked you to specify, because you don't want to slow things down, you said we should build on what we have in place. But you can envision that we can try to work in two different ways: ultimately something specific, like we have with cars — EPA, NHTSA, the Federal Motor Carrier Safety Administration, all of these things. You can imagine something specific that is, as Mr. Marcus points out, a nimble agency that could do monitoring and other things. You can imagine the need for something like that, correct?
因此问题是,需要什么样的监管——你们已经和我的许多同事谈到了这一点。我想说,蒙哥马利女士——我必须全面披露,我是两位IBM员工的孩子——你谈到了如何定义最高风险的用途。我们并不知道所有的用途,真的不知道,我们看不清这会走向何方。在风险点上进行监管——你并没有呼吁成立新机构,而且我记得当有人要求你具体说明时,因为你不想减缓进展,你说我们应该建立在现有体系的基础上。但你可以设想,我们可以尝试两种不同的方式,最终形成像汽车领域那样的专门机构——美国环保署、国家公路交通安全管理局、联邦汽车运输安全管理局等等。你可以想象需要一个像马库斯先生指出的那样灵活、能够进行监测等工作的专门机构。对吗?

Absolutely. And so just for the record then, in addition to trying to regulate with what we have now, you would encourage Congress and my colleague, Senator Welch, to move forward in trying to figure out the right tailored agency to deal with what we know, and perhaps things that might come up in the future. I would encourage Congress to make sure it understands the technology, has the skills and resources in place to impose regulatory requirements on the uses of the technology, and to understand emerging risks as well. So yes.
毫无疑问。为了记录在案:除了用我们现有的手段进行监管之外,您也会鼓励国会和我的同事韦尔奇参议员继续推进,找出合适的专门机构来处理我们已知的问题,以及未来可能出现的问题。我会鼓励国会确保自己理解这项技术,具备必要的技能和资源,对技术的使用施加监管要求,并了解不断出现的新风险。所以,是的。

Mr. Marcus, there is no way to put this genie back in the bottle. Globally, it is exploding. I appreciate your thoughts, and I shared with some of my staff your ideas about the international context. But there is no way to stop this moving forward. So with that understanding, just building on what Ms. Montgomery said, what kind of encouragement do you have, as specifically as possible, toward forming an agency, toward using current rules and regulations? Can you just put some clarity on what you've already stated?
马库斯先生,现在已经无法阻止这种趋势了。全球范围内,这种现象正在爆发。我感谢您的想法,并已分享您对国际环境的看法,但是我们无法阻止这种趋势继续前进。所以在这种情况下,接下来让我们延伸一下蒙哥马利女士的话,您对成立机构、使用现行规定方面有何具体的鼓励意见?您能否进一步阐述一下您已经提出的一些观点呢?

Let me just insert there are more genies yet to come from more bottles. Some genies are already out but we don't have machines that can really, for example, self-improve themselves. We don't really have machines that have self-awareness and we might not ever want to go there. So there are other genies to be concerned about.
让我来补充一下,还有更多神灵将会从更多的魔瓶中出现。虽然已经有一些神灵出现了,但我们并没有能够真正自我改进的机器,比如拥有自我意识的机器,我们也许永远不想去那里。因此,还有其他值得关注的问题。

On to the main part of your question. I think that we need to have some international meetings very quickly with people who have expertise in how you grow agencies — in the history of growing agencies. We need to do that at the federal level. We need to do that at the international level. I'll just emphasize one thing I haven't as much as I would like to, which is that I think science has to be a really important part of it.
接下来进入你问题的主要部分。我认为我们需要尽快与在机构增长历史方面拥有专门知识的国际专家进行会议。我们需要在联邦层面和国际层面上进行这个会议。我想强调一件事情,那就是科学必须成为其中一个重要的组成部分。

And I'll give an example. We've talked about misinformation. We don't really have the tools right now to detect and label misinformation with nutrition labels that we would like to. We have to build new technologies for that. We don't really have tools yet to detect a wide uptick in cybercrime probably. We probably need new tools there. We need science to probably help us to figure out what we need to build and also what it is that we need to have transparency around.
我来举个例子。我们已经谈到了错误信息的问题。目前我们还没有工具,能像我们希望的那样检测错误信息并为其贴上"营养标签"式的标记。我们需要为此构建新技术。我们可能也还没有工具来检测网络犯罪的大幅上升,这方面大概也需要新的工具。我们需要科学来帮助我们弄清需要构建什么,以及需要围绕哪些方面建立透明度。

I understood. Sam, just going to you for the little bit of time I have left. First of all, you're a bit of a unicorn — I said that to you when we first met. Could you explain why nonprofit? In other words, you're not looking at this for the money, and you've even capped the VC people. Just really quickly, I want folks to understand that. We started as a nonprofit really focused on how this technology was going to be built at the time.
我理解了。Sam,我剩下的一点时间就留给你。首先,你算是一只"独角兽",我们初次见面时我就这么说过。你能解释一下为什么选择非营利组织吗?换句话说,你们做这件事不是为了钱,甚至对风投的回报设置了上限。我想让大家快速了解这一点。我们最初是一个非营利组织,非常专注于这项技术当时将如何构建。

It was very outside the Overton window that something like AGI was even possible. That shifted a lot. We didn't know at the time how important scale was going to be, but we did know that we wanted to build this with humanity's best interest at heart and a belief that this technology could, if it goes the way we want it, if we can do some of those things for Professor Marcus mentioned, really deeply transformed the world. We wanted to be as much of a force for getting to a positive definition.
这意味着,AGI(人工通用智能)这样的技术甚至有可能存在,这一点已经超出了人们的想象范围。但随着科技的发展,这种想法已经开始成为现实。我们当时并不知道规模的重要性,但我们知道,我们想把这个技术建设于人类的利益之上,并相信它能够真正深刻地改变世界,如果一切顺利的话,并且能够实现马库斯教授所提到的目标。我们渴望成为推动正面定义实现的力量。

I'm going to interrupt you. I think that's all good. I hope more of that gets out in the record. The second part of my question as well — I found it fascinating. Are you ever going to go to a revenue model for a return for your investors? Are you ever going to do ads or something like that? I wouldn't say never. I think there may be people that we want to offer services to and there's no other model that works, but I really like having a subscription-based model. We have API developers pay us, and we have ChatGPT subscriptions.
我要打断一下你的话。我认为这一切都很好,我希望更多这样的内容能记录在案。我的问题还有第二部分,我觉得很有意思。你们是否会为了给投资者回报而转向某种营收模式?会做广告之类的吗?我不会说永远不会。我认为可能会有一些我们想要为其提供服务的人,而没有其他可行的模式。但我真的很喜欢订阅制。我们有API开发者付费,也有ChatGPT订阅服务。

Can I jump in real quickly? One of my biggest concerns about this space is what I've already seen in the space of Web 2, Web 3, which is this massive corporate concentration. It is really terrifying to see how few companies now control and affect the lives of so many of us, and these companies are getting bigger and more powerful. I see OpenAI backed by Microsoft, and Anthropic is backed by Google. Google has its own in-house products. I'm really worried about that.
我能快速插一句吗?我对这个领域最大的担忧之一,就是我已经在Web 2、Web 3领域看到的那种大规模的企业集中。看到如此少的公司控制并影响着我们这么多人的生活,真的令人恐惧,而且这些公司正变得越来越大、越来越强。我看到OpenAI获得了微软的支持,而Anthropic得到了谷歌的支持。谷歌自己也有内部产品。我真的很担心这一点。

I'm wondering, Sam, if you can give me a quick acknowledgement: are you worried about the corporate concentration in this space and what effect it might have — the associated risks, perhaps, with market concentration in AI? And Mr. Marcus, can you answer that as well? I think there will be many people that develop models — what's happening in the open-source communities — but there will be a relatively small number of providers that can make models at the bleeding edge.
Sam,我想请你快速回应一下:你是否担心这个领域的企业集中,以及它可能产生的影响——也就是AI市场集中可能带来的相关风险?马库斯先生,您也能回答一下吗?我认为会有很多人开发模型——看看开源社区正在发生的事情——但能够开发最前沿模型的供应商会相对较少。

I think there are benefits and danger to that. We're talking about all of the dangers with AI: the fewer of us that you really have to keep a careful eye on, on the absolute bleeding-edge capabilities, there are benefits there. I think there needs to be enough — and there will be, because there's so much value — that consumers have choice, that we have different ideas. Mr. Marcus, real quick.
我认为这既有好处也有危险。我们正在讨论人工智能的所有危险:需要在绝对最前沿严密监督的主体越少,监管就越容易,这是好处。但我认为也需要有足够多的参与者——而且会有,因为价值太大了——让消费者有选择,让我们有不同的想法。马库斯先生,请简短回答。

There is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people's beliefs through the nature of these systems. Again, I put something in the record about the Wall Street Journal, about how these systems can subtly shape our beliefs, and that has enormous influence on how we live our lives. And having a small number of players do that with data that we don't even know about — that scares me. Sam, I'm sorry. One more thing I want to add.
存在一种技术官僚与寡头结合的真正风险,少数公司通过这些系统的特性影响人们的信念。我再次在记录中提到了《华尔街日报》关于这些系统如何微妙地塑造我们的信念,并对我们生活方式有巨大影响的问题,让令人恐惧的是,有很少的玩家利用我们甚至不知道的数据来进行这样的影响。 Sam,我很抱歉,我还想补充一件事。

One thing that I think is very important is that what these systems get aligned to — whose values, what those bounds are — is somehow set by society as a whole, by governments as a whole. And so creating that data set, the alignment data set — it could be an AI constitution, whatever it is — that has got to come very broadly from society. Thank you very much, Mr. Chairman. My time's expired, and I guess the best for last. Thank you, Senator Booker. Senator Welch.
我认为非常重要的一点是,这些系统所对齐的是什么——是谁的价值观,这些边界是什么——应该由全社会、由各国政府共同设定。因此,创建对齐数据集——它可以是某种"AI宪法",无论是什么——都必须广泛地来自于社会。非常感谢,主席先生。我的时间到了,我想是压轴出场了。谢谢你,布克参议员。有请韦尔奇参议员。

First of all, I want to thank you, Senator Blumenthal, and you, Senator Hawley. This has been a tremendous hearing. Senators are noted for their short attention spans, but I've sat through this entire hearing and enjoyed every minute of it. You have one of our longer attention spans in the United States. Thank you. That's to your great credit.
首先,我想感谢您,参议员布卢门撒尔和参议员霍利。这场听证会非常精彩。参议员们以注意力短暂而著名,但我坐在这里听了整个听证会,每一分钟都很享受。您们在美国拥有较长的注意力表现,非常赞!感谢您们。

Well, we've had good witnesses, and it's an incredibly important issue. And here's just, I don't, all the questions I have have been asked really, but here's a kind of a takeaway in what I think is the major question that we're going to have to answer as a Congress. Number one, you're here because AI is this extraordinary new technology that everyone says can be transformative as much as the printing press. Number two is really unknown what's going to happen, but there's a big fear you've expressed to all of you about what bad actors can do and will do if there's no rules of the road.
好的,我们的证人非常出色,这是一个极其重要的议题。我想说的是,我想问的问题其实都已经被问过了,但这里有一个要点,也是我认为我们作为国会必须回答的主要问题。第一,你们在这里,是因为人工智能是一项非凡的新技术,人人都说它可能像印刷术一样具有变革性。第二,未来会发生什么确实未知,但你们所有人都表达了一个很大的担忧:如果没有道路规则,坏的行为者能做什么、会做什么。

Number three: as a member who served in the House and now in the Senate, I've come to the conclusion that it's impossible for Congress to keep up with the speed of technology. And there have been concerns expressed about social media and now about AI that relate to fundamental privacy rights, bias, intellectual property, the spread of disinformation — which in many ways, for me, is the biggest threat, because that goes to the core of our capacity for self-governing. There's the economic transformation, which can be profound; there are safety concerns. And I've come to the conclusion that we absolutely have to have an agency.
第三,作为一名曾在众议院、如今在参议院任职的议员,我得出的结论是,国会不可能跟上技术发展的速度。围绕社交媒体、如今又围绕人工智能,人们表达了涉及基本隐私权、偏见、知识产权、虚假信息传播等方面的担忧。对我而言,虚假信息的传播在许多方面是最大的威胁,因为它直接危及我们自我治理的能力。此外,还有可能非常深刻的经济转型,以及安全问题。因此,我得出结论:我们绝对需要一个专门机构。

What its scope of engagement is has to be defined by us, but I believe that unless we have an agency that is going to address these questions from social media and AI, we really don't have much of a defense against the bad stuff. And the bad stuff will come. So last year I introduced, on the House side — and Senator Bennet did in the Senate; it was at the end of the year — the Digital Commission Act, and we're going to be reintroducing that this year.
它的职责范围必须由我们来界定,但我认为,除非我们设立一个机构来应对社交媒体和人工智能带来的这些问题,否则我们几乎无法抵御那些坏事,而坏事一定会来。所以去年我在众议院提出了《数字委员会法案》,班尼特参议员在参议院提出——当时已是年底——今年我们将重新提出该法案。

And the two things that I want to ask — one you've somewhat answered, because I think two or three of you said you think we do need an independent commission. Congress established an independent commission when railroads were running rampant over the interests of farmers, when Wall Street had no rules of the road and we got the SEC. And I think we're at that point now. But what the commission does would have to be defined and circumscribed. But also, there's always a question about the use of regulatory authority.
我想要问的两件事,一件你们已经回答了,因为我认为你们中的两三个人认为我们需要一个独立委员会。当铁路肆虐于农民利益时,国会成立了一个独立委员会。当华尔街没有规则,我们有了证监会。我认为现在我们也到了这个时候。但是,委员会的职责必须要明确和限制。同时,问题总是在于监管权的使用。

And the recognition that it can be used for good — JD Vance actually mentioned that when we were considering his and Senator Brown's bill about railroads after that event in East Palestine: regulation for the public health. But there's also a legitimate concern about regulation getting in the way, of things being too cumbersome and being a negative influence. So, A, two of the three of you said you think we do need an agency. What are some of the perils of an agency that we would have to be mindful of, in order to make certain that its goals of protecting many of those interests I just mentioned — privacy, bias, intellectual property, disinformation — would be the winners and not the losers? And I'll start with you, Mr. Altman.
在东帕利辛事件中,JD Vance提到了他和布朗参议员有关铁路法案的考虑,指出这些法规是为了公共卫生考虑。但是,人们也有合法的担忧,认为过度繁琐的法规会妨碍事情,产生负面影响。因此,其中两个人认为我们需要一个代理机构。我们必须谨慎考虑,以确保保护隐私、偏见、知识产权和虚假信息等多重利益是赢家而不是输家。我会从阿尔德曼先生开始。

Thank you, Senator. One, I think America has got to continue to lead. This happened in America; I'm very proud that it happened in America. By the way, I think that's right, and that's why I'd be much more confident if we had our own agency, as opposed to getting involved in international discussions. Ultimately, you want the rules of the road. But I think if we lead and get rules of the road that work for us, that is probably a more effective way to proceed. I personally believe there's a way to do both, and I think it is important to have the global view on this, because this technology will impact Americans and all of us wherever it's developed. But I think we want America to lead. We want —
感谢您,参议员。第一,我认为美国必须继续领先。这件事发生在美国,我为此感到非常自豪。顺便说一句,我认为这是对的,这也是为什么,与参与国际讨论相比,如果我们有自己的机构,我会更有信心。最终,您想要的是道路规则。但我认为,如果我们来主导并制定出适合我们的道路规则,那可能是更有效的做法。我个人认为两者可以兼顾,而且我认为在这个问题上具备全球视野很重要,因为无论这项技术在哪里开发,它都会影响到美国人和我们所有人。但我认为我们希望美国来领导。我们希望——

So get to the perils issue, though. Well, that's one — I mean, that is a peril: you slow down American industry in such a way that China or somebody else makes faster progress. A second — and I think this can happen — is that the regulatory pressure should be on us, it should be on Google, it should be on the other small set of players most in the lead. We don't want to slow down smaller startups. We don't want to slow down open-source efforts. We still need them to comply with things; you can still cause great harm with a smaller model. But we should leave room and space for new ideas and new companies and independent researchers to do their work, and not impose the kind of regulatory burden that a company like ours could handle but a smaller one couldn't.
那我们来谈谈隐患的问题。好,这就是其中一个——我的意思是,这确实是一个隐患:我们把美国产业的发展速度放慢到让中国或其他国家更快取得进展的程度。第二个——我认为这是可能发生的——监管压力应该落在我们身上,落在谷歌身上,落在处于最领先地位的那一小部分公司身上。我们不想拖慢小型创业公司,也不想拖慢开源项目。他们仍然需要遵守规定——较小的模型同样可能造成巨大的伤害——但我们应该为新想法、新公司和独立研究人员留出开展工作的空间,而不是施加一种像我们这样的公司能够承受、而小公司无法承受的监管负担。

I think that's another peril, and it's clearly a way that regulation has gone. Mr. Marcus — or Professor Marcus? The other obvious peril is regulatory capture: if we make it appear as if we are doing something, but it's more like greenwashing and nothing really happens, and we just keep out the little players because we put on so much burden that only the big players can bear it. So there are also those kinds of perils. I fully agree with everything that Mr. Altman said, and I would add that to the list. Okay. Ms. Montgomery.
我认为这是另一个隐患,而且监管显然曾朝这个方向走过。马库斯先生——或者说马库斯教授?另一个明显的隐患是监管俘获:如果我们只是做做样子,看似有所作为,实际上更像是"漂绿",什么都没有真正发生;我们只是把小玩家排除在外,因为我们施加了太多负担,只有大玩家才能承受。所以也存在这类隐患。我完全同意奥尔特曼先生所说的一切,并会把这一点加入清单。好的。蒙哥马利女士。

One of the things I would add to the list is the risk of not holding companies accountable for the harms they're causing today. We talk about misinformation in electoral systems — so, agency or no agency, we need to hold companies responsible today, and accountable for the AI they're deploying that disseminates misinformation on things like elections, where the risk is high. You know, a regulatory agency would do a lot of the things that Senator Graham was talking about.
我想加入清单的一项,是不就企业今天正在造成的伤害追究其责任的风险。我们谈到了选举系统中的错误信息——所以,无论有没有机构,我们今天就需要让企业负责,就其部署的、在选举等高风险领域传播错误信息的人工智能追究责任。要知道,一个监管机构会做格雷厄姆参议员所说的很多事情。

You know, you don't build a nuclear reactor without getting a license; you shouldn't build an AI system without getting a license either, one that gets tested independently. I think it's a great analogy. We need both pre-deployment and post-deployment oversight. Okay. Thank you all very much. I yield back, Mr. Chairman.
要知道,建造核反应堆需要获得许可;建造人工智能系统也应当获得经过独立测试的许可。我认为这是一个很好的类比。我们在部署前和部署后都需要监督。好的,非常感谢大家。主席先生,我交回发言权。

Thanks. Thanks, Senator Welch. Let me ask a few more questions. You've all been very, very patient, and the turnout today, which is beyond our subcommittee, I think reflects both the value of what you're contributing and the interest in this topic.
谢谢。感谢韦尔奇参议员。让我再问几个问题。你们都非常耐心,而今天的出席人数已经超出了我们小组委员会的范围,我认为这既反映了你们所做贡献的价值,也反映了大家对这个主题的兴趣。

There are a number of subjects that we haven't covered at all. But one was just alluded to by Professor Marcus which is the monopolization danger. The dominance of markets that excludes new competition and thereby inhibits or prevents innovation and invention which we have seen in social media as well as some of the old industries, airlines, automobiles and others where consolidation has narrowed competition.
有几个主题我们还没有完全涵盖。但马库斯教授对一个问题进行了提及,那就是垄断风险。市场的主导地位会排除新的竞争对手,从而抑制或阻止创新和发明,这种现象在社交媒体以及一些旧产业如航空、汽车等地方也得到了体现,这里的整合已经限制了竞争。

And so I think we need to focus on kind of an old area of law — antitrust — which dates back more than a century and is still inadequate to deal with the challenges we have right now in our economy. And certainly we need to be mindful of the way that rules can enable the big guys to get bigger and exclude innovation and competition, and responsible good guys such as are represented in this industry right now.
因此,我认为我们需要关注一个有一百多年历史的老领域——反垄断法,它至今仍不足以应对当前经济面临的挑战。我们当然也需要警惕规则的作用方式,避免让大公司变得更大,把创新、竞争以及像目前这个行业中所代表的那些负责任的好公司排除在外。

We haven't dealt with national security. There are huge implications for national security. I will tell you, as a member of the Armed Services Committee, classified briefings on this issue and on the threats posed by some of our adversaries have abounded. China has been mentioned here, but the sources of threats to this nation in this space are very real and urgent. We're not going to deal with them today, but we do need to deal with them, and hopefully we will in this committee.
我们还没有讨论国家安全问题,而它的影响非常巨大。作为军事委员会的成员,我可以告诉各位,关于这个问题以及某些对手所构成威胁的机密简报非常多。这里提到了中国,但我们国家在这一领域面临的威胁来源是真实而紧迫的。我们今天不会处理它们,但我们确实需要处理,也希望能在本委员会中解决。

And then on the issue of a new agency — you know, I've been doing this stuff for a while. I was Attorney General of Connecticut for 20 years. I was a federal prosecutor, a U.S. Attorney. Most of my career has been in enforcement, and I will tell you something: you can create ten new agencies, but if you don't give them the resources — and I'm talking not just about dollars, I'm talking about scientific expertise — you guys will run circles around them. And it isn't just the models or the generative AI that will run circles around them; it's the scientists in your companies.
至于设立新机构的问题——要知道,我做这一行已经很久了。我担任了20年康涅狄格州总检察长,也做过联邦检察官、联邦检察长。我职业生涯的大部分时间都在执法领域。我要告诉各位一件事:你们可以创立十个新机构,但如果不给它们资源——我说的不仅是资金,还有科学专业知识——你们就会把它们远远甩在后面。而且不只是模型或生成式人工智能会甩开它们,你们公司里的科学家也会。

For every success story in government regulation, you can think of five failures. That's true of the FDA, it's true of the IAEA, it's true of the SEC, it's true of the whole alphabet list of government agencies, and I hope our experience here will be different. But this Pandora's box requires more than just words or concepts — "licensing," "a new agency." There's some real hard decision-making, as Ms. Montgomery has alluded to, about how to frame the rules to fit the risks.
政府监管的每一个成功案例背后,你都能想到五个失败案例。FDA如此,国际原子能机构如此,证交会如此,政府机构的整张字母表都是如此。我希望我们这次的经历会有所不同,但打开这个潘多拉魔盒,需要的不只是"发许可""建新机构"这样的字眼或概念。正如蒙哥马利女士所提到的,如何制定与风险相匹配的规则,需要做出一些真正艰难的决策。

First, do no harm. Make it effective, make it enforceable, make it real. I think we need to grapple with the hard questions here that, frankly, this initial hearing has raised very successfully but not answered, and I thank our colleagues who have participated and made these very creative suggestions. I'm very interested in enforcement — literally 15 years ago, I think, I advocated abolishing Section 230.
首先,不造成伤害;要使监管有效、可执行、落到实处。我认为我们需要正视这里的难题——坦率地说,这次初步听证会非常成功地提出了这些问题,但尚未回答。我感谢参与其中并提出这些富有创意建议的同事们。我非常关注执法问题——我想,整整15年前,我就曾主张废除第230条。

What's old is new again. Now people are talking about abolishing Section 230; back then it was considered completely unrealistic. But enforcement really does matter. I want to ask Mr. Altman, because of the privacy issue — you've suggested that you have an interest in protecting the privacy of the data that may come to you or be available to you.
旧事如今又成了新话题。现在人们在谈论废除第230条,而在当年这被认为完全不切实际。但执法确实很重要。由于隐私问题,我想问问奥尔特曼先生——您曾表示,您有意保护可能流向您或您可以获取的数据的隐私。

How do you — what specific steps do you take to protect privacy? One is that we don't train on any data submitted to our API. So if you're a business customer of ours and submit data, we don't train on it at all. We do retain it for 30 days, solely for the purpose of trust and safety enforcement, but that's different than training on it. If you use ChatGPT, you can opt out of us training on your data; you can also delete your conversation history or your whole account.
你们具体采取哪些步骤来保护隐私?其一,我们不会用任何通过API提交的数据进行训练。所以如果您是我们的企业客户并提交了数据,我们完全不会用它来训练。我们只会将其保留30天,仅用于信任与安全方面的执行,这与用于训练是两回事。如果您使用ChatGPT,您可以选择不让我们用您的数据进行训练,也可以删除您的对话记录或整个账户。
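To make the data-handling rules Mr. Altman describes concrete, here is a minimal, purely illustrative Python sketch of a policy with that shape — API submissions never used for training and purged after a 30-day trust-and-safety window, consumer data trainable only absent an opt-out. All class and function names are hypothetical; this is not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # trust-and-safety window described in testimony

@dataclass
class Record:
    source: str              # "api" or "consumer"
    created_at: datetime
    opted_out: bool = False  # consumer "don't train on my data" flag

def eligible_for_training(rec: Record) -> bool:
    """API submissions are never trained on; consumers may opt out."""
    return rec.source == "consumer" and not rec.opted_out

def purge_expired(records: list[Record], now: datetime) -> list[Record]:
    """Drop anything older than the 30-day trust-and-safety window."""
    return [r for r in records if now - r.created_at <= RETENTION]

now = datetime.now(timezone.utc)
records = [
    Record("api", now - timedelta(days=40)),      # expired: purged
    Record("api", now - timedelta(days=5)),       # kept, never trained on
    Record("consumer", now, opted_out=True),      # kept, opted out
    Record("consumer", now - timedelta(days=1)),  # kept, trainable
]
for r in purge_expired(records, now):
    print(r.source, eligible_for_training(r))
```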

Ms. Montgomery, I know you don't deal directly with consumers, but do you take steps to protect privacy as well? Absolutely. We even filter our large language models' training data for content that includes personal information that may have been pulled from public data sets. So we apply an additional level of filtering.
蒙哥马利女士,我知道您不直接与消费者打交道,但你们是否也采取措施保护隐私?当然。我们甚至会对大型语言模型的训练数据进行过滤,筛除其中可能来自公共数据集的个人信息。也就是说,我们施加了额外的一层过滤。
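As an illustration of the kind of filtering Ms. Montgomery alludes to, here is a deliberately simple sketch: a regex pass that redacts obvious personal identifiers (emails, US-style phone numbers) from training text. Real pipelines use far richer detectors; the patterns and names below are assumptions for illustration only.

```python
import re

# Deliberately simple patterns; production PII filters use much richer detectors.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact_pii("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> Reach Jane at [EMAIL] or [PHONE].
```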

Professor Marcus, you made reference to self-awareness, self-learning; already we're talking about the potential for jailbreaks. How soon do you think that new kind of generative AI — AI that is self-aware and so forth — will be usable, will be practical? I have no idea on that one. I think we don't really understand what self-awareness is, and so it's hard to put a date on it. In terms of self-improvement, there's some modest self-improvement in current systems, but one could imagine a lot more, and that could happen in two years, or it could happen in 20 years.
马库斯教授,您提到了自我意识、自我学习;我们已经在谈论"越狱"的可能性了。您认为这种新型的生成式人工智能——具备自我意识等特征的人工智能——多久之后能够使用、变得实用?这个我完全说不准。我认为我们并没有真正理解自我意识是什么,所以很难给出一个日期。至于自我改进,当前的系统已有一些适度的自我改进,但可以想象会有多得多的进展,这可能在两年内发生,也可能在20年内发生。

There are basic paradigms that haven't been invented yet; some of them we might want to discourage, but it's a bit hard to put timelines on them. And just going back to enforcement for one second: one thing that is absolutely paramount, I think, is far greater transparency about what the models are and what the data are. That doesn't necessarily mean everybody in the general public has to know exactly what's in one of these systems, but I think it means that there needs to be some enforcement arm that can look at these systems, look at the data, perform tests, and so forth.
还有一些尚未被发明出来的基本范式,其中有些我们可能想要加以劝阻,但很难为它们设定时间表。再回到执法问题上说一句:我认为绝对至关重要的一点,是对模型本身和数据内容要有大得多的透明度。这并不一定意味着每个普通公众都必须确切知道这些系统里有什么,但我认为这意味着需要有某种执法部门,能够审查这些系统、查看数据、进行测试等等。

Let me ask all of you — I think there has been a reference to elections and banning outputs involving elections. What are the other high-risk or highest-risk areas where you would either ban outputs or establish especially strict rules? Ms. Montgomery? The space around misinformation, I think, is a hugely important one. And coming back to the point on transparency: knowing what content was generated by AI is going to be a really critical area that we need to address. Any others?
让我问问各位——我想刚才有人提到了选举,以及禁止涉及选举的输出。还有哪些高风险或风险最高的领域,是你们会禁止输出或制定特别严格规则的?蒙哥马利女士?我认为,围绕错误信息的领域是极其重要的一个。再回到透明度的问题:知道哪些内容是由人工智能生成的,将是我们需要解决的一个非常关键的领域。还有其他的吗?

I think medical misinformation is something to really worry about. We have systems that hallucinate things; they're going to hallucinate medical advice. Some of the advice they'll give is good, some of it's bad; we need really tight regulation around that. Same with psychiatric advice — people are using these things as kind of ersatz therapists, and I think we need to be very concerned about that. I think we need to be concerned about internet access for these tools, when they can start making requests both of people and of things on the internet. It's probably okay if they just do search, but as they do more intrusive things on the internet — do we want them to be able to order equipment, or order chemicals, and so forth? So as we empower these systems more by giving them internet access, I think we need to be concerned about that. And then we've hardly talked at all about long-term risks.
我认为医疗方面的错误信息是真正值得担忧的。我们的系统会产生幻觉,它们也会幻觉出医疗建议,其中有些建议是好的,有些是坏的,我们需要对此进行非常严格的监管。精神健康方面的建议也是如此——人们把这些东西当作某种替代性的心理治疗师来用,我认为我们需要对此非常警惕。我还认为我们需要关注这些工具的互联网访问权限:当它们可以开始向人和互联网上的事物发出请求时,如果只是做搜索大概还好,但当它们在互联网上做更具侵入性的事情——比如,我们是否希望它们能够订购设备、订购化学品等等?随着我们通过赋予互联网访问权限使这些系统更强大,我认为我们需要对此保持警惕。此外,我们还几乎没有谈到长期风险。
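Professor Marcus's distinction — letting a tool search the internet but not act on it — amounts to a capability allowlist. Below is a minimal, hypothetical sketch of that idea; every name here is invented for illustration and not taken from any real agent framework.

```python
# Hypothetical capability gate for an AI agent's tool use: read-only
# actions are allowed by default; world-changing ones need human sign-off.
READ_ONLY = {"search", "fetch_page"}
REQUIRES_APPROVAL = {"place_order", "send_email", "post_content"}

def authorize(tool: str, human_approved: bool = False) -> bool:
    if tool in READ_ONLY:
        return True
    if tool in REQUIRES_APPROVAL:
        return human_approved  # block autonomous purchases, outreach, etc.
    return False  # deny anything unrecognized by default

assert authorize("search")
assert not authorize("place_order")                    # agent alone may not buy
assert authorize("place_order", human_approved=True)   # human in the loop
```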

Sam alluded to it briefly. I don't think that's where we are right now, but as we start to approach machines that have a larger footprint on the world beyond just having a conversation, we need to worry about that and think about how we're going to regulate it, monitor it, and so forth. In a sense, we've been talking about bad guys or certain bad actors manipulating AI to do harm. Manipulating people. And manipulating people — but generative AI can also manipulate the manipulators.
萨姆简短地提到过这一点。我不认为我们现在已经到了那一步,但随着我们开始接近那些在世界上留下更大足迹、而不只是进行对话的机器,我们需要为此担忧,并思考我们将如何监管它、监督它等等。从某种意义上讲,我们一直在谈论坏人或某些不良行为者操纵人工智能去作恶、去操纵人。操纵人——但生成式人工智能也可能反过来操纵这些操纵者。

It can. I mean, there are many layers of manipulation that are possible, and I think we don't yet really understand the consequences. Dan Dennett just sent me a manuscript last night, which will be in The Atlantic in a few days, on what he calls counterfeit people. It's a wonderful metaphor: these systems are almost like counterfeit people, and we don't really, honestly, understand what the consequences of that are. They're not perfectly humanlike yet, but they're good enough to fool a lot of the people a lot of the time, and that introduces lots of problems — for example, cybercrime, and how people might try to manipulate markets, and so forth. So it's a serious concern.
确实可以。我的意思是,可能存在许多层次的操纵,而我认为我们还没有真正理解其后果。丹·丹尼特昨晚刚给我发来一份手稿,几天后将发表在《大西洋月刊》上,讨论他所谓的"伪造人"。这是一个绝妙的比喻:这些系统几乎就像伪造的人,而坦率地说,我们并没有真正理解这意味着什么后果。它们还不完全像人,但已经足以在很多时候骗过很多人,这会带来许多问题,例如网络犯罪,以及人们可能如何借此操纵市场等等。所以这是一个严重的问题。

In my opening, I suggested three principles: transparency, accountability, and limits on use. Would you agree that those are a good starting point? Ms. Montgomery? 100 percent. And as you also mentioned, industry shouldn't wait for Congress — that's what we're doing here at IBM. There's no reason to wait for Congress.
在我的开场白中,我提出了三项原则:透明度、问责制和对使用的限制。您同意这是一个好的起点吗?蒙哥马利女士?百分之百同意。而且正如您提到的,行业不应该等待国会——这正是我们在IBM所做的。没有理由等待国会。

Yeah, Professor Marcus? I think those three would be a great start. I mean, there are things like the White House Blueprint for an AI Bill of Rights, for example, and the UNESCO guidelines and so forth, that show, I think, a large consensus around what it is we need. And the real question now is definitely how we're going to put some teeth in it and make these things actually enforceable. So, for example, we don't have transparency yet; we all know we want it, but we're not doing enough to enforce it.
好的,马库斯教授?我认为这三项会是一个很好的开端。我的意思是,像白宫的《人工智能权利法案蓝图》、联合国教科文组织的指导方针等等,我认为都表明围绕我们所需要的东西已经有了广泛共识。而现在真正的问题无疑是,我们要如何为其加上"牙齿",让这些东西真正可以执行。举例来说,我们现在还没有透明度——我们都知道我们想要它,但我们在强制落实上做得还不够。

Mr. Altman? I certainly agree that those are important points. I would add — and Professor Marcus touched on this — that while we have spent most of the time today on current risks, and I think that's appropriate and I'm very glad we have done it, these systems will become more capable, and I'm not sure how far away that is, but maybe not super far.
奥尔特曼先生?我当然同意这些都是重要的要点。我还想补充一点——马库斯教授也提到了——我们今天把大部分时间花在当前的风险上,我认为这是合适的,我也很高兴我们这么做了;但这些系统会变得更强大,我不确定那还有多远,但也许并不太遥远。

I think it's important that we also spend time talking about how we're going to confront those challenges. We've talked privately — you know how much I care. I agree that you care deeply and intensely; but also, that prospect of increased danger or risk resulting from even more complex and capable AI mechanisms may certainly be closer than a lot of people appreciate.
我认为同样重要的是,我们也要花时间讨论将如何应对这些挑战。我们私下谈过——你知道我有多在意。我同意你确实深切而强烈地在意;但与此同时,更复杂、更强大的人工智能机制所带来的危险或风险加剧的前景,很可能比许多人意识到的更近。

Let me just add for the record that I'm sitting next to Sam, closer than I've ever sat to him except once before in my life, and that his sincerity in talking about those fears is very apparent physically, in a way that just doesn't communicate on the television screen. Thank you.
请允许我郑重补充一句:我此刻就坐在萨姆旁边,这是我一生中除了一次之外离他最近的时候;他谈论那些担忧时的真诚,从神态举止上看非常明显,而这种感觉是电视屏幕传达不出来的。谢谢。

Senator Hawley. Thank you, Mr. Chairman, for a great hearing. Thanks to the witnesses. So I've been keeping a little list here of the potential downsides, harms, or risks of generative AI, even in its current form. Let's just run through it. Loss of jobs — and this isn't speculative; I think your company, Ms. Montgomery, has announced that it's potentially laying off 7,800 people, a third of your non-consumer-facing workforce, because of AI. So: loss of jobs; invasion of privacy, personal privacy, on a scale we've never before seen; manipulation of personal behavior; manipulation of personal opinions; and potentially the degradation of free elections in America. Did I miss anything?
霍利参议员。谢谢主席先生,这是一场很棒的听证会,也感谢各位证人。我在这里列了一份清单,记录生成式人工智能即便以目前的形态也可能带来的负面影响、伤害或风险。我们来过一遍。首先是失业——这并非凭空猜测:蒙哥马利女士,我想贵公司已经宣布,由于人工智能,可能裁员7,800人,占你们非面向消费者员工的三分之一。那么:失业;对隐私、个人隐私前所未有规模的侵犯;对个人行为的操纵;对个人观点的操纵;以及美国自由选举可能遭到的侵蚀。我漏掉什么了吗?

I mean, this is quite a list. I noticed that a collective group of about a thousand technology and AI leaders, everybody from Andrew Yang to Elon Musk, recently called for a six-month moratorium on any further AI development. Were they right? Do you join those calls? Are they right to do that? Should we pause for six months or so?
这是一份相当长的清单。我注意到,一个由大约一千名技术和人工智能领域领袖组成的群体——从安德鲁·杨到埃隆·马斯克——最近呼吁暂停任何进一步的人工智能开发六个月。他们是对的吗?您是否加入这些呼吁?他们这样做对吗?我们应该暂停六个月左右吗?

Your characterization is not quite correct. I actually signed that letter; about 27,000 people signed it. It did not call for a ban on all AI research, nor on all AI, but only on a very specific thing: systems like GPT-5. Of every other piece of research that's ever been done, it was actually supportive or neutral.
您的描述不太准确。我确实签署了那封信,大约有27,000人签署了它。它并没有呼吁禁止所有人工智能研究,也没有呼吁禁止所有人工智能,而只是针对一件非常具体的事情:像GPT-5这样的系统。对于以往所有其他研究,它实际上持支持或中立态度。

In fact, it specifically called for more AI research — specifically, more research on trustworthy and safe AI. So you think that we should take a moratorium — a six-month moratorium, or more — on anything beyond GPT-4? I took the letter — what is the famous phrase? — spiritually, not literally.
事实上,它特别呼吁进行更多人工智能研究——特别是关于可信、安全人工智能的更多研究。那么您认为,我们应该对超出GPT-4的任何东西实行暂停——暂停六个月或更久吗?我对这封信的态度——那句著名的话怎么说来着?——是取其精神,而非字面。

Well, I'm asking for your opinion now, though. My opinion is that the moratorium we should focus on is actually deployment, until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety — on trustworthy, reliable AI — is exactly right. Deployment meaning not making it available to the public?
不过,我现在是在问您本人的意见。我的意见是,我们应该关注的暂停对象其实是部署,直到我们有充分的安全论证为止。我不确定我们需要暂停那个特定的项目,但我确实认为,这封信强调更多关注人工智能安全、关注可信可靠的人工智能,这一点完全正确。部署的意思是不把它开放给公众?

Yeah. So my concern is about things that are deployed at a scale of, let's say, 100 million people without any external review. I think we should think very carefully about doing that. What about you, Mr. Altman? Do you agree with that? Would you pause any further development, for six months or longer?
对。我担心的是那些在没有任何外部审查的情况下、以比如说一亿人的规模部署的东西。我认为我们应该非常慎重地考虑这样做。您呢,奥尔特曼先生?您同意吗?您会暂停任何进一步的开发六个月或更久吗?

So, first of all, after we finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5; we don't have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is audits, red teaming, safety standards that a model needs to pass before training.
首先,在我们完成GPT-4的训练之后,我们等了六个多月才部署它。我们目前并没有在训练将成为GPT-5的模型,未来六个月内也没有这样的计划。但我认为那封信的框架是错的。真正重要的是审计、红队测试,以及模型在训练之前需要通过的安全标准。

If we pause for six months, then I'm not sure what we do then — do we pause for another six? Do we kind of come up with some rules then? The standards that we have developed and that we've used for GPT-4 deployment — we want to build on those, and we think that's the right direction: not a calendar-clock pause. There may be times — I expect there will be times — when we find something that we don't understand and we really do need to take a pause, but we don't see that yet.
如果我们暂停六个月,我不确定接下来该做什么——再暂停六个月吗?到那时再制定一些规则吗?我们已经制定并用于GPT-4部署的那些标准——我们希望在其基础上继续完善,而且我们认为那才是正确的方向,而不是按日历时钟来暂停。可能会有那么一些时候——我预计会有——当我们发现一些我们不理解的东西,确实需要暂停,但我们目前还没有看到这种情况。
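As a purely hypothetical sketch of the "standards a model must pass" idea that Mr. Altman and the licensing analogy both gesture at — a gate that refuses deployment unless every named check succeeds — consider the following. The check names and thresholds are invented for illustration; real audits and red-team reviews are obviously not one-line predicates.

```python
from typing import Callable, Dict

Check = Callable[[object], bool]

# Hypothetical pre-deployment safety gate; all names are illustrative.
SAFETY_CHECKS: Dict[str, Check] = {
    "independent_audit": lambda m: getattr(m, "audited", False),
    "red_team_review":   lambda m: getattr(m, "red_teamed", False),
    "misuse_eval":       lambda m: getattr(m, "misuse_rate", 1.0) < 0.01,
}

def may_deploy(model: object) -> bool:
    """Refuse deployment unless every named check passes."""
    failures = [name for name, check in SAFETY_CHECKS.items() if not check(model)]
    if failures:
        print("Deployment blocked; failed checks:", failures)
        return False
    return True

class CandidateModel:
    audited = True
    red_teamed = True
    misuse_rate = 0.002  # fraction of red-team prompts yielding harmful output

print(may_deploy(CandidateModel()))  # True only because all three checks pass
```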

Never mind all the benefits — you don't see what yet? You're comfortable with all of the potential ramifications of the currently existing technology? I'm sorry — I don't see a reason not to train a new one, or to hold off deploying it, as I mentioned. There's all sorts of risky behavior, and there are limits we put in place; we have to pull things back sometimes and add new ones. I mean, we don't see something that would stop us from training the next model — something where we'd be so worried that we'd create some danger even in that process, let alone in the deployment.
先不谈那些好处——你们还没看到什么?你们对现有技术的所有潜在后果都放心吗?抱歉——正如我所说,我不认为有理由不去训练新模型,或推迟部署。各种有风险的行为都存在,我们也设置了限制;我们有时必须收回某些功能,再添加新的。我的意思是,我们没有看到任何会阻止我们训练下一个模型的东西——没有什么让我们担心到认为在训练过程中就会制造危险,更不用说部署了。

What about you, Ms. Montgomery? I think we need to use the time to prioritize ethics and responsible technology, as opposed to pausing development. Well, wouldn't pausing development help the development of protocols for safety standards and ethics? I'm not sure how practical it is to pause, but we absolutely should be prioritizing safety protocols.
您呢,蒙哥马利女士?我认为我们需要利用这段时间优先考虑伦理和负责任的技术,而不是暂停开发。那么,暂停开发难道不会有助于制定安全标准和伦理方面的规程吗?我不确定暂停在实践中是否可行,但我们绝对应该优先考虑安全规程。

Okay, the point about practicality leads me to this. I'm interested in this talk about an agency, and maybe that would work — although, having seen how agencies work in this government, they usually get captured by the interests that they're supposed to regulate. They usually get controlled by the people they're supposed to be watching. I mean, that's just been our history for 100 years. Maybe this agency would be different. I have a little different idea. Why don't we just let people sue you? Why don't we just make you liable in court? We can do that. We know how to do that. We can pass a statute. We can create a federal right of action that will allow private individuals who are harmed by this technology to get into court and to bring evidence into court, and it can be anybody. I mean, you want to talk about crowdsourcing? We'll just open the courthouse doors.
好,关于可行性这一点让我想到这里。我对设立机构的讨论很感兴趣,也许那行得通——不过,看过这个政府里各机构的运作方式之后就知道,它们通常会被本应受其监管的利益集团所俘获,通常会被它们本应监督的对象所控制。我是说,这就是我们一百年来的历史。也许这个机构会有所不同。我有一个略微不同的想法:我们为什么不干脆让人们起诉你们?为什么不让你们在法庭上承担责任?我们能做到这一点,也知道怎么做。我们可以通过一部法律,设立一种联邦诉权,让受到这项技术伤害的个人可以进入法庭并提交证据,而且任何人都可以。我是说,你们想谈众包吗?我们干脆敞开法院大门。

We'll define a broad private right of action — private citizens, class actions — and we'll just open it up. We'll allow people to go into court. We'll allow them to present evidence. They say that they were harmed — they were given medical misinformation, they were given election misinformation, whatever. Why not do that, Mr. Altman? I mean, please forgive my ignorance — can't people sue us? Yes, because you're not protected by Section 230. But there's not currently, I don't think, a federal right of action, a private right of action, that says that if you are harmed by generative AI technology, we will guarantee you the ability to get into court.
我们会定义一种宽泛的私人诉权——私人公民、集体诉讼——我们就把它放开。我们允许人们走进法庭,允许他们出示证据:他们说自己受到了伤害——被提供了医疗错误信息、选举错误信息,等等。为什么不这么做呢,奥尔特曼先生?我是说,请原谅我的无知——难道人们不能起诉我们吗?可以,因为你们不受第230条保护。但我认为,目前还不存在一种联邦诉权、私人诉权,规定如果您受到生成式人工智能技术的伤害,我们将保证您有能力进入法庭。

Well, I think there are a lot of other laws under which, if technology harms you, there are standards that we could be sued under — unless I'm really misunderstanding how things work. If the question is whether clearer laws about the specifics of this technology and consumer protections would be a good thing, I would say definitely yes. The laws that we have today were designed long before we had artificial intelligence, and I do not think they give us enough coverage. The plan that you propose, as a hypothetical, would certainly make a lot of lawyers wealthy, but I think it would be too slow to affect a lot of the things that we care about.
嗯,我认为还有很多其他法律——如果技术伤害了您,是存在可以据以起诉我们的标准的,除非我真的误解了事情的运作方式。如果问题是:针对这项技术的具体情况和消费者保护制定更清晰的法律是不是好事?我会说绝对是。我们今天拥有的法律是在人工智能出现之前很久设计的,我认为它们没有给我们足够的覆盖。您提出的方案,作为一种假设,肯定会让很多律师发财,但我认为它太慢了,无法影响我们关心的许多事情。

And there are gaps in the law — for example, we don't really — Wait, you think it'd be slower than Congress? Yes, I do. Really? Well, litigation can take a decade or more. Oh, but the threat of litigation is a powerful tool. I mean, how would IBM like to be sued for a billion dollars? I'm in no way asking to take litigation off the table among the tools. But I think, for example — if I can continue — there are areas like copyright where we don't really have laws. We don't really have a way of thinking about wholesale misinformation, as opposed to individual pieces of it — where, say, a foreign actor might make billions of pieces of misinformation, or a local actor might. We have some laws around market manipulation we could apply, but we get a lot of situations where we don't really know which laws apply.
而且法律存在空白——比如,我们其实并没有——等等,您认为那会比国会还慢?是的,我这么认为。真的吗?要知道,诉讼可能需要十年甚至更久。哦,但诉讼的威胁是一种强有力的工具。我是说,IBM愿意被起诉索赔十亿美元吗?我绝不是要求把诉讼从工具箱里拿掉。但我认为,比如说——请允许我说完——在版权这样的领域我们其实没有相应的法律;对于大规模的错误信息,而非单个的错误信息,我们也没有真正的思考框架——比如,某个外国行为者或本国行为者可能制造数十亿条错误信息。我们有一些可以适用的市场操纵方面的法律,但在很多情况下,我们其实不知道哪些法律适用。

There would be loopholes; the system is really not thought through. In fact, we don't even know whether Section 230 does or does not apply here, as far as I know. I think that's something a lot of people have speculated about this afternoon, but it's not solved. We could fix that. Well, the question is how. Oh, easy — it would be easy for us to say that Section 230 doesn't apply to generative AI.
会有漏洞;这套体系实际上并没有被想透。事实上,据我所知,我们甚至不知道第230条在这里适用还是不适用。我想这是今天下午很多人都在猜测的问题,但它并没有解决。我们可以解决它。那么问题是怎么解决?哦,很简单——我们很容易就可以规定第230条不适用于生成式人工智能。

I think the important thing is a duty of care, which I think fits the idea of a private right of action. No, that's exactly right. And also, AI is not a shield: if a company discriminates in granting credit, for example, or in the hiring process, by virtue of the fact that it relied too significantly on an AI tool, it is responsible for that today, regardless of whether it used a tool or a human to make that decision.
我认为重要的是注意义务,我觉得这与私人诉权的理念相契合。对,完全正确。而且,人工智能不是挡箭牌:比如,如果一家公司在授信或招聘过程中,因为过度依赖某个人工智能工具而实施了歧视,那么它今天就要为此负责,无论这个决定是由工具还是由人做出的。

I'm going to turn to Senator Booker for some final questions, but I just want to make a quick point here on the issue of the moratorium. I think we need to be careful. The world won't wait; the rest of the global scientific community isn't going to pause. We have adversaries that are moving ahead, and sticking our head in the sand is not the answer. Safeguards and protections, yes — but a flat stop sign? Sticking our head in the sand? I would be very, very worried.
接下来我将请布克参议员提最后几个问题,但在暂停这个议题上我想先快速说一点。我认为我们需要谨慎。世界不会等待,全球其他科学界也不会暂停。我们的对手正在向前推进,把头埋进沙子里不是答案。要有保障和保护措施,没错;但竖一块一刀切的禁行牌、把头埋进沙子里?我会非常、非常担心。

Without advocating for any sort of pause, I would just again emphasize there is a difference between research — which surely we need to do to keep pace with our foreign rivals — and deployment at really massive scale. You could deploy things at the scale of a million people or ten million people, but not 100 million people or a billion people. If there are risks, you might find them out sooner and be able to close the barn doors before the horses leave rather than after. Senator Booker. I just — there will be no pause. There's no enforcement body to force a pause. It's just not going to happen. It's nice to call for it, for any number of just reasons or whatever, but forgive me for being skeptical: nobody is pausing. I don't think it's a realistic thing.
我并不是在主张任何形式的暂停,我只想再次强调:研究与真正大规模的部署是有区别的——研究我们当然需要做,以跟上外国竞争对手的步伐。你可以在一百万人或一千万人的规模上部署,但不要在一亿人或十亿人的规模上部署。如果存在风险,你也许能更早发现它们,从而在马跑出去之前、而不是之后关上马厩的门。布克参议员。我只是想说——不会有暂停的。没有任何执法机构能强制暂停,这根本不会发生。出于种种正当理由去呼吁暂停固然好,但请原谅我的怀疑:没有人会暂停。我不认为这是现实的。

I personally signed the letter to call attention to how serious the problems were, and to emphasize spending more of our efforts on trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable. I'm a futurist — I love being excited about the future. There's a famous question: if you couldn't control your race, your gender, where on the planet you'd land, or at what time in human history, when would you want to be born? Everyone would say right now — it's still the best time to be alive, because of technology, innovation, and everything.
我个人签署那封信,是为了唤起人们对问题严重性的注意,并强调应把更多精力投入可信、安全的人工智能,而不是仅仅把我们已经知道不可靠的东西做得更大。我是一个未来主义者——我喜欢为未来感到兴奋。有一个著名的问题:如果你无法选择自己的种族、性别、降生在地球上的哪个地方、处于人类历史的哪个时代,你会希望出生在什么时候?每个人都会说就是现在——因为技术、创新和这一切,现在仍是最好的生存年代。

I'm excited about what the future holds, but the destructiveness that I've also seen — as a person who has watched the transformative technologies of the last 25 years — is what really concerns me. One of the things that concerns me especially is companies that are designed to want to keep my attention on screens, and I'm not just talking about new media; 24-hour cable news is a great example of people who want to keep your eyes on screens. I have a lot of concerns about corporate intention.
我对未来的前景感到兴奋,但作为一个亲眼见证了过去25年变革性技术的人,我也看到了其破坏性,这才是真正让我担忧的。尤其让我担忧的一点,是那些被设计成想让我一直盯着屏幕的公司——我说的不只是新媒体,24小时有线新闻就是想让你们的眼睛离不开屏幕的绝佳例子。我对企业的意图有很多疑虑。

Sam, this is again why I find your story so fascinating and your values — which I've come to believe in from our conversations — so compelling. But absent that, I really want to explore what happens when these companies that are already controlling so much of our lives — a lot has been written about the FANG companies — what happens when they are the ones dominating this technology, as they did before. So, Professor Marcus, do you have any concern about the role that corporate power, corporate concentration, plays in this realm — that a few companies might control this whole area?
萨姆,这也正是为什么我觉得你的故事如此吸引我,以及从我们的交谈中我所认同的你的价值观如此令人信服。但撇开这些不谈,我真的很想探讨:当这些已经掌控我们生活如此之多的公司——关于FANG公司已经有很多论述——当它们像从前一样主宰这项技术时,会发生什么?那么,马库斯教授,您是否担心企业权力、企业集中在这一领域的作用——担心少数几家公司可能控制整个领域?

I radically changed the shape of my own life in the last few months, and it was because of what happened with Microsoft releasing Sydney, which didn't go the way I thought it would. In one way it did, which is that I anticipated the hallucinations. I wrote an essay, which I have in the appendix, "What to Expect When You're Expecting GPT-4." I said that it would still be a good tool for misinformation, that it would still have trouble with physical reasoning and psychological reasoning, and that it would hallucinate. Then along came Sydney, and the initial press reports were quite favorable — and then there was the famous article by Kevin Roose in which it recommended that he get a divorce. I had seen Tay, and I had seen Galactica from Meta, and those had been pulled after they had problems. Sydney clearly had problems.
在过去几个月里,我彻底改变了自己生活的轨迹,起因是微软发布Sydney之后发生的事情——事情没有按我预想的方向发展。从某个角度说又确实如此:我预料到了幻觉问题。我写过一篇文章,收在附录里,题为《当你在期待GPT-4时该期待什么》。我说它仍会是制造错误信息的好工具,仍会在物理推理、心理推理上有困难,而且会产生幻觉。然后Sydney出现了,最初的媒体报道相当正面;接着就是凯文·鲁斯那篇著名的文章,Sydney在其中建议他离婚。我见过Tay,也见过Meta的Galactica,它们都在出问题后被下架了。Sydney显然也有问题。

What I would have done had I run Microsoft — which clearly I do not — would have been to temporarily withdraw it from the market, and they didn't. That was a wake-up call to me, and a reminder that even if you have a company like OpenAI that is a non-profit — and Sam's values, I think, have come through clearly today — other people can buy those companies and do what they like with them. Maybe we have a stable set of actors now, but the amount of power that these systems have to shape our views and our lives is really, really significant, and that doesn't even get into the risks that someone might repurpose them deliberately for all kinds of bad purposes.
如果是我在经营微软——显然我没有——我会暂时把它从市场上撤下,而他们没有。这对我是一记警钟,也提醒我:即使有像OpenAI这样的非营利组织——我认为萨姆的价值观今天已经表露得很清楚——其他人也可以收购这些公司,然后随心所欲地使用它们。也许我们现在拥有一批稳定的参与者,但这些系统塑造我们观点和生活的力量真的非常、非常大,这还没算上有人可能蓄意将它们改作各种恶意用途的风险。

In the middle of February, I stopped writing much about technical issues in AI, which is most of what I've written about for the last decade, and said: I need to work on policy. This is frightening. Sam, I want to give you an opportunity with my last question or so. Don't you have concerns — I graduated from Stanford; I know so many of the players in the Valley, from VC and PE folks to angel investors to a lot of founders of companies that we all know. Do you have some concern about a few players with extraordinary resources and power?
二月中旬,我不再怎么写人工智能的技术问题了——那是我过去十年写作的主要内容——我对自己说:我需要投身政策工作,这太令人害怕了。萨姆,我想把差不多最后一个问题留给你。你难道不担心吗——我毕业于斯坦福,认识硅谷的很多玩家,从风投、私募到天使投资人,再到我们都熟知的许多公司创始人。你是否担心少数拥有超常资源和权力的玩家?

Power to influence Washington, I mean. I'm a big believer in the free market, but the reason why, when I walk into a bodega, a Twinkie is cheaper than an apple, or a Happy Meal costs less than a bucket of salad, is the way the government tips the scales to pick winners and losers. So the free market is not what it should be when you have large corporate power that can even influence the game here. Do you have some concerns about that in this next era of technological innovation?
我是说影响华盛顿的权力。我是自由市场的坚定信奉者,但当我走进一家小杂货店,一个奶油夹心蛋糕(Twinkie)比一个苹果便宜、一份开心乐园餐比一桶沙拉便宜,原因就在于政府通过倾斜天平来挑选赢家和输家。所以,当庞大的企业力量甚至可以影响这里的游戏规则时,自由市场就不再是它应有的样子。在下一个技术创新时代,你对此有一些担忧吗?

Yeah — I mean, again, that's so much of why we started OpenAI. We have huge concerns about that. I think it's important to democratize the inputs to these systems, the values that we're going to align them to. And I think it's also important to give people wide use of these tools. When we started the API strategy, which is a big part of how we make our systems available for anyone to use, there was a huge amount of skepticism over that, and it does come with challenges, that's for sure. But we think putting this in the hands of a lot of people, and not in the hands of a few companies, is really quite important. And we are seeing the resulting innovation boom from that.
是的——我想说,这在很大程度上正是我们创办OpenAI的原因。我们对此有很大的担忧。我认为重要的是让这些系统的输入民主化,让我们将要对齐的价值观民主化。我还认为,让人们广泛使用这些工具同样重要。当我们启动API策略时——这是我们让任何人都能使用我们系统的重要方式——外界对此有大量质疑,而且它确实伴随着挑战,这毫无疑问。但我们认为,把它交到许多人手中,而不是少数几家公司手中,真的非常重要。我们也正看到由此带来的创新热潮。

But it is absolutely true that the number of companies that can train the true frontier models is going to be small, just because of the resources required. And so I think there needs to be incredible scrutiny of us and our competitors. There is a rich and exciting industry happening, of incredibly good research and new startups that are not just using our models but creating their own. And I think it's important to make sure that whatever regulatory stuff happens, whatever new agencies may or may not happen, we preserve that fire, because that's critical.
但有一点绝对是事实:仅仅因为所需资源巨大,能够训练真正前沿模型的公司数量将会很少。因此我认为,对我们和我们的竞争对手需要有极其严格的审视。眼下正涌现出一个丰富而令人兴奋的行业,有非常出色的研究和新创公司,它们不仅使用我们的模型,还在创建自己的模型。我认为重要的是确保:无论出台什么监管措施、无论是否成立新机构,我们都要保住这团火,因为这至关重要。

I'm a big believer in the democratizing potential of technology, but I've seen that promise fail time and time again, where people say, oh, this is going to have a big democratizing force. My team works on a lot of issues about the reinforcing of bias through algorithms, the failure to advertise certain opportunities in certain zip codes. But you seem to be saying — and I heard this with Web3: this is going to be decentralized, and all these things are going to happen. This seems to me not even to offer that promise, because designing these systems takes so much power, energy, and resources. Are you saying that my dreams of technology further democratizing opportunity and more are possible within a technology that, I think, can ultimately be very centralized, to a few players who already control so much?
我是技术民主化潜力的坚定信奉者,但我一次又一次看到这种承诺落空——人们总说,哦,这将带来巨大的民主化力量。我的团队研究很多这样的问题:算法对偏见的强化、某些邮政编码地区得不到某些机会的推送。但你似乎在说——我在Web3那里也听到过:这将是去中心化的,这些事情都会发生。而在我看来,这项技术甚至连那样的承诺都给不了,因为设计这些系统需要如此多的权力、能源和资源。你是说,在一项最终可能高度集中于少数已掌控如此之多的玩家手中的技术里,我那些关于技术进一步让机会民主化的梦想仍然可能实现吗?

So, this point that I made about use of the model and building on top of it — this is really a new platform, right? It is definitely important to talk about who's going to create the models; I want to do that. I also think it's really important to decide whose values we're going to align these models to. But in terms of using the models, the people who build on top of the OpenAI API do incredible things. People frequently comment, "I can't believe you get this much technology for this little money." So the companies people are building, putting AI everywhere, using our API — which does let us put safeguards in place — I think that's quite exciting, and I think that is how it is being democratized right now. There is a whole Cambrian explosion of new businesses, new products, and new services happening, from lots of different companies, on top of these models.
所以,我刚才谈到的使用模型并在其之上构建这一点——这实际上是一个全新的平台,对吧?讨论由谁来创建模型当然重要,我也想讨论;我还认为,决定让这些模型对齐谁的价值观同样非常重要。但就使用模型而言,在OpenAI API之上构建的人们正在做着不可思议的事情。人们常常感叹:"难以置信,花这么少的钱就能得到这么多技术。"所以,人们正在创建的那些公司,借助我们的API把人工智能带到各个角落——而API也让我们能够设置安全防护——我认为这非常令人兴奋,这也正是它当下被民主化的方式。在这些模型之上,许许多多不同的公司正迎来新企业、新产品、新服务的寒武纪大爆发。
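One concrete, hedged illustration of "building on top of the API with safeguards in place," using the OpenAI Python client as it existed around the time of this hearing (the 0.x `openai` package). The moderation endpoint and chat-completion call are real 2023-era APIs; the wrapper pattern, model choice, and prompt are arbitrary examples, not OpenAI's prescribed usage.

```python
import openai  # 2023-era client, e.g. `pip install openai==0.27.8`

openai.api_key = "YOUR_API_KEY"  # placeholder

def safe_complete(prompt: str) -> str:
    """Screen input with the moderation endpoint before generating a reply."""
    moderation = openai.Moderation.create(input=prompt)
    if moderation["results"][0]["flagged"]:
        return "Request declined by safety filter."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # arbitrary example model
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

print(safe_complete("Write a haiku about Senate hearings."))
```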

So I'll say, Chairman, as I close, that most industries resist even reasonable regulation, from seatbelt laws to — we've been talking a lot recently about rail safety. The only way we're going to see the democratization of values, I think — and there are noble companies out there — is if we create rules of the road that enforce certain safety measures, as we've seen with other technology.
所以主席先生,在结束时我想说:大多数行业连合理的监管都会抗拒,从安全带法律到我们最近一直在谈的铁路安全,莫不如此。我认为,尽管确实存在一些高尚的公司,但我们要想看到价值观的民主化,唯一的途径就是像对待其他技术那样,制定道路规则,强制执行某些安全措施。

Thank you. Thanks, Senator Booker. And I couldn't agree more that in terms of consumer protection, which I've been doing for a while, participation by the industry is tremendously important. And not just rhetorically, but in real terms, because we have a lot of industries that come before us and say, oh, we're all in favor of rules, but not those rules. Those rules we don't like.
谢谢。感谢布克参议员。我非常赞同,在消费者保护方面(我已经做了一段时间),行业参与非常重要。这不仅是言辞上的,而且是实际上的,因为我们有很多行业向我们提出,我们都赞成规则,但不赞成那些规则。我们不喜欢那些规则。

And it's every rule, in fact, that they don't like. And I sense that there is a willingness to participate here that is genuine and authentic. I thought about asking ChatGPT to do a new version of "Don't Stop Thinking About Tomorrow," because that's what we need to be doing here. And as Senator Hawley has pointed out, Congress doesn't always move at the pace of technology.
而实际上,他们不喜欢的是每一条规则。我感到,这里的参与意愿是真诚而真实的。我曾想过让ChatGPT写一个新版的《别停止想着明天》,因为这正是我们在这里需要做的。而且正如霍利参议员所指出的,国会并不总是跟得上技术的步伐。

And that may be the reason why we need a new agency, but we also need to recognize the rest of the world is going to be moving as well. And you've been enormously helpful in focusing us and illuminating some of these questions and performed a great service by being here today. So thank you to every one of our witnesses.
这可能是我们需要一个新机构的原因,但我们也需要认识到世界其他地方也会发展。您对我们的专注和对这些问题的阐述非常有帮助,今天能够在这里出席是为我们提供了巨大的服务。因此,感谢每一位见证人。

And I'm going to close the hearing, leaving the record open for one week in case anyone wants to submit anything. I encourage any of you who have manuscripts that are going to be published, or observations from your companies, to submit them to us. And we look forward to our next hearing. This one is closed.
我现在宣布听证会结束,记录将保留开放一周,供任何人提交材料。我鼓励各位,如果有即将出版的手稿或来自贵公司的观察意见,欢迎提交给我们。我们期待下一次听证会。本次听证会到此结束。