From the podcast: In our Tech Society

Battling to regulate AI: Brussels, Beijing & Brexit

Published: 2023-01-27 07:05:34

Summary

The EU's outsized influence on tech regulation beyond its borders raises a lot of political issues: e.g. the UK having little ability to effectively regulate AI. Besides this, we also discuss how China approaches AI ethics and regulation differently. I'm joined by Oxford University researcher Huw Roberts.

8.50: EU priorities in AI regulation
13.50: The effect of EU policy outside its borders
25.40: How China thinks differently about AI
34.30: "The Beijing Effect"


Transcript

It's innately a measure designed to protect fundamental rights, but it's fundamental rights as conceptualised within the EU, or at least by EU institutions. With its size and heavy penalties for non-compliance, the European Union has an outsized influence on international tech companies well beyond its borders. In this episode, we chat about, firstly, just how the EU is trying to regulate AI, but also a lot of the political questions that come with the EU's outsized clout. The UK, for example, spent five years trying to leave the bloc, but for reasons we discuss in this podcast, its sovereignty to actually make regulatory decisions on AI separate from the EU is pretty limited. China, too, is developing its own detailed approach to regulating AI, a very different one which is influential in different ways and raising its own set of political and geopolitical questions.

I'm joined by the brilliant Huw Roberts, a researcher at the Saïd Business School. I'll let him introduce himself to start us off. Thanks for having me on and inviting me. So yeah, I'm Huw, I'm a research fellow in AI and sustainable development at the Saïd Business School at the University of Oxford. In my role here I've, despite the title, mainly been spending my time thinking about comparative AI policies: really, what's going on in different jurisdictions across the world, who's doing it well, who's doing it badly, and why. Before that, I worked in the UK government, getting a bit of hands-on experience in AI policymaking, working on things like the UK's national AI strategy, but also advising on things like biometrics policy. And I guess the final hat that's probably worth mentioning is that I'm currently doing a PhD at the University of Oxford's Internet Institute, where I'm trying to understand the global dynamics of AI governance, and particularly the role that China is playing in this space. So yeah, big questions that I'm trying to address. And you've studied in China, haven't you? You did a master's there? Yeah, so I've previously spent time studying Chinese philosophy of all things, focusing on early Chinese metaphysics, which I can't say is too helpful when it comes to policymaking, but interesting nonetheless. Fantastic. So we're going to be talking about AI, obviously, and regulation.

And this first question is a little bit controversial, because we could talk about it for hours, but I think it would be useful to have a working definition of AI to keep in mind. So could you attempt to give us one? Yeah, so as you've alluded to, it's a somewhat controversial question, particularly in the policy world, with the EU's definition, for instance, a lot broader than some others, such as the UK's. And I, unhelpfully, tend to follow a kind of "I know it when I see it" sort of understanding, because what concerns me about these technologies isn't really particular definitions of how we understand them, or setting broad boundaries for where we should stop understanding something as AI, but rather the impacts that these technologies have.

But I know that's unhelpful. So I'll try and give a kind of broad-brush definition of the sorts of things I'll be talking about when I'm thinking about AI. I think the UK's policy is actually quite helpful in this respect. It focuses on two aspects or features of technologies that would indicate they should fall into the broad bucket of AI: that they have the capacity to autonomously or semi-autonomously process data, and that this provides the systems with a degree of adaptiveness over time. So what they do might be more unpredictable than you just writing a line of code saying "I want X or Y".
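
To make that "adaptiveness" point concrete, here's a minimal illustrative sketch in Python (the spam-filter framing and all numbers are invented for the example, not anything from the episode): a hard-coded rule does exactly what the programmer wrote, whereas a rule derived from data shifts when the data shifts.

```python
# Illustrative only: contrasting a hard-coded rule with a data-driven one.
# The "spam score" feature and the numbers are invented for the example.

def hard_coded_filter(spam_score: float) -> bool:
    # Behaviour is fixed by the programmer: "I want X or Y".
    return spam_score > 0.8

def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    # A trivially simple "learning" step: pick the threshold that
    # best separates the labelled examples the system has seen.
    candidates = sorted(score for score, _ in examples)
    def errors(t: float) -> int:
        return sum((score > t) != is_spam for score, is_spam in examples)
    return min(candidates, key=errors)

# The learned behaviour depends on the data, not on an explicit rule,
# so it adapts (and can drift) as the data changes.
data = [(0.2, False), (0.4, False), (0.55, True), (0.9, True)]
threshold = learn_threshold(data)
print(hard_coded_filter(0.6))   # False, always
print(0.6 > threshold)          # depends on what the data looked like
```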

So from these features, the types of things an AI system could do are make predictions about what might happen, make classifications, so for instance a facial recognition system, or finally generate content, and here I'm thinking of things such as deepfakes. So I hope that can provide a sort of working understanding, even if it's not as tight as perhaps other people would go for. That sounds really useful to me, because it captures just how widely applicable this technology, or really this set of technologies, is. And just one final kind of motivating question here: why is the regulation specifically of AI important? Yeah, so I guess there are a few angles to take here.

I think the first is just emphasising how transformative and ubiquitous a set of technologies AI is. These systems are being applied everywhere, and they're developing quite quickly. Because of that, we need something to address the sorts of changes the systems are bringing about. And here, more specifically, what I'm thinking about is the new challenges these technologies could raise, or the exacerbation of existing challenges that the integration of these systems could lead to.

So first, regarding new challenges, just two examples of the types of issues that could be caused. The first is the "black box", in inverted commas, nature of these systems, i.e. with a really complicated system we don't always understand how it's working or why it's making the decisions it is making. And this is quite troubling for two reasons, I guess: first, if we're unable to uncover whether a decision is problematic; and secondly, if we do discover it's problematic, understanding why it's problematic is tricky.

And the second kind of new challenge that AI systems raise is accountability. AI systems, as mentioned in my initial definition, have this ability to autonomously or semi-autonomously process data and adapt, and because of this they are, in a sense, doing their own thing. But then it's a genuine philosophical question whether the system itself should be held accountable, or whether it's someone who has fed into that system in some way: is it the programmer who should be responsible? Is it the person who's deploying the system? Is it down to the underlying datasets that have been used? And so on and so on. There aren't necessarily clear-cut answers to this; it will be very context-dependent.

And then in terms of exacerbating existing challenges, one of the most obvious that comes to mind is bias. Obviously we're a horribly biased society, and that's an underlying fact, unfortunately. But what these systems can do is really exacerbate and standardise the types of biases we see in society. So here, just to give you an example, one of the most hard-hitting for me, as always, is Amazon trying to develop a recruitment algorithm that sifted through CVs. Based on the data it was given, it started systematically discriminating against women, penalising anyone who had been to a traditionally women's college.

So because of this, it's not just one manager who is sexist; rather, it's a standardised problem where everyone who had that college on their CV was sifted out. And so what this means for regulation is that we need to ensure the legal instruments we have are suitably robust for addressing these challenges, and really provide clear guidance to the people who are using these systems. Because whilst in my academic bubble it's quite easy to point these things out and discuss them, if you're on the ground developing these systems quickly and integrating them, you don't have the luxury of taking the time to think about it. So hopefully regulation can help address some of these challenges. Thank you, that was a really clear explanation.
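
As a purely hypothetical sketch of the mechanism Huw describes (toy data and field names invented here, not Amazon's actual system): a model that simply imitates biased historical decisions learns to penalise a proxy feature, and then applies that penalty uniformly to every CV it sifts.

```python
# Toy illustration of proxy discrimination: not a real recruitment system.
# Historical decisions (feature: attended a women's college, label: hired)
# reflect past human bias, and a naive model learns to reproduce it.
from collections import defaultdict

history = [
    {"womens_college": True,  "hired": False},
    {"womens_college": True,  "hired": False},
    {"womens_college": True,  "hired": True},
    {"womens_college": False, "hired": True},
    {"womens_college": False, "hired": True},
    {"womens_college": False, "hired": False},
]

# "Training": estimate the historical hire rate for each feature value.
counts = defaultdict(lambda: [0, 0])          # value -> [hired, total]
for row in history:
    counts[row["womens_college"]][0] += row["hired"]
    counts[row["womens_college"]][1] += 1
hire_rate = {value: hired / total for value, (hired, total) in counts.items()}

def sift(cv: dict) -> bool:
    # The learned rule: advance CVs whose feature value had a higher
    # historical hire rate. The bias is now standardised, rather than
    # being one manager's decision.
    return hire_rate[cv["womens_college"]] >= 0.5

print(sift({"womens_college": True}))   # False: systematically sifted out
print(sift({"womens_college": False}))  # True
```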

So we're going to compare a couple of different approaches that different jurisdictions have taken to deal with some of these challenges. Let's start with the EU, because in many ways it's perhaps the most influential; we might get to that in a minute. But how does the EU, just broadly speaking, approach these questions of regulating AI, and what's its priority, if anything? Yes, I definitely agree with your assertion that the EU is the reference point when it comes to AI governance and regulation. One of the reasons is that it was quite an early mover in this space: it's been around five years since it started seriously considering AI governance, and early initiatives really focused on things like guidance and principles, but more recently this has turned into hard regulatory measures, which I'll talk about in a second. But I guess, like many EU regulations, what's really at the heart of this effort is ensuring that the fundamental rights of citizens are protected without unnecessarily constraining innovation and industry within the EU. I would say, though, that compared to some other countries' approaches, which we'll talk about later in this chat, the EU is very focused on individual rights.

So in terms of how it's actually thinking about regulating these technologies, the most centralised, or most evident, measure is the draft AI Act. This was initially published by the European Commission last year, and it's a risk-based framework that seeks to categorise different types of AI system and, based on that, give them different regulatory requirements. There are four tiers of risk within this framework. The first, the highest level of risk, is unacceptable risk: these are things that are seen as completely against EU values and a threat to fundamental rights, and social credit scoring is one of the things flagged here. Then we move down a level to high-risk systems, and these are things like AI systems used as safety components in products.

These systems are subject to a variety of regulatory restrictions, with things like conformity assessments, which are basically documents companies have to complete to make sure their systems are doing what the regulation wants them to do. A level down, you've got systems with transparency risks. Here it's things like chatbots, and what this level of the regulation hopes to do is make sure that people aren't being misled by the systems, for instance. And finally there's minimal or no risk: there are no specific regulatory requirements for these types of systems, but they're still encouraged to follow best practice.
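
A rough, non-authoritative sketch of that four-tier logic, with the example use cases and obligations paraphrased from public summaries of the draft Act rather than quoted from its legal text:

```python
# Simplified, illustrative mapping of the draft EU AI Act's risk tiers.
# The categories and obligations below are paraphrased examples only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social credit scoring)"
    HIGH = "conformity assessment, documentation, human oversight"
    TRANSPARENCY = "disclosure duties (e.g. tell users they face a chatbot)"
    MINIMAL = "no specific requirements; best practice encouraged"

EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI safety component in a regulated product": RiskTier.HIGH,
    "CV-sifting / recruitment system": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Unknown use cases default to MINIMAL here purely for the demo.
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations(case))
```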

What's important to flag about this EU regulation is that, while it was initially published by the Commission last year, it's still a draft, and this draft is being debated and discussed within the other two key EU institutions in the legislative process, the Parliament and the Council. Once these bodies come to an agreement on the final text, it will go into law, and this isn't likely to happen until late 2023 or perhaps even 2024. So there's still some time before this materialises, but we have quite a good idea of the types of things the EU is thinking about. It's funny how the EU is such a huge machine, and in many ways very slow, but was also one of the quickest on this. You said that the priority is individual rights; could you maybe give us an example of where these AI technologies might conflict with that?

Sure, so I guess one of the big debates has been around remote biometric identification. That's long jargon for things like facial recognition systems used in public spaces, and it's obviously a controversial topic: on the one hand, using these systems can improve CCTV and, in some ways, improve the safety of society; on the other hand, they pose a real threat to the kind of privacy that citizens in the EU have come to expect and are protected by through things like the GDPR. So without clear guidance on how these systems should be used, they could end up posing a threat, as in that case.

And you mentioned how the EU is very influential globally. Could you talk a little bit about the effect EU policy has outside its borders? People talk a lot about this "Brussels effect". Yeah, of course. Before jumping in, it's probably important to explain what the Brussels effect is, at least at a high level, and then discuss how this might play out with the AI Act down the line once it comes into force. In essence, the Brussels effect is the idea that the market size and regulatory capacity of the EU mean that some of its legislative or regulatory measures will be externalised beyond its borders. The reason for this is that companies don't want to follow different measures in every jurisdiction, because that's costly, confusing, time-consuming and risky, particularly given that somewhere like the EU can end up fining you quite a significant amount of money if you don't abide by the regulatory measures in place.

So because of this, the Brussels effect is the idea that regulatory measures enacted in the EU will, in some cases, end up being followed by companies, and perhaps even governments, outside the European Union. To give probably the clearest example of this, the General Data Protection Regulation, which I mentioned earlier, was an EU initiative designed to protect the privacy of citizens within Europe. As an example, whenever you have to give your consent to have your data processed, that's on account of the GDPR, and it's a measure designed to ensure that you're empowered to make meaningful decisions about how your data is used.

So obviously, for big platforms, websites, companies and so on, trying to localise this data protection just to Europe would be a complete pain. So if you look around the world at different companies, many of them have simply started following the best practices laid out in the GDPR. Similarly, many governments have actually introduced legislative measures that are similar to the GDPR. Just one example is China's own privacy law, which came into force, I'm pretty sure, last year, and it has many of the same stipulations as the GDPR.

Returning to AI policy in particular, the degree to which the AI Act will lead to something like the GDPR's Brussels effect is a bit contested, but most people accept that the AI Act will have at least some international influence beyond the EU's borders. To give just one really tangible example: the EU AI Act stipulates that the outputs of AI systems developed and deployed outside the EU, if used in the EU, still have to abide by the Act. That was a complete mouthful, but I hope the essence of it was clear: basically, even if you're developing a system outside the EU and deploying it outside the EU, if its output is used in the EU then it still has to have followed the EU regulations. Similarly, if you're a company outside the EU exporting a system to the EU, you'll have to abide by the measures outlined in the AI Act.
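
One crude way to restate that scope rule as a decision check (my paraphrase of the draft Act's territorial provisions, not legal advice, and the function is invented for illustration):

```python
# Illustrative paraphrase of the draft AI Act's territorial scope:
# the Act is engaged by where the system is placed on the market or
# where its output is used, not only by where the provider sits.

def ai_act_in_scope(placed_on_eu_market: bool,
                    user_located_in_eu: bool,
                    output_used_in_eu: bool) -> bool:
    # Any one of these hooks is enough to engage the draft Act.
    return placed_on_eu_market or user_located_in_eu or output_used_in_eu

# A system built and deployed outside the EU, whose output is
# nevertheless used inside the EU, still falls in scope.
print(ai_act_in_scope(placed_on_eu_market=False,
                      user_located_in_eu=False,
                      output_used_in_eu=True))   # True
```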

I guess one of the areas that hasn't really been talked about that much, but which I find particularly interesting, probably on account of being British if nothing else, is the potential impact of the AI Act in Northern Ireland. With all the mess going on with the Northern Ireland Protocol and trying to find an agreement on how to avoid a hard border on the island of Ireland, Northern Ireland has, in short, to abide by EU product regulations. And the AI Act focuses on regulating AI as a product, so it's quite uncertain how this will play out over the next couple of years once the AI Act does come into force. But we are in a theoretical situation where AI systems in England, Wales and Scotland may not be able to be exported to Northern Ireland on account of the AI Act. In theory, you would hope there are political dialogues that will prevent this, but certainly it's a risk, and it's something that was even flagged in one of the House of Commons committees.

Do you think this raises any alarm bells for countries outside the EU, in terms of sovereignty and their ability to actually effectively regulate AI? Yeah, so I think it's a really interesting question, right, and it's an extraterritorial influence that's quite different to those asserted elsewhere, or from elsewhere, in that it's innately a measure designed to protect fundamental rights, but it's fundamental rights as conceptualised within the EU, or at least by EU institutions.

So if fundamental rights aren't conceptualised in the same way, then this certainly poses a big risk, and I suspect the UK in particular, with all its efforts and endeavours to break from EU regulation and law, would be particularly unhappy if, in two or three years' time, it found that the companies based here are ignoring the regulatory initiatives going on domestically and are ultimately following EU best practice, which I don't think is out of the question as a future that could materialise. So on that question of the UK: how does the UK's approach to AI regulation differ? Yeah, so I think the EU and UK in many ways represent different ends of the spectrum in terms of how AI can and should be regulated. As I was mentioning a minute ago, the EU has taken a really horizontal regulatory approach: it's introducing one overarching legislative measure, and this will cover all sectors and all uses of AI within the bloc, and potentially outside the bloc.

In contrast, the UK has said that it wants more of a context-focused approach to AI regulation and governance. What's meant by this is that the UK government thinks a cross-cutting approach like the EU's isn't sufficient for understanding the sorts of contextual harms and impacts an AI system might cause. To give an example, the same image recognition system being used for a medical scan or for detecting a water bottle will come with very different levels of risk, so the hope is that by introducing a more context-focused approach, the UK will be able to take a more nuanced and flexible approach to governance.

What this looks like in practice is, rather than having one overarching regulatory measure, relying on the UK's sectoral regulators and different ministries to deal with the specific harms within their sectors. So, for instance, the Information Commissioner's Office, the Competition and Markets Authority or Ofcom all looking at what their specific powers and remits are, and from that, addressing what the harms of AI are. The UK has recognised, to a degree, that this could be slightly chaotic, and is trying to deal with this through things like cross-cutting principles that will underlie how the guidance introduced by different regulators should be formulated, and also by emphasising that the UK should follow more of a pro-innovation approach and that any measure by regulators should not focus on hypothetical risks but only on, in inverted commas, "real" risks.

So I guess when it comes to comparing the UK's and the EU's approaches, we can see pros and cons to both, really. Certainly the contextual approach and the focus on flexibility within the UK's initiatives are noteworthy and beneficial, and they could lead to something that is far more flexible and robust to change over time; whereas, depending on how the AI Act is finalised, there could be a degree of rigidity there in terms of whether it is able to deal with the fast-paced change that happens in the field of AI.

But at the same time, the cross-cutting nature of the EU's measures and its clear legislative stipulations mean that it's black and white, in a way; that's obviously an exaggeration, but it's quite clear what's going on. Whereas having multiple different regulators try to deal with these technologies could really have two negative impacts. The first is regulatory overlap or regulatory gaps: two regulators saying different things, or two regulators not seeing a specific area as their remit and thus ignoring it. The second is resource constraints: government salaries aren't the best, so having the degree of expertise needed to regulate these technologies within each UK regulator could be quite tricky in practice. These are issues the UK would have to address if it were to really enact a successful approach, I think, whilst the EU still needs to show that there is flexibility within its measures, so that they can be robust to change over time.

That's really interesting, how different they are. Just to come back to something you said earlier about the possibility that companies would follow EU regulation rather than UK regulation: does that depend on whether companies follow the lowest common denominator or the most stringent regulation? Does it assume that the UK would be less stringent in its regulation? Yeah, exactly. Something I should have mentioned is that a lot of the UK's post-Brexit narrative has been about making the most of this new regulatory freedom and offering a more permissive regulatory environment for companies to operate in. But the big risk here, as you alluded to, is that if the UK follows a pro-innovation approach and comes with fewer specific restrictions, companies might just turn to the EU's measures anyway, because they want to export their tech into this larger market, which has far more financial pull than the UK's. Okay, so we're going to shift to something slightly different now. You mentioned earlier different conceptions of fundamental rights, and probably the best example of that is the way AI is regulated in China, which is something you've researched. So what's the Chinese government's approach to this, and what is its priority? Yeah, so I guess there are two questions there, the approach and the priority. When it comes to the approach, earlier I laid out the UK and the EU on two ends of a spectrum of how centralised versus decentralised the regulatory approach being taken is, and whilst in the UK it's sort of a free-for-all of every regulator trying to introduce measures, in China there are a handful of regulators at the moment who are focusing on AI, and I think it will be this select few who really take the lead in regulating AI going forward.

I think the most notable here is the Cyberspace Administration of China. This is the regulatory body that, in theory, deals with online uses of algorithms, AI and tech in general, and what it's been really active in doing is regulating specific types of AI, rather than AI as a broad technology more generally, as we've seen in the UK's and the EU's approaches. So, for instance, one of the initiatives it published last year, or even earlier this year, I'm losing track of the dates, is a regulatory measure focused on recommender systems. A recommender system in general could be something like your TikTok algorithm deciding which video comes next, or which product you're being advertised on the web, and so on. These were, I'd say, quite strict regulatory measures, and they introduced a number of features that are similar in some ways to, and distinct in others from, the EU's measures. One of the most notable is a kind of public database of recommender systems, which will be regularly updated and which companies have to file with the Chinese regulator to show transparently what their algorithms are doing. And whilst the publicly available data, perhaps unsurprisingly, is quite high-level and doesn't really say much, a few researchers have pointed to the information that's actually sent to regulators being slightly more substantive, so there are quite a lot of efforts to check what's going on within these companies. Other quite interesting initiatives, related to the recommender system example, are being able to opt out of recommendations, and in some cases the proposal to be able to alter or reject specific parameters used for the recommendation. So here, "I'm a white man" might be one of the parameters being used within the system. Exactly whether that will materialise, and how, is an open question, because it's a very difficult thing to do, but it's interesting seeing this sort of regulatory innovation coming out of China too.
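
To make those provisions a little more concrete, here's a hypothetical sketch (function and field names invented; this is not the CAC's actual filing schema or any real platform's code) of a recommender that honours an opt-out and user-rejected parameters:

```python
# Hypothetical illustration of two features the recommendation-algorithm
# rules describe: opting out of personalisation, and excluding specific
# user tags ("parameters") from being used to target recommendations.
from dataclasses import dataclass, field

@dataclass
class UserPrefs:
    personalisation_on: bool = True
    excluded_tags: set[str] = field(default_factory=set)  # e.g. {"gender"}

CATALOGUE = ["news", "sports", "gadgets", "beauty"]

def recommend(user_tags: dict[str, str], prefs: UserPrefs) -> list[str]:
    if not prefs.personalisation_on:
        # Opt-out: fall back to a non-personalised ranking.
        return CATALOGUE
    # Drop any tag the user has rejected before ranking.
    usable = {k: v for k, v in user_tags.items() if k not in prefs.excluded_tags}
    # Toy "ranking": put items matching the remaining tags first.
    matches = [item for item in CATALOGUE if item in usable.values()]
    return matches + [item for item in CATALOGUE if item not in matches]

prefs = UserPrefs(excluded_tags={"gender"})
print(recommend({"gender": "male", "interest": "sports"}, prefs))
# -> ['sports', 'news', 'gadgets', 'beauty']; the "gender" tag was never used.
```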

Moving to the second point, which is really what characterises the Chinese approach and how that relates to rights: what we've talked about quite a lot so far is an individual-centric approach, focusing on individual rights such as privacy. In China, whilst the fundamental rights of individuals are certainly foregrounded, we also see quite a heavy emphasis on the rights of groups of people, focusing more on how society, or groups of individuals, may be impacted by these systems. When it comes to thinking about this in practice, I guess there's more leniency towards practices that are seen as being of societal benefit. As one example, perhaps a controversial one: when it comes to the degree of false positives, i.e. detecting someone as something when they're not, there's more permissibility of this. For instance, in the example of using a facial recognition system, the police or a local government would be happier to accept more false positives from the system than, say, in the more individual-centric model, where there's less scope for that. So they'd rather arrest the wrong person than let them go, on the assumption that their systems aren't that accurate? Yeah, it sort of seems like that: the more important thing is getting the maximum number of criminals, or of the people they want, out of these systems, rather than focusing on getting the highest percentage of right calls from the use of the systems, which I think would certainly be more heavily emphasised in the UK, for instance.
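
A small worked example of that trade-off, with invented match scores rather than real biometric data: lowering the match threshold catches more of the people being looked for, at the cost of flagging more innocent people, and different regulatory cultures tolerate different points on that curve.

```python
# Toy illustration of the false-positive trade-off in a matching system.
# Scores are invented; higher means the system thinks it's the same person.
watchlist_match_scores = [0.91, 0.72, 0.65]             # genuine matches
innocent_match_scores = [0.65, 0.40, 0.35, 0.30, 0.20]  # non-matches

def outcomes(threshold: float) -> tuple[int, int]:
    caught = sum(s >= threshold for s in watchlist_match_scores)
    wrongly_flagged = sum(s >= threshold for s in innocent_match_scores)
    return caught, wrongly_flagged

for threshold in (0.9, 0.7, 0.6):
    caught, wrongly_flagged = outcomes(threshold)
    print(f"threshold {threshold}: {caught}/3 matches caught, "
          f"{wrongly_flagged} innocent people flagged")
# threshold 0.9: 1/3 matches caught, 0 innocent people flagged
# threshold 0.7: 2/3 matches caught, 0 innocent people flagged
# threshold 0.6: 3/3 matches caught, 1 innocent person flagged
```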

And finally, how does that compare to the EU, both in regulatory terms, which you've talked about a little, but also in terms of the ethical principles underpinning this? Yeah, so I think it's interesting. I think we are still in the very early days of these regulatory initiatives being introduced, so it's hard to say anything too concrete at the moment, but I guess there are certain instances we can already see distinguishing the two. One was this individual versus societal focus, and I guess another is carve-outs for government. In China, a lot of these regulations are extremely harsh towards companies, but less so towards initiatives coming from central government. And more generally, beyond distinguishing between a singular regulatory mechanism versus multiple regulatory mechanisms, one of the key things going forward is to not see China as a poor, weak regulatory environment that isn't considering AI ethics at all, but rather to see China as a jurisdiction that's focusing on AI ethics in its own way, and whose way of regulating companies is based on the unique political and legal structure within the country. It's been harsher in many ways as a regulator than, well, certainly the UK, and definitely the US as well, but the ways in which it does this are often quite different to the mechanisms available in other jurisdictions. To make this tangible, I guess my clearest example would be tech companies that have done wrong in the eyes of regulators publicly posting admissions of wrongdoing, stating what they've done wrong, how they will reconcile this, and their policy going forward. So the sorts of informal regulatory influence that shape Chinese tech policy going forward are something else that's interesting and, I think, should be watched within this comparative frame.

And sorry, I know I've already said "finally", but China is in many ways a sheltered market. You talked about tech companies, and especially the big social media companies, but tech companies in general have succeeded there partly because Beijing has wanted them to and has kept out foreign competition. I just wonder whether China would like to have some kind of Brussels effect of its own, whether that's focused within its own borders or whether it would like to be the one defining the rules of the game internationally as well? Yeah, so I think it's an interesting question, and one that I hope my DPhil will answer one day. But there's been some research on this: on the one hand you've got the Brussels effect, and some scholars have started to look at what is called, in inverted commas, the "Beijing effect". This is different from the Brussels effect because it's not focused on regulation, because China in many ways doesn't have the strongest regulatory system, and often what is regulated for on paper might not come to fruition in practice, because there aren't sufficient formal mechanisms to support it. What the Beijing effect refers to is the export of infrastructures, particularly along the Belt and Road Initiative, into many Global South countries and parts of Europe as well, and how the design of the technologies being exported will inherently have Chinese governance norms embedded within them. So again, to try and be slightly more tangible with this: if you look at the export of communications infrastructure, surveillance infrastructure and so on, a lot of it focuses on providing countries in the Global South with a greater degree of data sovereignty, so control over their domestic data through using these infrastructures. But what scholarship has suggested is that this is only partially real data sovereignty, because you're still working within the confines of the infrastructures that the Chinese government has exported. That's certainly a conversation for a different date, but I think one of the really interesting dynamics going forward is looking at how, for instance, technical standards will play out in these areas of international divergence or extraterritoriality.

So the details of the EU AI Act will largely be set out in technical standards. On the one hand, you have these technical standards being exported from the EU, for things like how a system is designed, what counts as bias, and so on; on the other, you have the actual infrastructures often being exported from China, which come with their own technical standards. How these two trends will be reconciled in the future is a big open question, and one that I don't think anyone has particularly good answers for just yet. Fantastic, thank you so much for speaking to us today, Huw. Oh no, it was a pleasure, thank you for having me. Thanks for listening, and if you haven't already, remember to subscribe so you stay up to date with all our new content. We've got some really exciting stuff coming out in the next couple of months. See you next time.