It's innately a measure designed to protect fundamental rights, but it's fundamental rights as conceptualised within the EU, or at least by EU institutions. With its size and heavy penalties for non-compliance, the European Union has an outsized influence on international tech companies well beyond its borders. In this episode, we chat about, firstly, just how the EU is trying to regulate AI, but also a lot of the political questions that come with the EU's outsized clout. The UK, for example, spent five years trying to leave the bloc, but for reasons we discuss in this podcast, its sovereignty to actually make regulatory decisions on AI separate to the EU is pretty limited. China, too, is developing its own detailed approach to regulating AI, a very different one, which is influential in different ways and raising its own set of political and geopolitical questions.
I'm joined by the brilliant Hugh Roberts, a researcher at the Saïd Business School. I'll let him introduce himself to start us off.

Thanks for having me on and inviting me. So yeah, I'm Hugh, I'm a research fellow in AI and sustainable development at the Saïd Business School at the University of Oxford. In my role here, despite the title, I've mainly been spending my time thinking about comparative AI policy: really, what's going on in different jurisdictions across the world, who's doing it well, who's doing it badly, and why. Before that, I worked in the UK government, getting a bit of hands-on experience in AI policymaking, working on things like the UK's national AI strategy, but also advising on things like biometrics policy. And I guess the final hat that's probably worth mentioning is that I'm currently doing a PhD at the Oxford Internet Institute, where I'm trying to understand the global dynamics of AI governance, and particularly the role that China is playing in this space. So yeah, big questions that I'm trying to address.

And you've studied in China, haven't you? You did a master's there?

Yeah, so I've previously spent time studying Chinese philosophy of all things, focusing on early Chinese metaphysics, which I can't say is too helpful when it comes to policymaking, but interesting nonetheless.

Fantastic. So we're going to be talking about AI, obviously, and regulation.
And this first question is a little bit controversial, because we could talk about this for hours, but I think it would be useful to have a working definition of AI to keep in mind. So could you attempt to give us one?

Yeah, so as you've alluded to, it's a somewhat controversial question, particularly in the policy world, with the EU's definition, for instance, being a lot broader than some others, such as the UK's. And I, unhelpfully, tend to follow a kind of 'I know it when I see it' sort of understanding, because what concerns me about these technologies isn't really particular definitions of how we understand them, or setting broad boundaries for where we should stop counting something as AI, but rather the impacts that these technologies have.
But I know that's unhelpful. So I'll try and give a broad-brush definition of the sorts of things I'll be talking about when I'm thinking about AI. I think the UK's policy is actually quite helpful in this respect. It focuses on two features of technologies that would indicate they should fall into the broad bucket of AI: that they have the capacity to autonomously or semi-autonomously process data, and that this provides the systems with a degree of adaptiveness over time. So what they do might be more unpredictable than you just writing a line of code saying 'I want X or Y'.
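To make that adaptiveness point a little more concrete, here is a minimal sketch, with entirely invented numbers: a hard-coded rule behaves identically forever, whilst even a trivially "learned" rule changes its behaviour as the data it sees changes.

```python
def hard_coded_is_spam(num_links: int) -> bool:
    # Fully determined at authoring time: the "I want X or Y" style of code.
    return num_links > 5

def learn_threshold(spam_examples: list[int], ham_examples: list[int]) -> float:
    # A trivially "learned" rule: place the decision boundary midway between
    # the average link count of spam and non-spam training examples.
    spam_mean = sum(spam_examples) / len(spam_examples)
    ham_mean = sum(ham_examples) / len(ham_examples)
    return (spam_mean + ham_mean) / 2

print(hard_coded_is_spam(7))  # always True for 7 links, regardless of any data

# The same learning code yields different behaviour on different data:
print(learn_threshold(spam_examples=[8, 9, 12], ham_examples=[0, 1, 2]))    # ~5.33
print(learn_threshold(spam_examples=[20, 25, 30], ham_examples=[3, 4, 5]))  # 14.5
```

The hard-coded rule's behaviour is fixed when it is written; the learned threshold depends entirely on the examples it was given, which is what makes its behaviour harder to predict from the code alone.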
So from these features, the type of stuff an AI system could do is make predictions about what might happen, make classifications, for instance a facial recognition system, or finally generate content, and here I'm thinking of things such as deepfakes. So I hope that can provide a sort of working understanding, even if it's not as tight as perhaps other people would go for.

That sounds really useful to me, because it captures just how widely applicable this technology, or really this set of technologies, is. And just one final motivating question here: why is the regulation specifically of AI important?

Yeah, so I guess there's a few angles to take here.
I think the first is just emphasising how much of a transformative and ubiquitous set of technologies AI is. These systems are being applied everywhere, and they're developing quite quickly. Because of that, we need something to address the sorts of changes that the systems are bringing about. And here, more specifically, what I'm thinking about is the new challenges these technologies could raise, or the exacerbation of existing challenges that the integration of these systems could lead to.
So first, regarding new challenges, two examples of the types of issues that could be caused. The first is the 'black box', inverted commas, nature of these systems, i.e. at times, with a really complicated system, we don't really understand how it's working or why it's making the decisions that it is making. And this is quite troubling for two reasons, I guess: first, we may be unable to uncover that a decision is problematic at all; and secondly, even if we do discover it's problematic, understanding why it's problematic is tricky.
And the second kind of new challenge that AI systems raise is really this one of accountability. So AI systems, as mentioned in my initial definition, have this ability to autonomously or semi-autonomously process data and adapt. And so because of this, they are sort of doing their own thing. But then it's a definite philosophical question whether the system itself should be held accountable, or whether it's someone who's fed into that system in some way. I.e. is it the programmer who should be responsible? Is it the person who's deploying the system? Is it down to the underlying datasets that have been used? And so on and so on. And there aren't necessarily clear-cut answers to this; it will be very context dependent.
And then in terms of exacerbating existing challenges, one of the most obvious that comes to mind is bias. Obviously we're a horribly biased society, and that's an underlying fact, unfortunately. But what these systems can do is really exacerbate and standardise the types of biases we see in society. So here, just to give you an example, one of the most hard-hitting for me is, as always, Amazon trying to develop a recruitment algorithm that sifted through CVs. Based on the data it was given, it started systematically discriminating against women, for instance against anyone whose CV showed they had been to a traditionally women's college.
So because of this, it's not just one manager who is sexist; rather, it's this standardised problem where everyone who had that college on their CV was sifted out. And so what this means for regulation is that we need to really ensure the legal instruments we have are suitably robust for addressing these challenges, and really provide clear guidance to the people who are using these systems. Because whilst in my academic bubble it's quite easy to point these issues out and discuss them, if you're on the ground developing and integrating these systems quickly, you don't have the luxury of taking the time to think about it. So hopefully regulation can help address some of these challenges.

Thank you. That was a really clear explanation.
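Before moving on, a toy sketch of the mechanism behind the recruitment example: a model that naively learns from biased historical decisions reproduces, and standardises, that bias. The data and feature name here are entirely invented; this is not Amazon's actual system.

```python
# Toy illustration of how learning from biased historical hiring decisions
# standardises the bias. All data and feature names are invented.

# Historical CVs: (has_womens_college_on_cv, was_hired_by_biased_managers)
history = [
    (True, False), (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, False), (False, True),
]

def learned_hire_rate(feature_value: bool) -> float:
    # A naive "model": estimate P(hired | feature) directly from the history.
    matching = [hired for feat, hired in history if feat == feature_value]
    return sum(matching) / len(matching)

print(learned_hire_rate(True))   # 0.25 -> these CVs get systematically down-ranked
print(learned_hire_rate(False))  # 0.75 -> one era's biased decisions become everyone's sieve
```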
So we're going to compare a couple of different approaches to how different jurisdictions have tried to deal with some of these challenges. Let's start with the EU, because in many ways it's perhaps the most influential; we might get to that in a minute. But how does the EU, just broadly speaking, approach these questions of regulating AI, and what's its priority, if anything?

Yes, I definitely agree with your assertion that the EU is kind of the reference point when it comes to AI governance and regulation. And one of the reasons is that it was quite an early mover in this space. It's been around five years since it started seriously considering AI governance, and early initiatives really focused on things like guidance and principles. But more recently this has turned into hard regulatory measures, which I'll talk about in a second. I guess, like many EU regulations, what's really at the heart of this effort is to ensure that the fundamental rights of citizens are protected without unnecessarily constraining innovation and industry within the EU. But I would say that compared to some other approaches, as we'll talk about later on in this chat, the EU is very focused on individual rights.
In terms of how it's actually thinking about regulating these technologies, the most centralised or most evident measure is the draft AI Act. This was initially published by the European Commission last year, and it's a risk-based framework that seeks to categorise different types of AI system and, based on that, give them different regulatory requirements. There are four tiers of risk within this framework. The first, the highest level, is unacceptable risk. These are things seen as completely against EU values and a threat to fundamental rights; social credit scoring is one of the things flagged here. Then we move down a level to high-risk systems, and these are things like AI systems used as safety components in products.
These systems are subject to a variety of regulatory restrictions, with things like conformity assessments, which are basically documents that companies have to complete to make sure their systems are doing what the regulation wants them to do. A level down, you've got systems with transparency risks. Here it's things like chatbots, and what this level of the regulation hopes to do is really make sure people aren't being misled by these systems, for instance. And finally there's minimal or no risk. There are no specific regulatory requirements for these types of systems, but they're still encouraged to follow best practice.
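As a rough sketch of the tiered structure just described, the four risk levels can be modelled as a simple lookup. The tier descriptions below paraphrase the conversation, and the example system categories and obligation texts are simplified illustrations, not the draft Act's legal wording.

```python
# Simplified model of the draft EU AI Act's four risk tiers as described
# above. Obligations are paraphrased for illustration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (against EU values / fundamental rights)"
    HIGH = "permitted subject to requirements such as conformity assessments"
    TRANSPARENCY = "must ensure users are not misled, e.g. disclose it is an AI"
    MINIMAL = "no specific requirements; best practice encouraged"

# Indicative mapping, following the examples given in the conversation
# (social scoring, safety components, chatbots). Not an exhaustive list.
EXAMPLE_TIERS = {
    "social_credit_scoring": RiskTier.UNACCEPTABLE,
    "ai_safety_component_in_product": RiskTier.HIGH,
    "chatbot": RiskTier.TRANSPARENCY,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(system_type: str) -> str:
    # Unlisted systems default to the minimal tier in this toy model;
    # the real Act's categorisation is far more detailed.
    return EXAMPLE_TIERS.get(system_type, RiskTier.MINIMAL).value

print(obligations_for("chatbot"))
```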
What's important to flag about this EU regulation is that, although it was initially published by the Commission last year, it's still a draft, and this draft is being debated and discussed within the two other key EU bodies, the Parliament and the Council. Once these bodies all come to an agreement on the final text, it will go into law, and this isn't likely to happen until late 2023 or perhaps even 2024. So there's still some time before this materialises, but we have quite a good idea of the types of things the EU is thinking about.

It's funny how the EU is such a huge machine, and in many ways very slow, but was also one of the quickest on this. I just wonder, you said that the priority is individual rights. Could you maybe give us an example of where these AI technologies might conflict with that?
Sure. So I guess one of the big debates has been around remote biometric identification, which is long jargon for things like facial recognition systems used in public spaces. This is obviously a controversial topic because, on the one hand, using these systems can improve CCTV and in some ways improve the safety of society, but on the other hand, they pose a real threat to the kind of privacy that citizens in the EU have come to expect and are protected through things like the GDPR. So without clear guidance on how these systems should be used, they could end up posing a threat, such as in that case.
And you mentioned how the EU is very influential globally. Could you talk a little bit about the effect EU policy has outside its borders? People talk a lot about this Brussels effect.

Yeah, of course. So before jumping in, it's probably important to talk about what the Brussels effect is, at least at a high level, and then discuss how this might play out with the AI Act down the line, once it comes into force. In essence, the Brussels effect is the idea that the market size and the regulatory capacity of the EU mean that some of its legislative or regulatory measures will be externalised beyond the bloc. The reason for this is that companies don't want to follow different measures in every jurisdiction, because that's costly, confusing, time consuming and risky, particularly given that somewhere like the EU can fine you a quite significant amount of money if you don't abide by the regulatory measures in place.
So because of this, the Brussels effect is the idea that regulatory measures enacted in the EU will, in some cases, end up being followed by companies, and perhaps even governments, outside of the European Union. To give probably the clearest example of this, the General Data Protection Regulation, something I mentioned earlier, was an EU initiative designed to protect the privacy of citizens within Europe. Whenever you have to give your consent to have your data processed, this is on account of the GDPR, and it's a measure designed to ensure you're empowered to make meaningful decisions about how your data is used. So obviously, for big platforms, websites, companies, and so on,
trying to localise this data protection just to Europe would be a complete pain. So if you look around the world, many companies have just started following the best practices laid out in the GDPR. Similarly, many governments have actually introduced legislative measures that are similar to the GDPR. Just one example is China's own privacy law, which came into force, I'm pretty sure, last year, and it has many of the same stipulations as the GDPR.
Returning to AI policy in particular, the degree to which the AI Act will lead to something like the GDPR's Brussels effect is a bit contested, but most people accept that the AI Act will have at least some international influence beyond the EU's borders. To give just one really tangible example of this, the EU AI Act stipulates that outputs from AI systems developed outside of the EU, if applied in the EU, would still have to abide by the EU AI Act. That was a complete mouthful, but I hope the essence of it was clear: basically, even if you're developing a system outside of the EU and deploying the system outside of the EU, if the output of that system is used in the EU, then it still has to have followed the EU regulations. Similarly, if you're a company outside of the EU exporting a system to the EU, you'll have to abide by the measures outlined in the AI Act.
I guess one of the areas that hasn't really been talked about that much, but which I find particularly interesting, probably on account of being British if nothing else, is the potential impacts of the AI Act in Northern Ireland. With all the mess going on with the Northern Ireland Protocol, and trying to find an agreement on how to stop a hard border on the Irish border, Northern Ireland has, in short, to abide by EU product regulations. And the AI Act focuses on regulating AI as a product, so it's quite uncertain how this will play out in the next couple of years when the AI Act does come into force. We are in a theoretical situation where AI systems in England, Wales and Scotland may not be able to be exported to Northern Ireland on account of the AI Act. In theory, you would hope that political dialogue will prevent this, but certainly it's a risk, and it's something that was even flagged in one of the House of Commons committees.
Do you think this raises any alarm bells for countries outside the EU, in terms of sovereignty, in terms of their ability to actually effectively regulate AI?

Yeah, so I think it's a really interesting question, right? It's an extraterritorial influence that's quite different to those asserted from elsewhere, in that it's innately a measure designed to protect fundamental rights, but it's fundamental rights as conceptualised within the EU, or at least by EU institutions.
So if fundamental rights aren't conceptualised in the same way, then this certainly poses a big risk. And I suspect the UK in particular, with all its efforts and endeavours to break from EU regulation and law, would be particularly unhappy if, in two or three years' time, it found that all the companies based here were ignoring the regulatory initiatives going on domestically and ultimately following EU best practice, which I don't think is out of the question as a future that could materialise.

So on that question of the UK, how does the UK differ in its approach to AI regulation?

Yeah, so I think the EU and UK in many ways represent different ends of the spectrum in terms of how AI can and should be regulated. As I was mentioning a minute ago, the EU has taken this really horizontal regulatory approach: it's introducing one overarching legislative measure, and this will cover all sectors and all uses of AI within the bloc, and potentially outside the bloc.
In contrast, the UK has said that it wants more of a context-focused approach to AI regulation and governance. What's meant by this is that the UK government thinks a cross-cutting approach like the EU's isn't sufficient for understanding the sorts of contextual harms and impacts an AI system might cause. To give an example, the same image recognition system being used for a medical scan or for detecting a water bottle will come with very different levels of risk. So they hope that by introducing a more context-focused approach, they'll be able to have a more nuanced and flexible approach to governance.
What this looks like in practice is, rather than having one overarching regulatory measure, relying on the UK's sectoral regulators and different ministries to deal with the specific harms within their sectors: so, for instance, the Information Commissioner's Office, the Competition and Markets Authority, or Ofcom, all looking at what their specific powers and remits are, and from that addressing what the harms of AI are. The UK has recognised, to a degree, that this could be slightly chaotic, and is trying to deal with this through things like cross-cutting principles that will underlie how the guidance introduced by different regulators should be formulated, and also by emphasising that the UK should follow more of a pro-innovation approach, and that any measure by regulators should not focus on hypothetical risks but only on, inverted commas, 'real' risks.
So I guess when it comes to comparing the UK's and the EU's approaches, we can see pros and cons to both. Certainly the contextual approach and the focus on flexibility within the UK's initiatives are noteworthy and beneficial, and they could lead to something that is far more flexible and robust to change over time. And depending on how the AI Act is finalised, there could be a degree of rigidity there, in terms of whether it's able to deal with the fast-paced change that happens in the field of AI.
But at the same time, the cross-cutting nature of the EU's measures and its clear legislative stipulations mean that it's black and white in a way. That's obviously an exaggeration, but it's quite clear what's going on. Whereas having multiple different regulators try and deal with these technologies could really have two negative impacts. The first is regulatory overlap or regulatory gaps: two regulators saying different things, or two regulators not seeing a specific area as their remit and thus ignoring it. And the second is resource constraints: government salaries aren't the best, so having the degree of expertise needed to regulate these technologies within each UK regulator could be quite tricky in practice. These are issues the UK would have to address if it was to really enact a successful approach, I think, whilst the EU still needs to show that there is flexibility within its measures, so that it can be robust to change over time.
That's really interesting, how different they are. And just to come back to something you said earlier about the possibility that companies would follow EU regulation rather than UK regulation: I guess companies follow either the lowest common denominator or the most stringent regulation, so does that assume the UK would be less stringent in its regulation?

Yeah, exactly. Something I should have mentioned is that a lot of the UK's post-Brexit narrative has been about making the most of this new regulatory freedom and offering more of a permissive regulatory environment for companies to operate in. But the big risk here, as you alluded to, is that if the UK follows a pro-innovation approach and comes with fewer specific restrictions, companies might just turn to the EU's measures, because they want to export their tech into this larger market, which has far more financial pull than the UK's.

Okay, so we're going to shift to something slightly different now. You mentioned earlier different conceptions of fundamental rights, and probably the best example of that is the way AI is regulated in China, which is something you've researched. So what's the Chinese government's approach to this, and what is its priority?

Yeah, so I guess there's two questions there: the approach and the priority. When it comes to the approach, earlier I laid out the UK and the EU on two ends of this spectrum of how centralised versus decentralised a regulatory approach is. And whilst in the UK it's sort of a free-for-all, with every regulator trying to introduce measures, in China there are a handful of regulators at the moment who are focusing on AI, and I think it will be this select few who really take the lead in regulating AI going forward.
I think the most notable here is the Cyberspace Administration of China. This is the regulatory body that, in theory, deals with online uses of algorithms, AI, and tech in general. What they've been really active in doing is regulating specific types of AI, rather than AI as a broad technology more generally, as we've seen in the UK's and the EU's approaches. So for instance, one of the initiatives they published last year, or even earlier this year, I'm losing track of the dates, is a regulatory measure focused on recommender systems. A recommender system could be something like your TikTok algorithm deciding what video comes next, or which product you're being shown adverts for on the web, etc. These were, I'd say, quite strict regulatory measures, and introduced a number of features that are similar in some ways, and distinct in others, from the EU's measures.

One of the most notable is a public database of recommender systems that will be regularly updated, and which companies have to submit to the Chinese regulator to show transparently what their algorithms are doing. And whilst the publicly available data, perhaps unsurprisingly, is quite high level and doesn't really say much, a few researchers have pointed to the information that's actually sent to regulators being slightly more substantive. So there are quite a lot of efforts to check what's going on within these companies.

Another quite interesting initiative, related to the recommender system example, is being able to opt out of recommendations, and in some cases there's the proposal to be able to alter or reject specific parameters used for the recommendation. So here, 'I'm a white man' might be one of the parameters being used within the system. Exactly whether and how that will materialise is an open question, because it's a very difficult thing to do, but it's interesting seeing this sort of regulatory innovation coming out of China too.
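As a hypothetical sketch of what 'rejecting specific parameters' could look like mechanically, consider a scorer that simply ignores any feature a user has opted out of. The feature names and weights below are invented for illustration; they're not drawn from any real system or from the Chinese regulation's text.

```python
# Invented sketch of parameter opt-out in a recommender: scoring skips any
# feature the user has rejected. Feature names and weights are made up.

ITEM_FEATURES = {
    "video_a": {"demographic_match": 0.9, "watch_history": 0.2, "trending": 0.5},
    "video_b": {"demographic_match": 0.1, "watch_history": 0.8, "trending": 0.3},
}

def score(item: str, opted_out: frozenset[str]) -> float:
    # Sum feature contributions, skipping any parameter the user has rejected.
    return sum(v for k, v in ITEM_FEATURES[item].items() if k not in opted_out)

def recommend(opted_out: frozenset[str] = frozenset()) -> str:
    return max(ITEM_FEATURES, key=lambda item: score(item, opted_out))

print(recommend())  # "video_a": the demographic signal dominates
print(recommend(opted_out=frozenset({"demographic_match"})))  # "video_b": ranking changes
```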
Moving to the second point, which is really what characterises the Chinese approach and how that relates to rights. In the jurisdictions we've discussed so far, there's been quite a heavy focus on an individual-centric approach, so on individual rights such as privacy. Whilst in China the fundamental rights of individuals are certainly foregrounded, we also see quite a heavy emphasis on the rights of the people over persons, focusing more on how society or groups of individuals may be impacted by these systems. When it comes to thinking about this in practice, I guess there's more leniency for practices which are seen as being of societal benefit. As one example, perhaps a controversial one, when it comes to the degree of false positives, i.e. detecting someone as something when they're not, there's more permissibility of this. So in the example of using a facial recognition system, the police or a local government would be happier to accept more false positives from the system than in the more individual-centric model, where there's less scope for that.

So they'd kind of rather arrest the wrong person than let them go, on the assumption that their systems aren't that accurate?

Yeah, it sort of seems like that, and that the more important thing is getting the maximum number of criminals, or the people that they want, from the systems, rather than focusing on getting the highest percentage of right calls from the use of the systems, which I think would certainly be more heavily emphasised in the UK, for instance.
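The trade-off described here is, mechanically, a decision-threshold choice. A toy sketch with invented similarity scores shows how lowering a facial recognition match threshold flags more of the wanted people and more innocent people at the same time.

```python
# Toy illustration of the trade-off discussed above: the same system, with
# the same accuracy, can be tuned to accept more false positives (catch
# more targets) or fewer (protect individuals). All scores are invented.

# (similarity_score, is_actually_the_wanted_person)
matches = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.75, False), (0.70, False), (0.65, True), (0.60, False),
]

def confusion_at(threshold: float) -> tuple[int, int]:
    # Count true and false positives among faces flagged at this threshold.
    flagged = [truth for score, truth in matches if score >= threshold]
    true_positives = sum(flagged)
    false_positives = len(flagged) - true_positives
    return true_positives, false_positives

# A lower threshold flags more of the wanted people AND more innocent ones:
print(confusion_at(0.88))  # (2, 0): individual-centric, no wrongful flags
print(confusion_at(0.62))  # (4, 3): society-centric, all targets plus 3 false flags
```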
And how, just finally, does that compare to the EU, both regulatorily, you've talked about this a little bit, but also in terms of the ethical principles underpinning this?

Yeah, so I think it's interesting. I think we're still in the very early days of these regulatory initiatives being introduced, so it's hard to come up with anything too concrete at the moment, but I guess there are certain instances we can already see distinguishing the two. One was this individual versus societal focus, and I guess another is carve-outs for government. In China, a lot of these regulations are extremely harsh towards companies, whilst less so towards initiatives coming from central government.

And more generally, beyond distinguishing between the singular regulatory mechanism versus multiple regulatory mechanisms, one of the key things going forward is to not see China as a poor, weak regulatory environment that isn't considering AI ethics at all, but rather to see China as a jurisdiction that's focusing on AI ethics in its own way, where the way it regulates companies is based on the unique political and legal structure within the country. It's been harsher in many ways as a regulator than, well, certainly the UK and definitely the US as well, but the ways in which it does this are often quite different to the mechanisms available in other jurisdictions. To make this tangible, my clearest example would be tech companies that have done wrong in the eyes of regulators publicly posting admissions of wrongdoing, stating what they've done wrong and how they will reconcile this in their policy going forward. So the sorts of informal regulatory influence that shape Chinese tech policy going forward are something else that's interesting and, I think, should be watched within this comparative frame.
And, sorry, I know I've already said 'finally', but China is in many ways a sheltered market. You talked about tech companies, especially social media companies, but tech companies in general have succeeded because Beijing has wanted them to and has kept out foreign competition. I just wonder whether China would like itself to have some kind of Brussels effect, whether that's focused within its own borders, or whether it would like to be the one defining the rules of the game internationally as well?

Yeah, so I think it's an interesting question, and one that I hope my DPhil will answer one day. But there's been some research here: on the one hand you've got the Brussels effect, and on the other, some scholars have started to look at what's called, inverted commas, the 'Beijing effect'. This is different from the Brussels effect because it's not focused on regulation, since China in many ways doesn't have the strongest regulatory system, and often what is regulated for on paper might not come to fruition in practice, because there aren't sufficient formal mechanisms to support it. What the Beijing effect refers to instead is the export of infrastructures, particularly along the Belt and Road Initiative, into many Global South countries and parts of Europe as well, and how the design of the technologies that are exported will inherently have Chinese governance norms embedded within them.

Again, to try and be slightly more tangible with this: if you look at the export of communications infrastructure, surveillance infrastructure, etc., a lot of it focuses on providing countries in the Global South with a greater degree of data sovereignty, so control over their domestic data through using these infrastructures. But what scholarship has suggested is that this is only partially real data sovereignty, because you're still working within the confines of the infrastructures that the Chinese government has exported. So certainly a conversation for a different date, but I think one of the really interesting dynamics going forward is looking at how, for instance, technical standards will play out in these areas of international divergence or extraterritoriality.
The details of the EU AI Act will largely be fleshed out in technical standards. So on the one hand you have these technical standards being exported from the EU, for things like how a system is designed, what counts as bias, etc., whilst on the other you have the actual infrastructures often being exported from China, which come with their own technical standards. How these two trends will be reconciled in the future is a big open question, and one that I think no one has particularly good answers for yet.

Fantastic, thank you so much for speaking to us today, Hugh.

Oh no, it was a pleasure. Thank you for having me.

Thanks for listening, and if you haven't already, remember to subscribe so you stay up to date with all our new content. We've got some really exciting stuff coming out in the next couple of months. See you next time.