
Explainable AI in the UK – the Regulatory Landscape

02 Apr, 2020

Data of all kinds is becoming more and more available. Likewise, as computing power develops, Artificial Intelligence (AI) is becoming an increasingly vital part of our daily lives. However, as we trust AI with more personal and professional tasks, the question of accountability must be asked.

Previously, we discussed both the need for and methods of AI Explainability (XAI). Here, we want to look at the regulatory landscape that is building around this pressing need.

Artificial Intelligence in Context

For most people, interacting with an AI-enabled system is a common (if not daily) occurrence. Solutions such as image recognition systems (for tagging and identifying photos), conversational voice recognition systems (often used by virtual personal assistants) and recommendation systems (such as those behind online shops and streaming services) are all powered by AI and Machine Learning.

However, AI is also increasingly finding its way into a broader range of industries, such as entertainment, education, construction, healthcare, manufacturing, law enforcement and finance. In many of these cases, its decisions can have a critical influence on matters of life and death, as well as personal and financial wellbeing.

This is especially true for systems used in healthcare, facial recognition in policing, or predicting recidivism in the criminal justice system. Among emerging trends, autonomous vehicles and the increasing use of drones in warfare offer further examples where the decision-making process behind the AI system needs to be right.

A Measure of Accountability

As this makes clear, AI has unquestionable potential to transform our reality. Naturally, this has led to numerous discussions in research and policy circles about the extent to which individuals should be able to understand (or receive an explanation of) how AI-based decisions are determined.

From an individual’s perspective, we want these systems to produce transparent explanations and reasons for their choices, because those choices can affect us in many ways. This is why such regulations are being established.

On the business side, a lack of explainability can directly lead to serious regulatory and reputational risks for those deploying AI. It can also breed mistrust of the system(s), hampering their development and success.

For example, financial companies are not legally allowed to let AI or Machine Learning algorithms make final decisions in insurance, loan or mortgage approval processes. At the end of the day, the banks are still responsible for such choices and for any unethical decisions that may occur. So, until they implement technology that clearly shows the “thought process” and rules out any illegal bias, human intervention is still relied upon.

Therefore, with an increasing call for some form of explainability on the horizon, governments, regulators and private sector organisations alike are trying to implement (and agree on) the rules and principles that govern AI-enabled systems, from design and deployment to usage once active. Here, I want to focus primarily on the regulatory landscape of Explainable AI in the UK, as it has one of the world’s leading AI ecosystems.

Policy and governance structures surrounding data use and AI come from a complex configuration of legal and regulatory instruments and standards, alongside professional and behavioural norms of conduct. In the UK, these structures include regulatory mechanisms to govern the use of data, emerging legal and policy frameworks governing the application of AI in certain sectors, and a range of codes of conduct that seek to influence the ways in which developers design AI systems. A sketch of this environment is below.

Main Legislation

The primary legislation in the United Kingdom that explicitly states a requirement to provide an explanation to an individual is the Data Protection Act 2018 (DPA), which is the country’s implementation of the General Data Protection Regulation (GDPR) – the data protection and privacy regulation that governs the use of personal data in EU countries.

Together, they regulate the collection and use of personal data: information about identified or identifiable individuals. Where AI doesn’t involve the use of personal data, it falls outside the remit of data protection law. For example, the use of AI for weather forecasting or astronomy doesn’t concern individuals.

Yet, very often, AI does use or create personal data. In some cases, vast amounts of personal data are used to train and test AI models. On deployment, more personal data is collected and fed through the model to make decisions about individuals. These decisions, even if they are only predictions or inferences, are themselves personal data, as they relate to specific, identifiable individuals. In any of these cases, such AI falls within the scope of data protection law.

This law, however, is relatively neutral when it comes to technology. It does not directly reference AI or any associated technologies, such as Machine Learning. However, both GDPR and the DPA place a significant focus on large-scale automated processing of personal data, and several provisions specifically refer to the use of profiling and automated decision-making. This means the law applies to the use of AI to provide a prediction or recommendation about someone.

Specifically, it’s worth paying close attention to the following rights and obligations within GDPR and the DPA:

The Right to Be Informed

When it comes to GDPR, Articles 13 and 14 detail individuals’ rights to be informed about solely automated decision-making that produces legal or similarly significant effects. Users need to be informed that such systems exist, as well as provided with meaningful information regarding the logic involved and the significance (including the envisaged consequences) for the individual.

What does this mean for AI? Owners of AI need to show how the AI works (the logic) and how the AI can impact the individual user, customer, client, applicant, etc. When AI makes a decision that can significantly impact the end-user, such as on a loan request, that user has a right to know how the decision was reached.
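
To make this concrete, below is a minimal sketch of how per-decision reasons might be surfaced for a simple, hypothetical loan model. The model, feature names and data are illustrative assumptions only, not a description of any real lending system.

```python
# A minimal, hypothetical sketch: surfacing per-decision reasons for a simple
# loan model. Feature names and data are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "existing_debt_k", "years_at_address"]

# Toy training data: income (thousands), debt (thousands), years at address.
X_train = np.array([
    [45, 12, 1],
    [80,  2, 6],
    [30, 25, 2],
    [95,  5, 10],
    [28, 18, 1],
    [60,  4, 4],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def explain_decision(applicant: np.ndarray) -> None:
    """Print the decision and each feature's contribution to the log-odds."""
    decision = model.predict(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * applicant  # coefficient * feature value
    print("Decision:", "approved" if decision == 1 else "declined")
    for name, value in sorted(zip(feature_names, contributions),
                              key=lambda pair: abs(pair[1]), reverse=True):
        direction = "pushed towards approval" if value > 0 else "counted against approval"
        print(f"  {name}: {direction} (contribution {value:+.2f})")

explain_decision(np.array([32, 20, 2]))
```

Even a simple listing like this, translated into plain language, goes much further than “the decision was generated by AI”.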

The Right of Access

Article 15 of GDPR also states an individual’s right to access information on the existence of solely automated decision-making processes. This is very similar to articles 13 and 14 – it further highlights that individuals have a right to know about this process.

What does this mean for AI? Recital 71 provides interpretative guidance. It makes clear that individuals have the right to obtain an explanation of a solely automated decision after it has been made. A customer might not even know such a decision was made by an AI, but that’s not the point. If an individual requests an explanation, saying “the decision was generated by AI” is not enough.

The Right to Object

Article 21 of GDPR concerns individuals’ rights to object to the processing of their personal data. Specifically, this includes a direct and absolute right to object to any profiling for direct marketing purposes.

What does this mean for AI? Not much directly, at least. If someone requests that their data not be processed, then your AI and Machine Learning algorithms can’t use their personal data at all. As we mentioned earlier, any findings generated about these individuals also count as personal data, so these can’t be kept either.

Rights Related to Automated Decision-Making – Including Profiling

Under Article 22 of GDPR, individuals also have the right not to be subject to a solely automated decision-making process, specifically one that produces legal or similarly significant effects. There are a few exceptions to this but, even in these cases, organisations are obliged to adopt appropriate measures to safeguard individuals; this includes a right to obtain human intervention, to express their views and to contest the decisions reached by AI. Recital 71 also provides interpretative guidance for Article 22.

What does this mean for AI? Human intervention can often be requested, so such manual options must always be available. Even then, when dealing with AI decision-making, safeguards need to be in place. For example, simpler Machine Learning models can use decision trees, where choices follow a clear path that is easy to trace. For more complex AI, this isn’t the case, so human input is often relied upon when errors are flagged or significant choices are made. In the future, such organisations will turn to advanced ethical testing and explainability to better trust AI.
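
As a purely illustrative sketch (the model, feature names and data below are hypothetical), this is roughly what the easy-to-trace case looks like: a small decision tree whose full rule set can be printed and read step by step.

```python
# A minimal sketch of a decision tree whose "thought process" can be printed
# as plain if/else rules. Features and data are hypothetical illustrations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income_k", "existing_debt_k"]
X = np.array([[45, 12], [80, 2], [30, 25], [95, 5], [28, 18], [60, 4]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved, 0 = declined

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as human-readable rules that a reviewer
# (or the affected individual) can follow from root to leaf.
print(export_text(tree, feature_names=feature_names))
```

A deep neural network offers no such readable rule set, which is exactly why human review and dedicated explainability tooling become necessary as model complexity grows.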

Data Protection Impact Assessments

Article 35 of GDPR requires companies to carry out a Data Protection Impact Assessment (DPIA) whenever processing personal data is likely to result in a high risk to individuals, particularly when new technologies are involved.

A DPIA is always required for any systematic and extensive profiling, or any other automated evaluation of an individual’s personal aspects, used to make impactful decisions (that is, decisions with legal or similarly significant effects).

What does this mean for AI? Ultimately, many of the rights GDPR and the DPA give to individuals, and the obligations they place on organisations, are directly relevant to the use of AI. This also means that AI can’t simply be implemented at a company’s leisure: when such impactful decisions are involved, there is a duty to perform a DPIA.

Other Relevant Laws

When it comes to AI regulations in the UK, we can also look to both the Equality Act 2010 and Administrative law.

Equality Act 2010

This applies to a wide range of organisations and prohibits any behaviour that discriminates against, harasses or victimises another person on the basis of any of a set of “protected characteristics”. These include age, disability, race and gender, among others.

As it’s your organisation’s obligation to avoid disadvantaging people with protected characteristics as far as possible, you need to be able to explain any decision-making process in order to show that it does not discriminate on these factors. This explanation must be in a format that the decision recipient can meaningfully engage with.

What does this mean for AI? Even smart AI processes are capable of accidental bias, especially if they’re trained on historical information that may contain hidden bias. Because you need to demonstrate to users, at any point and upon their request, that your process was fair and ethical, the best option is to regularly verify that your AI is performing as intended.

Of course, from a pragmatic point of view, the easiest solution is simply not to give your AI solutions such information. When processing decisions about individuals, it’s often better not to provide the AI with race, gender or other protected factors. That said, this doesn’t entirely remove the problem. AI still needs to be monitored and checked to ensure it doesn’t, for example, give different results to people in one location over another, especially when those locations correlate with specific demographics that could be protected under the Equality Act 2010.
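
As an illustration of that kind of check, here is a minimal, hypothetical sketch: the protected attribute is kept out of the model’s inputs but retained separately, so outcomes can still be compared across groups afterwards. Column names and data are invented for the example.

```python
# A minimal, hypothetical sketch of a proxy-bias check: the protected attribute
# never enters the model, but is still used afterwards to compare outcomes.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.DataFrame({
    "income_k":        [45, 80, 30, 95, 28, 60, 52, 33],
    "existing_debt_k": [12,  2, 25,  5, 18,  4,  9, 20],
    "postcode_area":   [ 1,  0,  1,  0,  1,  0,  0,  1],  # potential proxy
    "protected_group": [ 1,  0,  1,  0,  1,  0,  0,  1],  # held out of training
    "approved":        [ 0,  1,  0,  1,  0,  1,  1,  0],
})

features = ["income_k", "existing_debt_k", "postcode_area"]  # no protected column
model = LogisticRegression(max_iter=1000).fit(
    applications[features], applications["approved"]
)

applications["predicted"] = model.predict(applications[features])

# Compare predicted approval rates between groups: a large gap suggests the
# model may be reaching the protected characteristic through proxies such as
# location.
rates = applications.groupby("protected_group")["predicted"].mean()
print(rates)
print("Approval-rate gap:", abs(rates.loc[0] - rates.loc[1]))
```

In practice such checks should run on far larger samples and on a regular schedule, but even this simple comparison is the kind of evidence that helps demonstrate a process is not discriminating on protected factors.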

Judicial Review Under Administrative Law

Anyone can apply to challenge the lawfulness of governmental decisions or rulings. In other words, individuals can legally question decisions made by a public sector agency, or even by private bodies contracted to carry out public functions on behalf of the government. Naturally, this includes cases where AI decision-making is used in such processes.

What does this mean for AI? Not much, unless you operate in the public sector or carry out public functions on its behalf, in which case it counts for a lot. This is very similar to the points above: individuals have a right to know how decisions about them were determined.

Emerging Policy Approaches

In several sectors, regulatory bodies are investigating whether AI-enabled products or services might raise questions about explainability, as well as whether existing frameworks can sufficiently manage any concerns that might follow. For example:

  • The UK’s Automated and Electric Vehicles Act 2018 makes provision for liability to fall on insurers or owners of automated vehicles. This answers some questions about whether an explanation would be required to allocate responsibility for an accident.
  • The Financial Conduct Authority is commissioning research to gain a better understanding of the explainability challenges that arise when applying AI in the finance sector. They’re also working with the International Organization of Securities Commissions to develop a framework for the application of ethical AI in this industry.
  • The Medicines and Healthcare products Regulatory Agency (MHRA) is working with the British Standards Institution to examine how far existing frameworks for regulating medical devices are able to address the challenges posed by AI, as well as whether new standards (in areas such as transparency and explainability) might be necessary to validate the efficacy of such devices.

Summary

The finance sector has always been heavily regulated. Unsurprisingly, the number of legal regulations regarding data and AI is growing. With the continuing expansion of the technological world, it can be said with certainty that this regulatory landscape will keep growing too.

XAI (Explainable AI) is a complex area: on one hand, algorithms are evolving to consider ever more variables; on the other, the expectation is that consumers will fully understand how decisions and predictions are made. This introduces a whole new set of considerations: human-interpretable decision boundaries, trading predictive accuracy against visibility, and risk management and mitigation. One thing we know for certain is that this area will have to evolve considerably in the coming months.

Finally, it’s worth noting that in April 2018 the government published its AI Sector Deal, which tasked the ICO and The Alan Turing Institute to: “…work together to develop guidance to assist in explaining AI decisions.” This fits into the UK’s ongoing efforts, which also involve national and international regulators and governments, to address the wider implications of transparency and fairness in impactful AI decisions.

This partnership is working on a new AI auditing framework, which will provide guidance on giving practical explanations of AI decisions to the individuals whose data was processed. It’s currently in the initial consultation phase, with the final framework expected sometime in 2020.

For now, however, we can look at the three parts of the draft guidance that have already been published:

• Explaining decisions made with AI – Part 1: “The basics of explaining AI”

• Explaining decisions made with AI – Part 2: “Explaining AI in practice”

• Explaining decisions made with AI – Part 3: “What explaining AI means for your organisation”

Questions?

Get in touch with us to learn more about the subject and related solutions
