Codified Legal: AI Briefing

Part 1: The struggle for AI regulatory supremacy

This briefing is the first in a three-part series by Codified Legal on the key legal issues posed by artificial intelligence. Part Two examines the intellectual property issues raised by AI, and Part Three the data protection implications.

Much of the current concern surrounding AI centres on large language models and generative AI such as ChatGPT (sometimes referred to as ‘weak AI’ or ‘narrow AI’), which are capable of performing tasks to a high level under direction from humans. We go into more detail on how this type of AI works in Part Two. Other forms, such as sentient ‘True AI’ (sometimes referred to as ‘strong AI’ or ‘general AI’) capable of operating independently much like human intelligence, remain theoretical for now. We are focussing this briefing on the legal issues surrounding AI as it is currently available, although research into strong AI is ongoing and it could become a reality in the not-too-distant future.

The current UK position

There is currently no specific, standalone AI law or regulation in the UK. There are some sector-specific rules, notably the GDPR provisions on automated decision making, which we will discuss in Part Three. Moreover, most UK law and regulation is ‘technology neutral’: if an AI product or service produces an outcome contrary to existing law (for example, the Equality Act or product safety legislation), the existing law will apply.

The UK government’s new proposals

On 29 March 2023 the UK government published a white paper on its plans for AI regulation. With the UK aiming to maintain its position as one of the world's leaders in AI technology, the white paper takes a pro-innovation approach: encouraging investment and building trust in AI rather than placing a heavy regulatory burden on AI developers.

A white paper has no legally binding status; instead it seeks to gauge views on a subject and prompt discussion of key topics. AI is growing at an exponential rate, which raises the question of whether a white paper moves quickly enough. Elon Musk and other tech leaders have called for a six-month halt on AI development to allow regulators to catch up, and there are major fears that, if not kept in check, AI could fuel an uncontrollable arms race. The UK approach does have some potential advantages in terms of flexibility, but there are concerns that potential economic gain is being prioritised over other issues.

The white paper argues that the government does not want to rush into creating legislation that is not fit for purpose or that would stifle the industry's growth, particularly for small businesses and start-ups. It therefore suggests five principles that existing regulators and future regulation should adhere to:

- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress

These principles do not have statutory effect for the time being; instead they will be applied by existing regulators, who will be left to define their applicability and scope themselves. In time, the plan is to impose statutory duties on those regulators to have due regard to the principles, though the paper is clear that there will be no new AI regulator. The main intention is to allow maximum flexibility while minimising disruption for business and increasing public confidence in AI.

The white paper does not lay down definitive plans for what the regulatory regime will look like; however, a road map is intended to be produced within the next six months to add further clarity.

The paper provides some direction on how this process will operate, listing under each principle what it anticipates will need to be done. Under safety, security and robustness, for example, the anticipated tasks are to “provide guidance about this principle”, “refer to a risk management framework which AI life cycle creators should apply”, and “consider the role of available technical standards”. As is the nature of a white paper, these are somewhat vague.

It is important to note that the white paper makes no mention of a blanket ban on any form of AI, opting instead for what could be a rather reactive position. This approach has potential downsides: harmful AI outcomes are more likely to slip through the regulatory net, and by the time regulators act the harm may already have occurred, which may be worrying for the consumers on the receiving end.

We are already witnessing the harmful side of AI. One example is racial bias in US healthcare provision: an AI was trained to predict which patients would require additional healthcare, using past expenditure as its proxy for need. Because some racial groups spend less on the same health issues, the AI systematically failed to identify patients in those groups who needed additional care. Other examples include AI being used to cheat in exams, or ‘hallucinating’ by producing output it deems plausible despite it being false.
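For readers who want to see the mechanism, the following is a minimal, hypothetical Python sketch of that proxy bias. All numbers are invented: two groups have identical underlying health need, but one spends less per unit of need, so ranking patients by the cost proxy under-selects that group.

```python
# Hypothetical illustration of proxy bias. None of these numbers come from
# the real US study; they are invented to show the mechanism only.
import random

random.seed(42)

patients = []
for group in ("A", "B"):
    for _ in range(1000):
        need = random.gauss(50, 10)            # true underlying health need
        # Group B spends less for the same level of need (the biased proxy).
        spend_rate = 1.0 if group == "A" else 0.7
        cost = need * spend_rate + random.gauss(0, 5)
        patients.append({"group": group, "need": need, "cost": cost})

# A model trained to predict cost effectively ranks patients by cost, so
# flag the top 10% of patients by the cost proxy for additional care.
flagged = sorted(patients, key=lambda p: p["cost"], reverse=True)[:200]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.1%}")
# Both groups have identical true need, yet group B ends up heavily
# under-represented among patients flagged for additional healthcare.
```

The sketch simply shows why a seemingly neutral proxy, chosen because the data was readily available, can produce a racially skewed outcome.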

The UK's intention seems to be to promote AI innovation and investment, with a clear bias towards encouraging the UK tech industry and assisting the UK economy. Encouraging AI also offers huge potential benefits to society, such as predicting which people are likely to suffer certain diseases, or picking up on data that doctors may have missed and alerting them, increasing the chances of a correct medical diagnosis.

The UK's approach of light-touch, devolved regulation has some advantages. It is faster to devolve power to individual regulators than to get a big piece of AI legislation onto the statute books (directly at odds with the EU approach outlined below). However, it risks creating a patchwork of regulation that lacks consistency and increases uncertainty about how the rules apply, especially in areas that may overlap (such as the use of data). The white paper does propose a small central function to monitor and co-ordinate the regulators' work.

Although there is growing excitement over AI, there is certainly fear too, so whether this less prescriptive, more reactive approach properly balances the benefits and opportunities of AI against its risks remains to be seen.

The EU’s regulatory approach

As noted above, the GDPR already contains provisions on automated decision making under Article 22, restricting the use of solely automated decisions in certain scenarios.

In terms of future regulation, the EU is creating a far more stringent and centralised regime. It is not willing to take the flexible path the UK is currently pursuing, as evidenced by the proposed EU AI Act, set to be the first AI-specific legal framework passed by a major regulator. The Act seeks to cement the EU's place as, arguably, the world's leading technology regulator, building on the reach created by the GDPR in 2018.

There are some similarities between the EU and UK proposals, one key driver of both being increasing trust in AI. The EU AI Act will do this through a risk-based approach, with four designated tiers of risk. The highest tier, ‘unacceptable risk’, will target the most socially harmful AIs by banning them outright: those that pose a “threat to the safety, livelihoods and rights of people”, such as “toys using voice assistance that encourages dangerous behaviour”.

A step down from that is ‘high risk’, which will include AIs used in areas such as construction, education, and the administration of justice, with these tools being subject to authorisation by judicial or supervisory bodies. In addition, use of such AIs will require evidence of human oversight as a further safety measure.

The final two tiers will be ‘limited risk’ and ‘minimal or no risk’. Under the ‘limited risk’ category, transparency obligations will still apply, such as having to notify users that they are interacting with a machine in the case of chatbots like ChatGPT and Bard.

Most AI currently used in the EU will fall within the lowest tier (‘minimal or no risk’), for example spam filters and AI-enabled video games. Use of those AIs is to be unrestricted in the EU, subject to other regulations such as the GDPR.
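To make the tiered structure concrete, here is a minimal, hypothetical Python sketch of how a compliance team might triage systems against the four tiers. The example use cases and their mapping are our illustrative assumptions, not the text of the Act, whose annexes will define the actual categories.

```python
# Hypothetical triage sketch of the EU AI Act's four risk tiers. The use
# cases and their mapping are illustrative assumptions, not the legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "prior authorisation and evidence of human oversight"
    LIMITED = "transparency obligations (e.g. disclose it is a machine)"
    MINIMAL = "unrestricted, subject to other laws such as the GDPR"

# Illustrative mapping from use case to tier, echoing the examples above.
EXAMPLE_SYSTEMS = {
    "voice-assisted toy encouraging dangerous behaviour": RiskTier.UNACCEPTABLE,
    "AI supporting the administration of justice": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```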

The proposed Act also carries large sanctions for non-compliance: up to €30 million or 6% of worldwide annual turnover for breaching a prohibition, €20 million or 4% for an infringement of other obligations, and €10 million or 2% for supplying misleading, incomplete, or incorrect information (in each case, under the Commission's proposal, whichever is higher). A European Artificial Intelligence Board will also be established to supervise the operation of the Act, make recommendations, and provide guidance, among other duties.
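The arithmetic of those caps is worth seeing once. A short sketch, using an invented turnover figure (the ‘whichever is higher’ mechanics reflect the Commission's 2021 proposal):

```python
# Sketch of the proposed AI Act fine caps. Each cap is the higher of a fixed
# amount and a share of worldwide annual turnover ("whichever is higher"
# under the Commission's 2021 proposal). The turnover figure is invented.
def max_fine(fixed_cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
    """Return the maximum possible fine for one breach tier."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn worldwide annual turnover

print(f"Breach of a prohibition: EUR {max_fine(30e6, 0.06, turnover):,.0f}")  # 120,000,000
print(f"Other obligations:       EUR {max_fine(20e6, 0.04, turnover):,.0f}")  # 80,000,000
print(f"Misleading information:  EUR {max_fine(10e6, 0.02, turnover):,.0f}")  # 40,000,000
```

For a large company, the percentage cap rather than the fixed amount is what bites: at €2bn of turnover, the top cap is €120 million, not €30 million.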

The passing of the AI Act had seemed imminent, with the European Parliament due to finalise its position (which now looks likely to happen towards the end of April) before the Commission, Council and Parliament discuss the final details. The idea had been for the Act to be passed by the end of 2023, but that deadline seems increasingly unlikely to be met, and how long the delay could be is not entirely clear. The disadvantage of the EU approach of passing one large piece of AI legislation is that the process moves far more slowly than the pace at which AI is growing.

The interaction between the UK and EU regime

When in force, the EU AI Act could affect the UK's regulatory position, potentially forcing the UK off its flexible path towards a more comprehensive, legislated regime as many UK businesses seek to meet EU standards for trading purposes.

Although the circumstances were different with the GDPR (regulation agreed before the UK left the EU), we saw that UK business welcomed working to a single standard rather than designing processes that work only in the UK with different arrangements for EU customers or operations. The same is likely to apply to AI: a UK development team will want its AI tools and software to be usable by customers in the EU, and it will not want to change its development processes retrospectively to make that possible.

The UK accounts for over one third of Europe's AI companies, and the white paper seems focussed on maintaining or increasing that share. The relaxed approach may encourage more companies to base themselves in the UK, but as they mature they will most likely want to operate in the EU market, and that will drive them to adopt EU regulations.

AI in the United States

The US is also beginning to make AI regulatory changes, proposing AI regulatory frameworks to accompany existing regulations. New York, Illinois and Maryland already have regulation in place to reduce the risks of AI used in employment decisions; New York law, for example, now requires automated employment decision tools to undergo annual bias audits.

Other states (such as California) are implementing privacy rules similar to the GDPR in their treatment of automated decision-making tools, and a number of state legislatures are discussing draft AI legislation. These drafts tend to focus on protecting individuals in high-impact areas of AI and on requiring transparency where AI tools are used. Pennsylvania is proposing a state registry of businesses operating AI systems, which would have to include details of the systems used.

Congress has passed bills on US government AI systems, and executive orders and voluntary guidance have been issued. However, there is little sign of federal law being passed on the more general aspects of AI.

At a regulatory level, the Federal Trade Commission (FTC) has published ground rules largely aimed at increasing fairness and requiring AIs to be trained in ways that remove bias. The FTC has also taken enforcement action against companies misusing AI, requiring them to delete certain algorithms and training data.

The US position on AI regulation is therefore state- and regulator-driven rather than federal (again creating a patchwork of rules, much like the one that could emerge in the UK), and state legislatures appear to be drawing on aspects of EU regulation (partly the existing GDPR) for inspiration on how to respond to the challenge of AI.

Other countries

Elsewhere, Brazil has produced draft AI legislation. Its key principles include freedom of choice, auditability, and transparency. The draft also covers risk assessments, with providers having to document the risks of their AI system before putting it on the market. As in the EU, there will be a designation of ‘high-risk’ AI, such as self-driving vehicles, as well as a prohibition on some harmful AIs.

Canada has also begun to act, producing a bill for the Artificial Intelligence and Data Act (AIDA). Though many of Canada's other statutes, such as the Bank Act, already apply to AI, AIDA would ensure that AI is subject to obligations under Canadian consumer protection and human rights law, as well as prohibiting malicious AIs.

Meanwhile, countries such as India and Australia have no planned AI legislation, but it will be interesting to see whether they change their approach in the coming year as discussion of AI increases around the world.

One country that has had quite prescriptive AI regulation for a while now is, perhaps unsurprisingly, China, with rules in force since March 2022. It has established an algorithm registry and, although public details are limited, it is understood to be collecting quite detailed information on the AI algorithms used by tech companies in China. The focus is probably on limiting information dissemination via AI so that it does not harm China's national security, and there are questions as to whether a central government repository can really understand the impact of every AI algorithm submitted to it.

Conclusion

It is clear that AI is demanding a great deal of attention from regulators, and there is not yet an obvious best approach. Countries such as the UK are focussed largely on the great benefits of AI, with a light-touch, decentralised approach to regulating it. The EU is taking a top-down legislative approach, hoping that its AI Act will build on the broadly successful GDPR.

The speed of AI's advancement might mean that by the time the best approach is worked out we are living in a world of pervasive AI that is too difficult to undo. But it may be that large language models and generative AI are the wake-up call required to get our regulatory house in order before we become capable of creating strong, sentient AI, which would pose even greater challenges for our societies. That type of AI may need much stricter control than even the EU is currently proposing.

For more information on the issues raised in this note please get in touch with us.


Stephen Ollerenshaw
Codified Legal
7 Stratford Place
London
W1C 1AY

stephen.ollerenshaw@codified.legal
0845 351 9092

17 April 2023

The information contained in this briefing note is intended to be for information purposes only and is not legal advice. You must take professional legal advice before acting on any issues raised in this briefing.
