AUTHOR: EMMA GRECH
Dr Emma Grech, a partner at City Legal, writes about the anticipated draft EU Regulation on Artificial Intelligence (“AI”), which seeks to introduce a harmonised, ‘first-of-its-kind’ AI regulatory framework whilst simultaneously promoting industry excellence and mitigating the risks introduced by AI technology. The proposed Regulation harnesses principles already enshrined in other EU laws – notably, for example, those relating to privacy and product safety – with an inbuilt emphasis on accountability, risk management, exorbitant maximum fines based on turnover, and far-reaching extraterritorial effect.
Published on 30 April 2021
FEAR OF THE (UN)KNOWN
Although it may come as a surprise to some, AI often looks less like a smart soldier-bot deployed in futuristic dystopian movies, and more like our everyday habits. At its most basic levels, AI is everywhere: whether it is Google predicting our search needs; Maps suggesting the best route to the city centre; Spotify recommending personalised playlists; or Netflix suggesting viewing recommendations, to name a few examples. By making sense of vast amounts of data in a bid to provide efficient solutions, AI – whether at its most basic or most complex levels – is a ‘horizontal’ technology that, generally, can enhance processes and procedures across all economic sectors.
Whilst the advantages offered by AI are indisputable, fears have grown in recent years over the possible dangers triggered by its vast range of use-cases (a Pandora’s box of concerns revolving mainly around economic loss, liability, and ethical issues). That fear has been further amplified in a world where technology – including AI – is advancing at breakneck speed, with governments and legislators, naturally, struggling to keep up.
HARNESSING TECHNOLOGICAL DEVELOPMENT THROUGH REGULATION
On 21 April 2021, the European Commission published its proposal for a Regulation on Artificial Intelligence (the “Regulation”) with the following objectives:
- ensuring that AI systems available and used in the EU are safe and respect existing law on fundamental rights and EU values;
- ensuring legal certainty to facilitate investment and innovation in AI;
- enhancing governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
- facilitating the development of a single market for lawful and trustworthy AI applications.
Indeed, the European authorities’ goal is to create legal certainty that places the EU at the competitive forefront of AI-tech development, while simultaneously formulating a first-ever framework for AI aimed at ensuring that AI technology is utilised safely and in a manner that protects fundamental rights.
The Regulation should be viewed as part of the Bloc’s wider efforts, in recent years, to demonstrate its ability to be a global leader in trustworthy AI; in particular, reference should be made to the EU’s 2018 Coordinated Plan on AI with Member States, intended to reinforce AI uptake, investment and innovation, which plan is now being reviewed and updated.
TECHNOLOGY NEUTRAL AND “FUTURE-PROOF”
In light of the ever-evolving nature of AI, the EU has rightly recognised that the definition of AI in any set of rules intended for its regulation must be technology-neutral and ‘future-proof’ – meaning that it is wide enough to avoid becoming outdated, whilst ensuring that AI systems, techniques and approaches which are currently in development, or not yet even conceived, are captured by its governance.
The proposed Regulation’s definition of AI is based on the OECD’s 2019 Recommendation on Artificial Intelligence, encapsulating: (a) software; (b) developed with one or more of the techniques and approaches specified in Annex I to the Regulation, such as machine learning, logic-based and statistical approaches; and (c) which can, for a given set of human-defined objectives, generate outputs, such as predictions, recommendations or other decisions.
Notably, Annex I can be amended by the European Commission by virtue of delegated acts. This further bolsters the intended ‘future-proof’ nature of the Regulation.
The Regulation adopts a tiered risk-based approach whereby, in essence, the higher the risk posed by a given AI system, the stricter the regulation. The ‘risk pyramid’ is as follows:
UNACCEPTABLE RISK: AI systems which are considered a clear threat to the safety, livelihoods and rights of people will be deemed ‘unacceptably risky’ and will therefore be banned. This category includes, for example, AI systems that manipulate human behaviour to thwart the free will of users (e.g., toys using voice assistance that encourage dangerous behaviour in children).
HIGH-RISK: this category incorporates AI technology implemented in, for example:
- Critical infrastructures (e.g., transport), that could put the life and health of citizens at risk;
- Educational or vocational training, that may determine access to education and the professional course of someone’s life (e.g., scoring of exams); and
- Safety components of products (e.g., AI application in robot-assisted surgery).
In terms of the Regulation, AI systems classified as high-risk will be subject to strict obligations before they can be introduced to the market (and throughout the product lifecycle), such as:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Provision of clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk; and
- High level of robustness, security and accuracy.
LIMITED RISK: if AI systems are classified as posing ‘limited risk’, they will be subject to specific transparency obligations in order to ensure that users are aware and duly informed, for example – in the context of a chatbot on a website – that they are interacting with a machine.
MINIMAL RISK: it is interesting to note that the Regulation allows the free use of applications such as AI-enabled video games or spam filters. Most AI systems currently in circulation in the EU fall into this category, and, as such, they present minimal or no risk.
Thus, the category of AI that shall become subject to the most stringent levels of monitoring and regulation is high-risk AI. Generally, once developed, high-risk AI systems must undergo a conformity assessment and be registered in an EU database. Any such AI products made available in the EU shall be required to carry the ‘CE’ mark, which evidences compliance with European health, safety and environmental protection standards.
The criteria for the classification of AI as high-risk or otherwise will depend on factors such as the intended purpose of the AI and the extent of its use, the number of potentially affected persons, and the irreversibility of potential harm caused.
APPLICABILITY OF THE REGULATION
It is important to note that the following stakeholders will fall within the remit of the Regulation’s applicability:
- Providers (including importers and distributors) placing on the market or putting into service AI systems in the EU. As a result of the Regulation’s extraterritorial breadth, it is irrelevant whether such providers are established within the EU or in a third country;
- Certain users of AI systems located within the EU; and
- Providers and users of AI systems located in a third country, where the output produced by the system is used in the EU. This, again, is the result of the Regulation’s extraterritorial outreach.
MONITORING AND ENFORCEMENT
At EU level, the Regulation proposes the establishment of a European Artificial Intelligence Board, the members of which shall comprise representatives of the European Commission and the various Member States. Mimicking the set-up adopted under the General Data Protection Regulation (the “GDPR”), at a national level, Member States are required to designate national competent authorities, including a national supervisory authority, to supervise the proposed Regulation’s implementation.
Again, similarly to the GDPR, the Regulation allows the imposition of harsh penalties in the case of non-compliance, contemplating administrative fines for certain offences of up to €30M, or, in the case of companies, up to 6% of the offender’s total worldwide annual turnover, whichever is higher. The Regulation also imposes serious penalties for the supply of incorrect, incomplete or misleading information to national competent authorities, namely up to €10M, or, in the case of companies, up to 2% of the offender’s total worldwide annual turnover, whichever is higher.
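By way of illustration only, the ‘whichever is higher’ mechanism underpinning these fine ceilings can be sketched in a few lines of code. The figures below are those stated in the proposal; the function name and structure are purely illustrative, not part of the Regulation:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum administrative fine ceiling: the higher of a
    fixed amount or a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Most serious infringements: up to EUR 30M or 6% of turnover, whichever is higher.
# For a company with EUR 1bn turnover, the 6% figure (EUR 60M) exceeds the fixed cap.
print(fine_ceiling(1_000_000_000, 30_000_000, 0.06))  # → 60000000.0

# Supplying incorrect information: up to EUR 10M or 2% of turnover.
# For a company with EUR 100M turnover, the fixed EUR 10M cap is the higher figure.
print(fine_ceiling(100_000_000, 10_000_000, 0.02))  # → 10000000
```

As the examples show, the turnover-based limb only bites for larger undertakings, which is precisely what gives the penalty regime its deterrent scale.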
The European Commission is now requesting feedback on the proposed Regulation. All feedback received will be presented to the European Parliament and Council in order to contribute to the legislative debate. Stakeholders are thus encouraged to participate in this important step which will have the ability to shape the contents of the Regulation in the coming months. The consultation period will close on 22 June 2021, and feedback can be submitted via the European Commission’s website.
Although the Regulation is still in its infancy and will likely undergo amendments and refinements throughout the legislative process, affected providers and users of AI systems should start delving into the newly proposed requirements so as to ensure that, should the Regulation eventually become effective, they are well-positioned – in good time – to comply.
- The Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence
- European Commission, Coordinated Plan on AI 2021 Review: https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review
- European Commission, Regulatory framework proposal on Artificial Intelligence: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
For more information on how we may assist with any of your technology law-related needs, please contact:
Dr Emma Grech, Partner –
DISCLAIMER: The information contained in this document does not constitute legal advice or advice of any nature whatsoever. Although we have carried out research to ensure, as far as is possible, the accuracy and completeness of the information contained in this article, we assume no responsibility for errors or other inconsistencies herein.