I. Introduction
The primary aim of Regulation (EU) 2024/1689 (EU AI Act)Footnote 1 is to ensure the protection not only of “health and safety,” but also of “fundamental rights […], including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems.”Footnote 2 Its positive aspects include establishing a first regulatory framework for the development and use of AI, prohibiting practices with a known negative impact on fundamental rights, and providing greater accountability.
Others have already highlighted, however, that the European values enshrined in the Act are “likely to remain primarily aspirational.”Footnote 3 The Act indeed lacks comprehensiveness, since it risks neglecting adequate protection of central aspects intertwined with the development of AI systemsFootnote 4 (eg, the environment) and with uses (eg, those applied to law and rule-making) potentially relevant to the protection of fundamental rights and of the principles underpinning the Union values, such as democracy and the rule of law.Footnote 5
To examine the above, we begin by addressing whether and how the Regulation under-includes within its risk categories certain AI applications (ie, those in law and rule-making) that might harm fundamental rightsFootnote 6 and other key elements of the European order, such as democratic decision-making and the rule of law (para II). The paper seeks to explore – taking the existing framework and the process that led to it as given – how to navigate within the provisions of the Regulation.Footnote 7
To differentiate the impacts, uses are organised into: (i) applications “core” to law or rule-making, as they are directly part of the decision-making process (para II, 1.); (ii) “quasi-core” applications, which – depending on the specific uses – are borderline between being an essential part of the decision-making process and influencing the human component of the process, or being a supporting tool (para II, 2.); (iii) “ancillary administrative activities,”Footnote 8 ie, assistive tools that do not affect the actual legislative or regulatory functions, such as AI systems performing “a narrow procedural task” or “intended to improve the result of a previously completed human activity”Footnote 9 (para II, 3.).
Having identified the risks of potentially overlooking law and rule-making, we then explore how the Act neglects such functions (para III, 1 and 2). We subsequently examine the lack of flexibility in amending the Act and the remaining ways forward, such as voluntary compliance through codes of conduct and intervention at national level (para IV). We conclude with final considerations (para V).
II. Risks of “core,” “quasi core” and “ancillary” AI applications in law and rule-making
AI is increasingly playing a crucial role in law and rule-making around the world by performing time-consuming tasks, increasing access to knowledge bases, and enhancing the ability of public authorities to draft effective rules and to streamline the regulatory stock. Such applications are intended to improve the efficiency of the proceedings (eg, increasing decision-makers’ ability to assess all positions presented in highly participated consultations) and the quality of law and regulation (eg, improving wording and thus comprehensibility, or supporting the coherence of the new rule with existing regulation). The potential uses range from mere “ancillary” support to activities that are “core” or “quasi core” to law and rule-making. It is a functional distinction, used to identify what regulatory intervention could be adequate to mitigate risks to the rule of law.
Building on the above, this section intends to demonstrate how not regulating AI in law or rule-making might indeed run counter to the primary aim of the Act.
1. “Core” AI applications in law and rule-making
A general prohibition on the use of AI systems within the complex landscape of producing law or regulation would be an undesirable limitation on innovation. Accordingly, law and rule-making are not among the practices prohibited by the EU AI Act.Footnote 10
Other risk categories, beyond the prohibited practices, require more interpretative effort, such as the high-risk category.
Which current and potential uses of AI are “core” to law and rule-making and, while not formally classified as high-risk, might nonetheless raise concerns for principles such as democracy and the rule of law?Footnote 11
a. AI writing a law/regulation
Some AI systems are used to write a law or a regulation, or a part of it.Footnote 12 In the EU, the development of LEOS dates back to 2016, and its future applicationsFootnote 13 include LLMs in legislative drafting so as to “capitalise on the vast amount of data available to lawmakers”.Footnote 14 It is not unrealistic to state that AI systems are increasingly able to “produce a rough but credible first legislative draft.”Footnote 15
Looking at LLMs, ChatGPT has been used to write an ordinance approved by the City of Porto Alegre (Brazil).Footnote 16 LLMs can help by analysing existing laws and assisting with the creation of new ones (eg, enhancing the logical organisation of proposed drafts and pinpointing inconsistencies or ambiguities), as in the case of OpenFisca.Footnote 17 The latter involves models processing comprehensive legal texts to classify topics within different laws and regulations. An increasingly widespread use of AI pertains to the drafting of amendments, as in the Scottish and UK Parliaments,Footnote 18 in the European Parliament through the “AT4AM” system,Footnote 19 or in Italy.Footnote 20
It seems reasonable to consider the above applications as potentially raising significant risks, since the use of AI in law and rule-making might generate erroneous or biased content, especially where inputs are incomplete or ambiguous.Footnote 21 Law and rule-making activities are particularly sensitive since interventions have to fit into complex regulatory systems, linked to social values and principles, whose complexity is hardly fully grasped without human intervention.Footnote 22 Specifically, the use of AI in writing a law or regulation (or part of it) might limit the decision-maker’s autonomy of judgment.Footnote 23 The latter becomes evident when considering the widespread practice of using AI for amendments in law-making, which can alter the original meaning of a provision and are often approved collectively (as is the case in Italy), without being subjected to a compulsory impact assessment.Footnote 24 Besides, AI might end up supporting deliberate and adverse uses (eg, the production of a flood of amendmentsFootnote 25 ). Consider also the potential risk of pre-emptive censorship or automatic rejection of amendments, or even of training leading to political biases.Footnote 26
Moreover, the use of AI in legislative drafting might result in erroneous terms or poorly thought-out constructions, which can heavily influence the substance and subsequent interpretations of the content of a law. Indeed, drafting is a dynamic and forward-looking activity, going beyond the simple writing of a prescription: the process of deliberation includes exchanging perspectives and debating alternatives that lead to a collective decision.Footnote 27 These use cases might threaten “legality, implying a transparent, accountable, democratic and pluralistic process for enacting laws”.Footnote 28
b. AI in support of the residuality of law and regulation
AI can also process vast amounts of law and regulation so as to identify potential inconsistencies or overlaps, as experimented with by the Australian Office of Impact Analysis.Footnote 29 The use of AI to simplify existing law and regulation has been mentioned in a recommendation of the Administrative Conference of the United States.Footnote 30 The latter builds on a groundbreaking report on pilots in the US, such as the “Regulatory Clean Up Initiative”Footnote 31 to identify mistakes in rules through natural language processing, or the “RegExplorer”Footnote 32 system to identify burdensome, ineffective or obsolete regulations. Another tool, “QuantGov,”Footnote 33 is used to estimate the regulatory load by identifying content within massive quantities of rules. In the EU, Portugal implemented a system for automated monitoring and control of compliance with regulations to identify potential needs to review existing norms.Footnote 34
In Brazil, a “regulatory observatory” is used to monitor “plans, projects and ongoing processes on the regulatory agenda, organized by subject theme and status”Footnote 35 for regulatory planning. The Brazilian Ulysses AI-driven system analyses semantic similarities involving existing legislation and new proposals/amendments.Footnote 36
These applications might hide potential risks. AI’s accuracy in identifying semantic similarities or inconsistencies in existing legislation or regulation might be challenged by the complexity of legal language and the variability of interpretation, as well as by the complexity of the legal system.
Consider, for example, an algorithm designed solely to tally obligations, without simultaneously accounting for actions such as removing, restricting, or exempting certain obligations.Footnote 37
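To make this risk concrete, the following minimal sketch (purely hypothetical Python, with invented keyword lists, and not any of the tools cited above) shows how a counter that only tallies obligation-creating terms misreads a text that in fact reduces burdens, because it ignores exemptions and repeals.

```python
import re

# Hypothetical illustration of the naive "obligation tally" criticised above.
OBLIGATION_TERMS = re.compile(r"\b(shall|must|is required to)\b", re.IGNORECASE)
RELIEF_TERMS = re.compile(r"\b(exempt from|shall not apply|is repealed)\b", re.IGNORECASE)

def naive_burden_score(legal_text: str) -> int:
    """Counts only obligation markers, ignoring provisions that remove or limit them."""
    return len(OBLIGATION_TERMS.findall(legal_text))

def contextual_burden_score(legal_text: str) -> int:
    """Nets out relief provisions; still crude, but less misleading."""
    return naive_burden_score(legal_text) - len(RELIEF_TERMS.findall(legal_text))

sample = ("Operators shall register with the authority. "
          "Small enterprises are exempt from the registration duty. "
          "Article 12 is repealed.")

print(naive_burden_score(sample))       # 1: the text looks burden-increasing
print(contextual_burden_score(sample))  # -1: it actually removes more than it adds
```

Real systems are of course far more sophisticated, but the design choice of what to count and what to net out remains a policy decision rather than a neutral technicality.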
Moreover, the underlying approach adopted by the developer or the regulator commissioning the system – whether de-regulatory or pro-regulatory – can significantly influence how the algorithm operates.Footnote 38 This poses a particular risk, as the absence of consultations to challenge or balance the outcomes of AI-driven reviews can lead to unintended consequences,Footnote 39 as well as weaken democratic control of the law and rule-making process.Footnote 40 An error in the algorithm could indeed lead to the repeal of substantive norms beyond mere inconsistencies or overlaps. For instance, if provisions introducing affirmative actions were repealed, this could also result in a violation of the fundamental right to non-discrimination.
c. AI for consultations
AI is currently used by many decision-makers to cluster comments, to identify duplicates, or to summarise the overall comment sentiment. The EU offers an advanced example, the “Doris+” system,Footnote 41 with similar tools in the United KingdomFootnote 42 and the United States.Footnote 43 The system used by the EU also includes a component, AWS Comprehend, to detect key phrases, entities, sentiment and common topics, and could potentially be improved with an LLM.Footnote 44
Such applications aim to alleviate the resource-intensive and subjective process of sorting thousands of responses, freeing decision-makers to focus on developing creative policy solutions and considering broader implications. These uses allow significant time savingsFootnote 45 and prevent legislators and regulators from being affected by information overload bias.Footnote 46

Among the risks, AI systems might be trained on biased data (such as a tendency to give more relevance to certain geographic locations and related industry needs), or might lead to an inadequate clustering of comments (combining different groups of stakeholders showing similar positions), resulting in an unfair representation or prioritisation of certain groups’ opinions over others (eg, making an opinion appear majoritarian when it is not).Footnote 47 Similarly, clustering and ranking responses by theme can result in the loss of nuanced perspectives. Complex opinions might be grouped under broad categories, which could distort or overlook important subtleties in the data. At the same time, overusing these tools could lead decision-makers to rely too heavily on AI systems and neglect the value of human judgment in interpreting and assessing contributions. Besides, the criteria and algorithms used to cluster, analyse, and rank responses may lack transparency, making it difficult for stakeholders to understand how decisions on their contributions are being made. The latter could reduce trust in the process (openness does not only regard the outcomes of legislative processes but also the process that leads to them) and raise questions about accountability. From a different perspective, if stakeholders understand how these systems work, they might try to manipulate public opinion by gaming the AI analysis to their advantage. To sum up, the applications described above might have a direct impact on citizens’ and firms’ effective participation in decision-making, thus challenging democratic and pluralistic processes for enacting laws and regulations, and also leading to potential discrimination. Moreover, the lack of effective participation might undermine the value of the information provided to decision-makers and thus of the final law or regulation.Footnote 48
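As a purely illustrative sketch (not the actual Doris+ or AWS Comprehend pipeline, and using invented responses), the following Python fragment clusters consultation comments with TF-IDF and k-means; fixing the number of clusters at two forces a distinct minority position into one of the larger groups, which is precisely the representational risk described above.

```python
# Illustrative only: clustering consultation responses with TF-IDF and k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The reporting threshold should be lowered to protect consumers.",
    "Lower the threshold; consumer protection must come first.",
    "The threshold is fine, but compliance costs for SMEs are too high.",
    "Compliance costs are excessive for small firms.",
    "The rule discriminates against rural providers and should be withdrawn.",  # minority view
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)

# With k=2 every response must join one of two clusters, so the distinct
# "rural providers" position is absorbed into a larger group and its nuance
# disappears from any per-cluster summary handed to the decision-maker.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for label, text in zip(labels, responses):
    print(label, text)
```

The choice of the number of clusters, of the text representation and of the summarisation step are all analytical decisions that shape which voices decision-makers actually see.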
What safeguards should be provided for these uses in law and rule-making?
In all the examples provided, human intervention is key. For instance, if AI identifies overlapping rules, it should remain the responsibility of humans to decide whether the previous rule should be repealed. However, human oversight per se does not guarantee that an AI system will not produce false outputs, owing to over-reliance on the results of the system or to a lack of concrete verification of the path that led to them. Such verification should be performed in collaboration between computer scientists and lawyers, so as to reduce biases and noise in the judgment and to ensure that computer architectures incorporate fundamental safeguards.
Another guardrail is transparency. For instance, public authorities should disclose the (replicable) AI systems used to assess comments in consultations.
Moreover, uses of AI in the public sector regarding “core” applications should be based on public datasets, both for training and for the “dynamic” collection of information. AI should work with data that is legitimately formed and collected, primarily from other public institutions and platforms. Already existing public databases can be used, as well as portals and online resources providing access to current legislation. To this first level of sources, a second one could be added (scientific papers and similar), with the necessary precautions.Footnote 49 The availability and, specifically, the quality of data is indeed key: inaccurate analyses are otherwise produced, which in turn could lead to inadequate rules. Data quality must be read in the sense of “contextualisation”: training data must be collected with a view to the use that the system will make of it, and to the possible interpretations it will give with respect to the regulatory context.
Public authorities employing AI for the “core” uses examined should give preference to the open-source approach, which enables the reuse of solutions or co-creation among public authorities.Footnote 50 If this is not the case, the choice should be justified.
Lastly, AI-generated outputs should explicitly mention the sources of the information and data referenced, as well as the intended use (transparency), and ensure an adequate level of cybersecurity.Footnote 51
2. “Quasi core” AI applications in law and rule-making
The so-called “quasi core” applications can be considered as uses that are borderline between being part of the decision-making process and being a mere support for the human component of the process. In order to assess whether they can raise risks to fundamental rights, it is worth mentioning some examples.
a. AI to cluster similarly worded amendments
For instance, ItalyFootnote 52 and BrazilFootnote 53 are using AI to cluster similarly worded amendments, also by looking at semantics. While this seems a mere organisational and assistive activity, an undue use of AI might lead to proposals being given too much or too little attention. This would be the case for amendments classified as similar in wording while having a different impact on the text and outcome, or vice versa. AI would end up replacing the human decision-maker and would present the same risks as highlighted in the previous section (generating biased content that does not fit into regulatory systems, whose complexity is hardly fully grasped without human interventionFootnote 54 ).
b. AI for regulatory impact assessment (RIA)
AI offers important development prospects for data collection and analysis,Footnote 55 to be used in RIAs. It helps increase access to knowledge that would otherwise be unattainable, such as data mining to extract informationFootnote 56 and identify patterns. The Italian national statistical institute developed a platform (IstatData)Footnote 57 enabling natural language-based searches on datasets contained in its archives.Footnote 58 Clustering algorithms might be used to identify common and emerging patterns in the documents. AI might also try to predict potential impacts by analysing causal effects.Footnote 59 To the best of our knowledge, these uses are currently of limited application. Among the few examples, the German Federal Statistical Office is experimenting with machine learning to speed up the identification of the parts of draft regulatory texts that affect compliance costs, as well as of the source of those costs (eg, by understanding who the affected recipients are and at what cadence).Footnote 60 The experiment aims to combine two sources of data to lead the system to identify words related to changes in compliance costs. In Portugal,Footnote 61 deep learning was used to “identify information obligations within legal texts,”Footnote 62 estimate their cost through the “standard cost model”Footnote 63 and recognise patterns linked to administrative burdens to train a system.Footnote 64 Lastly, the EU is experimenting with the use of AI to “carry out impact assessments of major legal proposals” and “assess the impact of new legislation on existing European and national legislation.”Footnote 65
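For the cost-estimation step mentioned above, the arithmetic of the standard cost model is a conventional price-times-quantity calculation; the sketch below (with invented figures, and not the Portuguese or German implementation) shows the computation that such a system would feed once it has identified an information obligation in a legal text.

```python
def standard_cost_model(tariff_per_hour: float, hours_per_obligation: float,
                        affected_entities: int, frequency_per_year: int) -> float:
    """Standard cost model convention: administrative burden = price x quantity."""
    price = tariff_per_hour * hours_per_obligation     # cost of complying once
    quantity = affected_entities * frequency_per_year  # compliance events per year
    return price * quantity

# Invented example: a reporting duty taking 2 hours at EUR 30/hour,
# borne by 10,000 firms once a year -> EUR 600,000 per year.
print(standard_cost_model(30.0, 2.0, 10_000, 1))  # 600000.0
```

What the experiments described above seek to automate is not this multiplication but the upstream step of locating obligations and their parameters in legal texts, which is where the accuracy concerns discussed next arise.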
The above applications raise some concerns. RIA is intended to provide final decision-makers with evidence in order to adopt informed law and regulation; as such, it can widely influence the final decision while not compelling it. In order to appraise what role AI can play and what role should remain for humans, it is important to underline that RIA is not a mere technical analysis, and many discretionary decisions are up to technicians during the assessment (eg, it is up to them to balance advantages with disadvantages, or to value sensitive factors such as human life). AI can perform well in executing limited tasks, such as collecting and analysing public datasets and relevant literature, defining the regulatory framework within which a regulatory option is situated, and quantifying the burdens introduced by different options. If confined to such tasks, there are no inherent risks, except for one: AI is not forward-looking. If it relies primarily on historical data, it risks underestimating new entrants and emerging risks,Footnote 66 as well as supporting the flawed assumption that human behaviour is always consistent.Footnote 67
The “quasi-core” uses (RIA and the clustering of similar amendments) might potentially challenge the democratic and pluralistic process for enacting laws and regulations, as well as the quality of final rules.
Some safeguards already examined for “core” uses – eg, reliance on public datasets and transparency as previously described – might be sufficient. It should also be noted that consultations are an essential phase of the RIA process; therefore, the considerations outlined in the previous section apply here as well.
Lastly, the role of humans is key. For instance, in RIAs, humans should guide AI on how to balance advantages with disadvantages or determine the value of sensitive factors, such as human life or air quality.Footnote 68
3. “Ancillary” administrative activities in law and rule-making leveraging AI
The following are among the applications of AI in law and rule-making which can be considered as ancillary (ie, supporting tools that do not impact on the legislative or regulatory functions).
a. Digitisation of daily tasks
AI is used in Brazil and Argentina to simplify the workflow of parliamentary officials (by retrieving reports from parliamentary sessions)Footnote 69 or of politicians (parliamentarians can engage in deliberations remotely through biometric authentication).Footnote 70 The Argentinian Parliament is experimenting with a predictor of the parliamentary committee competent to work on a proposal (to improve efficiency).Footnote 71
b. Speech to text and vice versa
The US Department of Labor uses AI to convert speech to text for internal meetings,Footnote 72 and a similar system is being tested by the European Commission to create minutes or perform data analysis, as well as to subtitle conferences.Footnote 73 The Estonian Parliament uses speech recognition technology to create verbatim reports of its sittings to be published.Footnote 74
c. Summary of the legislative proposals
A known experiment concerns the LLaMandement projectFootnote 75 in France, an LLM for the production of memoranda and documents required for interministerial meetings and parliamentary sessions.
d. Automatic translations
Such use has been implemented in Spain, also for language inclusivity,Footnote 76 and by EU institutions.Footnote 77
The above-mentioned examples of “ancillary administrative activities” might have a negative impact on the efficiency of institutions if the systems do not work properly, but it seems unlikely that they could directly affect any fundamental right. Given that the perspective here primarily concerns risks to the objectives of the Act, a public registry of the AI uses employed by regulators and legislators could suffice to improve citizens’ and firms’ awareness.
III. A poorly effective safeguard of fundamental rights
Having seen in the previous paragraph how AI in law and rule-making might pose risks, this paragraph is intended to show that “core” and “quasi core”Footnote 78 uses are wholly or partially overlooked by the EU AI Act. Moreover, this section highlights how the EU AI Act is actually neither easily nor swiftly amendable, and it explores what could be achieved through the self-regulation envisaged by the EU AI Act, as well as through national interventions.
Risk regulation (“the privileged methodological tool for regulating risks in Europe”Footnote 79 ) offers a guide which then requires enforcement in order to understand the actual risk and the related planning and type of intervention.Footnote 80 Initially developed in the areas of environmental and health protection, such an approach is now widely applied in European digital policy. A traditional risk-based approach means tailoring actions based on the results of a case-by-case assessment, structured according to criteria of analysis that are either static or dynamic in relevance, constantly updated, and based on scientific evidence or experience. Such an approach would allow the prioritisation of decisions according to the level of risk to the rights that need protection, thus favouring the residuality of legislative or regulatory intervention and the proportionality of rulesFootnote 81 (eg, under the GDPR it is up to the data controllers and processors to identify both risks and mitigating measures).
Differently, the EU AI Act identifies four categories of risks and consequent measures to mitigate them. The risk assessment is carried out ex ante at the legislative level: it is not based on real and practical scenarios, and the text does not provide a general methodology to assess risks.Footnote 82 Such an approach does not favour the residuality and proportionality of rules,Footnote 83 risks both under- and over-inclusion of AI applications,Footnote 84 and risks missing some existing and future uses, such as those in law and rule-making.
While the main claim of the EU AI Act is to be able to evolve in tandem with the changing risks that – due to AI – loom over fundamental rights, an ex-ante risk approach might prove unresponsive.
1. Core applications and the AI Act
Do the core uses examined fall under the EU AI Act’s regulation of high-risk applications? The applications of AI systems in law and rule-making seem to be considered by the EU AI Act as unable to harm any fundamental right (Annex III, on high-risk applications, does not include any direct reference). Among the high-risk uses, the category of “democratic processes” could be linked to law or rule-making.Footnote 85 The Act, however, restricts such a category to use by judicial authorities or alternative dispute resolution bodies,Footnote 86 or to interference in elections, political campaigns or voting behaviours.Footnote 87 Therefore, the Act does not regulate any of these sensitive applications in law and rule-making, as it does for high-risk AI systems.Footnote 88
As for LLMs, if they present systemic risk under the EU AI Act (such as ChatGPT, considered to have “high impact capabilities”Footnote 89 ), providers are asked to perform model evaluations and document adversarial testing to identify and mitigate systemic risksFootnote 90 and to ensure an adequate level of cybersecurity.Footnote 91 For those not directly using ChatGPT, the Act also includes the category of “downstream provider” as “a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.”Footnote 92 Public authorities using systems integrating GPAIs as powerful as ChatGPT would therefore be subject to the same obligations.
This would not apply to GPAIs below the computing power of the few most advanced generative AIs. Public authorities developing their own systems, without integrations from OpenAI’s API or similar, would likely not fall into this category and its obligations. It is indeed unlikely that regulators or law-makers will be able to develop, train and maintain systems on such a scale of computing power.Footnote 93
Besides, providers of “ordinary” GPAIs (thus below the threshold to be considered of systemic risk) are asked to prepare and maintain updated technical documentation of the model, regarding – for instance – the training and testing and the results of the evaluation.Footnote 94 The latter shall be made available to providers intending to integrate the GPAI into their AI systems, and publicly available as a detailed summary describing the content used for the training.Footnote 95 Any output generated by AI systems, including GPAIs, is to be marked in a machine-readable format and detectable as artificially generated or manipulated.Footnote 96
This, however, does not apply to tools with an assistive function for standard editing, or which do not alter the input data or the semantics thereof.Footnote 97 When can we say that a system is performing more than an assistive function? Is such information disclosure enough, compared to the risks of automation bias, oversimplification and distortion of content that may accompany the use of AI in drafting?
Even if we interpret the cases addressed as having to comply with the obligations for GPAIs, there is another obstacle: “it should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market”.Footnote 98 The latter is defined, in the Act, as the “supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.”Footnote 99 If law-makers and rule-makers then develop their systems autonomously, the above-described obligations might not apply, as this would not be a commercial activity. Those systems would indeed fall under the different definition of “putting into service,” meaning the “supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.”Footnote 100
Lastly, and notably, the EU AI Act does not apply to AI systems developed as open source, as happens for systems developed internally by public authorities for further re-use by other onesFootnote 101 (eg, in FranceFootnote 102 or in ItalyFootnote 103 ).Footnote 104
2. “Quasi-core” applications and the AI Act
Do AI “quasi core” applications in law and rule-making fall under the EU AI Act? Which obligations are involved?
If those applications fall under the non-high-risk category, the focus of the Act would be mostly on transparency obligations,Footnote 105 while attention to the quality of training data is widely limited,Footnote 106 as is attention to the technical documentation showing the system’s compliance with the needed requirementsFootnote 107 and to the record-keeping obligations.Footnote 108 Most importantly, what differs – compared to the obligations for the high-risk category – is the level of human oversight envisaged.Footnote 109 Moreover, only limited attention is paid to an appropriate level of cybersecurity and accuracy of the system,Footnote 110 which we consider to be pivotal for uses in Parliaments and Governments, where there is daily intervention on the rights and obligations of citizens.
The applicability of these minimal transparency rules to the “quasi core” AI uses is uncertain. It depends on the meaning of systems “interacting with individuals,” which affects the application of the transparency obligations under Art 50: this is unclear wording (part of an approach that has been defined as “transparency by design”Footnote 111 ) within the Act, which risks resulting in a high level of discretion for enforcers. An extensive interpretation of “interaction with individuals” would certainly be desirable, in order to include systems drafting laws or regulations or clustering amendments; systems used to assess the impact of new legislation on existing European and national legislation; systems used to identify burdensome, ineffective or obsolete regulations; or systems used to evaluate the achievement of goals by a regulation. Or does this provision instead refer to systems that replace one human being in communication with another (such as chatbots)?
The EU AI Act establishes a duty to disclose that “the text has been artificially generated or manipulated” for “deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest”.Footnote 112 This requirement ensures transparency in contexts that may include the publication of laws or regulations in official journals or websites, as well as the results of public consultations. Notably, the Act provides an exemption from this disclosure duty when there is human review or editorial control,Footnote 113 or when someone holds editorial responsibility. While this approach prioritises trust in human oversight, refining these provisions could further enhance the balance between transparency and accountability.
IV. The way forward
1. European codes of conduct
It is not possible to regulate AI applications in law and rule-making by leveraging the “delegated acts.”Footnote 114 The latter tool allows for modifications to Annex III, which pertains to high-risk cases, but only in terms of adding or modifying use cases within the areas already envisaged (eg, biometrics, education and vocational training, etc).Footnote 115 It is highly questionable – and part of the Regulation’s opacity due to its lack of terminological clarity – whether the use of AI to support the drafting of laws or rules, to cluster similarly worded amendments, or for consultations might be considered use cases under “democratic process”Footnote 116 (the latter seems more appropriately interpreted as a display of the collective will, such as voting and participation in decision-making). In other words, uses applied to law and rule-making are surely indispensable tools and activities to achieve “public policy objectives and ensuring democratic governance,”Footnote 117 but hardly an actual democratic process.
Therefore, extending the reading of existing areas or adding new ones can only be performed within the framework of the review process under Art 112, which requires an evaluation of the EU AI Act by the Commission and a subsequent referral to the European Parliament and the Council for a potential legislative reform process.Footnote 118
After this preliminary specification, it is worth assessing the role that guidance documents regulated by the EU AI Act can play.
As is known, the AI Act is meant to be complemented and interpreted by giving relevance to the following documentsFootnote 119 : the harmonised standards, designed to provide technical solutions to assist providers in ensuring compliance with the Regulation; the codes of practice, aimed at enabling providers to demonstrate adherence to their obligations (which the Commission may formally recognise through an implementing act); and the codes of conduct, intended to promote voluntary best practices and standards, as well as to encourage the voluntary application of obligations prescribed for high-risk AI systems to non-high-risk systems. Furthermore, the EU AI Act also provides for the development of guidelines by the Commission, which could be used to offer greater clarity and certainty, such as in the already discussed case of the “interacting with individuals” transparency requirement (Art 50).Footnote 120
The Act, perhaps aware that it does not cover all possible uses, leaves plenty of room for voluntary compliance, whereby suppliers of AI systems that are not high-risk are encouraged to create codes of conduct to ensure voluntary compliance with some or all of the mandatory requirements applicable to high-risk systems. The latter are to be adapted in light of the intended purpose of the systems and the lower risk involved, taking into account available technical solutions and industry best practices.Footnote 121 The EU AI Act itself also specifies that both providers and deployers are to be encouraged to apply additional requirements, and that voluntary codes of conduct (ie, the tool proposed to foster voluntary compliance) are to be based on clear objectives and key performance indicators to measure the achievement of those objectives – so as to be effective.Footnote 122
Codes of conduct may also be drawn up by organisations representing deployers and providers.Footnote 123 For instance, regarding law-making, such a role may be covered (to ensure harmonisation) by the Inter-Parliamentary Union, which is also working on the topic and very recently published guidelines for the use of AI in Parliaments.Footnote 124
2. National interventions
As an intervention complementing what has already been mentioned (in the absence of a – practically hardly conceivable – revision of the risk approach envisaged in the Act), Member States could impose additional safeguards on the use of AI by their public authorities,Footnote 125 although this practice would not be desirable, as it would lead to non-uniform guardrails in the European context. Such national-level initiatives could establish criteria for action in borderline situations, as well as ensure sound data governance and a relevant technical infrastructure. The latter also means planning the upgrades and investments needed to ensure up-to-date robustness.Footnote 126
Furthermore, legislators and regulators might self-regulate their use of AI – thus applying the precautionary principle. This would allow a more conscious and dynamic regulation of concrete uses and risks, free from external influence and within the guiding criteria of effective human oversight, transparency, explainability, and the use of public datasets to enhance the systems’ quality. National decision-makers have the chance to focus more on those deploying the AI application, as the ones who plan the intended end use. Such a case-by-case risk assessment would also be able to include an analysis of the organisational structures and practices in place to ensure that the goals for the AI system are met, and that suitable procedures exist to promptly identify and address problems, including supporting post-incident investigations in high-risk settings.
V. Final considerations
It can be stated that the EU AI Act provides a higher level of protection of fundamental rights and other key elements of the European order, compared to the status quo. However, some potential impacts linked to law and rule-making seem to have been forgotten, entirely or partially, such as those on the democratic and pluralistic process for enacting laws, on non-discrimination and fairness, as well as on transparency.Footnote 127 The examples provided have shown that, while European and national law and rule-making procedures provide important guardrails by involving multiple actors and technical bodies, they can hardly detect and correct unintended consequences of AI uses. This is due, in general terms, to confirmation and automation biases, which lead people to rely excessively on the outputs of AI systems without critically questioning their validity. Furthermore, the lack of transparency that characterises the use of AI in the public sector amplifies these issues, compounded by the inherent inexplicability of the functioning and results of certain AI systems, often referred to as the “black box” phenomenon. These factors together raise significant concerns regarding accountability, trust, and the legitimacy of decisions informed by AI in law and rule-making processes.
The paper claims that this is one of the outcomes of the lack of a bottom-up risk-based analysis, which leads to poor regulatory agility.Footnote 128 As highlighted, the Act is not actually flexible even in its amendment process.Footnote 129 Besides, law and rule-making are not considered among the “AI systems presenting a risk” under the EU AI Act.Footnote 130 It is still relevant to stress that, following the subdivision hypothesised in the paper, the categorisation should depend on a case-by-case risk assessment: “core” applications should foresee guarantees of human oversight (by experts with different backgrounds), transparency, explainability, and the use of public datasets for training and data analysis; for “quasi-core” uses it should be sufficient to guarantee transparency and public datasets; while “ancillary” uses should be covered by a general provision requiring each public authority to inform the public about its uses of AI.
The options we see in the short term (given that it makes little sense at this stage to propose a new risk-informed methodology, since it would not be implemented due to the time and political effort already invested in drafting the current Act) are the following:
- (i) intervention at national level, through Member States’ regulations imposing safeguards on public authorities’ use of AI, or through self-regulation. These solutions might however pose risks for a consistent and harmonised EU enforcement of AI regulation (whereas the Act seeks a maximisation of market harmonisationFootnote 131 ), also due to the AI Board’s lack of authority to review national interventions (differently from what happens under the GDPR).Footnote 132
- (ii) preferable solutions would be an intervention at European level through guidelines, where needed, for greater clarity and certainty (eg, on transparency), as well as codes of conduct in the overlooked cases; additional requirements to be considered in a code of conduct for the topic under analysis might also include an extension of the fundamental rights impact assessment, currently limited to high-risk AI systems used by bodies governed by public law.Footnote 133
Acknowledgments
The article is the result of a collaborative effort. However, Nicoletta Rangone composed paragraphs II and V, while Luca Megale contributed paragraphs I, III and IV. The authors gratefully acknowledge the valuable suggestions provided by anonymous reviewers.
Competing interests
The authors declare that there are no competing interests.