
Risks Without Rights? The EU AI Act’s Approach to AI in Law and Rule-Making

Published online by Cambridge University Press:  13 March 2025

Nicoletta Rangone
Affiliation:
Department of Law, Economics, Politics and Modern languages, LUMSA University, Rome, Italy
Luca Megale*
Affiliation:
Department of Law, Economics, Politics and Modern languages, LUMSA University, Rome, Italy
*
Corresponding author: Luca Megale; Email: [email protected]

Abstract

The EU AI Act seeks to balance the need for societal protection against the potential risks of AI systems with the goal of fostering innovation. However, the Act’s ex-ante risk-based approach might lead to regulatory obsolescence (already materialised when the 2021 proposal was overtaken by the spread of LLMs and the regulatory process had to be reopened), as well as to over- or under-inclusion of AI applications in risk categories. The paper deals with the latter outcome by exploring how AI uses in law and rule-making hide risks not covered by the EU AI Act. It then analyses how the Act lacks flexibility in amending its provisions, and the way forward. The latter is tackled not through utopian and hardly feasible proposals for a new act and a new risk-based approach, but by focusing on codes of conduct and national interventions on AI uses by public authorities.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

I. Introduction

The primary aim of Regulation (EU) 2024/1689 (EU AI Act)Footnote 1 is to ensure the protection not only of “health and safety,” but also of “fundamental rights […], including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems.”Footnote 2 Positive aspects include a first regulatory framework for the use and development of AI, prohibiting practices with a known negative impact on fundamental rights and providing greater accountability.

Others have already highlighted, however, that the European values foreseen in the Act are “likely to remain primarily aspirational.”Footnote 3 The Act indeed lacks comprehensiveness, since it risks neglecting adequate protection of central aspects intertwined with the development of AI systemsFootnote 4 (eg, the environment) and with uses (eg, those applied to law and rule-making) potentially relevant to the protection of fundamental rights and of principles at the basis of the Union values, such as democracy and the rule of law.Footnote 5

To examine the above, we begin by addressing whether and how the Regulation under-includes within its risk categories certain AI applications (ie, in law and rule-making) that might harm fundamental rightsFootnote 6 and other key elements of the European order, such as democratic decision-making and the rule of law (para II). The paper seeks to explore – taking the existing framework and the process that led to it as given – how to navigate within the provisions of the Regulation.Footnote 7

To differentiate the impacts, uses are organised into: (i) applications “core” to law or rule-making, as being directly part of the decision-making process (para II, 1.); (ii) “quasi-core” applications, which – depending on the specific uses – are borderline between being an essential part of the decision-making process or influencing the human component of the process, and being a supporting tool (para II, 2.); (iii) “ancillary administrative activities,”Footnote 8 ie, assistive tools that do not affect the actual legislative or regulatory functions, such as AI systems performing “a narrow procedural task” or “intended to improve the result of a previously completed human activity”Footnote 9 (para II, 3.).

Having identified the risks of potentially overlooking law and rule-making, we then explore how the Act neglects such functions (para III, 1 and 2). We then examine the lack of flexibility in amending the Act and the remaining ways forward, namely voluntary compliance through codes of conduct and intervention at the national level (para IV). We conclude with final considerations (para V).

II. Risks of AI applications (“core,” “quasi core” and “ancillary”) in law and rule-making

AI is increasingly playing a crucial role in law and rule-making around the world by performing time-consuming tasks, increasing access to knowledge bases, and enhancing the ability of public authorities to draft effective rules and to streamline the regulatory stock. Such applications are intended to improve the efficiency of the proceedings (eg, increasing decision-makers’ ability to assess all positions presented in consultations with high participation) and the quality of law and regulation (eg, improving wording and thus comprehensibility, or supporting the coherence of the new rule with existing regulation). The potential uses range from mere “ancillary” support to activities which are “core” or “quasi core” to law and rule-making. It is a functional distinction, used to identify what regulatory intervention could be adequate to mitigate risks to the rule of law.

Building on the above, this section intends to demonstrate how not regulating AI in law or rule-making might indeed run counter to the primary aim of the Act.

1. “Core” AI application in law and rule-making

A general prohibition on the use of AI systems within the complex landscape of producing law or regulation would not be a desirable limitation of innovation. Accordingly, law and rule-making are not among the practices prohibited by the EU AI Act.Footnote 10

Other risk categories, beyond the prohibited practices, require more interpretative effort; this is the case of the high-risk category.

Which current and potential uses of AI are “core” to law and rule-making and, while not formally classified as high-risk, might nonetheless raise concerns for principles such as democracy and the rule of law?Footnote 11

a. AI writing a law/regulation

Some AI systems are used to write a law or a regulation, or a part of it.Footnote 12 In the EU, the development of LEOS dates back to 2016; its future applicationsFootnote 13 include LLMs in legislative drafting so as to “capitalise on the vast amount of data available to lawmakers”.Footnote 14 It is not unrealistic to state that AI systems are increasingly able to “produce a rough but credible first legislative draft.”Footnote 15

Looking at LLMs, ChatGPT has been used to write an ordinance approved by the City of Porto Alegre (Brazil).Footnote 16 LLMs can help by analysing existing laws and assisting with the creation of new ones (eg, enhancing the logical organisation of proposed drafts and pinpointing inconsistencies or ambiguities), as in the case of OpenFisca.Footnote 17 The latter involves models processing comprehensive legal texts to classify topics within different laws and regulations. An even more widespread use of AI pertains to the drafting of amendments, as in the Scottish and UK Parliaments,Footnote 18 in the European Parliament through the “AT4AM” system,Footnote 19 or in Italy.Footnote 20

It seems reasonable to consider the above applications as potentially raising significant risks, since the use of AI in law and rule-making might generate erroneous or biased content, especially where inputs are incomplete or ambiguous.Footnote 21 Law and rule-making activities are particularly sensitive since interventions have to fit into complex regulatory systems, linked to social values and principles, whose complexity is hardly fully grasped without human intervention.Footnote 22 Specifically, the use of AI in writing a law or regulation (or part of it) might limit the decision-maker’s autonomy of judgment.Footnote 23 The latter becomes evident when considering the widespread practice of using AI for amendments in law-making, which can alter the original meaning of a provision and are often approved collectively (as is the case in Italy), without being subjected to a compulsory impact assessment.Footnote 24 Besides, AI might end up supporting deliberate and adverse uses (eg, the production of a flood of amendmentsFootnote 25 ). Consider also the potential risk of pre-emptive censorship or automatic rejection of amendments, or even training leading to political biases.Footnote 26

Moreover, the use of AI in legislative drafting might result in erroneous terms or poorly thought-out constructions, which can heavily influence the substance and subsequent interpretations of the content of a law. Indeed, drafting is a dynamic and forward-looking activity, beyond the simple writing of a prescription: the process of deliberation includes an exchange of perspectives and a debate over alternatives that lead to a collective decision.Footnote 27 These use cases might threaten “legality, implying a transparent, accountable, democratic and pluralistic process for enacting laws”.Footnote 28

b. AI in support of the residuality of law and regulation

AI can also process vast amounts of law and regulation so as to identify potential inconsistencies or overlaps, as experimented with by the Australian Office of Impact Analysis.Footnote 29 The use of AI to simplify existing law and regulation has been mentioned in a recommendation of the Administrative Conference of the United States.Footnote 30 The latter builds on a groundbreaking report on pilots in the US, such as the “Regulatory Clean Up Initiative”Footnote 31 to identify mistakes in rules through natural language processing, or the “RegExplorer”Footnote 32 system to identify burdensome, ineffective or obsolete regulations. Another tool, “QuantGov,”Footnote 33 is used to estimate the regulatory load by identifying content within massive quantities of rules. In the EU, Portugal implemented a system for automated monitoring and control of compliance with regulations to identify potential needs to review existing norms.Footnote 34

In Brazil, a “regulatory observatory” is used to monitor “plans, projects and ongoing processes on the regulatory agenda, organized by subject theme and status”Footnote 35 for regulatory planning. The Brazilian Ulysses AI-driven system analyses semantic similarities involving existing legislation and new proposals/amendments.Footnote 36

These uses might hide potential risks. AI accuracy in identifying semantic similarities or inconsistencies in existing legislation or regulation might be challenged by the complexity of legal language and the variability of interpretation, as well as by the complexity of the legal system itself.

Consider, for example, an algorithm designed solely to tally obligations, without simultaneously accounting for actions such as removing, restricting, or exempting certain obligations.Footnote 37
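
A minimal sketch of this pitfall (hypothetical keyword lists, not the dictionaries of any deployed tool such as QuantGov) might run as follows:

```python
import re

# Hypothetical keyword lists, for illustration only.
OBLIGATION_TERMS = r"\b(shall|must|is required to)\b"
RELIEF_TERMS = r"\b(exempt from|shall not apply|may waive)\b"

def naive_burden_score(text: str) -> int:
    # Tallies obligations only, as in the pitfall described above.
    return len(re.findall(OBLIGATION_TERMS, text, flags=re.IGNORECASE))

def adjusted_burden_score(text: str) -> int:
    # Nets out provisions that remove, restrict or exempt obligations.
    obligations = len(re.findall(OBLIGATION_TERMS, text, flags=re.IGNORECASE))
    reliefs = len(re.findall(RELIEF_TERMS, text, flags=re.IGNORECASE))
    return obligations - reliefs

provision = (
    "Operators shall register annually. Small operators are exempt from "
    "registration; the reporting duty shall not apply to non-profits."
)
print(naive_burden_score(provision))     # 2 - both occurrences of "shall"
print(adjusted_burden_score(provision))  # 0 - the two reliefs offset them
```

The naive counter reports a heavier regulatory load for a text that, read as a whole, lightens it; an AI-driven review built on such a metric would misdirect simplification efforts.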

Moreover, the underlying approach adopted by the developer or the regulator commissioning the system – whether de-regulatory or pro-regulatory – can significantly influence how the algorithm operates.Footnote 38 This poses a particular risk, as the absence of consultations to challenge or balance the outcomes of AI-driven reviews can lead to unintended consequences,Footnote 39 as well as weaken the democratic control of the rule and law-making process.Footnote 40 An error in the algorithm could indeed lead to the repeal of substantive norms beyond mere inconsistencies or overlaps. For instance, if provisions introducing affirmative actions were repealed, this could also result in a violation of the fundamental right to non-discrimination.

c. AI for consultations

AI is currently used by many decision-makers to cluster comments, to identify duplicates, or to summarise the overall comment sentiment. The EU offers an advanced example, the “Doris+” system,Footnote 41 with similar tools in the United KingdomFootnote 42 and the United States.Footnote 43 The system used by the EU also includes a component, AWS Comprehend, to detect key phrases, entities, sentiment and common topics, potentially to be improved with an LLM.Footnote 44

Such applications aim to alleviate the resource-intensive and subjective process of sorting thousands of responses, freeing decision-makers to focus on developing creative policy solutions and considering broader implications. These uses allow significant time savings,Footnote 45 and prevent legislators and regulators from being affected by information overload bias.Footnote 46 Among the risks, AI systems might be trained on biased data (such as a tendency to give more relevance to some geographic locations and related industry needs), or might lead to an inadequate clustering of comments (eg, the combination of different groups of stakeholders showing similar positions), resulting in an unfair representation or prioritisation of certain groups’ opinions over others (eg, making an opinion appear majoritarian while it is not).Footnote 47 Similarly, clustering and ranking responses by theme can result in the loss of nuanced perspectives. Complex opinions might be grouped under broad categories, which could distort or overlook important subtleties in the data. At the same time, overusing these tools could lead decision-makers to rely too heavily on AI systems and neglect the value of human judgment in interpreting and assessing contributions. Besides, the criteria and algorithms used to cluster, analyse and rank responses may lack transparency, making it difficult for stakeholders to understand how decisions on their contributions are being made. The latter could reduce trust in the process (openness does not only regard the outcomes of legislative processes but also the process that leads to them) and raise questions about accountability. From a different perspective, if stakeholders understand how these systems work, they might try to manipulate public opinion by gaming the AI analysis to their advantage. To sum up, the applications described above might have a direct impact on citizens’ and firms’ effective participation in decision-making, thus challenging democratic and pluralistic processes for enacting laws and regulations, and also leading to potential discrimination. Moreover, the lack of effective participation might undermine the value of the information provided to decision-makers and thus of the final law or regulation.Footnote 48
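
To make the clustering risk concrete, the following minimal sketch groups toy responses with a generic TF-IDF and k-means pipeline; systems such as Doris+ rely on richer models and managed services, but the failure mode is analogous:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy consultation responses, for illustration only.
responses = [
    "The reporting threshold is too low for small firms.",
    "Small businesses cannot afford the proposed reporting duties.",
    "We support the new transparency requirements.",
    "Transparency rules are welcome and should go further.",
    "The threshold should be raised to spare micro-enterprises.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Grouping by label shows the risk discussed above: once clustered, each
# group tends to be read as a single "position," and minority nuances
# inside a cluster can vanish from the summary reaching decision-makers.
for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print("  -", text)
```

Note also that the choice of the number of clusters (here, an arbitrary two) is itself a discretionary decision that shapes which positions appear majoritarian.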

What safeguards should be provided for these uses in law and rule-making?

In all the examples provided, human intervention is key. For instance, if AI identifies overlapping rules, it should remain the responsibility of humans to decide whether the previous rule should be repealed. However, it should be considered that human oversight per se does not guarantee that an AI system will not produce false outputs, owing to over-reliance on the results of the system or to the lack of concrete verification of the path that led to the result. Oversight should therefore be performed in collaboration between computer scientists and lawyers, to reduce biases and noise in judgment and to ensure that computer architectures incorporate fundamental safeguards.

Another guardrail is transparency. For instance, public authorities should disclose the (replicable) AI systems used to assess comments in consultations.

Moreover, uses of AI in the public sector regarding “core” applications should be based on public datasets, both for the training and for the “dynamic” collection of information. AI should work with data that is legitimately formed and collected, primarily from other public institutions and platforms. Already existing public databases can be used, as well as portals and online resources providing access to current legislation. To this first level of sources, a second one could be added (scientific papers and similar), with the necessary precautions.Footnote 49 The availability and, specifically, the quality of data is indeed key: otherwise, inaccurate analyses are produced, which in turn could lead to inadequate rules. Data quality must also be read in the sense of “contextualisation”: training data must be collected bearing in mind the use that the system will make of it and the possible interpretations it will give with respect to the regulatory context.

Public authorities employing AI for the “core” uses examined should give preference to the open-source approach, which enables the reuse of solutions or co-creation among public authorities.Footnote 50 If this is not the case, the choice should be justified.

Lastly, AI-generated outputs should explicitly mention the sources of the information and data referenced, as well as the intended use (transparency), and an adequate level of cybersecurity should be ensured.Footnote 51

2. “Quasi core” AI applications in law and rule-making

The so-called “quasi core” applications can be considered as uses that are borderline between being part of the decision-making process and being a mere support to the human component of the process. In order to assess whether they can raise risks to fundamental rights, it is worth mentioning some examples.

a. AI to cluster similarly worded amendments

For instance, ItalyFootnote 52 and BrazilFootnote 53 are using AI to cluster similarly worded amendments, also by looking at semantics. While this seems a merely organisational and assistive activity, an undue use of AI might lead to proposals being given excessive weight or overlooked. This would be the case for amendments classified as similar in wording while having a different impact on the text and outcome, or vice versa. AI would end up replacing the human decision-maker and presenting the same risks highlighted in the previous section (generating biased content that does not fit into regulatory systems, whose complexity is hardly fully grasped without human interventionFootnote 54 ).
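
The wording-versus-impact pitfall can be illustrated with nothing more than surface similarity from the Python standard library; semantic systems such as Ulysses are more sophisticated, but face an analogous risk wherever textual closeness is used as a proxy for legal effect:

```python
from difflib import SequenceMatcher

# Two hypothetical amendments: near-identical wording, opposite effect.
a = "In Article 3, the authority shall publish the register annually."
b = "In Article 3, the authority shall not publish the register annually."

ratio = SequenceMatcher(None, a, b).ratio()
print(f"surface similarity: {ratio:.2f}")
# ~0.97: a purely textual clusterer would file these amendments together,
# although one mandates publication and the other forbids it.
```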

b. AI for regulatory impact assessment (RIA)

AI offers important development prospects for data collection and analysis,Footnote 55 to be used in RIAs. It helps increase access to knowledge that would otherwise be unattainable, such as data mining to extract informationFootnote 56 and identify patterns. The Italian national statistical institute developed a platform (IstatData)Footnote 57 enabling natural language-based searches on the datasets contained in its archives.Footnote 58 Clustering algorithms might be used to identify common and emerging patterns in the documents. AI might also try to predict potential impacts by analysing causal effects.Footnote 59 To the best of our knowledge, these uses are currently of limited application. Among the few examples, the German Federal Statistical Office is experimenting with machine learning to speed up the identification of the parts of draft regulatory texts that affect compliance costs, as well as of the source of those costs (eg, by understanding who the affected recipients are and at what cadence).Footnote 60 The experiment aims to combine two sources of data to lead the system to identify words related to changes in compliance costs. In Portugal,Footnote 61 deep learning was used to “identify information obligations within legal texts,”Footnote 62 estimate their cost through the “standard cost model”Footnote 63 and recognise patterns linked to administrative burdens in order to train a system.Footnote 64 Lastly, the EU is experimenting with the use of AI to “carry out impact assessments of major legal proposals” and “assess the impact of new legislation on existing European and national legislation.”Footnote 65
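
For context, the standard cost model mentioned above reduces, in its commonly cited form (the authoritative formulation is the Better Regulation Toolbox cited in footnote 63), to a multiplication whose inputs are exactly what the Portuguese and German experiments try to extract automatically:

```latex
% Standard cost model, commonly cited form:
\[
\text{Administrative cost} \;=\; \sum_{i}
  \underbrace{(\text{tariff}_i \times \text{time}_i)}_{\text{price } P_i}
  \times
  \underbrace{(\text{entities}_i \times \text{frequency}_i)}_{\text{quantity } Q_i}
\]
```

where each i is an information obligation found in the text, the tariff is the hourly cost of those who must comply, and the frequency is the number of times per year the obligation recurs. What AI contributes is the automated detection of the obligations and of their recipients; the cost parameters remain estimates.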

The above applications raise some concerns. RIA is intended to provide final decision-makers with evidence in order to adopt informed laws and regulation; as such, it can widely influence the final decision while not compelling it. In order to appraise what role AI can play and what role remains for humans, it is important to underline that RIA is not a mere technical analysis, and many discretionary decisions are up to technicians during the assessment (eg, it is up to them to balance advantages with disadvantages, or to value sensitive factors such as human life). AI can perform well in executing limited tasks, such as collecting and analysing public datasets and relevant literature, defining the regulatory framework within which a regulatory option is situated, and quantifying the burdens introduced by different options. If confined to such tasks, there are no inherent risks, except for one: AI is not forward-looking. If it relies primarily on historical data, it risks underestimating new entrants and emerging risks,Footnote 66 as well as supporting the flawed assumption that human behaviour is always consistent.Footnote 67

The “quasi-core” uses (RIA and the clustering of similar amendments) might potentially challenge the democratic and pluralistic process for enacting laws and regulation, as well as the quality of final rules.

Some safeguards already examined for “core” uses – eg, reliance on public datasets and transparency as previously described – might be sufficient. It should also be noted that consultations are an essential phase of the RIA process; therefore, the considerations outlined in the previous section apply here as well.

Lastly, the role of humans is key. For instance, in RIAs, humans shall guide AI on how to balance advantages with disadvantages or determine the value of sensitive factors, such as human life or air quality.Footnote 68

3. “Ancillary” administrative activities in law and rule-making leveraging on AI

The following are among the applications of AI in law and rule-making which can be considered ancillary (ie, supporting tools that do not impact the legislative or regulatory functions).

a. Digitisation of daily tasks

AI is used in Brazil and Argentina to simplify the workflow of parliamentary officials (by retrieving reports from parliamentary sessions)Footnote 69 or of politicians (parliamentarians can engage in deliberations remotely through biometric authentication).Footnote 70 The Argentinian Parliament is experimenting with a predictor of the parliamentary committee competent to work on a proposal (to improve efficiency).Footnote 71

b. Speech to text and vice versa

The US Department of Labor uses AI to convert speech to text for internal meetings,Footnote 72 and a similar system is being tested by the European Commission to create minutes and data analyses, as well as to subtitle conferences.Footnote 73 The Estonian Parliament uses speech recognition technology to create verbatim reports of its sittings for publication.Footnote 74

c. Summary of the legislative proposals

A known experiment concerns the LLaMandement projectFootnote 75 in France, an LLM for the production of memoranda and documents required for interministerial meetings and parliamentary sessions.

d. Automatic translations

Such use has been implemented in Spain, also for language inclusivity,Footnote 76 and by EU institutions.Footnote 77

The above-mentioned examples of “ancillary administrative activities” might have a negative impact on the efficiency of the institutions if the systems do not work properly, but it seems unlikely that they could directly affect any fundamental right. Given that the perspective here primarily concerns risks relevant to the scope of the Act, a public registry of the AI uses employed by regulators and legislators could suffice to improve citizens’ and firms’ awareness.

III. A poorly effective safeguard of fundamental rights

Having seen in the previous section how AI in law and rule-making might pose risks, this section is intended to show that “core” and “quasi core”Footnote 78 uses are wholly or partially overlooked by the EU AI Act. Moreover, this section highlights how the EU AI Act is actually neither easily nor swiftly amendable, and it explores what could be achieved through the self-regulation envisaged by the EU AI Act, as well as through national interventions.

Risk regulation (“the privileged methodological tool for regulating risks in Europe”Footnote 79 ) offers a guide which then requires enforcement for understanding the actual risk and the related planning and type of intervention.Footnote 80 Initially developed in the environmental and health protection areas, such an approach is now widely applied in European digital policy. A traditional risk-based approach means tailoring actions based on the results of a case-by-case assessment, structured according to criteria of analysis that are either static or dynamic in relevance, constantly updated, and based on scientific evidence or experience. Such an approach would allow the prioritisation of decisions following the level of risk to the rights that need protection, thus favouring the residuality of legislative or regulatory intervention and the proportionality of rulesFootnote 81 (eg, under the GDPR it is up to the data controller and processor to identify both risks and mitigating measures).

By contrast, the EU AI Act identifies four categories of risks and the consequent measures to mitigate them. The risk assessment is carried out ex-ante at the legislative level: it is not based on real and practical scenarios, and the text does not provide a general methodology to assess risks.Footnote 82 Such an approach does not favour the residuality and the proportionality of rules,Footnote 83 risks both under- and over-inclusion of AI applications,Footnote 84 and risks missing some existing and future uses, such as those in law and rule-making.

While the main claim of the EU AI Act is to be able to evolve in tandem with the changing risks that – due to AI – loom over fundamental rights, an ex-ante risk approach might prove unresponsive.

1. Core applications and the AI Act

Do the core uses examined fall under the regulation of high-risk applications under the EU AI Act? The applications of AI systems in law and rule-making seem to be considered by the EU AI Act as unable to harm any fundamental right (Annex III, on high-risk applications, does not include any direct reference). Among the high-risk uses, the category of “democratic processes” could be linked to law or rule-making.Footnote 85 The Act, however, restricts this category to uses by judicial authorities or alternative dispute resolution bodies,Footnote 86 or to interference in elections, political campaigns or voting behaviour.Footnote 87 Therefore, the Act does not regulate any of these sensitive applications in law and rule-making as it does high-risk AI systems.Footnote 88

As for LLMs, if they pose systemic risk under the EU AI Act (such as ChatGPT, considered to have “high impact capabilities”Footnote 89 ), providers are required to perform model evaluations and document adversarial testing to identify and mitigate systemic risksFootnote 90 and to ensure an adequate level of cybersecurity.Footnote 91 Even where ChatGPT is not used directly, the Act also includes the category of “downstream provider” as “a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.”Footnote 92 Public authorities using systems integrating GPAIs as powerful as ChatGPT would therefore be subject to the same obligations.

This would not apply to GPAIs below the computing power of the few most advanced generative AIs. Public authorities developing their own systems, without integrations from OpenAI’s API or similar, would likely not fall into this category and its obligations. It is indeed unlikely that regulators or law-makers will be able to develop, train and maintain systems on such a scale of computing power.Footnote 93

Besides, providers of “ordinary” GPAIs (thus below the threshold to be considered of systemic risk) are required to prepare and maintain updated technical documentation of the model, regarding – for instance – the training and testing and the results of the evaluation.Footnote 94 The latter shall be made available to providers intending to integrate the GPAI in their AI, and made publicly available as a detailed summary describing the content used for the training.Footnote 95 Any output generated by AI systems, including GPAIs, is to be marked in a machine-readable format and detectable as artificially generated or manipulated.Footnote 96
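
The Act prescribes machine-readable marking but no particular technique. Purely as a hypothetical illustration (production systems rely on watermarking or provenance standards rather than a wrapper of this kind), the duty could be met by attaching structured metadata to every generated output:

```python
import json
from datetime import datetime, timezone

def mark_output(text: str, model_id: str) -> str:
    """Wrap generated text in machine-readable provenance metadata.

    Deliberately minimal and hypothetical: the EU AI Act requires marking
    in a machine-readable format but does not mandate this (or any) schema.
    """
    return json.dumps({
        "content": text,
        "provenance": {
            "artificially_generated": True,
            "generator": model_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }, ensure_ascii=False)

print(mark_output("Draft recital text...", "hypothetical-gpai-model"))
```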

The marking obligation, however, does not apply to tools performing an assistive function for standard editing or that do not alter the input data or the semantics thereof.Footnote 97 When can we say that a system is performing more than an assistive function? And is such disclosure enough compared to the risks of automation bias, oversimplification and distortion of content that may accompany the use of AI in drafting?

Even if we interpret the cases at hand as subject to the obligations for GPAIs, there is another obstacle: “it should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market”.Footnote 98 The latter is defined, in the Act, as the “supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.”Footnote 99 If law-makers and rule-makers develop their systems autonomously, the above-described obligations might therefore not apply, as this is not a commercial activity. Such systems would instead fall under the different definition of “putting into service,” meaning the “supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.”Footnote 100

Lastly, very fittingly, the EU AI Act does not apply to AI systems developed as open source, as happens for systems developed internally by public authorities for further re-use by other authoritiesFootnote 101 (eg, in FranceFootnote 102 or in ItalyFootnote 103 ).Footnote 104

2. “Quasi-core” applications and the AI Act

Do AI “quasi core” applications in law and rule-making fall under the EU AI Act? Which obligations are involved?

If those applications fall under the non-high-risk category, the focus of the Act would be mostly on transparency obligations,Footnote 105 while attention to training data quality is widely limited,Footnote 106 as is attention to the technical documentation showing the system’s compliance with the needed requirementsFootnote 107 and to the record-keeping obligations.Footnote 108 Most importantly, what differs – compared to the obligations for the high-risk category – is the level of human oversight envisaged.Footnote 109 Moreover, only limited attention is envisaged for an appropriate level of cybersecurity and accuracy of the system,Footnote 110 which we consider pivotal for uses in Parliaments and Governments, where there is daily intervention on the rights and obligations of citizens.

The applicability of these minimal rules of transparency to the “quasi core” AI uses is uncertain. It depends on the meaning of systems “interacting with individuals,” which affects the application of the transparency obligations under Art 50: it is unclear wording (part of an approach that has been defined “transparency by design”Footnote 111 ) within the Act that risks resulting in a high level of discretion for the enforcers. An extensive interpretation of “interaction with individuals” would certainly be desirable, in order to include a system drafting laws or regulations or clustering amendments; systems used to assess the impact of new legislation on existing European and national legislation; to identify burdensome, ineffective or obsolete regulations; or to evaluate the achievement of goals by a regulation. Or does this provision instead refer to systems that replace one human being in communication with another (such as chatbots)?

The EU AI Act establishes a duty to disclose that “the text has been artificially generated or manipulated” for “deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest”.Footnote 112 This requirement ensures transparency in contexts that may include the publication of laws or regulations in official journals or websites, as well as the results of public consultations. Notably, the Act provides an exemption from this disclosure duty when there is human review or editorial control,Footnote 113 or when someone holds editorial responsibility. While this approach prioritises trust in human oversight, refining these provisions could further enhance the balance between transparency and accountability.

IV. The way forward

1. European codes of conduct

It is not possible to regulate AI applications in law and rule-making by leveraging “delegated acts.”Footnote 114 The latter tool allows for modifications to Annex III, which pertains to high-risk cases, but only in terms of adding or modifying use cases within the areas already envisaged (eg, biometrics, education and vocational training, etc).Footnote 115 It is highly questionable – and part of the Regulation’s opacity due to its lack of terminological clarity – whether the use of AI to support the drafting of laws or rules, to cluster similarly worded amendments, or for consultations might be considered use cases under “democratic process”Footnote 116 (the latter seems more appropriately interpreted as a display of the collective will, such as voting and participation in decision-making). In other words, uses applied to law and rule-making are surely indispensable tools and activities to achieve “public policy objectives and ensuring democratic governance,”Footnote 117 but hardly an actual democratic process.

Therefore, extending the reading of existing areas or adding new ones can only be performed within the framework of the review process under Art 112, which requires an evaluation of the EU AI Act by the Commission and a subsequent referral to the European Parliament and the Council for a potential legislative reform process.Footnote 118

After this preliminary specification, it is worth assessing the role that guidance documents regulated by the EU AI Act can play.

As is known, the AI Act is meant to be completed and interpreted by giving relevance to the following documentsFootnote 119 : the harmonised standards, designed to provide technical solutions to assist providers in ensuring compliance with the regulation; the code of practice, aimed at enabling providers to demonstrate adherence to their obligations (which the Commission may formally recognise through an implementing act); and the codes of conduct, intended to promote voluntary best practices and standards, as well as to encourage the voluntary application of obligations prescribed for high-risk AI systems to non-high-risk systems. Furthermore, the EU AI Act also provides for the development of guidelines by the Commission, which could be used to offer greater clarity and certainty, such as in the already discussed case of the “interacting with individuals” transparency requirement (Art 50).Footnote 120

The Act, perhaps aware that it does not cover all possible uses, leaves plenty of room for voluntary compliance: suppliers of AI systems that are not high-risk are encouraged to create codes of conduct to ensure voluntary compliance with some or all of the mandatory requirements applicable to high-risk systems, adapted in light of the intended purpose of the systems and the lower risk involved, and taking into account available technical solutions and industry best practices.Footnote 121 The EU AI Act also specifies that both providers and deployers are to be encouraged to apply additional requirements, and that voluntary codes of conduct (ie, the tool proposed to foster voluntary compliance) are to be based on clear objectives and key performance indicators to measure the achievement of those objectives – so as to be effective.Footnote 122

Codes of conduct may also be drawn up by organisations representing deployers and providers.Footnote 123 For instance, regarding law-making, such a role may be covered (to ensure harmonisation) by the Inter-Parliamentary Union, which is also working on the topic and very recently published guidelines for the use of AI in Parliaments.Footnote 124

2. National interventions

As an intervention complementing what has already been mentioned (in the absence of a – practically hardly conceivable – revision of the risk approach envisaged in the Act), Member States could impose additional safeguards on the use of AI by their public authorities,Footnote 125 although this practice would not be desirable as it would lead to non-uniform guardrails in the European context. Such national-level initiatives would provide enabling criteria for action in borderline situations, as well as ensure sound data governance and a relevant technical infrastructure. The latter also means planning the needed upgrades and investments to ensure up-to-date robustness.Footnote 126

Furthermore, legislators and regulators might self-regulate their use of AI – thus applying the precautionary principle. This would allow a more conscious and dynamic regulation of concrete uses and risks – outside external influence, and within the guiding criteria of effective human oversight, transparency, explainability, and the use of public datasets to enhance systems’ quality. National decision-makers have the chance to focus more on those deploying the AI application, as the ones who plan the intended end use. Such a case-by-case risk assessment would also be able to include an analysis of the organisational structures and practices in place to ensure that the goals for the AI system are met, and that suitable procedures exist to promptly identify and address problems, including supporting post-incident investigations in high-risk settings.

V. Final considerations

It can be stated that the EU AI Act provides a higher level of protection of fundamental rights and other key elements of the European order if compared to the status quo. However, some potential impacts linked to law and rule-making seem to have been forgotten, entirely or partially, such as those on the democratic and pluralistic process for enacting laws; on non-discrimination and fairness; and on transparency.Footnote 127 The examples provided have shown that, while European and national law and rule-making procedures provide important guardrails by involving multiple actors and technical bodies, they can hardly detect and correct unintended consequences of AI uses. This is due, in general terms, to confirmation and automation bias, which lead people to rely overly on the outputs of AI systems without critically questioning their validity. Furthermore, the lack of transparency that characterises the use of AI in the public sector amplifies these issues. The latter is compounded by the inherent inexplicability of the functioning and results of certain AI systems, often referred to as the “black box” phenomenon. These factors together raise significant concerns regarding accountability, trust, and the legitimacy of decisions informed by AI in law and rule-making processes.

The paper claims that this is one of the outcomes of the lack of a bottom-up risk-based analysis, which leads to poor regulatory agility.Footnote 128 As highlighted, the Act is not actually flexible even in its amendment process.Footnote 129 Besides, law and rule-making are not considered within the “AI systems presenting a risk” according to the EU AI Act.Footnote 130 It is still relevant to stress that, following the subdivision hypothesised in the paper, categorisation should depend on a case-by-case risk assessment: “core” applications should foresee guarantees of human oversight (by experts with different backgrounds), transparency, explainability, and the use of public datasets for training and data analysis; for “quasi-core” uses it should be sufficient to guarantee transparency and public datasets; while “ancillary” uses should be covered by a general provision informing the public about the uses of AI made by each public authority.

The options we see in the short-term (given that it makes little sense at this stage to propose a new risk-informed methodology, since it would not be implemented due to the time and political effort already invested in drafting the current Act) are the following:

  (i) intervention at the national level, through Member States’ regulations on safeguards for public authorities’ uses, or through self-regulation. These solutions might, however, pose risks to a consistent and harmonised EU enforcement of AI regulation (whereas the Act looks for a maximisation of market harmonisationFootnote 131 ), due also to the lack of authority of the AI Board to review national interventions (differently from what happens under the GDPR).Footnote 132

  (ii) preferable solutions would be an intervention at the European level through guidelines, where needed for greater clarity and certainty (eg, on transparency), as well as codes of conduct in the overlooked cases; additional requirements to be considered in a code of conduct for the topic under analysis might also include an extension of the fundamental rights impact assessment, now limited to high-risk AI systems used by bodies governed by public law.Footnote 133

Acknowledgments

The article is the result of a collaborative effort. However, Nicoletta Rangone composed paragraphs II and V; while Luca Megale contributed paragraphs I, III and IV. The authors gratefully acknowledge the valuable suggestions provided by anonymous reviewers.

Competing interests

The authors declare no competing interests.

Footnotes

Nicoletta Rangone is professor of Administrative Law at LUMSA University of Rome, Jean Monnet professor of EU approach to Better Regulation and Member of the Italian Regulatory Impact Assessment Nucleus.

Luca Megale is research fellow in Administrative Law at LUMSA University and adjunct professor of European Public Law and Data Regulation at the Marche Polytechnic University.

References

1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending […] [2024] OJ L 2024/1689.

2 Recital (1). See also Art 1 (1).

3 NA Smuha and K Yeung, “The European Union’s AI Act: Beyond Motherhood and Apple Pie?” in NA Smuha (ed), The Cambridge Handbook on the Law, Ethics and Policy of Artificial Intelligence (Cambridge, Cambridge University Press 2025) 33.

4 See P Hacker, “Sustainable AI Regulation” (2024) 61(2) Common Market Law Review 346–7.

5 “Charter of fundamental rights of the European Union.” 2000/C 364/01, preamble.

6 Among the reasons against this approach, it has been underlined that the EU has no general competence to harmonise national legislation on human rights and thus the regulation is based on Art 114 TFEU (“this regulation ensures the free movement, cross-borders of AI based good and services,” recital 1) (M Ebers, “Truly Risk-Based Regulation of Artificial Intelligence How to Implement the EU’s AI Act” (2024) First view European Journal of Risk Regulation 7).

7 It is relevant, for the sake of completeness, to acknowledge that scholars have identified, as a fundamental issue, the fact that the Regulation is shaped by constitutional constraints, stemming from the EU’s lack of general competence in the protection of human rights. It has been highlighted indeed how the risk-based approach adopted in the Act is “difficult to reconcile with the safeguarding of fundamental rights” (supra note 6). The latter is undoubtedly a crucial topic that further illustrates the challenges the EU has faced in drafting the Regulation. However, for a more in-depth analysis – which warrants focused analysis – we refer to who focused its work on such specific topic (ibid.).

8 The wording is from recital (61) of the EU AI Act.

9 These examples are inspired by Art 6 (3) of the EU AI Act according to which (when these and other conditions are fulfilled) AI systems listed in Annex II shall not be considered at high risk.

10 Art 5 (1. (a)).

11 Recital 1, EU AI Act.

12 See A Drahmann and A Meuwese, “AI and Lawmaking: An Overview” in B Custers and E Fosh-Villaronga (eds), Law and Artificial Intelligence. Regulating AI and Applying AI in Legal Practice (The Hague, T.M.C. Asser Press 2022) 435 and 437.

13 (i) detecting patterns leading to better regulation and clear legislation; (ii) examining the transposition of EU Directive in domestic legislation to identify divergences; (iii) detecting derogations and connecting them with initial obligations; (iv) supporting the assessment of acts’ digital readiness (https://joinup.ec.europa.eu/collection/justice-law-and-security/solution/leos-open-source-software-editing-legislation/document/drafting-legislation-era-ai-and-digitisation).

14 European Commission, “AI-based solutions for legislative drafting in the EU” (2024) 30.

15 D Lovric, “The Future of Legislative Drafting: A Strategic Approach” (Paper for Canadian Institute for the Administration of Justice-CIAJ Legislative Drafting Conference, Ottawa, 8–9 September 2022) 6.

The news was first shared by Councilman Ramiro Rosàrio on X.com and then covered in depth by The Washington Post (https://www.washingtonpost.com/nation/2023/12/04/ai-written-law-porto-alegre-brazil/).

17 See G Hill, M Waddington and L Qiu, “From Pen to Algorithm: Optimizing Legislation for the Future with Artificial Intelligence” (2024) 3 AI & Society 1–12.

18 M Lync, “Lawmaker – The New Legislative Drafting Service of the UK and Scotland” (2022) 2 The Loophole 24.

19 Elena Griglio and Carlo Marchetti, “La ‘specialità’ delle sfide tecnologiche applicate al drafting parlamentare: dal quadro comparato all’esperienza del Senato italiano” (2022) 3 Osservatorio delle fonti 371. From the EU website: <https://joinup.ec.europa.eu/collection/justice-law-and-security/solution/at4am-all>.

20 “This system allows the user to directly edit the text of the provision and obtain the corresponding amendment proposal structured in the form of an amendment, according to the rules for drafting of legislative texts” (L Tafani, “A Legislative Drafter’s Perspective” (ChatGPT series, April 13, 2023) <https://betteregulation.lumsa.it/chatgpt-essay-series-legislative-drafters-perspective>.

21 European Commission, “AI-Based Solutions for Legislative Drafting in the EU,” 28.

22 H Xanthaki, “Legislative Drafting: A New Sub-Discipline of Law is Born” (2013) 1(1) IALS Student Law Review 57.

23 “The essence of what it means to be a Parliament – a place for human deliberation, debate and consensus building – must be preserved, even as we explore how AI can enhance these processes” (L Kimaid, “Introduction – The Interface of Tradition and AI in Parliaments” in Bussola.tech (eds), AI in Legislative Services: Principles for Effective Implementation (2024) 9). See also PF Bresciani and M Palmirani, “Constitutional Opportunities and Risks of AI in the Law-Making Process” (2024) 2 Federalismi.it 12.

24 The European Parliament and the Council carry out impact assessments in relation to their “substantial” amendments to the Commission’s proposal only if they consider this to be “appropriate and necessary” for the legislative process (para 15, Interinstitutional Agreement between the European Parliament, the Council of the European Union and the European Commission on Better Law-Making 2016).

25 A Cardone, “Algoritmi e ICT nel procedimento legislativo: quale sorte per la democrazia rappresentativa?” (2022) 2 Osservatorio sulle Fonti 376.

26 YM Citino, “Leveraging Automated Technologies for Law-Making in Italy: Generative AI and Constitutional Challenges” (2024) XX Parliamentary Affairs 16.

27 L Kimaid, “Introduction – The Interface of Tradition and AI in Parliaments” (2024) cit., 11.

28 Communication from the European Commission, “Further Strengthening the Rule of Law in the Union: State of Play and Possible Next Steps” COM(2019) 163, 1.

29 WTO, Committee on Technical Barriers to Trade, Thematic Session on the Use of Digital Technologies and Tools in Good Regulatory Practices, G/TBT/GEN/367 (20 December 2023) 1.

30 Admin Conf of the US, “Recommendation 2023-3, Using Algorithmic Tools in Retrospective Review of Agency Rules” 88 Fed Reg 42, 681 (3 July 2023).

31 Ivi 9 ff.

32 Ibid.

33 Ivi 13 ff.

34 R Saraiva, “Rules and Nudging as Code: Is This the Future for Legal Drafting Activities?” in K Mathis and A Tor (eds), Law and Economics of the Digital Transformation. ILEC 2023. Economic Analysis of Law in European Legal Scholarship (Cham, Springer 2023) 307–85.

35 Ivi, 2.

It also shows how those amendments would change the norms. D Pressato et al, “Natural Language Processing Application in Legislative Activity: A Case Study of Similar Amendments in the Brazilian Senate” (16th International Conference on Computational Processing of Portuguese, PROPOR 2024). A system used in the U.S. Congress allows users to verify how legislative language changes throughout the amending process, and includes the impact, comparison and cross-reference with existing laws (Quarterly report (H-154 The Capitol) of the Acting Clerk of the House, KF McCumber (April 15, 2024)).

37 C Coglianese, G Scheffler and D Walters, “Unrules” (2021) 73 Stanford Law Review 885–967, at 921–2.

38 CM Sharkey, “AI for Retrospective Review” (2021) 8 Belmont Law Review, 404 and 378.

39 N Rangone, “Artificial Intelligence Challenging Core State Functions. A Focus on Law-making and Rule-making” (2023) 8 Revista de Derecho Público: Teoría y Método 117.

40 PF Bresciani and M Palmirani, “Constitutional Opportunities and Risks of AI in the Law-Making Process” (2024) cit. 16.

42 The UK Policy Lab is experimenting a tool to identify key issues from consultations, quickly and impartially, by clustering similar responses. A complementary tool evaluates the emotional tone of responses helping us to understand public reactions, refining policy delivery, and developing communication strategies (S Bennett and N Cutler, “Lab Long Read: Policy Consultations – Part 2: A Role for Data Science?” openpolicy.blog.gov.uk, 28 October 2019). Another model (i.AI) uses LLM to label and summarise each common recurring theme, previously extracted through natural language processing (<https://ai.gov.uk/projects/consultations/>).

The USDA and the CDO Council collaborated to develop a tool allowing rulemaking personnel to focus on the most pertinent comments and offering unified responses to clusters of similar comments (Federal CDO Council, “Implementing Federal-Wide Comment Analysis Tools: Final Recommendations” (June 2021) <https://resources.data.gov/assets/documents/CDOC_Recommendations_Report_Comment_Analysis_FINAL.pdf> (last accessed 7 August 2024)).

45 MA Livermore et al, “Computationally Assisted Regulatory Participation” (2018) 93(3) Notre Dame Law Review 977.

46 N Rangone, “Improving Consultation to Ensure the European Union’s Democratic Legitimacy: From Traditional Procedural Requirements to Behavioural Insights” (2024) 28 (4–6) European Law Journal 154–71.

47 F Di Porto et al, “Mining EU Consultations through Artificial Intelligence” (2024) First view Artificial Intelligence and Law.

48 Supra, note 39 111.

49 D De Lungo, “Le prospettive dell’AI generative nell’esercizio delle funzioni parlamentari di controllo e indirizzo” (2024) 23 Federalismi.it 79.

50 Supra, note 21 34.

51 Art 55 (d).

52 Supra, note 26.

53 As part of the already mentioned Ulysses suite. See the description of “Ulysses 4” given by the “Bussola Tech” blog: <https://library.bussola-tech.co/p/ulysses-chamber-deputies-brazil>.

54 Supra, note 22 57.

55 With regard to the European Commission, “the Board acknowledges that the assessment of impacts can be constrained by limited data availability and raise analytical challenges” (Regulatory Scrutiny Board, “Annual Report 2022” (2022) 17). “In several cases services preparing an impact assessment did not pay sufficient attention to the adequate reporting or timely development of an adequate data collection approach as recommended in the better regulation guidelines and toolbox” (Regulatory Scrutiny Board, “Annual Report 2022” (2022) 20).

56 The European Commission mentions the use of “AI to search for and make available scientific evidence for EU policy making” (European Commission, “Communication to the Commission, Artificial Intelligence in the European Commission (AI@EC)” (2024) 9).

58 ISTAT, “Relazione al Parlamento sulle attività dell’Istat e degli uffici del sistema statistico nazionale e stato di attuazione del programma statistico nazionale (Art 24, D. Lgs. n. 322 del 1989) – anno 2022” (2023) 69 ff.

59 “Agent-based models […] could be used to proactively determine how social systems may respond to future contingencies, identify future issues, and evaluate possible interventions, such as the enactment of new regulations” (Giovanni Sartor, “The Way Forward for Better Regulation in the EU – Better Focus, Synergies, Data and Technology” (2022) PE 736.129 In Depth Analysis EU Parliament Policy Department for Citizens’ Rights and Constitutional Affairs 20).

60 See the presentation given by the Statistisches Bundesamt: S Walprecht and C Lewerenz, “Facilitating Regulatory Impact Assessments: The Benefits of Machine Learning in Legislation” (04 April 2024).

62 Ibid.

63 On the formula used and the methodology, see European Commission, “Better Regulation Toolbox” (2023) Tool #28, 522 ff.

64 The project is funded by the EU: <https://reform-support.ec.europa.eu/our-projects/country-factsheets/portugal_en> and <https://www.planapp.gov.pt/project/artificial-intelligence-for-better-regulation/>). The latter has been proven successful in identifying parts of text including administrative burdens through natural language processing. Further developments include using AI to analyse the transposition of EU legislation and identifying instances of “gold plating.”

65 European Commission, “Communication to the Commission, Artificial Intelligence in the European Commission (AI@EC),” cit. 9.

66 R Baldwin and J Black, “Really Responsive Risk-Based Regulation” (2010) 32(2) Law & Policy 181.

67 M Hildebrandt, “Code-Driven Law: Freezing the Future and Scaling the Past” in S Deakin and C Markou (eds), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Hart 2020) 73 ff.

68 On the need to devise the complimentary and supportive role already at the level of algorithm setting, in addition to an ex post human control, see supra, note 39 119.

See the already mentioned interview with the Director of Innovation, Planning and New Technologies at the Argentinian Chamber of Deputies: <https://www.ipu.org/innovation-tracker/story/argentina-first-steps-towards-ai-driven-chamber-deputies>.

72 See the U.S. Department of Labor Artificial Intelligence Use Case Inventory: <https://www.dol.gov/agencies/oasam/centers-offices/ocio/ai-inventory>.

73 European Commission, “Communication to the Commission, Artificial Intelligence in the European Commission” cit. 8.

75 J Gesnouin et al, “LLaMandement: Large Language Models for Summarization of Legislative Proposals” (2024) arXiv:2401.16182v1 [cs.CL].

76 See the analysis provided by the Bùssola Tech blog: <https://library.bussola-tech.co/p/transforming-the-past-and-shaping>. Such tool becomes crucial since Spanish Parliament approved the use of all official languages (Catalan, Basque and Galician).

77 “The European Commission implemented a range of AI-based multilingual services developed by DGT in cooperation with the Directorate-General for Communications Networks, Content and Technology (DG CNECT) as part first of the Connecting Europe Facility programme, and now of its successor, the “Digital Europe Programme” (<https://interoperable-europe.ec.europa.eu/collection/public-sector-tech-watch/multilingual-servicesec-ai-support-european-commissions-multilingual-services>).

78 The analysis addresses these two categories, since the “ancillary” seems unlikely to impact on any right, as underlined in the previous paragraph.

79 A Alemanno, “Regulating the European Risk Society” in A Alemanno et al (eds), Better Business Regulation in a Risk Society (New York, Springer 2013) 41.

80 As in the food safety one, see Regulation (EU) 2017/625 […], L 95/1.

81 R Baldwin and J Black, “When Risk-Based Regulation Aims Low: Approaches and Challenges” (2012) 6(1) Regulation and Governance 2.

82 C Novelli, “L’Artificial Intelligence Act Europeo: alcune questioni di implementazione” (2024) 2 Federalismi.it 2. Interestingly, the DSA mixed the top-down, which characterises the AI Act, and the bottom-up approach exemplified by the GDPR (G De Gregorio and P Dunn, “The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age” (2022) 59 (2) Common Market Law Review 473–500).

83 The approach used in the Act has been defined as “mainly of risk mitigation rather than risk assessment” (Supra, note 6 10), which leads to not focusing on AI systems based on their functionalities (T Schrepel, “Decoding the AI Act: A Critical Guide for Competition Experts” (ALTI Working Paper, Amsterdam Law & Technology Institute – Working Paper 3 – 2023, October 2023) 11). Others identified a lack a methodology for the assessment of risk categories in concrete situations and proposed considering real-world risk scenarios with a proportionality test to balance competing values (C Novelli et al, “AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act” (2024) 3 Digital Society 1–29).

84 Supra, note 82 2.

85 The matter under analysis is also not considered among those cases in which a fundamental rights impact assessment is required for the use of AI by public authorities (not being included in the areas listed in Annex III – see Art 27).

86 Recital (61).

87 Recital (63).

88 Art 9. Art 20. Art 73. Recital (96); Art 27.

89 “High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models” (Recital 111). See also Recital (64).

90 Art 55 (a); (b).

91 Art 55 (d).

92 Recital (68). It also states that other actors (eg, deployers) are to be considered a provider “if they modify the purpose of an AI system, including a GPAI, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk in accordance with Article 6” (Art 25 (c)). However, as already mentioned, rule and law-making are not included.

The floating-point operations need to be – as of now – greater than 10^25 (Art 51 (2)).

94 Art 53 (1a).

95 Art 53 (1b). The system should include a policy to comply with Union law on copyright and related rights – see Art 53 (1c); there is a general obligation from providers to cooperate with the national competent authorities – see Art 53 (3).

96 Art 50 (2).

97 Ibid.

98 Recital (97).

99 Art 3 (10).

100 Art 3 (11).

101 Art 2 (12). If not falling under high-risk AI systems or under Art 4 or 50.

104 “When making decisions regarding new digital solutions (…) consider first reuse, then buy, and, as a last option, build. In the same spirit, an open-source software approach should be favored” (European Commission, “AI-Based Solution for Legislative Drafting” cit. 34).

105 Art 50 (1).

106 Art 10.

107 Art 11.

108 Art 12.

109 Art 14.

110 Art 15.

111 Lee A Bygrave and R Schmidt, “Regulating Non-High-Risk AI Systems under the EU’s Artificial Intelligence Act, with Special Focus on the Role of Soft Law” (2024) 10 University of Oslo Faculty of Law Legal Studies Research Paper Series 7.

112 Art 50 (4).

113 Recital (134); Art 50 (4).

114 Art 7 and Art 97.

115 That risk needs to be equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

116 Annex III, point 8.

117 SIGMA and OECD “Parliaments and evidence-based lawmaking in the Western Balkans” (2024) 68 SIGMA Paper 16.

Art 112.2. This does not mean, inter alia, a direct amendment, but rather a review of the effectiveness of the Act and a potential proposal for a not-so-fast review process. See also supra, note 3 15.

119 Supra, note 111.

120 “The Commission shall develop guidelines on the practical implementation of this Regulation, and in particular on […] (d) the practical implementation of transparency obligations laid down in art. 50” (Art 96).

121 Recital 165 and Art 95.

122 Ibid.

123 Art 95 (3).

124 Inter-Parliamentary Union, “Guidelines for AI in Parliaments” (2024).

125 O Mir, “The AI Act from the Perspective of Administrative Law: Much Ado About Nothing?” (2024) European Journal of Risk Regulation 13 and O Mir, “The Impact of the AI Act on Public Authorities and on Administrative Procedures” (2023) 4 CERIDAP 247–8.

126 L Kimaid, “Core Considerations and Frameworks” cit. 27.

127 As rights included in the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG appointed by the Commission.

128 See OECD, “Recommendation of the Council for Agile Regulatory Governance to Harness Innovation” C/MIN(2021)23/Final (Paris 2021); S Denning, “The Age for Agile: How Smart Companies are Transforming the Way Work Gets Done” (AMACOM 2018). The approach used is not a representation of a “risk-based regulation [that] uses risk as a tool to prioritize and target enforcement action in a manner that is proportionate to an actual hazard: in other words, to ‘calibrate’ the enforcement of the law based on concrete risk scores” (G De Gregorio and P Dunn, “The European Risk-based Approaches: Connecting Constitutional Dots in the Digital Age” cit. 475). The International Network of AI Safety Institutes well illustrated how a risk-based approach should be established on a shared scientific basis, which would need to include joint risk assessments by competent authorities and cooperative scientific research, the latter to adapt risk–benefit trade-offs in a flexible scenario (International Network of AI Safety Institutes, “Joint Statement on Risk Assessment of Advanced AI Systems” (2024)).

129 Para IV.

130 The Act allows national level to deal with “AI systems presenting a risk” (Art 79). However, the definition of “product presenting a risk” (Art 3, point 19 of Regulation (UE) 2019/1020) refers to adverse effects to rights protected by Union harmonisation legislation (Annex I), which fails to include rule or law-making.

131 Recital (3) states as “diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty (…)”.

132 C Novelli et al, “A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities” (2024) First view European Journal of Risk Regulation 23.

133 Art 27. A Mantelero, “The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, Legal Obligations and Key Elements for a Model Template” (2024) 54 Computer Law & Security Review: The International Journal of Technology Law and Practice 1–18.