I acknowledge and thank my supervisor Dr. Rachael Dickson-Hillyard for guiding me through this research and writing process.
This work is part of a master's dissertation submitted to the University of Essex in April 2024. The research was last updated in September 2024.
DeepL SE, DeepL Translator, translation of German written text into the English equivalent.
DeepL SE, DeepL Write, grammar corrections, spelling corrections, and reformulation of English written text.
Elicit Research, Elicit, summarization of documents and answering of research queries.
Artificial intelligence (AI) is one of the most significant technological developments of our time. Stephen Hawking called it ‘the biggest event in the history of our civilization’.1 The theoretically endless capabilities accompanying AI are overwhelming and, as yet, hard to imagine because its development is still in its infancy. As of today, there is no settled idea of the direction in which AI will head. Chatbots have been the latest form of AI to gain significant attention due to their accuracy and broad capabilities. Nevertheless, the scope of application for these systems is vast and could span many areas.
The ongoing evolution of AI has the potential to shape human behaviour and reallocate wealth and resources globally by strategically fostering certain digital activities but not others.2 It offers a wide range of economic and social benefits, based on the enhanced efficiency of automated processes, and could have significant transformative effects.3 However, AI’s broad application could also cause immense damage to society.4 If not controlled properly, these systems could harm individuals, their health, property, and rights, e.g. by causing damage without anyone bearing liability, by discriminating, or by limiting the freedom of expression.5 As AI systems become more advanced and integrated into society, their unrestrained use could have far-reaching consequences. Governments must limit the responsibility assigned to algorithms to control the future impact of AI and prevent it from overtaking sectors of people’s lives by outpacing humankind.
Thus, regulation could still shape the direction of AI’s technological development. Due to the steadily increasing economic and commercial use of AI, some form of regulation is needed immediately.6 AI could threaten some of the most fundamental legal concepts, such as the rule of law and fundamental human rights, as it could be used in an almost limitless range of social sectors and possibly threaten to infiltrate them.7 Possible ways of legally regulating AI have been widely discussed globally. The current discussion regarding the legal regulation of AI marks a turning point in the evolution of AI, and decisions in this respect have the potential to shape not only current developments but also the future of humankind. In the words of Stephen Hawking, ‘The rise of powerful AI will either be the best or the worst thing ever to happen to humanity. We do not yet know which.’8
First and foremost, the foundation for the following work must be laid in order to further elaborate on the issue of regulating AI. Therefore, out of the many existing definitions of AI, the particular one used in this work is explained. Afterwards, the need for legal regulation and its underlying difficulties are demonstrated.
To regulate a novel sector, its scope must be defined first. There is no single definition of AI; instead, the definition depends on the point of view.
In general, AI is viewed as software which is able to imitate human abilities such as thinking logically or creatively.9 According to philosophy, intelligence in its artificial form is the attempt to imitate human intelligence; this definition led to rules-based AI systems which were inflexible and limited in scope.10 However, what is classified as AI has also changed over time due to new expectations of what “intelligent” behaviour signifies in machines.11
An important distinction is that between narrow and general AI, also called weak and strong AI, respectively. While narrow AI systems focus on performing specific tasks, general AI systems would be able to perform most activities at a nearly human level.12 As of today, only narrow systems have been developed.13
Numerous narrow AI systems are already assisting in daily life, creating groundbreaking innovations in diverse sectors, e.g. in the healthcare industry14 or by offering new opportunities for enterprises.15 With the increased processing power of today’s computers and the huge amounts of data available, modern algorithms have become increasingly complex.16 They can process vast datasets within seconds and produce useful outcomes, hence being drivers of innovative solutions.
According to the OECD, AI systems are machine-based systems which use their received input to generate output such as predictions, content, or decisions.17 Following this definition, AI’s aim is to autonomously produce output based on given input. This work adopts this definition, focusing on the autonomy of these systems. Unlike conventional algorithms, AI can produce human-like outcomes by independently learning and evolving after being programmed once. Therefore, this autonomous behaviour is the critical attribute a legal framework should focus on.
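The contrast drawn above between a conventional, fixed-rule algorithm and an autonomously adapting system can be made concrete with a deliberately minimal sketch; the functions, names, and numbers here are invented purely for illustration and stand in for far more complex real systems:

```python
def conventional_threshold(value: float) -> bool:
    """A fixed rule: once programmed, it behaves identically forever."""
    return value > 10.0

class AdaptiveThreshold:
    """A minimal 'learning' system: its decision boundary shifts with every
    observation, so identical inputs can yield different outputs over time.
    This is the autonomy the OECD-style definition centres on."""

    def __init__(self) -> None:
        self.threshold = 10.0
        self.count = 0

    def observe(self, value: float) -> bool:
        decision = value > self.threshold
        # Update the boundary as a running mean of all inputs seen so far,
        # i.e. the system changes its own behaviour after deployment.
        self.count += 1
        self.threshold += (value - self.threshold) / self.count
        return decision

model = AdaptiveThreshold()
print(model.observe(4.0))   # judged against the initial boundary of 10.0
print(model.threshold)      # the boundary has already moved after one input
```

The point of the sketch is only that the second system's rule is no longer fully written down by its programmer, which is precisely what makes legal attribution and foreseeability difficult.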
The vast capabilities of these systems are also one of the downsides of the technology. The increasing complexity of modern technology leads to one of the biggest risks of AI, known as the black-box problem. As algorithms advance, humans will only be able to understand the outputs that an AI produces without being able to explain the exact technological functionality of such non-transparent systems.18
Further, a self-learning system can only be as good as the data it relies on during its decision-making process. Hence, the outputs of AI systems can be biased. If a dataset contains racist, sexist, or otherwise discriminatory data, the AI system will also adopt these biases.19 Humans may no longer be able to identify the outcomes as biased because they will be unable to explain the decision-making process, as with face recognition algorithms and their potential underlying biases based on skin colour.
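The mechanism by which biased training data propagates into biased outputs can be illustrated with a deliberately simplified sketch. The "model", the groups, and the historical decisions below are all invented for illustration; real systems are vastly more complex, but the underlying dynamic is the same:

```python
from collections import defaultdict

# Hypothetical past hiring decisions: (group membership, outcome).
# The historical record is skewed in favour of group "A".
training_data = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "hired"),
]

# "Training": simply record the outcome frequencies per group.
counts = defaultdict(lambda: defaultdict(int))
for group, outcome in training_data:
    counts[group][outcome] += 1

def predict(group: str) -> str:
    """Predict the majority outcome for a group, as learned from the data."""
    outcomes = counts[group]
    return max(outcomes, key=outcomes.get)

print(predict("A"))  # the model has absorbed the historical skew
print(predict("B"))  # same qualifications assumed, different outcome
```

Nothing in the code mentions discrimination explicitly; the bias lives entirely in the data, which is why it is so hard for a complainant to point to the "deleterious behaviour" of such a system.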
It is also possible that the immense potential of AI could be misused, e.g. criminals could use AI to their advantage, or AI could be used to design autonomous weapons systems, thus making weapons more lethal.20 Such weapons could have devastating impacts and could seriously harm the development of humanity. If AI were to be used in warfare or crimes against humanity, the outcomes could be catastrophic. Thus, nations globally should agree not to use such technology for military purposes and should collaborate to prevent the use of AI by criminals.
In light of these inherent risks of AI, restrictive legislation should be drafted, and rules and regulations must be put in place to appropriately deal with AI.
When legislating modern innovations, minor adjustments to existing regulations are sometimes enough, but other times a widespread change in legislation is necessary.21 Thus far, it has been sufficient to modify current laws to legislate modern technology, but AI has moved the benchmark. These systems’ broad capacities, such as machine self-learning or neural networks, exceed the reach of current legislation. Therefore, specific AI regulations are needed, although this comes with novel complications.
It is necessary to have laws that appropriately address the risks of AI to ensure privacy and security and foster trust in AI systems.22 Until recently, legislation has been mostly neutral regarding technology, focusing instead on generic principles, which means existing laws could be applied to cases involving technology in the same way that they could be applied to cases without any technological implications.23
Because the term AI describes a variety of technological applications and is a vague collective term without one common definition, it is only of limited suitability within the legislative process.24 Legislation is typically a discipline that works by defining content and regulating already well-established sectors.25 Legislating AI is further complicated by the fact that these systems do not always produce foreseeable outcomes but may decide differently each time and learn from previous behaviour.26 This makes it challenging to address certain outcomes, because the algorithms will presumably be very quick to learn and then circumvent the particular practice addressed by a regulation.
Furthermore, the burden of proof in AI cases is problematic. First, there is no human to hold directly responsible for the system’s actions. Instead, responsibility could indirectly fall on different actors, such as the programmer, the operator of a system, or even the machine itself. Second, because AI systems can be a black box for humans, problems may arise when a complainant must detect the exact deleterious behaviour.27 This opacity makes it burdensome to pinpoint and prove the specific behaviour and to attribute responsibility to a human party. Therefore, using legislation to address such a rapidly evolving issue as AI is challenging, and different approaches have evolved.
Some previous laws already partially cover AI, for example the EU’s laws on fundamental human rights, data protection, product safety, and consumer protection.28 In particular, the EU has established extensive governance of digital services through recently enacted or planned legislation, such as the Digital Markets Act29, which governs fairness in online trading, or the European Health Data Space30, which harmonises healthcare throughout the Member States.31 This section discusses in detail two of the most impactful pieces of legislation pertinent to AI and their recent effects: the General Data Protection Regulation32 (GDPR), covering the protection of individuals’ personal data in the EU, and the Human Rights Framework, which addresses the protection of fundamental rights.
The GDPR is probably the existing legislation that has had the biggest impact in the context of AI since it came into force in 2018. It aims to protect the use and processing of the personal data of natural persons.33 Its fundamental idea is that the processing of personal data should only be conducted to serve humans.34 The GDPR is based on the key values of dignity, fairness, and transparency.35 It sets out principles and conditions on how to handle personal data in different scenarios and which rights persons have in case of infringements. Further, it sets standards for establishing responsible authorities and general conditions for imposing fines and penalties. It also contains provisions relevant to AI systems; for example, Art. 22 of the GDPR36 limits the possibility of automated decision-making processes if certain requirements are not met.
The sheer volume and diversity of data processed by AI algorithms make it challenging to determine which data falls under the scope of GDPR protection. Unlike traditional data processing methods, for which humans can understand and control the flow of information, AI algorithms operate in complex ways that individuals cannot easily retrace or interpret due to their capacity for independent decision-making and continuous adaptation. This opacity makes it difficult to identify how personal data is being used and whether its use complies with GDPR principles. The lack of transparency in AI algorithms, due to the black-box problem, and the fact that much of the data is not directly linked to one individual pose significant obstacles to the application of GDPR regulations.37 Thus, ongoing efforts towards ensuring data transparency and governance are crucial to address these issues and to ensure the effective application of the GDPR and other data protection legislation to AI.
The European Convention on Human Rights38 (ECHR) came into force over 70 years ago and has been amended several times since. It guarantees fundamental human rights and liberties. Because AI is being used in a variety of areas, some of them more significant than others, it affects various social aspects and therefore poses a possible threat to fundamental freedoms.39 Thus, not only is the GDPR and its protection of personal data relevant when regulating AI, but so is the ECHR.40
The use of AI in the private and public sectors can infringe several fundamental human rights, from impinging on the freedom of speech and expression to posing a threat to the values of a democratic society.41 The ECHR does address some of these risks; for example, Art. 8 ECHR42 establishes the right to privacy, and Art. 10 ECHR43 covers the freedom of expression.
In many cases, ECHR articles are more far-reaching than the GDPR. For example, while the GDPR only limits the processing of biometric data to a certain extent, the ECHR’s article on freedom of expression also applies in the context of public facial recognition, because the possibility of identifying anyone within a group would enable the surveillance of individuals and thus limit freedom of expression, e.g. if the technology were used by authorities to monitor individuals during a demonstration.44 Furthermore, Art. 6 ECHR45 guarantees the right to a fair trial, which is important within the field of AI due to the increasing use of such systems in legal decision-making processes. Because humans are generally unable to retrace and understand the internal processes behind AI-based decisions (the black-box problem), legal professionals are not able to access and review the reasons behind an AI decision.46 Hence, it cannot be assured that the decisions are fair and impartial, as the right to a fair trial requires. This lack of understanding is especially critical in the legal field, where individuals’ freedom or even life can be at stake, but it is also important in sectors such as health or transportation.
However, because the ECHR came into force long before today’s rapid development of technological applications even started, it can only be applied by adapting its principles to the digital age. Therefore, the effectiveness of the ECHR for current technologies is limited, and it is questionable whether it can adequately address the challenges arising from the widespread use of AI.47 Thus, it is vital to address the issues arising from AI by introducing legislation specifically focused on it.
In March 2023, the Italian Data Protection Authority (“Garante”) investigated possible breaches by the US-based firm OpenAI. The Garante claimed that OpenAI’s chatbot, ChatGPT, did not abide by the requirements of the European privacy rules, specifically the GDPR. According to the Garante, the operation of ChatGPT violated various parts of the GDPR, which determine the handling of personal data and oblige a provider to inform users about the processing of their data.48 Specifically, it referred to breaches of Art. 549 regarding the principles for the fair and transparent processing of personal data, Art. 650 regarding the lawfulness of the processing, Art. 1351 indicating that individuals were not informed adequately about their data being processed, and Art. 2552 referring to violations in the design of the data processing. Furthermore, the Garante criticised the free accessibility of ChatGPT even for minors. It claimed this was a violation of Art. 853 of the GDPR, which generally requires users to be 16 years or older to consent to the processing of their data; under this age, children are not allowed to agree themselves, and their parents have to give their consent.54
The Garante stated that there “appears to be no legal basis underpinning the massive collection and processing of personal data” and that the exposure of minors is inappropriate; hence, it threatened to impose a fine of millions of euros on OpenAI if they did not modify their product accordingly.55 After a temporary ban of ChatGPT in Italy, OpenAI agreed to partially modify the algorithms to comply with GDPR rulings, e.g. by installing the option to opt out of one’s data being used as training data.56 The authority has since reintroduced the chatbot, subject to specific conditions and with the provision that further measures may be implemented at any time should these conditions not be met.57 It further fined the company in regard to the respective privacy breaches.58
The Italian authorities’ investigation of OpenAI’s product was the first within Europe since OpenAI’s recent, rapid entry into the market.59 This case shows how the GDPR can limit the applications of AI and influence its development.
Because training AI algorithms requires vast datasets, it is practically impossible for developers to collect such data without also processing personal data, which is protected under European data protection law. Due to this need for data, the privacy rules of the GDPR will always partially influence the development and innovation process of AI programs.60
Following the temporary ban of the application, there was concern about Italy falling behind other European Member States in the evolution of technological innovation.61 Generally, banning such an application could have a significant influence on a country’s markets. During the short ban of the product in Italy, users of ChatGPT could have replaced it with a competitor’s product.62 The ability to ban such products and to shape markets accordingly reveals the power that authorities have over the market. Having national authorities publicly criticise a firm for not complying with existing laws for users’ protection could affect firms’ market positions and displace non-compliant businesses.63 It is vital that regulation focuses on protecting rights in such an evolving field, as otherwise the technology will evolve without respecting laws put in place to ensure a thriving democratic society. In this case, the Italian authorities upheld the privacy rights of individuals and only allowed the further market development of the AI tool once the technology complied with the GDPR. By doing so, they underscored the importance of balancing the protection of personal data with fostering innovation in the field of AI.64
The EU’s approach to legally regulating AI has made the Union one of the first movers globally and its regulation already has a wide influence internationally.65 The EU regulation aims to foster trust in AI by ensuring that the technology aligns with European values and fundamental rights, so that Europe will be able “to become a global leader in innovation”.66 After some years of preparation, the Commission published its proposal for AI Regulation in 2021.67 In December 2023, European legislators agreed on the final details of the Artificial Intelligence Act (AI Act),68 and the European Parliament voted in favour of the draft text in March 2024.69 The Act came into force in August 2024.70
The legislation’s definition of an AI system has changed throughout the latest versions of the Act and is now based on the OECD’s definition, as stated above, and outlined in Art. 3 (1)71 of the final draft. The definition focuses on the characteristic autonomy of these systems, which are able to create certain outcomes on the basis of the given input.
The AI Act is based on the idea that only systems with a high risk to individuals’ rights should be regulated. Hence, simple systems that pose a low risk should not be curtailed. Overall, the Act focuses on ensuring safety and transparency to prevent violations of fundamental rights, encouraging a human-centric and ethical approach to AI technology.72 Thus, it follows a risk-based approach because it classifies different categories of algorithms according to their threat to safety and fundamental rights, namely prohibited systems, high-risk systems, and minimal-risk systems, and focuses on imposing regulations for high-risk systems.73
The AI Act differentiates between completely prohibited AI practices and high-risk systems, which are allowed but only under certain conditions.74 Art. 575 of the Act lists specific sensitive sectors and practices in which the use of AI systems is completely prohibited, e.g. systems which could manipulate individuals’ decisions or services which exploit the vulnerability of a specific group of persons. The high-risk systems are classified according to Art. 676, and some are explicitly listed in Annex III77 of the Act, including, for example, biometric technology or products used in education or employment. Therefore, the AI Act does not aim to regulate the development of the underlying technology itself; rather, it limits specific fields of application where high risks may occur. Because it covers all forms of AI across sectors, the Act follows a horizontal approach which allows its wide application. By targeting the usage of the technology and the possible threats it poses in certain fields of application, the AI Act aims to prevent breaches instead of focusing on prosecution after a violation.
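The risk-based triage described above can be caricatured in a few lines of code. This sketch is purely illustrative: the keyword sets are simplified assumptions standing in for the Act's actual legal tests, which are set out in Art. 5, Art. 6, and Annex III and require case-by-case legal analysis:

```python
# Simplified, assumed stand-ins for the Act's categories -- not legal tests.
PROHIBITED_USES = {"social_scoring", "manipulative_techniques"}
HIGH_RISK_USES = {"biometric_identification", "education", "employment"}

def triage(use_case: str) -> str:
    """Assign an illustrative use case to one of the Act's risk tiers."""
    if use_case in PROHIBITED_USES:
        return "prohibited"    # banned outright (cf. Art. 5)
    if use_case in HIGH_RISK_USES:
        return "high-risk"     # allowed only under strict conditions
    return "minimal-risk"      # no far-reaching restrictions

print(triage("employment"))    # high-risk
print(triage("spam_filter"))   # minimal-risk
```

The structure mirrors the horizontal approach: what is regulated is the field of application of a system, not the underlying technology.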
A new addition to this version of the Act is the regulation of General Purpose AI (GPAI) systems in Title VIIIA.78 These GPAI systems cannot be sorted consistently into the fixed categories of the AI Act due to their extensive range of application, a gap the earlier draft texts were criticised for. By addressing broad categories of systems and inserting this new category as a sort of catch-all element, the EU’s approach aims to adapt to the evolving landscape of AI and to react flexibly to future technological evolution. Another recent addition is the expanded legal basis of the Act within the preamble. While previously only Art. 114 TFEU, focusing on the harmonisation of the Union’s internal market, had been cited, Art. 16 TFEU is now also explicitly mentioned, highlighting the importance of protecting individuals’ rights, specifically the protection of personal data.79
By focusing on the regulation of high-risk systems, the EU’s AI Act ensures that fundamental rights are protected even when AI is used, rather than aiming to control the technological development itself. While the regulation prohibits some systems because the risk is perceived as exceedingly high, most high-risk systems are allowed provided they meet strict requirements and follow the safety and ethical standards set out by the Act. Therefore, developers of AI systems who want to make commercial use of their product by placing it on the market must comply with these principles, imposed to protect basic rights, which should ideally lead to all AI systems on the market meeting ethical standards.
Additionally, the Act introduces methods to ensure regular assessment and documentation, for example by introducing obligations for risk management systems in Art. 9,80 and raises the quality criteria required for training data, in Art. 10,81 to increase transparency and ensure the reliability of the systems. Moreover, Art. 1482 stresses the importance of human oversight and requires that there be an option for humans to intervene at any stage of the decision-making process. Art. 4383 introduces a mandatory conformity assessment for high-risk systems that ensures all requirements are met, and Art. 4984 introduces a CE certification as a licensing method for the technology. In addition, all high-risk systems must be registered in a Union-wide database according to Art. 6085 to allow harmonised surveillance. Sanctions for non-compliance are regulated under Art. 71 ff.,86 with the heftiest fines, of up to 35 million euros or 7% of worldwide turnover, threatened for non-compliance with the regulations regarding prohibited practices. Systems classified as minimal or no risk can be used without any far-reaching restrictions.
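The penalty ceiling mentioned above can be worked through numerically. This sketch assumes that the applicable cap for prohibited-practice violations is the higher of the two figures (35 million euros or 7% of worldwide annual turnover), as in the final text of the Act; the turnover values are invented for illustration:

```python
def penalty_cap(worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited-practice violations:
    35 million euros or 7% of worldwide annual turnover, whichever
    is higher (assumption based on the final text of the Act)."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# Mid-sized firm: 7% of 100M is only 7M, so the 35M figure applies.
print(penalty_cap(100_000_000))    # 35000000
# Large firm with 2bn turnover: 7% = 140M exceeds the fixed amount.
print(penalty_cap(2_000_000_000))  # 140000000.0
```

The turnover-linked component is what gives the sanction regime teeth against the largest providers, for whom a fixed fine alone would be negligible.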
The EU wants to find the right balance between restriction and innovation in this developing field and, by doing so, aspires to become the global leader in AI regulation.87 Its objective is to address and regulate the potential societal impacts of AI while fostering innovation, mainly in low-risk sectors, and support a responsible development of AI within the EU. In this way, the regulation limits the perceived risks and safeguards the rights of individuals while promoting technological development of low-risk systems to pave the way for ethical innovation.
Because the AI Act will be applicable to all EU Member States, it is going to ensure there is a harmonised framework throughout the Union. Art. 5988 of the Act provides for the installation of national supervisory authorities and Art. 55b ff.89 calls for a European AI Office and the AI Board, which are responsible for interpreting further details of the AI Act, investigating infringements, providing guidelines and advice, and supporting international cooperation.90
The AI Act’s global impact might be helped by the so-called Brussels Effect, whereby companies outside of the EU comply with EU regulations because having one global approach is more practical than having different ones for the EU and for the rest of the world.91 This powerful effect could even lead to the AI Act becoming a global standard. The AI Act, like the GDPR, also functions extraterritorially, which means it also binds companies outside the EU if their product has an impact on European citizens.92 The rules apply regardless of an institution’s position in the supply chain of an AI application: as long as the product is to be used within the EU, the rules apply, even if the provider is not based in the Union.93 Therefore, the Act’s territorial scope will be very broad, not only due to its direct applicability but also due to the powerful Brussels Effect, which could lead products not directly concerned by the AI Act to comply with it.
By regulating AI systems as they are being used or even before use when only the intent of utilisation is given, the EU’s Act ensures that Member States comply with European ethical standards. Because the regulation distinguishes between different levels of risk, it aims to guarantee fundamental rights by identifying potential threats and accordingly deciding on requirements that shall mitigate the threats. The restrictions imposed on high-risk AI systems attempt to minimise potential risks and control them.
By explicitly mentioning Art. 16 TFEU,94 which guarantees the protection of personal data and binds the European authorities to this protection, in the preamble,95 legislators emphasise the importance of this protection and make clear that they want to incorporate this right into the AI Act to guard individuals’ right to privacy. Nonetheless, this article was only added in later versions, after critics pointed out that the Act did not acknowledge human rights sufficiently. Furthermore, it has no binding effect because it is included in the preamble; however, the number of references to human rights within the Act’s articles has increased since this change.96
By banning AI systems in certain areas where human rights could be at risk, the Act prevents human rights violations before they can even occur. For example, Art. 5 No. 1(ba)97 bans systems that categorise persons based on their race, sex, or other biometric data in any technical variation. Furthermore, high-risk systems are strictly regulated and only permitted if they meet strict requirements to ensure compatibility with guaranteed rights; e.g. the requirement for human oversight is intended to verify that decisions made by machines are reviewed by a human. This requirement ensures that human experts oversee decision-making processes, thereby guaranteeing that accountability, fairness, and ethical standards are upheld. In addition, human oversight enhances public trust in AI systems by ensuring that fundamental human rights and principles are respected. Nevertheless, human overseers of AI are also error-prone, and such oversight could lull users into a false sense of security.98 Thus, this condition is not a perfect solution for ensuring safe AI.
Accordingly, the AI Act also emphasises the importance of testing and validation processes to ensure that AI systems work as intended and do not pose risks to individuals’ rights, for example by requiring providers to conduct the ongoing monitoring and market surveillance described in Title VIII99 of the Act. Moreover, the AI Act establishes clear expectations for training data, requiring audits and transparency to ensure the integrity and reliability of AI-based decisions. This focus on data quality, review mechanisms, and traceability not only contributes to the overall safety and accountability of AI systems but also serves to protect inalienable human rights by ensuring that AI outcomes are fair, non-discriminatory, and in line with ethical standards.
Many expected that the EU approach would focus on the protection of human rights due to previous legislation and the ongoing discussion regarding the regulation of AI; instead, it focuses heavily on harmonisation and standardisation.100 Human rights are acknowledged within the Act and serve as a basis but are not its focal point. However, a collaborative approach based on Art. 114 TFEU across the Union could facilitate the protection of fundamental rights by streamlining the process of revision and implementation, hence ensuring its realisation through simplified harmonisation.
While the EU’s AI Act focuses on harmonising the internal market, it still sets out important rules to ensure that human rights are met, transparency is maintained, and effective risk management measures are introduced.101 By proactively regulating the technology prior to its deployment and ensuring thorough monitoring and assessment after it is put on the market, the EU Act, if enforced effectively, could mitigate the risk of AI systems harming individuals or violating their rights.
Because AI technology has many advantages, its establishment cannot and should not be hindered completely. AI can enhance economic and societal welfare, which is why many countries are currently competing to achieve these benefits faster than others.102 In order to keep up in this global competition and drive their economic growth, nations should invest in AI innovation and development.
A common method to support technological research is to install regulatory toolboxes as a flexible and secure approach to test certain regulations.103 One version of such toolboxes within the context of AI has been so-called regulatory sandboxes. These are fenced-in zones where new products and services can be tested; such areas foster innovation while minimising the possibility of unforeseen harms affecting the general public, allowing developers to experiment without endangering individuals’ rights.104 Thus, establishing procedures to foster innovative development in AI is necessary for nations not to be left behind in the international race for algorithmic intelligence and economic efficiency.
The EU supports innovation by setting up such AI regulatory sandboxes as outlined in Title V105 of the AI Act. The Act obliges the Member States to establish at least one such sandbox, either at the national level or jointly with other nations by setting up a regulatory sandbox accessible to a group of Member States. In addition, the AI Act aims to simplify access to datasets as one core foundation of algorithmic development. It also highlights the importance of integrating small enterprises and start-ups into this development by allowing them privileged access to the regulatory sandboxes according to Art. 55106 of the Act.
This method promotes the exchange of ideas, expertise, and resources by supporting collaboration across various fields. By providing the opportunity to experiment within sandboxes, the Act departs from a traditional regulatory approach in favour of a proactive and agile regulatory framework.107 Joint efforts between countries allow governments to pool knowledge and grant access to small and medium-sized enterprises, while ensuring compliance with safety standards across the Union.108 Through collaboration, governments can establish and enforce unified safety regulations that apply to all participating entities. This unified method not only raises safety standards universally but also creates a consistent regulatory environment for companies operating within the Union, making the European market as a whole attractive to institutions because of its standardised requirements. When experts from different fields and backgrounds come together, they bring diverse perspectives and experiences to the table, which often produces innovative ideas and approaches better suited to addressing complex challenges.
Easier access to data will enable algorithms to be developed on large amounts of data, boosting innovation and the competitive position of the Union.109 At the same time, according to the Commission, the compliance of these datasets with the GDPR and its protection of personal data must be ensured: the Commission seeks to encourage a wider availability of data while respecting personal data legislation throughout the establishment of this access.110 Moreover, the ability to access and analyse large datasets gives organisations a competitive advantage. In providing this access, it is crucial to prioritise the protection of individuals’ privacy and data security, both to comply with the relevant data protection legislation and to maintain the trust of individuals.
By elaborating on the requirements companies must meet to comply with the Act, the EU has reduced legal uncertainty. Meeting the Act’s comprehensive requirements allows companies to drastically reduce the risk of legal disputes and penalties. Such clear requirements enable informed decision-making and appropriate action to meet the standards. By setting fixed requirements, the EU has created a level playing field for all firms operating within its jurisdiction. Businesses can compete on equal terms without ambiguity or confusion about the legal obligations they must fulfil, which encourages fair competition and supports innovative development within the market.
While a perfect regulatory framework may not exist, it is important to enact the best possible regulations, ones able to keep pace with technological developments, before the arrival of stronger and more capable AI.111 Instead of racing towards having any kind of regulation, securing the future well-being of humans requires the best possible regulatory foundation, not only nationally but at a global level.
In-depth research should focus on the specific global effects this approach might have, especially in comparison with other powerful nations, not only economically but also with regard to the reliability of governance and monitoring mechanisms. This is important to ensure a future-proof approach capable of adapting to novel technological developments. Additionally, defining the right scope in such a new field is vital to cover all relevant aspects, which is why the definition of AI remains an open research question and should be analysed further in relation to different regulatory approaches. Because this field is so fluid, the framework conditions must be defined correctly, yet without becoming rigid.
The EU’s AI Act is a world-first attempt to establish horizontal regulation of AI systems; however, despite its broad scope of application, it has weaknesses.112 While its risk-based approach may address issues before they materialise, its strict requirements could lead to over-regulation, hinder the development of AI and thus frustrate the success of the legislation.
It is vital to develop solid AI regulation now in order to shape the future of the technology. Such regulation should act proactively, not merely reactively, preventing violations instead of only penalising them after the fact. A similar situation occurred with the regulation of cyberspace: rules came into force too late, once a few companies already dominated the internet, and today’s regulatory measures can only react to violations rather than shape the internet’s development in line with human rights.113 A harmful evolution of AI should therefore be prevented by enacting strong regulatory frameworks within which AI can develop and which shape its technological advancement.114
The EU’s risk-based approach may provide a sufficient framework, as it addresses the technology proactively, before violations occur. Even if the EU’s method has shortcomings, enacting a proactive legal framework today ensures that a foundation already exists to build on in future, even if the technological environment changes. The AI Act addresses the protection of human rights proportionately by focusing on potential threats. However, its extensive regulatory requirements may prove too theoretical and complex to implement in practice while still successfully fostering innovative development.
University of Cambridge, ‘“The best or worst thing to happen to humanity” - Stephen Hawking launches Centre for the Future of Intelligence’ (University of Cambridge Research, 19 October 2016) <https://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of> accessed 17 Mar 2024.
Shin-Yi Peng and Ching-Fu Lin and Thomas Streinz, ‘Artificial Intelligence and International Economic Law: A Research and Policy Agenda’ in Shin-Yi Peng and Ching-Fu Lin and Thomas Streinz (eds), Artificial Intelligence and International Economic Law (CUP 2021) 2.
Nathalie A Smuha, ‘From a 'Race to AI' to a 'Race to AI Regulation' - Regulatory Competition for Artificial Intelligence’ (2021) 13(1) Law, Innovation and Technology 57, 57.
Nathalie A Smuha, ‘Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea’ (2021) 34(1) Philosophy & Technology 91, 92.
Nikos T Nikolinakos, EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies – The AI Act (Springer 2023) 7 f.
Dan Ciuriak and Vlada Rodionova, ‘Trading Artificial Intelligence’ in Shin-Yi Peng and Ching-Fu Lin and Thomas Streinz (eds), Artificial Intelligence and International Economic Law (CUP 2021) 75.
Karen Yeung and Martin Lodge, ‘Algorithmic Regulation: An Introduction’ in Karen Yeung and Martin Lodge (eds), Algorithmic Regulation (OUP 2019) 2.
University of Cambridge, ‘“The best or worst thing to happen to humanity” - Stephen Hawking launches Centre for the Future of Intelligence’ (University of Cambridge Research, 19 October 2016) <https://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of> accessed 17 Mar 2024.
Jonas Botta, ‘Die Förderung innovativer KI-Systeme in der EU: Zum Kommissionsvorschlag der KI-Reallabore („AI regulatory sandboxes“)’ [2022] ZfDR 391, 393.
Lena Kästner and Astrid Schomäcker, ‘KI-Systeme in der modernen Gesellschaft: Potenziale und Grenzen’ [2023] ZUM 558, 559.
Daniel Busche, ‘Einführung in die Rechtsfragen der künstlichen Intelligenz’ [2023] JA 441, 441.
The European Commission’s High-Level Expert Group on Artificial Intelligence, ‘A definition of AI: Main capabilities and scientific disciplines’ (Brussels, 18 Dec 2018) 6.
The European Commission’s High-Level Expert Group on Artificial Intelligence, ‘A definition of AI: Main capabilities and scientific disciplines’ (Brussels, 18 Dec 2018) 6.
DonHee Lee and Seong No Yoon, ‘Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges’ (2021) 18(1) Int. J. Environ. Res. Public Health 271.
Hind Benbya and Thomas H. Davenport and Stella Pachidi, ‘Artificial Intelligence in Organizations: Current State and Future Opportunities’ (2021) 19(4) MISQE.
Lena Kästner and Astrid Schomäcker, ‘KI-Systeme in der modernen Gesellschaft: Potenziale und Grenzen’ [2023] ZUM 558, 560.
OECD.AI, ‘OECD AI Principles overview’ (OECD.AI Policy Observatory) <https://oecd.ai/en/ai-principles> accessed 12 Jan 2024.
Lena Kästner and Astrid Schomäcker, ‘KI-Systeme in der modernen Gesellschaft: Potenziale und Grenzen’ [2023] ZUM 558, 561.
Lena Kästner and Astrid Schomäcker, ‘KI-Systeme in der modernen Gesellschaft: Potenziale und Grenzen’ [2023] ZUM 558, 563.
Krishna Deo Singh Chauhan, ‘From ‘What’ and ‘Why’ to ‘How’: An Imperative Driven Approach to Mechanics of AI Regulation’ (2023) 23(2) Global Jurist 99, 105.
Daniel Busche, ‘Einführung in die Rechtsfragen der künstlichen Intelligenz’ [2023] JA 441, 445.
Sheshadri Chatterjee and Sreenivasulu N.S., ‘Impact of AI Regulation and Governance on Online Personal Data Sharing: From Sociolegal, Technology and Policy Perspective’ (2023) 14(1) JST 157, 171.
Daniel Krüger and Susan Wagner, ‘Das Phänomen „Künstliche Intelligenz“ aus regulatorischer und haftungsrechtlicher Sicht’ [2023] ZfPC 124, 124.
Nathalie A Smuha, ‘From a 'Race to AI' to a 'Race to AI Regulation' - Regulatory Competition for Artificial Intelligence’ (2021) 13(1) Law, Innovation and Technology 57, 62 f.
Johann Justus Vasel, ‘Künstliche Intelligenz und die Notwendigkeit agiler Regulierung’ [2023] NVwZ 1298, 1299.
Daniel Krüger and Susan Wagner, ‘Das Phänomen „Künstliche Intelligenz“ aus regulatorischer und haftungsrechtlicher Sicht’ [2023] ZfPC 124, 124.
Daniel Krüger and Susan Wagner, ‘Das Phänomen „Künstliche Intelligenz“ aus regulatorischer und haftungsrechtlicher Sicht’ [2023] ZfPC 124,
Irina Orssich, ‘Das europäische Konzept für vertrauenswürdige Künstliche Intelligenz’ [2022] EuZW 254, 255.
Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L 265/1.
Proposal for a Regulation on the European Health Data Space of the European Commission of 06 May 2022 [2022] COM (2022) 197/2.
Luciano Floridi, ‘The European Legislation on AI: A Brief Analysis of Its Philosophical Approach’ in Jakob Mökander and Marta Ziosi (eds), The 2021 Yearbook of the Digital Ethics Lab (Springer 2022) 6.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/32 (following: General Data Protection Regulation).
General Data Protection Regulation Art. 1.
General Data Protection Regulation Recital 4.
Hielke Hijmans and Charles Raab, ‘Ethical Dimensions of the GDPR, AI Regulation, and Beyond’ (2021) 18(100) RDP 63, 65.
General Data Protection Regulation Art. 22.
Giovanni Sartor, Panel for the Future of Science and Technology (STOA), ‘The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence’ (EPRS 2020) PE 641.530, p. 53.
Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended by Protocols 11 and 14) (following: European Convention on Human Rights).
Aleš Završnik, ‘Criminal Justice, Artificial Intelligence Systems, and Human Rights’ (2020) 20(4) ERA Forum 567, 574 f.
Human Rights Act 1998.
Stéphanie Laulhé Shaelou and Yulia Razmetaeva, ‘Challenges to Fundamental Human Rights in the Age of Artificial Intelligence Systems: Shaping the Digital Legal Order while Upholding Rule of Law Principles and European Values’ [2024] ERA Forum <https://doi.org/10.1007/s12027-023-00777-2> accessed 20 Jan 2024, para. 3.
European Convention on Human Rights Art. 8.
European Convention on Human Rights Art. 10.
Isaac Ben-Israel and others, ‘Towards Regulation of AI Systems’ (DGI 2020) 16 Compilation of contributions prepared by the CAHAI Secretariat 26 f.
European Convention on Human Rights Art. 6.
Isaac Ben-Israel and others, ‘Towards Regulation of AI Systems’ (DGI 2020) 16 Compilation of contributions prepared by the CAHAI Secretariat 24.
Isaac Ben-Israel and others, ‘Towards Regulation of AI Systems’ (DGI 2020) 16 Compilation of contributions prepared by the CAHAI Secretariat 10.
Garante per la Protezione dei Dati Personali (Italian Data Protection Authority), ‘Registro dei provvedimenti n. 112 del 30 marzo 2023 [doc. web n. 9870832]’ (GPDP, 30 March 2023) <https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870832> accessed 2 Feb 2024.
General Data Protection Regulation Art. 5.
General Data Protection Regulation Art. 6.
General Data Protection Regulation Art. 13.
General Data Protection Regulation Art. 25.
General Data Protection Regulation Art. 8.
Garante per la Protezione dei Dati Personali (Italian Data Protection Authority), ‘Registro dei provvedimenti n. 112 del 30 marzo 2023 [doc. web n. 9870832]’ (GPDP, 30 March 2023) <https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870832> accessed 2 Feb 2024.
Garante per la Protezione dei Dati Personali (Italian Data Protection Authority), ‘Artificial intelligence: stop to ChatGPT by the Italian SA Personal data is collected unlawfully, no age verification system is in place for children’ (GPDP, 31 March 2023) <https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847#english> accessed 2 Feb 2024.
Elvira Pollina, ‘OpenAI's ChatGPT breaches privacy rules, says Italian watchdog’ (Reuters, 30 January 2024) <https://www.reuters.com/technology/cybersecurity/italy-regulator-notifies-openai-privacy-breaches-chatgpt-2024-01-29/> accessed 24 Mar 2024.
Garante per la Protezione dei Dati Personali (Italian Data Protection Authority), ‘Relazione sull’attività 2023 – Sintesi per la stampa e documenti’ (GPDP, 03 July 2024) <https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/10032023> accessed 11 December 2024.
Elvira Pollina and Alvise Armellini, ‘Italy fines OpenAI over ChatGPT privacy rules breach’ (Reuters, 20 December 2024) <https://www.reuters.com/technology/italy-fines-openai-15-million-euros-over-privacy-rules-breach-2024-12-20/> accessed 10 May 2025.
Ceren Yakışır, ‘An Evaluation of the ChatGPT Decision, which Italy blocked Access on the Grounds of Violation of the GDPR’ [2023] <http://dx.doi.org/10.2139/ssrn.4423779> accessed 2 Feb 2024, p. 4.
Ceren Yakışır, ‘An Evaluation of the ChatGPT Decision, which Italy blocked Access on the Grounds of Violation of the GDPR’ [2023] <http://dx.doi.org/10.2139/ssrn.4423779> accessed 2 Feb 2024, p. 5.
Alvise Armellini and Elisa Anzolin, ‘Italy will fall behind if ChatGPT not reactivated soon - deputy PM’ (Reuters, 4 April 2023) <https://www.reuters.com/technology/italy-will-fall-behind-if-chatgpt-not-reactivated-soon-deputy-pm-2023-04-04/> accessed 2 Feb 2024.
Jeremy Bertomeu and others, ‘Capital Market Consequences of Generative AI: Early Evidence from the Ban of ChatGPT in Italy’ [2023] <https://dx.doi.org/10.2139/ssrn.4452670> accessed 2 Feb 2024, p. 25.
Jeremy Bertomeu and others, ‘Capital Market Consequences of Generative AI: Early Evidence from the Ban of ChatGPT in Italy’ [2023] <https://dx.doi.org/10.2139/ssrn.4452670> accessed 2 Feb 2024, p. 3.
Ceren Yakışır, ‘An Evaluation of the ChatGPT Decision, which Italy blocked Access on the Grounds of Violation of the GDPR’ [2023] <http://dx.doi.org/10.2139/ssrn.4423779> accessed 2 Feb 2024, p. 5.
Luciano Floridi, ‘The European Legislation on AI: A Brief Analysis of Its Philosophical Approach’ in Jakob Mökander and Marta Ziosi (eds), The 2021 Yearbook of the Digital Ethics Lab (Springer 2022) 2.
European Commission, ‘White Paper on Artificial Intelligence - A European approach to excellence and trust’ [2020] COM (2020) 65 final, p. 2.
European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ [2021] COM/2021/206 final.
European Commission, ‘Commission welcomes political agreement on Artificial Intelligence Act’ (European Commission Press corner, 9 December 2023) <https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473> accessed 9 Feb 2024.
European Parliament, ‘Artificial Intelligence Act: MEPs adopt landmark law’ (European Parliament Press room, 13 March 2024) <https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law> accessed 24 Mar 2024.
Council of the European Union, Interinstitutional File: 2021/0106(COD), 5662/24 <https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf> (following: Council of the European Union, AI Act) accessed 24 Mar 2024.
Council of the European Union, AI Act.
Weiyue Wu and Shaoshan Liu, ‘A Comprehensive Review and Systematic Analysis of Artificial Intelligence Regulation Policies’ [2023] <https://doi.org/10.48550/arXiv.2307.12218> accessed 27 Nov 2023.
Martin Ebers and others, ‘The European Commission’s Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)’ (2021) 4(4) J 589, 589.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Emre Kazim and others, ‘Proposed EU AI Act – Presidency Compromise Text: Select Overview and Comment on the Changes to the Proposed Regulation’ (2023) 3(2) AI and Ethics 381, 381.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
European Commission, ‘European AI Office’ (European Commission European AI Office) <https://digital-strategy.ec.europa.eu/en/policies/ai-office> accessed 02 Mar 2024.
Luciano Floridi, ‘The European Legislation on AI: A Brief Analysis of Its Philosophical Approach’ in Jakob Mökander and Marta Ziosi (eds), The 2021 Yearbook of the Digital Ethics Lab (Springer 2022) 3.
Luciano Floridi, ‘The European Legislation on AI: A Brief Analysis of Its Philosophical Approach’ in Jakob Mökander and Marta Ziosi (eds), The 2021 Yearbook of the Digital Ethics Lab (Springer 2022) 3.
Nikos T Nikolinakos, EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies – The AI Act (Springer 2023) 340 f.
Consolidated Version of the Treaty on the Functioning of the European Union (TFEU) [2012] OJ C 326/47.
Council of the European Union, AI Act.
Laura Lazaro Cabrera and Iverna McGowan, ‘EU AI Act Brief – Pt. 1, Overview of the EU AI Act’ [2024] Center for Democracy & Technology <https://cdt.org/insights/eu-ai-act-brief-pt-1-overview-of-the-eu-ai-act/> accessed 29 Mar 2024.
Council of the European Union, AI Act.
Johann Laux, ‘Institutionalised Distrust and Human Oversight of Artificial Intelligence: Towards a Democratic Design of AI Governance Under the European Union AI Act’ [2023] AI & Society <https://doi.org/10.1007/s00146-023-01777-z> accessed 8 Nov 2023.
Council of the European Union, AI Act.
Oskar J Gstrein, ‘European AI Regulation: Brussels Effect versus Human Dignity?’ (2022) 25(4) ZEuS 755, 757.
Oskar J Gstrein, ‘European AI Regulation: Brussels Effect versus Human Dignity?’ (2022) 25(4) ZEuS 755, 757.
Nathalie A Smuha, ‘From a 'Race to AI' to a 'Race to AI Regulation' - Regulatory Competition for Artificial Intelligence’ (2021) 13(1) Law, Innovation and Technology 57, 57 f.
Krishna Deo Singh Chauhan, ‘From ‘What’ and ‘Why’ to ‘How’: An Imperative Driven Approach to Mechanics of AI Regulation’ (2023) 23(2) Global Jurist 99, 117.
Krishna Deo Singh Chauhan, ‘From ‘What’ and ‘Why’ to ‘How’: An Imperative Driven Approach to Mechanics of AI Regulation’ (2023) 23(2) Global Jurist 99, 118.
Council of the European Union, AI Act.
Council of the European Union, AI Act.
Cristina Poncibò and Laura Zoboli, ‘The Methodology of Regulatory Sandboxes in the EU: A Preliminary Assessment from a Competition Law Perspective’ [2022] Stanford-Vienna European Union Law Working Paper No. 61 <https://law.stanford.edu/publications/no-61-the-methodology-of-regulatory-sandboxes-in-the-eu-a-preliminary-assessment-from-a-competition-law-perspective/> accessed 10 Feb 2024, p. 2.
Nikos T Nikolinakos, EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies – The AI Act (Springer 2023) 42.
Nikos T Nikolinakos, EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies – The AI Act (Springer 2023) 44.
Communication from the Commission, ‘Artificial Intelligence for Europe’ [2018] COM (2018) 237 final, p. 10.
Nathalie Rébé, Artificial Intelligence – Robot Law, Policy and Ethics (Brill Nijhoff 2021) 117 f.
Michael Veale and Frederik Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22(4) CLRI 97, 112.
Krishna Deo Singh Chauhan, ‘From ‘What’ and ‘Why’ to ‘How’: An Imperative Driven Approach to Mechanics of AI Regulation’ (2023) 23(2) Global Jurist 99, 121.
Nathalie Rébé, Artificial Intelligence – Robot Law, Policy and Ethics (Brill Nijhoff 2021) 80.
©Sabrina Pölle. This article is licensed under a Creative Commons Attribution 4.0 International Licence (CC BY).