Legal framework for artificial intelligence: What is the approach of the European Union, the United States and China?

February 1st, 2024


Several major Canadian cities, including Toronto, Montreal and Edmonton, are considered prime hubs for investors and companies looking to develop, design and deploy artificial intelligence (“AI”) systems.1 Home to national AI institutes such as the Québec AI Institute in Montreal,2 post-secondary institutions with cutting-edge research centres and collaborative networks featuring some of the world’s top AI talent, Canada ranks fifth in the world for AI,3 behind the United States, China, Singapore and the United Kingdom, according to The Global AI Index released on June 28, 2023.

The report highlights Canada’s strengths in terms of government strategies for investment, innovation and implementation of AI. However, the country has dropped one place in the ranking since last year due to Singapore’s meteoric rise. It therefore remains to be seen whether Canada will be able to defend its position as an AI leader now that other powers have announced their intention to legislate in this area, as Canada has done.

In June 2023, the British Prime Minister presented the United Kingdom as the future global home of AI regulation. China, meanwhile, issued its new regulation on AI-generated content in August 2023, and the president of the United States issued an executive order on safe, secure and trustworthy AI in October 2023. The European Parliament, for its part, reached an agreement on European AI regulation in December 2023. As for Canada, an AI and data bill, which we discussed in a previous article, is currently undergoing committee review in the House of Commons.

The purpose of this article is to provide a non-exhaustive review of current (or pending) AI regulation in Europe, the United States and China. This article will not cover in detail the steps taken by Singapore and the United Kingdom (“UK”)4. Singapore does not currently appear to have any legislation specifically regulating the use of AI: while the government does have a Personal Data Protection Act, it does not appear to intend to develop a legal framework for AI for now, “so as not to stifle innovation in the field and to benefit from early feedback on legislation put in place by other players.” [Translation]5

As for the UK, on March 29, 2023, the government published a white paper outlining its proposals for regulating the use of AI. Rather than introducing new legislation to regulate AI in its territory, as the European Union and the Canadian government are doing, the UK government appears to be focusing on defining key principles for the development and use of AI: fairness, transparency, safety and security, accountability and contestability, while empowering existing regulators to regulate AI systems in their respective areas.


Legal framework in the European Union 

On December 9, 2023, the European Union (“EU”) reached a provisional agreement on artificial intelligence legislation, known as the AI Act, to establish a harmonized and comprehensive legal framework for all AI systems throughout its territory.

February 5, 2024 update: On February 2, 2024, all 27 EU member states unanimously approved the text as concluded on December 9, 2023.

The EU has therefore opted for a so-called “proactive” approach to AI regulation, recognizing the benefits of AI systems for its citizens, businesses and the public interest, while seeking to protect against the security and fundamental rights risks associated with such systems.6

A member of the European Parliament said regarding this AI legislation:

“The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law.”7

European legislation defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”8 This definition includes systems that use symbolic or generative AI, machine learning, logic and knowledge, as well as those that have not yet been invented.9

Moreover, under Article 3, the legislation applies to any natural or legal person, public authority, agency or other body that develops or has developed, places on the market, puts into service, makes available or uses such systems on the EU market, whether for payment or free of charge.10

The legislation classifies AI systems into four categories: those presenting unacceptable risk, high risk, limited risk and low or minimal risk. It prohibits certain AI practices that are considered unacceptable,11 in particular:

  • systems that manipulate people using subliminal techniques that act on their subconscious;
  • systems that exploit the vulnerabilities of specific vulnerable groups, such as children or people with disabilities;
  • systems that use ‘real-time’ remote biometric identification in publicly accessible spaces;
  • biometric identification systems that use sensitive characteristics such as gender, race, ethnicity, citizenship, religion or political orientation;
  • systems that predict the likelihood that a person will commit a crime or reoffend based on profiling, location or past criminal behaviour;
  • systems that detect the emotional state of people for use in law enforcement, border control, the workplace and educational institutions;
  • systems that enable the creation or development of facial recognition databases through the untargeted capture of facial images from the Internet or video surveillance.

High-risk AI systems—those that pose a high risk to the health, safety or fundamental rights of individuals, such as those used for biometric identification, access to education, workforce management, access to essential private and public services and benefits, law enforcement, justice and border control—will be subject to the most stringent requirements.12 In particular, they will have to be tested during their development and before they are put on the market in order to identify measures to manage and minimize their risks.13 They will also have to ensure a certain level of transparency, traceability, accuracy, human oversight and cybersecurity.14

It should be noted that failure to comply with these rules is punishable by “fines ranging from 35 million euros or 7% of global turnover to 7.5 million or 1.5% of turnover, depending on the infringement and size of the company.”15

While the European Parliament has reached a provisional agreement with the Council on the text of the European AI Act, the agreed text will now have to be formally adopted by the European Parliament and the Council of the EU to enter into force.16

The EU appears intent on establishing a comprehensive AI legal framework to ensure that AI systems developed, designed, marketed and used in Europe are human-centred and that the technology is used safely and does not violate people’s fundamental rights.


Legal framework in the United States 

Historically, the United States (“U.S.”) federal government appears to have taken a more passive approach to AI regulation in order to encourage innovation and growth in the area. However, on October 30, 2023, the U.S. president signed and issued an executive order to regulate the development and use of AI. This followed the voluntary commitments made in July by 15 major U.S. companies to promote the safe, secure and trustworthy development of AI technology.

The landmark executive order establishes new standards to notably ensure AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition and advance American leadership around the world.17

In his remarks at the signing of the order, the U.S. president said:

“One thing is clear: To realize the promise of AI and avoid the risks, we need to govern this technology […] — and there’s no other way around it, in my view. It must be governed.”18

While legislation supporting this executive order has yet to be passed by the U.S. Congress, the order requires developers of the most powerful AI systems to:

  • clearly label AI-generated content to protect Americans from AI-enabled fraud and deception;
  • test AI systems to ensure they are safe, secure and trustworthy;
  • share their safety test results and other critical information with the U.S. government;
  • develop AI tools to find and fix vulnerabilities in critical software.

In addition, the executive order calls on the U.S. government and its agencies to:

  • develop a national security memorandum that directs further actions on AI;
  • strengthen and advance privacy-preserving research and technologies;
  • provide clear guidance to keep AI algorithms from being used to exacerbate discrimination;
  • apply best practices throughout the criminal justice system, including in investigating and prosecuting civil rights violations related to AI;
  • establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI;
  • develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement;
  • catalyze AI research across the U.S. through a pilot of the National AI Research Resource.

Previously, the U.S. federal normative framework had relied primarily on pre-existing laws and regulations rather than comprehensive legislation applicable to all AI systems, non-binding guidelines such as the AI Bill of Rights to guide the responsible design and use of AI issued by the White House in October 2022, and industry self-regulation.

However, even before the executive order was issued in October 2023, many states had taken the initiative to regulate AI. Ten U.S. states incorporated AI provisions into broader legislation that was passed or came into effect in 2023. These laws, however, address very specific areas, such as employment and consumer privacy.

In addition, different federal agencies are involved in regulating and overseeing certain aspects of AI within their respective jurisdictions, such as the Federal Trade Commission, which combats deceptive and unfair AI-related business practices, and the Department of Transportation, which is responsible for regulating automated vehicles and keeping U.S. roads safe.

Thus, the legal framework for AI in the U.S. is evolving.


Legal framework in China 

The Chinese government has also established regulations for AI that address privacy, intellectual property, national security and ethics.

The new measures on the management of generative AI services that became effective in China on August 15, 2023, provide a legal framework to ensure that the provision and use of AI systems comply with Chinese laws and regulations. In particular, the measures stipulate that AI systems must adhere to socialist core values and must not generate content that incites subversion of State power and overthrow of the socialist system, endangers national security and interests, damages the national image, incites secession, undermines national unity and social stability, promotes terrorism or extremism, promotes national hatred or ethnic discrimination or contains violence, obscenity, pornography or false and harmful information.19

These rules apply to generative AI systems that provide services to the general public to generate text, images, audio or video content.20

Specifically, the new measures indicate that providers of generative artificial intelligence services have to:21

  • use data and basic models from legal sources;
  • not infringe the intellectual property rights enjoyed by others;
  • obtain the consent of individuals whose personal information is used;
  • take effective measures to improve the quality of training data and enhance the authenticity, accuracy, objectivity and diversity of training data.

The new measures therefore reflect China’s efforts to promote innovation and the development of generative artificial intelligence while upholding its socialist values, supplier responsibility and intellectual property protection. Finally, it is important to note that these are interim measures, indicating that China may further develop its regulatory framework in the future.



Like Canada, which enjoys a reputation and status as a leader in AI, other world powers see the benefits of legislating in this area, particularly by ensuring that AI systems designed, developed and deployed on their territory adhere to certain key principles, such as transparency, protection of privacy and human rights, public safety and ethics.

However, each country takes a different approach to the legal framework for AI. Ultimately, companies will need to ensure that they fully understand and comply with the various requirements.

In Canada, the federal government still appears committed to passing the Artificial Intelligence and Data Act. However, the current version of this bill could change between now and its enactment, which is not expected before 2025, as its content is the subject of much debate and discussion. It will be interesting to see if the regulations that Canada appears poised to adopt will allow it to continue to attract some of the world’s leading AI researchers, experts and companies, while ensuring that Canadians can trust the AI systems they use every day.


1 In this article, “artificial intelligence” refers to a machine’s ability to imitate or surpass the intelligent behaviour and activities of humans.
2 Other examples include the Alberta Machine Intelligence Institute in Edmonton and the Vector Institute for Artificial Intelligence in Toronto.
3 Take, for example, Yoshua Bengio, a Moroccan-born Québec researcher specializing in artificial intelligence and a pioneer in deep learning.
4 The authors, as members of the Barreau du Québec, are not specialists in foreign law. This text is intended exclusively to provide general information on legal news; it does not constitute a legal opinion and cannot be treated or relied upon as such.
5 Direction générale du Trésor (July 26, 2023), “Singapour : une stratégie gouvernementale pragmatique en matière d’intelligence artificielle,” online.
6 “Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS,” EUR-Lex, online.; European Parliament (June 14, 2023). “Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts,” European Parliament, online.
7 Janne Ojamo and Yasmina Yakimova (June 14, 2023) “MEPs ready to negotiate first-ever rules for safe and transparent AI,” News European Parliament, online.
8 Article 3 of the Proposal for a Regulation, supra note 6.
9 Article 3 of the Proposal for a Regulation, supra note 6.
10 Article 3 of the Proposal for a Regulation, supra note 6.
11 Article 5 of the Proposal for a Regulation, supra note 6.
12 Article 6 of the Proposal for a Regulation, supra note 6.
13 Article 9 of the Proposal for a Regulation, supra note 6.
14 Article 2.3 of the preamble to the Proposal for a Regulation, supra note 6.
15 Yasmina Yakimova (December 9, 2023), “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI,” online.
16 European Parliament (December 19, 2023), “EU AI Act: first regulation on artificial intelligence,” online.
17 The White House (October 30, 2023), “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” The White House, online.
18 The White House (October 30, 2023), “Remarks by President Biden and Vice President Harris on the Administration’s Commitment to Advancing the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, online.
19 Article 4a) of the Interim Measures.
20 Article 2 of the Interim Measures.
21 Article 7 of the Interim Measures.