Artificial Intelligence Regulation
Issue at a Glance
Artificial intelligence (AI) is an increasingly important tool supporting many aspects of marketing, from data analytics to ad personalization. At the same time, AI systems can pose risks to individuals and society, for example when they are used in bad faith or are poorly designed or trained. Best practices for the ethical development and use of AI have emerged in recent years, and some countries have begun to legislate in this area.
The federal government’s Bill C-27, tabled in June 2022, proposes Canada’s first AI-specific law, the Artificial Intelligence and Data Act (AIDA). The bill aims to promote the responsible use of AI by requiring that high-impact AI systems be developed in a way that mitigates the risk of harm and bias. It would prohibit conduct that could result in material harm to individuals or their interests, including cases where AI systems unlawfully obtain data or are used recklessly. Key terms in AIDA remain undefined, including what constitutes a “high-impact” system and “material harm”. AIDA would create an AI and Data Commissioner to monitor compliance with the Act and order audits of non-compliant organizations, with penalties for serious violations of up to 5% of global revenue or $25 million. The CMA’s AI Regulation Working Group, composed of AI and data trust experts, was established to inform the CMA’s feedback on the bill.
During debate on second reading of Bill C-27 in Parliament, it was decided that AIDA would be split from the rest of Bill C-27 for voting purposes. As a result, AIDA may not pass and could be off the table for the foreseeable future. The CMA will keep members informed of relevant updates.
AI Regulation Comparison
Canada vs. EU vs. US
VIEW CHART (members only)