Issue at a Glance
Online brand safety entails protecting a brand’s image and reputation from the negative influence of inappropriate and potentially harmful content when advertising online.
Brand safety has become a growing concern in recent years as a result of the rise of “fake news”, the surge of extremist voices on websites and social media, growth in bot traffic, and concerns about the influence of online content on democracy and elections.
Many brands have adjusted their advertising strategies and budgets, and pulled ads, in response to their ads being placed next to undesirable content.
Hate and Misinformation
Hate speech and misinformation became a significant issue in Canadian public discourse as awareness of hatred and injustice towards BIPOC communities grew in the wake of the 2020 Black Lives Matter protests. The CMA made a public statement about the need to stand together against hate, expressed support for brands that use their influence to effect positive social change, and spoke with major platforms on the issue.
In June 2021, the federal government announced a new plan to better protect Canadians from hate speech and online harms. Bill C-36 includes amendments to several existing laws, including to streamline the process for individuals to make hate speech complaints (to the Canadian Human Rights Commission) against individual users and certain website operators.
In August 2021, the federal government followed with a consultation on its proposed approach to regulating “online communications service providers” with respect to online hate. The approach is intended to apply to major platforms. It excludes providers of other online products and services that would not qualify as online communication services, as well as private communications and telecommunications service providers.
The new legislation would set out a statutory requirement for regulated entities to take all reasonable measures to make harmful content inaccessible in Canada, including through robust flagging, notice, and appeal systems for both the authors of content and those who flag it. The legislation would target five categories of harmful content: terrorist content, content that incites violence, hate speech, non-consensual sharing of intimate images, and child sexual exploitation content. It proposes that new government entities be established to, among other things, hear complaints and issue orders and penalties for non-compliance.
The proposed framework could provide more certainty to advertisers on these platforms that their content would not appear next to hateful or harmful content.
Marketers are encouraged to revisit this webpage for updates.