Brand Safety
Issue at a Glance
Online brand safety entails protecting a brand’s image and reputation from the negative influence of inappropriate and potentially harmful content when advertising online.
Brand safety has been a growing concern in recent years as a result of a rise in “fake news”, the surge of extremist voices on websites and on social media, growth in bot traffic, and concerns about the influence of online content on democracy and elections.
Many brands have adjusted their advertising strategies and budgets, and pulled ads, in response to their ads being placed next to undesirable content.
Hate speech and misinformation became a significant issue in Canadian public discourse as awareness of hatred and injustice towards BIPOC communities grew during the Black Lives Matter protests of 2020. The CMA made a public statement about the need to stand together against hate, expressed support for brands that use their influence to effect positive social change, and spoke with major platforms on the issue.
In June 2021, the federal government announced a plan to better protect Canadians from hate speech and online harms. The resulting bill, Bill C-36, included amendments to several existing laws, among them measures to streamline the process for individuals to file hate speech complaints with the Canadian Human Rights Commission against individual users and certain website operators.
In August 2021, the federal government followed with a consultation on its proposed approach to regulating “online communications service providers” with respect to online hate. The approach is intended to apply to major platforms; it excludes providers of other online products and services that would not qualify as online communications services, as well as private communications and telecommunications service providers.
The proposed approach included a statutory requirement for regulated entities to take all reasonable measures to make harmful content inaccessible in Canada (including through robust flagging, notice, and appeal systems for both authors of content and those who flag content). It targeted five categories of harmful content: terrorist content, content that incites violence, hate speech, non-consensual sharing of intimate images, and child sexual exploitation content. It also proposed establishing new government entities to, among other things, hear complaints and issue orders and penalties for non-compliance.
The bill was subject to significant scrutiny and was not passed before the 2021 election. The federal government has promised to reintroduce new legislation in 2023, informed by feedback gathered in 2022 via a public consultation and the work of an Expert Advisory Group on Online Safety.
New legislation could provide more certainty to advertisers on these platforms that their content would not appear next to hateful or harmful content.
Marketers are encouraged to revisit this webpage for updates.