One of the great things about the internet is that it is the ultimate expression of free speech; anyone can publish their experiences, views and opinions for others to read, comment on and share.
One of the worst things about the internet is that anyone can publish anything they like, no matter how untrue, defamatory or hateful.
The problem is, what makes the internet such an invaluable source of information and entertainment (cat memes, anyone?) also makes it very difficult to control. This has been brought into stark relief in the last few months with the rise of ‘fake news’ and, even more recently, with the news that several big brands (including L’Oreal, The Guardian and Channel 4) have pulled their advertising from Google because their ads appeared next to extremist content.
The issue of ad safety in the digital space has never been far from the industry’s agenda, but in light of recent events it has become a bigger question for many of our clients. As with most things in marketing, the answer is not always clear cut, but there are things we as an agency do to protect our clients’ brands.
Firstly, it’s important to make a distinction between online display advertising on the Google Display Network and advertising on YouTube. Although both are owned by Google, there are key differences in the way that brands advertise on them.
Google Display Network
At a basic level, the GDN allows brands to advertise across a huge network of websites and apps based on demographic, interest-based and contextual targeting. As with other online ad networks, specific sites can be added to an exclusion list so ads won’t appear on them. Then, once a campaign is live, it is possible to access a list of the sites where the ads are running. If anything is flagged up, it can be added to the exclusion list.
Obviously, as a reactive rather than proactive solution, this isn’t ideal, but vigilance does enable a quick response to any issues. It’s also worth noting that the chances of your ad appearing on a site with extremist or inappropriate content can be greatly reduced by targeting as tightly as possible on interest, context and keywords, and by excluding broad interest areas such as political content.
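For the more technically minded, the reactive workflow above can be sketched in a few lines of code. This is a simplified illustration, not a real Google Ads integration: the placement report, content categories and flagged keywords are all hypothetical stand-ins for whatever your ad platform actually exports.

```python
# Illustrative sketch: compare a placement report (sites where ads ran)
# against an exclusion list and flag new sites for review.
# All site names and categories below are made up for this example.

FLAGGED_CATEGORIES = {"extremist", "politics"}  # broad areas to avoid

def flag_placements(placements, exclusion_list):
    """Return sites not yet excluded whose content category is flagged."""
    flagged = []
    for site, category in placements:
        if site in exclusion_list:
            continue  # already blocked, nothing to do
        if category in FLAGGED_CATEGORIES:
            flagged.append(site)
    return flagged

# Example placement report: (site, content category)
report = [
    ("example-news.com", "news"),
    ("badsite.example", "extremist"),
    ("catmemes.example", "entertainment"),
]
exclusions = {"already-blocked.example"}

new_exclusions = flag_placements(report, exclusions)
print(new_exclusions)  # ['badsite.example']
exclusions.update(new_exclusions)  # the exclusion list grows over time
```

In practice this check would run against each fresh placement report, which is why the paragraph above stresses ongoing vigilance: the exclusion list only improves as you review where your ads actually appeared.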
YouTube
YouTube is a slightly different matter. Because ads appear alongside user-generated content that is far less controlled than an ordinary publisher’s website, it is much easier to end up next to unsavoury content without being aware of it.
At this point in time I’m not aware of any way of completely safeguarding brands from this. Over 400 hours of video are uploaded to YouTube every minute, so you can see the issue Google has in trying to police it, even with the use of sophisticated algorithms. Again, it is possible to reduce the risk somewhat by being as targeted as possible with your campaign parameters, but this is a problem for brands who want to tap into the broad reach potential of YouTube.
Presently there is no way to completely eliminate the risk of an ad appearing next to unsavoury content, but there are ways of mitigating it. One is careful targeting and segmentation when defining campaign parameters, combined with ongoing vigilance, which together greatly reduce the risk. Another is to use online networks that have robust safeguards in place and have signed up to industry-recognised programmes such as the Digital Trading Standards Group and AdSafe. This way you give yourself the best chance of avoiding anything inappropriate, but as with most things in life, nothing is 100% guaranteed.