Google has now announced that it is working with third-party vendors accredited by the Media Rating Council (MRC) to provide brand safety reporting on YouTube. The company also says it has allocated more of its Artificial Intelligence (AI) resources to flagging offensive content and blocking ads on such pages.
Globally, brands and media agencies are increasingly asking for more oversight of how their digital advertising money is being spent. As advertising becomes more automated, this need will only grow.
Explaining the YouTube issue, Ashish Shah, Founder and CEO, Vertoz, said, “YouTube UK has come under the scanner as the media procurement team aims to provide maximum coverage to the brand at minimum cost, instead of looking for content relevancy which in turn jeopardizes brand safety. The lack of a concrete system in place to moderate and categorize its video content is the prime reason for misplacement of brand ads across the platform. It is necessary to moderate first and monetize later, instead of the other way around to provide a healthy brand safe environment to partners.”
“Technology gives immense advantages, but it also has its pitfalls. The issue of advertising appearing against inappropriate content is real, and I personally believe it cannot be eliminated completely, given the nature of the medium and the technology. Having said that, tech companies like Google and other platforms need to tackle this before advertisers start pulling out, and they need to assure advertisers about the steps being taken,” opined Anurag Gupta, India MD of SVG Media.
But are assurances enough? In February, Google said that it had brought in the MRC to audit YouTube metrics and to validate and examine its third-party measurement partners: Moat, Integral Ad Science and DoubleVerify. Many brands and agencies have been asking platforms like Facebook and Google to allow audits by third parties, and though both have complied to an extent, much is still not available for public scrutiny.
“To state an analogy from the security space, no matter how much your firewall or anti-virus protects you, would you stop using it if your computer got infected? You would report it and help the company improve its offering. The boycotting helps the industry take notice and puts the onus on Google to improve the ecosystem further. Adding human checks on top of algorithms is not going to solve the problem either, because in a world as complex as ours, human opinions on whether content is safe could vary based on country, religion, political bias, etc,” said Lavin Punjabi, Co-founder & CEO, mCanvas.
He added that brands and agencies could choose not to run wide-reach campaigns (targeted to keywords or categories), because this is where the slips can happen. Instead, he advised advertisers to let go of some of that reach and target only known, trusted content producers.
This is something that certain brands seem to be following. For example, The New York Times reports that JPMorgan Chase, after seeing some of its ads appear next to a questionable website, has slashed the number of websites it advertises on from around 400,000 to 5,000.
On similar lines, Sunil Punjabi, VP & Head of Business (South Asia) at C1X, opined that one way ad placement can be controlled in a programmatic environment is through programmatic direct buys, which are a little more expensive. “But then, what is more expensive? Spending a little more on advertising and ensuring a brand-safe environment, or risking reputation for a few pennies saved?” he said.
Speaking about the YouTube scenario, he said it is difficult for platforms like YouTube to police content when around 400 hours of video are uploaded to the platform every minute.