Programmatically placed ads for dozens of nonprofits and universities have been showing up on misinformation websites, including some that overtly conflict with the missions of the major organizations paying for those ads, according to a new report released today.
Researchers at the news reliability rating service NewsGuard said they found ads for Planned Parenthood on a website promoting dangerous herbal abortion recipes. Elsewhere, ads for groups such as Amnesty International and The Red Cross were served on websites known for promoting pro-Russian propaganda related to the war in Ukraine. Other ads mentioned in the report include some for health organizations and U.S. colleges that showed up alongside online misinformation about Covid-19 vaccines.
The report illustrates the ongoing problem with programmatic platforms that some say remain too complex and opaque for proper accountability.
The report, which covers ads from 57 nonprofits and government organizations found across 50 websites that NewsGuard says publish misinformation, is part of an ongoing collaboration with the European Commission’s Joint Research Centre. It’s also just one of many reports NewsGuard has released in recent months about misinformation across a variety of content formats and platforms. Others in the past year have examined AI-generated content farms, disinformation on ChatGPT and misinformation problems with TikTok’s search engine.
Although the latest report includes just 108 ads, it illustrates an ongoing problem with how misinformation is monetized, sometimes at the expense of unsuspecting do-gooders who end up accidentally funding the very problems they seek to solve. Researchers also say the findings highlight the opaqueness and complexity of the ad-tech ecosystem. (Advertisers spend an estimated $2.6 billion on misinformation websites each year, according to 2021 research from NewsGuard and Comscore.)
“It sends money and advertising to exactly the sites where advertisers don’t want to go,” said Steven Brill, NewsGuard’s co-editor-in-chief and co-CEO. Despite the small number of ads appearing on the websites NewsGuard identified, misinformation overall remains a major problem. Researchers at Stanford University estimate that 68 million Americans accounted for 1.5 billion visits to misinformation websites during the 2020 election, according to a paper published last month in the journal Nature Human Behaviour. However, that marks an improvement from the 2016 election: 44.3% of Americans visited misinformation websites in 2016, compared with just 26.2% in 2020.
Identifying and removing countless websites from ad servers can also end up feeling a bit like a game of whack-a-mole for companies like Google, which NewsGuard said served up 70% of the ads identified for the report. (The rest came from other ad platforms such as Yahoo or unknown sources.)
“If there were greater transparency in programmatic advertising, the industry would have been reformed long ago,” said Gordon Crovitz, NewsGuard’s co-CEO and co-editor-in-chief. “This is not a difficult problem to solve.”
When asked for comment about the findings, Google spokesperson Michael Aciman said the company has “invested significantly” in recent years to expand its policies addressing the proliferation and monetization of misinformation.
According to Aciman, Google reviewed “a handful” of the examples NewsGuard shared and removed ads from serving where pages violated Google’s content policies. Some of the websites NewsGuard shared with Google had already faced previous page-level enforcement. However, Aciman said the company couldn’t comment on all the findings because NewsGuard declined to share its full report with Google or provide a full list of the exact websites and the ads that appear on them.
“We’ve developed extensive measures to prevent ads from appearing next to misinformation on our platform, including policies that cover false claims about elections, climate change denial and claims related to [the] COVID-19 pandemic and other health-related issues,” Aciman told Digiday via email. “We regularly monitor all sites in our publisher network and when we find content that violates our policies we immediately remove ads from serving.”
In 2022 alone, the company took action against 1.5 billion publisher pages and 143,000 sites, according to Aciman. Google’s current policies already cover many issues related to misinformation, including politics, health topics such as Covid-19, climate change and Russia’s invasion of Ukraine; those policies were added in 2019, 2020, 2021 and 2022, respectively.
Digiday reached out to some of the organizations for comment, including Planned Parenthood, Amnesty International and the American Red Cross. Only the Red Cross provided a statement, with a spokesperson saying via email that the organization and its advertising partners “work diligently to monitor ad placements that are not in line with the fundamental principles of the global Red Cross movement.”
“We partner with Integral Ad Science, a platform that prevents our ads from being served on sites that contain things such as hate speech, violence, political content and more, utilizing some of the strictest levels of filters available,” the Red Cross spokesperson said. “We do our best to continually update the list of sites our advertisements should not appear on and greatly appreciate it when a site is brought to our attention. These situations can occasionally happen from time to time as many new sites are added to the web constantly.”
For years, the problem of brand safety has meant big business not only for companies like NewsGuard but also for ad-tech giants such as DoubleVerify and IAS, both of which offer exclusion and inclusion lists for advertisers.
The industry still hasn’t agreed on uniform standards for how to deal with the monetization of misinformation, said Neal Thurman, co-founder of the Brand Safety Institute. For example, should misinformation be identified at the domain level or at the content level? He added that there are also questions about how to distinguish genuinely harmful content from content that is merely “ill-considered.”
“It’s never going to be perfect,” said Mike Zaneis, CEO of the Trustworthy Accountability Group and co-founder of the Brand Safety Institute. “Even if you’re using a brand safety vendor, [and] have inclusion and exclusion lists, the problem is if you have a handful of ads on the worst kind of content, it does have an impact on your brands.”
Even if ads seen on misinformation websites don’t garner millions of impressions, experts say ads that show up in the wrong context can still affect how people perceive a company. Sometimes that happens when ads slip through the cracks via affiliate marketers, or when an ad-tech partner doesn’t have rigorous standards in place.
Ongoing advertising and misinformation concerns also raise questions as to whether the emerging world of generative AI might help fix things or just further compound the existing problem.
That’s especially relevant when it comes to how chatbots like Google’s Bard and Microsoft’s Bing will inform users within chat, and the ways they might send traffic or ad revenue to reputable or disreputable websites. When it comes to answering queries, will quality content be prioritized over websites where misinformation or other questionable content has spread?
“If you contrast traditional search with generative AI, we’re going to come to really miss traditional search,” Crovitz said.