CXOToday has engaged in an exclusive interview with Nachiket Deole, Head of Sales – India, DoubleVerify
How will challenges in maintaining quality inventory and safeguarding against unsafe content impact programmatic advertising in 2024?
In the ever-evolving landscape of programmatic advertising, marketers achieve long-term success and brand credibility by prioritizing quality inventory and mitigating unsafe content. Both fundamentals have played a pivotal role in shaping the outcomes of digital advertising campaigns. As technologies advance, however, advertisers must adapt their strategies, and in recent years AI has emerged as the dominant force shaping the future trajectory of programmatic advertising.
While AI presents numerous opportunities for brands, it has also brought about a new set of challenges, with malicious actors exploiting these tools to create content farms and made-for-advertising (MFA) websites — a concept Adalytics recently explored in a report. While there are different interpretations of what qualifies as MFA, DV formally defines MFA content as websites that utilize monetization strategies to maximize ad arbitrage profits. These sites often rely heavily on paid media and display an unusually high volume of ads compared to their content.
In July 2023, amidst concerns regarding AI tools accelerating the production of inappropriate online content, the ANA released a report indicating that MFA websites comprise one-fifth (21 percent) of all programmatic ad impressions and attract 15 percent of total advertising expenditure. Additionally, DV’s independent analysis discovered that certain MFA sites generate hundreds of millions of impressions monthly.
While this is a somewhat new challenge for advertisers, DV provides extensive coverage for MFA sites with its tiered MFA brand suitability categories. We use a proprietary analysis process that blends human and AI-driven audits to identify MFA sites at scale. These sites are categorized based on a thorough assessment of their ad monetization methods, traffic origins, content creation approaches and other indicators of MFA inventory. The effectiveness of our solutions was reaffirmed by Adalytics, whose report showed that our brand safety and suitability solution had categorized all of the identified sites and subdomains within our traffic footprint as MFA.
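The tiered assessment described above can be sketched in simplified form. The signal names, thresholds and tier labels below are illustrative assumptions for exposition only, not DV's proprietary methodology:

```python
# Illustrative, simplified MFA scoring sketch. Every signal, weight and
# threshold here is a hypothetical stand-in, not DV's actual model.
from dataclasses import dataclass

@dataclass
class SiteSignals:
    ad_to_content_ratio: float      # ads relative to page content
    paid_traffic_share: float       # fraction of visits from paid media
    templated_content_score: float  # 0..1, higher = more mass-produced

def mfa_score(s: SiteSignals) -> float:
    """Blend several MFA indicators into a single 0..1 score."""
    score = 0.0
    if s.ad_to_content_ratio > 3.0:   # unusually ad-heavy pages
        score += 0.4
    if s.paid_traffic_share > 0.5:    # heavy reliance on bought traffic
        score += 0.4
    score += 0.2 * s.templated_content_score
    return min(score, 1.0)

def mfa_tier(score: float) -> str:
    """Map the blended score onto tiered suitability categories."""
    if score >= 0.8:
        return "high-risk MFA"
    if score >= 0.5:
        return "moderate-risk MFA"
    return "not flagged"
```

In practice, a tiered output like this is what lets advertisers choose how aggressively to avoid MFA inventory rather than facing a binary block/allow decision.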
Unlike fraudulent inventory, MFA sites are not inherently fraudulent and numerous advertisers may choose to advertise on them. However, marketers require tools to assess whether specific MFA sites align with their brand values and advertising objectives. With DV’s brand safety and suitability solution, brands have the flexibility and resources to determine if and to what extent they wish their ads to appear on MFA sites.
In addition, the rise of generative AI is further accelerating the challenges of maintaining quality inventory and protecting against unsafe content.
- Media Adjacency: As the ease of generating content at scale with generative AI grows, the prevalence of unsafe or inappropriate online content may rise. This could pose a challenge in finding quality inventory, contributing to the proliferation of harmful content like misinformation and deepfakes.
- Media Creation: Leveraging generative AI for media creation increases efficiency, yet it introduces potential risks, including the generation of unsafe or inappropriate content and the possibility of intellectual property breaches.
- Ad Fraud: Besides the complexities of brand safety and maintaining quality inventory, generative AI amplifies the risk of invalid traffic, because automatically generated content, as seen on MFA sites, is easier for fraudsters to produce and exploit at scale.
Overall, ensuring the quality of media inventory, fortifying defenses against unsafe or inappropriate content and mitigating the risk of invalid traffic will remain paramount challenges in 2024. Fortunately, AI-driven content review and classification allow for verification at scale, helping ensure media quality and effectiveness.
In anticipation of the upcoming elections, what comprehensive strategies is DoubleVerify employing to ensure brand safety and effectively combat ad fraud? Additionally, could you elaborate on how AI plays a pivotal role in enhancing these security measures?
Interestingly, over four billion people, more than half of the global population, are living in countries that will hold nationwide elections in 2024, including India. With the upcoming 2024 Indian general election, multiple brands are monitoring opportunities closely to place their ads in this hyperactive environment. While this dynamic landscape offers ample opportunities for advertisers to secure prime spots and promote their brands effectively, it also raises concerns. Advertisers must remember that their ad placements might coincide with content containing hate speech and inflammatory remarks.
Additionally, when advertising dollars increase, fraudsters are likely to leverage the opportunity to benefit and siphon ad spend. Ad fraud is likely to hit US$172 billion globally by 2028 (Statista).
DV has determined that unprotected campaigns are highly vulnerable to ad fraud. By protecting their media campaigns, however, advertisers can reduce the likelihood that their ads appear alongside negative, inflammatory or hate speech-related content, or fall victim to fraud.
We established the DV Election Task Force to actively monitor election and political content worldwide in real time. This multidisciplinary group brings together a team of brand safety and suitability specialists, as well as fraud experts from the industry-leading DV Fraud Lab, to provide advertisers with a deep-dive analysis of content themes that emerge during an election season. The DV Election Task Force categorizes content that promotes baseless, incendiary or racially biased/motivated claims under the Inflammatory Politics and News (IPN) category and Hate Speech & Cyberbullying categories.
In 2022, the DV Election Task Force found notable increases in Inflammatory Politics and News (IPN) and Hate Speech rates. The average IPN rate was twice as high as the year-to-date average and the peak High-Risk Hate Speech (HRHS) rate was 11 times higher than the year-to-date average. We found that the periods where these spikes occurred coincided with the Maharashtra political crisis, as well as elections in five significant states: Uttar Pradesh, Punjab, Uttarakhand, Manipur and Goa. Advertising alongside such controversial and politically charged content can significantly impact customer thoughts and purchasing behaviors in the long run. Therefore, advertisers must be vigilant and safeguard their brands across emerging channels, especially during elections.
Advertisers that are focused on election-related content have access to two more brand suitability tools: exclusion lists and URL keyword blocking. App and site exclusion lists allow advertisers to avoid ad placement on specific apps, domains and subdomains, irrespective of DV’s classification. Keyword blocking, on the other hand, allows advertisers to prevent their ads from appearing on content with URLs containing specific keywords or phrases.
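The two controls described above amount to a URL filter applied before an ad serves. A minimal sketch, with hypothetical list contents and deliberately naive matching rules (a production system would be far more sophisticated):

```python
# Toy sketch of exclusion lists plus URL keyword blocking. The domains
# and keywords are hypothetical examples, not any real advertiser's lists.
from urllib.parse import urlparse

EXCLUSION_LIST = {"blocked.example.com", "example-news.app"}
BLOCKED_KEYWORDS = {"election", "riot"}

def allow_placement(url: str) -> bool:
    """Return True if an ad may serve on this URL."""
    parsed = urlparse(url.lower())
    host = parsed.hostname or ""
    # Exclusion list: block the listed domains and all their subdomains,
    # irrespective of any content classification.
    if any(host == d or host.endswith("." + d) for d in EXCLUSION_LIST):
        return False
    # Keyword blocking: block URLs containing any flagged term.
    full = host + parsed.path
    return not any(kw in full for kw in BLOCKED_KEYWORDS)
```

Note the trade-off this illustrates: keyword blocking is simple but blunt (it would also block a URL like `/election-cake-recipes`), which is why such lists need regular human curation.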
Ad fraud is an ever-present issue, whether or not it’s during election periods. To help brands navigate this environment effectively, DV continuously enhances its ad fraud solutions to align with current trends. To achieve this, we use the perfect synergy of people and technology with the DV Fraud Lab, composed of a dedicated team of data scientists, mathematicians and analysts from the cyber-fraud prevention community. Employing various methodologies, including AI, machine learning and manual review, the DV Fraud Lab continuously detects new forms of fraud. Through ongoing analysis, scenario management and research, DV identifies and updates protection measures in near real-time, safeguarding advertisers from fraudulent activities across sites, apps and devices.
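Conceptually, a pipeline that blends automated detection with analyst review can be sketched as a simple routing function. The score thresholds and routing labels below are illustrative assumptions, not DV Fraud Lab internals:

```python
# Hypothetical sketch of people-plus-technology fraud triage: a model
# score and a known-signature match decide whether an impression is
# filtered automatically, queued for analysts, or allowed.
def classify_impression(model_score: float, known_fraud_signature: bool) -> str:
    """Route an impression using a fraud-model score plus signature matching.

    model_score is an assumed 0..1 probability from an ML model;
    both thresholds are illustrative, not DV Fraud Lab values.
    """
    if known_fraud_signature or model_score >= 0.9:
        return "block"          # high confidence: filter automatically
    if model_score >= 0.6:
        return "manual-review"  # ambiguous: queue for human analysts
    return "allow"
```

The middle "manual-review" band is where the human side of the loop lives: analyst verdicts on ambiguous traffic can feed back into the model and the signature set, which is how protection updates in near real-time.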
How does Generative AI impact brands’ suitability and safety?
As generative AI continues to revolutionize brand strategies, its impact on media safety and suitability becomes increasingly evident. Identifying high-quality ad placements is growing more challenging amid the proliferation of harmful content such as deepfakes and misinformation. While generative AI makes media creation more efficient, it carries the risk of producing unsafe or unsuitable content, as well as potential intellectual property violations. And while machine learning algorithms can enhance ad performance, they must be deployed ethically to avoid inadvertently promoting unsafe content. Finding the right balance between embracing generative AI innovation and safeguarding brand identity and values is therefore paramount.
How does DoubleVerify leverage AI today with Human Oversight?
DoubleVerify leverages the combination of machine learning models and human oversight to combat fraud effectively at scale. Our AI algorithms rapidly detect fraudulent activities, providing a robust first line of defense, while our experts remain closely involved in the process to enhance accuracy. In content classification, particularly for video and other media, DoubleVerify relies on machine learning for efficient classification, complemented by human oversight. This collaborative approach ensures high precision in content categorization, drawing on the distinct strengths that both AI and human expertise bring to the table.
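A human-in-the-loop classifier of this kind can be illustrated with a simple confidence gate. The category names and threshold below are hypothetical, chosen only to echo the categories mentioned earlier in this interview:

```python
# Illustrative human-in-the-loop content classification: take the model's
# top category, and send low-confidence calls to human reviewers.
# The threshold and category names are assumptions, not DV's system.
REVIEW_THRESHOLD = 0.75  # assumed cutoff below which a human reviews the call

def classify_content(model_output: dict) -> tuple:
    """Pick the top-scoring category and flag low-confidence calls.

    model_output maps category name -> model confidence (assumed 0..1).
    Returns (category, needs_human_review).
    """
    category, confidence = max(model_output.items(), key=lambda kv: kv[1])
    return category, confidence < REVIEW_THRESHOLD
```

The design choice is that precision comes from the gate, not the model alone: automated classification handles the clear-cut volume, while borderline calls, where misclassification would be most costly for a brand, get human judgment.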
AI also drives attention-related strategies such as campaign optimization, performance management and cost efficiency. Our AI-powered systems excel at navigating the complexities of campaign optimization, delivering practical, cost-conscious results, while human oversight ensures the technology remains aligned with the organization's overarching goals and values. Furthermore, DV fraud detection is at the forefront of addressing the challenges posed by AI-generated content replication. Our expertise in impression-based telemetry and fraud detection methodologies, combining AI capabilities with human insight, enables us to identify and mitigate invalid traffic across platforms. This dual approach lets us adapt and respond swiftly to evolving fraud schemes, including manipulative tactics that shift from one shell website or app to another.