Navigating the New Frontier of AI-Generated Video Content: A Guide to Brand Safety

Blog | 3 min read | Apr 19, 2024 | Kate Vassallo

In the rapidly evolving landscape of digital video, the introduction of OpenAI’s Sora on February 15, 2024 marked a significant milestone. This cutting-edge generative AI technology has the potential to revolutionize the way we create and consume video content, offering the ability to produce cinematic, lifelike videos up to a minute long from simple text descriptions. Sora is currently in beta, and OpenAI is collaborating with industry experts to establish standards and controls that ensure the technology is used responsibly.

As we stand on the brink of this technological leap, the conversation around brand safety in the digital video ecosystem has never been more critical. With a safeguarded public release of Sora anticipated in the coming months, let’s examine the safety measures OpenAI has proposed, the vulnerabilities JWP sees in them, and recommendations for advertisers navigating this new terrain.

OpenAI’s Safety Plan and Its Vulnerabilities

OpenAI has outlined a comprehensive safety plan to mitigate risks associated with AI-generated video content. Key measures include:

  1. Employing C2PA Metadata

All OpenAI-generated videos are tagged with C2PA metadata to identify them as AI-created. However, this approach faces significant challenges. Metadata can be easily removed or altered with third-party tools, undermining the effectiveness of this safeguard. Furthermore, the presence of C2PA metadata does not distinguish between safe and unsafe AI-generated content, posing a dilemma for advertisers who wish to avoid or devalue AI-generated video content (a minimal provenance check is sketched after this list).

  2. Prohibiting Unsafe Content

OpenAI’s policies strictly prohibit the generation of videos involving extreme violence, sexual content, hateful imagery, celebrity likeness, and intellectual property violations. Yet, the subtleties of harmful misinformation—such as politically motivated falsehoods—present a nuanced challenge. Misinformation does not need to be overtly violent to be detrimental, and in an election year, the potential for misuse of video content to spread political misinformation is a significant concern for brand safety.
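
To make the metadata point concrete, here is a minimal sketch of how a buyer-side workflow might check a file for a C2PA manifest. It is illustrative only, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed and on the PATH (its flags and output format vary by version); it is not OpenAI's or JWP's tooling.

    import json
    import subprocess

    def read_c2pa_manifest(video_path: str):
        """Return the file's C2PA manifest store as a dict, or None if absent.

        Assumes `c2patool <file>` prints the manifest store as JSON and
        exits nonzero when no manifest is found -- verify against your
        installed version's documentation.
        """
        result = subprocess.run(
            ["c2patool", video_path],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # No manifest found (or the tool failed). Note the limitation
            # discussed above: absence of C2PA metadata does NOT prove a
            # video is human-made, since the metadata is easily stripped.
            return None
        return json.loads(result.stdout)

    manifest = read_c2pa_manifest("clip.mp4")
    if manifest is None:
        print("No C2PA manifest: provenance unknown, not necessarily human-made")
    else:
        print("C2PA manifest present; claims to inspect:", list(manifest.keys()))

As the comments note, this check cuts only one way: a manifest attests to provenance, but its absence proves nothing, which is exactly why metadata alone cannot carry brand safety.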

JWP’s Recommendations for Advertisers

In response to these challenges, JWP offers strategic recommendations for advertisers aiming to safeguard their brands in the evolving video ecosystem:

Opt for the Open Internet Over Walled Gardens

Editorial content on the open internet is generally safer than user-generated content inside walled gardens, where even verified videos warrant scrutiny. Prioritizing video content from publishers with journalistic integrity is paramount.

Ensure Video Placement Transparency

Advertisers should focus on running video ads within publisher-produced content. This approach minimizes exposure to safety vulnerabilities that may arise from external APIs pulling in third-party content.

Implement Video-Level Brand Safety

Complementing page-level brand safety with video-specific measures is crucial. JW Player, as a leading instream video player, offers enhanced protection by analyzing the video content itself—utilizing transcription, audio, and visual signals to classify and protect against unsafe content.
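
JWP has not published the details of its classification pipeline, but as a rough illustration of the idea, the sketch below transcribes a video's audio track and applies a toy keyword screen to the transcript. It assumes the open-source openai-whisper package (and ffmpeg) is installed; the category names and keyword list are hypothetical, and a production system would use trained classifiers over transcript, audio, and visual signals rather than keyword matching.

    import whisper  # open-source speech-to-text: pip install openai-whisper

    # Hypothetical category -> keyword screen. A real system would use
    # trained classifiers aligned to an industry taxonomy, not keywords.
    UNSAFE_TERMS = {
        "violence": {"shooting", "attack", "assault"},
        "adult": {"explicit"},  # placeholder terms for the sketch
    }

    def classify_video(path: str) -> dict:
        """Transcribe a video and flag unsafe categories found in the text."""
        model = whisper.load_model("base")  # small model, enough for a demo
        transcript = model.transcribe(path)["text"].lower()
        hits = {
            category: sorted(term for term in terms if term in transcript)
            for category, terms in UNSAFE_TERMS.items()
        }
        return {category: terms for category, terms in hits.items() if terms}

    # A non-empty result would route the video for human review or
    # block ad serving against it.
    print(classify_video("clip.mp4"))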

About JW Player

JW Player has been a pioneer in video innovation since 2004, powering the largest source of owned & operated publisher video on the open internet. Its commitment to advancing video technology while prioritizing brand safety offers a beacon of guidance for advertisers navigating the new possibilities and challenges posed by AI-generated video content.

As we embrace the future of video content creation, understanding and addressing the complexities of brand safety will be essential. By adopting a proactive approach and leveraging the insights and technologies available, advertisers can confidently navigate this new digital landscape, ensuring their brands remain protected in an ever-changing world.