Tuesday, March 5, 2024


    AI Briefing: Tech giants adopt AI content standards, but will it be enough to curb fakes?

    AI providers and government entities announced a flurry of efforts aimed at bolstering the internet’s defenses from AI-generated misinformation.

    Last week, major AI players announced new transparency and detection tools for AI-generated content. Hours after Meta detailed plans for labeling AI images from outside platforms, OpenAI said it will start including metadata in images generated by ChatGPT and its DALL-E API. Days later, Google announced it will join the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), a key group setting standards for various types of AI content. Google will also start supporting Content Credentials (CC) — a sort of “nutrition label” for AI content created by C2PA and the Content Authenticity Initiative (CAI). Adobe, which founded CAI in 2019, debuted a major update to CC in October.

    The updates were noteworthy on several fronts, most of all for bringing major distribution platforms into the standardization process. Platform-level participation could also help drive mainstream adoption of AI standards and help people better understand whether content is real or fake. Andy Parsons, senior director of CAI, said giants like Google contribute to the “snowball effect” needed to improve the internet’s information ecosystem — an effort that also requires alignment across companies, researchers and government entities.

    Continue reading this article on digiday.com.
