Thursday, July 25, 2024

    Google Cracks Down on Ads Promoting AI Deepfake Pornographic Content Generation

    In a significant policy shift, Google has announced it will prohibit all advertisements promoting services that facilitate the creation of deepfake pornography and other forms of synthetically generated nude content, according to an article published in The Verge. The ban on ads for deepfake porn tools will take effect on May 30.

    According to the update to Google's Inappropriate Content Policy published on May 1, Google will suspend the accounts of violating advertisers without prior warning upon detecting a violation, and those advertisers will be permanently barred from advertising with Google again.

    The update defines the violating content as “sites or apps that claim to generate deepfake pornography, instructions on how to create deepfake pornography, endorsing or comparing deepfake pornography services”.

    While Google has long forbidden sexually explicit advertisements on its platforms, the company’s existing policies did not explicitly cover services that enable users to generate said material through AI techniques like deepfakes.

    Google spokesperson Michael Aciman told The Verge that the “update is to explicitly prohibit advertisements for services that offer to create deepfake pornography or synthetic nude content”. He added that Google reviews advertisers using a combination of automated systems and human review, and that anyone who violates the policy will be removed.

    In 2023 alone, the tech giant purged over 1.8 billion ads for breaching its sexual content policies, according to The Verge's report.

    How have Google and others helped with deepfake detection?

    In 2019, Google released what it described as “a large dataset” of visual deepfakes to support researchers' deepfake detection efforts, following incidents in which images of public figures, including Indian journalist Rana Ayyub, were used to spread misinformation.

    Google revealed at the time that it had worked with paid, consenting actors to record hundreds of videos, which were then used to generate thousands of deepfake videos using publicly available deepfake generation methods.

    Meta and Microsoft also took on the challenge of addressing deepfakes in 2019, when they launched the Deepfake Detection Challenge (DFDC), building a dataset of deepfake videos with the help of academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY.

    Why does it matter?

    In the context of the ongoing general elections in India, Google's new update is significant. Civil society organizations recently wrote to the Election Commission of India asking for a model code of conduct for digital platforms that would include safeguards against deepfakes. Separately, a deepfake video of Indian actor Aamir Khan appearing to endorse a political party in the 2024 general elections came to light, prompting the actor to file an FIR and issue a clarification.




    The post Google Cracks Down on Ads Promoting AI Deepfake Pornographic Content Generation appeared first on MediaNama.
