Wednesday, July 24, 2024


    Experiment Reveals Meta Allowed Hateful Political Ads on its Platform

    A report by two civil society organizations revealed that Meta allowed ads containing inflammatory sentiments against minorities to run on its platform during India’s election period. In an experiment conducted by Ekō and India Civil Watch International, 14 out of 22 ads containing AI-generated content and targeting India’s Muslim population bypassed Meta’s systems for detecting hate speech and election misinformation.

    What was the experiment?

    Researchers created ads on their Facebook accounts that incorporated AI-generated and manipulated content using Dall-E, Midjourney, and Stable Diffusion. The content of these ads called for “violent uprisings targeting Muslim minorities, disseminated blatant disinformation exploiting communal or religious conspiracy theories prevalent in India’s political landscape, and incited violence through Hindu supremacist narratives.”

    The content of the ads included:

    • Accusing opposition parties of alleged Muslim “favoritism”
    • Sharing sentiments of India being swarmed by Muslim “invaders”
    • Claiming that Muslims attacked India’s Ram temple and calling for Muslims to be burned
    • Claiming EVMs (electronic voting machines) were being destroyed, and calling for a violent uprising
    • Calling for the execution of a prominent lawmaker by claiming their allegiance to Pakistan
    • Vilifying Muslims and calling for the burning of Muslims
    • A video similar to the doctored video of Amit Shah claiming to ban reservations for backward castes

    All of these ads violated Meta’s Community Guidelines on hate speech, but only five were rejected for violating its policies on “hate speech, bullying and harassment, misinformation and violence and incitement.”

    These ads were also posted during Phases 3 and 4 of the electoral process in districts of Madhya Pradesh, Assam, Karnataka, Andhra Pradesh, Telangana, and Kashmir. This is noteworthy because this period falls within the “silence period” mandated by the Election Commission, during which, from 48 hours before polling until voting concludes, no political ads may be shared on social media. Even so, only 3 of the researchers’ ads were rejected by Meta for qualifying as “social issue, electoral or political ads.”

    Easy to bypass Meta

    Notably, Meta requires accounts running political ads to first get authorized by confirming their identity and creating a disclaimer that lists who is paying for the ads. The researchers could not complete this process because their dummy Facebook accounts were located outside India. Yet they were still able to post the ads.

    The ads were placed in English, Hindi, Bengali, Gujarati, and Kannada. This is also noteworthy because the Meta Oversight Board is currently investigating whether there is a discrepancy in how takedowns of guideline-violating posts are handled across different languages.

    Additionally, Meta’s automated ad review system failed to identify images that violated its Community Guidelines. Ads that were flagged could be slightly modified to circumvent the system, the researchers noted. For example, a variant of an ad inciting violence against Muslims and advocating the burning of their places of worship passed through the platform’s filters after a handful of words were adjusted or a different image was generated. The report noted that these adjustments took minutes, and the revised ads were tested and cleared in less than 12 hours.

    Thus, exploiting weaknesses in Meta’s automated review system to run ads inciting violence is easier than imagined.

    What can Meta do to remedy this?

    The report charts out recommendations for Meta to prevent the spread of hateful content and misinformation:

    • Adopting the election ‘silence period’ as mandated by the ECI
    • Ensuring transparency by requiring advertisers to disclose the source of their funds and establishing a strict corporate policy limiting political advertising
    • Banning shadow advertisers, i.e. advertisers that cannot be vetted as legal persons, from sharing political ads
    • Ensuring that fact-checkers in India can label ads containing misinformation and disinformation
    • Ensuring fact-checked information is correctly labeled and/or removed in all languages
    • Ensuring that the dehumanizing, caricaturing, and demonizing of minorities in India is checked and restricted in line with the platform’s hate speech policy
    • Proactively acting to restrict the uploading and re-spawning of disinformation and hate speech pages and profiles, especially by political players
    • Allocating resources to address the risk of harm, proportional to the number of people at risk
    • Shutting down the recommender system and opening algorithms to public audits by civil society and academia

    Meta’s response to the concerns

    The researchers reached out to Meta for their response to the experiment’s findings. Meta stated that they would investigate the images shared by the researchers. It also directed the researchers to its policies on election misinformation, hate speech, and AI-generated content.

    • Meta stated that all advertisers were required to go through an authorization process, that they were responsible for complying with all applicable laws, and that ads violating Meta’s Community Guidelines would be removed. It added that AI-generated content is reviewed and labeled by a network of “independent fact-checkers,” and that once content is labeled as “altered,” its distribution is reduced.
    • They also reiterated that advertisers globally were required to disclose when they use AI to create or alter a political or social issue ad. However, they did not address the loophole the researchers found.
    • They also assured that the Election Commission of India has access to a “high-priority channel” to flag content that may violate election laws, and that any ads escalated by the Commission during silence periods were taken down. However, Meta noted that political advertisers who share content on Facebook are responsible for complying with the law.


    The post Experiment Reveals Meta Allowed Hateful Political Ads on its Platform appeared first on MEDIANAMA.
