Friday, July 12, 2024

    UNESCO Warns AI Could Spread Misinformation Regarding Holocaust

    The United Nations Educational, Scientific and Cultural Organization (UNESCO) has released a report warning of the possibility of Artificial Intelligence (AI) spreading harmful misinformation about the Holocaust. AI-generated deepfake videos, simplified historical accounts lacking context, and the propensity of generative AI to ‘hallucinate’ (fabricate information that does not in fact exist) can result in increased misinformation and antisemitism, the report says. This is especially concerning, since students are becoming increasingly reliant on AI to complete assignments. The report cautions that unless decisive action is taken, AI can be used by bad actors to seed disinformation and hate speech.

    “If we allow the horrific facts of the Holocaust to be diluted, distorted or falsified through the irresponsible use of AI, we risk the explosive spread of antisemitism and the gradual diminution of our understanding about the causes and consequences of these atrocities. Implementing UNESCO’s Recommendation on the Ethics of AI is urgent so that younger generations grow up with facts, not fabrications,” said Audrey Azoulay, UNESCO Director-General.

    Here are the key highlights from the report.

    How Does AI Threaten Knowledge About The Holocaust?

    Manipulating and leveraging AI models to produce hate speech:

    AI systems can inadvertently spread Holocaust denial and hateful content due to data flaws, lack of guardrails, and vulnerability to exploitation. Coordinated efforts by malicious actors can manipulate search engines to prioritise offensive images mocking the Holocaust. Generative AI models lacking proper filters are susceptible to “jailbreaking” techniques that bypass their restrictions, enabling the generation of Holocaust denial material. Major platforms have acknowledged these issues with AI systems matching queries to hateful online content. Addressing data biases, implementing robust filters, and raising awareness about these vulnerabilities are crucial to preventing the further spread of Holocaust misinformation facilitated by AI.

    False statements and narratives in generative AI content:

    AI models can generate misleading or factually incorrect content about the Holocaust due to flaws in their training data and lack of comprehensive information. Errors can stem from the incorporation of Holocaust denial websites in training data, or from “data voids” where there is insufficient reliable data in certain languages, which can produce search results unrelated to the Holocaust. Generative AI is particularly prone to “hallucinations”, where it fabricates non-existent information to fill gaps, such as inventing fake historical events like a “Holocaust by drowning” campaign. Major AI models have erroneously provided distorted narratives supported by fabricated quotes attributed to Holocaust witnesses. The lack of robust data sources and fact-checking mechanisms makes AI vulnerable to propagating false or misleading information about the Holocaust unless properly supervised and moderated.

    Producing fake historical evidence:

    AI systems can generate highly realistic but entirely fabricated content related to the Holocaust, including fake survivor testimonies, doctored historical evidence, and distorted representations of events and figures. This synthetic media is often indistinguishable from authentic materials, even for experts. Deepfakes using AI-generated audio and imagery can falsely depict historical perpetrators denying involvement or portraying antisemitic ideologies. Generative AI tools have been exploited to create revisionist narratives rehabilitating Nazis and producing ahistorical conversations with figures like Hitler and Goebbels. AI has also modernised and altered images of Holocaust victims in misleading ways. As generative AI becomes more advanced, guardrails and robust fact-checking are crucial to prevent the proliferation of deceptive Holocaust misinformation and denial that undermines education and historical truth.

    Jeopardising belief in authentic historical evidence:

    The mere existence of AI systems capable of generating synthetic media may inadvertently enable Holocaust denial by cultivating societal doubt about the authenticity of real historical evidence and survivor testimonies. This phenomenon, known as the “liar’s dividend,” allows Holocaust deniers to leverage the perceived possibility of AI manipulation to falsely dismiss genuine audio-visual documentation as fake or AI-generated, without necessarily creating explicit deepfake artefacts. Suggesting that archival footage of survivors or Nazis could be AI-fabricated may prompt broader rejection of proof of the Holocaust atrocities, even without explicit deepfakes of figures like Hitler professing positive views about Jewish people. The capabilities of modern AI raise concerns about enabling Holocaust revisionism by undermining trust in authentic historical records.

    Oversimplifying Holocaust histories:

    AI systems tend to oversimplify and focus only on a few well-known aspects of the immensely complex history of the Holocaust, such as images of liberated camps or encyclopaedia summaries. Tech companies’ attempts to remedy bias by filtering queries to authoritative sources may further consolidate limited Holocaust narratives. On social media, AI recommendation systems can create “echo chambers” restricting access to diverse historical information beyond the well-documented events. Less well-known episodes are often muted, with basic searches overwhelmingly showing Auschwitz-Birkenau to the exclusion of other locations, experiences and testimonies. Generative AI can also produce false information about lesser-known Holocaust events based on prominent narratives in its training data, propagating historical inaccuracies. This oversimplification by AI risks disrupting nuanced understanding of the Holocaust’s complex history by prioritising limited narratives lacking crucial context.

    Language bias: reinforcing gaps in global Holocaust understanding:

    AI systems are often designed to customise results based on the user’s location or language, which can reinforce gaps in understanding complex historical events like the Holocaust across different social, cultural, and geographic groups. Search results for terms related to the Holocaust in Cyrillic-script languages like Russian had a higher prevalence of Holocaust denial websites among the top results compared to Latin-script languages, with 8–14% of top Russian search results promoting denial. Russian-language searches also retrieved more graphic victim images but fewer historical photos from liberated camps than English searches. Similarly, generative AI responses about the Holocaust in Ukraine varied substantially in accuracy and completeness depending on the prompt language used – Google’s Bard declined to answer over 30% of prompts in Russian but only 1% in English, while ChatGPT provided the most factually incorrect outputs for Ukrainian-language prompts. This language-based customisation means users in certain linguistic communities are substantially more likely to receive incorrect, incomplete or distorted information about the Holocaust from AI systems.

    How To Counter Holocaust Distortion In AI Systems:

    The report also presents ways AI can be used to educate people about the Holocaust. According to UNESCO, the vast historical record of the Holocaust presents immense challenges that AI can help address by organising and analysing the hundreds of thousands of survivor testimonies across languages and contexts. AI can also enable new avenues for Holocaust research, such as indexing complex archival documents, analysing sentiment patterns in testimonies, and studying digital Holocaust memory materials. Educators are recommended to provide appropriate historical context while using AI tools in the classroom and be vigilant about falsehoods and misinformation.

    UNESCO urged swift action to address the risks of AI spreading misinformation about the Holocaust. It called on governments to implement its 2021 Recommendation on the Ethics of AI. It also urged tech companies to adhere to UNESCO’s principles of fairness, transparency, human rights and due diligence when developing AI applications, with eight firms committing to this in early 2024. Tech firms were recommended to collaborate with Jewish communities, survivors, educators, antisemitism experts and historians when creating new AI tools related to the Holocaust.

    Misinformation spread by AI has been a hot topic recently, with many concerns being raised about the unreliability of generative AI amid our increasing dependency on it. Last month, Google’s AI Overview was slammed for providing inaccurate information in response to multiple questions, often misunderstanding the context in which certain information appeared.

    The post UNESCO Warns AI Could Spread Misinformation Regarding Holocaust appeared first on MEDIANAMA.
