    Top AI Companies Commit To Safe AI Development at the AI Seoul Summit

    Sixteen major AI companies signed on to a new set of AI safety commitments at the AI Seoul Summit, co-hosted by the UK and South Korea on May 21. The “Frontier AI Safety Commitments” were signed by firms from across the globe, including companies from the United States, China and the Middle East. Zhipu.ai from China and the Technology Innovation Institute from the UAE were among the companies that agreed to the new pact.

    Under the terms, each AI company will publish a framework detailing how the risks of its most advanced AI models will be measured and mitigated, including the potential for misuse by bad actors. Crucially, the frameworks must define thresholds at which risks would be “deemed intolerable” unless adequate safeguards are put in place.

    The AI giants have committed to not developing or deploying models at all if the risks cannot be brought below these thresholds through mitigation. Input from governments will be factored in when setting the risk tolerance levels.
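
    To make the logic of such a framework concrete, the following is a minimal, hypothetical sketch in Python of a deployment gate. The risk categories, scores and thresholds are made up for illustration; none of the signatories have published their frameworks in this form. The sketch simply refuses deployment when any post-mitigation risk estimate meets or exceeds its threshold.

    from dataclasses import dataclass

    # Hypothetical risk categories and intolerable-risk thresholds. Real
    # frameworks will define their own categories, metrics and levels.
    INTOLERABLE_THRESHOLDS = {
        "cyber_misuse": 0.7,
        "bio_misuse": 0.5,
    }

    @dataclass
    class RiskAssessment:
        category: str
        residual_risk: float  # estimated risk after mitigations, 0.0 to 1.0

    def may_deploy(assessments: list[RiskAssessment]) -> bool:
        """Return True only if every residual risk stays below its threshold,
        mirroring the commitment to not develop or deploy a model otherwise."""
        for a in assessments:
            threshold = INTOLERABLE_THRESHOLDS.get(a.category)
            if threshold is not None and a.residual_risk >= threshold:
                print(f"Blocked: {a.category} risk {a.residual_risk:.2f} "
                      f"is at or above threshold {threshold:.2f}")
                return False
        return True

    if __name__ == "__main__":
        post_mitigation = [
            RiskAssessment("cyber_misuse", 0.30),
            RiskAssessment("bio_misuse", 0.55),  # exceeds the made-up threshold
        ]
        print("Deploy?", may_deploy(post_mitigation))

    In practice, the published frameworks are expected to describe thresholds and mitigations in far richer, largely qualitative terms; the sketch only illustrates the “do not deploy above the threshold” decision the companies have signed up to.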

    The safety frameworks are expected to be made public ahead of the AI Action Summit hosted by France in early 2025. The 16 signatories represent the world’s leading AI powers, and the agreement marks a first in joint AI safety standards between rivals like the U.S. and China. The companies party to the agreement include major players like OpenAI, Microsoft, Google, Amazon and Anthropic.

    The commitments were announced as the two-day AI Seoul Summit kicked off discussions on bolstering the governance of artificial intelligence worldwide. Organised by the governments of the United Kingdom and the Republic of Korea, the Summit brought together governments, industry leaders, academia and civil society. It builds upon the first-ever AI Safety Summit, held at Bletchley Park in the UK last year, which called for international cooperation on the safe use of AI and resulted in the Bletchley Declaration on frontier AI models.

    UK Prime Minister Rishi Sunak said: “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety. These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI. It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology. The UK’s Bletchley summit was a great success and together with the Republic of Korea we are continuing that success by delivering concrete progress at the AI Seoul Summit.”

    What did the Commitments say?

    The Frontier AI Safety Commitments define ‘Frontier AI’ as “highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities present in the most advanced models.” The signatory companies commit to achieving certain outcomes, which are detailed in the Commitments alongside the steps to reach them.

    Outcome 1: Organisations effectively identify, assess and manage risks when developing and deploying their frontier AI models and AI systems:

    • Conduct risk assessments across the AI lifecycle, considering model capabilities, use contexts, mitigations, and internal/external evaluations.
    • Define risk thresholds where severe unmitigated risks would be intolerable, with government input and alignment to international agreements.
    • Outline risk mitigation strategies to keep risks within thresholds.
    • Have explicit processes to not develop/deploy if mitigations cannot keep risks below thresholds.
    • Continually invest in improving risk assessment, threshold-setting, and mitigation capabilities.

    Outcome 2: Organisations are accountable for safely developing and deploying their frontier AI models and AI systems:

    • Adhere to the above commitments through internal governance frameworks and sufficient resourcing.

    Outcome 3: Organisations’ approaches to frontier AI safety are appropriately transparent to external actors, including governments:

    • Provide public transparency on implementation, except for sensitive information. Share more details with governments.
    • Explain the involvement of external actors in risk assessment and adherence to the safety framework.

    Furthermore, the signatories also commit to implementing current best practices related to frontier AI safety, including:

    • internal and external red-teaming of frontier AI models for severe and novel threats;
    • working toward information sharing;
    • investing in safeguards to protect proprietary and unreleased model weights;
    • incentivising third-party discovery and reporting of issues and vulnerabilities;
    • developing mechanisms that help users identify AI-generated content (see the sketch after this list);
    • publicly reporting model capabilities, limitations, and domains of appropriate and inappropriate use;
    • researching societal risks posed by frontier AI models; and
    • using frontier AI to help address the world’s greatest challenges.
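
    On the point about helping users identify AI-generated content, the Commitments do not prescribe a specific mechanism. Purely as a hypothetical illustration, the Python sketch below attaches a simple provenance record, carrying the generating model, a timestamp and a content hash, to a piece of generated text; real-world approaches such as watermarking or signed provenance metadata are considerably more involved, and this is not any signatory’s actual method.

    import hashlib
    import json
    from datetime import datetime, timezone

    def label_ai_content(content: bytes, model_name: str) -> dict:
        """Build a simple provenance record for a piece of generated content:
        which model generated it, when, and a hash to detect later changes."""
        return {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(content).hexdigest(),
        }

    if __name__ == "__main__":
        text = "An AI-written summary of the AI Seoul Summit.".encode("utf-8")
        record = label_ai_content(text, model_name="example-frontier-model")
        print(json.dumps(record, indent=2))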

    How are frontier AI models different from foundational AI models?

    The Commitments define frontier AI as advanced models that can perform a wide range of tasks and often match or exceed the capabilities of other contemporary models. However, they do not sufficiently differentiate frontier AI models from foundational AI models, which are trained on broad, diverse datasets and can similarly be used for a wide variety of tasks. One of the most advanced AI models today, GPT-4o, is a foundational model; under the definition adopted by the Commitments, it would also qualify as a frontier model and be subject to the agreed commitments. At the same time, no standards or measures are described that allow for the accurate identification of frontier models, making it difficult for regulatory authorities to properly implement the Commitments.
