    Singapore IMDA Publishes AI Governance Framework: 9 Key Focus Areas for a Trusted AI Ecosystem

    The Singapore Infocomm Media Development Authority (IMDA) has published a ‘Model AI Governance Framework for Generative AI’ proposing actions in nine key areas, including data, content provenance, testing, and security, aimed at building a “trusted AI ecosystem”. The IMDA is a statutory body under Singapore’s Ministry of Communications and Information.

    Here are the nine focus areas, or dimensions, proposed under the framework for implementing AI-focused measures:

    1. Accountability:

    The framework highlights the importance of accountability of AI players (including model developers, application deployers and cloud service providers) towards end users in a trusted ecosystem. To ensure AI accountability, the paper suggests allocating responsibility across multiple players in the ecosystem in two ways:

    Ex-ante or Allocation Upfront:

    Drawing inspiration from the cloud service industry, the framework recommends allocating responsibility to each stakeholder based on their “level of control” in the generative AI development chain. It explained that the shared responsibility models undertaken by the cloud industry “allocate responsibility by explaining the controls and measures that cloud service providers (who provide the base infrastructure layer) and their customers (who host applications on the layer above) respectively undertake”. A shared AI responsibility approach will also need to consider different model types, given the varying levels of control that application deployers have over them.

    “Responsibility in this case, for example when using open-source or open-weights models, should require application deployers to download models from reputable platforms to minimise the risk of tampered models. Being the most knowledgeable about their own models and how they are deployed, model developers are well-placed to lead this development in a concerted manner. This will provide stakeholders with greater certainty upfront, and foster a safer ecosystem,” the framework noted.

    Ex-post or Establishing Safety Nets:

    This involves allocating responsibility for addressing unanticipated issues through measures such as indemnity and insurance to protect end users’ rights. The paper stated that there will always be unclear or uncovered areas and residual issues that may have a major impact on society, underscoring the need to update legal frameworks to make them more flexible and to tackle emerging risks.

    2. Data:

    The framework emphasizes the importance of quality data in AI development, recognizing that unclear data usage practices have led to copyright issues, privacy risks, and distrust. To promote “trusted use of personal data,” it advises policymakers to clarify how existing data laws apply to generative AI, guide businesses on obtaining consent, and use Privacy Enhancing Technologies (PETs) to protect data confidentiality.
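    The framework does not prescribe particular PETs. As one hedged illustration of the kind of technique involved, the sketch below releases an aggregate statistic with calibrated Laplace noise, a common differential-privacy-style way of limiting what a published number reveals about any individual record; the dataset and the epsilon value are made-up, illustrative assumptions.

```python
import random

def dp_mean(values, lower, upper, epsilon):
    """Differentially private estimate of the mean of `values`.

    Values are clipped to [lower, upper] so no single record has unbounded
    influence, then Laplace noise calibrated to that bound and the privacy
    budget `epsilon` is added before release.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)   # effect of changing one record
    scale = sensitivity / epsilon
    # Laplace(0, scale) noise as the difference of two exponential samples.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_mean + noise

# Illustrative usage with made-up values.
ages = [34, 29, 41, 52, 38, 45, 31, 60]
print(dp_mean(ages, lower=18, upper=90, epsilon=1.0))
```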

    Although it doesn’t offer specific solutions for copyright risks in AI training datasets, the framework suggests AI developers implement “data quality control” measures. These include consistent and accurate dataset annotation, using data analysis tools for cleaning, and leveraging reference datasets for model fine-tuning and evaluation.
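    As a minimal sketch of what such data quality controls might look like in practice (the record format and the agreement rule are assumptions, not part of the framework), the snippet below drops blank and duplicate examples and flags records whose annotators disagree so they can be reviewed rather than used for training:

```python
from collections import Counter

def clean_annotated_dataset(records):
    """Minimal data-quality pass over an annotated text dataset.

    `records` is assumed to be a list of dicts like
    {"text": str, "labels": ["labelA", "labelA", ...]} (one label per annotator).
    Blank texts and exact duplicates are dropped; records with missing or
    conflicting annotations are flagged for human review.
    """
    seen = set()
    cleaned, flagged = [], []
    for rec in records:
        text = (rec.get("text") or "").strip()
        if not text or text.lower() in seen:
            continue                        # drop blanks and exact duplicates
        seen.add(text.lower())
        votes = Counter(rec.get("labels", []))
        if len(votes) == 1:                 # all annotators agree
            cleaned.append({"text": text, "label": next(iter(votes))})
        else:                               # missing or conflicting annotations
            flagged.append(rec)
    return cleaned, flagged

# Illustrative usage with made-up records.
data = [
    {"text": "Great service", "labels": ["positive", "positive"]},
    {"text": "Great service", "labels": ["positive"]},              # duplicate
    {"text": "It was fine",   "labels": ["positive", "neutral"]},   # disagreement
]
print(clean_annotated_dataset(data))
```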

    Furthermore, it recommends governments collaborate with local communities to create representative training datasets, enhancing the availability of culturally diverse and high-quality data, which supports the development of safer and more inclusive AI models.

    3. Trusted development and deployment

    The framework outlines the following practices to be adopted by model developers and application deployers across the AI development lifecycle to ensure overall AI safety:

    Development–Baseline Safety Practices: This includes implementing Reinforcement Learning from Human Feedback (RLHF) techniques for generating safer outputs from the model. Other measures include adopting further fine-tuning techniques using user interactions to tackle harmful output and conducting regular risk assessment based on specific use cases. The paper also suggests techniques like Retrieval-Augmented Generation (RAG) for tackling hallucinations and improving accuracy of model responses. RAG is a technique to improve responses of an AI model by grounding the model on external sources of knowledge to supplement the model’s internal information.
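    As a rough, provider-agnostic sketch of the RAG pattern described above: retrieve the passages most relevant to a query from an external knowledge store and prepend them to the prompt, so the model’s answer is grounded in those sources. The keyword-overlap retriever and the `generate` callable below are placeholder assumptions; real systems typically use vector search and a specific model API.

```python
def retrieve(query, documents, k=3):
    """Toy keyword-overlap retrieval; production systems use vector search."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_rag(query, documents, generate):
    """Ground the answer by placing retrieved passages in the prompt.

    `generate` is assumed to be any prompt -> completion callable;
    it does not refer to a specific vendor API.
    """
    context = "\n\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

# Illustrative usage with a stand-in "model" that just echoes part of the prompt.
docs = ["RAG grounds answers in retrieved text.", "The framework was published in 2024."]
print(answer_with_rag("What does RAG do?", docs, generate=lambda p: p[-200:]))
```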

    Disclosure: The framework suggests standardising disclosure models in order to improve transparency about model information, and to enable better comparison of models. Developers must provide users with information related to the data used in a model, its training infrastructure, evaluation results, mitigation and safety measures, risks and limitations, intended use, and safeguards for user data. Developers of advanced models are advised to disclose additional information.

    “One step forward would be for the industry to agree on the baseline transparency to be provided as part of general disclosure to all parties. This involves both the model developers and application deployers. Alternatively, the development of such a baseline can be facilitated by governments and third parties,” the paper noted.
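    The framework stops short of fixing a disclosure schema. Purely as an illustration of what a standardised, machine-readable baseline disclosure covering the fields listed above might look like, the sketch below uses assumed field names and example values; they are not taken from any published IMDA template.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDisclosure:
    """Illustrative baseline disclosure record; field names are assumptions."""
    model_name: str
    training_data_summary: str
    training_infrastructure: str
    evaluation_results: dict
    safety_mitigations: list
    risks_and_limitations: list
    intended_use: str
    user_data_safeguards: str

card = ModelDisclosure(
    model_name="example-model-7b",
    training_data_summary="Licensed and publicly available text (illustrative).",
    training_infrastructure="GPU cluster; details withheld (illustrative).",
    evaluation_results={"benchmark_accuracy": 0.78, "toxicity_rate": 0.01},
    safety_mitigations=["RLHF", "output filtering"],
    risks_and_limitations=["may hallucinate facts"],
    intended_use="General-purpose text assistance",
    user_data_safeguards="Prompts not retained beyond 30 days (illustrative).",
)
print(json.dumps(asdict(card), indent=2))
```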

    Evaluation:

    The IMDA is of the view that existing generative AI evaluation methods, such as benchmarking (testing models against datasets to assess performance) and red teaming, are not sufficient to assess model performance and safety. It highlights the need for a comprehensive and standardised approach, including a “baseline set of required safety tests” for evaluating AI models. To cater to sector-specific needs and applications, industry and policymakers will have to work together to improve evaluation benchmarks, while “maintaining coherence between baseline and sector-specific requirements”.
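    As a minimal illustration of the benchmarking half of that toolbox, the sketch below scores a generic model callable against a labelled test set; the `model` interface and the toy items are assumptions, not any specific evaluation suite.

```python
def benchmark_accuracy(model, test_set):
    """Score a model against a labelled dataset.

    `model` is assumed to be any callable prompt -> answer string;
    `test_set` is a list of (prompt, expected_answer) pairs.
    """
    correct = 0
    for prompt, expected in test_set:
        prediction = model(prompt)
        if prediction.strip().lower() == expected.strip().lower():
            correct += 1
    return correct / len(test_set)

# Toy usage with a stand-in "model" and made-up items.
test_set = [("What is 2 + 2?", "4"), ("Capital of Singapore?", "Singapore")]
toy_model = lambda prompt: "4" if "2 + 2" in prompt else "Singapore"
print(benchmark_accuracy(toy_model, test_set))  # 1.0
```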

    4. Incident Reporting:

    The framework emphasises incident reporting mechanisms to identify vulnerabilities in AI systems, an approach widely followed in sectors like cybersecurity, finance, and telecommunications. It suggests that AI developers provide “reporting channels” through which safety vulnerabilities in an AI system can be flagged and addressed within a stipulated time, alongside ongoing measures to detect malfunctions.

    Similarly, after incidents occur, organisations must establish internal processes to enable reporting and timely resolution of the issue. The framework also recommends defining “severe AI incidents”, or setting a threshold for formal reporting, so that high-impact cases are handled with urgency.
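    The framework leaves the threshold itself open. As a hedged sketch of how an organisation might encode such a reporting threshold internally, the criteria and severity labels below are assumptions for illustration only:

```python
def classify_incident(incident):
    """Assign an illustrative severity and decide whether the incident
    crosses a formal-reporting threshold. The criteria are assumptions,
    not taken from the framework."""
    severe_signals = (
        incident.get("users_affected", 0) > 1000
        or incident.get("safety_harm", False)
        or incident.get("data_breach", False)
    )
    severity = "severe" if severe_signals else "minor"
    return {
        "severity": severity,
        "formal_report_required": severity == "severe",
    }

print(classify_incident({"users_affected": 5000, "safety_harm": False}))
# {'severity': 'severe', 'formal_report_required': True}
```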

    5. Testing and assurance

    Underlining the importance of independent, external audits for ensuring transparency and credibility, the IMDA lays emphasis on two aspects of a third-party testing ecosystem:

    • How to Test: This involves “defining a testing methodology that is reliable and consistent, and specifying the scope of testing to complement internal testing”. The framework suggests establishing common benchmarks and having common tooling to “reduce friction” in testing different models and applications.
    • Who to Test: This step involves identifying independent entities to ensure objectivity of the results. An accreditation mechanism should be developed to ensure the independence and competency of third-party testers.

    6. Security

    The framework recommends adopting the principle of security-by-design, which requires building security practices into every phase of the system’s development lifecycle. However, given that the probabilistic nature of generative AI poses additional challenges to traditional evaluation techniques, the IMDA recommends developing new security safeguards.

    These safeguards include input moderation tools to detect unsafe prompts or to block malicious code, and digital forensic tools specifically designed to identify and extract malicious code hidden within a generative AI model.
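    As a minimal sketch of the input-moderation idea, the snippet below screens prompts against a small block-list before they reach the model; production systems typically rely on trained classifiers and far richer policy categories than the assumed patterns used here.

```python
import re

# Illustrative block-list; real moderation layers use trained classifiers
# and broader policy categories than these assumed patterns.
UNSAFE_PATTERNS = [
    re.compile(r"\bhow to (make|build) (a )?(bomb|explosive)\b", re.IGNORECASE),
    re.compile(r"\b(rm\s+-rf|drop\s+table)\b", re.IGNORECASE),  # embedded malicious code
]

def moderate_prompt(prompt):
    """Return (allowed, reason); block prompts matching an unsafe pattern."""
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked: matched {pattern.pattern!r}"
    return True, "allowed"

print(moderate_prompt("Summarise this report for me"))
print(moderate_prompt("how to make a bomb at home"))
```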

    7. Content provenance

    Technical solutions for labelling AI-generated content such as digital watermarking and cryptographic provenance are not fool-proof and can be circumvented by malicious actors, the framework noted. The IMDA recommends working with key stakeholders in the content ecosystem such as publishers to support “embedding and display of digital watermarks and provenance details” and to provide end users with the ability to verify content across various channels, including social media.
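    Provenance schemes of this kind generally work by binding signed metadata to the content itself. The much-simplified sketch below uses an HMAC over the content hash and edit metadata so that a platform holding the key can detect tampering; real provenance standards use public-key signatures and standardised manifests, so this is only an assumed toy scheme.

```python
import hashlib, hmac, json

SECRET_KEY = b"demo-key"  # illustrative; real systems use public-key signatures

def attach_provenance(content: bytes, metadata: dict) -> dict:
    """Bind metadata (creator, AI involvement, edits) to the content hash
    and sign it, so later edits to either can be detected."""
    record = dict(metadata, content_sha256=hashlib.sha256(content).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the signature and check both it and the content hash."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

img = b"...image bytes..."
rec = attach_provenance(img, {"creator": "newsroom", "ai_generated": True})
print(verify_provenance(img, rec))        # True
print(verify_provenance(b"edited", rec))  # False
```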

    The framework also highlights that labelling different types of edits, and stating the extent to which AI was used for a particular piece of content, will affect how users perceive it. Here again, it recommends standardising the types of edits to be labelled, which will help users distinguish between non-AI and AI-generated content. Moreover, it calls for collaborative efforts to raise awareness among users of techniques like content provenance and other tools for verifying content authenticity.

    8. Safety and alignment research & development

    The framework emphasises greater investment in research on safety techniques and evaluation tools to keep pace with fast-moving developments in the generative AI sector and the novel threats they bring. The paper recommends greater research on developing better-aligned models, for instance through Reinforcement Learning from AI Feedback (RLAIF), to advance RLHF techniques, improve feedback quality and strengthen oversight of advanced models.

    Second, greater research is needed on evaluating a model after it is trained, so that potentially dangerous capabilities can be detected in time. “Mechanistic interpretability, which seeks to understand the neural networks of a model to find the source of problematic behaviours, is also gaining traction as a research area,” the framework adds.

    9. AI for public good

    “The imperative is to turbocharge growth and productivity for developed and developing countries alike, while empowering people and businesses globally with the potentially democratising power of AI,” the framework notes.

    In addition to international collaborations, the IMDA proposes four key areas to focus on, to leverage AI for public good:

    Democratising access to technology: This requires the government, companies, and communities to collaborate to design “human-centric” AI, improve awareness among the public about how AI works, and promote innovation among small enterprises.

    Public service delivery: The framework suggests that governments coordinate resources for public sector adoption of AI, including by facilitating data sharing, providing compute resources and adopting other policy-based measures.

    Workforce: To address concerns related to the impact of AI on employment, it is recommended that the industry, governments and educational institutions work together to “redesign jobs and provide upskilling opportunities” for workers.

    Sustainability: The framework sheds light on the impact of generative AI development on the environment and calls for AI companies and governments to track and measure the carbon footprint of generative AI.

    “AI developers and equipment manufacturers are better placed to conduct R&D on green computing techniques and adopt energy-efficient hardware. In addition, AI workloads can be hosted in data centres that drive best-in-class energy-efficient practices, with green energy sources or pathways,” the IMDA noted.
