
    US Govt Agencies’ Key Actions On AI Executive Order: NIST GenAI Platform for Detecting AI Content, AI Security Board and More

    The United States Commerce Department’s National Institute of Standards and Technology (NIST) has announced the NIST GenAI program to evaluate and measure the capabilities and limitations of generative AI technologies, as part of the department’s actions to fulfil objectives outlined under President Joe Biden’s October 2023 AI Executive Order. NIST has also announced the publication of four guidance documents focusing on risk management and the security and trustworthiness of AI systems.

    President Joe Biden had issued the Executive Order on October 30, 2023, outlining measures to be undertaken by various government departments to ensure AI safety and security, citizen privacy, equity, protection of consumers’ and workers’ rights, and the promotion of competition and innovation. On April 29, 2024, the US White House announced a list of actions that federal agencies had completed within 180 days of the Executive Order.

    NIST GenAI platform for ‘information integrity’

    Under the Executive Order, the US Department of Commerce was tasked with developing frameworks for content authentication and watermarking to label AI-generated content, enabling users to differentiate it from original content.

    In line with the Order, NIST GenAI will serve as a platform for the testing and evaluation of generative AI technologies, and will inform the work of the US AI Safety Institute.

    The NIST GenAI evaluation will mainly focus on:

    • creating benchmarks for dataset creation,
    • facilitating the development of content authenticity detection technologies for different types of content such as text, audio, image, video, and code,
    • conducting comparative analyses using relevant metrics, and
    • promoting the development of technologies that can identify the source of fake or misleading information.

    The NIST GenAI pilot study is being undertaken to understand how systems behave towards human-generated and synthetic content. The primary focus of the pilot study is to understand how the two types of content differ from each other, and how the evaluation findings can help guide users in differentiating between them.

    Additionally, NIST has published four guidance documents covering various aspects of AI development and deployment. These papers include:

    • AI Risk Management Framework (RMF) Generative AI Profile: The document serves as guidance for identifying generative AI risks and the best practices businesses can adopt to manage them. It outlines 13 risks and over 400 actions developers can take to mitigate them.
    • Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: This framework focuses on reducing threats to the data used for training AI systems.
    • Reducing Risks Posed by Synthetic Content: This paper lays out methods for detecting and labelling synthetic content, and strategies for reducing the risks arising from it.
    • A Plan for Global Engagement on AI Standards: The draft plan is designed to develop consensus around AI standards, cooperation, and information sharing with respect to monitoring and testing the credibility of AI systems.

    Actions focused on mitigating risks to safety and security

    In addition to the tasks undertaken by NIST, other departments of the US government have reported the following completed actions:

    • Preventing misuse of AI for engineering dangerous biological materials: The US Department of Homeland Security (DHS) has established a framework for nucleic acid synthesis screening to prevent misuse of AI for developing harmful chemical and biological materials. The Executive Order had highlighted that AI systems can pose threats to critical infrastructure, as well as exacerbate chemical, biological, radiological, nuclear and cybersecurity risks.
    • AI safety and security guidelines: Nine government agencies assessed AI risks across sixteen critical infrastructure sectors and have developed AI safety and security guidelines for critical infrastructure owners and operators.
    • AI Safety and Security Board: On April 26, the US DHS established a high-profile, 22-member AI Safety and Security Board comprising the CEOs of major AI companies such as OpenAI, Anthropic, Nvidia, Microsoft, Google, Amazon, and AMD. The board will primarily focus on developing recommendations for various sectors to prevent and prepare for potential AI-related disruptions that could impact national security, economic stability, and public health and safety.
    • Identifying vulnerabilities in vital government software: The Department of Defense and the DHS reported that they have piloted new AI tools that can find and address vulnerabilities in software used for national security and military purposes, as well as in other critical government systems.

    Actions for protecting citizens’ rights

    • Protecting workers’ rights: The Department of Labor (DoL) has formulated a guide for federal contractors and subcontractors to prevent AI used in employment decisions from undermining workers’ rights. The department is also looking into the application of the country’s existing labour laws to the use of AI and other automated technologies in the workplace.
    • Guidance on use of AI in housing sector: The Department of Housing and Urban Development has issued guidance to ensure that the use of AI for tenant screening or housing advertisements does not lead to discrimination against citizens and is consistent with existing anti-discrimination laws.
    • Responsible use of AI in public benefit programs: The US Department of Agriculture and the Department of Health and Human Services have initiated actions to set guardrails for the responsible and equitable use of AI in administering public benefit programs. Additionally, on March 28, the US government’s Office of Management and Budget (OMB) introduced a new policy establishing requirements for federal agencies on the responsible deployment of artificial intelligence (AI) and on minimising risks to the rights and safety of the public when AI is used for governance.
    • Non-discriminatory use of AI in the health sector: Agencies have reported announcing a final rule reiterating that existing non-discrimination requirements in health programs apply to the use of AI, clinical algorithms, and predictive analytics tools. This is to prevent bias and discrimination arising from algorithm-based decisions that could jeopardise people’s healthcare rights. The agencies are also undertaking actions to ensure rigorous testing and evaluation of AI systems used in the health sector.

    Advancing the use of AI

    The President’s Executive Order had highlighted the objective of promoting AI innovation and competition. The Order directed agencies to leverage the National AI Research Resource, which provides access to key AI resources and data, to expand AI research and to increase grants for studies in areas like healthcare and climate change. In line with these requirements, the US Department of Energy (DoE) has reported several actions directed towards increasing funding for the application of AI in science, including energy-efficient AI algorithms and hardware. The DoE is also exploring the use of AI for deploying clean energy and advancing the clean energy economy, among other objectives.

    Federal agencies were also directed to accelerate the hiring of AI professionals in the public sector and provide AI training for employees at all levels across various sectors. Agencies are focusing on hiring individuals with AI skills through the government’s AI and Tech Talent Task Force. The White House informed that agencies have hired over 150 AI professionals and plan to hire hundreds more by Summer 2024. These individuals are mainly responsible for assisting agencies in their efforts to use AI, advising on AI investments, and drafting AI-related policies for the government. The DHS has also informed that it will hire 50 AI professionals to build “safe, responsible, and trustworthy” AI for service delivery and homeland security.

