    California Senate passes AI Bill prohibiting large AI models from aiding unauthorized weapons development

    The California State Senate has passed a Bill that prohibits large-scale and powerful AI systems from aiding in the development of chemical, biological, radiological, or nuclear weapons. The Bill, called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”, sets “clear, predictable, common-sense safety standards” for developers of AI systems and orders the creation of a council to monitor the enforcement of its mandates. It states:

    “If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.”

    The Bill will now head to the Assembly, where it must pass by August 31. Senator Scott Wiener, who authored the Bill, said it was a “work in progress” and would see significant improvements before going to the Assembly. The Bill is in line with the aim of President Biden’s Executive Order on Artificial Intelligence to encourage the responsible use of AI. Its passage in California is noteworthy, as the state is home to some of the world’s largest AI companies.

    Nathan Calvin, Senior Policy Counsel at the Center for AI Safety Action Fund, said of the Bill: “As the home to many of the largest and most innovative AI companies, California should be leading the way with industry-leading policies for safe, responsible AI while also making this incredible technology accessible to academic researchers and startups to encourage innovation and competition.”

    What does the Bill mandate?

    The Bill applies to developers of AI models trained with computing power greater than 10^26 floating-point operations and at a cost of over $100 million, as these models “would be substantially more powerful than any AI in existence today.” To determine this, a “Frontier Model Division” will be set up within the Department of Technology to judge whether models qualify as “frontier” models and are therefore required to follow the provisions set out by the Bill. The Division will also appoint a senior official responsible for monitoring and reporting on the implementation of the Bill’s provisions and for conducting audits. Under the Bill, California’s Attorney General can take legal action against any of these developers if their model or their negligence is deemed to pose an imminent threat to public safety. Aside from safety guidelines, the Bill also calls for transparent pricing and prohibits price discrimination in AI models, to ensure competition in the AI landscape.
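    As a rough illustration of the threshold test described above, here is a minimal Python sketch; the function name and structure are assumptions for clarity, and only the two cutoffs (10^26 floating-point operations and $100 million in training cost) come from the reported text of the Bill.

```python
# A minimal sketch of the "frontier model" threshold test described above.
# The function name and structure are illustrative assumptions; only the two
# cutoffs (10^26 FLOPs and $100 million training cost) come from the Bill.

FLOP_THRESHOLD = 1e26           # training compute, in floating-point operations
COST_THRESHOLD = 100_000_000    # training cost, in US dollars

def is_frontier_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model exceeds both reported thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD

# Example: a model trained with 3e26 FLOPs at a cost of $150 million
print(is_frontier_model(3e26, 150_000_000))  # True
```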

    What are developers required to do?

    Developers of “frontier” models are required to take basic precautions, such as pre-deployment safety testing and red-teaming. Additionally, they are required to establish safeguards to prevent the misuse of their models for dangerous capabilities and to monitor their use post-deployment.

    The developer must also implement the capability to promptly shut down the entire model in the event of a safety or security threat. They are required to report each safety incident to the Division no later than 72 hours after learning of the incident.
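    Read as a deadline computation, the 72-hour reporting window can be sketched as follows, assuming the clock starts when the developer learns of the incident; this is an illustration, not the Bill’s own formulation.

```python
# Illustrative sketch of the 72-hour incident-reporting window described
# above, assuming the clock starts when the developer learns of the incident.
from datetime import datetime, timedelta, timezone

def reporting_deadline(learned_at: datetime) -> datetime:
    """Latest time by which the incident must be reported to the Division."""
    return learned_at + timedelta(hours=72)

learned = datetime(2024, 7, 24, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(learned))  # 2024-07-27 09:00:00+00:00
```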

    Further, they are banned from developing models that have a “hazardous capability” without authorization. This also extends to third-party entities that use their services. Hazardous capabilities include aiding in the creation of chemical, biological, radiological, or nuclear weapons in a manner that results in mass casualties, or causing “$500,000,000 of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.”

    Employees of developers, and those assisting with the development of the model, who observe any wrongdoing can report it to the Division without consequence, as the Bill calls for the creation of whistleblower protections. Companies are required to have a “reasonable internal process” through which an employee can anonymously reveal any violations or safety concerns.

    Developers violating any provisions face a civil penalty “not exceeding 10 percent of the cost, excluding labor cost” of developing the model for a first violation. For any subsequent violation, the penalty rises to 30 percent.
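    The penalty arithmetic can be sketched as below; the function name and shape are illustrative assumptions, with only the 10 and 30 percent caps taken from the Bill as reported.

```python
# Illustrative arithmetic for the civil-penalty caps described above: up to
# 10% of development cost (excluding labor) for a first violation, up to 30%
# for subsequent violations. Function name and shape are assumptions.

def max_civil_penalty(dev_cost_excl_labor: float, first_violation: bool) -> float:
    """Upper bound on the penalty under the reported 10% / 30% caps."""
    rate = 0.10 if first_violation else 0.30
    return rate * dev_cost_excl_labor

# Example: a model that cost $200 million to develop (excluding labor)
print(max_civil_penalty(200_000_000, first_violation=True))   # 20000000.0
print(max_civil_penalty(200_000_000, first_violation=False))  # 60000000.0
```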

    Advisory Council

    The Bill also calls for the creation of safety provisions for open-source artificial intelligence development. To this end, it calls for establishing an advisory council that will issue safety and security guidelines for open-source AI and advise the Frontier Model Division on future policies and legislation affecting open-source AI. The guidelines will also apply to developers of models with “hazardous capabilities”.

    Establish a public cloud computing cluster: CalCompute

    The Bill orders the Department of Technology to commission consultants to establish a new public cloud computing cluster called CalCompute to aid in the development of large-scale AI systems. The cluster will be made available at a subsidized rate to startups, researchers, and community groups in California. Those commissioned to operate the cluster will also be required to record any use of it to develop models that meet the requirements of a “frontier model”, including the identity of the person developing such a model.

    The Bill is still undergoing changes; the technical thresholds that determine coverage are to be confirmed on or before July 1, 2026.
