    AI Researchers, Ex-Employees of Frontier AI Companies Ask For Greater Whistleblower Protections

    A group of former and current employees at frontier AI companies like OpenAI and Google have released an open letter asking for greater whistleblower protections for those who raise the alarm about the dangers of AI. Titled ‘A Right to Warn about Advanced Artificial Intelligence,’ the letter has been signed by thirteen people, most from OpenAI, six of whom chose to stay anonymous. The letter has also been endorsed by esteemed researchers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

    While acknowledging the possible benefits of AI, the researchers write, “We also understand the serious risks posed by these technologies. These risks range from further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.” According to the letter, AI companies possess significant non-public information about their models and have minimal obligations to disclose possible risks to governments and civil society. Former and current employees of such companies are therefore among the few people who can expose wrongdoing. However, they are hampered by broad confidentiality agreements that prevent them from criticising their companies, and many fear retaliation. Ordinary whistleblower protections may also be insufficient, as many of these activities are not illegal.

    The letter demands that AI companies commit to the following principles:

    1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for such criticism by hindering any vested economic benefit;
    2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise;
    3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
    4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

    The writers accept the need to protect trade secrets and instead ask for a process by which employees can raise concerns to the company’s board, regulators and appropriate independent organisations. However, they argue that as long as such a process does not exist, employees should remain free to report their concerns to the public.

    Former OpenAI employees have repeatedly raised concerns about the risks of artificial intelligence. One former employee, Leopold Aschenbrenner, recently claimed he was fired for raising security concerns with the company’s board. He alleged that the company’s management was displeased with an internal memo he wrote on OpenAI’s security policies, which he felt were “egregiously insufficient” in protecting model weights and key algorithmic secrets from theft by foreign actors.

    Last month, OpenAI announced that it had formed a ‘Safety and Security Committee’ as it began training its next frontier model; the committee is responsible for making recommendations to the Board on critical safety and security decisions. However, this move came only after Superalignment team co-lead Jan Leike resigned, accusing OpenAI of prioritising shiny products over “safety culture and processes.” Similarly, former board member Helen Toner co-wrote a paper accusing OpenAI of contributing to an exceedingly competitive AI landscape that pushed AI developers to “accelerate or circumvent internal safety and ethics review processes.”
