Friday, July 12, 2024

    Fired For Raising Safety Concerns, Says Former OpenAI Researcher

    A former OpenAI employee, Leopold Aschenbrenner, has alleged that he was fired for raising security concerns about AI to the company’s board. Speaking on the Dwarkesh Patel podcast, Aschenbrenner attributed his firing to an internal memo he wrote arguing that OpenAI’s security was “egregiously insufficient” to protect model weights and key algorithmic secrets from theft by foreign actors, and to his decision to share that memo with the company’s board.

    Aschenbrenner was part of the Superalignment team at OpenAI, which was responsible for safety and for ensuring that “AI systems much smarter than humans follow human intent.” The team focused on mitigating AI-related risks such as “misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance.” According to a report from Transformer News, Aschenbrenner was ousted for allegedly leaking sensitive information to outsiders. The report also described Aschenbrenner as an ally of Ilya Sutskever, the former Chief Scientist who left the company after a failed ouster of CEO Sam Altman.

    What triggered the OpenAI researcher’s firing?

    “Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s the leak,” said Aschenbrenner. He claimed that sharing such documents with external researchers for feedback was a common practice at the company at the time, and that he had taken precautions to ensure no sensitive information was exposed. He stated that the company had singled out a line in the document about planning for AGI by 2027-2028 and termed it a leak of sensitive information.

    The memo he wrote had only been shared with a few colleagues, said Aschenbrenner, until an unidentified “major security incident” prompted him to share it with the company’s board. “Days later, it was made very clear to me that leadership was very unhappy I had shared this memo with the board. Apparently, the board hassled leadership about security,” claimed Aschenbrenner. “The reason I bring this up is that when I was fired, it was made very explicit that the security memo was a major reason for my being fired.”

    Aschenbrenner also claimed that before his dismissal, he was questioned by a lawyer about his views on AI progress, Artificial General Intelligence (AGI), AI security, government involvement, and his loyalty to the company.

    This comes shortly after OpenAI formed a new committee for AI safety last week, following the departure of core members like Sutskever and Head of Alignment Jan Leike, who cited safety concerns. Leike had stated that he had reached a “breaking point” as his department found it “harder and harder” to get “crucial research done,” accusing OpenAI of prioritising shiny products over “safety culture and processes.”

    The post Fired For Raising Safety Concerns, Says Former OpenAI Researcher appeared first on MEDIANAMA.
