Saturday, July 20, 2024

    Microsoft tested GPT-4 in India without Safety Board’s approval: Report

    In a New York Times profile, an OpenAI whistleblower stated that Microsoft allowed an early version of GPT-4, the model behind ChatGPT, to be tested on the Bing search engine in India before its global launch. Microsoft has since confirmed that it tested GPT-4, after denying the allegations in the past.

    What did the whistleblower say?

    Daniel Kokotajlo, a former governance researcher at OpenAI, leads a group of nine current and former OpenAI employees who say the company has not done enough to prevent its AI systems from becoming dangerous, concerns that led to their exit from the company.

    This comes at a time when OpenAI CEO Sam Altman said at the ‘AI for Good’ Global Summit that OpenAI doesn’t fully understand what’s going on inside its AI models. Altman further stated that synthetic data, like human-generated data, can be of low quality, but that this is acceptable as long as there is enough quality data to train AI models. He added that OpenAI aims for better data efficiency, learning more from smaller amounts of data.

    Kokotajlo and his colleagues raised questions about AGI safety and what they describe as a culture of recklessness at OpenAI. Speaking to the New York Times, Kokotajlo accused OpenAI of rushing to launch products. He claimed this happened despite safety protocols such as the “deployment safety board”, created with partner Microsoft to review new AI models for major risks before they were publicly released.

    Providing an example, he stated that in 2022 Microsoft began secretly testing its AI chatbot on its search engine Bing in India, which contained a then-unreleased version of GPT-4. According to Kokotajlo, Microsoft had not obtained the safety board’s approval to test the model on Bing.

    Microsoft’s previous admission of “testing” Bing AI

    While the allegation that a version of GPT-4 was tested in India is new, it was widely suspected that Microsoft had tested its AI chatbot Bing AI in India. A few days after the bot’s launch in the US in 2023, users reported that it was behaving strangely. Some users noticed that similar complaints had been filed in India well before the bot’s US rollout, leading them to believe that Microsoft had deployed the bot in India for testing earlier.

    In 2023, Microsoft did mention that it had begun testing Bing’s AI chatbot, codenamed ‘Sydney’, “more than a year ago” before its release. Some viewed this as a confirmation, though Microsoft mentioned neither GPT-4 nor India.

    Microsoft now admits to testing GPT-4 in India

    Kokotajlo claimed that the safety board learned of these tests through user reports, and that Microsoft continued to roll out the AI features more broadly on Bing despite them.

    Microsoft initially denied the claims that Bing was used to test GPT-4 in India. Spokesman Frank Shaw told the NYT that the India tests hadn’t used GPT-4 or any OpenAI models. “The first time Microsoft released technology based on GPT-4 was in early 2023 and it was reviewed and approved by a predecessor to the safety board,” he said.

    However, after the article was published, Microsoft reversed its denial and confirmed Kokotajlo’s allegations. In a second statement, Shaw said, “After more research, we found that Bing did run a small flight that mixed in results from an early version of the model which eventually became GPT-4.” He added that the tests had not been reviewed by the safety board beforehand, though they received approval later.


    The post Microsoft tested GPT-4 in India without Safety Board’s approval: Report appeared first on MEDIANAMA.
