    Sam Altman says that OpenAI doesn’t fully understand what is going on inside its AI models

    OpenAI has yet to solve the problem of artificial intelligence (AI) interpretability, the company’s CEO Sam Altman said in a recent live interview at the AI for Good Global Summit. Interpretability, or explainability, in the context of AI refers to the extent to which an AI model’s internal processes and decisions can be explained in human terms. Altman said that the more that can be understood about what is happening inside AI models, the better.

    When the interviewer asked Altman whether the lack of interpretability is an argument against releasing newer models, he said there are other ways to understand the system besides examining it at a neuron-by-neuron level. “We don’t understand what’s happening in your brain at a neuron-by-neuron level, and yet we know you can follow some rules and we can ask you to explain why you think something,” he said, adding that the behavior of AI systems is extremely well characterized.

    Key discussion points during the interview:

    The question of synthetic data: 

    The interviewer asked Altman about the new AI model the company has recently begun training and whether synthetic data is being used to train it, given that the web now contains a lot of synthetic data. “It’s really strange if the best way to train a model was to just generate a quadrillion tokens of synthetic data and feed that back in. You’d say that somehow that seems inefficient and there ought to be something where you can just learn more from the data as you’re training,” Altman responded, while acknowledging that the company has generated a lot of synthetic data and experimented with training on it.

    The interviewer also asked whether the use of synthetic data for model training could corrupt AI models. Altman responded that what matters is that the data is of high quality, noting that while there can be low-quality synthetic data, there can be low-quality human data as well. “As long as we can find enough quality data to train our models or another thing is ways to get better at data efficiency and learn more from smaller amounts of data or any number of other techniques, I think that’s okay,” he added.

    Why make machines more human when that leads to cybersecurity risks?

    The interviewer pointed out that OpenAI has been focused on artificial general intelligence (AGI), a machine with human-like capabilities, despite the fact that many of the harms associated with AI stem from its ability to impersonate humans. “I think it’s important to design human-compatible systems, but I think it is a mistake to assume that they are humanlike in their thinking or capabilities or limitations, even though we train them off of, you know, we do this behavioral cloning off of all of this human text data,” Altman said in response to the question.

    Commenting on OpenAI’s voice model, the interviewer asked whether the company had considered a measure such as adding a beep before the voice model speaks to signal to users that it is not human. He mentioned the concerns surrounding deepfakes and misinformation and asked Altman what could be done at the design level to alleviate these issues.

    Altman said that people don’t want to listen to something that sounds like a robot. He brought up his own experience using voice mode (which allows users to talk to GPT-4o) and said that it wouldn’t work the same way for him if it didn’t sound like something he was already familiar with. “But a beep, some other indication, that could all make sense. I think we’ve just got to study how users respond to this,” he added.

    Will AI lead to a collapse of the internet in favor of 10-20 LLMs?

    “I can imagine versions where the whole web gets made into components and you have this AI that is putting together, this is a way in the future, putting together the perfect web page for you every time you need something, and everything is live-rendered for you instantly. But I can’t imagine that everything just gets to one website; that feels against all instincts I’d have,” he said.

    Will different countries have different large language models (LLMs)?

    “Although we don’t know, I would expect that China will have their own large language model that’s different from the rest of the world,” Altman said. He guessed that hundreds of LLMs would be trained, of which 10-20 would get the most usage, and that these would be the ones trained with the most resources.

    What happened with Scarlett Johansson’s voice? 

    In May this year, actress Scarlett Johansson accused OpenAI of using her voice without her permission. The interviewer brought this up, adding that the company’s clarifying statement said it had asked Johansson for permission but had been denied, so it used the voices of five other actors who had come in and auditioned. “People are going to have different opinions about how much voices sound alike, but we don’t. It’s not our voice, and we don’t think it [sounds like Johansson]… Not sure what else to say,” Altman said, addressing the question.
