This story was first published by Digiday sibling WorkLife.
Why don’t scientists trust atoms? Because they make everything up.
When Greg Brockman, president and co-founder of OpenAI, demonstrated the possibilities of GPT-4 – Generative Pre-trained Transformer 4, the fourth-generation autoregressive language model that uses deep learning to produce human-like text – at its launch on Mar. 14, he tasked it with creating a website from a notebook sketch.
Brockman prompted GPT-4, on which ChatGPT is built, to select a “really funny joke” to entice would-be viewers to click for the answer. It chose the above gag. Presumably, the irony wasn’t intentional, because the issues of “trust” and “making things up” remain massive, despite the impressive, even entrancing, capabilities of generative artificial intelligence.
Many business leaders are spellbound, said futurist David Shrier, professor of practice (AI and innovation) at Imperial College Business School in London. And it’s easy to understand why, when the technology can build websites, invent games, create pioneering drugs and pass legal exams – all in mere seconds.
Those impressive feats are making it harder for leaders to stay clear-eyed, said Shrier, who has written books on nascent technologies. In the race to embrace ChatGPT, companies and individual users are “blindly ignoring the dangers of confidently incorrect AI.” Significant risks are emerging, he warned, as organizations reorient themselves around ChatGPT without being aware of – or while ignoring – its numerous pitfalls.
Click here to read the full story.