Human-level AI on the horizon? OpenAI's bold prediction about ChatGPT's evolution

GPT-5 likely to demonstrate greatly improved thinking and comprehension
A representational image showing illustration of OpenAI chatGPT. — Unsplash

OpenAI, the American artificial intelligence organisation, has predicted that today's ChatGPT will be viewed as "laughably bad" in the future.

Brad Lightcap, the AI lab's COO, expects that in the future, users will speak to the AI chatbot as they would to a human and treat it as a colleague.

Moreover, there is speculation that OpenAI is about to introduce GPT-5, which CEO Sam Altman has described as smarter than GPT-4, the AI model that drives the paid-for version of ChatGPT as well as Microsoft Copilot.

During an AI panel at the Milken Institute, Lightcap stated: "In the next couple of 12 months, I think the systems that we use today will be laughably bad." Lightcap's statements echoed Altman's remarks last month when he branded GPT-4 as the "dumbest model any of you will ever have to use."


What can users anticipate from ChatGPT v5?

ChatGPT has progressed greatly since its debut in November 2022, with the addition of a new memory function, system prompts, and, of course, bespoke GPT-based chatbots.

DALL-E has also been incorporated to allow image generation, and ChatGPT can now run code snippets to create graphs and perform other tasks.

The most significant shift has been in the underlying model, effectively the brain of the operation, from GPT-3 to GPT-4 and its numerous iterations.

This has enabled ChatGPT to interpret images and audio as well as text and code.

What will make GPT-5 better?

GPT-5 is expected to demonstrate greatly improved reasoning and comprehension. According to Lightcap, it will be capable of doing "much more complex work."

As a result, it will move closer to human levels of comprehension. Altman has previously stated that the objective is to create AGI, a form of superintelligence that understands the world better than humans yet thinks like us. He claims this will be an iterative process, with each new model moving closer to that objective.

According to Lightcap, large language models are adopting a "system relationship" with users, acting as a colleague that helps solve problems and altering the way we use software.

He stated that advancements in reasoning and other capacities are only the beginning, and that "we're just scratching the surface of the full set of capabilities that these systems have."