OpenAI has released an update meant to fix the "laziness" ChatGPT users have been complaining about over the past few months.
On Thursday, OpenAI announced an update to GPT-4 Turbo, its most advanced LLM. GPT-4 Turbo is still only available in limited preview through the API, meaning it doesn't yet power ChatGPT. The hope, however, is that this fix will eventually roll out to ChatGPT, too.
OpenAI addresses LLM lethargy
In December, OpenAI acknowledged complaints that ChatGPT was giving less thorough and helpful answers, sometimes even giving up before completing a task. The issue was attributed to the way the model's responses can slowly degrade over time, which could account for why users noticed the problem as far back as six months ago.
Users also theorized that ChatGPT had become slower and less useful because bandwidth was limited to meet the demands of higher traffic. In yesterday's announcement, OpenAI didn't mention specific updates to GPT-4, which currently powers ChatGPT. But the updates to GPT-4 Turbo, and the acknowledgment of the same issue with GPT-4, are a good sign.
Explaining updates to GPT-4 Turbo, the OpenAI blog post said, "This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of 'laziness' where the model doesn’t complete a task."
GPT-4 Turbo was announced at OpenAI's developer conference in November (which was quickly overshadowed by the failed attempt to oust Sam Altman). The GPT-4 Turbo model has more up-to-date information and a larger context window, meaning it's capable of processing much more data from a single prompt.
As of yesterday, a preview of the updated GPT-4 Turbo model is available through the API.
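For developers who already have API access, trying the updated preview mostly comes down to pointing an existing chat completion request at the new model. The sketch below is a minimal example using OpenAI's official Python library; the model name is a placeholder assumption, so check the model list in your own account for the exact identifier of the current preview release.

```python
# Minimal sketch: calling the GPT-4 Turbo preview via OpenAI's Python client.
# The model name below is an assumption -- substitute whatever preview
# identifier appears in your account's model list.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # placeholder for the preview model name
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```

Code-generation prompts like the one above are exactly the kind of task OpenAI says the update handles more thoroughly, so they're a reasonable way to compare the new preview against the previous one.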