Let’s talk about ChatGPT’s “Incident”

George Little
4 min read · Feb 22, 2024

--

OpenAI’s ChatGPT started spouting gibberish responses to user queries.

ChatGPT started throwing out “unexpected responses” on Tuesday night, February 20, 2024, according to OpenAI’s status page. “We are investigating reports of unexpected responses from ChatGPT,” OpenAI posted at 6:40 p.m. ET. “We’re continuing to monitor the situation,” the company added at 7:59 p.m. ET.

Users posted screenshots of their ChatGPT conversations full of wild, nonsensical answers from the AI chatbot. I first started seeing reports from geeky friends sharing unusual responses originally posted in the ChatGPT Reddit community.

One widely shared response came from the prompt “what Bill Evans Trio it would recommend on vinyl”. ChatGPT also failed to respond in the expected way to even simple questions like “What is a computer?”

This was part of ChatGPT’s response:

“It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest. The development of such an entire real than land of time is the depth of the computer as a complex character. […].”

So what was the cause?

OpenAI has published a postmortem explaining why its chatbot started talking nonsense.

It said: “On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language.

“LLMs generate responses by randomly sampling words based in part on probabilities. Their ‘language’ consists of numbers that map to tokens.

“In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense.

“More technically, inference kernels produced incorrect results when used in certain GPU configurations.”

“Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved.”

So what does the above mean?

I have tried to break down my understanding of the issue into two key stages:

  1. Language Generation: ChatGPT generates responses by randomly selecting words based on probabilities. It uses a system where words are represented by numbers, and these numbers correspond to specific tokens or units of language.
  2. Bug in Number Selection: The bug occurred during the step where the model chooses these numbers. Essentially, instead of picking the right numbers to form coherent responses, the model was choosing slightly incorrect numbers, which led to word sequences that didn’t make any sense. (A toy sketch of both stages follows below.)
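
To make that concrete, here is a minimal Python sketch of both stages. The vocabulary, the probabilities, and the “+3” wrong-number shift are all invented for illustration; this is not OpenAI’s actual sampling code.

```python
import numpy as np

# Toy vocabulary: in a real LLM, every token maps to an integer ID.
vocab = ["a", "computer", "is", "machine", "that", "processes", "data", "art"]

# Pretend the model produced these scores (logits) for the next token.
logits = np.array([0.1, 3.0, 0.2, 2.5, 0.3, 2.0, 1.5, 0.1])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities

rng = np.random.default_rng(0)

# Stage 1: sample a token ID at random, weighted by its probability.
token_id = rng.choice(len(vocab), p=probs)
print("expected token:", vocab[token_id])

# Stage 2, the bug (loosely): the model "chose slightly wrong numbers",
# so the ID it lands on maps to an unrelated token. The +3 shift is a
# made-up stand-in for whatever the faulty step actually did.
buggy_id = (token_id + 3) % len(vocab)
print("buggy token:   ", vocab[buggy_id])
```

Run repeatedly, the first print mostly lands on sensible, high-probability words, while the shifted ID lands on arbitrary ones. Stretch that over an entire reply and you get “a mouse of science”.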

So what is an inference kernel?

  1. Inference: In the realm of AI and machine learning, “inference” refers to the process of using a trained model to make predictions or draw conclusions based on input data. For example, if you’ve trained a model to recognise cats in images, the inference process would involve using that trained model to identify whether a new image contains a cat or not. (A toy version of this appears after the list.)
  2. Kernel: In computing, a kernel is a core component of an operating system that manages tasks such as memory management, process scheduling, and device control. However, in the context of machine learning, particularly deep learning, a kernel can refer to a computational unit or a set of mathematical operations performed on data. In practice these are the low-level routines that run on hardware such as GPUs, which is why OpenAI’s postmortem mentions “certain GPU configurations”.
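
Here is a toy illustration of inference in that sense: a “trained” cat detector whose weights I have simply invented, applied to a new input. It is a sketch of the idea, not a real model.

```python
import numpy as np

# A toy "trained model": weights learned offline (here simply invented)
# that separate cat from not-cat using two made-up image features.
weights = np.array([1.8, -0.9])
bias = -0.2

def infer(features: np.ndarray) -> str:
    """Inference: apply the already-trained model to new, unseen data."""
    score = features @ weights + bias
    prob_cat = 1.0 / (1.0 + np.exp(-score))  # sigmoid -> probability
    return "cat" if prob_cat > 0.5 else "not a cat"

# Feature values summarising a new image the model has never seen.
print(infer(np.array([1.2, 0.3])))  # -> cat
```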

So, putting it together, an “inference kernel” is a component or set of operations within a machine learning system that specifically handles the process of making predictions or drawing conclusions based on the input data. It’s essentially the part of the system responsible for applying the learned knowledge to new data and producing an output.

In simpler terms, you can think of an inference kernel as the engine that powers a machine learning model’s ability to make sense of new information and provide useful insights or predictions based on what it has learned from past data.
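
Putting the two ideas together in code: below is a deliberately tiny “inference kernel”, a matrix multiply that turns a hidden state into one score per vocabulary token. Every number is invented, and the injected error is only a stand-in for the GPU-configuration fault OpenAI described, but it shows how a slightly wrong kernel result can flip which token wins.

```python
import numpy as np

def kernel(hidden: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy inference kernel: project a hidden state onto vocabulary scores."""
    return hidden @ weights

hidden = np.array([0.5, -1.0, 0.25])      # the model's internal state (invented)
weights = np.array([[ 1.0,  0.9, -0.5],
                    [-0.2, -0.3,  0.8],
                    [ 0.4,  0.6,  0.1]])  # maps 3 hidden values -> 3 tokens

logits = kernel(hidden, weights)
print("correct logits:", logits, "-> picks token", int(np.argmax(logits)))

# Simulated fault: the kernel returns slightly wrong numbers. A small
# error on a single logit is enough to change which token scores highest.
faulty = logits + np.array([0.3, 0.0, 0.0])
print("faulty logits: ", faulty, "-> picks token", int(np.argmax(faulty)))
```

One wrong token then nudges the context, the next prediction drifts a little further, and within a sentence or two the output has wandered into nonsense.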

What can we learn from this?

For me, the incident with ChatGPT on February 20, 2024, underscores the critical importance of transparency in AI systems. Transparency ensures that users, developers, and stakeholders understand how AI models operate, including their strengths, limitations, and potential risks.

Transparency in AI fosters trust and accountability. Users need to trust that AI systems will behave reliably and responsibly, especially in critical applications like natural language processing. Moreover, transparency enables researchers and developers to identify and address potential biases, errors, or vulnerabilities in AI models, contributing to their continual improvement and ensuring fairness and ethical use.

Ultimately, transparency is not just a desirable attribute of AI systems — it is essential for fostering trust, accountability, and responsible AI deployment in a rapidly evolving technological landscape. As AI continues to play an increasingly prominent role in our lives, prioritising transparency will be crucial for ensuring its beneficial and ethical integration into society.
