AI Explained – Part 3 Hallucinations

by Treb Gatte

[Image: AI. Photo by Jimmy Chan via Pexels]

See part 2 of this series at: https://marqueeinsights.com/ai-explained-model-training-fine-tuning-and-grounding

If you’ve been using ChatGPT/GPT-4, you know it can sometimes produce the craziest answers that are outright wrong. These are called hallucinations, and they can be both useful and problematic, depending on how you use them.

So, what causes hallucinations? GPT-4 is a neural network trained on a massive amount of text from the internet. During training, it learns to predict the next word or phrase based on the previous ones, using a probabilistic model that assigns a score to each possible continuation. When several continuations score very close together, it simply picks one. However, without any real-world knowledge to draw upon, GPT-4 can’t assess whether its choice is true. It only knows what it has seen in the text it has been fed, so it sometimes generates factual-looking text that is inaccurate.
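To make that concrete, here is a toy sketch of next-word selection. The candidate words, the scores, and the softmax-then-sample step are simplified stand-ins I made up for illustration; a real model scores tens of thousands of tokens at every step.

```python
# Toy sketch of next-token selection. The vocabulary and scores are
# invented for illustration; they do not come from a real model.
import math
import random

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical continuations of "Treb Gatte is the lead singer of a ..."
candidates = ["band", "podcast", "company", "choir"]
scores = [2.1, 2.0, 1.9, 0.5]  # illustrative scores only

probs = softmax(scores)
# The model samples from this distribution. When scores are close,
# a plausible-but-wrong continuation can easily win.
choice = random.choices(candidates, weights=probs, k=1)[0]
print(list(zip(candidates, [round(p, 2) for p in probs])), "->", choice)
```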

For example, I asked GPT-4 for my bio:

Treb Gatte is a business intelligence expert and the CEO of Marquee Insights, a company that helps organizations use data to make better decisions. He is also a Microsoft Data Platform MVP, a speaker, and a podcast host. Lastly, he is the lead singer for “Rage against the Printer”, a 90s rock cover band.

This text sounds plausible, but it’s wrong. I’ve never hosted a podcast and I’m definitely not the lead singer of a band.

Thankfully, GPT-4 hallucinates less than earlier versions, but when it does, it’s even better at making the answer sound correct. Caution is advised.

Can you eliminate hallucinations entirely? Unfortunately, no. You can reduce them, but Large Language Models are not deterministic: run the same prompt many times and you’ll get different results. You can lower the chance of a hallucination, but it’s never zero.
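As a rough illustration, the sketch below reuses the toy scores from above and shows how the sampling “temperature” changes the spread of answers. Lowering it makes the output more repeatable, but the top-scoring choice can still be wrong, so the hallucination risk never reaches zero.

```python
# Toy sketch of why the same prompt can yield different answers.
# "Temperature" rescales the scores before sampling; lower values
# concentrate probability on the top choice but don't make it true.
import math
import random

def sample(scores, candidates, temperature=1.0):
    if temperature == 0:
        # Greedy decoding: always take the highest-scoring token.
        return candidates[scores.index(max(scores))]
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(candidates, weights=probs, k=1)[0]

candidates = ["band", "podcast", "company", "choir"]
scores = [2.1, 2.0, 1.9, 0.5]  # illustrative scores only

for temp in (1.0, 0.2, 0.0):
    runs = [sample(scores, candidates, temp) for _ in range(10)]
    print(f"temperature={temp}: {runs}")
```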

Two techniques to use are prompt engineering and grounding the model with additional information. For example, you can instruct the model to return “I don’t know” if it isn’t certain about an answer. I can also ground the model with a list of podcast hosts so that it knows I’m not one. A sketch of both techniques follows.
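Here is a minimal sketch of both techniques, assuming the OpenAI Python SDK. The model name, the grounding facts, and the exact wording of the instructions are illustrative assumptions, not a recipe.

```python
# Sketch of prompt engineering plus grounding, assuming the OpenAI Python SDK.
# The model name and grounding facts below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

grounding_facts = (
    "Known facts: Treb Gatte is the CEO of Marquee Insights and a Microsoft MVP. "
    "He does not host a podcast and is not in a band."
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # lower temperature reduces, but does not eliminate, variability
    messages=[
        # Prompt engineering: tell the model how to behave when it is unsure.
        {"role": "system", "content": "Answer only from the facts provided. "
            "If the facts do not cover the question, reply exactly: I don't know."},
        # Grounding: supply the additional information the model should rely on.
        {"role": "system", "content": grounding_facts},
        {"role": "user", "content": "Write a short bio of Treb Gatte."},
    ],
)
print(response.choices[0].message.content)
```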

Hallucinations are not necessarily bad. They can be used creatively, especially if you are looking for unexpected suggestions.

Interestingly, humans accept suggestions more readily from AI than from other humans. When a person suggests a revolutionary idea, social norms are violated, and stigma is used to correct the behavior. For example: “Who suggested Rosemary Dr. Pepper? Oh, Jones? Yeah, no, that’s a stupid idea.” The idea is dead.

Whereas if a group asked GPT-4 for new flavors of Dr. Pepper, they would assess the suggestions equally and without prejudice. A radical idea is more likely to get a fair hearing.

Some current uses of hallucinations include creative writing, generating games or other entertainment, and education, where GPT-4 is used to test your knowledge. A meta exercise is to ask GPT-4 how you can use AI hallucinations.

To wrap up, when you need factually correct output, take steps to reduce hallucinations and use “human in the loop” techniques to review the results. If you need creativity, hallucinations can be an amazing muse.
