Funnily enough, "hallucinating" is the accepted term for what happens when an AI "invents" new information. I wish I could claim I came up with it lol
It's best to think of ChatGPT and its friends as a super-advanced version of autocomplete on your iPhone. Several years ago there was a brief fad where people would open up Notes on their iPad, type an initial word or phrase, then just tap the autocomplete suggestions repeatedly to see what it would "create". The results were often nonsensical and occasionally funny. That's ChatGPT, but with a much larger dataset to work from (and some fancier algebra in the backend).
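If you want to see the "keep tapping the suggestion" loop in code, here's a tiny toy sketch. To be clear, this is nothing like what ChatGPT actually does under the hood (the word list and bigram counting are just made up for illustration), it's only the same idea in miniature: look at the last word, pick the most likely next word, append it, repeat.

```python
# Toy version of "tap the middle autocomplete button over and over".
# NOT how ChatGPT works internally -- just the same loop in miniature.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat and the cat slept on the mat "
    "while the dog sat on the rug"
).split()

# Count which word tends to follow which (a tiny "bigram" model).
next_words = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    next_words[prev][nxt] += 1

# Start with a seed word and keep accepting the top suggestion.
word = "the"
output = [word]
for _ in range(10):
    suggestions = next_words.get(word)
    if not suggestions:
        break
    word = suggestions.most_common(1)[0][0]  # the "middle button"
    output.append(word)

print(" ".join(output))  # tends to loop or wander -- hence the nonsense
```

Run it and you get something like "the cat sat on the cat sat on the cat sat" - fluent-looking but going nowhere, which is basically the iPad Notes fad in three lines of logic.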
Here's a good article on the whole thing that makes a great analogy between the information stored in ChatGPT and a blurry JPEG. It's a clear explanation of what's happening behind the scenes with these systems, and why they can't help but "hallucinate".
"ChatGPT Is a Blurry JPEG of the Web" (www.newyorker.com): OpenAI's chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?