
Hallucination calibration: Why AI hallucination is a good (and complex) thing

Let’s say you’re born with the gift of a great sense of humor. At about age 12, you start to figure this out, and you’re fired up about it. You’re learning and cataloging new jokes to spring them on everyone you meet. You’re gleefully thrusting yourself into other people’s conversations to say something clever. You’re so funny; you keep cracking yourself up... and yet, why do people seem less charmed by you than ever?

 

But you’re a smart kid and not one to give up on a worthy quest, so you refine your gift over the next few years. You learn to laugh at other people’s jokes, even when you could have told them better. You start reading the room and know when to be serious. You become a complex person, not just a funny one—and in that balance, your humor starts to become the jewel in your crown. 

 

But for a while there, you were a real, genuine, awkward, and annoying turd. When out of balance, your greatest strength was the very thing making you look like a doofus. 



(It's a problem as old as humanity.) 
“Every virtue carried to the extreme is a vice.” - Aristotle

In Eudemian Ethics, Bk. 2, 1221, Aristotle published his Table of Virtues and Vices, a chart of character traits and their counterparts taken to opposite extremes.

| Vice of excess | Vice of deficiency | Virtue |
| --- | --- | --- |
| Irascibility | Impassivity | Gentle temper |
| Foolhardiness | Cowardice | Bravery |
| Shamelessness | Thin-skinned | Shame |
| Intemperance | Insensibility | Temperance |
| Envy | (unnamed) | Fair-mindedness |
| Gain | Disadvantage | Justice |
| Prodigality | Meanness | Liberality |
| Boastfulness | Mock-modesty | Truthfulness |
| Flattery | Churlishness | Friendliness |
| Unaccommodatingness | Servility | Dignity |
| Imperviousness | Softness | Endurance |
| Vanity | Meanness of spirit | Pride |
| Ostentatious extravagance | Niggardliness | Magnificence |
| Unscrupulousness | Unworldliness | Practical wisdom |

It’s a handy guide for accepting that the things that make us great also have the potential to make us intolerable, and vice versa.

 

And as true as it is with humans, it is true with AI. 

 

Hallucination is AI showing its most powerful strength—generalization—as an embarrassing weakness. 

 

I have a brilliant friend from childhood. He missed one question (one!) on his SATs. He can play any instrument after just a few minutes tinkering with it. He’s single-handedly conceived and built, usually late at night, iconic software features that most humans use every day.

 

And he recently got lost in a hotel bathroom. 

 

He literally couldn’t find the door. He had to explain it to me three times before I could even understand how it was possible.   

 

This is AI hallucinating—passing the NY bar exam, doing incredibly complex reasoning, and then telling you dolphins designed the internet. 


(Everything is fine.)


Hallucination exposes the weaknesses in how AI balances different ways of transforming learning into outputs.  

 

When we talk about hallucination, we aren’t talking solely about the facts AI makes up—it’s also the extra fingers and creepy wrongness that come out in AI-generated images. But AI, like the human intelligence that trained it (ours), tends to hang its strengths and weaknesses from the same hook. Hallucination is an entertaining and scary example of how that works.


Why does AI hallucinate? 


AI consumes training data to learn facts and build context for itself, but the usefulness of AI is its ability to generalize. This is the creative part of what AI does: taking what it knows and the context it understands and building something unique from it.

 

Generalization exists in contrast with memorization, where the AI ingests information and context but regurgitates it verbatim. Memorization is much less powerful, more like cataloging information, and not so exciting and new; it is, however, the key to accuracy. 


But it is in the calibration between generalization and memorization that we find our pickle.  


(Love is exciting and new...) 

Generalization = creativity. 

It’s the astoundingly complex reasoning and the dazzling experience of watching AI produce genuine novelty, even when it sometimes can’t effectively tie its own informational shoes.

 

Memorization = accuracy (assuming accurate training data).

However, in moving toward memorization, we dial down creativity and expansive “thinking” while dialing up precision, along with concerns about security, privacy, bias, and IP violation.

 

In the vast space between the two, we may find the best of both... but are equally likely to find ourselves with a sloppy, unimpressive bowl of cold informational oatmeal with a foot growing out of it. 
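If you want to feel this dial in your hands, the simplest knob a real language model exposes for trading precision against novelty is sampling temperature. (Temperature isn't the whole story of hallucination, and the scores below are made-up toy numbers rather than a real model's output, but the sketch shows the calibration in miniature: turn it down and you get the safe, rote answer every time; turn it up and you get variety, including the implausible choices.)

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample a candidate index from model scores ("logits"), scaled by temperature.

    Low temperature  -> nearly always the top score (precise, memorization-like).
    High temperature -> flattened distribution (diverse, generalization-like,
                        and more willing to pick an unlikely candidate).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max before exp() for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw one index according to the resulting probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy scores for four candidate next words (purely illustrative numbers).
logits = [4.0, 2.0, 1.0, 0.5]

cold = [sample_with_temperature(logits, 0.1) for _ in range(1000)]
hot = [sample_with_temperature(logits, 5.0) for _ in range(1000)]

# Cold sampling sticks to one answer; hot sampling wanders across all four.
print(len(set(cold)), len(set(hot)))  # → 1 4
```

Real systems layer much more on top of this (top-p sampling, retrieval, fine-tuning), but the underlying tension is the same: the setting that makes the output interesting is the setting that makes it wrong.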

 

Maybe when AI figures it out, it can explain it to us—just as soon as it finds its way out of the hotel bathroom… 

 

 
 
 

© 2025 Mix Consulting Inc
