A Geometry-Based Hallucination Check That Skips the LLM Judge
A new write-up proposes detecting hallucinations by comparing the direction of question-to-answer embedding shifts against those of nearby grounded examples. The method reportedly achieves perfect separation on multiple benchmarks without using an LLM-as-judge.
Javier Marin shared a Towards Data Science write-up (Jan 17, 2026) on Displacement Consistency (DC), a geometry-based way to flag LLM hallucinations without an LLM-as-judge.
DC looks at the direction of the embedding shift from question → answer and scores its alignment with the shifts of nearby grounded examples via cosine similarity.
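In symbols (my notation, not the write-up's): with an embedding map $e(\cdot)$, a query $q$, a candidate answer $a$, and $k$ retrieved grounded pairs $(q_i, a_i)$, the score is the cosine between the candidate's displacement and the neighbors' mean displacement,

$$
\mathrm{DC}(q, a) = \cos\!\left(e(a) - e(q),\; \frac{1}{k}\sum_{i=1}^{k}\bigl(e(a_i) - e(q_i)\bigr)\right),
$$

so a low score would presumably be the hallucination signal.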
How it works (a code sketch follows the list):
- Build a domain-specific set of grounded Q–A pairs
- For a new query, retrieve nearby questions
- Compute the neighbors’ mean displacement direction
- Score how closely the new answer’s displacement matches it
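
A minimal sketch of that loop, assuming a caller-supplied `embed` function that maps a string to a 1-D NumPy vector (e.g. a sentence-transformers model's `.encode`); the function and variable names here are mine, not from the write-up:

```python
import numpy as np

def displacement(embed, question: str, answer: str) -> np.ndarray:
    """Embedding shift from question to answer."""
    return embed(answer) - embed(question)

def dc_score(embed, query: str, candidate_answer: str,
             grounded_pairs: list[tuple[str, str]], k: int = 5) -> float:
    """Cosine similarity between the candidate answer's displacement and the
    mean displacement of the k grounded Q-A pairs whose questions are most
    similar to the query."""
    q_vec = embed(query)

    # Retrieve the k grounded questions nearest to the query (cosine similarity).
    q_embs = np.stack([embed(q) for q, _ in grounded_pairs])
    sims = q_embs @ q_vec / (
        np.linalg.norm(q_embs, axis=1) * np.linalg.norm(q_vec) + 1e-12
    )
    neighbors = np.argsort(-sims)[:k]

    # Mean displacement direction of those neighbors.
    disps = np.stack([displacement(embed, *grounded_pairs[i]) for i in neighbors])
    mean_disp = disps.mean(axis=0)

    # Alignment of the candidate answer's displacement with that direction.
    cand_disp = displacement(embed, query, candidate_answer)
    return float(
        cand_disp @ mean_disp
        / (np.linalg.norm(cand_disp) * np.linalg.norm(mean_disp) + 1e-12)
    )
```

In practice you would precompute and cache the grounded questions' embeddings and displacements rather than re-embedding them per query; the summary doesn't give a score threshold or a value of k, so treat both as tuning choices.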
Reported results:
- Tested across five embedding models: all-mpnet-base-v2, e5-large-v2, bge-large-en-v1.5, gtr-t5-large, nomic-embed-text-v1.5
- AUROC = 1.0 on a synthetic benchmark for all five models
- Also reports perfect separation on:
  - HaluEval-QA
  - HaluEval-Dialogue
  - TruthfulQA
- No source documents required at inference time