TrustScoreEval: Trust Scores for AI/LLM Responses — Detect hallucinations, flag misinformation & validate outputs. Build trustworthy AI.
Updated Oct 13, 2025 · Python
Multimodal LLM hallucination quantification via KL-smoothed scores + spectral/energy models (RKHS, hypergraphs).
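The description above names "KL-smoothed scores" without further detail, so the following is a minimal, hypothetical sketch of what such a score could look like: the average per-token KL divergence between the model's output distribution and a Laplace-smoothed reference distribution. The function name, smoothing constant, and inputs are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch only: "KL-smoothed score" here is interpreted as the mean
# per-token KL(model || smoothed reference). All names and parameters are assumed.
import numpy as np


def kl_smoothed_score(model_probs: np.ndarray,
                      reference_probs: np.ndarray,
                      alpha: float = 1e-3) -> float:
    """Average per-token KL divergence; a higher value suggests a less grounded response."""
    # Laplace-smooth the reference so the KL term stays finite on zero-probability tokens.
    ref = reference_probs + alpha
    ref = ref / ref.sum(axis=-1, keepdims=True)

    model = model_probs / model_probs.sum(axis=-1, keepdims=True)
    kl_per_token = np.sum(model * (np.log(model + 1e-12) - np.log(ref)), axis=-1)
    return float(kl_per_token.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic per-token distributions over a 50-symbol vocabulary for a 10-token response.
    model_probs = rng.dirichlet(np.ones(50), size=10)
    reference_probs = rng.dirichlet(np.ones(50), size=10)
    print(f"KL-smoothed hallucination score: {kl_smoothed_score(model_probs, reference_probs):.4f}")
```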