Mitigating Confidence Distortion for Large Language Models – Zicheng Xu (JHU)

When:
September 8, 2025 @ 12:00 pm – 1:30 pm
Where:
Hackerman Hall B17
Cost:
Free

Abstract

Although Large Language Models (LLMs) perform well across general domains, they exhibit a confidence distortion problem on multiple-choice question answering (MCQA), particularly as the number of answer choices increases. Specifically, on MCQA with many choices, LLMs suffer from under-confidence in correct predictions and over-confidence in incorrect ones, leading to substantially degraded performance. To address this problem, we propose Self-Ensemble, which splits the choices into several groups and ensembles LLM predictions across these groups to reach a final decision. The advantage of Self-Ensemble is its plug-and-play nature: it can be integrated into existing LLM architectures through a designed attention mask and positional encoding, without requiring labeled datasets for parameter tuning. Experimental results on three LLMs and datasets demonstrate that Self-Ensemble comprehensively addresses the confidence distortion problem of LLMs, outperforming standard inference as well as baseline methods.
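For illustration only, the Python sketch below shows the split-and-ensemble idea in its naive form: partition the answer choices into groups, score each group with the model, and keep the most confident group winner. This is not the speaker's implementation, which reportedly operates inside a single forward pass via a designed attention mask and positional encoding; the score_fn callable and group_size parameter here are assumptions made for the sketch.

from typing import Callable, Dict, List, Tuple

def split_into_groups(choices: List[str], group_size: int) -> List[List[str]]:
    # Partition the answer choices into smaller groups.
    return [choices[i:i + group_size] for i in range(0, len(choices), group_size)]

def self_ensemble_answer(
    question: str,
    choices: List[str],
    score_fn: Callable[[str, List[str]], Dict[str, float]],  # hypothetical: maps (question, group) to a score per choice
    group_size: int = 4,
) -> Tuple[str, float]:
    # Score each group independently, renormalize confidences within the group,
    # and return the group winner with the highest renormalized confidence.
    best_choice, best_conf = None, -1.0
    for group in split_into_groups(choices, group_size):
        probs = score_fn(question, group)        # e.g. softmax over the choice labels in this group
        total = sum(probs.values()) or 1.0
        winner = max(probs, key=probs.get)
        conf = probs[winner] / total             # confidence relative to this smaller group
        if conf > best_conf:
            best_choice, best_conf = winner, conf
    return best_choice, best_conf

The intuition this sketch captures is that each group presents the model with fewer distractors, so per-group confidences are less distorted; the final decision then aggregates those better-calibrated judgments rather than relying on a single many-choice prediction.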

Advisors

Dr. Vladimir Braverman and Dr. Alexander Szalay

Center for Language and Speech Processing