arXiv 2026 · LLM Alignment
HAL: Inducing Human-likeness in LLMs with Alignment
Masum Hasan, Junjie Zhao, Ehsan Hoque
A framework for aligning language models toward conversational human-likeness, using an
interpretable, data-driven reward derived from explicit conversational traits.
Project Page
arXiv
Code