New paper published in Transactions on Machine Learning Research (TMLR)
Adapting Language Models to Produce Good Class Probabilities for Classification Tasks
Lautaro Estienne, Matias Vera, Elizabeth Fons, Elena Kochkina, Pablo Piantanida, Luciana Ferrer
Abstract
Large generative language models (GLMs) provide a versatile tool for solving a wide variety of natural language processing tasks. GLM responses, though, are provided in the form of text, without an indication of the model's confidence in the answer. This limits the usability of these models in high-risk applications, where decisions made based on an incorrect answer can have severe consequences. In this work, we focus on the problem of generating class posterior distributions for text classification tasks like sentiment, news category, and intent classification. These posteriors can be used for decision making and as interpretable scores for the user. We show that the naive approach for computing posteriors based on the token posteriors produced by the GLM results in extremely poor posteriors. We then explore different adaptation approaches for improving the quality of posteriors, focusing on low-resource scenarios where a small amount of data is available for adaptation. We show that parameter-efficient supervised fine-tuning (SFT), while providing large gains in terms of decision quality, produces suboptimal posteriors due to overfitting. To address this problem, we propose an approach that combines SFT and post-hoc calibration (PHC) using a three-stage training strategy, improving the quality of both posteriors and categorical decisions.
Link to the paper: https://openreview.net/forum?id=VVneIp69GR
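The naive-posteriors-plus-calibration idea mentioned in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's method: it assumes we already have per-class log-scores (e.g., the label-token log-probabilities a GLM assigns to each candidate class), turns them into posteriors with a softmax, and applies one simple form of post-hoc calibration (temperature scaling fit by grid search on held-out labeled data). The data here is synthetic and deliberately overconfident.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels):
    # Average negative log-likelihood of the true labels:
    # a proper scoring rule that penalizes poor posteriors
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def fit_temperature(logits, labels):
    # Post-hoc calibration (one simple variant): pick the temperature
    # that minimizes NLL on held-out data. Including t=1.0 in the grid
    # guarantees calibration never hurts on this set.
    grid = np.concatenate([[1.0], np.linspace(0.1, 10.0, 200)])
    return min(grid, key=lambda t: nll(logits / t, labels))

# Synthetic "label-token log-scores" for a 3-class task: correct class
# boosted strongly, then noise added, yielding overconfident posteriors.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = np.eye(3)[labels] * 8.0 + rng.normal(0.0, 4.0, size=(200, 3))

t = fit_temperature(logits, labels)
naive_nll = nll(logits, labels)            # uncalibrated posteriors
calibrated_nll = nll(logits / t, labels)   # temperature-scaled posteriors
```

In practice the paper combines parameter-efficient SFT with post-hoc calibration in a multi-stage strategy; the snippet above only illustrates why raw (here, synthetic) scores can yield poor posteriors and how a simple calibration step improves them, as measured by NLL.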