Trusting Language Models In Education

Neto Jogi Suda, Deng Li, Raya Thejaswi, Shahbazi Reza, Liu Nick, Venkatesh Adhitya, Shah Miral, Khosla Neeru, Guido Rodrigo Capobianco. arXiv 2023

[Paper]    
Attention Mechanism, BERT, Model Architecture, Transformer

Language models are widely used in education. Although modern deep learning models achieve strong performance on question-answering tasks, they sometimes make errors. To avoid misleading students with wrong answers, it is important to calibrate the confidence, that is, the prediction probability, of these models. In our work, we propose using an XGBoost model on top of BERT to output corrected probabilities, with features derived from the attention mechanism. Our hypothesis is that the level of uncertainty contained in the flow of attention is related to the quality of the model's response itself.
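
Below is a minimal sketch of the pipeline the abstract describes, assuming the Hugging Face transformers and xgboost libraries. It summarizes BERT's attention maps into simple uncertainty features (here, per-layer attention entropy, an illustrative choice rather than the paper's exact feature set) and trains an XGBoost classifier to map those features to a corrected probability that the model's answer is correct. The model name, feature definitions, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Sketch only: attention-based confidence calibration for a BERT QA model.
# Assumes a fine-tuned sequence classifier stands in for the QA model.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from xgboost import XGBClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", output_attentions=True  # expose attention maps
)
model.eval()

def attention_features(text: str) -> np.ndarray:
    """Summarize per-layer attention entropy as an uncertainty signal."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    feats = []
    for layer_attn in outputs.attentions:  # each: (batch, heads, seq, seq)
        probs = layer_attn.clamp_min(1e-12)
        entropy = -(probs * probs.log()).sum(dim=-1)  # per-token entropy
        feats.extend([entropy.mean().item(), entropy.std().item()])
    # Keep the model's own softmax confidence as an extra feature.
    conf = outputs.logits.softmax(dim=-1).max().item()
    return np.array(feats + [conf])

def fit_calibrator(texts, correct):
    """texts: QA inputs; correct: 1 if the model answered correctly."""
    X = np.stack([attention_features(t) for t in texts])
    calibrator = XGBClassifier(n_estimators=200, max_depth=4)
    calibrator.fit(X, np.asarray(correct))
    return calibrator
```

At inference time, `calibrator.predict_proba(attention_features(text)[None])[:, 1]` would give the corrected confidence, which could be thresholded before an answer is shown to a student.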

Similar Work