
Labrador: Exploring The Limits Of Masked Language Modeling For Laboratory Data

Bellamy David R., Kumar Bhawesh, Wang Cindy, Beam Andrew. arXiv 2023

[Paper]    
Tags: BERT, Fine Tuning, Language Modeling, Masked Language Model, Model Architecture, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques, Transformer

In this work we introduce Labrador, a pre-trained Transformer model for laboratory data. Labrador and BERT were pre-trained on a corpus of 100 million lab test results from electronic health records (EHRs) and evaluated on various downstream outcome prediction tasks. Both models demonstrate mastery of the pre-training task, but neither consistently outperforms XGBoost on downstream supervised tasks. Our ablation studies reveal that transfer learning shows limited effectiveness for BERT and achieves only marginal success with Labrador. We explore the reasons for this failure of transfer learning and suggest that, among other factors, the data-generating process underlying each patient cannot be characterized sufficiently using labs alone. We encourage future work to focus on joint modeling of multiple EHR data categories and to include tree-based baselines in their evaluations.
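
The recommendation to include tree-based baselines is straightforward to act on. Below is a minimal sketch, not the paper's actual pipeline, of an XGBoost baseline for a downstream outcome prediction task on a synthetic patient-by-lab feature matrix; the data layout, hyperparameters, and evaluation metric here are illustrative assumptions only.

```python
# Hypothetical tree-based baseline of the kind the authors recommend comparing against.
# Synthetic data stands in for EHR lab features; nothing here reproduces Labrador's setup.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-in for lab data: one row per patient, one column per lab test value,
# plus a binary downstream outcome label driven by two of the labs.
n_patients, n_labs = 1000, 20
X = rng.normal(size=(n_patients, n_labs))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Illustrative hyperparameters; in practice these would be tuned per task.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("Test AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A baseline like this requires no pre-training corpus, which is part of why the paper argues it should appear alongside any fine-tuned Transformer in evaluations on lab data.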

Similar Work