
ALMol: Aligned Language-Molecule Translation LLMs through Offline Preference Contrastive Optimisation

Gkoumas Dimitris. arXiv 2024

[Paper]    
Reinforcement Learning, Training Techniques

The intersection of chemistry and Artificial Intelligence (AI) is an active research area that aims to accelerate scientific discovery. Integrating large language models (LLMs) with scientific modalities has shown significant promise in this endeavour. However, challenges persist in training efficacy and the out-of-distribution problem, particularly because existing approaches rely on ever-larger models and datasets. In this context, we focus on machine language-molecule translation and deploy a novel training approach, contrastive preference optimisation, which avoids generating translations that are merely adequate but not perfect. To ensure generalisability and mitigate memorisation effects, we conduct experiments using only 10% of the data. Our results demonstrate that our models achieve up to a 32% improvement over counterpart models. Finally, we introduce a fine-grained, domain-agnostic evaluation method to assess hallucination in LLMs and promote their responsible use.
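For readers unfamiliar with the objective named in the abstract, the sketch below illustrates a contrastive preference optimisation (CPO) loss in the style of Xu et al. (2024): a pairwise preference term that ranks a preferred translation above a dispreferred one, plus a behaviour-cloning regulariser on the preferred output. This is a minimal illustration, not code from the paper; the function name, tensor names, and the value of beta are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def cpo_loss(chosen_logps: torch.Tensor,
             rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Contrastive preference optimisation loss (after Xu et al., 2024).

    chosen_logps / rejected_logps: summed log-probabilities the policy
    assigns to the preferred and dispreferred translations in a batch.
    """
    # Pairwise preference term: open a margin between the preferred and
    # dispreferred translations (reference-model-free, unlike DPO).
    pref = -F.logsigmoid(beta * (chosen_logps - rejected_logps)).mean()
    # Behaviour-cloning regulariser: keep likelihood mass on the
    # preferred translations so the policy does not degenerate.
    nll = -chosen_logps.mean()
    return pref + nll

# Toy usage with made-up log-probabilities for a batch of 3 pairs.
chosen = torch.tensor([-12.3, -8.7, -15.1])
rejected = torch.tensor([-14.0, -9.9, -15.8])
print(cpo_loss(chosen, rejected))
```

Because the objective compares two candidate outputs directly, it needs no frozen reference model at training time, which is part of what makes offline preference training attractive in low-data settings like the 10% regime the abstract describes.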

Similar Work