
Galactic Chitchat: Using Large Language Models To Converse With Astronomy Literature

Ioana Ciucă, Yuan-Sen Ting. arXiv 2023

[Paper]    
Distillation, Efficiency And Optimization, Fine Tuning, GPT, Model Architecture, Prompting, RAG, Reinforcement Learning, Tools

We demonstrate the potential of the state-of-the-art OpenAI GPT-4 large language model to engage in meaningful interactions with Astronomy papers using in-context prompting. To optimize for efficiency, we employ a distillation technique that effectively reduces the size of the original input paper by 50%, while maintaining the paragraph structure and overall semantic integrity. We then explore the model’s responses using a multi-document context (ten distilled documents). Our findings indicate that GPT-4 excels in the multi-document domain, providing detailed answers contextualized within the framework of related research findings. Our results showcase the potential of large language models for the astronomical community, offering a promising avenue for further exploration, particularly the possibility of utilizing the models for hypothesis generation.
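The pipeline described above can be sketched in a few lines: distill each paper to roughly half its length while keeping paragraph structure, then concatenate the distilled documents into a single in-context prompt for the model. This is a minimal illustrative sketch, not the authors' implementation; the `distill` placeholder, the document labels, and the prompt wording are all assumptions.

```python
def distill(paper_text: str, keep_ratio: float = 0.5) -> str:
    """Toy stand-in for the paper's distillation step: keep roughly half
    the paragraphs while preserving paragraph structure. The actual method
    is semantic; this placeholder only illustrates the interface."""
    paragraphs = [p for p in paper_text.split("\n\n") if p.strip()]
    kept = paragraphs[: max(1, round(len(paragraphs) * keep_ratio))]
    return "\n\n".join(kept)


def build_multidoc_prompt(distilled_docs: list[str], question: str) -> str:
    """Assemble a multi-document context (e.g. ten distilled papers)
    followed by the user's question, as one in-context prompt string."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(distilled_docs)
    )
    return (
        "Answer the question using only the astronomy papers below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The resulting prompt string would then be sent to a chat-completion model such as GPT-4; the querying step is omitted here since it depends on the API in use.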

Similar Work