She Had Cobalt Blue Eyes: Prompt Testing To Create Aligned And Sustainable Language Models · The Large Language Model Bible Contribute to LLM-Bible

She Had Cobalt Blue Eyes: Prompt Testing To Create Aligned And Sustainable Language Models

Chatrath Veronica, Bamgbose Oluwanifemi, Raza Shaina. Arxiv 2023

[Paper]    
Fine Tuning GPT Model Architecture Pretraining Methods Prompting Training Techniques

As the use of large language models (LLMs) increases within society, as does the risk of their misuse. Appropriate safeguards must be in place to ensure LLM outputs uphold the ethical standards of society, highlighting the positive role that artificial intelligence technologies can have. Recent events indicate ethical concerns around conventionally trained LLMs, leading to overall unsafe user experiences. This motivates our research question: how do we ensure LLM alignment? In this work, we introduce a test suite of unique prompts to foster the development of aligned LLMs that are fair, safe, and robust. We show that prompting LLMs at every step of the development pipeline, including data curation, pre-training, and fine-tuning, will result in an overall more responsible model. Our test suite evaluates outputs from four state-of-the-art language models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in this paper highlights a gap between societal alignment and the capabilities of current LLMs. Additionally, implementing a test suite such as ours lowers the environmental overhead of making models safe and fair.

Similar Work