A Comprehensive Overview of Large Language Models (LLMs) with Papers, Resources and Colab Notebooks · The Large Language Model Bible

Welcome to the LLM Bible!

Large Language Models (LLMs) represent a groundbreaking leap in artificial intelligence, enabling machines to interpret, generate, and engage with human language in ways that are both profound and transformative. These models, trained on diverse datasets containing trillions of words, have become the backbone of numerous applications that influence how we gather information, make decisions, and interact with technology.

Figure: Emergent capabilities of LLMs with growing parameter count

This website is dedicated to exploring the fascinating world of LLMs. Here, you will find a curated collection of research papers and educational materials to learn about LLMs.

🏷 Browse Papers by Tag

Agent Agentic Applications Attention Mechanism BERT Bias Mitigation Distillation Efficiency And Optimization Ethics And Bias Fairness Few Shot Fine Tuning GPT Has Code In Context Learning Interpretability And Explainability Language Modeling Large Scale Training Masked Language Model Merging Model Architecture Multimodal Models Pretraining Methods Prompting Pruning Quantization RAG Reinforcement Learning Responsible AI Scaling Laws Security Survey Paper TACL Tokenization Tools Training Techniques Transformer Uncategorized

Chat with the LLM-Bible Bot

The LLM-Bible Bot is an expert on every paper on this site, a collection that currently numbers well over 10,000 articles and is still expanding. Feel free to ask it any question about Large Language Models (LLMs) or related research resources here:

About This Site

This site is an experiment: a living literature review that allows you to explore, search, and navigate the literature in this area.

Contributing

This research area is evolving so quickly that a static review cannot keep up. But a website can! We hope to make this site a living document. Anyone can add a paper to this website by completing a web form.


Copyright © Sean Moran 2024. All opinions are my own.