
ChatGPT May Pass the Bar Exam Soon, But Has a Long Way to Go for the LexGLUE Benchmark

Chalkidis, Ilias. arXiv 2023

[Paper] [Code]    
Tags: Agentic, GPT, Has Code, Model Architecture, RAG, Survey Paper

Following the hype around OpenAI's ChatGPT conversational agent, the latest milestone in the recent development of Large Language Models (LLMs) demonstrating unprecedented emergent zero-shot capabilities, we audit OpenAI's latest GPT-3.5 model, `gpt-3.5-turbo`, the first available ChatGPT model, on the LexGLUE benchmark in a zero-shot fashion, providing examples in a templated instruction-following format. The results indicate that ChatGPT achieves an average micro-F1 score of 47.6% across LexGLUE tasks, surpassing the baseline guessing rates. Notably, the model performs exceptionally well on some datasets, achieving micro-F1 scores of 62.8% and 70.2% on the ECtHR B and LEDGAR datasets, respectively. The code base and model predictions are available for review at https://github.com/coastalcph/zeroshot_lexglue.
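The evaluation setup described in the abstract — a templated zero-shot instruction prompt per example, scored with micro-F1 — can be sketched roughly as follows. The prompt wording, label set, and helper names here are illustrative assumptions, not the paper's exact template:

```python
# Rough sketch of a zero-shot templated evaluation for a LexGLUE-style
# multi-label classification task. The prompt text and labels below are
# illustrative placeholders, not the paper's actual template or label set.

def build_prompt(text: str, labels: list) -> str:
    """Format one zero-shot instruction-following prompt for a model
    such as gpt-3.5-turbo (the model call itself is omitted here)."""
    options = "\n".join("- " + label for label in labels)
    return (
        "Read the following legal text and answer with the applicable "
        "label(s) from the list.\n\n"
        "Text: " + text + "\n\n"
        "Labels:\n" + options + "\n\nAnswer:"
    )


def micro_f1(gold: list, pred: list) -> float:
    """Micro-averaged F1: true/false positives and false negatives are
    summed over all examples and labels before computing F1."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

For example, with gold labels `[{"A"}, {"B", "C"}]` and predictions `[{"A"}, {"B"}]`, the summed counts are 2 true positives, 0 false positives, and 1 false negative, giving a micro-F1 of 0.8.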

Similar Work