
Generating Educational Materials with Different Levels of Readability Using LLMs

Chieh-Yang Huang, Jing Wei, Ting-Hao 'Kenneth' Huang. arXiv 2024

[Paper]    
Applications Few Shot GPT In Context Learning Language Modeling Model Architecture Prompting

This study introduces the leveled-text generation task: rewriting educational materials to a specified readability level while preserving their meaning. We assess the ability of GPT-3.5, LLaMA-2 70B, and Mixtral 8x7B to generate content at various readability levels through zero-shot and few-shot prompting. Evaluation on 100 processed educational materials shows that few-shot prompting significantly improves both readability manipulation and information preservation. LLaMA-2 70B is better at hitting the target difficulty range, while GPT-3.5 is better at preserving the original meaning. However, manual inspection reveals concerns such as the introduction of misinformation and inconsistent distribution of edits across a text. These findings underscore the need for further research to ensure the quality of generated educational content.
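The few-shot prompting setup described above can be sketched as simple prompt assembly: a task instruction, a handful of (passage, target level, rewrite) demonstrations, and the query passage. This is a minimal illustration, not the paper's actual prompt; the demonstration text and grade labels are hypothetical placeholders.

```python
# Minimal sketch of few-shot prompt construction for leveled-text generation.
# The instruction wording, example passages, and grade labels are assumptions
# for illustration, not the prompts used in the paper.

def build_few_shot_prompt(examples, source_text, target_level):
    """Assemble a prompt asking an LLM to rewrite `source_text` at
    `target_level` readability while preserving its meaning.

    `examples` is a list of (passage, level, rewrite) demonstration
    triples; an empty list yields the zero-shot variant."""
    parts = [
        "Rewrite the passage at the requested readability level "
        "while preserving its meaning.\n"
    ]
    for src, level, rewritten in examples:
        parts.append(f"Passage: {src}\nTarget level: {level}\nRewrite: {rewritten}\n")
    # The query passage comes last, with the rewrite left for the model.
    parts.append(f"Passage: {source_text}\nTarget level: {target_level}\nRewrite:")
    return "\n".join(parts)

# Hypothetical demonstration pair for a low readability level.
demo = [(
    "Photosynthesis converts light energy into chemical energy.",
    "Grade 3",
    "Plants use sunlight to make their own food.",
)]
prompt = build_few_shot_prompt(
    demo, "Mitochondria are the powerhouse of the cell.", "Grade 3"
)
print(prompt)
```

The resulting string would be sent to any of the evaluated models (GPT-3.5, LLaMA-2 70B, Mixtral 8x7B) as a completion prompt; passing an empty `examples` list reproduces the zero-shot condition.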

Similar Work