Webapp1k: A Practical Code-generation Benchmark For Web App Development

Cui Yi. arXiv 2024

[Paper]

We introduce WebApp1K, a practical code-generation benchmark that measures the ability of LLMs to develop web apps. The benchmark aims to calibrate LLM output and help models progressively improve code correctness and functionality. It is lightweight and easy to run. We present the initial version of WebApp1K and share our findings from running the benchmark against the latest frontier LLMs. First, open-source LLMs deliver impressive performance, trailing closely behind GPT-4o and Claude 3.5. Second, model size correlates strongly with code correctness. Third, no prompting technique has been found to lift performance either universally across all models or significantly for any single model.

Similar Work