Can We Trust Large Language Models Generated Code? A Framework for In-Context Learning, Security Patterns, and Code Evaluations Across Diverse LLMs

Ahmad Mohsin, Helge Janicke, Adrian Wood, Iqbal H. Sarker, Leandros Maglaras, Naeem Janjua. arXiv 2024

Applications, Few Shot, GPT, In Context Learning, Model Architecture, Prompting, RAG, Responsible AI, Security, Tools

Large Language Models (LLMs) such as ChatGPT and GitHub Copilot have revolutionized automated code generation in software engineering. However, as these models are increasingly used for software development, concerns have arisen about the security and quality of the code they generate. These concerns stem from the fact that LLMs are primarily trained on publicly available code repositories and internet-based textual data, which may contain insecure code; this creates a significant risk of perpetuating vulnerabilities in generated code and of introducing attack vectors that malicious actors can exploit. Our research tackles these issues by introducing a framework for secure behavioral learning of LLMs through In-Context Learning (ICL) patterns during code generation, followed by rigorous security evaluations. To this end, we selected four diverse LLMs for experimentation, evaluated them across three programming languages, and identified security vulnerabilities and code smells. Code is generated through ICL with curated problem sets and then undergoes rigorous security testing to assess its overall quality and trustworthiness. Our results indicate that ICL-driven one-shot and few-shot learning patterns can improve code security, reducing vulnerabilities across a variety of programming scenarios. Developers and researchers should be aware that LLMs have a limited understanding of security principles, which can lead to security breaches when generated code is deployed in production systems, and that LLMs are therefore a potential source of new vulnerabilities in the software supply chain; this should be considered whenever LLMs are used for code generation. This article offers insights into improving LLM security and encourages the proactive, security-aware use of LLMs for code generation to help ensure the safety of software systems.
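The framework described in the abstract can be pictured as two steps: compose a security-focused one-shot or few-shot prompt, then statically evaluate whatever code the model returns. The sketch below illustrates that loop in Python under stated assumptions; the prompt wording, the `generate_code()` stub, and the use of Bandit as the scanner are illustrative choices, not the authors' actual prompts or tooling.

```python
"""Minimal sketch (not the paper's code): build a few-shot ICL prompt that
pairs an insecure pattern with its secure counterpart, then run a static
security scan on the model's output. Requires the Bandit CLI to be installed
(`pip install bandit`); generate_code() is a placeholder for the LLM call."""
import json
import subprocess
import tempfile

# One-shot exemplar: insecure SQL string formatting vs. a parameterized query.
FEW_SHOT_EXAMPLE = """
# Insecure (SQL injection risk):
#   cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)
# Secure (parameterized query):
#   cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
"""


def build_icl_prompt(task: str) -> str:
    """Compose a security-focused few-shot prompt for a coding task."""
    return (
        "You are a security-conscious Python developer.\n"
        "Follow the secure pattern shown in the example below.\n"
        f"{FEW_SHOT_EXAMPLE}\n"
        f"Task: {task}\n"
        "Return only the Python code."
    )


def generate_code(prompt: str) -> str:
    """Stand-in for a call to whichever LLM is under evaluation.
    Returns a canned snippet here so the sketch runs end to end."""
    return (
        "import sqlite3\n"
        "def get_user(name):\n"
        "    conn = sqlite3.connect('app.db')\n"
        "    cur = conn.cursor()\n"
        "    cur.execute('SELECT * FROM users WHERE name = ?', (name,))\n"
        "    return cur.fetchone()\n"
    )


def scan_with_bandit(code: str) -> list[dict]:
    """Run Bandit (a common Python security linter) and return its findings."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["bandit", "-f", "json", path], capture_output=True, text=True
    )
    return json.loads(result.stdout).get("results", [])


if __name__ == "__main__":
    prompt = build_icl_prompt("Fetch a user record by name from a SQLite database.")
    candidate = generate_code(prompt)       # model-generated code
    findings = scan_with_bandit(candidate)  # static security evaluation
    print(f"{len(findings)} potential issue(s) found")
```

In an actual evaluation along the lines the abstract describes, the stub would call each of the four LLMs under test, the scan step would use a language-appropriate analyzer for each of the three target languages, and vulnerability counts would be compared across zero-shot, one-shot, and few-shot prompting.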
