
An Empirical Study Of Netops Capability Of Pre-trained Large Language Models

Miao Yukai, Bai Yu, Chen Li, Li Dan, Sun Haifeng, Wang Xizheng, Luo Ziqiu, Ren Yanyu, Sun Dapeng, Xu Xiuting, Zhang Qi, Xiang Chao, Li Xinchi. arXiv 2023

[Paper]
Attention Mechanism GPT Model Architecture

The versatile capabilities of Pre-trained Large Language Models (LLMs) have attracted much attention from industry. However, some vertical domains are more interested in the in-domain capabilities of LLMs. For the Networks domain, we present NetEval, an evaluation set for measuring the comprehensive capabilities of LLMs in Network Operations (NetOps). NetEval is designed to evaluate commonsense knowledge and inference ability in NetOps in a multi-lingual context. NetEval consists of 5,732 questions about NetOps, covering five different sub-domains of NetOps. With NetEval, we systematically evaluate the NetOps capability of 26 publicly available LLMs. The results show that only GPT-4 achieves performance competitive with humans, though some open models, such as LLaMA 2, demonstrate significant potential.
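The evaluation described in the abstract, scoring models by accuracy on multiple-choice NetOps questions, might be sketched as follows. The question schema, the `evaluate` helper, and the toy model are illustrative assumptions, not the paper's actual code or data.

```python
# Hypothetical sketch of a NetEval-style multiple-choice evaluation loop.
# The question format and scoring here are assumptions, not the paper's code.

def evaluate(model, questions):
    """Return the accuracy of `model` over multiple-choice `questions`.

    Each question is a dict with a prompt, a list of options, and the
    index of the correct answer.
    """
    correct = 0
    for q in questions:
        prediction = model(q["prompt"], q["options"])  # predicted option index
        if prediction == q["answer"]:
            correct += 1
    return correct / len(questions)

# Toy stand-in "model" that always picks the first option.
first_option_model = lambda prompt, options: 0

# Two invented sample questions in the assumed schema.
sample_questions = [
    {"prompt": "Which protocol resolves IP addresses to MAC addresses?",
     "options": ["ARP", "DNS", "BGP", "OSPF"], "answer": 0},
    {"prompt": "Which port does SSH use by default?",
     "options": ["80", "22", "443", "53"], "answer": 1},
]

accuracy = evaluate(first_option_model, sample_questions)  # 1 of 2 correct
```

A real harness would additionally handle prompt formatting per model, multi-lingual question variants, and aggregation across the five NetOps sub-domains.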
