This paper benchmarks GPT-5, DeepSeek-R1, Qwen-Plus, and Llama-3.3-70B-Instruct on a dataset of 1,068 questions from computer science certification exams, evaluating their performance across language, cognitive level, and domain. GPT-5 excels in English, Qwen-Plus in Chinese, and DeepSeek-R1 shows balanced cross-lingual ability, but all models struggle with complex reasoning. The study highlights the potential and limitations of LLMs in CS education, offering insights for curriculum design and assessment.
GPT-5 isn't always the smartest student: Qwen-Plus outshines it in Chinese CS certification exams, revealing critical cross-lingual performance gaps in LLMs.
Large language models (LLMs) are increasingly applied in computer science education for tasks such as tutoring, content generation, and code assessment. However, systematic evaluations aligned with formal curricula and certification standards remain limited. This study benchmarked four recent models (GPT-5, DeepSeek-R1, Qwen-Plus, and Llama-3.3-70B-Instruct) on a dataset of 1,068 questions drawn from six certification exams covering networking, office applications, and Java programming. We evaluated performance across language (Chinese vs. English), cognitive levels based on Bloom's Taxonomy, domain knowledge, confidence-accuracy alignment, and robustness to input masking. GPT-5 performed best on English-language certifications, while Qwen-Plus led in Chinese contexts. DeepSeek-R1 achieved the most balanced cross-lingual performance, whereas Llama-3.3-70B-Instruct showed clear weaknesses in higher-order reasoning and robustness. Accuracy declined for all models as cognitive complexity increased. These findings provide empirical support for integrating LLMs into computer science education and offer practical implications for curriculum design and assessment.
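To make the reporting scheme the abstract describes concrete, here is a minimal Python sketch of an evaluation harness that breaks accuracy down by language and Bloom's-Taxonomy level and computes a simple confidence-accuracy alignment score (expected calibration error). The record fields (`language`, `bloom_level`, `correct`, `confidence`), the labels, and the grading scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a benchmark scorer: accuracy grouped by language and Bloom level,
# plus expected calibration error (ECE) for confidence-accuracy alignment.
# Field names and labels are hypothetical; they are not from the paper.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Result:
    language: str      # assumed labels, e.g. "en" or "zh"
    bloom_level: str   # e.g. "remember", "apply", "analyze"
    correct: bool      # model answer matched the certification-exam key
    confidence: float  # model's self-reported confidence in [0, 1]

def accuracy_by(results: list[Result], key: str) -> dict[str, float]:
    """Mean accuracy grouped by a record attribute (language or bloom_level)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in results:
        group = getattr(r, key)
        totals[group] += 1
        hits[group] += r.correct
    return {g: hits[g] / totals[g] for g in totals}

def expected_calibration_error(results: list[Result], bins: int = 10) -> float:
    """Weighted gap between stated confidence and observed accuracy per bin."""
    binned = defaultdict(list)
    for r in results:
        binned[min(int(r.confidence * bins), bins - 1)].append(r)
    n = len(results)
    ece = 0.0
    for rs in binned.values():
        acc = sum(r.correct for r in rs) / len(rs)
        conf = sum(r.confidence for r in rs) / len(rs)
        ece += (len(rs) / n) * abs(acc - conf)
    return ece

# Toy usage with fabricated records, only to show the reporting shape:
results = [
    Result("en", "remember", True, 0.9),
    Result("en", "analyze", False, 0.8),
    Result("zh", "apply", True, 0.7),
    Result("zh", "analyze", False, 0.6),
]
print(accuracy_by(results, "language"))     # {'en': 0.5, 'zh': 0.5}
print(accuracy_by(results, "bloom_level"))  # per-level accuracies
print(f"ECE: {expected_calibration_error(results):.3f}")
```

Grouping by a single attribute at a time mirrors the abstract's per-dimension breakdowns (language, cognitive level, domain); a robustness-to-masking check would rerun the same scorer on perturbed question inputs and compare the resulting accuracies.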