nipunsaif/CodeLlama-FT
AI-Powered Competitive Programming Code Generation Tool Leveraging Large Language Models

Abstract

Competitive programming serves as a rigorous benchmark for evaluating algorithmic reasoning, solution design, and implementation efficiency under strict time constraints, thereby offering a valuable testbed for assessing the reasoning capabilities of AI systems. Recent advances in Large Language Models (LLMs), such as AlphaCode, have demonstrated substantial potential in automated code generation. However, these systems continue to face key challenges, including high false positive rates, limited robustness across problem variations, and reduced pedagogical value for learners. In this work, we present an AI-powered competitive programming assistant built on an iterative LLM-driven pipeline that integrates self-reflection, correctness verification, and pedagogical explanation modules. Our approach employs fine-tuned transformer models (e.g., Code LLaMA, StarCoder, Mistral-7B) adapted with Low-Rank Adaptation (LoRA) for parameter-efficient training on the APPS dataset, which comprises 10,000 competitive programming problems. The proposed system operates through a multi-stage process encompassing problem understanding, iterative code synthesis with justification, execution-driven verification, and the generation of step-by-step explanations augmented with annotated code. Designed as both a problem-solving agent and an educational tool, the framework targets 50–60% accuracy on the APPS benchmark while substantially reducing false positive generations compared to existing baselines. Its novelty lies in the seamless integration of self-reflective debugging, execution-guided evaluation, and didactic explanation generation, resulting in a robust, transparent, and pedagogically oriented assistant for competitive programming and algorithmic learning.
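To make the training setup concrete, below is a minimal sketch of LoRA fine-tuning for a Code LLaMA checkpoint on APPS, using the Hugging Face transformers, peft, and datasets libraries. The checkpoint name (codellama/CodeLlama-7b-hf), the dataset identifier (codeparrot/apps), the LoRA hyperparameters, and the prompt format are illustrative assumptions, not this repository's actual configuration.

```python
# Minimal sketch of LoRA fine-tuning on APPS.
# Model name, dataset id, and hyperparameters are illustrative assumptions.
import json

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "codellama/CodeLlama-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small low-rank matrices into the attention projections;
# the ~7B base weights stay frozen, so only the adapters are trained.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# APPS: ~10,000 competitive programming problems with reference solutions.
dataset = load_dataset("codeparrot/apps", split="train", trust_remote_code=True)

def build_example(example):
    # Pair each problem statement with one reference solution
    # ("solutions" is stored as a JSON-encoded list of strings).
    solutions = json.loads(example["solutions"]) if example["solutions"] else [""]
    text = example["question"] + "\n\n# Solution:\n" + solutions[0]
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(build_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="codellama-apps-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

Because only the injected adapters receive gradients, the frozen base model plus LoRA weights can be trained on a single modern GPU, which is the usual motivation for choosing LoRA in this setting.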

About

This repository contains an AI-powered tool that uses LLMs to generate, verify, and explain solutions to competitive programming problems, acting as both a problem-solver and a learning aid.
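The "verify" step above can be read as an execution-guided loop: run the candidate program against the problem's sample tests and accept it only if every output matches, otherwise feed the failure back into regeneration. Below is a minimal sketch under that reading; the helper names, the (input, expected output) test-case format, and the candidate.py path are hypothetical and not taken from this repository.

```python
# Minimal sketch of execution-driven verification (hypothetical helpers).
import subprocess

def run_candidate(source_path: str, stdin_data: str, timeout: float = 2.0) -> str:
    """Execute a candidate Python solution with the given stdin; return its stdout."""
    result = subprocess.run(
        ["python", source_path],
        input=stdin_data, capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout

def verify(source_path: str, test_cases: list[tuple[str, str]]) -> bool:
    """Return True only if the candidate matches the expected output on every test."""
    for stdin_data, expected in test_cases:
        try:
            actual = run_candidate(source_path, stdin_data)
        except subprocess.TimeoutExpired:
            return False  # a timeout counts as a failed test
        if actual.strip() != expected.strip():
            return False
    return True

# Example: two sample tests in (input, expected output) form.
tests = [("1 2\n", "3\n"), ("5 7\n", "12\n")]
if verify("candidate.py", tests):
    print("All sample tests passed; candidate accepted.")
else:
    print("Verification failed; trigger self-reflective regeneration.")
```

A failed verification is what would trigger the self-reflective debugging pass described in the abstract, with the failing test and observed output fed back to the model.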
