https://github.com/copocaneta/code-evaluation-platform/
The AI Code Evaluation Game is an interactive platform that combines coding challenges with real-time AI evaluation using GPT-4. This educational tool helps developers improve their coding skills through immediate, intelligent feedback and features an extensible challenge system for customized learning paths.
```typescript
// Example of the evaluation result structure
interface EvaluationResult {
  id: string;                               // Unique identifier
  timestamp: string;                        // Evaluation timestamp
  status: 'success' | 'error' | 'warning';
  content: string;                          // AI feedback
}
```
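For illustration, a result conforming to this interface can be constructed as follows. This is a minimal sketch; the `buildResult` helper is hypothetical and not part of the project code.

```typescript
interface EvaluationResult {
  id: string;
  timestamp: string;
  status: 'success' | 'error' | 'warning';
  content: string;
}

// Hypothetical helper: wraps raw AI feedback in the result structure.
function buildResult(feedback: string, passed: boolean): EvaluationResult {
  return {
    id: Math.random().toString(36).slice(2), // simple unique-ish id for this sketch
    timestamp: new Date().toISOString(),
    status: passed ? 'success' : 'error',
    content: feedback,
  };
}

const result = buildResult('Clear naming and correct edge-case handling.', true);
```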
```typescript
interface Challenge {
  id: string;              // Unique identifier for the challenge
  title: string;           // Display title
  description: string;     // Challenge requirements
  defaultLanguage: string; // Initial programming language
  defaultCode: string;     // Solution example/template
  initialCode: string;     // Starting code for users
}
```
Challenges can be added by modifying the challengeLoader.ts file:
```typescript
const challenges: Challenge[] = [
  {
    id: "custom-challenge",
    title: "Your Challenge Title",
    description: "Description of what needs to be accomplished",
    defaultLanguage: "python",
    defaultCode: 'def solution():\n    # Example solution\n    pass',
    initialCode: `# Starting point for users\ndef solution():\n    pass`
  }
];
```
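Once registered, a challenge can be retrieved by its id. The sketch below assumes the loader exposes the `challenges` array; the `getChallengeById` helper is illustrative, not part of the project code.

```typescript
interface Challenge {
  id: string;
  title: string;
  description: string;
  defaultLanguage: string;
  defaultCode: string;
  initialCode: string;
}

const challenges: Challenge[] = [
  {
    id: "custom-challenge",
    title: "Your Challenge Title",
    description: "Description of what needs to be accomplished",
    defaultLanguage: "python",
    defaultCode: 'def solution():\n    # Example solution\n    pass',
    initialCode: `# Starting point for users\ndef solution():\n    pass`
  }
];

// Illustrative lookup helper, assuming challenge ids are unique.
function getChallengeById(id: string): Challenge | undefined {
  return challenges.find((c) => c.id === id);
}

const found = getChallengeById("custom-challenge");
```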
```typescript
{
  id: "reducing-verbosity",
  title: "Reducing Verbosity",
  description: "Replace excessive use of loops featuring multiple temporary variables with functional patterns.",
  defaultLanguage: "python",
  defaultCode: '# Example of clean, functional code...',
  initialCode: `# Code that needs refactoring...`
},
{
  id: "singleton-pattern",
  title: "Implement Singleton Pattern",
  description: "Create a thread-safe singleton class with lazy initialization.",
  defaultLanguage: "python",
  defaultCode: '# Example of correct singleton implementation...',
  initialCode: `# Basic class structure to be modified...`
},
{
  id: "optimize-search",
  title: "Search Optimization",
  description: "Optimize the search algorithm to achieve O(log n) complexity.",
  defaultLanguage: "python",
  defaultCode: '# Optimized binary search implementation...',
  initialCode: `# Linear search implementation to be optimized...`
}
```
- Corporate Training
- Academic Settings
- Interview Preparation
```typescript
// Two-phase evaluation approach
const evaluationResponse = await makeOpenAIRequest(
  systemPrompt,
  `Please evaluate this ${language} code:\n\n${code}`
);

const statusResponse = await makeOpenAIRequest(
  "You are a code validator. Respond with ONLY 'PASS' or 'FAIL'.",
  `Does this code meet the requirements?...`
);
```
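The validator's reply from the second phase can then be mapped onto the result status. This is a sketch assuming the model really does answer with a bare PASS or FAIL string; the `statusFromValidator` helper is illustrative.

```typescript
type Status = 'success' | 'error' | 'warning';

// Map the validator's PASS/FAIL reply to a result status.
// An unexpected reply is surfaced as a warning rather than a hard failure.
function statusFromValidator(reply: string): Status {
  const normalized = reply.trim().toUpperCase();
  if (normalized === 'PASS') return 'success';
  if (normalized === 'FAIL') return 'error';
  return 'warning';
}
```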
```sql
CREATE TABLE completed_challenges (
  user_id TEXT,
  challenge_id TEXT,
  completed_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  PRIMARY KEY (user_id, challenge_id)
);
```
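Recording a completion against this table can be done with an idempotent upsert, so re-submitting a solved challenge does not fail on the primary key. This is a sketch assuming PostgreSQL (suggested by the `TIMESTAMP WITH TIME ZONE` column) and positional query parameters:

```sql
INSERT INTO completed_challenges (user_id, challenge_id)
VALUES ($1, $2)
ON CONFLICT (user_id, challenge_id) DO NOTHING;
```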
- Clear Descriptions
- Progressive Difficulty
- Educational Value
```typescript
{
  id: "unique-identifier",
  title: "Descriptive Title",
  description: "Clear, detailed requirements",
  defaultLanguage: "preferred-language",
  defaultCode: "// Example solution or template",
  initialCode: "// Starting point for users"
}
```
The AI Code Evaluation Game represents a novel approach to coding education, combining artificial intelligence with gamification to create an engaging learning experience. Its extensible challenge system makes it adaptable for various training scenarios, from corporate training to academic education.
This project was developed for the Ready Tensor AI Competition, showcasing the potential of AI in educational technology.