This project presents a multi-agent system that evaluates a candidate's public GitHub profile against a provided job description (JD). Leveraging LangGraph for agent orchestration, the system automates the extraction of required skills from JDs, analyzes GitHub repositories for tech stack compatibility and activity, and generates a comprehensive evaluation report. The approach demonstrates the power of agentic workflows in automating technical profile screening and recommendation tasks.
Technical hiring often requires manually screening candidates' open-source contributions to assess their fit for a role. This project automates that process by combining:

- LLM-based extraction of required skills from the job description
- Analysis of the candidate's GitHub repositories for tech-stack compatibility and recent activity
- LangGraph-based orchestration of the agents into a single evaluation pipeline
The system is composed of four main agents, each responsible for a distinct stage in the evaluation pipeline:
| Agent Name | Role & Functionality |
|---|---|
| JD Analyzer Agent | Extracts programming languages and skills from the job description using an LLM, with a static keyword-matching fallback (see the sketch after this table). |
| Repo Match Agent | Matches candidate's GitHub repositories to required skills. |
| Activity Agent | Analyzes commit activity in relevant repositories over the past year. |
| Evaluation Agent | Scores the match and generates a human-readable evaluation report. |
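
A minimal sketch of the JD Analyzer's two-tier extraction, assuming an OpenAI-style chat client. The function names, the prompt, and the keyword vocabulary below are illustrative assumptions, not the project's actual implementation:

```python
import re

# Hypothetical static skill vocabulary used as the fallback.
KNOWN_SKILLS = ["python", "javascript", "typescript", "go", "rust",
                "langgraph", "langchain", "docker", "kubernetes"]

def extract_skills_static(jd_text: str) -> list[str]:
    """Fallback: case-insensitive keyword matching against a fixed vocabulary."""
    return [s for s in KNOWN_SKILLS
            if re.search(rf"\b{re.escape(s)}\b", jd_text, re.IGNORECASE)]

def extract_skills(jd_text: str) -> list[str]:
    """Try the LLM first; fall back to static matching on any failure."""
    try:
        from openai import OpenAI  # assumption: an OpenAI-style client
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "List the programming languages and skills "
                                  "required by this job description, "
                                  "comma-separated:\n" + jd_text}],
        )
        return [s.strip().lower()
                for s in resp.choices[0].message.content.split(",")]
    except Exception:
        return extract_skills_static(jd_text)
```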
The workflow is orchestrated as follows:
JD Analyzer → Repo Match → Activity → Evaluation → Report
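
This linear pipeline maps directly onto a LangGraph `StateGraph`. The sketch below shows the wiring; the state fields and node bodies are placeholder assumptions standing in for each agent's real logic:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class EvalState(TypedDict, total=False):
    jd_text: str            # input: raw job description
    required_skills: list   # filled by jd_analyzer
    matched_repos: list     # filled by repo_match
    active_repos: int       # filled by activity
    report: str             # filled by evaluation

def jd_analyzer(state: EvalState) -> dict:
    # Placeholder: would call the LLM/static extractor sketched earlier.
    return {"required_skills": ["python", "javascript"]}

def repo_match(state: EvalState) -> dict:
    return {"matched_repos": []}   # placeholder

def activity(state: EvalState) -> dict:
    return {"active_repos": 0}     # placeholder

def evaluation(state: EvalState) -> dict:
    return {"report": "..."}       # placeholder

graph = StateGraph(EvalState)
graph.add_node("jd_analyzer", jd_analyzer)
graph.add_node("repo_match", repo_match)
graph.add_node("activity", activity)
graph.add_node("evaluation", evaluation)

graph.add_edge(START, "jd_analyzer")
graph.add_edge("jd_analyzer", "repo_match")
graph.add_edge("repo_match", "activity")
graph.add_edge("activity", "evaluation")
graph.add_edge("evaluation", END)

app = graph.compile()
result = app.invoke({"jd_text": "Senior AI/Python Engineer ..."})
```

Each node returns only the state keys it updates; LangGraph merges those updates into the shared state as the graph runs.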
A hiring manager provides a job description for a Senior AI/Python Engineer. The system runs the full pipeline and produces output like the following:
```text
=== JD vs GitHub Profile Evaluator ===
=== Evaluation Report ===
Found 3 relevant repositories.
Tech stack match: Python, JavaScript
Active repositories in the last year: 2
Score: 0.85
--- Human-Readable Evaluation ---
Candidate's open-source work matches 85.0% of the required tech stack.
Excellent match.
```
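
A minimal sketch of how such a score and verdict could be derived: the fraction-of-required-skills formula and the thresholds below are assumptions for illustration, not taken from the project:

```python
def score_match(required: list[str], matched: list[str]) -> float:
    """Assumed formula: |matched ∩ required| / |required|."""
    if not required:
        return 0.0
    covered = {s.lower() for s in matched} & {s.lower() for s in required}
    return len(covered) / len(required)

def verdict(score: float) -> str:
    # Hypothetical thresholds for the human-readable label.
    if score >= 0.8:
        return "Excellent match."
    if score >= 0.5:
        return "Good match."
    return "Weak match."

# Example: 3 of 4 required skills covered -> 0.75 -> "Good match."
print(verdict(score_match(["python", "javascript", "docker", "kubernetes"],
                          ["Python", "JavaScript", "Docker"])))
```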
For questions or collaboration, please contact the project maintainer.