Streamline your job search and application process with JobTracker, a free command-line tool that helps you organize and manage your job applications using a local database.
If you're a meticulous, detail-oriented person who is tired of tracking applications in a spreadsheet, JobTracker is for you.
I started this project out of consistent frustration with employers taking down job postings before I could refer back to them; JobTracker keeps a local copy of every description. It has since also become a great entry project for anyone interested in learning how to build AI applications.
Clone the repository:
```sh
git clone https://github.com/luisdavidgarcia/JobTracker.git
cd JobTracker
```
Install Poetry if you do not have it already. Be sure to read the installer's output and follow its instructions for making the `poetry` command accessible.
Install the dependencies using Poetry:
```sh
poetry install
```
Install Ollama and ensure it is running.
Install Docker Desktop on your system and ensure it is running.
Create a .env file in the JobTracker/ directory containing your PostgreSQL database information. Below is an example:
```sh
POSTGRES_USER=your_username
POSTGRES_PASSWORD=your_password
POSTGRES_DB=your_database_name
DB_HOST=localhost
DB_PORT=5432
DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DB_HOST}:${DB_PORT}/${POSTGRES_DB}"
```
Once you have all these components set up, your database and JobTracker will start running with the following commands:
Turn on your database with Docker Compose:
```sh
docker compose up -d --build
```
This command will build and start the database container in detached mode.
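For the curious, the Compose file behind this step might look something like the following hypothetical sketch of a single PostgreSQL service wired to the `.env` variables (the repository's actual `docker-compose.yml` may differ):

```yaml
services:
  db:
    image: postgres:16
    env_file: .env
    ports:
      - "${DB_PORT}:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```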
Copy the job description to your clipboard.
Run the JobTracker script:
```sh
poetry run python job_tracker/main.py --schema init.sql
```
Or, if you have activated a Poetry shell (`poetry shell`), you can run it directly:
```sh
python job_tracker/main.py --schema init.sql
```
Follow the interactive prompts to confirm the extracted information and add the job to your database.
You can incorporate your own schemas, but you will need to edit the LangChain prompt template so that the model produces JSON matching your columns.
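For reference, a schema passed via `--schema` might look like this hypothetical `init.sql` (the table and column names are illustrative; whatever columns you define here must also appear in the prompt template):

```sql
CREATE TABLE IF NOT EXISTS jobs (
    id SERIAL PRIMARY KEY,
    company TEXT,
    position TEXT,
    location TEXT,
    salary_range TEXT,
    original_description TEXT,
    created_at TIMESTAMP
);
```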
To get started, navigate to the job posting on the website and copy the application details.
Once you run the script, it will automatically generate this information as a new entry.
You can then review and verify the details before adding them to the database.
The overall structure of JobTracker is shown in the figure below:
![JobTracker architecture diagram]()
The first step is finding a job posting (any site will do) and copying the job description to your clipboard. Then, with the JobTracker program running and the Docker Compose stack up, press the hotkey set in JobTracker/main.py. The defaults are:
```python
SAVE_JOB_DESCRIPTION_HOTKEY = "<ctrl>+<alt>+s"
QUIT_HOTKEY = "<ctrl>+<alt>+x"
```
This makes it very convenient to add new jobs without having to switch tabs all the time.
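The hotkey strings above follow pynput's angle-bracket syntax. As a rough illustration of how such a string decomposes into modifiers plus a final key (this is a toy parser, not pynput's actual one):

```python
SAVE_JOB_DESCRIPTION_HOTKEY = "<ctrl>+<alt>+s"


def parse_hotkey(hotkey: str) -> tuple[frozenset[str], str]:
    """Split a pynput-style hotkey string into (modifiers, key).

    "<ctrl>+<alt>+s" -> (frozenset({"ctrl", "alt"}), "s")
    """
    # Split on '+' and strip the angle brackets around modifier names.
    parts = [p.strip("<>") for p in hotkey.split("+")]
    *modifiers, key = parts
    return frozenset(modifiers), key


print(parse_hotkey(SAVE_JOB_DESCRIPTION_HOTKEY))
```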
After pressing the hotkey, your terminal will display the LLM-generated query for the database. Review it to ensure it meets the standards and schema you have defined; I opted for this approach because it is always best to verify LLM output when you can.
If you dislike the query, enter n for "no" to reject it. You can then retry by pressing the hotkey again, or copy a new job description first. Once you enter y to accept the query, and it executes successfully, the entry is saved to your database.
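The accept/reject step boils down to a small confirmation loop. A simplified sketch (not the actual JobTracker code; the `ask` parameter is injected so the example is self-contained):

```python
def review_query(query: str, ask=input) -> bool:
    """Show the LLM-generated query and ask the user to accept or reject it.

    Returns True when the user types 'y'; anything else rejects the query
    so it can be regenerated from a fresh clipboard copy.
    """
    print("Generated query:")
    print(query)
    return ask("Save this entry to the database? [y/n] ").strip().lower() == "y"


# Usage with a canned answer instead of a live prompt:
accepted = review_query("INSERT INTO jobs (company) VALUES ('Acme');",
                        ask=lambda _: "y")
print(accepted)  # True
```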
If you prefer to view your database with visual tools, I recommend DBeaver, which is helpful for indexing and managing your entries later on.
I opted for Llama 3.3 (70B parameters), but feel free to select your own model by specifying it with the --model argument. If you're interested in learning more about the backend of this system, start with the JobTracker/job_tracker/utils/database_helpers.py file, which contains the core logic.
In this file, I leverage LangChain within the _analyze_job_description function. You create a "template," which is essentially a predefined prompt for the model, specifying how it should interact with the given job application data.
Based on the schema you've created, the model parses the job description and populates every field for a new entry. If you find your model parsing poorly, the best remedy is to increase num_ctx in powers of two, i.e., 4096 * 2^n, where n is the scaling factor. I've noticed great results with n set to 2, giving a num_ctx of 16,384. Keep in mind, however, that a larger num_ctx makes the model use more physical memory on your device.
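In other words, the context window grows as 4096 * 2^n (the helper name below is just for illustration):

```python
def scaled_num_ctx(n: int, base: int = 4096) -> int:
    """Context window scaled by powers of two; n = 0 keeps the 4096 default."""
    return base * 2 ** n


print(scaled_num_ctx(2))  # 16384
```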
````python
import json
from datetime import datetime

from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import OllamaLLM


def _analyze_job_description(
    description: str,
    schema: str,
    model: str = "llama3.2",
    num_ctx: int = 4096,
) -> dict:
    template = """
    You are a helpful assistant designed to parse job descriptions and extract
    relevant information for a job application tracker. Your output MUST be a
    valid JSON object CONFORMING to the PROVIDED schema. If a field cannot be
    extracted, its value in the JSON MUST be null. Do not include any
    explanation or commentary outside the JSON object.

    Database Schema:
    ```sql
    {schema}
    ```

    Job Description:
    ```
    {description}
    ```

    Instructions:
    1. **Direct Extraction:** Extract the company name and position title
       directly from the job description. If a piece of information is
       explicitly provided, use *that exact wording* if possible. Otherwise
       leave it as null.

    Output (DON'T MAKE KEYS OUTSIDE OF THE SCHEMA):
    ```json
    """
    prompt = ChatPromptTemplate.from_template(template)
    llm = OllamaLLM(model=model, num_ctx=num_ctx)
    chain = prompt | llm

    try:
        response = chain.invoke({"description": description, "schema": schema})
        json_response = json.loads(response)
        json_response["original_description"] = description
        json_response["created_at"] = datetime.now().isoformat()
        return json_response
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON: {e}")
        print(f"Raw response: {response}")
        return {}
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return {}
````
For the complete source code, please visit: JobTracker GitHub