Note that, by the nature of this project, progress depends heavily on iteration; no single plan is straightforward for this type of problem.
Welcome to the Kaggle Problem Solver, the Swiss Army knife of machine learning challenges! This isn't just any old problem solver: it's your AI-powered companion in the wild world of Kaggle competitions. Using a "plan and execute" strategy that would make any project manager jealous, our system tackles ML problems with the finesse of a seasoned data scientist and the tireless energy of a thousand interns. The code generation agent is inspired by the LangGraph agent (link).
It's like a never-ending dance party, but with more algorithms and less awkward small talk.
Behold, the pièce de résistance of our project: the Agent Graph!
```mermaid
graph TB
    %% Define styles
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style H fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:1px
    style C fill:#cfc,stroke:#333,stroke-width:1px
    style D fill:#fcc,stroke:#333,stroke-width:1px
    style E fill:#ffc,stroke:#333,stroke-width:1px
    style F fill:#ccf,stroke:#333,stroke-width:1px
    style G fill:#fcf,stroke:#333,stroke-width:1px

    A((Start)) --> B[Scraper]
    B --> G[Data Utils]
    G --> D[Planner]
    D --> F[Enhancer]
    F --> I
    H((Finish))

    subgraph Code_Agent_Process [Code Agent Process]
        style Code_Agent_Process fill:#cfc,stroke:#333,stroke-width:1px
        I((Start))
        J[Generate Code]
        K{Ran Error Free?}
        L((Finish))
        M[Reflect On Error]
        I --> J
        J --> K
        K -- Yes --> L
        K -- No --> M
        M --> J
    end

    %% Link the main process to subgraph
    L -->|Returns| E[Executor]

    %% Annotations
    classDef annotation fill:#fff,stroke:none,color:#333,font-size:12px;
    class B,G,D,F,C,E annotation;

    %% Annotating Feedback Loops
    E -. Feedback Loop .-> F
    E -. Completion .-> H
```
This isn't just any graph; it's a visual symphony of our agents working in harmony. Watch as data flows through our system like a well-choreographed ballet of bits and bytes!
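For readers who want to see how such an agent loop translates into code, here is a minimal, hypothetical LangGraph sketch of the Code Agent Process subgraph (generate code, check whether it ran error free, reflect on the error, retry). The `CodeState` fields, node names, and placeholder functions are illustrative assumptions, not the project's actual implementation.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class CodeState(TypedDict):
    # Hypothetical state schema; the real agent tracks more than this.
    task: str
    code: str
    error: str


def generate_code(state: CodeState) -> CodeState:
    # Placeholder: ask the LLM for code, execute it, and record any error.
    return {**state, "code": "# code for: " + state["task"], "error": ""}


def reflect_on_error(state: CodeState) -> CodeState:
    # Placeholder: ask the LLM to analyze the failure before retrying.
    return state


def ran_error_free(state: CodeState) -> str:
    # Route to Finish when there is no error, otherwise to reflection.
    return "yes" if not state["error"] else "no"


workflow = StateGraph(CodeState)
workflow.add_node("generate", generate_code)
workflow.add_node("reflect", reflect_on_error)
workflow.set_entry_point("generate")
workflow.add_conditional_edges("generate", ran_error_free, {"yes": END, "no": "reflect"})
workflow.add_edge("reflect", "generate")

code_agent = workflow.compile()
result = code_agent.invoke({"task": "train a baseline model", "code": "", "error": ""})
```

In the full pipeline, the result of this subgraph is handed to the Executor node, which feeds back into the Enhancer until the challenge is complete.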
Clone this repo faster than you can say "git":
```bash
git clone https://github.com/msnp1381/kaggle-agent.git
```
Start the required services using Docker Compose:
```bash
docker-compose up -d
```
Install Poetry if you haven't already:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
Set up the Python environment:
```bash
poetry install
```
Configure the project:
Copy the `.env.template` file to `.env`:

```bash
cp .env.template .env
```

Open the `.env` file and fill in the required environment variables.

Review and update the `config.ini` file if necessary.
Run the main script:
```bash
poetry run python main.py
```
The Kaggle Problem Solver can be customized using the `config.ini` file. This file allows you to adjust various settings without modifying the code directly. Here's how you can change the configuration:
Open the `config.ini` file in a text editor.
Modify the values as needed. Here are some key sections and their purposes:
```ini
[General]
recursion_limit = 50  # Set the maximum recursion depth
```
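As a rough illustration of how such settings might be consumed, the snippet below reads `recursion_limit` with Python's standard `configparser`. The section and option names follow the example above; exactly how the project loads its configuration is an assumption.

```python
import configparser

# Read config.ini (path assumed to be the repository root).
config = configparser.ConfigParser(inline_comment_prefixes=("#",))
config.read("config.ini")

# Fall back to a default if the option is missing.
recursion_limit = config.getint("General", "recursion_limit", fallback=50)
print(f"Recursion limit: {recursion_limit}")
```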
Our Kaggle Problem Solver comes equipped with a sophisticated memory system that acts as the brain of our AI, allowing it to learn, adapt, and make informed decisions throughout the problem-solving process. Here's how it works:
- **Short-Term Memory**
- **Long-Term Memory**
- **Examples Memory**
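As a quick orientation before the method-by-method walkthrough below, here is a short sketch of how the three memory tiers might be exercised using the memory agent methods documented in this section; the specific strings and arguments are illustrative assumptions.

```python
# `memory_agent`, `task`, `code`, and `result` come from the surrounding pipeline.

# Short-term memory: keep recent, high-priority context at hand.
memory_agent.add_to_short_term_memory("Target metric is RMSE", importance=1.5)

# Long-term memory: store documents and search them semantically.
doc_id = memory_agent.add_document("Challenge overview text", "challenge_doc", {"source": "scraper"})
relevant_docs = memory_agent.search_documents("evaluation metric", doc_type="challenge_doc")

# Examples memory: record executed tasks and reuse them as few-shot examples.
memory_agent.add_example(task, code, result)
few_shot_examples = memory_agent.get_few_shots(task, n=4)
```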
The memory agent continuously updates a summary of the project's progress:
```python
updated_summary = memory_agent.update_summary(task, code, result)
```
Combines short-term and long-term memory for informed responses:
```python
answer = memory_agent.ask("What are the key points of the challenge?")
```
Finds relevant information based on meaning, not just keywords:
```python
relevant_docs = memory_agent.search_documents("AI advancements", doc_type="tech_report")
```
Retrieves similar examples to guide new task executions:
```python
few_shot_examples = memory_agent.get_few_shots(task, n=4)
```
Adds and retrieves documents with metadata:
```python
doc_id = memory_agent.add_document("Document content", "doc_type", {"metadata": "value"})
document = memory_agent.load_document(doc_id)
```
Adds important information to short-term memory with priority:
```python
memory_agent.add_to_short_term_memory("Important info", importance=1.5)
```
```python
# Retrieve relevant examples
few_shot_examples = memory_agent.get_few_shots(current_task, n=4)

# Access documentation
relevant_docs = memory_agent.search_documents(query, doc_type="documentation")

# Maintain context
memory_agent.add_to_short_term_memory(f"Generated code: {code}", importance=1.5)

# Add executed task to examples
memory_agent.add_example(task, code, result)
```
```python
# Initialize document retrieval
memory_agent.init_doc_retrieve()

# Access challenge information
challenge_info = memory_agent.ask(f"What are the key points of the {challenge_name} challenge?")
```
```python
# Retrieve relevant context
relevant_context = memory_agent.ask_docs(current_task)

# Add enhanced task to memory
memory_agent.add_to_short_term_memory(str(enhanced_task))
```
By leveraging this powerful memory system across all components, our Kaggle Problem Solver becomes more than just a code generator; it's a learning, adapting, and evolving AI partner in your machine learning journey!