Developer Inspiration Assistant is an open-source AI tool that helps developers discover and draw inspiration from award-winning projects on ReadyTensor. Using RAG with Llama-3.3-70B (Groq) and Chroma, it supports queries like:
`tag "Best Overall Project"`
It returns up to 5 matching publications with full context.
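Before retrieval, a query in the tag syntax above can be split into an award filter versus free text. This is an illustrative sketch; the regex, function name, and return shape are assumptions, not the project's actual parser:

```python
import re

def parse_query(q: str) -> dict:
    """Split a query into an award filter (tag "...") or plain free text."""
    m = re.match(r'tag\s+"([^"]+)"', q.strip())
    if m:
        return {"award": m.group(1)}
    return {"text": q.strip()}

parse_query('tag "Best Overall Project"')  # {"award": "Best Overall Project"}
```

Keeping the award name structured makes it easy to apply exact or fuzzy matching on the `awards` metadata later in the pipeline.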
ReadyTensor hosts hundreds of high-quality AI/ML publications, but finding the award-winning ones — and understanding why they stand out — is time-consuming.
Developer Inspiration Assistant solves this by indexing every publication with all-MiniLM-L6-v2 embeddings and retrieving award winners semantically.

Goal: Turn ReadyTensor into a dynamic inspiration engine for developers.
The Developer Inspiration Assistant follows a three‑stage pipeline to deliver fast, accurate, and inspiration‑rich search over ReadyTensor publications.
The first step is to gather all publication data from ReadyTensor. A web scraper visits the public publications page, extracts structured metadata (title, ID, description, awards, username, license), and saves it locally for offline processing.
This guarantees a complete, up‑to‑date snapshot of every project.
```python
# scraper.py
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://app.readytensor.ai/publications")
    # Extract: title, ID, description, awards, username, license
    # Save to: data/readytensor_publications.json
    browser.close()
```
Output: `data/readytensor_publications.json`
File: `scraper.py`
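For illustration, one scraped record could be stored as a JSON object like the following. The field names follow the metadata listed above; the values and the simplified output path are hypothetical:

```python
import json

# Hypothetical record; field names follow the metadata listed above.
record = {
    "title": "Example Publication",
    "id": "rt-00001",
    "description": "Short project description.",
    "awards": ["Best Overall Project"],
    "username": "example_user",
    "license": "MIT",
}

# Simplified path for the sketch; the project writes to data/readytensor_publications.json.
with open("readytensor_publications.json", "w", encoding="utf-8") as f:
    json.dump([record], f, indent=2)
```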
After the raw data is collected, it is pre‑processed and embedded into a semantic vector space. Text is split into manageable chunks, each chunk is transformed into a dense vector with a lightweight embedding model, and the vectors are stored in a persistent vector database.
This step enables fast semantic retrieval and fuzzy matching on award names (e.g., “Best Overall” ≈ “Best Overall Project”).
```python
# ingest.py
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vectorstore = Chroma(
    persist_directory="chroma_db",
    embedding_function=embeddings,
)
# Load JSON → chunk → embed → store
```
Output: `chroma_db/` (persistent vector store)
Embedding Model: `all-MiniLM-L6-v2`
File: `ingest.py`
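The chunking step can be sketched in pure Python. The chunk size and overlap values here are illustrative, and the project may use a LangChain text splitter instead; overlapping windows keep context that falls on a chunk boundary retrievable from both sides:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```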
When a user submits a query (e.g., tag "Most Innovative Project"), the system retrieves relevant chunks from the Chroma vector store, fuzzy-matches the requested award name, and passes the retrieved context to the LLM to compose an answer. The final response lists up to 5 matching projects with title, ID, awards, and a short snippet.
```python
# RAG chain in app.py / assistant.py
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_groq import ChatGroq

retriever = vectorstore.as_retriever(search_kwargs={"k": 500})
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatGroq(model="llama-3.3-70b-versatile")
    | StrOutputParser()
)
```
Output: Up to 5 projects (title, ID, awards, snippet)
LLM: `llama-3.3-70b-versatile` via Groq API
Files: `app.py`, `assistant.py`
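Capping the answer at 5 projects can happen after retrieval. A minimal sketch, assuming a list-of-dicts document shape with an `awards` field; the helper name and matching rule are illustrative, not the project's actual code:

```python
def top_matches(docs: list[dict], award_query: str, limit: int = 5) -> list[dict]:
    """Keep documents whose awards mention the query, capped at `limit`."""
    matches = [
        d for d in docs
        if award_query.lower() in " ".join(d.get("awards", [])).lower()
    ]
    return matches[:limit]
```

Retrieving broadly (k=500) and filtering down afterwards trades a little latency for high award recall.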
Pipeline Flow
scraper.py → JSON → ingest.py → Chroma → app.py/assistant.py → Llama‑3.3‑70B → Answer
All code: GitHub
Tech stack: `all‑MiniLM‑L6‑v2` embeddings · `llama‑3.3‑70b‑versatile` (Groq)

| Query | Expected | Result |
|---|---|---|
| tag "Best Overall Project" | Top 5 winners | 100% recall |
| most innovative project | Innovation winners | 5 matches |
| best technical implementation | Technical deep‑dives | 4 matches |
| nonexistent award | No results | "Not enough info" |
Tested on 15 award categories.
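The fuzzy matching behaviour shown above (e.g., "Best Overall" ≈ "Best Overall Project") can be approximated with the standard library. This sketch uses `difflib`; the threshold value is an assumption, not the project's tuned setting:

```python
from difflib import SequenceMatcher

def fuzzy_match(query: str, award: str, threshold: float = 0.7) -> bool:
    """True when the similarity ratio between query and award clears the threshold."""
    ratio = SequenceMatcher(None, query.lower(), award.lower()).ratio()
    return ratio >= threshold
```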
| Metric | Value | 
|---|---|
| Award Recall | 100% | 
| Response Time | < 2 sec | 
| Max Projects Returned | 5 | 
| Fuzzy Matching Accuracy | 95%+ | 
| Interface | Streamlit + CLI | 
```
Title: AI‑Powered Medical Diagnosis
ID: rt‑12345
Awards: Best Overall Project | Best Technical Implementation
Content: This project uses multimodal RAG to...
```
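A retrieved record like the one above could be formatted with a small helper. The field names and truncation length are assumptions for illustration:

```python
def render(doc: dict) -> str:
    """Format one publication record for display."""
    return (
        f"Title: {doc['title']}\n"
        f"ID: {doc['id']}\n"
        f"Awards: {' | '.join(doc['awards'])}\n"
        f"Content: {doc['content'][:80]}"
    )
```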
Users can click through to the original publication and replicate it using the linked code and datasets.
Developer Inspiration Assistant transforms ReadyTensor into a real‑time inspiration engine.
Discover → Replicate → Innovate