Ever feel like your fridge is a black hole where perfectly good food goes to disappear? 🕳️ We've all been there – discovering that forgotten head of lettuce or that container of leftovers that's now a science experiment.
Food waste is a huge problem, not just for our wallets 💸, but for the planet. 🌎 Globally, roughly one-third of all food produced for human consumption is wasted, according to the Food and Agriculture Organization of the United Nations (FAO).
Imagine the resources – water 💧, land 🏞️, energy ⚡ – that go into producing food that just ends up in the trash! 🗑️ If we all took small steps to reduce our food waste, we could make a real difference in terms of environmental impact and even food security.
Think about it: less waste means less strain on our resources, and potentially, more food available for those who need it. 🙏
Last summer I became a dad! And in those precious moments when my son was napping 😴, I found myself strangely energized and inspired, so I decided to tackle this food waste experiment head-on.
That's how this project, born out of late-night coding sessions 💻 and stolen moments, came to life.
Honestly, while it's working in my fridge right now 🧊, I know there are still improvements to be made. But it's a start. 🚀
So, how does it work? 🤔 Well, that brings us to the methodology...
NOTE: I include code snippets below, but for a more compact view and a better understanding, please head over to my GitHub repo to get the full picture!
The methodology behind "Non-Waste-Ingredients" involves a multi-pronged approach combining image recognition, a knowledge graph, and an AI-powered recommendation system.
First, users register their groceries using a Streamlit-based web application. This can be done in one of three ways:
- By scanning barcodes with a dedicated reader (I just bought one on Amazon).
- By capturing images of the products with any connected device. A pre-trained YOLOv10 object detection model, fine-tuned on 70 common food categories (including items like 'olive', 'avocado', 'garlic', 'egg', etc.), identifies the items in the images (see the sketch right after this list).
The dataset was labeled with Roboflow and then used to fine-tune YOLOv10.
It can sometimes get confused, though 😆
- By hand.
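To give you a feel for the image-recognition path, here's a minimal sketch using the ultralytics package. The weights filename and the confidence threshold are placeholders of mine — the actual values live in the repo:

```python
# Minimal sketch: detect food items in a photo with a fine-tuned YOLOv10 model.
# "food_yolov10.pt" and the 0.5 confidence threshold are placeholders.
from ultralytics import YOLO

model = YOLO("food_yolov10.pt")  # fine-tuned on ~70 food categories

results = model("fridge_photo.jpg", conf=0.5)
for r in results:
    for box in r.boxes:
        label = model.names[int(box.cls)]
        print(f"Detected: {label} (confidence {float(box.conf):.2f})")
```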
This registration process creates a personal food inventory, stored in a food_db.json file.
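For reference, this is roughly the kind of entry I'd expect to land in food_db.json — the field names here are illustrative, so check the repo for the real schema:

```python
import json
from datetime import date

# Illustrative inventory entry; the actual schema lives in the repo.
inventory = {
    "avocado": {
        "quantity": 2,
        "registered_on": date.today().isoformat(),
        "state": "fresh",          # updated later via the "Synchronize State" tab
        "nutrition_per_100g": {},  # filled in from the APIs (next step)
    }
}

with open("food_db.json", "w") as f:
    json.dump(inventory, f, indent=2)
```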
Next, nutritional information for each registered item is retrieved. The system uses the Open Food Facts and Edamam APIs to gather details about the products based on the scanned barcodes. This data includes nutritional values (carbs, proteins, fats, fiber, etc. per 100g), which are crucial for recipe generation.
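As a rough illustration, here's what a lookup against the Open Food Facts API can look like (the Edamam call is analogous but requires API keys). Which fields get extracted is my assumption:

```python
import requests

def fetch_nutrition(barcode: str) -> dict:
    """Look up a product on Open Food Facts and return nutrients per 100g."""
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    product = resp.json().get("product", {})
    nutriments = product.get("nutriments", {})
    # Keys like "carbohydrates_100g" are standard Open Food Facts fields.
    return {
        "carbs": nutriments.get("carbohydrates_100g"),
        "proteins": nutriments.get("proteins_100g"),
        "fats": nutriments.get("fat_100g"),
        "fiber": nutriments.get("fiber_100g"),
    }

print(fetch_nutrition("737628064502"))  # example barcode
```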
The core of the system is the "Chef Assistant," which leverages a Retrieval Augmented Generation (RAG) system powered by an AI agentic workflow. This system is designed to suggest daily dishes based on the user's oldest ingredients, minimizing food waste. It uses a Milvus vector database, populated with recipe embeddings generated from a sample recipe book. These embeddings are created using extract_embeddings.py and stored in the Milvus database using milvus_vector_db.py. The RAG system retrieves relevant recipes from the Milvus database based on the available ingredients and then uses an AI agent (powered by a Large Language Model via the Groq API) to generate a specific recipe tailored to the user's oldest items.
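Here's a condensed sketch of that retrieve-then-generate loop. The collection name, the embedding model, and the Groq model are assumptions on my part — extract_embeddings.py and milvus_vector_db.py in the repo hold the real setup:

```python
# Sketch of the RAG loop: embed the oldest ingredients, retrieve the closest
# recipes from Milvus, then ask the LLM to adapt one. Names are placeholders.
from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer
from groq import Groq

embedder = SentenceTransformer("all-MiniLM-L6-v2")
milvus = MilvusClient("milvus_demo.db")  # local Milvus Lite file
llm = Groq()                             # reads GROQ_API_KEY from the environment

oldest = ["spinach", "eggs", "feta"]     # oldest items in the pantry
query_vec = embedder.encode(", ".join(oldest)).tolist()

# Retrieve the recipes closest to the available ingredients.
hits = milvus.search(
    collection_name="recipes",
    data=[query_vec],
    limit=3,
    output_fields=["text"],
)
context = "\n\n".join(h["entity"]["text"] for h in hits[0])

# Ask the LLM to adapt a retrieved recipe to the oldest ingredients.
completion = llm.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {"role": "system", "content": "You are a chef assistant that minimizes food waste."},
        {"role": "user", "content": f"Using these recipes:\n{context}\n\n"
                                    f"Suggest one dish that uses: {', '.join(oldest)}."},
    ],
)
print(completion.choices[0].message.content)
```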
Here is the GitHub repo with the instructions to make it yours: GitHub Repo
I also have to mention the solution for streaming video from any device with Streamlit, which I adapted from whitphx's work.
The result of this project is a functional Streamlit application with several key features designed to minimize food waste. The application provides a user-friendly interface divided into distinct tabs. The "Register" section allows users to easily add items to their virtual pantry using either barcode scanning or image recognition.
In my case, I mounted a screen on my fridge door with four magnets, so I can use the app while I load groceries with the barcode reader.
A Raspberry Pi with its own touchscreen could also work.
A "Synchronize State" tab helps users manage the freshness of their ingredients. By inputting the number of days since purchase, users can update the state of their items, indicating whether they are still good to use or nearing expiration. This information is crucial for the AI agent to make informed decisions about recipe suggestions.
The "Batch Cooking" tab generates a cooking plan based on user-defined parameters, helping users prepare larger batches of food while prioritizing the use of older ingredients.
The most compelling feature is the "Daily Dish" tab. Here, the AI-powered "Chef Assistant" truly shines. Users can specify dietary preferences and other constraints, and the system generates a daily dish recommendation based on the oldest ingredients in their pantry. The AI agent not only suggests a recipe but also cleverly adapts existing recipes to incorporate the user's available ingredients, "recycling" items that might otherwise go to waste. This feature is a game-changer for reducing food waste and inspiring creative cooking.
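To make that concrete, here's the kind of prompt the agent might receive. The wording and variable names are mine, not the repo's exact prompt:

```python
# Illustrative prompt assembly for the "Daily Dish" tab.
def build_daily_dish_prompt(oldest_items, preferences, retrieved_recipes):
    return (
        f"Dietary preferences: {', '.join(preferences)}.\n"
        f"These ingredients are the oldest and must be used first: "
        f"{', '.join(oldest_items)}.\n\n"
        f"Candidate recipes:\n{retrieved_recipes}\n\n"
        "Adapt one of the candidates (or combine them) into a single daily "
        "dish that recycles the oldest ingredients. List steps and portions."
    )

prompt = build_daily_dish_prompt(
    ["spinach", "eggs"], ["vegetarian"], "1. Spinach frittata...\n2. ..."
)
```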
While the current version is functional, there are several avenues for future improvement:
- The YOLO object detection model could be refined by reducing the number of categories, minimizing noise and improving accuracy. 📸
- Exploring other Large Language Models, such as DeepSeek R1, could potentially enhance the quality and creativity of the recipe suggestions. 💡
- The "Batch Cooking" section could benefit from a more sophisticated, agentic workflow similar to the "Daily Dish" feature, rather than relying solely on prompts.
- The current method for updating the state of ingredients could be made more sophisticated, perhaps by integrating with smart fridge technology or incorporating more nuanced criteria for food spoilage. ❄️
- Finally, the system should be updated to automatically modify the state of ingredients as they are used in recipes, ensuring the inventory remains accurate (something like the sketch below).
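A first pass at that last point could be as simple as this sketch — again, the field names are illustrative:

```python
import json

# Sketch of the proposed improvement: once a recipe is cooked, remove its
# ingredients from the inventory automatically.
def consume_ingredients(used: dict[str, int], path: str = "food_db.json") -> None:
    with open(path) as f:
        inventory = json.load(f)
    for name, qty in used.items():
        if name in inventory:
            inventory[name]["quantity"] -= qty
            if inventory[name]["quantity"] <= 0:
                del inventory[name]  # fully used up, drop it from the pantry
    with open(path, "w") as f:
        json.dump(inventory, f, indent=2)

consume_ingredients({"spinach": 1, "eggs": 3})
```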