This project uses YOLO (You Only Look Once) with OpenCV to detect objects within a room from video or image input. The goal is to enhance real-time user interaction by recognizing key objects (e.g., chairs, laptops, bottles) and optionally triggering AI-based actions when those objects are detected.
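The "trigger actions on detected objects" idea can be sketched as a simple label-to-callback dispatch. This is a minimal illustration, not the project's actual implementation; the `greet_user` handler and the `ACTIONS` table are hypothetical placeholders, and the labels follow the COCO classes YOLOv5 is trained on:

```python
# Sketch: dispatch an action whenever a target object label is detected.
# The handler below is a hypothetical placeholder; plug in your own.

def greet_user(label):
    return f"Detected a {label} in the room."

# Map COCO class labels to callbacks (assumption: your own handlers go here).
ACTIONS = {
    "chair": greet_user,
    "laptop": greet_user,
    "bottle": greet_user,
}

def dispatch(detections):
    """Run the registered action for each detected label, ignoring the rest."""
    return [ACTIONS[label](label) for label in detections if label in ACTIONS]
```

For example, `dispatch(["chair", "person", "bottle"])` fires handlers only for the two labels registered in `ACTIONS`.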
```bash
git clone https://github.com/yourusername/yolo-room-detection.git
cd yolo-room-detection
```
Make sure Python 3.8+ is installed on your system.
Install the required Python packages:
```bash
pip install -r requirements.txt
```
Download the pre-trained YOLOv5 weights (e.g., yolov5s.pt) from the official YOLOv5 GitHub repository.
Place the .pt file into the models/ directory of this project:
```bash
mkdir -p models
mv yolov5s.pt models/
```
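With the weights in place, inference can be sketched as below. This assumes the `ultralytics/yolov5` Torch Hub API (`torch.hub.load(..., "custom", path=...)`); the image path is an example, and the confidence-filtering helper is a plain-Python illustration of post-processing the results:

```python
from pathlib import Path

MODEL_PATH = Path("models/yolov5s.pt")

def load_model():
    # torch is imported lazily so the pure-Python helper below stays
    # usable even without the full YOLOv5 stack installed.
    import torch
    return torch.hub.load("ultralytics/yolov5", "custom", path=str(MODEL_PATH))

def labels_above(rows, threshold=0.5):
    """Keep (name, confidence) pairs at or above a confidence threshold.

    rows: iterable of (name, confidence) tuples, e.g. extracted from
    results.pandas().xyxy[0] in the YOLOv5 results API.
    """
    return [(name, conf) for name, conf in rows if conf >= threshold]
```

A typical session would then be `model = load_model()` followed by `results = model("room.jpg")` and filtering the detections with `labels_above`.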
🎙️ Voice Command Integration
Allow users to control the system or respond to detected objects using voice input.
🏠 Multi-Room Detection Logic
Detect and differentiate between multiple rooms based on visual context or metadata.
👤 User-Specific Object Personalization
Train the system to recognize user-specific objects or preferences for a tailored experience.
🌐 Home Automation API Integration
Integrate with platforms like Home Assistant to trigger real-world actions based on object detection.
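One possible shape for the Home Assistant integration is a call to its REST API (`POST /api/services/<domain>/<service>` with a bearer token). The host, token, and entity id below are placeholders, and the lamp scenario is purely illustrative:

```python
import json
import urllib.request

def build_service_call(host, token, domain, service, entity_id):
    """Build (but do not send) a Home Assistant service-call request."""
    url = f"http://{host}:8123/api/services/{domain}/{service}"
    payload = json.dumps({"entity_id": entity_id}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: turn on a (hypothetical) desk lamp when a laptop is detected.
# req = build_service_call("homeassistant.local", "LONG_LIVED_TOKEN",
#                          "light", "turn_on", "light.desk_lamp")
# urllib.request.urlopen(req)  # actually sends the request
```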