This project demonstrates real-time human detection using the YOLO model, combined with a Kalman filter to track and predict the future movement of humans. It is designed to be highly efficient and adaptable for various use cases, including real-time object tracking and prediction systems.
By integrating YOLO (You Only Look Once) for object detection with a Kalman filter for state estimation, the system detects humans in real time, smooths noisy bounding boxes, and predicts where each person is likely to move next.
The Supervision library further streamlines the detection pipeline, providing utilities for structuring detections and annotating frames.
Ensure you have Python 3.6 or higher installed. You can download it from https://www.python.org/downloads/.
```bash
git clone https://github.com/AnikaitOO7/-Object-Detection-with-YOLO-and-Kalman-Filter
cd -Object-Detection-with-YOLO-and-Kalman-Filter
```
```bash
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
```
```
-Object-Detection-with-YOLO-and-Kalman-Filter/
│
├── supervision_live_feed.py
├── README.md
└── requirements.txt
```
YOLO (You Only Look Once) is employed for real-time object detection. The YOLOv8 model detects humans in the video feed, producing bounding boxes for detected objects.
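A minimal sketch of this detection step, assuming the Ultralytics package and the pretrained `yolov8n.pt` weights (the weights file name and webcam source are assumptions, not necessarily what the repository uses):

```python
import cv2
from ultralytics import YOLO

# Load a pretrained YOLOv8 model (yolov8n.pt is an assumed default;
# swap in whatever weights the project actually ships).
model = YOLO("yolov8n.pt")

cap = cv2.VideoCapture(0)  # live feed from the default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # classes=[0] keeps only the "person" class from the COCO label set
    result = model(frame, classes=[0], verbose=False)[0]
    for box in result.boxes.xyxy.cpu().numpy():
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("YOLOv8 person detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```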
The Kalman filter is used to smooth the bounding box positions over time, reducing noise and providing more stable tracking. It also predicts future movements based on current trajectories.
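A minimal sketch of the smoothing and prediction idea, using OpenCV's `cv2.KalmanFilter` with a constant-velocity model for the bounding-box centre (the repository may track the full box or use a different state vector):

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter: state = [cx, cy, vx, vy], measurement = [cx, cy]
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def smooth_and_predict(cx, cy):
    """Feed a detected box centre; return the filtered centre and a one-step-ahead prediction."""
    predicted = kf.predict()                     # prior estimate (also useful when a detection drops out)
    measurement = np.array([[cx], [cy]], dtype=np.float32)
    corrected = kf.correct(measurement)          # posterior estimate after the new measurement
    return (corrected[0, 0], corrected[1, 0]), (predicted[0, 0], predicted[1, 0])
```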
Supervision enhances the detection and tracking pipeline by converting raw YOLO output into structured detections and annotating each frame with bounding boxes and labels, as sketched below.
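A minimal sketch of how Supervision can slot in between the YOLO output and the display; the annotator classes follow the current `supervision` API, and their use here is an assumption about this repository's exact code:

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # assumed weights file
box_annotator = sv.BoxAnnotator()      # draws bounding boxes
label_annotator = sv.LabelAnnotator()  # draws class labels

frame = cv2.imread("frame.jpg")        # placeholder for one frame from the live feed
result = model(frame, classes=[0], verbose=False)[0]

# Convert the raw YOLO result into a structured Detections object
detections = sv.Detections.from_ultralytics(result)

# Annotate a copy of the frame with boxes and labels
annotated = box_annotator.annotate(scene=frame.copy(), detections=detections)
annotated = label_annotator.annotate(scene=annotated, detections=detections)
```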
A Tkinter-based GUI displays the video feed, showing both raw and filtered detections so users can observe the system's behaviour in real time.
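A minimal sketch of the Tkinter display loop, assuming Pillow is used for the OpenCV-to-Tkinter image conversion (the widget names here are illustrative, not the repository's actual ones):

```python
import tkinter as tk
import cv2
from PIL import Image, ImageTk

root = tk.Tk()
root.title("YOLO + Kalman live feed")
video_label = tk.Label(root)
video_label.pack()

cap = cv2.VideoCapture(0)

def update_frame():
    ok, frame = cap.read()
    if ok:
        # ... run detection / Kalman filtering on `frame` here ...
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        photo = ImageTk.PhotoImage(Image.fromarray(rgb))
        video_label.configure(image=photo)
        video_label.image = photo      # keep a reference so the image is not garbage-collected
    root.after(15, update_frame)       # schedule the next frame (~60 fps ceiling)

update_frame()
root.mainloop()
cap.release()
```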
This project can be adapted to a variety of real-time detection, tracking, and motion-prediction use cases.