Introduction
Car crash detection is a critical component of automated traffic systems, with applications in smart city monitoring, autonomous vehicles, and insurance fraud detection. This project demonstrates how a system can automatically detect car crashes in real-time video streams using a deep learning-based object detection model, specifically YOLOv8, and take emergency actions when a crash is detected.
Tools and Technologies
- OpenCV: A powerful computer vision library used for video processing and displaying results.
- YOLOv8: A state-of-the-art, real-time object detection model for detecting crash events.
- Python: Programming language for implementing the solution.
- Pandas: Data analysis tool for managing detection results.
- CVZone: High-level OpenCV wrapper for enhanced visualization of detection results.
- Twilio (optional): To send SMS and make phone calls in emergency scenarios.
- GPS Module/API (optional): For retrieving the real-time location of the vehicle.
Project Overview
The goal of this project is to detect car crashes in video footage, and when a crash is detected:
- The system will automatically trigger an emergency response.
- It will send a photo of the crash scene along with the vehicle's location to an emergency number.
- This allows authorities to verify the severity of the crash and take timely action.
YOLOv8 Model Overview
YOLOv8 is known for its speed and accuracy, making it ideal for real-time object detection tasks. In this project, YOLOv8 detects crash events within the video frames. The model processes the video frames and outputs the bounding boxes around detected objects, including crashes.
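For context, here is a minimal sketch of running a YOLOv8 model on a single frame and reading its outputs. The weight file `best.pt` and the sample image name are placeholders for whichever checkpoint and frame you actually use.

```python
from ultralytics import YOLO
import cv2

# Load a YOLOv8 model; 'best.pt' is assumed to be a checkpoint that
# includes an accident/crash class.
model = YOLO('best.pt')

frame = cv2.imread('sample_frame.jpg')  # any single video frame (placeholder path)
results = model.predict(frame)          # run detection on the frame

# Each result holds the detected boxes: xyxy gives corner coordinates,
# cls the class index, and conf the confidence score.
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    class_id = int(box.cls[0])
    confidence = float(box.conf[0])
    print(class_id, confidence, (x1, y1, x2, y2))
```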
Dataset Preparation
The dataset used for training should include video footage or images annotated with crash and non-crash scenes. If a pre-trained YOLOv8 model is used, the model can be directly applied for crash detection.
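For a custom training run, Ultralytics YOLOv8 expects YOLO-format labels plus a dataset YAML file describing image paths and class names. The snippet below is an illustrative sketch of such a config written out from Python; the paths and the two class names are assumptions, not the actual dataset used here.

```python
# Example data.yaml for a two-class crash dataset.
# Paths and class names are illustrative; adjust them to your dataset layout.
yaml_text = """
path: datasets/crash        # dataset root
train: images/train         # training images
val: images/val             # validation images
names:
  0: accident
  1: vehicle
"""

with open("data.yaml", "w") as f:
    f.write(yaml_text)
```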
Model Training
If you're training the YOLOv8 model from scratch, follow the steps for training on a custom dataset. If using a pre-trained model, you can skip directly to implementation.
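If you do fine-tune on a custom dataset, a minimal training sketch with the Ultralytics API might look like the following; the base checkpoint and hyperparameters are illustrative, not the values used for the published model.

```python
from ultralytics import YOLO

# Start from a small pre-trained checkpoint and fine-tune it on the crash
# dataset described by data.yaml (hyperparameters here are illustrative).
model = YOLO('yolov8s.pt')
model.train(
    data='data.yaml',   # dataset config with train/val paths and class names
    epochs=50,
    imgsz=640,
    batch=16,
)

# Ultralytics saves the best weights (typically under runs/detect/train/weights/best.pt),
# which can then be loaded with YOLO('best.pt') for inference.
```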
Implementation (Detection with OpenCV and YOLOv8)
The system utilizes OpenCV for capturing video input and YOLOv8 for detecting crash events in each video frame. If a crash is detected, the system proceeds to trigger an emergency call and send alerts.
System Workflow
Step-by-Step Workflow:
- Input: The system processes video streams (e.g., from dashcams or CCTV).
- Preprocessing: Each frame is resized and prepared for object detection.
- Detection: YOLOv8 detects crashes in real time.
- Post-processing: If a crash is detected, the system highlights the event with bounding boxes and initiates emergency measures.
- Emergency Response: The system sends an automatic alert with a crash photo and location data to emergency services.
- Display: The system continuously displays frames with bounding boxes around detected objects.
```python
import cv2
import pandas as pd
from ultralytics import YOLO
import cvzone
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from email.mime.text import MIMEText  # Email helpers (optional: mail the crash snapshot)
import geocoder                 # To get real-time location data (optional)
from twilio.rest import Client  # To send SMS / make calls

# Load YOLOv8 model (pre-trained or custom trained)
model = YOLO('best.pt')

# Emergency call function using the Twilio API
def send_emergency_alert(image, location):
    # Twilio credentials (replace with actual credentials)
    account_sid = 'your_twilio_account_sid'
    auth_token = 'your_twilio_auth_token'
    client = Client(account_sid, auth_token)

    # Send an SMS with crash details
    message = client.messages.create(
        body=f"Car crash detected! Location: {location}",
        from_='+Twilio_phone_number',
        to='+Emergency_contact_number'
    )

    # Make a call to emergency services
    call = client.calls.create(
        twiml=f"<Response><Say>Car crash detected at {location}</Say></Response>",
        to='+Emergency_contact_number',
        from_='+Twilio_phone_number'
    )

    print(f"Alert sent: {message.sid}")
    print(f"Call initiated: {call.sid}")

# Mouse callback used while debugging to print pixel coordinates
def Crash_detect(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        point = [x, y]
        print(point)

cv2.namedWindow('Crash_detect')
cv2.setMouseCallback('Crash_detect', Crash_detect)

# Video capture
cap = cv2.VideoCapture('crash_video.mp4')

# Load class names
my_file = open("coco_classes.txt", "r")
class_list = my_file.read().split("\n")

count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)  # Loop the video when it ends
        continue

    count += 1
    if count % 3 != 0:  # Skip frames for faster processing
        continue

    frame = cv2.resize(frame, (1020, 500))

    # YOLOv8 prediction
    results = model.predict(frame)
    boxes = results[0].boxes.data

    # Convert detections to a DataFrame for easy iteration
    px = pd.DataFrame(boxes.cpu().numpy()).astype("float")

    for _, row in px.iterrows():
        x1, y1, x2, y2 = map(int, row[:4])
        class_id = int(row[5])
        class_name = class_list[class_id]

        # Draw bounding boxes
        if 'accident' in class_name.lower():
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 3)
            cvzone.putTextRect(frame, f'{class_name}', (x1, y1), 1, 1)

            # Get location data (optional, IP-based)
            g = geocoder.ip('me')
            location = g.latlng

            # Save the frame as an image
            crash_image = "crash_event.jpg"
            cv2.imwrite(crash_image, frame)

            # Send emergency alert
            send_emergency_alert(crash_image, location)
        else:
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cvzone.putTextRect(frame, f'{class_name}', (x1, y1), 1, 1)

    cv2.imshow('Crash_detect', frame)
    if cv2.waitKey(1) & 0xFF == 27:  # ESC key to exit
        break

cap.release()
cv2.destroyAllWindows()
```
Key Components:
- Crash Detection: The YOLOv8 model identifies car crashes from the video feed.
- Emergency Alert: Upon crash detection, the system:
  - Sends an SMS with the crash details.
  - Makes a call to emergency services with the crash location.
  - Sends a snapshot of the crash scene via email (see the sketch after this list).
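The main script imports smtplib and the email.mime helpers but does not show the email step itself. Below is a minimal sketch of how the saved crash snapshot could be attached and sent; the sender, receiver, credentials, and SMTP host are placeholder assumptions, not values from the project.

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from email.mime.text import MIMEText

def send_crash_email(image_path, location):
    # Placeholder sender/receiver and credentials; replace with real values.
    sender = "alert.system@example.com"
    receiver = "emergency.contact@example.com"
    password = "your_app_password"

    msg = MIMEMultipart()
    msg["Subject"] = "Car crash detected"
    msg["From"] = sender
    msg["To"] = receiver
    msg.attach(MIMEText(f"Car crash detected! Location: {location}"))

    # Attach the saved snapshot of the crash scene.
    with open(image_path, "rb") as f:
        msg.attach(MIMEImage(f.read(), name=image_path))

    # SMTP host/port assume a TLS-enabled provider such as Gmail.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(sender, password)
        server.send_message(msg)

# Example usage: send_crash_email("crash_event.jpg", [28.6139, 77.2090])
```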
A video of the working model is included with this project.
Evaluation Metrics:
The system's performance can be assessed on the following criteria (a small computation sketch follows the list):
- Detection Accuracy: How well the model identifies crash events.
- Response Speed: The time it takes to send an alert after a crash is detected.
- False Positives/Negatives: Minimizing misclassifications is crucial for the reliability of emergency alerts.
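To make these criteria concrete, here is a minimal sketch of how precision, recall, and average alert latency could be computed from a labelled test run; all counts and timestamps below are placeholders, not measured results.

```python
# Placeholder counts from a labelled test run.
true_positives = 42   # crashes correctly detected
false_positives = 5   # non-crash events flagged as crashes
false_negatives = 3   # crashes the model missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(f"Precision: {precision:.2f}, Recall: {recall:.2f}")

# Response speed: average delay between detection and the alert being sent
# (timestamps here are illustrative, in seconds).
detection_times = [12.0, 57.5, 101.2]
alert_times = [13.1, 58.4, 102.0]
latencies = [a - d for d, a in zip(detection_times, alert_times)]
print(f"Average alert latency: {sum(latencies) / len(latencies):.2f} s")
```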
Emergency Response Features:
- Automatic Phone Call: Using Twilio, the system automatically initiates a phone call to an emergency contact when a crash is detected.
- SMS Notification: Along with the phone call, an SMS containing the crash location is sent for verification.
- Crash Image: A snapshot of the crash event is captured and sent via email or other communication channels for verification.
Implementation Notes:
- The Twilio API is used for sending SMS and making calls.
- You can also integrate GPS or geolocation services to send the vehicle's exact location.
Configuration:
- Make sure to replace the placeholders for the Twilio credentials and phone numbers.
- Optionally, you can use GPS modules or third-party APIs (like the Google Maps API) for more accurate location information; a minimal location helper is sketched below.
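The sketch below shows one way to prefer a real GPS fix and fall back to IP-based geolocation via geocoder, as the main script does. The read_gps_coordinates function is a hypothetical stand-in for whatever GPS module or API you integrate.

```python
import geocoder

def read_gps_coordinates():
    """Hypothetical GPS reader; replace with your GPS module or API call."""
    return None  # e.g. return (latitude, longitude) parsed from a serial NMEA stream

def get_vehicle_location():
    # Prefer a real GPS fix when one is available.
    coords = read_gps_coordinates()
    if coords is not None:
        return coords
    # Fall back to coarse IP-based geolocation, as in the main script.
    g = geocoder.ip('me')
    return g.latlng  # [latitude, longitude], or None if the lookup fails

print(get_vehicle_location())
```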
Future Work and Improvements
- Enhance Emergency Integration: Integrate with more robust emergency communication systems.
- Critical Crash Detection: Add machine learning models to assess the severity of a crash and prioritize critical events.
- Deploy in Real-Time Environments: Integrate the system into vehicles or roadside monitoring stations for real-time crash detection.
Conclusion
This crash detection system, enhanced with emergency alert functionality, can significantly improve road safety by enabling rapid responses in the event of an accident. By combining computer vision, real-time object detection, and emergency response mechanisms, it offers a comprehensive solution for traffic monitoring and accident management.