# Behavior Detection System for Autistic Kids

## Introduction
This project aims to assist autism centers in monitoring and improving student behavior through a machine learning-based behavior detection system. The solution includes behavior classification, student-specific introductory videos, and detailed behavior analysis.
## Key Features

- **Behavior Detection:**
  - Classifies seven behaviors:
    - Hand flapping
    - Head beating
    - Biting
    - Sitting in chair (active)
    - Sitting in chair (inactive)
    - Running
    - Fighting
  - Tracks behavior duration and generates a pie chart for analysis.
- **Personalized Student Experience:**
  - Face recognition for student identification.
  - Plays an introductory video unique to each student after detecting their presence.
- **Classroom Monitoring:**
  - Webcam-based real-time face recognition and behavior analysis.
  - Records the time spent on each behavior for detailed tracking.
- **Face Attendance System:**
  - Tracks student attendance via face recognition.
  - Plays advertisements or other default content when no student is detected.
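Duration tracking boils down to timing how long each classified behavior stays active before the label changes. The project's actual tracking code is not shown; the sketch below is a hypothetical stdlib-only helper (`BehaviorDurationTracker` is an assumed name) illustrating one way to accumulate per-behavior seconds from a stream of classifier outputs.

```python
import time

class BehaviorDurationTracker:
    """Accumulates seconds spent in each detected behavior.

    Illustrative sketch only; the project's real tracking logic may differ.
    """

    def __init__(self):
        self.durations = {}      # behavior name -> total seconds
        self._current = None     # behavior currently being observed
        self._started_at = None  # timestamp when it started

    def update(self, behavior, now=None):
        """Record that `behavior` is the one observed at time `now`."""
        now = time.monotonic() if now is None else now
        if behavior != self._current:
            self._close_current(now)
            self._current = behavior
            self._started_at = now

    def _close_current(self, now):
        if self._current is not None:
            elapsed = now - self._started_at
            self.durations[self._current] = (
                self.durations.get(self._current, 0.0) + elapsed
            )

    def finish(self, now=None):
        """Close the open interval and return the duration table."""
        now = time.monotonic() if now is None else now
        self._close_current(now)
        self._current = None
        return self.durations
```

Calling `update()` once per classified frame is enough; the resulting duration table is exactly the data the pie-chart analysis needs.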
## System Design

### Architecture

- **Frontend:** Built with Tkinter for a simple, user-friendly UI.
- **Backend:**
  - Custom-trained machine learning model for behavior classification.
  - Python scripts that integrate with the webcam hardware.
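As a rough illustration of the frontend side, a minimal Tkinter skeleton might look like the following. The class and widget names are assumptions, not the project's actual `gui.py`; the `start()` method marks where the webcam capture and behavior classification would be hooked in.

```python
import tkinter as tk

class MonitorUI:
    """Minimal Tkinter skeleton for the monitoring frontend (illustrative only)."""

    def __init__(self, root):
        self.root = root
        self.root.title("Behavior Detection System")
        self.status = tk.Label(root, text="Waiting for a student...")
        self.status.pack(padx=10, pady=10)
        tk.Button(root, text="Start Monitoring", command=self.start).pack(pady=5)

    def start(self):
        # Hook point: launch webcam capture and behavior classification here.
        self.status.config(text="Monitoring classroom...")

if __name__ == "__main__":
    root = tk.Tk()
    MonitorUI(root)
    root.mainloop()
```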
### Hardware Requirements

- Webcam
- Computer with moderate processing capabilities
### Software Requirements

- Python 3
- OpenCV (`cv2`)
- face_recognition
- Tkinter (bundled with most Python installations)
## Project Workflow

1. **Face Attendance System**
   - Starts when the system detects a student via the webcam.
   - If a registered student is identified:
     - Stops playing the advertisement.
     - Plays the student's introductory video.
2. **Behavior Detection**
   - The classroom webcam monitors students and identifies their behaviors.
   - The behavior classification system tracks each student's activities in real time.
3. **Behavior Analysis**
   - Generates a pie chart summarizing each student's behavior distribution.
   - Provides insights into behavioral improvements over time.
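Step 3 turns the recorded per-behavior durations into a pie chart. A small helper like the one below (a hypothetical function, not the project's code) converts a `behavior -> seconds` table into labels and percentages that can be fed directly to a plotting library:

```python
def pie_chart_data(durations):
    """Convert a behavior -> seconds table into (labels, percentages)
    suitable for a pie chart.

    `durations` is assumed to map behavior names to accumulated seconds.
    """
    total = sum(durations.values())
    if total == 0:
        return [], []
    labels = sorted(durations)
    percentages = [100.0 * durations[name] / total for name in labels]
    return labels, percentages
```

With matplotlib, the result could then be rendered with `plt.pie(percentages, labels=labels, autopct="%1.1f%%")`.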
## Code Snippets

### Face Detection and Attendance

```python
import cv2
import face_recognition

# Initialize the webcam
video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    if not ret:  # Stop if the camera returns no frame
        break

    # Convert OpenCV's BGR frame to RGB for face_recognition
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    face_locations = face_recognition.face_locations(rgb_frame)

    # Draw a green box around each detected face
    for top, right, bottom, left in face_locations:
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)

    cv2.imshow("Attendance System", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
```
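The snippet above only locates faces; to decide *which* registered student is present, the encodings returned by `face_recognition.face_encodings` (128-dimensional vectors) are compared by Euclidean distance, which is what `face_recognition.compare_faces` does internally with a default tolerance of 0.6. The pure-Python sketch below illustrates that matching step; `identify_student` and the registry layout are assumptions for illustration.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two face-encoding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_student(known_encodings, unknown_encoding, tolerance=0.6):
    """Return the name of the closest registered student, or None.

    `known_encodings` maps student name -> encoding vector (in practice,
    the 128-d output of face_recognition.face_encodings). 0.6 is the
    library's default match tolerance.
    """
    best_name, best_dist = None, tolerance
    for name, encoding in known_encodings.items():
        dist = euclidean(encoding, unknown_encoding)
        if dist <= best_dist:
            best_name, best_dist = name, dist
    return best_name
```

When `identify_student` returns a name, the attendance system can stop the advertisement and play that student's introductory video; a `None` result means no registered student matched.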
### Running the Application

Run `gui.py` to launch the Tkinter interface.