The system processes video footage to detect violent activities in real time. It uses a combination of Convolutional Neural Networks (CNNs) and a Recurrent Neural Network (an LSTM) to classify video sequences and compute a mean violence score for each sequence.
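For intuition, here is a minimal sketch of the scoring idea; the variable names and values below are hypothetical, not the project's actual API:

    import numpy as np

    # Hypothetical per-sequence violence probabilities produced by the LSTM classifier.
    sequence_scores = np.array([0.12, 0.85, 0.91, 0.40])

    # The clip-level result is the mean of the per-sequence scores.
    mean_violence_score = sequence_scores.mean()
    print(f"Mean violence score: {mean_violence_score:.2f}")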
Steps:
Step 1: Move Files
Navigate to the lstm_model directory and run the 1_move_file.py script to move all files into the appropriate train/test folders.
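For example, starting from the repository root (exact paths may differ in your checkout):

    cd lstm_model
    python 1_move_file.py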
Step 2: Extract Files
Run the 2_extract_files.py script to extract images from the videos and create a data file for training and testing.
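For example, still inside the lstm_model directory:

    python 2_extract_files.py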
Step 3: Extract Features
Navigate to the data directory inside the lstm_model folder and run extract_features.py to generate extracted features for each video.
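For example, from the lstm_model directory:

    cd data
    python extract_features.py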
Step 4: Train Model
In the same data directory, run train.py to train the LSTM model. The trained model will be saved in the checkpoints directory.
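For example, from the same data directory:

    python train.py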
Step 5: Run Web Application
Navigate to the root directory of the project and start the web application: python main.py
Ensure the saved_model path variable in main.py points to the correct location of the trained model.
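For example, the variable might be set to something like the following; the directory layout and checkpoint filename are assumptions, so substitute the actual file produced by your training run:

    # in main.py -- illustrative path only; point this at your own checkpoint file
    saved_model = 'lstm_model/data/checkpoints/<your-trained-model-file>'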