The system processes video footage to detect violent activity in real time. It combines a Convolutional Neural Network (CNN) with a Recurrent Neural Network (an LSTM) to classify video sequences and computes a mean violence score for each sequence.
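As a rough illustration of the scoring step, the per-frame violence probabilities produced by the classifier can be averaged into one score per sequence. The function and variable names below are illustrative assumptions, not code from the project:

```python
# Sketch of aggregating per-frame violence probabilities into a
# single mean violence score for a sequence (names are assumptions).
import numpy as np

def mean_violence_score(frame_scores):
    """Average per-frame violence probabilities into one sequence score."""
    scores = np.asarray(frame_scores, dtype=float)
    return float(scores.mean())

# Example: five frame-level probabilities from the classifier
print(mean_violence_score([0.9, 0.8, 0.95, 0.7, 0.85]))
```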
Steps:
Step 1: Move Files
Navigate to the lstm_model directory and run the 1_move_file.py script to move all files into the appropriate train/test folders.
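For orientation, a minimal sketch of the kind of operation 1_move_file.py performs is shown below: moving each video into a train/<class>/ or test/<class>/ folder. The folder layout, file extension, and function name here are assumptions, not the script's actual code:

```python
# Hedged sketch of moving videos into split/class subfolders,
# similar in spirit to what 1_move_file.py does (details assumed).
import shutil
from pathlib import Path

def move_files(src_dir, dest_root, split, label):
    """Move all videos in src_dir into dest_root/split/label/."""
    dest = Path(dest_root) / split / label
    dest.mkdir(parents=True, exist_ok=True)
    for video in Path(src_dir).glob("*.avi"):  # extension is an assumption
        shutil.move(str(video), str(dest / video.name))
```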
Step 2: Extract Files
Run the 2_extract_files.py script to extract images from the videos and create a data file for training and testing.
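The data file this step produces typically indexes each video for later stages. The column layout below (split, class, filename, frame count) is an assumption based on similar CNN-LSTM pipelines, not the script's confirmed format:

```python
# Sketch of writing the kind of CSV index 2_extract_files.py might
# produce: one row per video (column layout is an assumption).
import csv

def write_data_file(rows, path):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

rows = [
    ["train", "violence", "fight_001", 120],
    ["test", "nonviolence", "walk_004", 98],
]
write_data_file(rows, "data_file.csv")
```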
Step 3: Extract Features
Navigate to the data directory inside the lstm_model folder and run extract_features.py to generate extracted features for each video.
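Per-frame CNN features must be shaped into fixed-length sequences before the LSTM can consume them. The sketch below shows zero-padding or truncating to a fixed sequence length; the sequence length and feature size are illustrative assumptions:

```python
# Sketch of shaping per-frame CNN features into a fixed-length
# sequence for the LSTM (seq_len and feature size are assumptions).
import numpy as np

def to_sequence(frame_features, seq_len=40):
    """Pad (with zeros) or truncate an [n_frames, n_features] array to seq_len."""
    feats = np.asarray(frame_features, dtype=np.float32)
    n, d = feats.shape
    if n >= seq_len:
        return feats[:seq_len]
    pad = np.zeros((seq_len - n, d), dtype=np.float32)
    return np.vstack([feats, pad])

print(to_sequence(np.ones((10, 2048)), seq_len=40).shape)  # (40, 2048)
```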
Step 4: Train Model
In the same data directory, run train.py to train the LSTM model. The trained model will be saved in the checkpoints directory.
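To make the recurrence that train.py fits more concrete, here is a minimal single-step LSTM cell in NumPy. The weights are zeros purely for shape illustration (real values come from training), and the toy hidden size is an assumption, not the model's actual configuration:

```python
# Minimal one-step LSTM cell in NumPy, illustrating the recurrence the
# trained LSTM applies over extracted feature sequences. Zero weights
# are for shape illustration only; trained weights come from train.py.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: [4h, d], U: [4h, h], b: [4h]."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)   # update cell state
    h_new = o * np.tanh(c_new)       # emit hidden state
    return h_new, c_new

d, hdim = 2048, 4                    # feature size per frame, toy hidden size
x = np.ones(d)
h = c = np.zeros(hdim)
W = np.zeros((4 * hdim, d))
U = np.zeros((4 * hdim, hdim))
b = np.zeros(4 * hdim)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```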
Step 5: Run Web Application
Navigate to the root directory of the project and start the web application: python main.py
Ensure the saved_model path variable in main.py points to the location of the trained model.
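A quick way to catch a misconfigured path is to validate it before loading. The variable name saved_model comes from the step above; the example path and the check itself are illustrative additions, not code from main.py:

```python
# Hedged sketch of a sanity check for the saved_model path in main.py.
# The example path below is an assumption; use your actual checkpoint.
from pathlib import Path

saved_model = "lstm_model/data/checkpoints/model.hdf5"  # example path (assumption)

def check_model_path(path):
    """Raise early with a clear message if the trained model is missing."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(
            f"Trained model not found at {p}; update saved_model in main.py"
        )
    return p
```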