The system processes video footage to detect violent activity in real time. It combines a Convolutional Neural Network (CNN) for per-frame feature extraction with a Recurrent Neural Network (an LSTM) to classify video sequences and compute a mean violence score for each sequence.
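As a rough illustration of that scoring pipeline, the sketch below feeds per-frame CNN features into an LSTM classifier and averages the per-sequence violence probabilities. The sequence length, feature size, and layer sizes are assumptions, not necessarily the exact values used by the scripts in this repo.

```python
# Minimal sketch of the CNN-feature + LSTM scoring idea (sizes are assumptions).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout

SEQ_LEN, FEAT_DIM = 40, 2048  # hypothetical frames per sequence / CNN feature size

def build_lstm(seq_len=SEQ_LEN, feat_dim=FEAT_DIM):
    """LSTM classifier over per-frame CNN features -> [non-violent, violent]."""
    model = Sequential([
        Input(shape=(seq_len, feat_dim)),
        LSTM(2048, dropout=0.5),
        Dense(512, activation="relu"),
        Dropout(0.5),
        Dense(2, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model

def mean_violence_score(model, sequences):
    """Average the 'violent' probability over all sequences of a clip."""
    probs = model.predict(np.asarray(sequences), verbose=0)[:, 1]
    return float(probs.mean())
```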
Steps:
Step 1: Move Files
Navigate to the lstm_model directory and run the 1_move_file.py script to move all files into the appropriate train/test folders.
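Conceptually, this step sorts each video into a train/<class>/ or test/<class>/ folder. A hedged sketch of that idea (the real script's split source and folder layout may differ):

```python
# Hedged sketch of sorting videos into train/<class>/ and test/<class>/ folders.
import os
import shutil

def move_videos(video_dir, split_map):
    """split_map is a hypothetical {filename: (group, class_name)} mapping,
    where group is 'train' or 'test'."""
    for filename, (group, label) in split_map.items():
        dest = os.path.join(group, label)
        os.makedirs(dest, exist_ok=True)
        shutil.move(os.path.join(video_dir, filename), os.path.join(dest, filename))
```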
Step 2: Extract Files
Run the 2_extract_files.py script to extract images from the videos and create a data file for training and testing.
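For reference, frame extraction can be done with OpenCV as in the sketch below; the actual script may use a different tool, frame-naming scheme, or data-file format, and the .avi glob pattern is an assumption.

```python
# Hedged sketch: dump frames for each video and record per-video frame counts.
import csv
import glob
import os
import cv2

def extract_frames(video_path):
    """Write <stem>-NNNN.jpg next to the video and return the frame count."""
    stem = os.path.splitext(video_path)[0]
    cap, count = cv2.VideoCapture(video_path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        count += 1
        cv2.imwrite(f"{stem}-{count:04d}.jpg", frame)
    cap.release()
    return count

with open("data_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for path in sorted(glob.glob("train/*/*.avi") + glob.glob("test/*/*.avi")):
        group, label = path.split(os.sep)[:2]
        writer.writerow([group, label, os.path.basename(path), extract_frames(path)])
```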
Step 3: Extract Features
Navigate to the data directory inside the lstm_model folder and run extract_features.py to generate extracted features for each video.
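A hedged sketch of what feature extraction typically looks like in this kind of pipeline: each frame is passed through a pretrained CNN and the pooled activations are stacked into one sequence per video. The InceptionV3 backbone, 2048-d output, and file paths below are assumptions.

```python
# Hedged sketch: pretrained-CNN features per frame, one (num_frames, 2048) array per video.
import glob
import os
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image

cnn = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def frame_features(frame_paths):
    """Return an array of shape (num_frames, 2048) for one video's frames."""
    feats = []
    for p in frame_paths:
        img = image.load_img(p, target_size=(299, 299))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(cnn.predict(x, verbose=0)[0])
    return np.array(feats)

# Example: save one video's feature sequence for the training step (paths are hypothetical).
os.makedirs("sequences", exist_ok=True)
frames = sorted(glob.glob("train/Violence/video-0001-*.jpg"))
np.save("sequences/video-0001-features.npy", frame_features(frames))
```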
Step 4: Train Model
In the same data directory, run train.py to train the LSTM model. The trained model will be saved in the checkpoints directory.
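Training is typically wired up with a checkpoint callback so the best weights land in checkpoints/. A hedged sketch, continuing the build_lstm overview sketch above and using random placeholder data in place of the real feature sequences from Step 3:

```python
# Hedged training sketch; X/y below are placeholders for the Step 3 feature sequences.
import os
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

X_train = np.random.rand(64, SEQ_LEN, FEAT_DIM).astype("float32")
y_train = np.eye(2)[np.random.randint(0, 2, 64)]   # one-hot [non-violent, violent]
X_val, y_val = X_train[:16], y_train[:16]

os.makedirs("checkpoints", exist_ok=True)
model = build_lstm()
model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    batch_size=32, epochs=5,
    callbacks=[
        ModelCheckpoint("checkpoints/lstm-{epoch:03d}-{val_loss:.3f}.hdf5",
                        monitor="val_loss", save_best_only=True, verbose=1),
        EarlyStopping(monitor="val_loss", patience=10),
    ],
)
```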
Step 5: Run Web Application
Navigate to the root directory of the project and start the web application by running: python main.py
Ensure the saved_model path variable in main.py points to the trained model saved in Step 4 (the checkpoints directory).
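A minimal sketch of that wiring, assuming a Keras checkpoint; the path below is a placeholder to replace with the file actually written in Step 4:

```python
# Placeholder path -- point saved_model at the checkpoint produced in Step 4.
from tensorflow.keras.models import load_model

saved_model = "lstm_model/data/checkpoints/lstm-050-0.123.hdf5"  # hypothetical filename
model = load_model(saved_model)
```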