This notebook uses a custom dataset collected from Singapore's Land Transport Authority (LTA) API to build and evaluate deep learning models for traffic density classification. The images are labeled across five traffic density levels and are captured under varying conditions, including different lighting and camera angles.
The goal is a robust deep learning solution that classifies traffic density from images captured by multiple traffic cameras. The dataset includes images taken during both daytime and nighttime, so the models are equipped to handle diverse lighting and environmental conditions.
This work was initially inspired by and adapted from Sudhanshu Rastogi’s traffic density classification project. It has been substantially redesigned (approximately 70% of the pipeline and code paths) to improve capability, reproducibility, and usability:
- A ConvNeXt backbone (via `timm`), chosen for stronger performance at similar compute and a modern training recipe.
- `timm` transforms at 384×384 with RandAugment and Random Erasing.

The dataset provides a well‑rounded collection of images, making it ideal for training, validating, and testing deep learning models focused on traffic analysis.
The dataset has been divided into three subsets (training, validation, and testing) to ensure robust model evaluation:
This split enables effective model training while ensuring that performance metrics are evaluated on unseen data.
https://colab.research.google.com/github/Thabhelo/traffic-density-classification/blob/main/Traffic_Density_Classification_with_EfficientNet.ipynb
/content/traffic_density/Final Dataset/{training,validation,testing}/<ClassName>
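A quick way to verify the dataset landed in the expected layout is to count images per split and class. The `count_images` helper below is a hypothetical stdlib sketch, not part of the notebook:

```python
from pathlib import Path

# Hypothetical helper (not from the notebook): count images per split/class
# in the <root>/<split>/<ClassName> layout described above.
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def count_images(root: Path) -> dict:
    """Return {split: {class_name: image_count}} for the dataset tree."""
    counts = {}
    for split_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        counts[split_dir.name] = {
            cls.name: sum(1 for f in cls.iterdir() if f.suffix.lower() in IMAGE_EXTS)
            for cls in sorted(p for p in split_dir.iterdir() if p.is_dir())
        }
    return counts

# Example (the path below is the Colab location; adjust for local runs):
# print(count_images(Path("/content/traffic_density/Final Dataset")))
```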
NOTE:
I had to subscribe to Google Colab’s premium tier to get faster GPU access for the convolutional neural network to handle high‑resolution image classification. On my 2024 MacBook Air with the M3 chip — powerful as it is — the process could take around 20 hours. By switching to a cloud‑based GPU (in this case, the NVIDIA P100), runtime drops to about 30–45 minutes. Colab’s architecture supports both Python and R, and grants up to 89.6 GB of RAM, making it a crucial resource for memory‑intensive tasks. If you need to run something that pushes beyond what a local CPU can handle, I highly recommend it! If you think we are friends, let me know and I can grant you access to my paid subscription.
Core libraries used by the notebook:
- PyTorch (`torch`, `torchvision`)
- Pretrained backbones (`timm`)
- Grad-CAM (`grad-cam`) for model interpretability
- Albumentations (`albumentations`) for image augmentation

Example pinned setup (local macOS, Apple Silicon):
```bash
python3.11 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install \
  torch==2.8.0 torchvision==0.23.0 timm==1.0.9 \
  albumentations==1.4.20 grad-cam==1.5.5 \
  numpy==2.2.6 pandas==2.2.3 seaborn==0.13.2 matplotlib==3.9.2 \
  scikit-learn==1.5.2 opencv-python-headless==4.12.0.88
```
- Model: ConvNeXt (via `timm`), initialized with pretrained weights
- Saved checkpoints: `/content/convnext_feature_extractor.pth` (feature extractor) and `/content/convnext_finetuned.pth` (fine-tuned model)
Use the pinned setup above, open the notebook, and point the dataset paths to your local folders. The notebook auto‑detects MPS (Apple Silicon), CUDA, or CPU.
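The notebook's exact detection code isn't reproduced here, but one plausible priority order (CUDA first, then Apple's MPS, then CPU) can be sketched as a plain function. `pick_device` is a hypothetical name, and the priority order is an assumption:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred device string, assuming CUDA > MPS > CPU priority."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# With torch installed, this would typically be called as:
# device = pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
```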
```bash
!pip install albumentations torch torchvision timm grad-cam
```
Upload or mount the dataset, unzip it to `/content/traffic_density/`, then run all cells.
```bash
!python fetch_lta_images_to_zip.py --snapshots 3 --interval 60
!unzip /content/traffic-density-singapore.zip -d /content/traffic_density
```
Auto-label the raw images into the five density classes (Empty/Low/Medium/High/Traffic Jam) using a YOLO vehicle counter, then create the final dataset ZIP:

```bash
!pip install ultralytics tqdm
!python auto_label_density.py \
  --raw_dir /content/traffic_density/raw \
  --out_root /content/traffic_density \
  --train_ratio 0.8 --val_ratio 0.1 --test_ratio 0.1 \
  --model yolov8n.pt --conf 0.25 --ymin 0.0 --ymax 1.0 \
  --make_zip --zip_path /content/traffic-density-singapore.zip
!unzip /content/traffic-density-singapore.zip -d /content/traffic_density
```
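For illustration, the 0.8/0.1/0.1 split the flags above request could be implemented along these lines. This is a sketch only; `assign_split` is not the script's actual code:

```python
import random

# Illustrative sketch: shuffle deterministically, then slice by ratio.
def assign_split(filenames, train_ratio=0.8, val_ratio=0.1, test_ratio=0.1, seed=42):
    """Partition filenames into training/validation/testing subsets."""
    assert abs(train_ratio + val_ratio + test_ratio - 1.0) < 1e-9, "ratios must sum to 1"
    files = sorted(filenames)           # stable input order
    random.Random(seed).shuffle(files)  # reproducible shuffle
    n_train = int(len(files) * train_ratio)
    n_val = int(len(files) * val_ratio)
    return {
        "training": files[:n_train],
        "validation": files[n_train:n_train + n_val],
        "testing": files[n_train + n_val:],
    }
```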
Notes:

- Vehicle‑count thresholds between the density classes can be tuned via `--thr_low`, `--thr_med`, and `--thr_high`.
- Keep model checkpoints (`.pth`) outside dataset folders.

On the provided split, test accuracy is typically around 0.91 with balanced per‑class precision and recall.
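The threshold flags suggest a simple count-to-class mapping. The sketch below is hypothetical: the function name, the exact boundary semantics, and the default threshold values are all invented for illustration:

```python
# Hypothetical mapping from a YOLO vehicle count to a density class.
# The default thresholds (3/8/15) are placeholders, not the script's defaults.
def density_label(vehicle_count: int,
                  thr_low: int = 3, thr_med: int = 8, thr_high: int = 15) -> str:
    """Bucket a per-image vehicle count into one of the five density classes."""
    if vehicle_count == 0:
        return "Empty"
    if vehicle_count <= thr_low:
        return "Low"
    if vehicle_count <= thr_med:
        return "Medium"
    if vehicle_count <= thr_high:
        return "High"
    return "Traffic Jam"
```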
This project is licensed under the MIT License. See the LICENSE file.