This project focuses on instance segmentation of microscopic cellular images using YOLOv8, a cutting-edge deep learning model known for its speed and accuracy in object detection and segmentation tasks. Leveraging YOLOv8, this project successfully segments cellular structures of various shapes and sizes, achieving remarkable precision across different cell types. This README provides an in-depth description of the dataset, YOLOv8 architecture, annotations, training process, evaluation metrics, and the tech stack involved.
The dataset, named Diverse, consists of 10,000 microscopic cellular images. This dataset is unique due to its wide variety of cell shapes and types, including square, circular, and triangular cells. This diversity is essential for training the model to recognize and segment a broad spectrum of cellular structures, making it more generalizable.
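Before training, the 10,000 images are typically partitioned into training and validation sets. The project does not state its split ratio, so the 80/20 split below is an assumption for illustration:

```python
# Sketch: an 80/20 train/validation split of the image filenames.
# The split ratio and seed are assumptions, not from the project.
import random

def split_dataset(filenames, val_fraction=0.2, seed=0):
    """Return (train, val) lists with no overlap."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    names = list(filenames)
    rng.shuffle(names)
    n_val = int(len(names) * val_fraction)
    return names[n_val:], names[:n_val]

train, val = split_dataset([f"img_{i:05d}.png" for i in range(10000)])
```

Fixing the seed keeps the split reproducible across runs, so validation metrics stay comparable between experiments.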
The dataset is annotated in a format compatible with YOLOv8 and supports various segmentation classes. Each image contains multiple instances, with annotations covering boundaries of each cellular structure, which allows for precise localization and segmentation of individual cells.
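In the Ultralytics YOLOv8 segmentation format, each label file holds one line per instance: a class id followed by the normalized polygon vertices of the mask. A minimal sketch of reading one such line back into pixel coordinates (function name and sample values are illustrative):

```python
# Sketch: parse one line of a YOLOv8 segmentation label file.
# Each line is "<class_id> x1 y1 x2 y2 ..." with coordinates
# normalized to [0, 1] relative to image width and height.

def parse_seg_label(line, img_w, img_h):
    """Return (class_id, [(x_px, y_px), ...]) for one label line."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Pair up (x, y) values and scale back to pixel coordinates.
    points = [(coords[i] * img_w, coords[i + 1] * img_h)
              for i in range(0, len(coords), 2)]
    return class_id, points

cls, poly = parse_seg_label("1 0.5 0.5 0.75 0.5 0.75 0.75", 800, 800)
# cls is 1; poly is [(400.0, 400.0), (600.0, 400.0), (600.0, 600.0)]
```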
YOLOv8 is a modern object detection and segmentation framework from the YOLO (You Only Look Once) family, known for its balance between speed and accuracy. It introduces several architectural enhancements, such as an anchor-free detection head, that make it well suited to real-time segmentation tasks.
Annotations were prepared to meet YOLOv8’s requirements for both bounding boxes and segmentation masks. Each image in the Diverse dataset carries per-instance, multi-class annotations, allowing the model to distinguish between the different cellular shapes.
Training was conducted using a high-performance GPU environment with YOLOv8 configured for instance segmentation. The training process involved fine-tuning the model over 50 epochs, with detailed tracking of loss metrics, precision, and recall.
Training used approximately 8.54 GB of GPU memory, which allowed fast computation and larger batch sizes.
At the 38th epoch, the model achieved excellent performance metrics:
| Epoch | GPU_mem | box_loss | seg_loss | cls_loss | dfl_loss | Instances | Size |
|---|---|---|---|---|---|---|---|
| 38/50 | 8.54G | 0.7109 | 0.9734 | 0.397 | 0.9016 | 427 | 800 |
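To track learning curves across epochs, a console row like the one above can be turned into a structured record. A small sketch (the header names match the table; the parsing helper itself is an assumption, not part of the project):

```python
# Sketch: parse a YOLOv8 console log row (as in the table above)
# into a dict keyed by the header names, e.g. for plotting losses.

HEADER = ["Epoch", "GPU_mem", "box_loss", "seg_loss",
          "cls_loss", "dfl_loss", "Instances", "Size"]

def parse_log_row(row):
    record = dict(zip(HEADER, row.split()))
    # Loss columns become floats; Epoch and GPU_mem stay strings.
    for key in ("box_loss", "seg_loss", "cls_loss", "dfl_loss"):
        record[key] = float(record[key])
    return record

row = parse_log_row("38/50 8.54G 0.7109 0.9734 0.397 0.9016 427 800")
# row["box_loss"] is 0.7109; row["Epoch"] is "38/50"
```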
This epoch highlighted the model’s efficient learning curve, with steady improvements across detection and segmentation tasks.
Upon evaluation at the 38th epoch, YOLOv8 achieved strong results on the test set. Below are the precision, recall, and mAP metrics, essential for understanding model performance:
| Metric | Precision (P) | Recall (R) | mAP@50 | mAP@50-95 |
|---|---|---|---|---|
| Box | 0.957 | 0.967 | 0.983 | 0.879 |
| Mask | 0.951 | 0.960 | 0.975 | 0.817 |
These results underscore YOLOv8’s effectiveness in handling various cell shapes and sizes, demonstrating high precision and recall rates for both bounding boxes and masks.
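For context, mAP@50 counts a prediction as correct when its intersection-over-union (IoU) with a ground-truth box reaches at least 0.5, while mAP@50-95 averages over IoU thresholds from 0.5 to 0.95. A minimal sketch of the box IoU computation behind those columns:

```python
# Sketch: box IoU, the overlap measure behind the mAP@50 and
# mAP@50-95 columns. Boxes are (x1, y1, x2, y2) in pixels.

def box_iou(a, b):
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

iou = box_iou((0, 0, 100, 100), (50, 0, 150, 100))  # two half-overlapping boxes
```

Mask mAP uses the same thresholds but computes IoU over segmentation masks instead of boxes, which is why the mask numbers in the table run slightly lower.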
Clone the repository:
```bash
git clone https://github.com/username/repository-name.git
cd repository-name
```
Install the dependencies:
```bash
pip install -r requirements.txt
```
Configure the dataset path in the YOLOv8 configuration file:
```yaml
# config.yaml
dataset_path: "path/to/diverse_dataset"
```
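A real project would load this file with PyYAML’s `yaml.safe_load`; for a flat key-value file like the one above, a stdlib-only sketch (the helper name is hypothetical) looks like:

```python
# Sketch: read a flat "key: value" config such as config.yaml
# using only the stdlib. A real project would normally use
# PyYAML's yaml.safe_load instead.

def load_flat_config(text):
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip().strip('"')
    return cfg

cfg = load_flat_config('# config.yaml\ndataset_path: "path/to/diverse_dataset"')
# cfg["dataset_path"] is "path/to/diverse_dataset"
```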
To train the model on the Diverse dataset:
```bash
python train.py --data config.yaml --epochs 50 --img-size 800
```
To evaluate model performance:
```bash
python evaluate.py --data config.yaml --weights best.pt
```
The YOLOv8 model showcased remarkable performance in segmenting diverse cellular images, handling various shapes with high precision and recall. This project exemplifies the strength of YOLOv8 in complex instance segmentation tasks, highlighting its applicability in fields requiring detailed cellular analysis.
For further details on customization or deployment, refer to the documentation in the `docs/` folder.