ToothGapID is an innovative tool designed for the precise detection of missing teeth in dental X-rays, utilizing state-of-the-art deep learning models to facilitate reliable and standardized reporting based on the FDI numbering system. This project integrates advanced image processing techniques to enhance diagnostic capabilities and improve patient outcomes.
Features
- FDI-Based Detection: Provides accurate identification of missing teeth using the FDI numbering system, ensuring standardized communication among dental professionals.
- High Detection Accuracy: Leveraging deep learning, ToothGapID achieves impressive detection rates across diverse datasets.
- Flexible Integration: Designed for easy adaptation into various dental imaging workflows, allowing for seamless integration into existing clinical systems.
- Extensive Documentation: Comprehensive guides for setup, usage, and customization ensure accessibility for users with varying levels of technical expertise.
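As a quick illustration of the FDI numbering system referenced above (not code from the ToothGapID repository), the two-digit FDI code can be decoded mechanically: the first digit names the quadrant and the second the tooth position counted outward from the midline.

```python
# Decode a permanent-dentition FDI tooth code (11-48).
# Quadrants are from the patient's perspective, per the FDI scheme.
QUADRANTS = {1: "upper right", 2: "upper left", 3: "lower left", 4: "lower right"}
POSITIONS = {
    1: "central incisor", 2: "lateral incisor", 3: "canine",
    4: "first premolar", 5: "second premolar",
    6: "first molar", 7: "second molar", 8: "third molar",
}

def describe_fdi(code: int) -> str:
    """Translate a permanent-dentition FDI code into a human-readable name."""
    quadrant, position = divmod(code, 10)  # first digit, second digit
    if quadrant not in QUADRANTS or position not in POSITIONS:
        raise ValueError(f"not a permanent-dentition FDI code: {code}")
    return f"{QUADRANTS[quadrant]} {POSITIONS[position]}"
```

For example, `describe_fdi(11)` yields "upper right central incisor" and `describe_fdi(36)` yields "lower left first molar", which is why reports built on FDI codes are unambiguous across practices.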
Installation
To set up ToothGapID, follow these steps:
1. Clone the repository:

   git clone https://github.com/arpsn123/ToothGapID.git
   cd ToothGapID

2. Install the required dependencies:

   pip install -r requirements.txt
Model Evaluation
ToothGapID evaluated two prominent deep learning models, Detectron2 and YOLOv8, to compare their effectiveness in detecting missing teeth.
Detectron2
- Overview: Detectron2 was initially chosen for its robust segmentation capabilities and adaptability to a range of object detection tasks. However, its performance fell short of expectations for several reasons.
- Weaknesses:
- Annotation Issues:
- The performance was significantly hindered by the presence of incorrect annotations within the training dataset.
- Mislabeling led to poor model training, resulting in low detection accuracy and significant misclassifications.
- Generalization Failures:
- The model struggled to generalize beyond the training data, showing weak performance on unseen images, thereby limiting its practical application in real-world scenarios.
Detectron2 Performance Metrics

| Metric | Value |
|---|---|
| Precision | Low due to annotation errors |
| Recall | Inconsistent |
| mAP@0.5 | Below expectations |
YOLOv8 Performance Metrics

| Metric | Value |
|---|---|
| Precision | High (85%+) |
| Recall | Moderate (70%) |
| mAP@0.5 | Good (75%) |
Summary of Model Comparison

| Model | Detection Accuracy | Segmentation Capability | Annotation Quality Impact |
|---|---|---|---|
| Detectron2 | Poor | Yes | High |
| YOLOv8 | Good | No | Moderate |
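The mAP@0.5 figures above count a predicted box as correct when its overlap with the ground-truth box reaches an intersection-over-union (IoU) of at least 0.5. A minimal IoU computation, shown here only to make that threshold concrete:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Intersection rectangle (may be empty).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Two identical boxes give an IoU of 1.0, while a box shifted half its width over another gives 1/3, below the 0.5 cutoff, so that detection would count as a miss at mAP@0.5.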
Challenges and Failures
Throughout the development of ToothGapID, several challenges were encountered:
- Data Variability: The diversity in X-ray imaging conditions, including varying resolutions and noise levels, led to inconsistent model performance and reduced detection accuracy.
- Annotation Quality: Detectron2's failures underscored the need for precise, high-quality annotations; incorrect labels in the training set were a major contributor to its poor performance.
- Class Imbalance: The dataset contained far more images of non-missing teeth than of missing ones, biasing the models toward the majority class. This imbalance was mitigated through data augmentation and weighted loss functions.
- Integration and Adaptation: Adapting ToothGapID for clinical use required extensive compatibility testing with existing systems, as well as adjustments to output formats to meet the needs of dental professionals.
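The class-weighting mitigation mentioned above can be sketched with a weighted cross-entropy loss. This is an illustrative, self-contained NumPy version; the weights shown are hypothetical and not the project's actual configuration.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean class-weighted cross-entropy.

    probs: (N, C) predicted class probabilities, labels: (N,) integer class
    ids, class_weights: (C,) per-class weights used to up-weight rare classes.
    """
    eps = 1e-12  # guard against log(0)
    picked = probs[np.arange(len(labels)), labels]   # probability of true class
    weights = class_weights[labels]                  # weight for each sample
    return float(np.mean(-weights * np.log(picked + eps)))

# Illustrative: up-weight the rare "missing tooth" class (id 1) 4x relative
# to the abundant "tooth present" class (id 0).
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
labels = np.array([0, 1])
loss = weighted_cross_entropy(probs, labels, np.array([1.0, 4.0]))
```

Because mistakes on the rare class are multiplied by a larger weight, gradient descent is pushed to improve on exactly the examples the imbalanced dataset under-represents; the same idea is available directly in frameworks such as PyTorch via a `weight` argument to the cross-entropy loss.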
Future Work
Several directions for enhancing ToothGapID's capabilities are under consideration.
Contributing
Contributions are welcome! If you have suggestions for improvements or want to report issues, please create an issue or submit a pull request. For larger contributions, please consider discussing them in an issue first to ensure alignment with project goals.
Acknowledgments
- Gratitude to dental professionals who provided insights and feedback during the testing phases, enhancing the model's applicability in real-world scenarios.
- Appreciation to the developers and researchers behind Detectron2 and YOLOv8 for their groundbreaking work in deep learning and computer vision, which formed the foundation of this project.