Abstract
How we spent six months on algorithm development: experience and breakthroughs
With the continuous advancement of artificial intelligence (AI) technology, medical image analysis has become one of the most mature applications of AI in medicine. In dentistry, panoramic X-rays and periapical radiographs are crucial tools in clinical diagnosis: they are not only the basis for assessing the health of a patient's teeth, but also help doctors spot hidden diseases. However, interpreting these images requires a great deal of expertise and experience, and places heavy demands on a doctor's attention and time.
Method
In the past six months, we have focused on developing deep-learning-based AI algorithms designed to automatically identify and diagnose diseases in dental panoramic and periapical X-ray films. By consolidating the technology and validating it in real application scenarios along the way, we have achieved a series of important results.
Industry pain points and R&D goals
In dental image analysis, we found the following main pain points:
- Heavy physician workload:
Reading a large number of images takes a lot of time, especially in complex cases.
- Inconsistent diagnoses:
Different physicians may interpret the same image differently, affecting the reliability of diagnostic results.
- Occult lesions that are difficult to detect:
Early caries, root infections, or periodontal disease may be overlooked because the lesions are small.
Based on these problems, our R&D objectives are:
- Improve the efficiency of image interpretation and reduce the workload of doctors.
- Provide high-precision and consistent diagnostic recommendations to assist physicians in decision-making.
- Use AI technology to detect subtle lesions and support early detection and treatment.
Technology development and breakthroughs
- Data acquisition and annotation
Dental X-ray image data is at the heart of algorithm development. We have worked with a number of dental clinics to obtain a large volume of high-quality panoramic X-rays and periapical films. To ensure labeling accuracy, we involved senior dentists in the data annotation and defined a variety of dental labels, including general labels and tooth lesion labels.
- In panoramic films we define the following label groups:
position labels, common labels, and lesion labels.
- In periapical films we define the following label groups:
position labels, common labels, and lesion labels (an illustrative schema is sketched below).
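To make the taxonomy concrete, here is a minimal sketch of how such a label schema could be organized per modality. The specific label names below are illustrative assumptions, not our complete clinical schema.

```python
# Illustrative label taxonomy: each modality groups its labels into
# position, common, and lesion categories (names are examples only).
LABEL_SCHEMA = {
    "panoramic": {
        "position": ["tooth_11", "tooth_21", "tooth_36"],   # FDI tooth positions
        "common":   ["crown", "filling", "implant"],
        "lesion":   ["caries", "periapical_lesion", "bone_loss"],
    },
    "periapical": {
        "position": ["tooth_11", "tooth_21", "tooth_36"],
        "common":   ["crown", "filling", "implant"],
        "lesion":   ["caries", "periapical_lesion", "bone_loss"],
    },
}

# Flatten into a class list, e.g. for a COCO-style annotation export.
CLASSES = sorted({name for groups in LABEL_SCHEMA.values()
                  for labels in groups.values() for name in labels})
```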
Selection and design of deep learning models
We have designed a multi-task learning framework based on a convolutional neural network (CNN) that can identify multiple lesion types and localize lesion regions at the same time (a minimal sketch follows the list below). During training, we employ the following key technologies:
- Transfer learning:
Use pre-trained models to improve the learning efficiency of small-sample data.
- Data augmentation:
Expand the dataset by rotating, scaling, and adjusting brightness to improve the robustness of the model.
- Attention mechanism:
It helps the model focus on key areas and improves the accuracy of lesion detection.
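The following is a minimal PyTorch sketch of the multi-task idea described above: a shared backbone, a lightweight spatial-attention block, and two heads, one for lesion classification and one for coarse lesion localization. The layer sizes, the number of lesion classes, and the toy backbone are illustrative assumptions; in practice the backbone would be a pre-trained network (transfer learning) fed with augmented images.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights feature maps so the model focuses on salient regions."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        weights = torch.sigmoid(self.conv(x))   # (N, 1, H, W) attention map
        return x * weights                      # emphasize attended regions

class MultiTaskDentalNet(nn.Module):
    """Shared CNN backbone with a classification head and a localization head."""
    def __init__(self, num_lesion_classes=6):
        super().__init__()
        self.backbone = nn.Sequential(           # toy stand-in for a pre-trained backbone
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attention = SpatialAttention(64)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cls_head = nn.Linear(64, num_lesion_classes)  # lesion type logits
        self.loc_head = nn.Linear(64, 4)                   # coarse box (x, y, w, h)

    def forward(self, x):
        feats = self.attention(self.backbone(x))
        pooled = self.pool(feats).flatten(1)
        return self.cls_head(pooled), self.loc_head(pooled)

# Training would minimize a weighted sum of a classification loss and a
# localization loss so that both tasks share the same features.
```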
Model selection process
- When we started training the dental film model, we used Mask R-CNN from the mmdetection framework, with the pre-trained model that had the best reported performance (see the figure below).
[Figure: mmdetection Mask R-CNN base modules and benchmark scores]
We used the best pre-trained model, X-101-64x4d-FPN, whose box AP and mask AP scores are 44.5 and 39.7 respectively, the best-performing model in the mmdetection framework. We then turned to the Ultralytics (YOLO11) framework.
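For reference, here is a hedged sketch of the kind of mmdetection config we started from: it inherits the X-101-64x4d-FPN Mask R-CNN base config and overrides the class count and dataset paths. The file names, paths, and class list are assumptions for illustration, and the exact base-config filename depends on the mmdetection version.

```python
# configs/teeth/mask_rcnn_x101_64x4d_fpn_teeth.py (hypothetical config file)
_base_ = '../mask_rcnn/mask_rcnn_x101_64x4d_fpn_1x_coco.py'

classes = ('caries', 'periapical_lesion', 'bone_loss')  # example label set

# Override the head class counts to match our label set.
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=len(classes)),
        mask_head=dict(num_classes=len(classes))))

# Point the COCO-format dataset at our annotated tooth images.
data = dict(
    train=dict(classes=classes,
               ann_file='data/teeth/train.json',
               img_prefix='data/teeth/images/'),
    val=dict(classes=classes,
             ann_file='data/teeth/val.json',
             img_prefix='data/teeth/images/'))

# Start from the COCO-pretrained checkpoint (transfer learning); local path assumed.
load_from = 'checkpoints/mask_rcnn_x101_64x4d_fpn_1x_coco.pth'
```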
- Why did we choose the Ultralytics (YOLO11) framework? First of all, it builds on the YOLO family, which is very mature and has been under active development for roughly ten years, from the original YOLO through YOLO11. Its authors also publish performance benchmarks (see the figure below).
[Figure: Ultralytics YOLO11 segmentation model benchmark scores]
Here we only consider the largest model, YOLO11x-seg, which reports a box AP of 54.7 and a mask AP of 43.8, versus 44.5 and 39.7 for the mmdetection model. On paper it is clearly better, but published numbers alone are not enough; we have to look at actual performance. So we took the same small dental film dataset used with the mmdetection framework, trained it with the YOLO11x-seg model for 100 epochs, and obtained a model for comparison (a minimal training sketch follows).
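Below is a minimal sketch of that comparison run using the Ultralytics Python API: fine-tune the COCO-pretrained YOLO11x-seg weights on the tooth dataset for 100 epochs. The dataset YAML filename and image size are assumptions.

```python
from ultralytics import YOLO

# Load the COCO-pretrained segmentation weights for transfer learning.
model = YOLO("yolo11x-seg.pt")

# Fine-tune on the annotated dental film dataset (paths defined in the YAML).
model.train(
    data="teeth_seg.yaml",   # hypothetical dataset config with train/val paths and class names
    epochs=100,              # same training budget as the comparison above
    imgsz=640,
)

# Evaluate box / mask mAP on the validation split.
metrics = model.val()
print(metrics)
```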
Performance optimization and deployment
In the model optimization stage, we tuned the hyperparameters and learning rate so that the model's precision and recall on the validation set reach an industry-leading level. We then deployed the model with Docker on the latest RTX 4090 graphics cards, and inference speed is likewise industry-leading (a deployment sketch follows).
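As an illustration of this deployment pattern, here is a minimal sketch of a GPU-backed inference endpoint that could run inside such a Docker container. The web framework (FastAPI), file names, and route are assumptions, not our exact production stack.

```python
import io

from fastapi import FastAPI, UploadFile
from PIL import Image
from ultralytics import YOLO

app = FastAPI()
model = YOLO("best.pt")   # trained segmentation weights (path is an assumption)

@app.post("/analyze")
async def analyze(file: UploadFile):
    # Decode the uploaded X-ray image and run inference on the first GPU.
    image = Image.open(io.BytesIO(await file.read()))
    result = model.predict(image, device=0)[0]
    return {
        "boxes": result.boxes.xyxy.tolist(),   # detected lesion bounding boxes
        "classes": result.boxes.cls.tolist(),  # predicted label indices
        "scores": result.boxes.conf.tolist(),  # confidence scores
    }
```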
Periodontal disease algorithm in apical and panoramic films
- In the panoramic film, we calculate the RBL ratio (radiographic bone loss, expressed as a percentage or a length) from the PBL and CEJ landmarks identified on the panoramic image together with the tooth position, display the measured value, and finally compute the grade and stage of periodontal disease.
[Figure: Panoramic film example]
- In the periapical film, we calculate the RBL ratio (radiographic bone loss, expressed as a percentage or a length) from the CEJ, the tooth position, and the upper/lower jaw recognized on the periapical image, display the measured value, and finally compute the grade and stage of periodontal disease (a simplified calculation sketch follows below).
[Figure: Periapical film example]
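As a simplified sketch of the RBL calculation described above: given the detected CEJ point, the alveolar bone level (PBL), and the root apex for one tooth, RBL is the bone-loss distance expressed as a percentage of root length, which can then be mapped to a radiographic stage. The thresholds below follow the commonly cited 2017 AAP/EFP radiographic criteria; real staging and grading also use clinical findings (e.g. tooth loss, patient age), and some protocols subtract a ~2 mm physiological offset from the CEJ, so treat this as an illustration only.

```python
from dataclasses import dataclass

@dataclass
class ToothLandmarks:
    cej_y: float    # cemento-enamel junction (vertical coordinate, mm or px)
    bone_y: float   # detected alveolar bone level (PBL)
    apex_y: float   # root apex

def rbl_percent(t: ToothLandmarks) -> float:
    """Radiographic bone loss as a percentage of root length."""
    root_length = abs(t.apex_y - t.cej_y)
    bone_loss = abs(t.bone_y - t.cej_y)
    return 100.0 * bone_loss / root_length if root_length > 0 else 0.0

def radiographic_stage(rbl: float) -> str:
    """Simplified stage mapping based on radiographic bone loss alone."""
    if rbl < 15:
        return "Stage I"        # bone loss confined to the coronal third (<15%)
    if rbl <= 33:
        return "Stage II"       # 15-33%
    return "Stage III/IV"       # middle/apical third; needs clinical data to separate

landmarks = ToothLandmarks(cej_y=120.0, bone_y=160.0, apex_y=300.0)
rbl = rbl_percent(landmarks)
print(f"RBL = {rbl:.1f}% -> {radiographic_stage(rbl)}")
```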
Achievements and applications
After repeated testing and clinical validation, our AI algorithm has reached initial practical use:
- Efficient auxiliary diagnosis
The model can quickly analyze X-ray images, label suspected lesion areas and generate diagnostic recommendations, which greatly improves the work efficiency of doctors.
- Accurate and consistent judgments
The model automatically identifies lesion types and locations, reducing diagnostic errors caused by human factors and providing doctors with reliable reference opinions.
- Detection of early disease
In multiple tests, our algorithm has successfully detected early lesions that are imperceptible to the naked eye, giving patients the best opportunity for treatment.
- Automatic sequencing of periapical films
In multiple tests, our algorithm successfully sorted and recognized periapical films into the 18 standard positions automatically; no manual operation is required beyond inputting the films.
- Automatic generation of treatment protocols
Once our algorithm has identified a disease, it automatically generates a treatment plan for the patient based on the lesion's location and the corresponding measures.
- Automatic identification of periodontal disease classification
Our algorithm automatically generates periodontal disease grades and stages from the input panoramic film, the charting content, and the patient's historical cases, in full compliance with current international standards.
Conclusion
Six months of exploration have made us deeply aware of the huge potential of AI technology in dental image analysis. It is not only a tool, but also a catalyst for improving the efficiency and accuracy of diagnosis and treatment. In the future, we will continue to devote ourselves to technological innovation and work with dentists to jointly push the dental industry toward intelligence, letting AI technology better serve the clinic and benefit every patient.
If you would like to join our waiting list, you can CONTACT US or send us an email. We welcome and appreciate it!
Contact mail
[email protected]
- Founder Frank 🍻 Cheers