Burn image segmentation based on Mask Regions with Convolutional Neural Network deep learning framework: more accurate and more convenient
Date posted: 2023-09-27
- Impact factor: 5.3
- DOI: 10.1186/s41038-018-0137-9
- Publisher: OXFORD UNIV PRESS
- Journal: BURNS & TRAUMA
- Place of publication: GREAT CLARENDON ST, OXFORD OX2 6DP, ENGLAND
- Keywords: Burn image; Deep learning; Mask R-CNN; Image segmentation
- Abstract: Background: Burns are life-threatening injuries with high morbidity and mortality. Reliable diagnosis, supported by accurate assessment of burn area and depth, is critical to the treatment decision and, in some cases, can save the patient's life. Current techniques, such as the straight-ruler method, the aseptic film trimming method, and digital camera photography, are neither repeatable nor comparable, which leads to large differences in the assessment of burn wounds and impedes the establishment of a common evaluation standard. Hence, in order to semi-automate the burn diagnosis process, reduce the impact of human error, and improve diagnostic accuracy, we introduce deep learning technology into the diagnosis of burns. Methods: This article proposes a novel method that employs a state-of-the-art deep learning technique to segment burn wounds in images. We designed this segmentation framework based on Mask Regions with Convolutional Neural Network (Mask R-CNN). To train the framework, we labeled 1150 pictures in the Common Objects in Context (COCO) data set format and trained the model on 1000 of them. In the evaluation, we compared different backbone networks within the framework: Residual Network-101 with Atrous Convolution in Feature Pyramid Network (R101FA), Residual Network-101 with Atrous Convolution (R101A), and InceptionV2-Residual Network with Atrous Convolution (IV2RA). Finally, we used the Dice coefficient (DC) value to assess model accuracy. Results: The R101FA backbone network achieves the highest accuracy, 84.51%, on the 150 evaluation pictures. Moreover, we chose pictures of different burn depths to evaluate the three backbone networks. The R101FA backbone gives the best segmentation for superficial, superficial partial-thickness, and deep partial-thickness burns, while the R101A backbone gives the best segmentation for full-thickness burns. Conclusions: This deep learning framework segments burn wounds very well and is extremely robust across different burn depths. Moreover, it requires only a suitable image of the burn wound for analysis, making it more convenient and better suited to clinical use than traditional methods, and it also aids the calculation of the total body surface area (TBSA) burned. (A minimal sketch of the Dice coefficient computation is given after this record.)
- Co-authors: Xie Weiguo, Ye Ziqing
- First author: Jiao Chong
- Paper type: Article
- Corresponding author: Su Kehua
- Document type: J
- Volume: 7
- ISSN: 2321-3868
- Translation: No
- Publication date: 2019-03-12
- Indexed in: SCI
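The abstract reports segmentation accuracy with the Dice coefficient (DC). As a point of reference only, the following is a minimal NumPy sketch of the standard formula DC = 2|P ∩ G| / (|P| + |G|) for a predicted mask P and a ground-truth mask G; it is not the authors' code, and the function name and toy masks are hypothetical.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: DC = 2|P ∩ G| / (|P| + |G|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    # eps guards against division by zero when both masks are empty.
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

# Toy example: two 4x4 masks that partially overlap.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(f"Dice coefficient: {dice_coefficient(pred, gt):.4f}")  # 2*3 / (4+3) ≈ 0.8571
```

In practice, the predicted mask would come from the segmentation network's output for a burn image and the ground-truth mask from the labeled annotation, with the score averaged over the evaluation set.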