Magnetic resonance (MR) images can be used to detect lesions in the brains of patients with multiple sclerosis (MS). In this paper, an automatic method is presented for segmenting MS lesions from multispectral MR images. Firstly, a PD-w image is subtracted from its corresponding T1-w image to obtain an image in which the cerebrospinal fluid (CSF) is enhanced. Secondly, based on the kernel fuzzy c-means clustering (KFCM) algorithm, the enhanced image and the corresponding T2-w image are segmented to extract the CSF region and the combined CSF-MS lesion region, respectively. A raw MS lesion image is obtained by subtracting the CSF region from the combined region. Thirdly, the MS lesions are finally detected by applying a median filter and thresholding to the raw image. The method was tested on BrainWeb images and evaluated with the Dice similarity coefficient (DSC), sensitivity (Sens), specificity (Spec) and accuracy (Acc). The testing results were satisfactory.
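The four evaluation metrics above can be computed from binary masks as in the following NumPy sketch (an illustration of the standard metric definitions, not the paper's code):

```python
import numpy as np

def evaluate_segmentation(pred, truth):
    """Compute DSC, sensitivity, specificity and accuracy for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # lesion pixels found
    tn = np.logical_and(~pred, ~truth).sum()  # background correctly rejected
    fp = np.logical_and(pred, ~truth).sum()   # false detections
    fn = np.logical_and(~pred, truth).sum()   # missed lesion pixels
    dsc = 2 * tp / (2 * tp + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return dsc, sens, spec, acc
```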
Glaucoma is one of the leading causes of blindness. The cup-to-disc ratio is the main basis for glaucoma screening, so precise segmentation of the optic cup and disc is of great significance. In this article, an optic cup and disc segmentation model based on linear attention and dual attention is proposed. Firstly, the region of interest is located and cropped according to the characteristics of the optic disc. Secondly, linear attention residual network-34 (ResNet-34) is introduced as the feature extraction network. Finally, channel and spatial dual attention weights are generated from the linear attention output features and used to calibrate the feature maps in the decoder to obtain the optic cup and disc segmentation. Experimental results show that the intersection over union (IoU) of the optic disc and cup on the Retinal Image Dataset for Optic Nerve Head Segmentation (DRISHTI-GS) dataset is 0.9623 and 0.8564, respectively, and the IoU of the optic disc and cup on the Retinal Image Database for Optic Nerve Evaluation (RIM-ONE-V3) is 0.9563 and 0.7844, respectively. The proposed model outperforms the comparison algorithms and has certain medical value for early glaucoma screening. In addition, knowledge distillation is used to generate two smaller models, which facilitates deploying the models on embedded devices.
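Once the cup and disc masks are segmented, the vertical cup-to-disc ratio used in screening can be read off directly; the sketch below is a minimal illustration (the function name and the binary-mask representation are assumptions, not the paper's implementation):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents of two binary masks."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]  # row indices containing the structure
        return rows[-1] - rows[0] + 1 if rows.size else 0
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```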
To address the problems of low accuracy and large segmentation-boundary distance errors in anterior cruciate ligament (ACL) image segmentation of the knee joint, this paper proposes an ACL segmentation model that fuses dilated convolution with a residual hybrid attention U-shaped network (DRH-UNet). The proposed model builds upon the U-shaped network (U-Net) by incorporating dilated convolutions to expand the receptive field, enabling a better understanding of the contextual relationships within the image. Additionally, a residual hybrid attention block is designed in the skip connections to enhance the expression of critical features in key regions and reduce the semantic gap, thereby improving the representation capability for the ACL area. This study constructs an enhanced annotated ACL dataset based on the publicly available Magnetic Resonance Imaging Network (MRNet) dataset. The proposed method is validated on this dataset, and the experimental results demonstrate that the DRH-UNet model achieves a Dice similarity coefficient (DSC) of (88.01±1.57)% and a Hausdorff distance (HD) of 5.16±0.85, outperforming other ACL segmentation methods. The proposed approach further improves ACL segmentation accuracy, providing valuable assistance for subsequent clinical diagnosis by physicians.
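The Hausdorff distance reported above measures the largest boundary deviation between two segmentations; a minimal NumPy sketch over 2-D boundary point sets (an illustration of the definition, not the paper's evaluation code) is:

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N,2) and b (M,2)."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Farthest point of each set from its nearest neighbor in the other set.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```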
In this paper, we propose a new active contour algorithm, the hierarchical contextual active contour (HCAC), and apply it to automatic liver segmentation from three-dimensional CT (3D-CT) images. HCAC is a learning-based method and can be divided into two stages. In the first, the training stage, given a set of abdominal 3D-CT training images and the corresponding manual liver labels, we established a mapping between automatic segmentations (in each round) and manual reference segmentations via context features, and obtained a series of self-correcting classifiers. In the second, the segmentation stage, we first used a basic active contour to segment the image and then iteratively applied the contextual active contour (CAC), which combines the image information with the current shape model, to improve the segmentation result. The current shape model is produced by the corresponding self-correcting classifier, whose input is the previous automatic segmentation result. The proposed method was evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results showed that the segmentation became increasingly accurate over the iterations and that satisfactory results were obtained after about six rounds.
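The two-stage loop described above can be sketched abstractly. In this toy skeleton, `basic_segment`, the per-round classifiers, and `refine` are hypothetical placeholders standing in for the basic active contour, the self-correcting classifiers, and the CAC update, respectively:

```python
def hcac_segment(image, basic_segment, classifiers, refine):
    """Sketch of the HCAC segmentation stage: an initial result is repeatedly
    corrected, one self-correcting classifier per round (about six in the paper)."""
    seg = basic_segment(image)
    for clf in classifiers:
        shape_model = clf(seg)             # classifier maps previous result to a shape model
        seg = refine(image, seg, shape_model)  # CAC combines image info and shape model
    return seg
```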
To obtain an adaptive bandwidth for mean shift and thus make brain tumor segmentation in magnetic resonance imaging (MRI) more accurate, this paper presents an improved mean shift method. Firstly, we exploited the spatial characteristics of the brain image to eliminate the influence of the skull on segmentation; then, based on the spatial agglomeration of the different brain tissues (including tumors), we used edge points to obtain the optimal initial mean value and a corresponding adaptive bandwidth for each tissue, in order to improve the accuracy of tumor segmentation. The experimental results showed that, compared with the fixed-bandwidth mean shift method, the proposed method could segment the tumor more accurately.
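The core mean shift iteration that the paper adapts can be illustrated in one dimension with a flat kernel; the sketch below uses a fixed bandwidth purely to show the mode-seeking step (the adaptive-bandwidth selection from edge points is not reproduced here):

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth, tol=1e-6, max_iter=100):
    """Shift `start` toward the nearest density mode of 1-D `points` (flat kernel)."""
    x = float(start)
    for _ in range(max_iter):
        window = points[np.abs(points - x) <= bandwidth]  # samples inside the kernel
        if window.size == 0:
            break
        new_x = window.mean()   # mean shift update: move to the window mean
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x
```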
Automatic and accurate segmentation of lung parenchyma is essential for computer-assisted diagnosis of lung cancer. In recent years, researchers have proposed a number of improved U-Net-based lung parenchyma segmentation methods. However, the existing methods ignore the complementary fusion of semantic information between feature maps at different levels and fail to distinguish the importance of different spatial locations and channels within a feature map. To solve this problem, this paper proposes the double scale parallel attention network (DSPA-Net) architecture, introducing a DSPA module and an atrous spatial pyramid pooling (ASPP) module into the encoder-decoder structure. The DSPA module aggregates the semantic information of feature maps at different levels while obtaining accurate spatial and channel information with the help of cooperative attention (CA). The ASPP module uses multiple parallel convolution kernels with different dilation rates to obtain feature maps containing multi-scale information under different receptive fields. The two modules address multi-scale information across feature maps of different levels and within feature maps of the same level, respectively. We conducted experimental verification on the Kaggle competition dataset. The results show that the architecture has clear advantages over current mainstream segmentation networks: the Dice similarity coefficient (DSC) and intersection over union (IoU) reached 0.972 ± 0.002 and 0.945 ± 0.004, respectively. This work achieves automatic and accurate segmentation of lung parenchyma and provides a reference for applying attention mechanisms and multi-scale information to lung parenchyma segmentation.
When applying deep learning to the automatic segmentation of organs at risk in medical images, we combined two network models, DenseNet and V-Net, to develop a Dense V-network for automatic segmentation of three-dimensional computed tomography (CT) images, in order to solve the degradation and vanishing-gradient problems that arise when optimizing three-dimensional convolutional neural networks with insufficient training samples. The algorithm was applied to the delineation of pelvic organs at risk, and three representative parameters were used to quantitatively evaluate the segmentation results. The clinical results showed that the Dice similarity coefficients of the bladder, small intestine, rectum, femoral head and spinal cord were all above 0.87 (average 0.9), and their Jaccard distances were all within 0.23 (average 0.18). Except for the small intestine, the Hausdorff distances of the other organs were less than 0.9 cm (average 0.62 cm). The Dense V-network was thus shown to achieve accurate segmentation of pelvic organs at risk.
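The Jaccard distance reported above is one minus the intersection over union of two binary masks, so smaller values mean better overlap; a minimal NumPy sketch of the definition is:

```python
import numpy as np

def jaccard_distance(pred, truth):
    """1 - IoU for two binary masks; 0 means perfect overlap, 1 means none."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 - inter / union
```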
To address the tooth segmentation problem in three-dimensional computed tomography (CT) volume data, this paper proposes a regionally adaptive deformable model for tooth structure measurement in CT images. The proposed method combines automatic thresholding segmentation, the Chan-Vese (CV) active contour model, and graph cut. Firstly, we achieved segmentation and localization of the dental crowns by automatic thresholding. Then, using this result as the initial contour, we applied the active contour method to segment the remaining tooth slice by slice. Finally, by combining the active contour with graph cut, we achieved accurate segmentation of the tooth root, the part most difficult to segment. The experimental results showed that the proposed method accurately and automatically segmented the dental crowns from CT data, and then rapidly and accurately segmented the tooth neck and root. The results indicated that the method is robust and accurate and can effectively assist doctors in clinical diagnosis and treatment.
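A standard choice for the automatic thresholding step is Otsu's method, which picks the threshold maximizing between-class variance; the paper does not state which thresholding rule it uses, so the NumPy sketch below is illustrative only:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()                    # intensity probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # class-0 (background) probability
    mu = np.cumsum(p * centers)              # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes contribute nothing
    return centers[np.argmax(sigma_b)]
```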
Aortic dissection segmentation faces issues such as low contrast between the dissection and surrounding organs and vessels, large variation in dissection morphology, and high background noise. To address these issues, this paper proposed a reinforcement-learning-based method for type B aortic dissection localization. Within a two-stage segmentation model, deep reinforcement learning was used to perform the first-stage localization task, ensuring the integrity of the localization target; in the second stage, the coarse segmentation results from the first stage were used as input to obtain refined segmentation results. To improve the recall of the first-stage results and include the segmentation target more completely in the localization results, a reinforcement learning reward function based on the direction of recall change was designed. Additionally, the localization window was separated from the field-of-view window to reduce loss of the segmentation target. U-Net, TransUNet, SwinUnet, and MT-Unet were selected as benchmark segmentation models. Experiments verified that most metrics of the two-stage process outperformed the benchmark results; specifically, the Dice index improved by 1.34%, 0.89%, 27.66%, and 7.37% for the respective models. In conclusion, incorporating the proposed type B aortic dissection localization method into the segmentation process improves overall segmentation accuracy compared with the benchmark models, and the improvement is particularly significant for models with poorer segmentation performance.
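The recall-direction reward described above can be sketched as follows; the axis-aligned box representation and the function names are assumptions for illustration, not the paper's exact formulation:

```python
def recall(window, target):
    """Fraction of the target box covered by the localization window.
    Boxes are (x0, y0, x1, y1) tuples."""
    ix0 = max(window[0], target[0]); iy0 = max(window[1], target[1])
    ix1 = min(window[2], target[2]); iy1 = min(window[3], target[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = (target[2] - target[0]) * (target[3] - target[1])
    return inter / area

def recall_reward(prev_recall, new_recall):
    """+1 when a move increases recall of the target, -1 when it decreases, 0 otherwise."""
    if new_recall > prev_recall:
        return 1
    if new_recall < prev_recall:
        return -1
    return 0
```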
This paper presents a probability segmentation algorithm for lung nodules based on three-dimensional features. Firstly, we computed intensity and texture features pixel by pixel in the region of interest (ROI) to form a feature vector, and then classified all pixels based on their feature vectors. Finally, we applied region growing to the classification result to obtain the final segmentation. Using the public Lung Imaging Database Consortium (LIDC) lung nodule datasets, we verified the performance of the proposed method by comparison with the probability maps in the LIDC datasets, which were drawn separately by four radiologists. The experimental results showed that the segmentation algorithm using three-dimensional intensity and texture features was effective.
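The final region-growing step can be sketched as a 4-connected flood fill over the per-pixel probability map; the seed point and the 0.5 threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from collections import deque

def region_grow(prob_map, seed, threshold=0.5):
    """Grow a region from `seed` over 4-connected pixels whose probability
    meets `threshold`; returns a boolean mask of the grown region."""
    h, w = prob_map.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c] or prob_map[r, c] < threshold:
            continue
        mask[r, c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask
```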