To make the bandwidth of mean shift adaptive and thereby improve the accuracy of tumor segmentation in brain magnetic resonance imaging (MRI), this paper presents an improved mean shift method. First, we used the spatial characteristics of the brain image to eliminate the influence of the skull on segmentation; then, based on the spatial agglomeration characteristics of the different brain tissues (including tumor), we used edge points to obtain the optimal initial mean and the corresponding adaptive bandwidth for each tissue, so as to improve the accuracy of tumor segmentation. Experimental results showed that, compared with the fixed-bandwidth mean shift method, the proposed method segmented the tumor more accurately.
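As a rough, illustrative sketch only (not the authors' implementation), the Python snippet below shows a gray-level mean shift iteration in which each cluster seed carries its own bandwidth; the initial means and per-seed bandwidths are assumed to be supplied, standing in for the edge-point-based estimates described above.

```python
import numpy as np

def adaptive_mean_shift(intensities, init_means, bandwidths, n_iter=50, tol=1e-3):
    """Gray-level mean shift where each cluster seed keeps its own bandwidth.

    intensities : 1-D array of voxel gray levels (skull already removed)
    init_means  : initial mean for each tissue/tumor cluster (assumed given)
    bandwidths  : adaptive, per-seed Gaussian kernel bandwidths (assumed given)
    """
    means = np.asarray(init_means, dtype=float).copy()
    for _ in range(n_iter):
        new_means = means.copy()
        for k, (m, h) in enumerate(zip(means, bandwidths)):
            # Gaussian kernel weights around the current mean, using this seed's bandwidth
            w = np.exp(-((intensities - m) ** 2) / (2.0 * h ** 2))
            new_means[k] = np.sum(w * intensities) / (np.sum(w) + 1e-12)
        if np.max(np.abs(new_means - means)) < tol:
            break
        means = new_means
    # assign each voxel to the nearest converged mode
    labels = np.argmin(np.abs(intensities[:, None] - means[None, :]), axis=1)
    return means, labels
```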
Automatic and accurate segmentation of lung parenchyma is essential for computer-assisted diagnosis of lung cancer. In recent years, researchers in the field of deep learning have proposed a number of improved lung parenchyma segmentation methods based on U-Net. However, existing segmentation methods ignore the complementary fusion of semantic information between feature maps at different levels and fail to distinguish the importance of different spatial positions and channels within a feature map. To solve this problem, this paper proposes the double scale parallel attention (DSPA) network (DSPA-Net) architecture, which introduces the DSPA module and the atrous spatial pyramid pooling (ASPP) module into an encoder-decoder structure. The DSPA module aggregates the semantic information of feature maps at different levels while obtaining accurate spatial and channel information of the feature map with the help of cooperative attention (CA). The ASPP module uses multiple parallel convolution kernels with different dilation rates to obtain feature maps containing multi-scale information under different receptive fields. The two modules handle multi-scale information across feature maps of different levels and within feature maps of the same level, respectively. We conducted experimental verification on a Kaggle competition dataset. The experimental results show that the network architecture has clear advantages over current mainstream segmentation networks: the Dice similarity coefficient (DSC) and intersection over union (IoU) reached 0.972 ± 0.002 and 0.945 ± 0.004, respectively. This work achieves automatic and accurate segmentation of lung parenchyma and provides a reference for applying attention mechanisms and multi-scale information in the field of lung parenchyma segmentation.
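For readers unfamiliar with atrous spatial pyramid pooling, the following PyTorch-style sketch shows a generic ASPP block built from parallel dilated convolutions; the channel counts and dilation rates are illustrative assumptions, not the exact DSPA-Net configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Generic atrous spatial pyramid pooling: parallel dilated convolutions over
    the same feature map, concatenated and fused by a 1x1 convolution."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]      # same input, different receptive fields
        return self.fuse(torch.cat(feats, dim=1))  # fused multi-scale feature map
```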
In this paper, we propose a new active contour algorithm, hierarchical contextual active contour (HCAC), and apply it to automatic liver segmentation from three-dimensional CT (3D-CT) images. HCAC is a learning-based method and can be divided into two stages. In the first, training stage, given a set of abdominal 3D-CT training images and the corresponding manual liver labels, we established a mapping between the automatic segmentation of each round and the manual reference segmentation via context features, and thereby obtained a series of self-correcting classifiers. In the second, segmentation stage, we first used a basic active contour to segment the image and then iteratively applied the contextual active contour (CAC), which combines the image information with the current shape model, to improve the segmentation result. The current shape model is produced by the corresponding self-correcting classifier, whose input is the previous automatic segmentation result. The proposed method was evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results showed that the segmentation became progressively more accurate over the iterations, and satisfactory results were obtained after about six rounds of iteration.
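The segmentation stage described above can be summarized by the schematic loop below; the basic active contour, the contextual active contour and the self-correcting classifiers are left as abstract callables, since their internals are specific to the paper.

```python
def hcac_segment(image, basic_contour, cac, classifiers):
    """Outline of the HCAC segmentation stage.

    basic_contour : callable(image) -> initial segmentation
    cac           : callable(image, shape_model) -> refined segmentation
    classifiers   : list of self-correcting classifiers, one per round; each maps the
                    previous segmentation (via its context features) to a shape model
    """
    seg = basic_contour(image)        # round 0: basic active contour
    for clf in classifiers:           # about six rounds, per the paper
        shape_model = clf(seg)        # predict shape prior from the previous result
        seg = cac(image, shape_model) # contextual active contour refinement
    return seg
```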
In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology at home and abroad. The segmentation of plaque regions in coronary OCT images is of great significance for the recognition and study of vulnerable plaques. In this paper, a new algorithm based on K-means clustering and an improved random walk is proposed, achieving semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pool. The weight function of the random walk is improved by adding the distance between pixel edges in the image and the seed points to its definition, which increases the weights of weak edges and prevents over-segmentation. Based on the above methods, OCT images of 9 patients with coronary atherosclerosis were selected for plaque segmentation. Comparison with physicians' manual segmentation results showed that the method has good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.
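For illustration, the standard random walker assigns each pixel edge a weight based only on the intensity difference of its endpoints; one plausible form of the modification described above (assumed here for clarity, not the paper's exact formulation) adds a term that depends on the distance between the edge and the seed points:

$$ w_{ij} \;=\; \exp\!\left(-\beta\,(g_i - g_j)^2\right) \;+\; \alpha\, d\!\left(e_{ij},\, S\right), $$

where $g_i$ is the gray level of pixel $i$, $e_{ij}$ is the edge between pixels $i$ and $j$, $d(e_{ij}, S)$ is the distance from that edge to the seed set $S$, and $\alpha$, $\beta$ are weighting parameters. Raising the weights of weak edges in this distance-dependent way is what, as stated above, helps prevent over-segmentation.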
Lung diseases such as lung cancer and COVID-19 seriously endanger human health and life, so early screening and diagnosis are particularly important. Computed tomography (CT) is one of the important means of screening for lung diseases, and lung parenchyma segmentation based on CT images is the key step in such screening: high-quality lung parenchyma segmentation can effectively improve the level of early diagnosis and treatment of lung diseases. Automatic, fast and accurate segmentation of lung parenchyma based on CT images can effectively compensate for the low efficiency and strong subjectivity of manual segmentation, and has become one of the research hotspots in this field. In this paper, the research progress in lung parenchyma segmentation is reviewed based on the related literature published at home and abroad in recent years. Traditional machine learning methods and deep learning methods are compared and analyzed, and progress in improving the network structure of deep learning models is emphatically introduced. Some unsolved problems in lung parenchyma segmentation are discussed, and the development prospects are outlined, providing a reference for researchers in related fields.
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are not obvious and are easily overlooked, so early screening for glaucoma is particularly important. The cup-to-disc ratio is an important indicator for clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to calculating it. In this paper, a fully convolutional neural network with a residual multi-scale convolution module was proposed for optic cup and disc segmentation. First, the fundus image was contrast-enhanced and a polar transformation was applied. Subsequently, W-Net was used as the backbone network, in which the standard convolution unit was replaced with the residual multi-scale fully convolutional module, an image pyramid was fed to the input to construct a multi-scale input, and side output layers were used as early classifiers to generate local prediction outputs. Finally, a new multi-label loss function was proposed to guide network segmentation. The mean intersection over union of the optic cup and disc segmentation on the REFUGE dataset was 0.9040 and 0.9553, respectively, and the overlap error was 0.1780 and 0.0665, respectively. The results show that this method not only realizes the joint segmentation of the optic cup and disc, but also effectively improves the segmentation accuracy, which could help promote large-scale early glaucoma screening.
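The polar transformation step can be illustrated with OpenCV's warpPolar as below; the optic disc centre, radius and output size used here are placeholder assumptions, and the paper's exact preprocessing may differ.

```python
import cv2

def to_polar(fundus_bgr, center, radius, out_size=(400, 400)):
    """Map a fundus image around the (estimated) optic disc centre into polar coordinates.

    In polar space the roughly circular cup/disc boundaries become roughly horizontal
    curves, which simplifies their segmentation.
    center   : (x, y) of the assumed optic disc centre
    radius   : maximum radius in pixels to unwrap
    out_size : (width, height) of the polar image
    """
    return cv2.warpPolar(fundus_bgr, out_size, center, radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

def to_cartesian(polar_mask, center, radius, out_size):
    """Inverse mapping to bring a segmentation mask back to Cartesian space."""
    return cv2.warpPolar(polar_mask, out_size, center, radius,
                         cv2.INTER_NEAREST + cv2.WARP_POLAR_LINEAR + cv2.WARP_INVERSE_MAP)
```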
As an important basis for lesion determination and diagnosis, medical image segmentation has become one of the most important and active research directions in the biomedical field, among which medical image segmentation algorithms based on fully convolutional neural networks and the U-Net neural network have attracted increasing attention from researchers. At present, there are few reports on the application of medical image segmentation algorithms to the diagnosis of rectal cancer, and the accuracy of rectal cancer segmentation results is not high. In this paper, an encoder-decoder convolutional network model combined with image cropping and preprocessing is proposed. On the basis of U-Net, this model replaces the traditional convolution block with a residual block, which effectively avoids the problem of vanishing gradients. In addition, image augmentation is used to improve the generalization ability of the model. Test results on the dataset provided by the “Teddy Cup” Data Mining Challenge showed that the improved residual-block-based U-Net model proposed in this paper, combined with image cropping and preprocessing, could greatly improve the segmentation accuracy of rectal cancer, with a Dice coefficient of 0.97 on the validation set.
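A typical residual block of the kind that replaces U-Net's plain convolution unit might look like the following PyTorch sketch (illustrative only; the paper's exact block may differ).

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual convolution block used in place of U-Net's plain double convolution.
    The shortcut connection lets gradients bypass the convolutions, mitigating
    vanishing gradients in deeper encoder-decoder networks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channel count
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))
```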
To realize accurate localization and quantitative volume measurement of tumors in head and neck CT images, we proposed a level set method based on an augmented gradient. By introducing gradient information into the edge indicator function, the proposed level set model adapts to different intensity variations and achieves accurate tumor segmentation. The segmentation result was then used to calculate tumor volume. For large-volume tumors, the proposed level set method reduces manual intervention and enhances segmentation accuracy, and the calculated tumor volumes are close to the gold standard. The experimental results show that the augmented-gradient-based level set method achieves accurate head and neck tumor segmentation and can provide useful information for computer-aided diagnosis.
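For context, a conventional edge-based level set uses an edge indicator of the form

$$ g(I) \;=\; \frac{1}{1 + \left|\nabla \left(G_\sigma * I\right)\right|^{2}}, $$

where $G_\sigma * I$ is the Gaussian-smoothed image; $g$ approaches zero at strong edges and slows the evolving contour there. The method described above augments the gradient term in this indicator so that it remains discriminative under the varying tumor intensities of head and neck CT images; the exact augmented form is defined in the paper.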
Objective To propose an innovative self-supervised learning method for vascular segmentation in computed tomography angiography (CTA) images by integrating feature reconstruction with masked autoencoding. Methods A 3D masked autoencoder-based framework was developed, in which a 3D histogram of oriented gradients (HOG) was utilized for multi-scale vascular feature extraction. During pre-training, random masking was applied to local patches of CTA images, and the model was trained to jointly reconstruct the original voxels and the HOG features of the masked regions. The pre-trained model was then fine-tuned on two annotated datasets for clinical-level vessel segmentation. Results Evaluated on two independent datasets (30 labeled CTA images each), our method achieved segmentation accuracy superior to the supervised nnU-Net baseline, with Dice similarity coefficients of 91.2% vs. 89.7% (aorta) and 84.8% vs. 83.2% (coronary arteries). Conclusion The proposed self-supervised model significantly reduces manual annotation costs without compromising segmentation precision, showing substantial potential for enhancing clinical workflows in vascular disease management.
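A minimal sketch of the joint reconstruction objective, assuming per-patch voxel and HOG targets and a boolean patch mask (the actual loss weighting and normalization in the paper may differ):

```python
import torch

def masked_joint_loss(pred_voxels, target_voxels, pred_hog, target_hog, mask, hog_weight=1.0):
    """Joint reconstruction loss for a masked autoencoder that regresses both the raw
    voxels and the HOG descriptors of masked patches (illustrative form only).

    pred_voxels/target_voxels : (N_patches, patch_voxels) flattened patch intensities
    pred_hog/target_hog       : (N_patches, hog_dim) HOG descriptor per patch
    mask                      : (N_patches,) bool, True where the patch was masked
    """
    m = mask.float().unsqueeze(-1)                      # only masked patches contribute
    denom = mask.float().sum().clamp(min=1.0)
    voxel_loss = ((pred_voxels - target_voxels) ** 2 * m).sum() / (denom * pred_voxels.shape[-1])
    hog_loss = ((pred_hog - target_hog) ** 2 * m).sum() / (denom * pred_hog.shape[-1])
    return voxel_loss + hog_weight * hog_loss
```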
When deep learning algorithms are applied to magnetic resonance (MR) image segmentation, a large number of annotated images are required as data support. However, the specificity of MR images makes it difficult and costly to acquire large amounts of annotated image data. To reduce the dependence of MR image segmentation on large amounts of annotated data, this paper proposes a meta-learning U-shaped network (Meta-UNet) for few-shot MR image segmentation. Meta-UNet can complete MR image segmentation tasks with a small amount of annotated image data and obtain good segmentation results. Meta-UNet improves on U-Net by introducing dilated convolution, which enlarges the receptive field of the model and improves its sensitivity to targets of different scales; an attention mechanism, which improves the adaptability of the model to different scales; and a meta-learning mechanism with a composite loss function, which provides effective supervision and guidance for model training. We train the proposed Meta-UNet model on different segmentation tasks and then evaluate the trained model on a new segmentation task, where it achieves high-precision segmentation of the target images. Meta-UNet achieves a higher mean Dice similarity coefficient (DSC) than voxel morph network (VoxelMorph), data augmentation using learned transformations (DataAug) and label transfer network (LT-Net). Experiments show that the proposed method can effectively perform MR image segmentation using a small number of samples, providing a reliable aid for clinical diagnosis and treatment.
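The episodic use of segmentation tasks can be outlined as below; the task objects and the per-task adaptation and evaluation routines are placeholders for illustration, not the paper's actual training procedure.

```python
import random

def meta_train(model, tasks, adapt_on_task, evaluate_on_task, n_episodes=1000):
    """Schematic episodic loop for few-shot segmentation training.

    tasks            : list of segmentation tasks, each assumed to expose a small
                       labelled support set (task.support) and a query set (task.query)
    adapt_on_task    : callable(model, support_set) -> updates the model on one task
    evaluate_on_task : callable(model, query_set) -> Dice or another metric
    """
    for _ in range(n_episodes):
        task = random.choice(tasks)           # sample one segmentation task per episode
        adapt_on_task(model, task.support)    # adapt using its few labelled images
        evaluate_on_task(model, task.query)   # measure within-episode generalization
    return model
```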