West China Medical Publishers
Search results for keyword "medical image": 17 results
  • Current situation and prospects of machine learning applications in the study of esophageal cancer

    China has one of the highest incidences of esophageal cancer in the world. Early detection, accurate diagnosis, and timely treatment of esophageal cancer are critical for improving patients’ prognosis and survival. Benefiting from the accumulation of medical images and advances in artificial intelligence, machine learning has become widely used in cancer research. This review therefore summarizes the learning models, image types, data types, and application performance of current machine learning techniques in esophageal cancer, identifies the major challenges of medical image machine learning for esophageal cancer, and proposes solutions. Potential future directions of machine learning in esophageal cancer diagnosis and treatment are discussed, with a focus on the possibility of linking medical images to molecular mechanisms. On this foundation, general rules for applying machine learning in the medical field are summarized and forecast. By drawing on the advanced achievements of machine learning in other cancers and emphasizing interdisciplinary cooperation, esophageal cancer research can be effectively advanced.

    Release date: 2022-06-24 01:25
  • Development and validation of an automatic diagnostic tool for lumbar stability based on deep learning

    Objective To develop a deep-learning-based tool for automatic diagnosis of lumbar spine stability and to validate its diagnostic accuracy. Methods Preoperative lumbar hyperflexion and hyperextension X-ray films were collected from 153 patients with lumbar disease. Three orthopedic surgeons marked the following 5 key points: the posteroinferior and anteroinferior angles of L4, and the posterosuperior, anterosuperior, and posteroinferior angles of L5. Each surgeon’s labels were preserved independently, yielding three sets of annotations. The 306 lumbar X-ray films were randomly divided into training (n=156), validation (n=50), and test (n=100) sets at a ratio of 3∶1∶2. A new neural network architecture, Swin-PGNet, was proposed and trained on the annotated radiographs to automatically locate the lumbar vertebral key points and, from the predicted points, calculate the L4-L5 intervertebral Cobb angle and the L4 sliding distance. The mean error and the intra-class correlation coefficient (ICC) were used as evaluation indices to compare surgeons’ annotations with Swin-PGNet on three tasks: key point localization, Cobb angle measurement, and lumbar sliding distance measurement. A change in Cobb angle of more than 11° was taken as the criterion for lumbar instability, and a sliding distance of more than 3 mm as the criterion for lumbar spondylolisthesis. The accuracy of surgeon annotation and of Swin-PGNet in identifying lumbar instability was compared. Results ① Key points: the mean localization error was (1.407±0.939) mm for Swin-PGNet and (3.034±2.612) mm across surgeons. ② Cobb angle: the mean error was (2.062±1.352)° for Swin-PGNet and (3.580±2.338)° for surgeons; there was no significant difference between Swin-PGNet and the surgeons (P>0.05), but there was a significant difference between surgeons (P<0.05). ③ Lumbar sliding distance: the mean error was (1.656±0.878) mm for Swin-PGNet and (1.884±1.612) mm for surgeons, with no significant difference between Swin-PGNet and the surgeons or between surgeons (P>0.05). The accuracy of lumbar instability diagnosis was 75.3% for surgeons and 84.0% for Swin-PGNet; the accuracy of lumbar spondylolisthesis diagnosis was 70.7% for surgeons and 71.3% for Swin-PGNet, with no significant difference between Swin-PGNet and the surgeons or between surgeons (P>0.05). ④ Consistency of lumbar stability diagnosis: among surgeons, the ICC of the Cobb angle was 0.913 [95%CI (0.898, 0.934)] (P<0.05) and the ICC of the sliding distance was 0.741 [95%CI (0.729, 0.796)] (P<0.05), indicating that the three surgeons’ annotations were consistent. Between Swin-PGNet and the surgeons, the ICC of the Cobb angle was 0.922 [95%CI (0.891, 0.938)] (P<0.05) and the ICC of the sliding distance was 0.748 [95%CI (0.726, 0.783)] (P<0.05), indicating that Swin-PGNet’s measurements were consistent with the surgeons’. Conclusion The deep-learning-based automatic diagnostic tool can identify lumbar instability and spondylolisthesis accurately and conveniently, and can effectively assist clinical diagnosis.
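    The decision rules in this abstract are simple arithmetic on the predicted key points: a Cobb-angle change greater than 11° between flexion and extension films indicates instability, and a sliding distance greater than 3 mm indicates spondylolisthesis. A minimal sketch of those two thresholds (the endplate geometry and the point layout are illustrative assumptions, not details from the paper):

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Angle in degrees between the line p1-p2 (e.g. an L4 endplate)
    and the line q1-q2 (e.g. an L5 endplate); points are (x, y)."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a1 - a2)) % 180.0
    return min(deg, 180.0 - deg)        # fold into [0, 90]

def assess_stability(flexion_angle, extension_angle,
                     flexion_slip_mm, extension_slip_mm):
    """Apply the abstract's thresholds: a Cobb-angle change > 11 deg
    means instability; a sliding distance > 3 mm means spondylolisthesis."""
    instability = abs(flexion_angle - extension_angle) > 11.0
    spondylolisthesis = max(flexion_slip_mm, extension_slip_mm) > 3.0
    return instability, spondylolisthesis
```

    For example, a Cobb angle that changes from 20° in flexion to 5° in extension (a 15° change) would be flagged as unstable, while a maximum slip of 2 mm would not be flagged as spondylolisthesis.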

    Release date: 2023-02-13 09:57
  • Research progress of details enhancement methods in medical images

    Effective medical image enhancement can highlight targets and regions of interest while suppressing background and noise, improving image quality without altering the original geometric structure and thereby making image-based diagnosis easier. This article reviews current methods for enhancing fine structures in medical images, including image sharpening, rough sets and fuzzy sets, multi-scale geometric analysis, and differential operators. Finally, commonly used quantitative evaluation criteria for image detail enhancement are given, and further research directions for fine-structure enhancement of medical images are discussed.
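    As an illustration of the sharpening family the review mentions, unsharp masking adds a scaled detail layer (original minus a blurred copy) back to the image. A minimal NumPy sketch, where the 3×3 box blur and the `amount` parameter are illustrative choices rather than details from the article:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen a 2-D image: add back the detail layer
    (original minus a 3x3 box blur), scaled by `amount`."""
    img = img.astype(float)
    # 3x3 box blur via edge-padded neighbourhood averaging
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blurred)
```

    A flat region is left unchanged (the detail layer is zero there), while intensities on either side of an edge are pushed apart, which is exactly the overshoot that makes fine structures stand out.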

    Release date: 2018-08-23 05:06
  • Cross modal translation of magnetic resonance imaging and computed tomography images based on diffusion generative adversarial networks

    To address the difficulty of preserving anatomical structures, the low realism of generated images, and the loss of high-frequency information in medical image cross-modal translation, this paper proposes a cross-modal translation method based on diffusion generative adversarial networks. First, an unsupervised translation module converts magnetic resonance imaging (MRI) into pseudo computed tomography (CT) images. Then, a nonlinear frequency decomposition module extracts high-frequency CT images. Finally, the pseudo-CT image is fed into the forward diffusion process, while the high-frequency CT image serves as a conditional input guiding the reverse process to generate the final CT image. The model is evaluated on the SynthRAD2023 dataset, which targets CT image generation for radiotherapy planning. The generated brain CT images achieve a Fréchet inception distance (FID) of 33.1597, a structural similarity index measure (SSIM) of 89.84%, a peak signal-to-noise ratio (PSNR) of 35.5965 dB, and a mean squared error (MSE) of 17.8739; the generated pelvic CT images achieve an FID of 33.9516, an SSIM of 91.30%, a PSNR of 34.8707 dB, and an MSE of 17.4658. Experimental results show that the proposed model generates highly realistic CT images while preserving anatomical accuracy as far as possible. The translated CT images can be used effectively in radiotherapy planning, further improving diagnostic efficiency.
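    The reported PSNR and MSE follow their standard definitions, which can be sketched directly. This is the generic formula, not the authors' evaluation code; `data_range` is assumed to be the image's dynamic range:

```python
import math
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE).
    Identical images have zero MSE and therefore infinite PSNR."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(data_range ** 2 / m)
```

    Higher PSNR means lower reconstruction error, so the ~35 dB brain result above indicates a closer pixel-wise match than the ~34 dB pelvic result.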

    Release date: 2025-06-23 04:09
  • A multimodal medical image contrastive learning algorithm with domain adaptive denormalization

    Deep learning has recently achieved impressive results on medical image tasks. However, it usually requires large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotations is a challenge. The two commonly used remedies, transfer learning and self-supervised learning, have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method treats images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, improving the model’s understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain adaptive denormalization method that transforms source-domain images with the help of statistics of the target domain. The method is validated on two multimodal medical image classification tasks: on microvascular infiltration recognition it achieves an accuracy of (74.79±0.74)% and an F1 score of (78.37±1.94)%, an improvement over conventional learning methods, and on brain tumor pathology grading it also achieves significant improvements. The results show that the method performs well on multimodal medical images and can serve as a reference solution for pre-training on multimodal medical images.
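    The abstract does not give the exact form of domain adaptive denormalization; one common reading, offered here purely as an assumption, is an AdaIN-style renormalization that standardizes a source image by its own statistics and then rescales it with target-domain statistics:

```python
import numpy as np

def domain_adaptive_denorm(src, tgt_mean, tgt_std, eps=1e-6):
    """Re-express a source-domain image with target-domain statistics:
    standardize by the source image's own mean/std, then scale and shift
    by the target domain's mean/std (AdaIN-style; an assumed reading,
    not the paper's exact formulation)."""
    src = src.astype(float)
    normed = (src - src.mean()) / (src.std() + eps)
    return normed * tgt_std + tgt_mean
```

    After the transform the image's first and second moments match the target domain, while its spatial structure (the lesion anatomy) is preserved, which is the property that makes it usable as a multimodal augmentation.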

    Release date: 2023-08-23 02:45
  • Research on convolutional neural network and its application on medical image

    In recent years, the convolutional neural network (CNN) has been a research hotspot in machine learning, with practical value in computer-aided diagnosis. This paper first briefly introduces the basic principles of CNN. Second, it summarizes improvements to network structure along two dimensions: model structure and structure optimization. For model structure, it surveys eleven classical CNN models from the past 60 years and traces their development along a timeline. For structure optimization, research progress is summarized across five aspects of the CNN: the input layer, convolution layer, down-sampling layer, fully connected layer, and the network as a whole. Third, learning algorithms are summarized in terms of optimization and fusion: for optimization, progress is organized by optimization objective; for fusion, improvements are summarized for the input layer, convolution layer, down-sampling layer, fully connected layer, and output layer. Finally, CNN is mapped to the medical image domain and combined with computer-aided diagnosis to explore its applications in medical imaging. The paper offers a thorough summary of CNN and has positive significance for its development.

    Release date: 2019-02-18 02:31
  • Research progress of computer-aided diagnosis in cancer based on deep learning and medical imaging

    The dramatic increase in high-resolution medical images provides a great deal of useful information for cancer diagnosis and plays an essential role in assisting radiologists by supporting more objective decisions. To exploit this information accurately and efficiently, researchers are focusing on computer-aided diagnosis (CAD) in cancer imaging. In recent years, deep learning, a state-of-the-art machine learning technique, has driven great progress in this field. This review covers reports on deep-learning-based CAD systems in cancer imaging. We found that deep learning has outperformed conventional machine learning techniques in both tumor segmentation and classification, and that it may bring about a breakthrough in CAD of cancer, with great prospects for future clinical practice.

    Release date: 2017-04-13 10:03
  • Research Progress of Multi-Model Medical Image Fusion at Feature Level

    Medical image fusion integrates the advantages of functional and anatomical images. This article discusses the research progress of multimodal medical image fusion at the feature level. We first describe the principle of feature-level medical image fusion, then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods in medical image fusion. Lastly, we point out current problems and future research directions for multimodal medical image fusion.

  • Research progress of methods allowing quantitative analysis of aortic valve calcification

    With social and economic development and advances in medicine, degenerative heart valve disease has become the predominant form of heart valve disease. Calcific aortic valve disease (CAVD) is one of its most representative manifestations. Aortic valve calcification (AVC) has been found to be a strong predictor of major cardiovascular events, which makes it necessary to identify an effective way to evaluate the degree of AVC. Numerous methods for quantitative assessment of AVC have been reported; here, we discuss them from the perspectives of pathology and imaging.

    Release date: 2018-09-25 04:15
  • An algorithm for three-dimensional pulmonary parenchyma segmentation by integrating surfacelet transform with pulse coupled neural network

    To overcome the difficulty of lung parenchyma segmentation caused by factors such as lung disease and bronchial interference, a three-dimensional lung parenchyma segmentation algorithm is presented based on the integration of the surfacelet transform and the pulse coupled neural network (PCNN). First, the three-dimensional computed tomography volume of the lungs is decomposed in the surfacelet transform domain to obtain multi-scale, multi-directional sub-band information, and edge features are enhanced by filtering the sub-band coefficients with a locally modified Laplacian operator. Second, the inverse surfacelet transform is applied and the reconstructed image is fed to the input of the PCNN. Finally, the PCNN is iterated to obtain the final segmentation. The proposed algorithm is validated on samples from a public dataset. The experimental results demonstrate that it outperforms the three-dimensional surfacelet-transform edge detection algorithm, the three-dimensional region growing algorithm, and the three-dimensional U-Net algorithm. It effectively suppresses interference from lung lesions and bronchi and obtains a complete lung parenchyma structure.
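    The PCNN iteration described above can be sketched in simplified 2-D form: each pixel's internal activity combines its stimulus with linking input from neighbours that fired, and a dynamic threshold jumps after a pixel fires and then decays. This is a minimal illustrative model; the coefficients and the simplified update equations are assumptions, not the paper's surfacelet-coupled implementation:

```python
import numpy as np

def pcnn_segment(img, iterations=10, beta=0.2, v_theta=20.0, decay=0.7):
    """Simplified PCNN: pixels fire when internal activity U exceeds a
    dynamic threshold theta; firing raises theta, and linking input from
    the 8-neighbourhood encourages spatially coherent firing."""
    s = img.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-9)   # stimulus in [0, 1]
    y = np.zeros_like(s)                             # current firing map
    theta = np.ones_like(s)                          # dynamic threshold
    fired = np.zeros_like(s, dtype=bool)             # pixels that ever fired
    for _ in range(iterations):
        # linking input: number of fired 8-neighbours
        pad = np.pad(y, 1)
        link = sum(pad[i:i + s.shape[0], j:j + s.shape[1]]
                   for i in range(3) for j in range(3)) - y
        u = s * (1.0 + beta * link)                  # internal activity
        y = (u > theta).astype(float)                # fire where U > theta
        theta = decay * theta + v_theta * y          # raise theta after firing
        fired |= y.astype(bool)
    return fired
```

    On a bright object against a dark background, the object's pixels fire together within a few iterations while the background never does, which is the grouping behaviour the segmentation step relies on.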

    Release date: 2020-10-20 05:56
