Medical imaging methods based on artificial intelligence (AI) are developing rapidly, and the number of related publications is increasing year by year. However, there has been no dedicated reporting standard, and the reporting of results is not standardized. To improve the reporting quality of such research and help readers and reviewers appraise it more scientifically, the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) was proposed internationally. This paper introduces the content of CLAIM and explains its items.
Central lung cancer is a common clinical disease that usually arises at or above the level of the segmental bronchi. It is frequently accompanied by bronchial stenosis or obstruction, which can readily lead to atelectasis. Accurately distinguishing lung cancer from atelectasis is important for tumor staging, delineation of the radiotherapy target volume, and evaluation of treatment efficacy. This article reviews the domestic and foreign literature on defining the boundary between central lung cancer and atelectasis on multimodal images, summarizing current experience and discussing future prospects.
The rapid development of artificial intelligence (AI)-based medical imaging methods led to the first release of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) in 2020 to promote the completeness and consistency of reports of AI research in medical imaging. However, during its application it was found that some items in CLAIM needed improvement. The expert committee therefore updated the checklist and released CLAIM 2024. This article introduces CLAIM 2024 so that domestic scholars can follow and refer to it in a timely manner.
Automatic generation of medical image reports faces various challenges, such as the diversity of disease types and a lack of professionalism and fluency in report descriptions. To address these issues, this paper proposes a memory-driven multimodal medical imaging report generation method (mMIRmd). First, a hierarchical vision transformer using shifted windows (Swin-Transformer) is used to extract multi-perspective visual features from the patient's medical images, and semantic features of the textual medical history are extracted with bidirectional encoder representations from transformers (BERT). The visual and semantic features are then fused to enhance the model's ability to recognize different disease types. Furthermore, a word-vector dictionary pre-trained on medical text is employed to encode the labels of the visual features, improving the professionalism of the generated reports. Finally, a memory-driven module is introduced into the decoder to address long-range dependencies in medical image data. The method is validated on the chest X-ray dataset collected at Indiana University (IU X-Ray) and on the medical information mart for intensive care chest X-ray (MIMIC-CXR) dataset released by the Massachusetts Institute of Technology and Massachusetts General Hospital. Experimental results indicate that the proposed method focuses better on the affected areas, improves the accuracy and fluency of report generation, and can assist radiologists in completing medical image reports quickly.
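The visual-semantic fusion step described above can be illustrated with a minimal sketch; the backbone names, feature dimensions, and concatenation-based fusion below are assumptions for illustration only and are not taken from the mMIRmd paper.

```python
# Minimal sketch of visual-semantic feature fusion for report generation.
# Assumes a timm Swin-Transformer and a Hugging Face BERT; dimensions,
# fusion strategy, and module names are illustrative, not the paper's.
import torch
import torch.nn as nn
import timm
from transformers import BertModel

class VisualSemanticFusion(nn.Module):
    def __init__(self, hidden_dim=768):
        super().__init__()
        # Swin-Transformer backbone producing pooled visual features
        self.visual_encoder = timm.create_model(
            "swin_base_patch4_window7_224", pretrained=True, num_classes=0)
        self.visual_proj = nn.Linear(self.visual_encoder.num_features, hidden_dim)
        # BERT encoder for the textual clinical-history features
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Concatenation + projection as one possible fusion scheme
        self.fusion = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, images, input_ids, attention_mask):
        vis = self.visual_proj(self.visual_encoder(images))       # (B, H)
        txt = self.text_encoder(input_ids=input_ids,
                                attention_mask=attention_mask
                                ).last_hidden_state[:, 0]          # [CLS] token, (B, H)
        return self.fusion(torch.cat([vis, txt], dim=-1))          # fused (B, H)
```

In a full system, the fused representation would then be passed to the memory-driven decoder that generates the report text.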
With the change in medical diagnosis and treatment modes, the quality of medical images directly affects doctors' diagnosis and treatment of disease. Computer-based intelligent image quality control can therefore greatly assist radiographers' imaging work. This paper describes the research methods and applications of deep learning image segmentation and image classification models, as well as traditional image processing algorithms, for medical image quality evaluation. The results demonstrate that, when effectively trained on large volumes of medical image data, deep learning algorithms are more accurate and efficient than traditional image processing algorithms, illustrating the broad application prospects of deep learning in the medical field. Based on this work, an intelligent quality control system for assisting image acquisition was developed and successfully deployed in the Radiology Department of West China Hospital and other city and county hospitals, effectively verifying the feasibility and stability of the system.
Objective To explore the clinical value of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA) enhanced magnetic resonance (MR) imaging for cirrhosis-related nodules. Methods Nineteen patients with suspected cirrhosis and liver lesions were prospectively enrolled for Gd-EOB-DTPA enhanced MR imaging between November 2011 and January 2013. Hepatobiliary phase (HBP) images were acquired 20 minutes after contrast injection. The images were interpreted independently in two groups: group A, consisting of plain phase and dynamic phase images; group B, consisting of plain phase, dynamic phase, and HBP images. The signal intensity (SI) of lesions on HBP images, the background liver SI, and the standard deviation of background noise were measured with circular regions of interest, and the lesion signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Results The 19 patients had 25 lesions in total, including 18 hepatocellular carcinomas (HCC) and 7 regenerative nodules (RN) or dysplastic nodules (DN), with diameters ranging from 0.6 cm to 3.2 cm (average 1.3 cm). Sixteen HCC were hypointense relative to the normal liver on HBP, while 2 HCC were hyperintense. Five HCC contained cystic necrosis; the necrotic areas showed no enhancement in the arterial phase but flocculent enhancement on HBP. Six RN or DN were hyperintense and 1 was isointense to the background liver on HBP. The diagnostic accuracy rates of group A and group B were 80.0% (20/25) and 92.0% (23/25), respectively. On HBP, the SNR of RN or DN was 132.90±17.21 and that of HCC was 114.35±19.27, while the CNR of RN or DN was 19.47±8.20 and that of HCC was 112.15±33.52. Conclusion Gd-EOB-DTPA enhanced MR imaging can improve the diagnostic capacity for cirrhosis-related nodules and thus help develop more accurate and reasonable treatment options.
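The SNR and CNR referred to above are commonly computed from the ROI measurements as follows; the abstract does not give the exact formulas, so these standard definitions are an assumption.

```latex
% Commonly used definitions from ROI measurements (assumed, not stated in the abstract)
\[
  \mathrm{SNR} = \frac{SI_{\mathrm{lesion}}}{SD_{\mathrm{noise}}}, \qquad
  \mathrm{CNR} = \frac{\left| SI_{\mathrm{lesion}} - SI_{\mathrm{liver}} \right|}{SD_{\mathrm{noise}}}
\]
```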
Objective To systematically review the efficacy of team-based learning (TBL) pedagogy versus traditional lecture-based learning (LBL) pedagogy in radiology education. Methods PubMed, EMbase, Web of Science, WanFang Data, CNKI and VIP databases were electronically searched to collect randomized controlled trials (RCTs) on the application of TBL and LBL pedagogy in radiology education from inception to March 31st, 2020. Two reviewers independently screened the literature, extracted data, and assessed the risk of bias of the included studies; meta-analysis was then performed using Stata/SE 16.0 software. Results A total of 11 RCTs involving 721 participants were included. The results of meta-analysis showed that, compared with the LBL method, TBL significantly improved students' theoretical assessment scores (SMD=1.70, 95%CI 1.05 to 2.36, P<0.001), practical assessment scores (SMD=2.00, 95%CI 1.02 to 2.98, P<0.001), and preference for the curriculum design (RR=1.53, 95%CI 1.19 to 1.97, P=0.001), and students more often agreed that TBL promoted teamwork ability (RR=2.46, 95%CI 1.69 to 3.59, P<0.001), self-directed learning ability (RR=2.41, 95%CI 1.33 to 4.39, P=0.004), and clinical practice ability (RR=2.09, 95%CI 1.46 to 3.00, P<0.001). However, no significant difference was found between the two pedagogies in students' subjective evaluation of theoretical knowledge. Conclusions Current evidence shows that TBL pedagogy, based on active learning and team cooperation, has clear advantages over the traditional LBL mode in radiology education. Due to the limited quality and quantity of the included studies, more high-quality studies are needed to verify the above conclusions.
China has one of the highest rates of esophageal cancer in the world. Early detection, accurate diagnosis, and treatment of esophageal cancer are critical for improving patients' prognosis and survival. Benefiting from the accumulation of medical images and advances in artificial intelligence, machine learning technology has become widely used in cancer research. This review therefore summarizes the learning models, image types, data types and application performance of current machine learning techniques in esophageal cancer, identifies the major challenges, and proposes solutions for medical image machine learning in esophageal cancer. Potential future directions of machine learning in esophageal cancer diagnosis and treatment are discussed, with a focus on the possibility of establishing links between medical images and molecular mechanisms. On this foundation, the general rules for applying machine learning in the medical field are summarized and forecast. By drawing on the advanced achievements of machine learning in other cancers and focusing on interdisciplinary cooperation, esophageal cancer research can be effectively promoted.
In deep learning-based image registration, regions with complex anatomical structures are an important factor affecting registration accuracy, yet existing methods find it difficult to focus on such regions. At the same time, the receptive field of a convolutional neural network is limited by its kernel size, making it hard to learn relationships between spatially distant voxels and thus to handle large deformations. To address these two problems, this paper proposes a cascaded multi-level registration network based on the transformer, equipped with a difficult-deformation region perceptron based on mean squared error. The difficult-deformation perceptron uses sliding-window and floating-window techniques to traverse the registered images, computes a difficult-deformation coefficient for each voxel, and identifies the regions with the worst registration. The cascaded multi-level registration network uses the difficult-deformation perceptron to connect its levels, and a self-attention mechanism extracts global features in the base registration network to optimize the registration results at different scales. Experimental results show that the proposed method performs progressive registration of regions with complex deformation, thereby improving the registration of brain medical images and providing useful support for clinical diagnosis.
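As a rough illustration of the difficult-deformation perceptron idea, the sketch below computes a per-voxel local mean-squared-error coefficient over sliding windows between the warped moving image and the fixed image, then flags the worst-registered voxels. The window size, uniform-filter implementation, and threshold are assumptions, not the paper's code.

```python
# Sketch of a sliding-window MSE "difficult deformation" score between a
# registered (warped) image and the fixed image. Window size, the use of a
# uniform filter, and the quantile threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def difficult_deformation_map(warped, fixed, window=9):
    """Per-voxel coefficient: mean squared error within a sliding window.

    Higher values mark regions where the current registration is worst and
    should receive further refinement by the next cascade level.
    """
    sq_err = (warped - fixed) ** 2
    # Local mean of the squared error over a window**3 neighborhood
    return uniform_filter(sq_err, size=window, mode="nearest")

def worst_regions(coeff_map, top_fraction=0.05):
    """Boolean mask selecting the voxels with the highest local MSE."""
    threshold = np.quantile(coeff_map, 1.0 - top_fraction)
    return coeff_map >= threshold
```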