The electroencephalogram (EEG) signal is a general reflection of the neurophysiological activity of the brain, and EEG recording has the advantages of being safe, efficient, real-time, and dynamic. With the development and advancement of machine learning research, automatic diagnosis of Alzheimer's disease based on deep learning has become a research hotspot. Starting from feedforward neural networks, this paper compares and analyses the structural properties of neural network models such as recurrent neural networks, convolutional neural networks, and deep belief networks, together with their performance in the diagnosis of Alzheimer's disease. It also discusses possible future challenges and research trends in this area, with the aim of providing a valuable reference for the clinical application of neural networks in EEG-based diagnosis of Alzheimer's disease.
Heart failure is a disease that seriously threatens human health and has become a global public health problem. Diagnostic and prognostic analysis of heart failure based on medical imaging and clinical data can reveal the progression of the disease and reduce patients' risk of death, and therefore has important research value. Traditional analysis methods based on statistics and machine learning suffer from problems such as insufficient model capability, poor accuracy caused by dependence on prior assumptions, and limited adaptability. In recent years, with the development of artificial intelligence technology, deep learning has gradually been applied to clinical data analysis in the field of heart failure, offering a new perspective. This paper reviews the main progress, application methods, and major achievements of deep learning in heart failure diagnosis, mortality prediction, and readmission prediction, summarizes the existing problems, and presents prospects for related research to promote the clinical application of deep learning in heart failure research.
Previous neural networks for automatic segmentation treated the radiotherapy target area as an independent region. In contrast, this paper proposes a stacked neural network that exploits the position and shape information of the organs surrounding the target area: by stacking multiple networks and fusing spatial position information, it constrains the shape and position of the target area and thereby improves segmentation accuracy on medical images. Taking Graves' ophthalmopathy as an example, the left and right radiotherapy target areas were segmented by the stacked neural network built on a fully convolutional neural network (FCN). The volume Dice similarity coefficient (DSC) and bidirectional Hausdorff distance (HD) were calculated against target areas manually delineated by physicians. Compared with the FCN alone, the stacked neural network increased the volume DSC on the left and right sides by 1.7% and 3.4% respectively, while the bidirectional HD on the left and right sides decreased by 0.6. The results show that the stacked neural network improves the agreement between the automatic segmentation and the physicians' delineation of the target area, while reducing segmentation errors in small regions. The stacked neural network can therefore effectively improve the accuracy of automatic delineation of the radiotherapy target area in Graves' ophthalmopathy.
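As a conceptual illustration of the stacking idea described above, the sketch below (in PyTorch) shows a two-stage design in which the second network receives the input image concatenated with the first stage's organ probability maps. The paper's actual FCN backbone, organ set, and fusion details are not specified in the abstract, so the module names and toy stages here are hypothetical.

```python
import torch
import torch.nn as nn

class StackedSegNet(nn.Module):
    """Sketch of a stacked segmentation design: stage 1 segments the
    surrounding organs, and stage 2 receives the image concatenated with
    stage-1 probability maps, so organ position and shape information
    constrains the target-area prediction."""
    def __init__(self, stage1: nn.Module, stage2: nn.Module):
        super().__init__()
        self.stage1 = stage1  # image -> organ probability maps
        self.stage2 = stage2  # image + organ maps -> target area

    def forward(self, image):
        organ_maps = torch.softmax(self.stage1(image), dim=1)
        fused = torch.cat([image, organ_maps], dim=1)  # fuse spatial cues
        return self.stage2(fused)

# Example with toy one-layer stages (2 organ classes, 1 target class)
s1 = nn.Conv2d(1, 2, kernel_size=3, padding=1)
s2 = nn.Conv2d(1 + 2, 1, kernel_size=3, padding=1)
net = StackedSegNet(s1, s2)
print(net(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```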
With the development of artificial intelligence (AI) technology, great progress has been made in its application to the medical field. While international journals have published a large number of papers on the application of AI in epilepsy, there is a dearth of such studies in Chinese domestic journals. To understand the global research progress and development trends of AI applications in epilepsy, a total of 895 papers on this topic indexed in the Web of Science Core Collection and published before December 31, 2022 were selected as the study sample. The annual number of papers and their citation counts, the most productive authors, institutions, and countries, and their cooperative relationships were analyzed, and the research hotspots and future trends in this field were explored using bibliometric and related methods. The results showed that before 2016 the annual number of papers on AI in epilepsy grew slowly, while after 2017 the number of publications increased rapidly. The United States had the largest number of papers (n = 273), followed by China (n = 195). The institution with the most papers was the University of London (n = 36), while Capital Medical University had the most in China (n = 23). The most productive author was Gregory Worrell (n = 14), and the most productive scholar in China was Guo Jiayan from Xiamen University (n = 7). The application of machine learning to the diagnosis and treatment of epilepsy was an early research focus in this field, while seizure prediction models based on EEG feature extraction, deep learning (especially convolutional neural networks) for epilepsy diagnosis, and cloud computing for epilepsy healthcare are the current research priorities. AI-based EEG feature extraction, the application of deep learning to the diagnosis and treatment of epilepsy, and the Internet of Things for epilepsy health-related problems are likely future research directions in this field.
With the advancement and development of computer technology, medical decision-making systems based on artificial intelligence (AI) have been widely applied in clinical practice. In the perioperative period of cardiovascular surgery, AI can be applied to preoperative diagnosis as well as intraoperative and postoperative risk management. This article introduces the application and development of AI during the perioperative period of cardiovascular surgery, covering preoperative auxiliary diagnosis, intraoperative risk management, postoperative management, and whole-process auxiliary decision-making. It also explores the challenges and limitations of AI applications and looks ahead to future development directions.
In recent years, epileptic seizure detection based on the electroencephalogram (EEG) has attracted widespread attention in academia. However, seizure data are difficult to collect, and models easily overfit when training data are scarce. To address this problem, this paper took the CHB-MIT epilepsy EEG dataset from Boston Children's Hospital as the research object and applied the wavelet transform for data augmentation by setting different wavelet scale factors. In addition, by combining deep learning, ensemble learning, transfer learning, and other methods, an epilepsy detection method with high accuracy for specific epilepsy patients was proposed under the condition of insufficient training samples. In the experiments, wavelet scale factors of 2, 4, and 8 were set for comparison and verification. When the wavelet scale factor was 8, the average accuracy, average sensitivity, and average specificity were 95.47%, 93.89%, and 96.48%, respectively. Comparative experiments against recent related studies verified the advantages of the proposed method. Our results may provide a reference for the clinical application of epilepsy detection.
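To illustrate the augmentation step, the following sketch uses PyWavelets to generate surrogate representations of an EEG segment via the continuous wavelet transform at the scale factors reported above (2, 4, and 8). The function name and the choice of the Morlet wavelet are assumptions; the paper's exact augmentation pipeline is not given in the abstract.

```python
import numpy as np
import pywt  # PyWavelets

def augment_with_cwt(eeg_segment, scales=(2, 4, 8), wavelet="morl"):
    """Generate additional training samples from one EEG segment by
    taking its continuous wavelet transform at several scale factors.
    Each scale yields one surrogate signal of the same length.
    `eeg_segment` is assumed to be a 1-D array of raw samples."""
    augmented = []
    for s in scales:
        coeffs, _ = pywt.cwt(eeg_segment, [s], wavelet)
        augmented.append(coeffs[0])  # CWT coefficients at this scale
    return np.stack(augmented)  # shape: (len(scales), n_samples)

# Example: three surrogate signals from one 256-sample segment
segment = np.random.randn(256)
print(augment_with_cwt(segment).shape)  # (3, 256)
```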
Magnetic resonance imaging (MRI) can produce multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrast images are not scanned, or the quality of the acquired images cannot meet diagnostic requirements, owing to difficulties with patient cooperation or limitations of the scanning conditions. Image synthesis techniques have become a way to compensate for such missing images. In recent years, deep learning has been widely used in the field of MRI synthesis. In this paper, a synthesis network based on multi-modal fusion is proposed: it first uses a feature encoder to encode the features of multiple unimodal images separately, then fuses the features of the different modalities through a feature fusion module, and finally generates the target-modality image. The similarity between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function defined over both the spatial domain and the k-space domain. After experimental validation and quantitative comparison, the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce patients' MRI scan time and address the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.
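A minimal sketch of what a combined spatial/k-space loss could look like is given below (PyTorch). The paper's dynamic weighting scheme is not detailed in the abstract, so a single blending weight `alpha` stands in for it, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred, target, alpha=0.5):
    """Hypothetical combined loss over the spatial and k-space domains.
    `alpha` stands in for the dynamic weight; a fixed blend is shown."""
    # Spatial-domain term: pixel-wise L1 between predicted and target images
    spatial = F.l1_loss(pred, target)
    # k-space term: L1 between magnitudes of the 2-D Fourier transforms
    k_pred = torch.fft.fft2(pred)
    k_target = torch.fft.fft2(target)
    kspace = F.l1_loss(torch.abs(k_pred), torch.abs(k_target))
    return alpha * spatial + (1.0 - alpha) * kspace

# Example: loss between a predicted and a reference 2-D image batch
pred = torch.rand(1, 1, 64, 64)
target = torch.rand(1, 1, 64, 64)
print(combined_loss(pred, target).item())
```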
Identification of the molecular subtypes of malignant tumors plays a vital role in individualized diagnosis, personalized treatment, and prognosis prediction for cancer patients. The continuous improvement of comprehensive tumor genomics databases and ongoing breakthroughs in deep learning have driven further advances in computer-aided tumor classification. Although existing classification methods based on the Gene Expression Omnibus (GEO) database take the complexity of cancer molecular classification into account, they ignore the internal correlation and synergy of genes. To solve this problem, we propose a multi-layer graph convolutional network model combined with a hierarchical attention network for breast cancer subtype classification. The model constructs graph-embedded datasets of patients' genes and forms a new end-to-end multi-class classification model that can effectively recognize the molecular subtypes of breast cancer. Extensive experiments demonstrate the good performance of the new model in breast cancer subtype classification. Compared with the original graph convolutional neural network and two mainstream graph neural network classification algorithms, the new model has remarkable advantages. In seven-class classification, the accuracy, weighted F1-score, weighted recall, and weighted precision of our model reached 0.8517, 0.8235, 0.8517, and 0.7936, respectively; in four-class classification, the corresponding results were 0.9285, 0.8949, 0.9285, and 0.8650. In addition, compared with the latest breast cancer subtype classification algorithms, the proposed method achieved the highest classification accuracy. In summary, the proposed model may serve as an auxiliary diagnostic technology, providing a reliable option for the precise classification of breast cancer subtypes and laying a theoretical foundation for computer-aided tumor classification.
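For readers unfamiliar with graph convolution, the sketch below shows the standard building block (symmetrically normalized adjacency, in the style of Kipf and Welling) that a multi-layer GCN of this kind stacks. The paper's hierarchical attention module and graph construction are omitted, and all names and toy dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Minimal graph convolution layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Add self-loops, then symmetrically normalize the adjacency
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.linear(a_norm @ x))

# Example: 30 gene nodes with 16 expression features each
x = torch.randn(30, 16)
adj = (torch.rand(30, 30) > 0.9).float()
adj = ((adj + adj.t()) > 0).float()  # symmetric gene-interaction graph
layer = GCNLayer(16, 8)
print(layer(x, adj).shape)  # torch.Size([30, 8])
```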
Magnetic resonance imaging (MRI) is an important medical imaging method whose major limitation is the long scan time inherent in its imaging mechanism, which increases patients' costs and waiting times. Parallel imaging (PI), compressed sensing (CS), and other reconstruction technologies have been proposed to accelerate image acquisition. However, the image quality achieved by PI and CS depends on the reconstruction algorithms, which remain unsatisfactory in terms of both image quality and reconstruction speed. In recent years, image reconstruction based on generative adversarial networks (GAN) has become a research hotspot in the field of magnetic resonance imaging because of its excellent performance. In this review, we summarize recent developments in the application of GANs to MRI reconstruction for both single- and multi-modality acceleration, hoping to provide a useful reference for interested researchers. In addition, we analyze the characteristics and limitations of existing technologies and forecast some development trends in this field.
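As a rough illustration of how GAN-based reconstruction is typically trained (not any specific model from the literature surveyed here), the sketch below combines a non-saturating adversarial term with a pixel-wise content term against the fully sampled reference image. All names are illustrative.

```python
import torch
import torch.nn.functional as F

def gan_recon_losses(gen_img, full_img, d_real, d_fake, lam=10.0):
    """Typical loss terms for GAN-based MRI reconstruction: an
    adversarial term pushing reconstructions toward the real-image
    distribution, plus a pixel-wise content term against the fully
    sampled reference. `d_real`/`d_fake` are discriminator logits on
    real and reconstructed images. In a real training loop, `d_fake`
    for the discriminator step is computed on `gen_img.detach()`."""
    d_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    g_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # Content loss anchors the generator to the ground-truth image
    g_loss = g_adv + lam * F.l1_loss(gen_img, full_img)
    return g_loss, d_loss
```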