The pace of modern life is accelerating, daily pressures are mounting, and long-accumulated mental fatigue poses a threat to health. This paper proposes a method that identifies mental fatigue states by analyzing physiological signals and parameters, helping to support a healthy lifestyle. Specifically, it presents a new electrocardiogram (ECG)-based mental fatigue recognition method built on a convolutional neural network and a long short-term memory network. First, the convolutional layers of a one-dimensional convolutional neural network extract local features, the pooling layers retain the key information, and redundant data is removed. The extracted features are then fed into the long short-term memory model to further fuse the ECG features. Finally, a fully connected layer integrates the key information to achieve accurate recognition of the mental fatigue state. The results show that, compared with traditional machine learning algorithms, the proposed method significantly improves the accuracy of mental fatigue recognition, reaching 96.3%, and provides a reliable basis for the early warning and assessment of mental fatigue.
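The convolution-then-pooling stage described above can be illustrated with a minimal numpy sketch; the kernel values and signal are hypothetical toy inputs, not the paper's trained network:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution, as in a CNN feature-extraction layer."""
    n = (len(signal) - len(kernel)) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + len(kernel)], kernel)
                     for i in range(n)])

def max_pool1d(x, size=2):
    """Max pooling keeps the strongest local response and drops redundancy."""
    n = len(x) // size
    return np.array([x[i * size:(i + 1) * size].max() for i in range(n)])

# Toy ECG-like input and a hypothetical learned kernel
ecg = np.sin(np.linspace(0, 8 * np.pi, 64))
kernel = np.array([1.0, 0.0, -1.0])                        # edge-detecting filter
features = max_pool1d(np.maximum(conv1d(ecg, kernel), 0))  # conv -> ReLU -> pool
```

In the full method, such pooled feature maps would feed the LSTM stage rather than a classifier directly.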
Cardiovascular disease (CVD) is one of the leading causes of death worldwide, and heart sound classification plays a key role in its early detection. Because the difference between normal and abnormal heart sounds is subtle, this paper proposes a heart sound feature extraction method based on bispectral analysis, combined with a convolutional neural network (CNN), to improve classification accuracy. Bispectral analysis effectively suppresses Gaussian noise and extracts heart sound features without relying on accurate segmentation of the heart sound signal; coupled with the strong classification performance of the CNN, the model achieves accurate heart sound classification. Under the same data and experimental conditions, the proposed algorithm achieves an accuracy of 0.910, a sensitivity of 0.884 and a specificity of 0.940. Compared with other heart sound classification algorithms, it shows a significant improvement together with strong robustness and generalization ability, and is therefore promising for the auxiliary detection of congenital heart disease.
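A minimal sketch of a direct (FFT-based) bispectrum estimate follows; the frame length and signal are illustrative assumptions, and a practical implementation would average over many frames:

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct bispectrum estimate of one frame:
    B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)).
    Gaussian noise has vanishing third-order statistics, so it is suppressed."""
    X = np.fft.fft(x, nfft)
    half = nfft // 2
    B = np.zeros((half, half), dtype=complex)
    for i in range(half):
        for j in range(half):
            B[i, j] = X[i] * X[j] * np.conj(X[(i + j) % nfft])
    return np.abs(B)

rng = np.random.default_rng(0)
t = np.arange(64)
frame = np.cos(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(64)
feat = bispectrum(frame)   # 2-D magnitude map usable as a CNN input
```

The resulting 2-D magnitude map is what would be fed to the CNN classifier in place of raw audio.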
This paper proposes a method for detecting pulmonary nodules in low-dose computed tomography (CT) images with a two-dimensional convolutional neural network under careful image preprocessing. First, the CT images were preprocessed by clipping, normalization and related algorithms. The positive samples were then augmented to balance the numbers of positive and negative samples for the convolutional neural network. Finally, the best-performing model was obtained by training the two-dimensional convolutional neural network and continually optimizing the network parameters. Evaluated on the Lung Nodule Analysis 2016 (LUNA16) dataset with five-fold cross validation, the model achieved an average accuracy of 92.3%, a sensitivity of 92.1% and a specificity of 92.6%, improving on all indices compared with other existing automatic detection and classification methods for pulmonary nodules. A subsequent model perturbation experiment showed that the model is stable and has a certain anti-interference ability. It can therefore effectively identify pulmonary nodules and provide auxiliary diagnostic advice for early lung cancer screening.
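The clipping-and-normalization preprocessing step can be sketched as below; the Hounsfield window limits are a common lung-window assumption, not necessarily the values used in the paper:

```python
import numpy as np

# Hypothetical lung window in Hounsfield units; the paper's limits may differ.
HU_MIN, HU_MAX = -1000.0, 400.0

def preprocess_ct(slice_hu):
    """Clip CT intensities to the window, then normalize to [0, 1]."""
    clipped = np.clip(slice_hu, HU_MIN, HU_MAX)
    return (clipped - HU_MIN) / (HU_MAX - HU_MIN)

slice_hu = np.array([[-1200.0, -500.0],
                     [    0.0,  600.0]])
out = preprocess_ct(slice_hu)
```

Normalizing every slice to a fixed range keeps the CNN's input distribution consistent across scanners and patients.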
Unlike previous automatic segmentation networks that treat the radiotherapy target as an independent region, this paper proposes a stacked neural network that uses the position and shape information of the organs surrounding the target to regulate the predicted target's shape and position. By stacking multiple networks and fusing spatial position information, the segmentation accuracy on medical images is improved. Taking Graves' ophthalmopathy as an example, the left and right radiotherapy target areas were segmented by the stacked neural network built on a fully convolutional neural network, and the volume Dice similarity coefficient (DSC) and bidirectional Hausdorff distance (HD) were calculated against the target areas manually delineated by the physician. Compared with the fully convolutional neural network alone, the stacked network increased the volume DSC on the left and right sides by 1.7% and 3.4% respectively, while the bidirectional HD on both sides decreased by 0.6. The results show that the stacked neural network improves the agreement between the automatic segmentation and the physician's delineation while reducing segmentation errors in small regions, and can effectively improve the accuracy of automatic delineation of the radiotherapy target area in Graves' ophthalmopathy.
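The two evaluation metrics used above have standard definitions; a minimal sketch on toy binary masks (the masks are illustrative, not clinical data):

```python
import numpy as np

def dice(a, b):
    """Volume Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Bidirectional Hausdorff distance between two point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1   # toy prediction
gt   = np.zeros((4, 4), int); gt[1:3, 1:4] = 1     # toy ground truth
dsc = dice(pred, gt)                               # 2*4 / (4 + 6) = 0.8
hd = hausdorff(np.argwhere(pred), np.argwhere(gt))
```

DSC rewards volume overlap, while HD penalizes the worst boundary mismatch, so the two together capture both global and local segmentation quality.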
Fetal electrocardiogram (ECG) signals provide important clinical information for the early diagnosis of and intervention in fetal abnormalities. This paper proposes a new method for fetal ECG signal extraction and analysis. First, an improved fast independent component analysis method is combined with a singular value decomposition algorithm to extract high-quality fetal ECG signals and resolve the problem of missing waveforms. Second, a novel convolutional neural network model is applied to identify the QRS complexes of the fetal ECG and effectively resolve the waveform overlap problem. Together, these steps achieve high-quality extraction of fetal ECG signals and intelligent recognition of fetal QRS complexes. The proposed method was validated on data from the PhysioNet (Research Resource for Complex Physiologic Signals) Computing in Cardiology Challenge 2013 database. The results show that the extraction algorithm achieves an average sensitivity of 98.21% and an average positive predictive value of 99.52%, while the QRS complex recognition algorithm achieves an average sensitivity of 94.14% and an average positive predictive value of 95.80%, outperforming previously reported results. The proposed algorithm and model therefore have practical significance and may provide a theoretical basis for clinical decision making in the future.
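The SVD step can be illustrated with a rank-truncation sketch in the style of singular spectrum analysis; the window length, rank and test signal are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def svd_denoise(x, window=16, rank=2):
    """Build a Hankel (trajectory) matrix, keep only the dominant singular
    components, and reconstruct by anti-diagonal averaging."""
    n = len(x)
    H = np.array([x[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(Hr.shape[0]):        # average the overlapping windows back
        out[i:i + window] += Hr[i]
        cnt[i:i + window] += 1
    return out / cnt

t = np.linspace(0, 1, 128)
clean = np.sin(2 * np.pi * 4 * t)       # stand-in for a periodic ECG component
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.standard_normal(128)
recovered = svd_denoise(noisy)
```

A pure sinusoid occupies only two singular components of its trajectory matrix, which is why a low-rank truncation can separate it from broadband noise.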
Deep learning methods can automatically analyze electrocardiogram (ECG) data and rapidly classify arrhythmias, which is of significant clinical value for early arrhythmia screening. How to select arrhythmia features effectively under limited supervision from abnormal samples is an urgent issue. This paper proposes an arrhythmia classification algorithm based on an adaptive multi-feature fusion network. The algorithm extracts RR-interval features from the ECG signal, uses a one-dimensional convolutional neural network (1D-CNN) to extract time-domain deep features, and uses Mel-frequency cepstral coefficients (MFCC) with a two-dimensional convolutional neural network (2D-CNN) to extract frequency-domain deep features; the features are then fused with an adaptive weighting strategy for arrhythmia classification. The algorithm was evaluated under the inter-patient paradigm on the arrhythmia database jointly developed by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH). Experimental results show an average precision of 75.2%, an average recall of 70.1% and an average F1-score of 71.3%, demonstrating high classification accuracy and the potential to provide algorithmic support for arrhythmia classification in wearable devices.
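RR-interval features of the kind mentioned above can be sketched as follows; the specific features (preceding, following and local-average RR) and the R-peak positions are common choices assumed for illustration, not necessarily the paper's exact set:

```python
import numpy as np

def rr_features(r_peaks, fs=360.0):
    """Per-beat RR features: preceding RR, following RR, and a local
    3-beat average RR, all in seconds."""
    rr = np.diff(r_peaks) / fs
    pre_rr, post_rr = rr[:-1], rr[1:]
    local_rr = np.convolve(rr, np.ones(3) / 3, mode="same")[1:]
    return pre_rr, post_rr, local_rr

# Hypothetical R-peak sample indices at 360 Hz (the MIT-BIH sampling rate)
peaks = np.array([0, 360, 720, 1080, 1476])
pre, post, local = rr_features(peaks)
```

Such handcrafted interval features are cheap to compute, which is why they pair well with deep features in a wearable setting.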
This study addresses the limited gesture recognition performance caused by the susceptibility of time- and frequency-domain features extracted from surface electromyography signals to interference, as well as the low recognition rates of conventional classifiers. A novel gesture recognition approach is proposed that transforms surface electromyography signals into grayscale images and uses convolutional neural networks as classifiers. The method first segments the active portions of the surface electromyography signals with an energy-threshold approach. The temporal voltage values are then linearly scaled and power-transformed to generate grayscale images for the convolutional neural network input. A multi-view convolutional neural network model is subsequently constructed, using asymmetric convolution kernels of sizes 1 × n and 3 × n within the same layer to enhance the representation of the surface electromyography signals. Experimental results show recognition accuracies of 98.11% for 13 gestures and 98.75% for 12 multi-finger movements, significantly outperforming existing machine learning approaches. The proposed method, based on surface electromyography grayscale images and multi-view convolutional neural networks, is simple and efficient, substantially improves recognition accuracy, and shows strong potential for practical application.
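The linear-scaling-plus-power-transform step can be sketched as below; the gamma value, channel count and random input are illustrative assumptions:

```python
import numpy as np

def emg_to_gray(segment, gamma=0.5):
    """Map sEMG voltages to [0, 255] grayscale: linear scaling to [0, 1],
    then a power (gamma) transform to emphasize low-amplitude detail."""
    lo, hi = segment.min(), segment.max()
    scaled = (segment - lo) / (hi - lo)
    return np.round(255 * scaled ** gamma).astype(np.uint8)

rng = np.random.default_rng(2)
segment = rng.standard_normal((8, 24))   # 8 channels x 24 time samples
img = emg_to_gray(segment)               # grayscale image for the CNN input
```

Encoding the multichannel signal as an image lets standard 2-D convolution kernels learn cross-channel as well as temporal patterns.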
This study optimizes surface electromyography-based gesture recognition, focusing on the impact of muscle fatigue on recognition performance. An innovative real-time analysis algorithm is proposed that extracts muscle fatigue features in real time and fuses them into the gesture recognition process. Based on self-collected data, the paper applies convolutional neural networks, long short-term memory networks and related algorithms to analyze muscle fatigue feature extraction in depth, and compares the impact of muscle fatigue features on the performance of surface electromyography-based gesture recognition. The results show that by fusing muscle fatigue features in real time, the proposed algorithm improves gesture recognition accuracy at different fatigue levels, as well as the average recognition accuracy across subjects. In summary, the algorithm improves the adaptability and robustness of the gesture recognition system, and the study offers new insights for the development of gesture recognition technology in biomedical engineering.
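One widely used real-time fatigue indicator is the median frequency of the sEMG power spectrum, which shifts downward as a muscle fatigues; this is a standard feature chosen here for illustration, not necessarily the exact feature the paper fuses:

```python
import numpy as np

def median_frequency(x, fs=1000.0):
    """Median frequency of the power spectrum: the frequency that splits
    the spectral power in half. It decreases with muscle fatigue."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(spec)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

t = np.arange(1000) / 1000.0
fresh    = np.sin(2 * np.pi * 120 * t)   # higher-frequency content
fatigued = np.sin(2 * np.pi * 60 * t)    # spectrum shifted downward
```

Because it is a single scalar per window, such a feature can be fused into a recognition pipeline with negligible latency.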
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are subtle and easily overlooked, making early screening particularly important. The cup-to-disc ratio is an important indicator in clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to computing it. This paper proposes a fully convolutional neural network with a residual multi-scale convolution module for optic cup and disc segmentation. First, the fundus image was contrast-enhanced and a polar transformation was applied. W-Net was then used as the backbone network, with the standard convolution units replaced by residual multi-scale fully convolutional modules, an image pyramid added at the input to construct multi-scale inputs, and side-output layers used as early classifiers to generate local predictions. Finally, a new multi-label loss function was proposed to guide the segmentation. On the REFUGE dataset, the mean intersection over union for optic cup and disc segmentation reached 0.9040 and 0.9553 respectively, with overlap errors of 0.1780 and 0.0665. The results show that the method achieves joint segmentation of the cup and disc while effectively improving segmentation accuracy, which could help promote large-scale early glaucoma screening.
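The polar transformation step can be sketched with nearest-neighbour sampling around an assumed disc center; the grid sizes and synthetic image are illustrative, and a real pipeline would estimate the center and interpolate more carefully:

```python
import numpy as np

def polar_transform(img, center, n_r=32, n_theta=64):
    """Resample a Cartesian image on a (radius, angle) grid around the
    optic disc center, so the roughly circular cup/disc become bands."""
    h, w = img.shape
    r_max = min(center[0], center[1], h - 1 - center[0], w - 1 - center[1])
    rs = np.linspace(0, r_max, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rows = np.clip(np.round(center[0] + rs[:, None] * np.sin(thetas)), 0, h - 1)
    cols = np.clip(np.round(center[1] + rs[:, None] * np.cos(thetas)), 0, w - 1)
    return img[rows.astype(int), cols.astype(int)]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # synthetic fundus stand-in
polar = polar_transform(img, center=(32, 32))
```

In polar coordinates the circular cup and disc boundaries become near-horizontal bands, which simplifies the segmentation task for the network.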
The counting and recognition of white blood cells in blood smear images play an important role in diagnosing blood diseases, including leukemia. Because manual examination is easily disturbed by many factors, an automatic leukocyte analysis system is needed to provide physicians with auxiliary diagnosis, and blood leukocyte segmentation is the basis of such automatic analysis. This paper improves the U-Net model and proposes a leukocyte image segmentation algorithm based on a dual-path network and atrous spatial pyramid pooling. First, the dual-path network was introduced into the feature encoder to extract multi-scale leukocyte features, and atrous spatial pyramid pooling was used to enhance the network's feature extraction ability. A feature decoder composed of convolution and deconvolution layers then restored the segmented target to the original image size, achieving pixel-level segmentation of blood leukocytes. Finally, qualitative and quantitative experiments on three leukocyte datasets verified the effectiveness of the algorithm. Compared with other representative algorithms, the proposed segmentation algorithm produced better results, with mIoU values above 0.97. This method is expected to aid the automatic auxiliary diagnosis of blood diseases in the future.