To monitor listeners' emotional state changes in real time and adjust the music playlist accordingly, this paper proposes an algorithmic framework for an electroencephalogram (EEG) driven personalized affective music recommendation system based on portable dry electrodes, together with a preliminary implementation on the Android platform. Taking the two-dimensional arousal-valence emotional model as a reference, we mapped the EEG data and the corresponding seed songs onto the emotional coordinate quadrants to establish a matching relationship. Mel-frequency cepstral coefficients were then used to evaluate the similarity between the seed songs and the songs in the music library. Finally, during playback, the EEG data were used to identify the listener's emotional state, and the corresponding song playlist was played and adjusted according to the established matching relationship.
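To make the matching relationship concrete, the sketch below shows one way the quadrant mapping and MFCC-based similarity ranking could be wired together in Python. The librosa calls, the cosine-similarity ranking, and all function names are illustrative assumptions; the abstract does not specify the authors' implementation.

```python
# Minimal sketch of the quadrant-matching idea described above.
# All names and the cosine-similarity ranking are assumptions.
import numpy as np
import librosa

def va_quadrant(valence: float, arousal: float) -> int:
    """Map a (valence, arousal) point to one of four emotional quadrants.
    Q1: happy/excited, Q2: angry/tense, Q3: sad/bored, Q4: calm/relaxed."""
    if valence >= 0 and arousal >= 0:
        return 1
    if valence < 0 and arousal >= 0:
        return 2
    if valence < 0 and arousal < 0:
        return 3
    return 4

def mfcc_profile(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Summarize a song by the mean of its MFCC frames (illustrative choice)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def rank_library(seed_path: str, library_paths: list[str]) -> list[str]:
    """Rank library songs by cosine similarity of their MFCC profiles to
    the seed song matched to the listener's current emotional quadrant."""
    seed = mfcc_profile(seed_path)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return sorted(library_paths,
                  key=lambda p: cos(seed, mfcc_profile(p)),
                  reverse=True)
```

In this reading, each quadrant stores a seed song; when the EEG classifier reports a new quadrant, `rank_library` rebuilds the playlist from the songs acoustically closest to that quadrant's seed.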
Automatic classification of different types of cough plays an important role in clinical practice. Previous research on cough classification and recognition has usually used traditional Mel-frequency cepstral coefficients (MFCC), which extract features mainly from the low-frequency band, as the feature representation. In this paper, by analyzing the spectral energy distributions of dry and wet coughs, we find that the spectral difference between the two types lies mainly in the middle- and high-frequency bands. To better reflect this difference, an improved method for extracting reverse MFCC is proposed. The method adopts a reverse Mel filter-bank, in which filters are allocated on a reversed Mel scale, and further improves it by placing filters only in the frequency band with high spectral energy. As a result, features are extracted mainly from the band where the two cough types show both high spectral energy and a clear difference. The detailed procedure for obtaining the improved reverse MFCC is presented, and hidden Markov models trained on 60 dry coughs and 60 wet coughs are used as the classification model. Classification experiments on 120 dry coughs and 85 wet coughs show that, compared with traditional MFCC, the proposed method achieves better classification performance, raising the total classification accuracy from 89.76% to 93.66%.
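The following is a minimal sketch of how a band-limited reverse Mel filter-bank could be constructed. The mirroring trick (ordinary Mel spacing flipped about the band so resolution tightens toward the high end) and the band limits `f_lo` and `f_hi` are assumptions for illustration; the paper's exact construction may differ.

```python
# A sketch of a reverse Mel filter-bank restricted to a high-energy band.
# The mirrored-Mel spacing and band limits are illustrative assumptions.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def reverse_mel_filterbank(n_filters, n_fft, sr, f_lo, f_hi):
    """Triangular filters whose centers follow a reversed (mirrored) Mel
    scale, so frequency resolution is densest near f_hi instead of 0 Hz.
    Filters are placed only inside [f_lo, f_hi]; f_hi must not exceed sr/2."""
    # Ordinary Mel-spaced points over the band (dense near f_lo)...
    mel_pts = np.linspace(hz_to_mel(f_lo), hz_to_mel(f_hi), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    # ...then mirrored about the band, so spacing tightens near f_hi.
    hz_pts = (f_lo + f_hi) - hz_pts[::-1]
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising edge of triangle
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling edge of triangle
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

# Example: 26 filters over an assumed high-energy band of 1-8 kHz at 16 kHz.
fb = reverse_mel_filterbank(26, n_fft=512, sr=16000, f_lo=1000.0, f_hi=8000.0)
```

Applying this filter-bank to the power spectrum, taking logs, and applying the DCT would then yield the reverse cepstral coefficients, in the same way the standard Mel filter-bank yields MFCC.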
Heart sounds are critical for early detection of cardiovascular diseases, yet existing studies mostly rely on traditional signal segmentation, feature extraction, and shallow classifiers, which often fail to capture the dynamic and nonlinear characteristics of heart sounds, limit the recognition of complex heart sound patterns, and are sensitive to data imbalance, resulting in poor classification performance. To address these limitations, this study proposes a novel heart sound classification method that integrates improved Mel-frequency cepstral coefficient (MFCC) feature extraction with a convolutional neural network (CNN) and a deep Transformer model. In the preprocessing stage, a Butterworth filter is applied for denoising, and the continuous heart sound signals are processed directly, without segmentation into cardiac cycles, allowing the improved MFCC features to better capture dynamic characteristics. These features are fed into a CNN for feature learning, followed by global average pooling (GAP) to reduce model complexity and mitigate overfitting. A deep Transformer module then further extracts and fuses the features to complete the classification. To handle data imbalance, the model uses focal loss as its objective function. Experiments on two public datasets demonstrate that the proposed method performs effectively in both binary and multi-class classification tasks. The approach enables efficient classification of continuous heart sound signals, provides a reference methodology for future research on heart-sound-based disease classification, and supports the development of wearable devices and home monitoring systems.
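As a rough illustration of the pipeline named above (CNN feature learning, GAP, a Transformer encoder, and focal loss for imbalance), a minimal PyTorch sketch follows. All layer sizes, the choice to pool over the MFCC axis before the Transformer, and the alpha/gamma values are assumptions, not the authors' architecture or settings.

```python
# Sketch of the CNN -> GAP -> Transformer pipeline with focal loss.
# Layer sizes, the pooling axis, and alpha/gamma are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), batch-averaged.
    Down-weights well-classified examples so minority classes matter more."""
    log_p = F.log_softmax(logits, dim=-1)                       # (B, C)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p_t
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

class HeartSoundNet(nn.Module):
    def __init__(self, n_classes=2, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(                   # learn local time-frequency patterns
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, padding=1), nn.ReLU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                         batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)
        self.fc = nn.Linear(d_model, n_classes)

    def forward(self, x):                           # x: (B, 1, n_mfcc, T)
        h = self.cnn(x)                             # (B, d_model, n_mfcc, T)
        h = h.mean(dim=2)                           # GAP over the MFCC axis
        h = self.transformer(h.transpose(1, 2))     # (B, T, d_model)
        return self.fc(h.mean(dim=1))               # pool over time, classify

# Usage on a dummy batch of improved-MFCC "images" (13 coefficients, 200 frames):
model = HeartSoundNet()
x = torch.randn(8, 1, 13, 200)
y = torch.randint(0, 2, (8,))
loss = focal_loss(model(x), y)
```

Pooling over the coefficient axis (rather than collapsing everything) is one way to reconcile GAP with a downstream Transformer, since it leaves a time sequence for self-attention to fuse; the abstract does not state which axis the authors pool.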