The decoding of electroencephalogram (EEG) signals plays a central role in brain-computer interfaces (BCIs). However, processing physiological signals, and in particular decoding multi-channel EEG signals, remains challenging. Previous deep learning methods often relied on subject-dependent settings, requiring new users to complete complex calibration procedures before they could use BCI devices. We therefore propose a novel end-to-end deep learning model, MRMHNet, for motor imagery (MI) classification. First, a feature extraction block based on a multi-resolution convolutional neural network (MRCNN) extracts features in both the frequency and spatial domains. Second, a block based on multi-head attention (MHA) extracts global temporal information from these features. Finally, we validated the classification performance of our method on the OpenBMI dataset; the results show that our method achieves the highest accuracy in both subject-dependent and subject-independent settings. Specifically, in the subject-independent setting, our method achieves the highest accuracy and F1-score, at 73.74±13.35% and 73.33±14.87%, respectively. These results indicate that our method offers strong classification performance and high practical value in the BCI field.
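The abstract gives no implementation details, but the MHA block it describes is standard scaled dot-product multi-head self-attention applied over a temporal feature sequence. A minimal sketch follows; all shapes, names, and the random placeholder weights are assumptions for illustration, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Scaled dot-product multi-head self-attention over a
    (seq_len, d_model) feature sequence, e.g. CNN features per
    time step. Weights are random placeholders; a trained model
    would learn them."""
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    # Random projections stand in for learned Q/K/V/output weights.
    w_q, w_k, w_v, w_o = (
        rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        for _ in range(4)
    )

    def split(m):  # (seq, d_model) -> (heads, seq, d_head)
        return m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)
    # Attention weights over all time steps -> "global" temporal context.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head), axis=-1)
    out = (scores @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o

rng = np.random.default_rng(0)
feats = rng.standard_normal((125, 64))  # hypothetical: 125 time steps, 64 features
attended = multi_head_attention(feats, num_heads=8, rng=rng)
print(attended.shape)  # (125, 64)
```

Because every time step attends to every other, the output at each position summarizes the whole sequence, which is what lets such a block capture global temporal information on top of local convolutional features.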
Histobiology, development and regeneration, disease modeling, improvement of organ-transplantation technology, drug discovery and efficacy evaluation, and pathology research can all be carried out using parameters such as the morphology, size, and number of organoids. However, manually counting these organoid parameters is very time-consuming. We therefore propose a new semantic segmentation model, AttUneXt, for the segmentation of adenocarcinoid organoids. The model achieves a Dice coefficient of 0.9, showing that it can segment organoids rapidly and precisely.
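For reference, the reported Dice score is a standard overlap measure between a predicted and a ground-truth binary mask. A generic implementation, not tied to AttUneXt's code, looks like this:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for two binary masks.
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 of the 4 predicted pixels overlap the target.
pred = np.zeros((4, 4), dtype=int)
pred[0, :] = 1                      # predicted mask: top row
target = np.zeros((4, 4), dtype=int)
target[0, 1:4] = 1                  # ground truth: shifted by one pixel
target[1, 0] = 1
print(round(dice_coefficient(pred, target), 2))  # 0.75
```

A Dice of 1.0 means perfect overlap, so the reported 0.9 indicates close agreement between predicted and annotated organoid regions.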
Dentofacial malformation, also known as skeletal malocclusion, is a common clinical condition with an incidence of about 5%. Patients often present with occlusal, masticatory, and other functional disorders as well as facial deformities, which seriously affect physical and mental health and require combined orthognathic and orthodontic treatment [1][2]. This research developed W-ANN, a regression neural network based on an artificial neural network. First, three-dimensional cephalometric analysis quantifies the skeletal facial shape as input features. Next, ridge regression adds a regularization coefficient to the input features to minimize the deviation caused by multicollinearity. Finally, the model outputs relocation vectors for 10 landmarks for surgical planning. Model validation shows a coefficient of determination (R²-score) of 0.54 and a mean squared error (MSE) of 0.144, indicating that the constructed model has good prediction accuracy for the mapping between the input (three-dimensional cephalometric measurements) and the output (tooth-bone segment movement vectors).
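Ridge regression, the regularization step named above, has a simple closed form: w = (XᵀX + αI)⁻¹Xᵀy, where the αI term shrinks coefficients and stabilizes the solution when predictors are strongly correlated. A minimal sketch on synthetic collinear data (not the paper's cephalometric features):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y.
    The alpha*I term keeps the system well-conditioned even when
    columns of X are nearly collinear (multicollinearity)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
# Two nearly collinear predictors, mimicking measurements that
# co-vary across patients (a common situation in cephalometry).
x1 = rng.standard_normal(100)
X = np.column_stack([x1, x1 + 1e-3 * rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + 0.01 * rng.standard_normal(100)

w = ridge_fit(X, y, alpha=0.1)
print(w.shape)  # (2,)
```

With collinear predictors the individual coefficients are poorly determined, but the regularized solution stays numerically stable and the combined effect (here w[0] + w[1] ≈ 3) is recovered reliably, which is exactly why the regularization coefficient is added before fitting the landmark relocation model.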