torcheeg.models¶
Extensive reproductions of baseline methods.
Convolutional Neural Networks¶
A compact convolutional neural network (EEGNet).
Frequency Band Correlation Convolutional Neural Network (FBCCNN).
An Efficient Multi-view Convolutional Neural Network for Brain-Computer Interface.
Multi-Task Convolutional Neural Network (MT-CNN).
Spatio-temporal Network (STNet).
Continuous Convolutional Neural Network (CCNN).
Spatial-Spectral-Temporal based Attention 3D Dense Network (SST-EmotionNet) for EEG emotion recognition.
Recurrent Neural Networks¶
A simple but effective gate recurrent unit (GRU) network structure from the book by Zhang et al. For more details, please refer to the following information.
A simple but effective long short-term memory (LSTM) network structure from the book by Zhang et al. For more details, please refer to the following information.
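The GRU structure above can be illustrated with a single recurrent step. Below is a minimal NumPy sketch of the standard GRU gate equations applied to a chunk of multi-channel input; it is a generic formulation for illustration, not TorchEEG's implementation, and the shapes (32 electrodes, 64 hidden units, 128 time steps) are arbitrary example values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: x is the input at time t, h the previous hidden state."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde           # interpolate old and new state

rng = np.random.default_rng(0)
n_in, n_hid = 32, 64                           # e.g. 32 EEG electrodes per step
params = [rng.standard_normal(s) * 0.1
          for s in [(n_in, n_hid), (n_hid, n_hid)] * 3]
h = np.zeros(n_hid)
for t in range(128):                           # run over a 128-sample chunk
    h = gru_cell(rng.standard_normal(n_in), h, *params)
print(h.shape)  # (64,)
```

The final hidden state `h` summarizes the whole chunk and would typically be fed to a classification head.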
Graph Neural Networks¶
Dynamical Graph Convolutional Neural Networks (DGCNN).
Local-Global-Graph Networks (LGGNet).
Regularized Graph Neural Networks (RGNN).
A simple but effective graph isomorphism network (GIN) structure from the book by Zhang et al. For more details, please refer to the following information.
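Models in this family treat EEG electrodes as graph nodes and propagate features over an adjacency matrix (which DGCNN additionally learns during training). A minimal NumPy sketch of one generic GCN-style propagation step, with symmetric normalization and self-loops; the electrode and feature counts are illustrative assumptions, not TorchEEG's exact layer:

```python
import numpy as np

def graph_conv(X, A, W):
    """One propagation step: normalize A, aggregate neighbors, project, ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))        # degree^(-1/2)
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_electrodes, n_feats, n_hidden = 62, 5, 32   # e.g. 62 channels, 5 band features
X = rng.standard_normal((n_electrodes, n_feats))
A = rng.random((n_electrodes, n_electrodes))  # stand-in for a learned adjacency
A = (A + A.T) / 2                             # keep it symmetric
W = rng.standard_normal((n_feats, n_hidden)) * 0.1
H = graph_conv(X, A, W)
print(H.shape)  # (62, 32)
```

Each row of `H` is an electrode embedding that mixes information from connected electrodes.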
Transformer¶
A Simple and Effective Vision Transformer (SimpleViT).
Arjun et al. employ a variant of the Transformer, the Vision Transformer, to process EEG signals for emotion recognition.
A vanilla version of the Transformer adapted for EEG analysis.
The Vision Transformer.
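All of the Transformer variants above share one core operation: scaled dot-product self-attention over a sequence of token embeddings (e.g. spatial patches of an EEG grid). A minimal single-head NumPy sketch of that operation — generic attention for illustration, not the code of any specific TorchEEG model; the token and dimension counts are arbitrary:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token matrix X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # scaled similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d = 9, 16                # e.g. 9 spatial patches of an EEG grid
X = rng.standard_normal((n_tokens, d))
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
out, attn = self_attention(X, *Ws)
print(out.shape)  # (9, 16)
```

Each row of `attn` sums to 1 and tells how much each token attends to every other token.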
Generative Adversarial Network¶
TorchEEG provides an EEG feature generator based on CNN architecture and GAN for generating EEG grid representations of different frequency bands based on a given class label.
GAN-based methods formulate a zero-sum game between the generator and the discriminator.
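The zero-sum game can be made concrete through the minimax value V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))]: the discriminator adjusts its parameters to maximize V, while the generator adjusts its parameters to minimize the very same quantity. A toy NumPy sketch that just evaluates V with hypothetical stand-in networks (not TorchEEG's training code):

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x):
    """Toy discriminator: probability that each row of x is real (stand-in rule)."""
    return 1.0 / (1.0 + np.exp(-x.mean(axis=1)))

def generator(z):
    """Toy generator: maps noise to fake 'EEG feature' samples (stand-in)."""
    return np.tanh(z)

real = rng.standard_normal((8, 4)) + 1.0       # pretend real features
fake = generator(rng.standard_normal((8, 4)))  # generated samples

# The single shared objective of the zero-sum game: D ascends V, G descends V.
V = (np.log(discriminator(real)).mean()
     + np.log(1.0 - discriminator(fake)).mean())
print(float(V) < 0)  # True: both log-probability terms are negative
```

In real training the two players alternate gradient steps on V with opposite signs, which is exactly what makes the game zero-sum.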
Variational Autoencoder¶
The variational autoencoder consists of two parts, an encoder and a decoder.
TorchEEG provides an EEG feature encoder based on CNN architecture and CVAE for generating EEG grid representations of different frequency bands based on a given class label.
TorchEEG provides an EEG feature decoder based on CNN architecture and CVAE for generating EEG grid representations of different frequency bands based on a given class label.
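The encoder/decoder split can be sketched end to end: the encoder predicts a mean and log-variance for the latent, a sample is drawn with the reparameterization trick, and the decoder reconstructs the input; training minimizes reconstruction error plus a KL term. A minimal NumPy sketch of these generic VAE mechanics, using hypothetical linear maps in place of TorchEEG's CNN-based encoder and decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_lat = 16, 4                           # illustrative dimensions

# Hypothetical linear weights standing in for the CNN encoder/decoder.
W_mu, W_logvar = (rng.standard_normal((d_in, d_lat)) * 0.1 for _ in range(2))
W_dec = rng.standard_normal((d_lat, d_in)) * 0.1

x = rng.standard_normal(d_in)
mu, logvar = x @ W_mu, x @ W_logvar           # encoder: parameters of q(z|x)
eps = rng.standard_normal(d_lat)
z = mu + np.exp(0.5 * logvar) * eps           # reparameterization trick
x_hat = z @ W_dec                             # decoder: reconstruction

recon = ((x - x_hat) ** 2).mean()             # reconstruction term
kl = 0.5 * (np.exp(logvar) + mu**2 - 1 - logvar).sum()  # KL(q(z|x) || N(0, I))
loss = recon + kl
print(x_hat.shape)  # (16,)
```

In the conditional variant (CVAE), the class label is additionally concatenated to the encoder input and to `z`, which is how generation can be steered by a given label.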
Normalizing Flow¶
A flow-based model trains an invertible encoder that maps the input to a latent variable and constrains that latent variable to follow a standard normal distribution.
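Because the encoder is invertible, the model gets an exact log-likelihood from the change-of-variables formula, and sampling is just running the inverse map. A minimal NumPy sketch using the simplest possible invertible encoder, a one-layer elementwise affine map (a toy flow for illustration, not TorchEEG's model; `s` and `b` stand in for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
s, b = 2.0, 0.5                              # toy flow parameters

def encode(x):
    """Invertible map pushing x toward the standard normal latent space."""
    return (x - b) / s

def decode(z):
    """Exact inverse: new samples are generated by inverting the encoder."""
    return z * s + b

x = rng.standard_normal(1000) * s + b        # data this toy flow models exactly
z = encode(x)

# Change of variables: log p(x) = log N(z; 0, 1) + log |dz/dx|, with dz/dx = 1/s.
log_px = (-0.5 * z**2 - 0.5 * np.log(2 * np.pi)) - np.log(s)
print(np.allclose(decode(encode(x)), x))     # True: the map is invertible
```

Practical flows stack many such invertible layers (e.g. affine coupling layers) so the composed map can push complex data toward the standard normal while keeping the Jacobian tractable.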
Diffusion Models¶
The diffusion model consists of two processes: a forward process and a backward process.
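The two processes can be sketched concretely: the forward process gradually corrupts a clean sample with Gaussian noise, with the closed form x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, and the backward process is a learned denoiser that inverts those steps one at a time. A minimal NumPy sketch of the forward (noising) process under a generic DDPM-style linear beta schedule — an illustrative formulation, not TorchEEG's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)           # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)          # cumulative product: alpha-bar_t

def q_sample(x0, t, eps):
    """Forward process: jump straight from x_0 to x_t via the closed form."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

x0 = np.sin(np.linspace(0, 8 * np.pi, 128))  # toy 'EEG' signal
eps = rng.standard_normal(x0.shape)
x_mid = q_sample(x0, 100, eps)               # partially noised
x_end = q_sample(x0, T - 1, eps)             # almost pure noise

# alpha_bar decreases monotonically: nearly 1 at t=0 (signal intact),
# nearly 0 at t=T (signal destroyed) -- the backward process learns to undo this.
print(alpha_bar[0] > 0.99, alpha_bar[-1] < 1e-2)  # True True
```

Training fits a network to predict `eps` from `x_t` and `t`; generation then starts from pure noise and applies the learned backward steps from t = T down to 0.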