torcheeg.models¶
Convolutional Neural Networks¶
A compact convolutional neural network (EEGNet).

Frequency Band Correlation Convolutional Neural Network (FBCCNN).

An Efficient Multi-view Convolutional Neural Network for Brain-Computer Interface.

FBMSNet, a novel multiscale temporal convolutional neural network for MI decoding tasks, employs Mixed Conv to extract multiscale temporal features, which enhance intra-class compactness and improve inter-class separability under the joint supervision of the center loss.

Multi-Task Convolutional Neural Network (MT-CNN).

Spatio-temporal Network (STNet).

TSCeption.

Continuous Convolutional Neural Network (CCNN).

Spatial-Spectral-Temporal based Attention 3D Dense Network (SST-EmotionNet) for EEG emotion recognition.
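The center loss used by FBMSNet above penalizes the distance between each feature vector and the center of its class, tightening intra-class clusters. A framework-agnostic sketch (the array shapes and helper name are illustrative assumptions, not torcheeg's API):

```python
import numpy as np

def center_loss(features, labels, centers):
    # Mean squared distance between each feature vector and the
    # center of its class: L_c = 1/(2N) * sum_i ||x_i - c_{y_i}||^2.
    diffs = features - centers[labels]               # (N, D) residuals
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

# Two classes in a 2-D feature space, with centers at (0, 0) and (1, 1).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
features = np.array([[0.0, 0.0], [1.0, 1.0]])        # sitting exactly on the centers
labels = np.array([0, 1])
print(center_loss(features, labels, centers))        # 0.0: no intra-class spread
```

In joint supervision, this term is added (with a weighting coefficient) to the ordinary classification loss, so the network is rewarded both for separating classes and for compacting each class around its center.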
Recurrent Neural Networks¶
A simple but effective gate recurrent unit (GRU) network structure from the book of Zhang et al. For more details, please refer to the following information.

A simple but effective long-short term memory (LSTM) network structure from the book of Zhang et al. For more details, please refer to the following information.
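The GRU structure referenced above reduces to a single recurrence with an update gate, a reset gate, and a candidate state. A minimal single-cell sketch (weight names and sizes are illustrative, not torcheeg's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # One GRU step: update gate z, reset gate r, candidate state h_tilde.
    z = sigmoid(x @ Wz + h @ Uz)                 # update gate
    r = sigmoid(x @ Wr + h @ Ur)                 # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)     # candidate hidden state
    return (1 - z) * h + z * h_tilde             # blend old and new state

rng = np.random.default_rng(0)
D, H = 4, 3                                      # input and hidden sizes
Wz, Wr, Wh = (rng.standard_normal((D, H)) for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((H, H)) for _ in range(3))
h = np.zeros(H)
for t in range(5):                               # run a short input sequence
    h = gru_cell(rng.standard_normal(D), h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h.shape)                                   # (3,)
```

An EEG classifier in this family typically feeds the per-channel time series through such a recurrence and classifies from the final hidden state.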
Graph Neural Networks¶
Dynamical Graph Convolutional Neural Networks (DGCNN).

Local-Global-Graph Networks (LGGNet).

Regularized Graph Neural Networks (RGNN).

A simple but effective graph isomorphism network (GIN) structure from the book of Zhang et al. For more details, please refer to the following information.
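At their core, the graph convolutions behind models like DGCNN propagate electrode features through a normalized adjacency matrix. A generic single-layer sketch (not torcheeg code; the toy graph and weights are assumptions for illustration):

```python
import numpy as np

def gcn_layer(A, X, W):
    # One graph-convolution step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)                        # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy graph: 3 electrodes connected in a chain 0-1-2, 2 features per node.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
H = gcn_layer(A, X, W)
print(H.shape)                                   # (3, 2): new per-electrode features
```

Models such as DGCNN additionally learn the adjacency matrix itself, so the electrode connectivity is fitted to the task rather than fixed by scalp geometry.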
Transformer¶
A Simple and Effective Vision Transformer (SimpleViT).

Arjun et al. employ a variation of the Transformer, the Vision Transformer, to process EEG signals for emotion recognition.

A vanilla version of the transformer adapted for EEG analysis.

The Vision Transformer.

ATCNet: an attention-based temporal convolutional network for EEG-based motor imagery classification. For more details, please refer to the following information.
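All of the transformer and attention-based variants above are built on scaled dot-product attention. A minimal single-head sketch (generic, not torcheeg's implementation; shapes are illustrative):

```python
import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V for a single attention head.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)                 # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
T, d = 5, 8                                   # sequence length, head dimension
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))
out, w = attention(Q, K, V)
print(out.shape)                              # (5, 8)
```

For EEG, the "sequence" is typically a set of temporal or electrode patches, so the attention weights describe which time windows or channels the model attends to.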
Generative Adversarial Network¶
TorchEEG provides an EEG feature generator based on a CNN architecture and a GAN for generating EEG grid representations of different frequency bands based on a given class label.

TorchEEG provides an EEG feature generator based on a CNN architecture and a GAN for generating EEG grid representations of different frequency bands based on a given class label.

GAN-based methods formulate a zero-sum game between the generator and the discriminator.

GAN-based methods formulate a zero-sum game between the generator and the discriminator.

EEGFuseNet: a hybrid unsupervised network that can fuse high-dimensional EEG to obtain a deep feature characterization and generate similar signals.

EFDiscriminator: the discriminator that comes with EEGFuseNet, used to distinguish whether an input EEG signal is a fake one generated by EEGFuseNet or a real one collected from the human brain.
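The zero-sum game mentioned above can be written down directly: the discriminator minimizes a binary cross-entropy over real/fake labels, while the generator tries to make the discriminator call its samples real. A framework-agnostic sketch with assumed toy discriminator outputs (not torcheeg code):

```python
import numpy as np

def bce(p, y):
    # Binary cross-entropy between predicted probabilities p and labels y.
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Assumed discriminator outputs: probability that each sample is real.
d_real = np.array([0.9, 0.8])     # on real EEG samples
d_fake = np.array([0.2, 0.1])     # on generated samples

# Discriminator objective: push real -> 1 and fake -> 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
# Non-saturating generator objective: wants d_fake -> 1.
g_loss = bce(d_fake, np.ones_like(d_fake))
print(d_loss, g_loss)             # here the discriminator is winning, so g_loss is larger
```

Training alternates gradient steps on the two losses; the conditional variants above additionally feed the class label to both networks so samples can be generated per class.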
Variational Auto Encoder¶
The variational autoencoder consists of two parts, an encoder and a decoder.

The variational autoencoder consists of two parts, an encoder and a decoder.

TorchEEG provides an EEG feature encoder based on a CNN architecture and a CVAE for generating EEG grid representations of different frequency bands based on a given class label.

TorchEEG provides an EEG feature decoder based on a CNN architecture and a CVAE for generating EEG grid representations of different frequency bands based on a given class label.
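The encoder/decoder split above hinges on two pieces: the reparameterization trick, which lets gradients flow through the latent sample, and a KL term pulling the latent posterior toward a standard normal prior. A generic sketch (not torcheeg's CVAE; shapes are illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Sample z = mu + sigma * eps so gradients can flow through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

mu = np.zeros(4)
log_var = np.zeros(4)                            # sigma = 1
print(kl_to_standard_normal(mu, log_var))        # 0.0: posterior equals the prior
z = reparameterize(mu, log_var, np.random.default_rng(0))
print(z.shape)                                   # (4,)
```

The training objective is reconstruction loss plus this KL term; the conditional variant concatenates the class label to the encoder input and the latent code so that decoding can be steered per class.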
Normalization Flow¶
This class implements the normalizing flow model, which generates samples close to the true distribution.

This class implements a conditional normalizing flow model that generates samples of specified classes.
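A normalizing flow is composed of invertible transforms with tractable log-determinants; an affine coupling layer in the RealNVP style is the standard building block. A generic sketch (the conditioner here is an assumed toy function, not torcheeg code):

```python
import numpy as np

def coupling_forward(x, conditioner):
    # Affine coupling: keep x1 unchanged, transform x2 -> x2 * exp(s) + t,
    # where (s, t) depend only on x1. log|det J| = sum(s).
    x1, x2 = np.split(x, 2)
    s, t = conditioner(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t]), np.sum(s)

def coupling_inverse(y, conditioner):
    # Exact inverse: recompute (s, t) from the untouched half and undo the affine map.
    y1, y2 = np.split(y, 2)
    s, t = conditioner(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

def conditioner(x1):
    # Toy conditioner: s and t are fixed linear functions of x1.
    return 0.5 * x1, x1 - 1.0

x = np.array([0.3, -0.8, 1.2, 0.1])
y, log_det = coupling_forward(x, conditioner)
x_back = coupling_inverse(y, conditioner)
print(np.allclose(x, x_back))                    # True: the transform is invertible
```

Stacking many such layers (with the halves swapped between layers) yields an expressive, exactly invertible map; the conditional variant also feeds the class label into the conditioner.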
Diffusion Models¶
The diffusion model consists of two processes: the forward process and the backward process.

The diffusion model consists of two processes: the forward process and the backward process.
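The forward process gradually noises the data and admits a closed form: with a variance schedule beta_t and alpha_bar_t = prod(1 - beta_s), we have x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps. A minimal sketch (the linear schedule and sizes are standard DDPM-style assumptions, not torcheeg's exact configuration):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)               # linear variance schedule
alphas_bar = np.cumprod(1.0 - betas)             # alpha_bar_t = prod_s (1 - beta_s)

def q_sample(x0, t, rng):
    # Sample x_t ~ q(x_t | x_0) in closed form, without iterating t steps.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(8)                                  # a toy "clean" signal
x_early = q_sample(x0, 10, rng)                  # still close to x0
x_late = q_sample(x0, T - 1, rng)                # nearly pure noise
print(alphas_bar[-1] < 1e-4)                     # signal almost fully destroyed at t = T
```

The backward process is the learned part: a network is trained to predict the added noise at each step, and sampling runs the chain in reverse from pure noise back to data.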