torcheeg.trainers
Basic Classification
A generic trainer class for EEG classification.
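At its core, such a trainer repeatedly minimizes a softmax cross-entropy loss by gradient descent. The sketch below is a pure-Python illustration of that loop on a toy linear classifier — the function names, data, and learning rate are all assumptions for the example, not the torcheeg API.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(logits, label):
    # negative log-probability assigned to the true class
    return -math.log(softmax(logits)[label])

def train_step(W, x, y, lr=0.1):
    # Toy linear model: logits[c] = sum_j W[c][j] * x[j]
    logits = [sum(W[c][j] * x[j] for j in range(len(x))) for c in range(len(W))]
    p = softmax(logits)
    # Gradient of the cross-entropy w.r.t. the logits is (p - one_hot(y)).
    for c in range(len(W)):
        g = p[c] - (1.0 if c == y else 0.0)
        for j in range(len(x)):
            W[c][j] -= lr * g * x[j]
    return cross_entropy(logits, y)
```

Calling `train_step` over a small dataset for a few epochs drives the loss down, which is all a basic classification trainer does at a high level (real trainers add batching, devices, metrics, and checkpointing).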
Cross-domain Classification
The individual differences and nonstationarity of EEG signals make it difficult for deep learning models trained on one set of subjects to correctly classify test samples from unseen subjects, since the training set and test set come from different data distributions. Domain adaptation addresses this distribution shift between the training and test sets, and thus achieves good performance in subject-independent (cross-subject) scenarios.
This class supports the implementation of CORrelation ALignment (CORAL) for deep domain adaptation.
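CORAL aligns the second-order statistics (feature covariances) of the source and target domains. A minimal pure-Python sketch of the CORAL loss follows — it is an illustration of the objective, not the torcheeg implementation:

```python
def covariance(X):
    # X: list of n feature vectors of dimension d
    n, d = len(X), len(X[0])
    mean = [sum(x[j] for x in X) / n for j in range(d)]
    C = [[0.0] * d for _ in range(d)]
    for x in X:
        for i in range(d):
            for j in range(d):
                C[i][j] += (x[i] - mean[i]) * (x[j] - mean[j])
    return [[C[i][j] / (n - 1) for j in range(d)] for i in range(d)]

def coral_loss(Xs, Xt):
    # squared Frobenius distance between source/target covariances,
    # scaled by 1 / (4 d^2) as in the CORAL formulation
    d = len(Xs[0])
    Cs, Ct = covariance(Xs), covariance(Xt)
    fro2 = sum((Cs[i][j] - Ct[i][j]) ** 2 for i in range(d) for j in range(d))
    return fro2 / (4 * d * d)
```

The loss is zero when source and target batches already share the same covariance structure, and grows as their feature correlations diverge.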
This class supports the implementation of Deep Adaptation Network (DAN) for deep domain adaptation.
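DAN minimizes a maximum mean discrepancy (MMD) between source and target features. The sketch below computes the biased squared-MMD estimator with a single RBF kernel; DAN itself uses a multi-kernel variant, so treat this as a simplified pure-Python illustration rather than the torcheeg code:

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel on feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(Xs, Xt, gamma=1.0):
    # biased estimator: E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    m, n = len(Xs), len(Xt)
    k_ss = sum(rbf(a, b, gamma) for a in Xs for b in Xs) / (m * m)
    k_tt = sum(rbf(a, b, gamma) for a in Xt for b in Xt) / (n * n)
    k_st = sum(rbf(a, b, gamma) for a in Xs for b in Xt) / (m * n)
    return k_ss + k_tt - 2 * k_st
```

The discrepancy vanishes when the two batches are identically distributed and grows as they drift apart — minimizing it pushes source and target features toward a shared distribution.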
This class supports the implementation of Joint Adaptation Networks (JAN) for deep domain adaptation.
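JAN extends MMD to a joint MMD (JMMD): it aligns the joint distribution of activations across several layers, using a product of per-layer kernels. A minimal pure-Python sketch under simplifying assumptions (single-bandwidth RBF kernels, biased estimator):

```python
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def jmmd2(src_layers, tgt_layers, gamma=1.0):
    # src_layers[l][i]: activation of source sample i at layer l (likewise tgt)
    m, n = len(src_layers[0]), len(tgt_layers[0])
    L = len(src_layers)

    def joint_k(A, B, i, j):
        # product of per-layer kernels = a kernel on the joint activations
        p = 1.0
        for l in range(L):
            p *= rbf(A[l][i], B[l][j], gamma)
        return p

    k_ss = sum(joint_k(src_layers, src_layers, i, j)
               for i in range(m) for j in range(m)) / (m * m)
    k_tt = sum(joint_k(tgt_layers, tgt_layers, i, j)
               for i in range(n) for j in range(n)) / (n * n)
    k_st = sum(joint_k(src_layers, tgt_layers, i, j)
               for i in range(m) for j in range(n)) / (m * n)
    return k_ss + k_tt - 2 * k_st
```

Compared with applying plain MMD to each layer separately, the product kernel is sensitive to cross-layer dependencies, which is the point of the "joint" adaptation.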
This class supports the implementation of Associative Domain Adaptation (ADA) for deep domain adaptation.
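ADA builds "associations" between embeddings: a random walk goes from a labeled source sample to an unlabeled target sample and back, and the walker loss pushes these round trips to land on source samples with the same label. The sketch below computes the round-trip probabilities and walker loss in pure Python; it is a simplified illustration, not the torcheeg implementation:

```python
import math

def softmax_rows(M):
    out = []
    for row in M:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

def round_trip(A, B):
    # dot-product similarity between source embeddings A and target embeddings B
    sim = [[sum(x * y for x, y in zip(a, b)) for b in B] for a in A]
    P_ab = softmax_rows(sim)
    P_ba = softmax_rows([[sim[i][j] for i in range(len(A))] for j in range(len(B))])
    # P_aba[i][k]: probability of walking a_i -> some b_j -> a_k
    return [[sum(P_ab[i][j] * P_ba[j][k] for j in range(len(B)))
             for k in range(len(A))] for i in range(len(A))]

def walker_loss(P_aba, labels):
    # cross-entropy against a uniform target over source samples sharing a_i's label
    loss = 0.0
    for i, li in enumerate(labels):
        same = [k for k, lk in enumerate(labels) if lk == li]
        t = 1.0 / len(same)
        loss -= sum(t * math.log(P_aba[i][k] + 1e-12) for k in same)
    return loss / len(labels)
```

ADA pairs this walker loss with a "visit" loss (not shown) that encourages the walk to visit all target samples, so the target domain is covered rather than collapsed onto a few easy points.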
A trainer that trains a classification model consisting of an extractor and a classifier.
Imbalance Learning for Classification
EEG emotion datasets suffer from sample class imbalance; imbalance learning can be used to address the class imbalance problem in emotion recognition tasks.
A trainer class for EEG classification with Logit-adjusted (LA) loss for imbalanced datasets.
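Logit adjustment adds the log class priors to the logits before the softmax cross-entropy, so frequent classes must win by a larger margin. A pure-Python sketch (the temperature `tau` and the counts below are illustrative assumptions):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def la_loss(logits, label, class_counts, tau=1.0):
    total = sum(class_counts)
    priors = [c / total for c in class_counts]
    # shift each logit by tau * log(prior) before the cross-entropy
    adjusted = [z + tau * math.log(p) for z, p in zip(logits, priors)]
    return -math.log(softmax(adjusted)[label])
```

With equal logits, a minority-class sample incurs a larger loss than a majority-class one, forcing the model to produce genuinely higher logits for rare classes; with `tau=0` the loss reduces to plain cross-entropy.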
A trainer class for EEG classification with Label-distribution-aware margin (LDAM) loss for imbalanced datasets.
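LDAM enforces a larger classification margin for rarer classes: the true-class logit is reduced by a margin proportional to `n_y^(-1/4)` before the softmax cross-entropy. A pure-Python sketch (the constant `C` is an illustrative hyperparameter):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def ldam_loss(logits, label, class_counts, C=0.5):
    # margin delta_y = C / n_y^(1/4): rarer class -> larger margin
    margin = C / (class_counts[label] ** 0.25)
    shifted = list(logits)
    shifted[label] -= margin
    return -math.log(softmax(shifted)[label])
```

At equal logits, the rare class pays a larger penalty, so the decision boundary is pushed further away from minority-class samples.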
A trainer class for EEG classification with Equalization (EQ) loss for imbalanced datasets.
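One common formulation of equalization loss works on a per-class sigmoid cross-entropy and suppresses the negative-sample gradient for rare ("tail") classes, so the abundance of negatives does not overwhelm them. The sketch below is a simplified variant of that idea — the hard frequency threshold and binary weights are assumptions for illustration, not the exact torcheeg loss:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def eq_loss(logits, label, class_freqs, tail_thresh=0.1):
    loss = 0.0
    for j, z in enumerate(logits):
        y = 1.0 if j == label else 0.0
        # drop the negative term entirely for tail classes (freq < threshold)
        w = 0.0 if (y == 0.0 and class_freqs[j] < tail_thresh) else 1.0
        p = sigmoid(z)
        loss -= w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return loss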
A trainer class for EEG classification with Focal loss for imbalanced datasets.
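Focal loss down-weights easy, well-classified examples by a factor of `(1 - p)^gamma`, focusing training on hard samples (which often belong to minority classes). A minimal pure-Python sketch; `gamma = 2` is a common default:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def focal_loss(logits, label, gamma=2.0):
    p = softmax(logits)[label]
    # (1 - p)^gamma shrinks toward 0 as the prediction becomes confident
    return -((1.0 - p) ** gamma) * math.log(p)
```

A confidently correct prediction contributes almost nothing, while an uncertain one keeps close to its full cross-entropy weight; with `gamma = 0` the loss reduces to plain cross-entropy.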
A trainer class for EEG classification with Weighted Cross Entropy (WCE) loss for imbalanced datasets.
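Weighted cross-entropy scales each sample's loss by a per-class weight, typically the inverse class frequency, so minority-class errors count more. A pure-Python sketch with that common weighting scheme (the scheme itself is an assumption; other weightings are possible):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def wce_loss(logits, label, class_counts):
    total = sum(class_counts)
    # inverse-frequency weight, normalized so balanced classes give weight 1
    weight = total / (len(class_counts) * class_counts[label])
    return -weight * math.log(softmax(logits)[label])
```

With counts `[90, 10]` and identical logits, a minority-class sample is weighted 9 times more heavily than a majority-class one.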
EEG Generation
Data scarcity and data imbalance are among the major challenges in the analysis of EEG signals. TorchEEG provides different types of generative model trainers to help train generative models that augment EEG datasets. Trainers whose names start with "C" train category-conditioned generative models, allowing the user to control the category of the generated EEG signals. The others generate samples close to real EEG signals at random, without controlling the category.
This class provides the implementation for BetaVAE training.
This class provides the implementation for BetaVAE training.
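The beta-VAE objective is a reconstruction term plus `beta` times the KL divergence of the approximate posterior `N(mu, sigma^2)` from the standard normal prior, which has a closed form. A pure-Python sketch of that objective (MSE reconstruction and the default `beta` are illustrative choices, not the torcheeg defaults):

```python
import math

def kl_standard_normal(mu, log_var):
    # KL( N(mu, exp(log_var)) || N(0, 1) ), summed over latent dimensions
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))  # MSE reconstruction
    return recon + beta * kl_standard_normal(mu, log_var)
```

Setting `beta = 1` recovers the standard VAE; larger `beta` trades reconstruction fidelity for a more factorized, disentangled latent space.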
This class provides the implementation for WGAN-GP training.
This class provides the implementation for WGAN-GP training.
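WGAN-GP replaces the weight clipping of the original WGAN with a gradient penalty that pushes the critic's gradient norm toward 1 on points interpolated between real and generated samples. In a real implementation the gradient comes from autodiff; the sketch below uses a linear critic `f(x) = w . x`, whose gradient is `w` everywhere, so the penalty can be computed by hand (an illustrative assumption, not the torcheeg code):

```python
import math
import random

def gradient_penalty_linear(w, real, fake, lam=10.0):
    # interpolate x_hat = eps * real + (1 - eps) * fake, with eps ~ U[0, 1]
    eps = random.random()
    x_hat = [eps * r + (1 - eps) * f for r, f in zip(real, fake)]
    # for a linear critic the gradient at any x_hat is simply w
    grad_norm = math.sqrt(sum(v * v for v in w))
    return lam * (grad_norm - 1.0) ** 2, x_hat
```

The penalty vanishes exactly when the critic is 1-Lipschitz along the interpolation line (`||w|| = 1` here) and grows quadratically as the gradient norm deviates from 1.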
Self-supervised Algorithm for Pre-training
As the cost of data collection decreases, unlabeled data has become much easier to obtain. How to use unlabeled data to train models so that they learn task-independent general knowledge from large-scale datasets has attracted extensive attention. In natural language processing and computer vision, self-supervised models have made continuous progress by constructing pretext tasks to learn good language or visual representations. Today, self-supervised learning algorithms are also being explored for EEG analysis to train larger models.
This class supports the implementation of A Simple Framework for Contrastive Learning of Visual Representations (SimCLR) for self-supervised pre-training.
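SimCLR's NT-Xent loss treats the other augmented view of the same sample as the positive and every other view in the batch as a negative. A minimal pure-Python sketch using cosine similarity and a temperature `t` (the pairing convention below, views `2k` and `2k + 1` forming a pair, is an assumption for the example):

```python
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent(views, t=0.5):
    # views: 2N embeddings; views[2k] and views[2k + 1] are two views of sample k
    n = len(views)
    loss = 0.0
    for i in range(n):
        pos = i + 1 if i % 2 == 0 else i - 1
        sims = [math.exp(cos(views[i], views[j]) / t) for j in range(n) if j != i]
        num = math.exp(cos(views[i], views[pos]) / t)
        loss -= math.log(num / sum(sims))
    return loss / n
```

The loss is low when each pair of views is close and different samples are far apart, which is exactly the representation structure contrastive pre-training aims for.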
This class supports the implementation of Bootstrap Your Own Latent (BYOL) for self-supervised pre-training.
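BYOL needs no negatives: an online network predicts a momentum ("target") network's representation of another view, with a loss equivalent to `2 - 2 * cos(pred, target)` on normalized vectors, while the target weights track an exponential moving average (EMA) of the online weights. A pure-Python sketch of those two pieces (the flat weight lists are a simplifying assumption):

```python
import math

def byol_loss(pred, target):
    # 2 - 2 * cosine similarity; zero when prediction and target align
    dot = sum(p * t for p, t in zip(pred, target))
    norm_p = math.sqrt(sum(p * p for p in pred))
    norm_t = math.sqrt(sum(t * t for t in target))
    return 2.0 - 2.0 * dot / (norm_p * norm_t)

def ema_update(target_w, online_w, tau=0.99):
    # target <- tau * target + (1 - tau) * online, elementwise
    return [tau * t + (1 - tau) * o for t, o in zip(target_w, online_w)]
```

The slowly moving EMA target is what prevents the two networks from collapsing to a trivial constant representation despite the absence of negative pairs.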