
ViT

class torcheeg.models.ViT(chunk_size: int = 128, grid_size: Tuple[int, int] = (9, 9), t_patch_size: int = 32, s_patch_size: Tuple[int, int] = (3, 3), hid_channels: int = 32, depth: int = 3, heads: int = 4, head_channels: int = 64, mlp_channels: int = 64, num_classes: int = 2, embed_dropout: float = 0.0, dropout: float = 0.0, pool_func: str = 'cls')[source]

The Vision Transformer (ViT). For more details, please refer to the original ViT paper. It is worth noting that this model was not designed for EEG analysis, but it performs well on grid-like EEG representations and can serve as a good starting point for research.

Below is a recommended pipeline for emotion recognition tasks:

from torcheeg.datasets import DEAPDataset
from torcheeg import transforms
from torcheeg.models import ViT
from torch.utils.data import DataLoader
from torcheeg.datasets.constants import DEAP_CHANNEL_LOCATION_DICT

dataset = DEAPDataset(root_path='./data_preprocessed_python',
                      offline_transform=transforms.Compose([
                          transforms.MinMaxNormalize(axis=-1),
                          transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.Compose([
                          transforms.ToTensor(),
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('valence'),
                          transforms.Binary(5.0),
                      ]))

model = ViT(chunk_size=128,
            grid_size=(9, 9),
            t_patch_size=32,
            num_classes=2)

x, y = next(iter(DataLoader(dataset, batch_size=64)))
model(x)  # output shape: [64, 2]

It can also be used to analyze extracted features such as differential entropy (DE) or power spectral density (PSD):

dataset = DEAPDataset(io_path='./deap',
            root_path='./data_preprocessed_python',
            offline_transform=transforms.Compose([
                transforms.BandDifferentialEntropy(sampling_rate=128),
                transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
            ]),
            online_transform=transforms.Compose([
                transforms.ToTensor(),
            ]),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
# the 4 sub-band DE features per channel replace the temporal samples,
# so chunk_size=4 and t_patch_size=1
model = ViT(chunk_size=4,
            grid_size=(9, 9),
            t_patch_size=1,
            num_classes=2)
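
The forward pass mirrors the raw-signal example above. With the default four frequency sub-bands of BandDifferentialEntropy, the expected input shape is [n, 4, 9, 9] (a minimal sketch under that assumption):

x, y = next(iter(DataLoader(dataset, batch_size=64)))  # x: [64, 4, 9, 9]
model(x)  # output shape: [64, 2]
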
Parameters:
  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 128)

  • grid_size (tuple) – Spatial dimensions of grid-like EEG representation. (default: (9, 9))

  • t_patch_size (int) – The size of each input patch at the temporal (chunk size) dimension. (default: 32)

  • s_patch_size (tuple) – The size (resolution) of each input patch at the spatial (grid size) dimension. (default: (3, 3))

  • hid_channels (int) – The feature dimension of the embedded patch. (default: 32)

  • depth (int) – The number of attention layers for each transformer block. (default: 3)

  • heads (int) – The number of attention heads for each attention layer. (default: 4)

  • head_channels (int) – The dimension of each attention head for each attention layer. (default: 64)

  • mlp_channels (int) – The number of hidden nodes in the fully connected layer of each transformer block. (default: 64)

  • num_classes (int) – The number of classes to predict. (default: 2)

  • embed_dropout (float) – Probability of an element to be zeroed in the dropout layers of the embedding layers. (default: 0.0)

  • dropout (float) – Probability of an element to be zeroed in the dropout layers of the transformer layers. (default: 0.0)

  • pool_func (str) – The pooling function applied before the classifier, either cls or mean, where cls selects the classification token and mean applies average pooling over the patch tokens. (default: cls)
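
For illustration, a minimal sketch constructing the model with every argument spelled out and mean pooling instead of the classification token (the specific values below are arbitrary, not recommended settings):

model = ViT(chunk_size=128,
            grid_size=(9, 9),
            t_patch_size=32,
            s_patch_size=(3, 3),
            hid_channels=32,
            depth=3,
            heads=4,
            head_channels=64,
            mlp_channels=64,
            num_classes=2,
            embed_dropout=0.1,       # illustrative dropout values
            dropout=0.1,
            pool_func='mean')        # average over patch tokens instead of the cls token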

forward(x: Tensor) → Tensor[source]
Parameters:

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 128, 9, 9]. Here, n corresponds to the batch size, 128 corresponds to chunk_size, and (9, 9) corresponds to grid_size.

Returns:

the predicted probability that the samples belong to the classes.

Return type:

torch.Tensor[number of sample, number of classes]
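
As a quick sanity check (a minimal sketch, assuming the default constructor arguments), a random tensor of the ideal input shape can be passed through forward:

import torch
from torcheeg.models import ViT

model = ViT()                  # defaults: chunk_size=128, grid_size=(9, 9), num_classes=2
x = torch.randn(2, 128, 9, 9)  # [n, chunk_size, *grid_size]
out = model(x)
print(out.shape)               # expected: torch.Size([2, 2])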
