Recurrent Neural Networks

torcheeg.models.GRU

class torcheeg.models.GRU(num_electrodes: int = 32, hid_channels: int = 64, num_classes: int = 2)[source]

Bases: Module

A simple but effective gated recurrent unit (GRU) network structure, following the architecture described in the book by Zhang et al. For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            online_transform=transforms.ToTensor(),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = GRU(num_electrodes=32, hid_channels=64, num_classes=2)
Parameters
  • num_electrodes (int) – The number of electrodes, i.e., \(C\) in the paper. (default: 32)

  • hid_channels (int) – The number of hidden nodes in the GRU layers and the fully connected layer. (default: 64)

  • num_classes (int) – The number of classes to predict. (default: 2)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 32, 128]. Here, n corresponds to the batch size, 32 corresponds to num_electrodes, and 128 corresponds to the number of data points included in the input EEG chunk.

Returns

The predicted probability that the samples belong to each class.

Return type

torch.Tensor[number of samples, number of classes]

training: bool
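The recurrence that a GRU applies to each time step of the [n, 32, 128] input can be illustrated with a single hidden unit in plain Python. This is a sketch of the standard GRU gate equations (in the convention PyTorch uses), not torcheeg's actual implementation; the scalar weights `w`, `u`, `b` below are made-up values for illustration:

```python
import math

def sigmoid(a: float) -> float:
    return 1.0 / (1.0 + math.exp(-a))

def gru_cell_step(x: float, h: float, w: dict, u: dict, b: dict) -> float:
    """One GRU time step for a single hidden unit.

    w, u, b are keyed by gate: 'z' (update), 'r' (reset), 'n' (candidate).
    """
    z = sigmoid(w['z'] * x + u['z'] * h + b['z'])                 # update gate
    r = sigmoid(w['r'] * x + u['r'] * h + b['r'])                 # reset gate
    n = math.tanh(w['n'] * x + u['n'] * (r * h) + b['n'])         # candidate state
    return (1.0 - z) * n + z * h                                  # interpolate old and new

# Run the recurrence over a toy 128-sample "EEG" channel (a sine wave here);
# the GRU model does this in parallel for hid_channels units and 32 electrodes.
w = {'z': 0.5, 'r': 0.5, 'n': 0.5}
u = {'z': 0.3, 'r': 0.3, 'n': 0.3}
b = {'z': 0.0, 'r': 0.0, 'n': 0.0}
h = 0.0
for t in range(128):
    h = gru_cell_step(math.sin(t / 10.0), h, w, u, b)
```

The final hidden state `h` summarizes the whole chunk; in the model, the last-step hidden vector is passed to the fully connected classification layer.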

torcheeg.models.LSTM

class torcheeg.models.LSTM(num_electrodes: int = 32, hid_channels: int = 64, num_classes: int = 2)[source]

Bases: Module

A simple but effective long short-term memory (LSTM) network structure, following the architecture described in the book by Zhang et al. For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            online_transform=transforms.ToTensor(),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = LSTM(num_electrodes=32, hid_channels=64, num_classes=2)
Parameters
  • num_electrodes (int) – The number of electrodes, i.e., \(C\) in the paper. (default: 32)

  • hid_channels (int) – The number of hidden nodes in the LSTM layers and the fully connected layer. (default: 64)

  • num_classes (int) – The number of classes to predict. (default: 2)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 32, 128]. Here, n corresponds to the batch size, 32 corresponds to num_electrodes, and 128 corresponds to the number of data points included in the input EEG chunk.

Returns

The predicted probability that the samples belong to each class.

Return type

torch.Tensor[number of samples, number of classes]

training: bool
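The LSTM differs from the GRU above by carrying a separate cell state `c` alongside the hidden state `h`, gated by input, forget, and output gates. The single-unit sketch below shows the standard LSTM equations in plain Python; it is illustrative only (not torcheeg's implementation), and the scalar weights `w`, `u`, `b` are made-up values:

```python
import math

def sigmoid(a: float) -> float:
    return 1.0 / (1.0 + math.exp(-a))

def lstm_cell_step(x: float, h: float, c: float,
                   w: dict, u: dict, b: dict) -> tuple:
    """One LSTM time step for a single hidden unit.

    w, u, b are keyed by gate: 'i' (input), 'f' (forget),
    'g' (candidate cell), 'o' (output).
    """
    i = sigmoid(w['i'] * x + u['i'] * h + b['i'])        # input gate
    f = sigmoid(w['f'] * x + u['f'] * h + b['f'])        # forget gate
    g = math.tanh(w['g'] * x + u['g'] * h + b['g'])      # candidate cell value
    o = sigmoid(w['o'] * x + u['o'] * h + b['o'])        # output gate
    c_new = f * c + i * g                                # update the cell state
    h_new = o * math.tanh(c_new)                         # expose gated hidden state
    return h_new, c_new

# Run the recurrence over a toy 128-sample "EEG" channel; the model repeats
# this for hid_channels units across all 32 electrode channels.
w = {'i': 0.5, 'f': 0.5, 'g': 0.5, 'o': 0.5}
u = {'i': 0.3, 'f': 0.3, 'g': 0.3, 'o': 0.3}
b = {'i': 0.0, 'f': 0.0, 'g': 0.0, 'o': 0.0}
h, c = 0.0, 0.0
for t in range(128):
    h, c = lstm_cell_step(math.sin(t / 10.0), h, c, w, u, b)
```

The forget gate lets the cell state retain information across many of the 128 time points, which is the main motivation for preferring an LSTM over a plain RNN on longer EEG chunks.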