Convolutional Neural Networks

torcheeg.models.EEGNet

class torcheeg.models.EEGNet(in_channels: int = 151, num_electrodes: int = 60, F1: int = 8, F2: int = 16, D: int = 2, num_classes: int = 2, kernel_1: int = 64, kernel_2: int = 16, dropout: float = 0.25)[source]

A compact convolutional neural network (EEGNet). For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            online_transform=transforms.Compose([
                transforms.To2d(),
                transforms.ToTensor(),
            ]),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = EEGNet(in_channels=128,
               num_electrodes=32,
               dropout=0.5,
               kernel_1=64,
               kernel_2=16,
               F1=8,
               F2=16,
               D=2,
               num_classes=2)
Parameters
  • in_channels (int) – The number of data points per electrode, i.e., \(T\) in the paper. (default: 151)

  • num_electrodes (int) – The number of electrodes, i.e., \(C\) in the paper. (default: 60)

  • F1 (int) – The filter number of block 1, i.e., \(F_1\) in the paper. (default: 8)

  • F2 (int) – The filter number of block 2, i.e., \(F_2\) in the paper. (default: 16)

  • D (int) – The depth multiplier (number of spatial filters), i.e., \(D\) in the paper. (default: 2)

  • num_classes (int) – The number of classes to predict, i.e., \(N\) in the paper. (default: 2)

  • kernel_1 (int) – The filter size of block 1. (default: 64)

  • kernel_2 (int) – The filter size of block 2. (default: 16)

  • dropout (float) – Probability of an element to be zeroed in the dropout layers. (default: 0.25)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 60, 151]. Here, n corresponds to the batch size, 60 corresponds to num_electrodes, and 151 corresponds to in_channels.

Returns

the predicted probability that the samples belong to the classes.

Return type

torch.Tensor[number of samples, number of classes]
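
A quick way to verify these shapes is to run a random tensor through the model (a minimal sketch, not part of the official examples; the mock tensor is random data standing in for EEG):

import torch

model = EEGNet(num_electrodes=60, in_channels=151, num_classes=2)
# mock batch of 2 samples, following the documented shape [n, num_electrodes, in_channels];
# note that the dataset example above (To2d) adds a leading singleton dimension,
# i.e. [n, 1, 60, 151], which some torcheeg versions may expect instead
mock_eeg = torch.randn(2, 60, 151)
pred = model(mock_eeg)  # expected shape: [2, 2]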

torcheeg.models.FBCCNN

class torcheeg.models.FBCCNN(in_channels: int = 4, grid_size: Tuple[int, int] = (9, 9), num_classes: int = 2)[source]

Frequency Band Correlation Convolutional Neural Network (FBCCNN). For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            online_transform=transforms.Compose([
                transforms.BandPowerSpectralDensity(),
                transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
            ]),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = FBCCNN(num_classes=2, in_channels=4, grid_size=(9, 9))
Parameters
  • in_channels (int) – The feature dimension of each electrode, i.e., \(N\) in the paper. (default: 4)

  • grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (9, 9))

  • num_classes (int) – The number of classes to predict. (default: 2)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 4, 9, 9]. Here, n corresponds to the batch size, 4 corresponds to in_channels, and (9, 9) corresponds to grid_size.

Returns

the predicted probability that the samples belong to the classes.

Return type

torch.Tensor[number of samples, number of classes]
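
A minimal shape check (a sketch, not from the official examples; random values stand in for the band power spectral density features):

import torch

model = FBCCNN(num_classes=2, in_channels=4, grid_size=(9, 9))
mock_eeg = torch.randn(2, 4, 9, 9)  # [n, in_channels, *grid_size]
pred = model(mock_eeg)              # expected shape: [2, 2]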

torcheeg.models.MTCNN

class torcheeg.models.MTCNN(in_channels: int = 8, grid_size: Tuple[int, int] = (8, 9), num_classes: int = 2, dropout: float = 0.2)[source]

Multi-Task Convolutional Neural Network (MT-CNN). For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

DEAP_LOCATION_LIST = [['-', '-', 'AF3', 'FP1', '-', 'FP2', 'AF4', '-', '-'],
                      ['F7', '-', 'F3', '-', 'FZ', '-', 'F4', '-', 'F8'],
                      ['-', 'FC5', '-', 'FC1', '-', 'FC2', '-', 'FC6', '-'],
                      ['T7', '-', 'C3', '-', 'CZ', '-', 'C4', '-', 'T8'],
                      ['-', 'CP5', '-', 'CP1', '-', 'CP2', '-', 'CP6', '-'],
                      ['P7', '-', 'P3', '-', 'PZ', '-', 'P4', '-', 'P8'],
                      ['-', '-', '-', 'PO3', '-', 'PO4', '-', '-', '-'],
                      ['-', '-', '-', 'O1', 'OZ', 'O2', '-', '-', '-']]
DEAP_CHANNEL_LOCATION_DICT = format_channel_location_dict(DEAP_CHANNEL_LIST, DEAP_LOCATION_LIST)

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            online_transform=transforms.Compose([
                transforms.Concatenate([
                    transforms.BandDifferentialEntropy(),
                    transforms.BandPowerSpectralDensity()
                ]),
                transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
            ]),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = MTCNN(num_classes=2, in_channels=8, grid_size=(8, 9), dropout=0.2)
Parameters
  • in_channels (int) – The feature dimension of each electrode, i.e., \(N\) in the paper. (default: 8)

  • grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (8, 9))

  • num_classes (int) – The number of classes to predict. (default: 2)

  • dropout (float) – Probability of an element to be zeroed in the dropout layers. (default: 0.2)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 8, 8, 9]. Here, n corresponds to the batch size, 8 corresponds to in_channels, and (8, 9) corresponds to grid_size.

Returns

the predicted probability that the samples belong to the classes.

Return type

torch.Tensor[number of samples, number of classes]
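
A minimal shape check along the same lines (a sketch; random values stand in for the concatenated differential entropy and power spectral density features):

import torch

model = MTCNN(num_classes=2, in_channels=8, grid_size=(8, 9), dropout=0.2)
mock_eeg = torch.randn(2, 8, 8, 9)  # [n, in_channels, *grid_size]
pred = model(mock_eeg)              # per the documentation above: [2, 2]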

torcheeg.models.STNet

class torcheeg.models.STNet(in_channels: int = 128, grid_size: Tuple[int, int] = (9, 9), num_classes: int = 2, dropout: float = 0.2)[source]

Spatio-temporal Network (STNet). For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            offline_transform=transforms.Compose([
                transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
            ]),
            online_transform=transforms.ToTensor(),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = STNet(num_classes=2, in_channels=128, grid_size=(9, 9), dropout=0.2)
Parameters
  • in_channels (int) – The number of data points per electrode. (default: 128)

  • grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (9, 9))

  • num_classes (int) – The number of classes to predict. (default: 2)

  • dropout (float) – Probability of an element to be zeroed in the dropout layers. (default: 0.2)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 128, 9, 9]. Here, n corresponds to the batch size, 128 corresponds to in_channels, and (9, 9) corresponds to grid_size.

Returns

the predicted probability that the samples belong to the classes.

Return type

torch.Tensor[number of samples, number of classes]
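
A minimal shape check (a sketch; random values stand in for the 128 time points per grid cell):

import torch

model = STNet(num_classes=2, in_channels=128, grid_size=(9, 9), dropout=0.2)
mock_eeg = torch.randn(2, 128, 9, 9)  # [n, in_channels, *grid_size]
pred = model(mock_eeg)                # expected shape: [2, 2]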

torcheeg.models.TSCeption

class torcheeg.models.TSCeption(num_electrodes: int = 28, num_T: int = 15, num_S: int = 15, hid_channels: int = 32, num_classes: int = 2, sampling_rate: int = 128, dropout: float = 0.5)[source]

A temporal-spatial convolutional neural network (TSCeption) that captures temporal dynamics and spatial asymmetry from EEG. For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            chunk_size=512,
            num_baseline=1,
            baseline_chunk_size=512,
            offline_transform=transforms.Compose([
                transforms.PickElectrode(transforms.PickElectrode.to_index_list(
                    ['FP1', 'AF3', 'F3', 'F7',
                     'FC5', 'FC1', 'C3', 'T7',
                     'CP5', 'CP1', 'P3', 'P7',
                     'PO3', 'O1', 'FP2', 'AF4',
                     'F4', 'F8', 'FC6', 'FC2',
                     'C4', 'T8', 'CP6', 'CP2',
                     'P4', 'P8', 'PO4', 'O2'], DEAP_CHANNEL_LIST)),
                transforms.To2d()
            ]),
            online_transform=transforms.ToTensor(),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = TSCeption(num_classes=2,
                  num_electrodes=28,
                  sampling_rate=128,
                  num_T=15,
                  num_S=15,
                  hid_channels=32,
                  dropout=0.5)
Parameters
  • num_electrodes (int) – The number of electrodes. (default: 28)

  • num_T (int) – The number of multi-scale 1D temporal kernels in the dynamic temporal layer, i.e., \(T\) kernels in the paper. (default: 15)

  • num_S (int) – The number of multi-scale 1D spatial kernels in the asymmetric spatial layer. (default: 15)

  • hid_channels (int) – The number of hidden nodes in the first fully connected layer. (default: 32)

  • num_classes (int) – The number of classes to predict. (default: 2)

  • sampling_rate (int) – The sampling rate of the EEG signals, i.e., \(f_s\) in the paper. (default: 128)

  • dropout (float) – Probability of an element to be zeroed in the dropout layers. (default: 0.5)

forward(x: Tensor) → Tensor[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 1, 28, 512]. Here, n corresponds to the batch size, 1 corresponds to number of channels for convolution, 28 corresponds to num_electrodes, and 512 corresponds to the input dimension for each electrode.

Returns

the predicted probability that the samples belong to the classes.

Return type

torch.Tensor[number of samples, number of classes]
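
A minimal shape check (a sketch; the singleton second dimension is the convolution channel produced by To2d in the example above, and 512 data points correspond to 4 seconds at 128 Hz):

import torch

model = TSCeption(num_classes=2, num_electrodes=28, sampling_rate=128)
mock_eeg = torch.randn(2, 1, 28, 512)  # [n, 1, num_electrodes, chunk_size]
pred = model(mock_eeg)                 # expected shape: [2, 2]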

torcheeg.models.CCNN

class torcheeg.models.CCNN(in_channels: int = 4, grid_size: Tuple[int, int] = (9, 9), num_classes: int = 2)[source]

Continuous Convolutional Neural Network (CCNN). For more details, please refer to the following information.

Below is a recommended suite for use in emotion recognition tasks:

dataset = DEAPDataset(io_path=f'./deap',
            root_path='./data_preprocessed_python',
            offline_transform=transforms.Compose([
                transforms.BandDifferentialEntropy(),
                transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
            ]),
            online_transform=transforms.ToTensor(),
            label_transform=transforms.Compose([
                transforms.Select('valence'),
                transforms.Binary(5.0),
            ]))
model = CCNN(num_classes=2, in_channels=4, grid_size=(9, 9))
Parameters
  • in_channels (int) – The feature dimension of each electrode. (default: 4)

  • grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (9, 9))

  • num_classes (int) – The number of classes to predict. (default: 2)

forward(x)[source]
Parameters

x (torch.Tensor) – EEG signal representation, the ideal input shape is [n, 4, 9, 9]. Here, n corresponds to the batch size, 4 corresponds to in_channels, and (9, 9) corresponds to grid_size.

Returns

the predicted probability that the samples belong to the classes.

Return type

torch.Tensor[number of samples, number of classes]
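
To connect the dataset and model defined above, the dataset can be wrapped in a standard PyTorch DataLoader (a sketch, assuming the dataset and model code above has already been run; the batch size of 64 is arbitrary):

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=64, shuffle=True)
batch_x, batch_y = next(iter(loader))  # batch_x: [64, 4, 9, 9]
pred = model(batch_x)                  # expected shape: [64, 2]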