torcheeg.datasets

DEAPDataset

class torcheeg.datasets.DEAPDataset(root_path: str = './data_preprocessed_python', chunk_size: int = 128, overlap: int = 0, num_channel: int = 32, num_baseline: int = 3, baseline_chunk_size: int = 128, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/deap', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

A multimodal dataset for the analysis of human affective states. This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Koelstra et al.

  • Year: 2012

  • Download URL: https://www.eecs.qmul.ac.uk/mmv/datasets/deap/download.html

  • Reference: Koelstra S, Muhl C, Soleymani M, et al. DEAP: A database for emotion analysis; using physiological signals[J]. IEEE transactions on affective computing, 2011, 3(1): 18-31.

  • Stimulus: 40 one-minute long excerpts from music videos.

  • Signals: Electroencephalogram (32 channels at 512Hz, downsampled to 128Hz), skin conductance level (SCL), respiration amplitude, skin temperature, electrocardiogram, blood volume by plethysmograph, electromyograms of Zygomaticus and Trapezius muscles (EMGs), electrooculogram (EOG), and face video (for 22 participants).

  • Rating: Arousal, valence, like/dislike, dominance (all on a scale from 1 to 9), familiarity (on a scale from 1 to 5).

In order to use this dataset, the download folder data_preprocessed_python is required, containing the following files:

  • s01.dat

  • s02.dat

  • s03.dat

  • ...

  • s32.dat

An example dataset for CNN-based methods:

dataset = DEAPDataset(io_path=f'./deap',
                      root_path='./data_preprocessed_python',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(DEAP_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select('valence'),
                          transforms.Binary(5.0),
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[128, 9, 9]),
# corresponding baseline signal (torch.Tensor[128, 9, 9]),
# label (int)
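
The resulting dataset can be consumed like any map-style PyTorch dataset. A minimal training-loop sketch, assuming each element is the (EEG, baseline, label) tuple shown above:

from torch.utils.data import DataLoader

# Wrap the dataset in a standard DataLoader for mini-batch training.
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for eeg, baseline, label in loader:
    # eeg:      torch.Tensor of shape [64, 128, 9, 9]
    # baseline: torch.Tensor of shape [64, 128, 9, 9]
    # label:    torch.Tensor of shape [64]
    pass  # forward/backward pass goes here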

Another example dataset for CNN-based methods:

dataset = DEAPDataset(io_path=f'./deap',
                      root_path='./data_preprocessed_python',
                      online_transform=transforms.Compose([
                          transforms.To2d(),
                          transforms.ToTensor()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select(['valence', 'arousal']),
                          transforms.Binary(5.0),
                          transforms.BinariesToCategory()
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[1, 32, 128]),
# corresponding baseline signal (torch.Tensor[1, 32, 128]),
# label (int)
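
Here BinariesToCategory merges the two binarized ratings into a single class index, yielding 2 ** 2 = 4 categories. A plain-Python sketch of one possible encoding (the exact mapping is a library detail; this only illustrates that four classes result):

from itertools import product

# Treat the (valence, arousal) binary pair as a two-bit number.
for valence, arousal in product([0, 1], repeat=2):
    category = valence * 2 + arousal  # assumed encoding, for illustration only
    print(f'valence={valence}, arousal={arousal} -> class {category}')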

An example dataset for GNN-based methods:

dataset = DEAPDataset(io_path=f'./deap',
                      root_path='./data_preprocessed_python',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(DEAP_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('arousal'),
                          transforms.Binary(5.0)
                      ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = DEAPDataset(io_path=f'./deap',
                      root_path='./data_preprocessed_python',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(DEAP_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('arousal'),
                          transforms.Binary(5.0)
                      ]),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in pickled python/numpy (unzipped data_preprocessed_python.zip) formats (default: './data_preprocessed_python')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 128)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks; see the sketch after this parameter list. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 32 channels are EEG signals. (default: 32)

  • num_baseline (int) – Number of baseline signal chunks used. (default: 3)

  • baseline_chunk_size (int) – Number of data points included in each baseline signal chunk. The baseline signal in the DEAP dataset has a total of 384 data points. (default: 128)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/deap')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)

  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)
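
Since each 60-second DEAP trial contains 60 * 128 = 7680 data points, chunk_size and overlap jointly determine how many samples a trial yields. A back-of-the-envelope sketch (the library's exact slicing policy at trial boundaries may differ):

# Number of chunks produced per trial for a given chunk size and overlap.
points_per_trial = 60 * 128   # 60 seconds at 128 Hz
chunk_size = 128
overlap = 64                  # 50% overlap between consecutive chunks
stride = chunk_size - overlap
num_chunks = (points_per_trial - chunk_size) // stride + 1
print(num_chunks)             # 119 chunks, versus 60 with overlap=0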

DREAMERDataset

class torcheeg.datasets.DREAMERDataset(mat_path: str = './DREAMER.mat', chunk_size: int = 128, overlap: int = 0, num_channel: int = 14, num_baseline: int = 61, baseline_chunk_size: int = 128, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/dreamer', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

A multi-modal database consisting of electroencephalogram and electrocardiogram signals recorded during affect elicitation by means of audio-visual stimuli. This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Katsigiannis et al.

  • Year: 2017

  • Download URL: https://zenodo.org/record/546113

  • Reference: Katsigiannis S, Ramzan N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices[J]. IEEE journal of biomedical and health informatics, 2017, 22(1): 98-107.

  • Stimulus: 18 movie clips.

  • Signals: Electroencephalogram (14 channels at 128Hz), and electrocardiogram (2 channels at 256Hz) of 23 subjects.

  • Rating: Arousal, valence, like/dislike, dominance, familiarity (all on a scale from 1 to 5).

In order to use this dataset, the download file DREAMER.mat is required.

An example dataset for CNN-based methods:

dataset = DREAMERDataset(io_path=f'./dreamer',
                      mat_path='./DREAMER.mat',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(DREAMER_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select('valence'),
                          transforms.Binary(3.0),
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[128, 9, 9]),
# corresponding baseline signal (torch.Tensor[128, 9, 9]),
# label (int)
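
Note that the Binary threshold is 3.0 here rather than 5.0, because DREAMER ratings range from 1 to 5. A minimal sketch of the assumed thresholding semantics (whether the comparison is strict is a library detail):

# Assumed semantics of transforms.Binary(threshold): ratings at or above
# the threshold become 1, ratings below it become 0.
def binarize(rating, threshold=3.0):
    return int(rating >= threshold)

print(binarize(4.0))  # 1 (high valence)
print(binarize(2.0))  # 0 (low valence)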

Another example dataset for CNN-based methods:

dataset = DREAMERDataset(io_path=f'./dreamer',
                      mat_path='./DREAMER.mat',
                      online_transform=transforms.Compose([
                          transforms.To2d(),
                          transforms.ToTensor()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select(['valence', 'arousal']),
                          transforms.Binary(3.0),
                          transforms.BinariesToCategory()
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[1, 14, 128]),
# corresponding baseline signal (torch.Tensor[1, 14, 128]),
# label (int)

An example dataset for GNN-based methods:

dataset = DREAMERDataset(io_path=f'./dreamer',
                      mat_path='./DREAMER.mat',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(DREAMER_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('arousal'),
                          transforms.Binary(3.0)
                      ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = DREAMERDataset(io_path=f'./dreamer',
                      mat_path='./DREAMER.mat',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(DREAMER_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('arousal'),
                          transforms.Binary(3.0)
                      ]),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • mat_path (str) – Downloaded data file in MATLAB format (default: './DREAMER.mat')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 128)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 14 channels are EEG signals. (default: 14)

  • num_baseline (int) – Number of baseline signal chunks used. (default: 61)

  • baseline_chunk_size (int) – Number of data points included in each baseline signal chunk. The baseline signal in the DREAMER dataset has a total of 7808 data points. (default: 128)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/dreamer')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)

  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)
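
Once built, the dataset can be partitioned with standard PyTorch utilities. A simple sketch using torch.utils.data.random_split; note that a random sample-level split mixes chunks from the same subject and trial across partitions, so a subject- or trial-aware split is preferable for rigorous evaluation:

import torch
from torch.utils.data import random_split

train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
# Fix the generator seed so the split is reproducible.
train_set, test_set = random_split(
    dataset, [train_size, test_size],
    generator=torch.Generator().manual_seed(42))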

SEEDDataset

class torcheeg.datasets.SEEDDataset(root_path: str = './Preprocessed_EEG', chunk_size: int = 200, overlap: int = 0, num_channel: int = 62, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/seed', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

The SJTU Emotion EEG Dataset (SEED), is a collection of EEG datasets provided by the BCMI laboratory, which is led by Prof. Bao-Liang Lu. This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Zheng et al.

  • Year: 2015

  • Download URL: https://bcmi.sjtu.edu.cn/home/seed/index.html

  • Reference: Zheng W L, Lu B L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks[J]. IEEE Transactions on Autonomous Mental Development, 2015, 7(3): 162-175.

  • Stimulus: 15 four-minute long film clips from six Chinese movies.

  • Signals: Electroencephalogram (62 channels at 200Hz) of 15 subjects, and eye movement data of 12 subjects. Each subject performed the experiment three times, with an interval of about one week, for a total of 15 subjects x 3 sessions = 45 sessions.

  • Rating: positive (1), negative (-1), and neutral (0).

In order to use this dataset, the download folder Preprocessed_EEG is required, containing the following files:

  • label.mat

  • readme.txt

  • 10_20131130.mat

  • ...

  • 9_20140704.mat

An example dataset for CNN-based methods:

dataset = SEEDDataset(io_path=f'./seed',
                      root_path='./Preprocessed_EEG',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(SEED_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select(['emotion']),
                          transforms.Lambda(lambda x: x + 1)
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[200, 9, 9]),
# corresponding baseline signal (torch.Tensor[200, 9, 9]),
# label (int)
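
The Lambda transform shifts the SEED labels from {-1, 0, 1} (negative, neutral, positive) to {0, 1, 2}, so they can serve directly as class indices, e.g. for torch.nn.CrossEntropyLoss:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 3)                 # a batch of 4 samples, 3 emotion classes
labels = torch.tensor([-1, 0, 1, 1]) + 1   # the same shift as the Lambda above
loss = criterion(logits, labels)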

Another example dataset for CNN-based methods:

dataset = SEEDDataset(io_path=f'./seed',
                      root_path='./Preprocessed_EEG',
                      online_transform=transforms.Compose([
                          transforms.ToTensor(),
                          transforms.To2d()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select(['emotion']),
                          transforms.Lambda(lambda x: x + 1)
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[62, 200]),
# corresponding baseline signal (torch.Tensor[62, 200]),
# label (int)

An example dataset for GNN-based methods:

dataset = SEEDDataset(io_path=f'./seed',
                      root_path='./Preprocessed_EEG',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(SEED_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select(['emotion']),
                          transforms.Lambda(lambda x: x + 1)
                      ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = SEEDDataset(io_path=f'./seed',
                      root_path='./Preprocessed_EEG',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(SEED_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select(['emotion']),
                          transforms.Lambda(lambda x: x + 1)
                      ]),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in matlab (unzipped Preprocessed_EEG.zip) formats (default: './Preprocessed_EEG')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 200)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 62 channels are EEG signals. (default: 62)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/seed')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)
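
  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)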

AMIGOSDataset

class torcheeg.datasets.AMIGOSDataset(root_path: str = './data_preprocessed', chunk_size: int = 128, overlap: int = 0, num_channel: int = 14, num_trial: int = 16, skipped_subjects: List[int] = [9, 12, 21, 22, 23, 24, 33], num_baseline: int = 5, baseline_chunk_size: int = 128, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/amigos', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

A dataset for Multimodal research of affect, personality traits and mood on Individuals and GrOupS (AMIGOS). This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Miranda-Correa et al.

  • Year: 2018

  • Download URL: http://www.eecs.qmul.ac.uk/mmv/datasets/amigos/download.html

  • Reference: Miranda-Correa J A, Abadi M K, Sebe N, et al. Amigos: A dataset for affect, personality and mood research on individuals and groups[J]. IEEE Transactions on Affective Computing, 2018, 12(2): 479-493.

  • Stimulus: 16 short affective video extracts and 4 long affective video extracts from movies.

  • Signals: Electroencephalogram (14 channels at 128Hz), electrocardiogram (2 channels at 60Hz) and galvanic skin response (1 channel at 60Hz) of 40 subjects. For the first 16 trials, 40 subjects watched a set of short affective video extracts. For the last 4 trials, 37 of the participants of the previous experiment watched a set of long affective video extracts.

  • Rating: arousal (1-9), valence (1-9), dominance (1-9), liking (1-9), familiarity (1-9), neutral (0, 1), disgust (0, 1), happiness (0, 1), surprise (0, 1), anger (0, 1), fear (0, 1), and sadness (0, 1).

In order to use this dataset, the download folder data_preprocessed is required, containing the following files:

  • Data_Preprocessed_P01.mat

  • Data_Preprocessed_P02.mat

  • Data_Preprocessed_P03.mat

  • ...

  • Data_Preprocessed_P40.mat

An example dataset for CNN-based methods:

dataset = AMIGOSDataset(io_path=f'./amigos',
                        root_path='./data_preprocessed',
                        offline_transform=transforms.Compose([
                            transforms.BandDifferentialEntropy(),
                            transforms.ToGrid(AMIGOS_CHANNEL_LOCATION_DICT)
                        ]),
                        online_transform=transforms.ToTensor(),
                        label_transform=transforms.Compose([
                            transforms.Select('valence'),
                            transforms.Binary(5.0),
                        ]))
print(dataset[0])
# EEG signal (torch.Tensor[128, 9, 9]),
# corresponding baseline signal (torch.Tensor[128, 9, 9]),
# label (int)

Another example dataset for CNN-based methods:

dataset = AMIGOSDataset(io_path=f'./amigos',
                      root_path='./data_preprocessed',
                      online_transform=transforms.Compose([
                          transforms.To2d(),
                          transforms.ToTensor()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('valence'),
                          transforms.Binary(5.0),
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[14, 128]),
# corresponding baseline signal (torch.Tensor[14, 128]),
# label (int)

An example dataset for GNN-based methods:

dataset = AMIGOSDataset(io_path=f'./amigos',
                      root_path='./data_preprocessed',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(AMIGOS_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('valence'),
                          transforms.Binary(5.0),
                      ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = AMIGOSDataset(io_path=f'./amigos',
                            root_path='./data_preprocessed',
                            online_transform=transforms.Compose([
                                transforms.pyg.ToG(AMIGOS_ADJACENCY_MATRIX)
                            ]),
                            label_transform=transforms.Compose([
                                transforms.Select('valence'),
                                transforms.Binary(5.0),
                            ]),
                            num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in matlab (unzipped data_preprocessed.zip) formats (default: './data_preprocessed')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 128)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 14 channels are EEG signals. (default: 14)

  • num_trial (int) – Number of trials used, of which the first 16 trials are conducted with short videos and the last 4 trials are conducted with long videos. If set to -1, all trials are used; see the sketch after this parameter list. (default: 16)

  • skipped_subjects (List[int]) – The participant IDs to be removed because there are some invalid data in the preprocessed version. (default: [9, 12, 21, 22, 23, 24, 33])

  • num_baseline (int) – Number of baseline signal chunks used. (default: 5)

  • baseline_chunk_size (int) – Number of data points included in each baseline signal chunk. The baseline signal in the AMIGOS dataset has a total of 640 data points. (default: 128)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/amigos')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)

  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)
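
To also use the four long-video trials, num_trial can be raised accordingly. A construction sketch that keeps all 20 trials (recall that only 37 participants completed the long-video trials):

dataset = AMIGOSDataset(io_path=f'./amigos',
                        root_path='./data_preprocessed',
                        num_trial=-1,  # -1 selects all 20 trials (16 short + 4 long videos)
                        online_transform=transforms.ToTensor(),
                        label_transform=transforms.Compose([
                            transforms.Select('valence'),
                            transforms.Binary(5.0),
                        ]))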

MAHNOBDataset

class torcheeg.datasets.MAHNOBDataset(root_path: str = './Sessions', chunk_size: int = 128, sampling_rate: int = 128, overlap: int = 0, num_channel: int = 32, num_baseline: int = 30, baseline_chunk_size: int = 128, num_trial_sample: int = 30, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/mahnob', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

MAHNOB-HCI is a multimodal database recorded in response to affective stimuli with the goal of emotion recognition and implicit tagging research. This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Soleymani et al.

  • Year: 2011

  • Download URL: https://mahnob-db.eu/hci-tagging/

  • Reference: Soleymani M, Lichtenauer J, Pun T, et al. A multimodal database for affect recognition and implicit tagging[J]. IEEE transactions on affective computing, 2011, 3(1): 42-55.

  • Stimulus: 20 videos from famous movies. Each video clip lasts 34-117 seconds (may not be an integer), in addition to 30 seconds before the beginning of the affective stimuli experience and another 30 seconds after the end.

  • Signals: Electroencephalogram (32 channels at 512Hz), peripheral physiological signals (ECG, GSR, Temp, Resp at 256Hz), and eye movement signals (at 60Hz) of 25 subjects (out of 30; 3 subjects had missing data records and 2 subjects had incomplete data records).

  • Rating: Arousal, valence, control and predictability (all on a scale from 1 to 9).

In order to use this dataset, the download folder Sessions (Physiological files of emotion elicitation) is required, containing the following files:

  • 1

    • Part_1_N_Trial1_emotion.bdf

    • session.xml

  • ...

  • 3810

    • Part_30_S_Trial20_emotion.bdf

    • session.xml

An example dataset for CNN-based methods:

dataset = MAHNOBDataset(io_path=f'./mahnob',
                      root_path='./Sessions',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(MAHNOB_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select('feltVlnc'),
                          transforms.Binary(5.0),
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[128, 9, 9]),
# corresponding baseline signal (torch.Tensor[128, 9, 9]),
# label (int)

Another example dataset for CNN-based methods:

dataset = MAHNOBDataset(io_path=f'./mahnob',
                      root_path='./Sessions',
                      online_transform=transforms.Compose([
                          transforms.To2d(),
                          transforms.ToTensor()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select(['feltVlnc', 'feltArsl']),
                          transforms.Binary(5.0),
                          transforms.BinariesToCategory()
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[1, 32, 128]),
# corresponding baseline signal (torch.Tensor[1, 32, 128]),
# label (int)

An example dataset for GNN-based methods:

dataset = MAHNOBDataset(io_path=f'./mahnob',
                      root_path='./Sessions',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(MAHNOB_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('feltArsl'),
                          transforms.Binary(5.0)
                      ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = MAHNOBDataset(io_path=f'./mahnob',
                      root_path='./Sessions',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(MAHNOB_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('feltArsl'),
                          transforms.Binary(5.0)
                      ]),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in bdf and xml (unzipped Sessions.zip) formats (default: './Sessions')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 128)

  • sampling_rate (int) – The number of data points taken over a second; see the sketch after this parameter list. (default: 128)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 32 channels are EEG signals. (default: 32)

  • num_baseline (int) – Number of baseline signal chunks used. (default: 30)

  • baseline_chunk_size (int) – Number of data points included in each baseline signal chunk. The baseline signal in the MAHNOB dataset lasts 30 seconds, i.e., sampling_rate * 30 data points after downsampling from 512Hz. (default: 128)

  • num_trial_sample (int) – Number of samples picked from each trial. If set to -1, all samples in trials are used. (default: 30)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/mahnob')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)

  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)
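
With the defaults, one chunk spans 128 points / 128 Hz = 1 second of signal. The sketch below instead extracts 2-second windows; this is an illustrative configuration, not a library recommendation:

dataset = MAHNOBDataset(io_path=f'./mahnob',
                        root_path='./Sessions',
                        sampling_rate=128,        # EEG is downsampled from 512 Hz to 128 Hz
                        chunk_size=256,           # 256 points / 128 Hz = 2-second windows
                        num_baseline=15,          # 15 chunks x 256 points cover the 30 s baseline
                        baseline_chunk_size=256,
                        online_transform=transforms.ToTensor(),
                        label_transform=transforms.Compose([
                            transforms.Select('feltArsl'),
                            transforms.Binary(5.0)
                        ]))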

BCI2022Dataset

class torcheeg.datasets.BCI2022Dataset(root_path: str = './2022EmotionPublic/TrainSet/', chunk_size: int = 250, overlap: int = 0, channel_num: int = 30, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/bci2022', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

The 2022 EMOTION_BCI competition aims at tackling the cross-subject emotion recognition challenge and provides competitors with EEG data from 80 participants with known emotional state information. Competitors are required to establish an EEG computing model with cross-individual emotion recognition ability. The subjects' EEG data were used for real-time emotion recognition. This class generates training samples and test samples according to the given parameters and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Please refer to the download URL.

  • Year: 2022

  • Download URL: https://oneuro.cn/n/competitiondetail/2022_emotion_bci/doc0

  • Reference: Please refer to the download URL.

  • Stimulus: video clips.

  • Signals: Electroencephalogram (30 channels at 250Hz) and two channels of left/right mastoid signals from 80 subjects.

  • Rating: 28 video clips are annotated in valence and discrete emotion dimensions. The valence is divided into positive (1), negative (-1), and neutral (0). Discrete emotions are divided into anger (0), disgust (1), fear (2), sadness (3), neutral (4), amusement (5), excitation (6), happiness (7), and warmth (8).

In order to use this dataset, the download folder TrainSet is required, containing the following files:

  • TrainSet_first_batch

    • sub1

    • sub10

    • sub11

    • ...

  • TrainSet_second_batch

    • sub55

    • sub57

    • sub59

    • ...

An example dataset for CNN-based methods:

dataset = BCI2022Dataset(io_path=f'./bci2022',
                      root_path='./TrainSet',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(BCI2022_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Select(['emotion']))
print(dataset[0])
# EEG signal (torch.Tensor[250, 8, 9]),
# corresponding baseline signal (torch.Tensor[250, 8, 9]),
# label (int)

Another example dataset for CNN-based methods:

dataset = BCI2022Dataset(io_path=f'./bci2022',
                      root_path='./TrainSet',
                      online_transform=transforms.Compose([
                          transforms.ToTensor(),
                          transforms.To2d()
                      ]),
                      label_transform=transforms.Select(['emotion']))
print(dataset[0])
# EEG signal (torch.Tensor[30, 250]),
# corresponding baseline signal (torch.Tensor[30, 250]),
# label (int)

An example dataset for GNN-based methods:

dataset = BCI2022Dataset(io_path=f'./bci2022',
                      root_path='./TrainSet',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(BCI2022_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Select(['emotion']))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = BCI2022Dataset(io_path=f'./bci2022',
                      root_path='./TrainSet',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(BCI2022_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Select(['emotion']),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in pickle format (the TrainSet folder in the unzipped 2022EmotionPublic.zip) (default: './2022EmotionPublic/TrainSet/')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 250)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • channel_num (int) – Number of channels used, of which the first 30 channels are EEG signals. (default: 30)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/bci2022')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)
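
  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)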

M3CVDataset

class torcheeg.datasets.M3CVDataset(root_path: str = './aistudio', subset: str = 'Enrollment', chunk_size: int = 1000, overlap: int = 0, num_channel: int = 64, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/m3cv', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

A reliable EEG-based biometric system should be able to withstand changes in an individual's mental state (cross-task test) and still be able to successfully identify an individual after several days (cross-session test). The authors built the M3CV EEG dataset with 106 subjects, two sessions of experiments on different days, and multiple paradigms. Ninety-five of the subjects participated in both sessions, separated by more than 6 days. The experiments cover 6 common EEG paradigms, including resting state, sensory and cognitive tasks, and brain-computer interfaces.

In order to use this dataset, the download folder aistudio is required, containing the following files:

  • Calibration_Info.csv

  • Enrollment_Info.csv

  • Testing_Info.csv

  • Calibration (unzipped Calibration.zip)

  • Testing (unzipped Testing.zip)

  • Enrollment (unzipped Enrollment.zip)

An example dataset for CNN-based methods:

dataset = M3CVDataset(io_path=f'./m3cv',
                      root_path='./aistudio',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(M3CV_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select('SubjectID'),
                          transforms.StringToNumber()
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[1000, 9, 9]),
# corresponding baseline signal (torch.Tensor[1000, 9, 9]),
# label (int)

Another example dataset for CNN-based methods:

dataset = M3CVDataset(io_path=f'./m3cv',
                      root_path='./aistudio',
                      online_transform=transforms.Compose([
                          transforms.To2d(),
                          transforms.ToTensor()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('SubjectID'),
                          transforms.StringToNumber()
                      ]))
print(dataset[0])
# EEG signal (torch.Tensor[1, 65, 1000]),
# corresponding baseline signal (torch.Tensor[1, 65, 1000]),
# label (int)

An example dataset for GNN-based methods:

dataset = M3CVDataset(io_path=f'./m3cv',
                      root_path='./aistudio',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(M3CV_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('SubjectID'),
                          transforms.StringToNumber()
                      ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = M3CVDataset(io_path=f'./m3cv',
                      root_path='./aistudio',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(M3CV_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('SubjectID'),
                          transforms.StringToNumber()
                      ]),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in pickled python/numpy (unzipped aistudio.zip) formats (default: './aistudio')

  • subset (str) – In the competition, the M3CV dataset is split into the Enrollment set, Calibration set, and Testing set. Please specify the subset to use; options include Enrollment, Calibration and Testing (see the sketch after this parameter list). (default: 'Enrollment')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 1000)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 64 channels are EEG signals. (default: 64)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/m3cv')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)

  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)
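
The subset argument selects which competition split to load. A construction sketch for the Calibration split; using a separate io_path per subset (an assumption following the caching convention above) keeps the intermediate results apart:

dataset = M3CVDataset(io_path=f'./m3cv_calibration',  # hypothetical cache path, one per subset
                      root_path='./aistudio',
                      subset='Calibration',
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select('SubjectID'),
                          transforms.StringToNumber()
                      ]))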

TSUBenckmarkDataset

class torcheeg.datasets.TSUBenckmarkDataset(root_path: str = './TSUBenchmark', chunk_size: int = 250, overlap: int = 0, num_channel: int = 64, online_transform: Union[None, Callable] = None, offline_transform: Union[None, Callable] = None, label_transform: Union[None, Callable] = None, io_path: str = './io/tsu_benchmark', num_worker: int = 0, verbose: bool = True, cache_size: int = 68719476736)[source]

The benchmark dataset for SSVEP-Based brain-computer interfaces (TSUBenckmark) is provided by the Tsinghua BCI Lab. It presents a benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain-computer interface (BCI) speller. This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Wang et al.

  • Year: 2016

  • Download URL: http://bci.med.tsinghua.edu.cn/

  • Reference: Wang Y, Chen X, Gao X, et al. A benchmark dataset for SSVEP-based brain-computer interfaces[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2016, 25(10): 1746-1752.

  • Stimulus: Each trial started with a visual cue (a red square) indicating a target stimulus. The cue appeared for 0.5s on the screen. Subjects were asked to shift their gaze to the target as soon as possible within the cue duration. Following the cue offset, all stimuli started to flicker on the screen concurrently and lasted 5s. After stimulus offset, the screen was blank for 0.5s before the next trial began, which allowed the subjects to have short breaks between consecutive trials.

  • Signals: Electroencephalogram (64 channels at 250Hz) of 35 subjects. For each subject, the experiment consisted of 6 blocks, each containing 40 trials corresponding to all 40 characters presented in a random order; in total, 35 subjects x 6 blocks x 40 trials = 8400 trials.

  • Rating: Frequency and phase values for the 40 trials.

In order to use this dataset, the download folder TSUBenchmark is required, containing the following files:

  • Readme.txt

  • Sub_info.txt

  • 64-channels.loc

  • Freq_Phase.mat

  • S1.mat

  • ...

  • S35.mat

An example dataset for CNN-based methods:

dataset = TSUBenckmarkDataset(io_path=f'./tsu_benchmark',
                      root_path='./TSUBenchmark',
                      offline_transform=transforms.Compose([
                          transforms.BandDifferentialEntropy(),
                          transforms.ToGrid(TSUBenckmark_CHANNEL_LOCATION_DICT)
                      ]),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Select(['trial_id']))
print(dataset[0])
# EEG signal (torch.Tensor[250, 10, 11]),
# corresponding baseline signal (torch.Tensor[250, 10, 11]),
# label (int)

Another example dataset for CNN-based methods:

dataset = TSUBenckmarkDataset(io_path=f'./tsu_benchmark',
                      root_path='./TSUBenchmark',
                      online_transform=transforms.Compose([
                          transforms.ToTensor(),
                          transforms.To2d()
                      ]),
                      label_transform=transforms.Select(['trial_id']))
print(dataset[0])
# EEG signal (torch.Tensor[64, 250]),
# corresponding baseline signal (torch.Tensor[64, 250]),
# label (int)

An example dataset for GNN-based methods:

dataset = TSUBenckmarkDataset(io_path=f'./tsu_benchmark',
                      root_path='./TSUBenchmark',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(TSUBenckmark_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Select(['trial_id']))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)

In particular, TorchEEG utilizes the producer-consumer model to allow multi-process data preprocessing. If your data preprocessing is time-consuming, consider increasing num_worker for a higher speedup. If running under Windows, please use the proper idiom in the main module:

if __name__ == '__main__':
    dataset = TSUBenckmarkDataset(io_path=f'./tsu_benchmark',
                      root_path='./TSUBenchmark',
                      online_transform=transforms.Compose([
                          transforms.pyg.ToG(TSUBenckmark_ADJACENCY_MATRIX)
                      ]),
                      label_transform=transforms.Select(['freq']),
                      num_worker=4)
    print(dataset[0])
    # EEG signal (torch_geometric.data.Data),
    # corresponding baseline signal (torch_geometric.data.Data),
    # label (int)
Parameters
  • root_path (str) – Downloaded data files in matlab (unzipped TSUBenchmark.zip) formats (default: './TSUBenchmark')

  • chunk_size (int) – Number of data points included in each EEG chunk as training or test samples. (default: 250)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 64 channels are EEG signals. (default: 64)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is an np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. (default: './io/tsu_benchmark')

  • num_worker (int) – How many subprocesses to use for data processing. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)
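
  • cache_size (int) – Maximum size database may grow to; used to size the memory mapping. If database grows larger than map_size, an exception will be raised and the user must close and reopen. (default: 64 * 1024 * 1024 * 1024)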