AMIGOSDataset

class torcheeg.datasets.AMIGOSDataset(root_path: str = './data_preprocessed', chunk_size: int = 128, overlap: int = 0, num_channel: int = 14, num_trial: int = 16, skipped_subjects: List[int] = [9, 12, 21, 22, 23, 24, 33], num_baseline: int = 5, baseline_chunk_size: int = 128, online_transform: None | Callable = None, offline_transform: None | Callable = None, label_transform: None | Callable = None, before_trial: None | Callable = None, after_trial: None | Callable = None, after_session: None | Callable = None, after_subject: None | Callable = None, io_path: None | str = None, io_size: int = 1048576, io_mode: str = 'lmdb', num_worker: int = 0, verbose: bool = True)[source]

A dataset for Multimodal research of affect, personality traits and mood on Individuals and GrOupS (AMIGOS). This class generates training samples and test samples according to the given parameters, and caches the generated results in a unified input and output format (IO). The relevant information of the dataset is as follows:

  • Author: Miranda-Correa et al.

  • Year: 2018

  • Download URL: http://www.eecs.qmul.ac.uk/mmv/datasets/amigos/download.html

  • Reference: Miranda-Correa J A, Abadi M K, Sebe N, et al. Amigos: A dataset for affect, personality and mood research on individuals and groups[J]. IEEE Transactions on Affective Computing, 2018, 12(2): 479-493.

  • Stimulus: 16 short affective video extracts and 4 long affective video extracts from movies.

  • Signals: Electroencephalogram (14 channels at 128Hz), electrocardiogram (2 channels at 60Hz) and galvanic skin response (1 channel at 60Hz) of 40 subjects. For the first 16 trials, all 40 subjects watched a set of short affective video extracts; for the last 4 trials, 37 of these participants watched a set of long affective video extracts.

  • Rating: arousal (1-9), valence (1-9), dominance (1-9), liking (1-9), familiarity (1-9), neutral (0, 1), disgust (0, 1), happiness (0, 1), surprise (0, 1), anger (0, 1), fear (0, 1), and sadness (0, 1).
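
Any of the rating dimensions above can be used as the label by passing its name to transforms.Select. As a sketch, the label transform below targets arousal instead of the valence used in the examples that follow (the 5.0 threshold simply mirrors those examples):

from torcheeg import transforms

# Select the 'arousal' rating from the information dictionary and binarize it,
# so that higher arousal ratings map to 1 and lower ones to 0.
label_transform = transforms.Compose([
    transforms.Select('arousal'),
    transforms.Binary(5.0),
])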

In order to use this dataset, the download folder data_preprocessed is required, containing the following files:

  • Data_Preprocessed_P01.mat

  • Data_Preprocessed_P02.mat

  • Data_Preprocessed_P03.mat

  • ...

  • Data_Preprocessed_P40.mat
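
Before constructing the dataset, it can help to confirm that the per-subject .mat files are actually in place; the snippet below is only an illustrative sanity check, not part of the TorchEEG API:

import os

root_path = './data_preprocessed'
# The preprocessed release contains one file per subject, P01 through P40.
expected = [f'Data_Preprocessed_P{i:02d}.mat' for i in range(1, 41)]
missing = [f for f in expected if not os.path.exists(os.path.join(root_path, f))]
if missing:
    print(f'{len(missing)} file(s) missing, e.g. {missing[0]}')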

An example dataset for CNN-based methods:

from torcheeg.datasets import AMIGOSDataset
from torcheeg import transforms

from torcheeg.datasets.constants.emotion_recognition.amigos import AMIGOS_CHANNEL_LOCATION_DICT

dataset = AMIGOSDataset(root_path='./data_preprocessed',
                        offline_transform=transforms.Compose([
                            transforms.BandDifferentialEntropy(),
                            transforms.ToGrid(AMIGOS_CHANNEL_LOCATION_DICT)
                        ]),
                        online_transform=transforms.ToTensor(),
                        label_transform=transforms.Compose([
                            transforms.Select('valence'),
                            transforms.Binary(5.0),
                        ]))
print(dataset[0])
# EEG signal (torch.Tensor[4, 9, 9]),
# corresponding baseline signal (torch.Tensor[4, 9, 9]),
# label (int)
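
Once the transforms produce tensors, the dataset can be consumed with a standard torch.utils.data.DataLoader; the batch size below is an arbitrary choice for illustration:

from torch.utils.data import DataLoader

# Each batch stacks the per-sample tensors along a new leading batch dimension.
loader = DataLoader(dataset, batch_size=64, shuffle=True)
for batch in loader:
    pass  # feed the batch to a model here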

Another example dataset for CNN-based methods:

from torcheeg.datasets import AMIGOSDataset
from torcheeg import transforms

dataset = AMIGOSDataset(root_path='./data_preprocessed',
                        online_transform=transforms.Compose([
                            transforms.To2d(),
                            transforms.ToTensor()
                        ]),
                        label_transform=transforms.Compose([
                            transforms.Select('valence'),
                            transforms.Binary(5.0),
                        ]))
print(dataset[0])
# EEG signal (torch.Tensor[14, 128]),
# corresponding baseline signal (torch.Tensor[14, 128]),
# label (int)

An example dataset for GNN-based methods:

from torcheeg.datasets import AMIGOSDataset
from torcheeg import transforms
from torcheeg.transforms.pyg import ToG

from torcheeg.datasets.constants.emotion_recognition.amigos import AMIGOS_ADJACENCY_MATRIX

dataset = AMIGOSDataset(root_path='./data_preprocessed',
                        online_transform=transforms.Compose([
                            ToG(AMIGOS_ADJACENCY_MATRIX)
                        ]),
                        label_transform=transforms.Compose([
                            transforms.Select('valence'),
                            transforms.Binary(5.0),
                        ]))
print(dataset[0])
# EEG signal (torch_geometric.data.Data),
# corresponding baseline signal (torch_geometric.data.Data),
# label (int)
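
In all of the examples above, the first instantiation runs the offline preprocessing and caches the generated samples under io_path (a random path when left as None). Passing an explicit io_path lets later runs reuse the cache instead of repeating the preprocessing; the path './io/amigos' below is an arbitrary choice:

from torcheeg.datasets import AMIGOSDataset

# The first run reads the .mat files and writes the cached samples to
# './io/amigos'; subsequent runs with the same io_path load from this cache.
dataset = AMIGOSDataset(root_path='./data_preprocessed',
                        io_path='./io/amigos')
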
Parameters:
  • root_path (str) – Path to the downloaded MATLAB data files, i.e. the unzipped data_preprocessed.zip folder. (default: './data_preprocessed')

  • chunk_size (int) – Number of data points included in each EEG chunk used as a training or test sample. If set to -1, the EEG signal of an entire trial is used as a single chunk (see the chunking sketch after this parameter list). (default: 128)

  • overlap (int) – The number of overlapping data points between different chunks when dividing EEG chunks. (default: 0)

  • num_channel (int) – Number of channels used, of which the first 14 channels are EEG signals. (default: 14)

  • num_trial (int) – Number of trials used, of which the first 16 trials are conducted with short videos and the last 4 trials are conducted with long videos. If set to -1, all trials are used. (default: 16)

  • skipped_subjects (List[int]) – The participant IDs to be skipped, because the preprocessed version contains invalid data for these participants. (default: [9, 12, 21, 22, 23, 24, 33])

  • num_baseline (int) – Number of baseline signal chunks used. (default: 5)

  • baseline_chunk_size (int) – Number of data points included in each baseline signal chunk. The baseline signal in the AMIGOS dataset has a total of 640 data points, i.e. 5 chunks of 128 points each. (default: 128)

  • online_transform (Callable, optional) – The transformation of the EEG signals and baseline EEG signals. The input is a np.ndarray, and the output is used as the first and second value of each element in the dataset. (default: None)

  • offline_transform (Callable, optional) – The usage is the same as online_transform, but executed before generating IO intermediate results. (default: None)

  • label_transform (Callable, optional) – The transformation of the label. The input is an information dictionary, and the output is used as the third value of each element in the dataset. (default: None)

  • before_trial (Callable, optional) – The hook performed on the trial to which the sample belongs. It is performed before the offline transformation and is thus typically used to implement context-dependent sample transformations, such as moving averages. The input of this hook function is a 2D EEG signal with shape (number of electrodes, number of data points), and its output should have the same shape; an illustrative hook is sketched after this parameter list. (default: None)

  • after_trial (Callable, optional) – The hook performed on the trial to which the sample belongs. It is performed after the offline transformation and is thus typically used to implement context-dependent sample transformations, such as moving averages. The input and output of this hook function should be a sequence of dictionaries representing a sequence of EEG samples. Each dictionary contains two key-value pairs, indexed by eeg (the EEG signal matrix) and key (the index in the database). (default: None)

  • io_path (str) – The path to generated unified data IO, cached as an intermediate result. If set to None, a random path will be generated. (default: None)

  • io_size (int) – Maximum size the database may grow to, used to size the memory mapping. If the database grows larger than io_size, an exception will be raised and the user must close and reopen it with a larger value. (default: 1048576)

  • io_mode (str) – Storage mode of the EEG signals. When io_mode is set to lmdb, TorchEEG provides an efficient database (LMDB) for storing EEG signals; since LMDB may not perform well on some limited operating systems, a file-system-based storage is also provided. When io_mode is set to pickle, pickle-based persistence files are used. When io_mode is set to memory, in-memory storage is used. (default: lmdb)

  • num_worker (int) – Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)

  • verbose (bool) – Whether to display logs during processing, such as progress bars, etc. (default: True)
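
As a rough guide to how chunk_size, overlap, num_baseline and baseline_chunk_size interact, the sketch below computes how many chunks a sliding window cuts from a signal of a given length; it is plain arithmetic for illustration, not a TorchEEG function:

def num_chunks(num_points: int, chunk_size: int = 128, overlap: int = 0) -> int:
    # Number of full chunks cut from a signal with num_points samples,
    # sliding the window by chunk_size - overlap points each step.
    step = chunk_size - overlap
    return max(0, (num_points - chunk_size) // step + 1)

# The AMIGOS baseline signal has 640 data points, which yields exactly
# num_baseline=5 chunks of baseline_chunk_size=128 points each.
assert num_chunks(640, chunk_size=128, overlap=0) == 5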

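The before_trial hook receives the raw 2D trial signal and should return an array of the same shape, so any per-trial preprocessing that fits this contract can be plugged in. A minimal sketch using per-channel z-score normalization (an illustrative choice, not a TorchEEG built-in):

import numpy as np

def zscore_before_trial(eeg: np.ndarray) -> np.ndarray:
    # eeg has shape (number of electrodes, number of data points); normalize
    # each channel over the whole trial and return the same shape.
    mean = eeg.mean(axis=1, keepdims=True)
    std = eeg.std(axis=1, keepdims=True) + 1e-8
    return (eeg - mean) / std

# Hypothetical usage:
# dataset = AMIGOSDataset(root_path='./data_preprocessed',
#                         before_trial=zscore_before_trial)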