Numpy-based Transforms

transforms.BandDifferentialEntropy

class torcheeg.transforms.BandDifferentialEntropy(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the differential entropy of EEG signals in several sub-bands.

transform = BandDifferentialEntropy()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sample frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and the corresponding critical frequencies (in Hz). By default, the differential entropy of four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The differential entropy of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]
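
For intuition, the following is a rough, hypothetical sketch of how such a per-band differential entropy can be computed: band-pass filter each sub-band, then apply the Gaussian differential entropy formula 0.5 * log(2 * pi * e * var). The filtering details here are illustrative and may differ from the library implementation.

import numpy as np
from scipy.signal import butter, lfilter

def band_differential_entropy_sketch(eeg, frequency=128, order=5,
                                     band_dict={'theta': [4, 8], 'alpha': [8, 14],
                                                'beta': [14, 31], 'gamma': [31, 49]}):
    # eeg: [number of electrodes, number of data points]
    band_list = []
    for low, high in band_dict.values():
        # Butterworth band-pass filter for the current sub-band
        b, a = butter(order, [low, high], fs=frequency, btype='band')
        filtered = lfilter(b, a, eeg, axis=-1)
        # differential entropy of a Gaussian with the band-limited variance
        band_list.append(0.5 * np.log(2 * np.pi * np.e * np.var(filtered, axis=-1)))
    return np.stack(band_list, axis=-1)  # [number of electrodes, number of sub-bands]

band_differential_entropy_sketch(np.random.randn(32, 128)).shape
>>> (32, 4)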

transforms.BandPowerSpectralDensity

class torcheeg.transforms.BandPowerSpectralDensity(frequency: int = 128, window_size: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the power spectral density of EEG signals in several sub-bands.

transform = BandPowerSpectralDensity()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sample frequency in Hz. (default: 128)

  • window_size (int) – Welch’s method computes an estimate of the power spectral density by dividing the data into overlapping segments; window_size denotes the length of each segment. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and the corresponding critical frequencies (in Hz). By default, the power spectral density of four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The power spectral density of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]
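
As a rough illustration, the per-band power can be estimated with Welch's method and then aggregated within each sub-band; the exact aggregation used by the library may differ, so treat this as a sketch.

import numpy as np
from scipy.signal import welch

def band_psd_sketch(eeg, frequency=128, window_size=128,
                    band_dict={'theta': [4, 8], 'alpha': [8, 14],
                               'beta': [14, 31], 'gamma': [31, 49]}):
    # Welch estimate per electrode: freqs has shape [n_freqs], psd [n_electrodes, n_freqs]
    freqs, psd = welch(eeg, fs=frequency, nperseg=window_size, axis=-1)
    band_list = []
    for low, high in band_dict.values():
        mask = (freqs >= low) & (freqs < high)
        band_list.append(psd[..., mask].sum(axis=-1))  # total power within the sub-band
    return np.stack(band_list, axis=-1)  # [number of electrodes, number of sub-bands]

band_psd_sketch(np.random.randn(32, 128)).shape
>>> (32, 4)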

transforms.BandMeanAbsoluteDeviation

class torcheeg.transforms.BandMeanAbsoluteDeviation(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the mean absolute deviation of EEG signals in several sub-bands.

transform = BandMeanAbsoluteDeviation()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sample frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and the corresponding critical frequencies (in Hz). By default, the mean absolute deviation of four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The mean absolute deviation of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]

transforms.BandKurtosis

class torcheeg.transforms.BandKurtosis(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the kurtosis of EEG signals in several sub-bands.

transform = BandKurtosis()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sample frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and the corresponding critical frequencies (in Hz). By default, the kurtosis of four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The kurtosis of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]

transforms.BandSkewness

class torcheeg.transforms.BandSkewness(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the skewness of EEG signals in several sub-bands.

transform = BandSkewness()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sample frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and the corresponding critical frequencies (in Hz). By default, the skewness of four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The skewness of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]
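
BandMeanAbsoluteDeviation, BandKurtosis and BandSkewness follow the same pattern as the other band transforms: band-pass filter each sub-band, then reduce the filtered signal with the corresponding statistic along the time axis. The following is a hypothetical sketch of that shared pattern, with illustrative statistics from scipy; the library's exact filtering details may differ.

import numpy as np
from scipy.signal import butter, lfilter
from scipy.stats import kurtosis, skew

def band_statistic_sketch(eeg, stat, frequency=128, order=5,
                          band_dict={'theta': [4, 8], 'alpha': [8, 14],
                                     'beta': [14, 31], 'gamma': [31, 49]}):
    band_list = []
    for low, high in band_dict.values():
        b, a = butter(order, [low, high], fs=frequency, btype='band')
        filtered = lfilter(b, a, eeg, axis=-1)
        band_list.append(stat(filtered))  # one value per electrode for this sub-band
    return np.stack(band_list, axis=-1)  # [number of electrodes, number of sub-bands]

eeg = np.random.randn(32, 128)
mad = lambda x: np.mean(np.abs(x - x.mean(axis=-1, keepdims=True)), axis=-1)
band_statistic_sketch(eeg, mad).shape
>>> (32, 4)
band_statistic_sketch(eeg, lambda x: kurtosis(x, axis=-1)).shape
>>> (32, 4)
band_statistic_sketch(eeg, lambda x: skew(x, axis=-1)).shape
>>> (32, 4)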

transforms.MeanStdNormalize

class torcheeg.transforms.MeanStdNormalize(mean: Optional[ndarray] = None, std: Optional[ndarray] = None, axis: Optional[int] = None, apply_to_baseline: bool = False)[source]

Perform z-score normalization on the input data. This class allows the user to choose the dimension along which to normalize and the statistics to use.

transform = MeanStdNormalize(axis=0)
# normalize along the first dimension (electrode dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)

transform = MeanStdNormalize(axis=1)
# normalize along the second dimension (temporal dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)
Parameters
  • mean (np.ndarray, optional) – The mean used in the normalization process, allowing the user to provide mean statistics in np.ndarray format. When no statistics are provided, the statistics of the current sample are used for normalization.

  • std (np.ndarray, optional) – The standard deviation used in the normalization process, allowing the user to provide standard deviation statistics in np.ndarray format. When no statistics are provided, the statistics of the current sample are used for normalization.

  • axis (int, optional) – The dimension to normalize; when no dimension is specified, the entire data is normalized.

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals or features.

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The normalized results.

Return type

np.ndarray
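
The normalization itself is a standard z-score; below is a minimal numpy sketch, assuming per-sample statistics when mean and std are not supplied (the broadcasting details are illustrative only).

import numpy as np

def mean_std_normalize_sketch(eeg, mean=None, std=None, axis=None):
    if mean is None or std is None:
        # fall back to the statistics of the current sample
        mean = eeg.mean(axis=axis, keepdims=axis is not None)
        std = eeg.std(axis=axis, keepdims=axis is not None)
    return (eeg - mean) / std

mean_std_normalize_sketch(np.random.randn(32, 128), axis=1).shape
>>> (32, 128)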

transforms.MinMaxNormalize

class torcheeg.transforms.MinMaxNormalize(min: Union[ndarray, None, float] = None, max: Union[ndarray, None, float] = None, axis: Optional[int] = None, apply_to_baseline: bool = False)[source]

Perform min-max normalization on the input data. This class allows the user to choose the dimension along which to normalize and the statistics to use.

transform = MinMaxNormalize(axis=0)
# normalize along the first dimension (electrode dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)

transform = MinMaxNormalize(axis=1)
# normalize along the second dimension (temporal dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)
Parameters
  • min (np.ndarray, optional) – The minimum used in the normalization process, allowing the user to provide minimum statistics in np.ndarray format. When no statistics are provided, the statistics of the current sample are used for normalization.

  • max (np.ndarray, optional) – The maximum used in the normalization process, allowing the user to provide maximum statistics in np.ndarray format. When no statistics are provided, the statistics of the current sample are used for normalization.

  • axis (int, optional) – The dimension to normalize; when no dimension is specified, the entire data is normalized.

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals or features.

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The normalized results.

Return type

np.ndarray
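
Likewise, the min-max scaling maps values into [0, 1]; a brief sketch, here with per-sample statistics computed along the temporal dimension purely for illustration:

import numpy as np

eeg = np.random.randn(32, 128)
eeg_min = eeg.min(axis=1, keepdims=True)  # stand-ins for precomputed dataset statistics
eeg_max = eeg.max(axis=1, keepdims=True)
((eeg - eeg_min) / (eeg_max - eeg_min)).shape
>>> (32, 128)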

transforms.PickElectrode

class torcheeg.transforms.PickElectrode(pick_list: List[int], apply_to_baseline: bool = False)[source]

Select a subset of electrode signals based on a given electrode index list.

transform = PickElectrode(PickElectrode.to_index_list(
    ['FP1', 'AF3', 'F3', 'F7',
     'FC5', 'FC1', 'C3', 'T7',
     'CP5', 'CP1', 'P3', 'P7',
     'PO3', 'O1', 'FP2', 'AF4',
     'F4', 'F8', 'FC6', 'FC2',
     'C4', 'T8', 'CP6', 'CP2',
     'P4', 'P8', 'PO4', 'O2'], DEAP_CHANNEL_LIST))
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (28, 128)
Parameters
  • pick_list (list) – The list of selected electrodes, given as integers representing the corresponding electrode indices. to_index_list can be used to obtain an index list when only the electrode names, and not their indices, are known.

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The output signals with the shape of [number of picked electrodes, number of data points].

Return type

np.ndarray
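
Conceptually, to_index_list maps electrode names onto their positions in a dataset's channel list, and the transform then indexes the electrode axis with the resulting list. The sketch below is hypothetical, and the channel list is an illustrative excerpt rather than the real DEAP_CHANNEL_LIST.

import numpy as np

def to_index_list_sketch(electrode_names, channel_list):
    # position of each requested electrode within the dataset channel order
    return [channel_list.index(name) for name in electrode_names]

channel_list_excerpt = ['FP1', 'AF3', 'F3', 'F7']
pick_list = to_index_list_sketch(['F3', 'FP1'], channel_list_excerpt)
np.random.randn(4, 128)[pick_list].shape
>>> (2, 128)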

transforms.To2d

class torcheeg.transforms.To2d(apply_to_baseline: bool = False)[source]

Taking the electrode index as the row index and the temporal index as the column index, a two-dimensional EEG signal representation with the size of [number of electrodes, number of data points] is formed. Since PyTorch convolutions on 2D tensors require an additional channel dimension, an extra leading dimension is appended.

transform = To2d()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (1, 32, 128)
__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The transformed results with the shape of [1, number of electrodes, number of data points].

Return type

np.ndarray
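
The operation is equivalent to adding a leading channel dimension, e.g. with np.expand_dims, so that the result can feed a 2D convolution after conversion to a tensor:

import numpy as np

np.expand_dims(np.random.randn(32, 128), axis=0).shape
>>> (1, 32, 128)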

transforms.ToGrid

class torcheeg.transforms.ToGrid(channel_location: Dict[str, Tuple[int, int]], apply_to_baseline: bool = False)[source]

A transform method to project the EEG signals of different channels onto a grid according to the electrode positions, forming a 3D EEG signal representation with the size of [number of data points, width of grid, height of grid]. For the electrode position information, please refer to constants grouped by dataset:

  • datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT

transform = ToGrid(DEAP_CHANNEL_LOCATION_DICT)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (128, 9, 9)
Parameters
  • channel_location (dict) – Electrode location information. Represented in dictionary form, where key corresponds to the electrode name and value corresponds to the row index and column index of the electrode on the grid.

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The projected results with the shape of [number of data points, width of grid, height of grid].

Return type

np.ndarray
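
A rough sketch of the projection, assuming a 9x9 grid and a location dictionary of the form {electrode name: (row, column)}; the two-electrode dictionary below is purely illustrative, and cells without an electrode stay zero.

import numpy as np

def to_grid_sketch(eeg, channel_location, height=9, width=9):
    # eeg: [number of electrodes, number of data points]
    grid = np.zeros((eeg.shape[-1], height, width))
    for i, (row, col) in enumerate(channel_location.values()):
        grid[:, row, col] = eeg[i]
    return grid  # [number of data points, height of grid, width of grid]

location_excerpt = {'FP1': (0, 3), 'FP2': (0, 5)}
to_grid_sketch(np.random.randn(2, 128), location_excerpt).shape
>>> (128, 9, 9)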

transforms.ToInterpolatedGrid

class torcheeg.transforms.ToInterpolatedGrid(channel_location: Dict[str, Tuple[int, int]], apply_to_baseline: bool = False)[source]

A transform method to project the EEG signals of different channels onto a grid according to the electrode positions, forming a 3D EEG signal representation with the size of [number of data points, width of grid, height of grid]. For the electrode position information, please refer to constants grouped by dataset:

  • datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT

transform = ToInterpolatedGrid(DEAP_CHANNEL_LOCATION_DICT)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (128, 9, 9)

In particular, missing values on the grid are filled in using cubic interpolation.

Parameters
  • channel_location (dict) – Electrode location information. Represented in dictionary form, where key corresponds to the electrode name and value corresponds to the row index and column index of the electrode on the grid.

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The projected results with the shape of [number of data points, width of grid, height of grid].

Return type

np.ndarray
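
A hypothetical sketch of the interpolation step: for every time point, scatter the electrode values at their grid coordinates and fill the remaining cells with scipy's cubic griddata interpolation. Positions outside the electrodes' convex hull come back as NaN and are zeroed here purely for illustration; the location dictionary is an illustrative excerpt.

import numpy as np
from scipy.interpolate import griddata

def to_interpolated_grid_sketch(eeg, channel_location, height=9, width=9):
    points = np.array(list(channel_location.values()), dtype=float)  # (row, col) per electrode
    grid_rows, grid_cols = np.mgrid[0:height, 0:width]
    frames = []
    for t in range(eeg.shape[-1]):
        frame = griddata(points, eeg[:, t], (grid_rows, grid_cols), method='cubic')
        frames.append(np.nan_to_num(frame))  # zero-fill cells the interpolation cannot reach
    return np.stack(frames)  # [number of data points, height of grid, width of grid]

location_excerpt = {'FP1': (0, 3), 'FP2': (0, 5), 'CZ': (4, 4), 'O1': (8, 3), 'O2': (8, 5)}
to_interpolated_grid_sketch(np.random.randn(5, 128), location_excerpt).shape
>>> (128, 9, 9)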

transforms.Concatenate

class torcheeg.transforms.Concatenate(transforms: Sequence[Callable], apply_to_baseline: bool = False)[source]

Merge the results of multiple transforms; useful when feature fusion is required.

transform = Concatenate([
    BandDifferentialEntropy(),
    BandMeanAbsoluteDeviation()
])
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 8)
Parameters
  • transforms (list, tuple) – A sequence of transforms to apply and merge.

  • apply_to_baseline (bool) – Whether to apply the transform to the baseline signal at the same time, if a baseline is passed when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The combined results of multiple transforms.

Return type

np.ndarray
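
The fusion step amounts to concatenating the per-electrode feature matrices produced by each transform along the last axis, which is why two 4-band transforms yield 8 features per electrode in the example above. A minimal stand-in:

import numpy as np

features = [np.random.randn(32, 4), np.random.randn(32, 4)]  # stand-ins for two transform outputs
np.concatenate(features, axis=-1).shape
>>> (32, 8)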