Numpy-based Transforms

transforms.CWTSpectrum

class torcheeg.transforms.CWTSpectrum(sampling_rate: int = 250, wavelet: str = 'morl', total_scale: int = 128, contourf: bool = False, apply_to_baseline: bool = False)[source]

A transform method to convert EEG signals of each channel into spectrograms using wavelet transform.

transform = CWTSpectrum()
transform(eeg=np.random.randn(32, 1000))['eeg'].shape
>>> (32, 128, 1000)

Some existing works use Resize to rescale the output spectrum to a specified size suitable for CNN processing.

transform = Compose([
    CWTSpectrum(),
    ToTensor(),
    Resize([260, 260])
])
transform(eeg=np.random.randn(32, 1000))['eeg'].shape
>>> (32, 260, 260)

When contourf is set to True, a spectrogram of filled contours is generated for each channel, converted to an np.ndarray, and returned. This option is usually used for the analysis or visualization of a single channel.

transform = CWTSpectrum(contourf=True)
transform(eeg=np.random.randn(32, 1000))['eeg'].shape
>>> (32, 480, 640, 4)
Parameters
  • sampling_rate (int) – The sampling rate of the EEG signals in Hz, used to determine the output frequencies. (default: 250)

  • wavelet (str) – Wavelet to use. Options include: cgau1, cgau2, cgau3, cgau4, cgau5, cgau6, cgau7, cgau8, cmor, fbsp, gaus1, gaus2, gaus3, gaus4, gaus5, gaus6, gaus7, gaus8, mexh, morl, shan. (default: 'morl')

  • total_scale (int) – The total number of wavelet scales to use; see the example after this list. (default: 128)

  • contourf (bool) – Whether to output the np.ndarray corresponding to the image with content of filled contours. (default: False)

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)
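
For example, a different mother wavelet and a smaller number of scales can be configured. This is an illustrative configuration (not a library default); based on the output shape documented below, it should yield total_scale scales per channel:

transform = CWTSpectrum(sampling_rate=128, wavelet='mexh', total_scale=64)
transform(eeg=np.random.randn(32, 1000))['eeg'].shape
>>> (32, 64, 1000)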

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The spectrograms based on the wavelet transform for all electrodes. If contourf=False, the output shape is [number of electrodes, total_scale, number of data points]. Otherwise, the output shape is [number of electrodes, height of image, width of image, 4], where 4 represents the four color channels of the image.

Return type

np.ndarray[number of electrodes, …]

transforms.BandDifferentialEntropy

class torcheeg.transforms.BandDifferentialEntropy(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the differential entropy of EEG signals in several sub-bands with EEG signals as input.

transform = BandDifferentialEntropy()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sampling frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and their critical frequencies. By default, the differential entropy of the four sub-bands, theta, alpha, beta, and gamma, is calculated; see the example after this list. (default: {...})

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)
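
As an illustration (not a library default), restricting the calculation to two sub-bands should yield one value per band for each electrode:

transform = BandDifferentialEntropy(band_dict={
    'theta': [4, 8],
    'alpha': [8, 14]
})
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 2)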

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal (see the example at the end of this entry).

Returns

The differential entropy of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]
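
A minimal sketch of baseline handling, assuming that when apply_to_baseline=True and a baseline is passed, the transformed baseline is returned under the 'baseline' key of the result dictionary:

transform = BandDifferentialEntropy(apply_to_baseline=True)
result = transform(eeg=np.random.randn(32, 128), baseline=np.random.randn(32, 128))
result['eeg'].shape
>>> (32, 4)
result['baseline'].shape
>>> (32, 4)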

transforms.BandPowerSpectralDensity

class torcheeg.transforms.BandPowerSpectralDensity(frequency: int = 128, window_size: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the power spectral density of EEG signals in several sub-bands with EEG signals as input.

transform = BandPowerSpectralDensity()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sampling frequency in Hz. (default: 128)

  • window_size (int) – Welch’s method computes an estimate of the power spectral density by dividing the data into overlapping segments, where window_size denotes the length of each segment. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and their critical frequencies. By default, the power spectral density of the four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The power spectral density of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]

transforms.BandMeanAbsoluteDeviation

class torcheeg.transforms.BandMeanAbsoluteDeviation(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the mean absolute deviation of EEG signals in several sub-bands with EEG signals as input.

transform = BandMeanAbsoluteDeviation()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sampling frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and their critical frequencies. By default, the mean absolute deviation of the four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The mean absolute deviation of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]

transforms.BandKurtosis

class torcheeg.transforms.BandKurtosis(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the kurtosis of EEG signals in several sub-bands with EEG signals as input.

transform = BandKurtosis()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sampling frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and their critical frequencies. By default, the kurtosis of the four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The kurtosis of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]

transforms.BandSkewness

class torcheeg.transforms.BandSkewness(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]}, apply_to_baseline: bool = False)[source]

A transform method for calculating the skewness of EEG signals in several sub-bands with EEG signals as input.

transform = BandSkewness()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • frequency (int) – The sampling frequency in Hz. (default: 128)

  • order (int) – The order of the filter. (default: 5)

  • band_dict (dict) – Band names and their critical frequencies. By default, the skewness of the four sub-bands, theta, alpha, beta, and gamma, is calculated. (default: {...})

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The skewness of several subbands for all electrodes.

Return type

np.ndarray[number of electrodes, number of subbands]

transforms.MeanStdNormalize

class torcheeg.transforms.MeanStdNormalize(mean: Optional[ndarray] = None, std: Optional[ndarray] = None, axis: Optional[int] = None, apply_to_baseline: bool = False)[source]

Perform z-score normalization on the input data. This class allows the user to define the dimension of normalization and the statistics used; an example with user-provided statistics follows the parameter list.

transform = MeanStdNormalize(axis=0)
# normalize along the first dimension (electrode dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)

transform = MeanStdNormalize(axis=1)
# normalize along the second dimension (temporal dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)
Parameters
  • mean (np.array, optional) – The mean used in the normalization process, allowing the user to provide mean statistics in np.ndarray format. When statistics are not provided, the statistics of the current sample are used for normalization.

  • std (np.array, optional) – The standard deviation used in the normalization process, allowing the user to provide standard deviation statistics in np.ndarray format. When statistics are not provided, the statistics of the current sample are used for normalization.

  • axis (int, optional) – The dimension to normalize; when no dimension is specified, the entire data is normalized.

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)
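
A hedged sketch of normalization with user-provided statistics. The arrays below are illustrative placeholders (zeros and ones), assumed to broadcast against the input, standing in for statistics precomputed over a dataset:

mean = np.zeros((32, 128))
std = np.ones((32, 128))
transform = MeanStdNormalize(mean=mean, std=std)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)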

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals or features.

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The normalized results.

Return type

np.ndarray

transforms.MinMaxNormalize

class torcheeg.transforms.MinMaxNormalize(min: Union[ndarray, None, float] = None, max: Union[ndarray, None, float] = None, axis: Optional[int] = None, apply_to_baseline: bool = False)[source]

Perform min-max normalization on the input data. This class allows the user to define the dimension of normalization and the statistics used; an example with user-provided extrema follows the parameter list.

transform = MinMaxNormalize(axis=0)
# normalize along the first dimension (electrode dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)

transform = MinMaxNormalize(axis=1)
# normalize along the second dimension (temporal dimension)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)
Parameters
  • min (np.ndarray, float, optional) – The minimum used in the normalization process, allowing the user to provide minimum statistics in np.ndarray or float format. When statistics are not provided, the statistics of the current sample are used for normalization.

  • max (np.ndarray, float, optional) – The maximum used in the normalization process, allowing the user to provide maximum statistics in np.ndarray or float format. When statistics are not provided, the statistics of the current sample are used for normalization.

  • axis (int, optional) – The dimension to normalize; when no dimension is specified, the entire data is normalized.

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)
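
Since min and max may also be given as floats (per the signature above), a minimal sketch with illustrative dataset-level extrema:

transform = MinMaxNormalize(min=-100.0, max=100.0)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 128)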

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals or features.

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The normalized results.

Return type

np.ndarray

transforms.PickElectrode

class torcheeg.transforms.PickElectrode(pick_list: List[int], apply_to_baseline: bool = False)[source]

Select parts of electrode signals based on a given electrode index list.

transform = PickElectrode(PickElectrode.to_index_list(
    ['FP1', 'AF3', 'F3', 'F7',
     'FC5', 'FC1', 'C3', 'T7',
     'CP5', 'CP1', 'P3', 'P7',
     'PO3', 'O1', 'FP2', 'AF4',
     'F4', 'F8', 'FC6', 'FC2',
     'C4', 'T8', 'CP6', 'CP2',
     'P4', 'P8', 'PO4', 'O2'], DEAP_CHANNEL_LIST))
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (28, 128)
Parameters
  • pick_list (list) – Selected electrode list. Should consist of integers representing the corresponding electrode indices. to_index_list can be used to obtain an index list when only the electrode names are known and not their indices; plain indices can also be passed directly, as shown after this list.

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)
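
An illustrative selection of the first four channels by index:

transform = PickElectrode([0, 1, 2, 3])
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (4, 128)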

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The output signals with the shape of [number of picked electrodes, number of data points].

Return type

np.ndarray

transforms.To2d

class torcheeg.transforms.To2d(apply_to_baseline: bool = False)[source]

Taking the electrode index as the row index and the temporal index as the column index, a two-dimensional EEG signal representation with the size of [number of electrodes, number of data points] is formed. Since PyTorch convolutions on 2D data require an additional channel dimension, an extra dimension is prepended.

transform = To2d()
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (1, 32, 128)
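
Conceptually, this corresponds to prepending a singleton channel axis, for example with NumPy:

import numpy as np

np.expand_dims(np.random.randn(32, 128), axis=0).shape
>>> (1, 32, 128)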
__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The transformed results with the shape of [1, number of electrodes, number of data points].

Return type

np.ndarray

transforms.ToGrid

class torcheeg.transforms.ToGrid(channel_location_dict: Dict[str, Tuple[int, int]], apply_to_baseline: bool = False)[source]

A transform method to project the EEG signals of different channels onto the grid according to the electrode positions to form a 3D EEG signal representation with the size of [number of data points, width of grid, height of grid]. For the electrode position information, please refer to constants grouped by dataset:

  • datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT

transform = ToGrid(DEAP_CHANNEL_LOCATION_DICT)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (128, 9, 9)
Parameters
  • channel_location_dict (dict) – Electrode location information. Represented in dictionary form, where key corresponds to the electrode name and value corresponds to the row index and column index of the electrode on the grid.

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The projected results with the shape of [number of data points, width of grid, height of grid].

Return type

np.ndarray

transforms.ToInterpolatedGrid

class torcheeg.transforms.ToInterpolatedGrid(channel_location_dict: Dict[str, Tuple[int, int]], apply_to_baseline: bool = False)[source]

A transform method to project the EEG signals of different channels onto the grid according to the electrode positions to form a 3D EEG signal representation with the size of [number of data points, width of grid, height of grid]. For the electrode position information, please refer to constants grouped by dataset:

  • datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT

transform = ToInterpolatedGrid(DEAP_CHANNEL_LOCATION_DICT)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (128, 9, 9)

In particular, missing values on the grid are filled in using cubic interpolation.

Parameters
  • channel_location_dict (dict) – Electrode location information. Represented in dictionary form, where key corresponds to the electrode name and value corresponds to the row index and column index of the electrode on the grid.

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The projected results with the shape of [number of data points, width of grid, height of grid].

Return type

np.ndarray

transforms.ARRCoefficient

class torcheeg.transforms.ARRCoefficient(order: int = 4, norm: str = 'biased', apply_to_baseline: bool = False)[source]

Calculate autoregression reflection coefficients on the input data.

transform = ARRCoefficient(order=4)
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 4)
Parameters
  • order (int) – The order of the autoregressive process to be fitted. (default: 4)

  • norm (str) – Use a biased or unbiased correlation; see the example after this list. (default: 'biased')

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)
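
As an illustration (not a default configuration, and assuming 'unbiased' is the accepted alternative to the default 'biased'), a higher model order yields one reflection coefficient per order for each electrode:

transform = ARRCoefficient(order=8, norm='unbiased')
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 8)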

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals or features.

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The autoregression reflection coefficients.

Return type

np.ndarray [number of electrodes, order]

transforms.Concatenate

class torcheeg.transforms.Concatenate(transforms: Sequence[Callable], apply_to_baseline: bool = False)[source]

Merge the calculation results of multiple transforms; used when feature fusion is required.

transform = Concatenate([
    BandDifferentialEntropy(),
    BandMeanAbsoluteDeviation()
])
transform(eeg=np.random.randn(32, 128))['eeg'].shape
>>> (32, 8)
Parameters
  • transforms (list, tuple) – A sequence of transforms.

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The combined results of multiple transforms.

Return type

np.ndarray

transforms.ChunkConcatenate

class torcheeg.transforms.ChunkConcatenate(transforms: Sequence[Callable], chunk_size: int = 250, overlap: int = 0, apply_to_baseline: bool = False)[source]

Divide the input EEG signal into multiple chunks according to chunk_size and overlap, apply the transforms to each chunk, and concatenate the results of all transforms on all chunks. It is used when feature fusion is required.

transform = ChunkConcatenate([
    BandDifferentialEntropy(),
    BandMeanAbsoluteDeviation()
],
chunk_size=250,
overlap=0)
transform(eeg=np.random.randn(64, 1000))['eeg'].shape
>>> (64, 32)

TorchEEG allows feature fusion at multiple scales:

transform = Concatenate([
    ChunkConcatenate([
        BandDifferentialEntropy()
    ],
    chunk_size=250,
    overlap=0),  # 4 chunk * 4-dim feature
    ChunkConcatenate([
        BandDifferentialEntropy()
    ],
    chunk_size=500,
    overlap=0),  # 2 chunk * 4-dim feature
    BandDifferentialEntropy()  # 1 chunk * 4-dim feature
])
transform(eeg=np.random.randn(64, 1000))['eeg'].shape
>>> (64, 28) # 4 * 4 + 2 * 4 + 1 * 4
Parameters
  • transforms (list, tuple) – A sequence of transforms.

  • chunk_size (int) – The length of each chunk in data points. (default: 250)

  • overlap (int) – The overlap between adjacent chunks in data points. (default: 0)

  • apply_to_baseline (bool) – Whether to act on the baseline signal at the same time, if the baseline is passed in when calling. (default: False)

__call__(*args, eeg: ndarray, baseline: Optional[ndarray] = None, **kwargs) Dict[str, ndarray][source]
Parameters
  • eeg (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].

  • baseline (np.ndarray, optional) – The corresponding baseline signal. If apply_to_baseline is set to True and a baseline is passed, the baseline signal will be transformed in the same way as the experimental signal.

Returns

The combined results of multiple transforms.

Return type

np.ndarray