Numpy-based Transforms
transforms.BandDifferentialEntropy
- class torcheeg.transforms.BandDifferentialEntropy(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]})[source]
Bases:
BandTransform
A transform method for calculating the differential entropy of EEG signals in several sub-bands with EEG signals as input.
transform = BandDifferentialEntropy()
transform(np.random.randn(32, 128)).shape
>>> (32, 4)
- Parameters
frequency (int) – The sampling frequency in Hz. (default: 128)
order (int) – The order of the filter. (default: 5)
band_dict (dict) – Band names and their critical frequencies. By default, the differential entropy of the four sub-bands theta, alpha, beta, and gamma is calculated. (default: {...})
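To make the computation concrete, here is a minimal NumPy/SciPy sketch of what a band differential entropy transform does, assuming a Butterworth band-pass filter and the closed-form Gaussian differential entropy 0.5 * ln(2 * pi * e * variance); the function name and exact aggregation are illustrative, not the library's internals.

```python
import numpy as np
from scipy.signal import butter, lfilter

def band_differential_entropy(eeg, fs=128, order=5, band=(4, 8)):
    """Band-pass the signal, then compute the Gaussian differential
    entropy 0.5 * ln(2 * pi * e * variance) per electrode (assumption:
    this is the standard closed form, used here for illustration)."""
    low, high = band
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = lfilter(b, a, eeg, axis=-1)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(filtered, axis=-1))

eeg = np.random.randn(32, 128)  # [electrodes, data points]
bands = {'theta': (4, 8), 'alpha': (8, 14), 'beta': (14, 31), 'gamma': (31, 49)}
features = np.stack(
    [band_differential_entropy(eeg, band=b) for b in bands.values()], axis=-1)
print(features.shape)  # (32, 4)
```

Each electrode yields one value per sub-band, matching the (32, 4) output shape in the example above.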
transforms.BandPowerSpectralDensity
- class torcheeg.transforms.BandPowerSpectralDensity(frequency: int = 128, window_size: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]})[source]
Bases:
object
A transform method for calculating the power spectral density of EEG signals in several sub-bands with EEG signals as input.
transform = BandPowerSpectralDensity()
transform(np.random.randn(32, 128)).shape
>>> (32, 4)
- Parameters
frequency (int) – The sampling frequency in Hz. (default: 128)
window_size (int) – Welch's method computes an estimate of the power spectral density by dividing the data into overlapping segments, where window_size denotes the length of each segment. (default: 128)
order (int) – The order of the filter. (default: 5)
band_dict (dict) – Band names and their critical frequencies. By default, the power spectral density of the four sub-bands theta, alpha, beta, and gamma is calculated. (default: {...})
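A minimal sketch of the idea behind this transform, using SciPy's Welch estimator and averaging the PSD within each band; the library may aggregate band power differently, so treat the helper below as illustrative only.

```python
import numpy as np
from scipy.signal import welch

def band_psd(eeg, fs=128, window_size=128, band=(8, 14)):
    # Welch PSD per electrode along the time axis, then average the
    # spectral density within the requested band (assumed aggregation).
    freqs, psd = welch(eeg, fs=fs, nperseg=window_size, axis=-1)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[..., mask].mean(axis=-1)

eeg = np.random.randn(32, 128)
bands = [(4, 8), (8, 14), (14, 31), (31, 49)]  # theta, alpha, beta, gamma
features = np.stack([band_psd(eeg, band=b) for b in bands], axis=-1)
print(features.shape)  # (32, 4)
```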
transforms.BandMeanAbsoluteDeviation
- class torcheeg.transforms.BandMeanAbsoluteDeviation(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]})[source]
Bases:
BandTransform
A transform method for calculating the mean absolute deviation of EEG signals in several sub-bands with EEG signals as input.
transform = BandMeanAbsoluteDeviation()
transform(np.random.randn(32, 128)).shape
>>> (32, 4)
- Parameters
frequency (int) – The sampling frequency in Hz. (default: 128)
order (int) – The order of the filter. (default: 5)
band_dict (dict) – Band names and their critical frequencies. By default, the mean absolute deviation of the four sub-bands theta, alpha, beta, and gamma is calculated. (default: {...})
transforms.BandKurtosis
- class torcheeg.transforms.BandKurtosis(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]})[source]
Bases:
BandTransform
A transform method for calculating the kurtosis of EEG signals in several sub-bands with EEG signals as input.
transform = BandKurtosis()
transform(np.random.randn(32, 128)).shape
>>> (32, 4)
- Parameters
frequency (int) – The sampling frequency in Hz. (default: 128)
order (int) – The order of the filter. (default: 5)
band_dict (dict) – Band names and their critical frequencies. By default, the kurtosis of the four sub-bands theta, alpha, beta, and gamma is calculated. (default: {...})
transforms.BandSkewness
- class torcheeg.transforms.BandSkewness(frequency: int = 128, order: int = 5, band_dict: Dict[str, Tuple[int, int]] = {'alpha': [8, 14], 'beta': [14, 31], 'gamma': [31, 49], 'theta': [4, 8]})[source]
Bases:
BandTransform
A transform method for calculating the skewness of EEG signals in several sub-bands with EEG signals as input.
transform = BandSkewness()
transform(np.random.randn(32, 128)).shape
>>> (32, 4)
- Parameters
frequency (int) – The sampling frequency in Hz. (default: 128)
order (int) – The order of the filter. (default: 5)
band_dict (dict) – Band names and their critical frequencies. By default, the skewness of the four sub-bands theta, alpha, beta, and gamma is calculated. (default: {...})
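The three band statistics above (mean absolute deviation, kurtosis, and skewness) share the same pipeline as the other band transforms: band-pass each sub-band, then apply a statistic along the time axis. The sketch below assumes a Butterworth filter and SciPy's moment statistics; the generic `band_apply` helper is an illustration, not the library's `BandTransform` base class.

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.stats import kurtosis, skew

BANDS = {'theta': (4, 8), 'alpha': (8, 14), 'beta': (14, 31), 'gamma': (31, 49)}

def band_apply(eeg, stat, fs=128, order=5, band_dict=BANDS):
    """Band-pass each sub-band, then apply `stat` along the time axis."""
    out = []
    for low, high in band_dict.values():
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        out.append(stat(lfilter(b, a, eeg, axis=-1)))
    return np.stack(out, axis=-1)

eeg = np.random.randn(32, 128)
mad = band_apply(eeg, lambda x: np.mean(
    np.abs(x - x.mean(axis=-1, keepdims=True)), axis=-1))
kur = band_apply(eeg, lambda x: kurtosis(x, axis=-1))
ske = band_apply(eeg, lambda x: skew(x, axis=-1))
print(mad.shape, kur.shape, ske.shape)  # (32, 4) (32, 4) (32, 4)
```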
transforms.MeanStdNormalize
- class torcheeg.transforms.MeanStdNormalize(mean: Optional[ndarray] = None, std: Optional[ndarray] = None, axis: Optional[int] = None)[source]
Bases:
object
Perform z-score normalization on the input data. This class allows the user to define the dimension of normalization and the used statistic.
transform = MeanStdNormalize(axis=0)  # normalize along the first dimension (electrode dimension)
transform(np.random.randn(32, 128)).shape
>>> (32, 128)
transform = MeanStdNormalize(axis=1)  # normalize along the second dimension (temporal dimension)
transform(np.random.randn(32, 128)).shape
>>> (32, 128)
- Parameters
mean (np.ndarray, optional) – The mean used in the normalization process, allowing the user to provide mean statistics in np.ndarray format. When statistics are not provided, the statistics of the current sample are used for normalization.
std (np.ndarray, optional) – The standard deviation used in the normalization process, allowing the user to provide standard deviation statistics in np.ndarray format. When statistics are not provided, the statistics of the current sample are used for normalization.
axis (int, optional) – The dimension to normalize. When no dimension is specified, the entire data is normalized.
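A minimal NumPy sketch of per-axis z-score normalization, assuming the transform falls back to the current sample's statistics when none are supplied; the helper name and the `keepdims` handling are illustrative.

```python
import numpy as np

def mean_std_normalize(x, mean=None, std=None, axis=None):
    # When no statistics are provided, use the current sample's;
    # keepdims lets the statistics broadcast back over `axis`.
    if mean is None:
        mean = x.mean(axis=axis, keepdims=axis is not None)
    if std is None:
        std = x.std(axis=axis, keepdims=axis is not None)
    return (x - mean) / std

x = np.random.randn(32, 128)
normed = mean_std_normalize(x, axis=1)  # z-score each electrode over time
print(normed.shape)  # (32, 128)
```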
transforms.MinMaxNormalize
- class torcheeg.transforms.MinMaxNormalize(min: Union[ndarray, None, float] = None, max: Union[ndarray, None, float] = None, axis: Optional[int] = None)[source]
Bases:
object
Perform min-max normalization on the input data. This class allows the user to define the dimension of normalization and the used statistic.
transform = MinMaxNormalize(axis=0)  # normalize along the first dimension (electrode dimension)
transform(np.random.randn(32, 128)).shape
>>> (32, 128)
transform = MinMaxNormalize(axis=1)  # normalize along the second dimension (temporal dimension)
transform(np.random.randn(32, 128)).shape
>>> (32, 128)
- Parameters
min (np.ndarray, optional) – The minimum used in the normalization process, allowing the user to provide minimum statistics in np.ndarray format. When statistics are not provided, the statistics of the current sample are used for normalization.
max (np.ndarray, optional) – The maximum used in the normalization process, allowing the user to provide maximum statistics in np.ndarray format. When statistics are not provided, the statistics of the current sample are used for normalization.
axis (int, optional) – The dimension to normalize. When no dimension is specified, the entire data is normalized.
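Min-max normalization follows the same pattern; the sketch below assumes per-sample statistics as the fallback and is illustrative rather than the library's implementation.

```python
import numpy as np

def min_max_normalize(x, min_=None, max_=None, axis=None):
    # Scale the data into [0, 1] along the given axis; fall back to
    # the current sample's min/max when none are provided.
    if min_ is None:
        min_ = x.min(axis=axis, keepdims=axis is not None)
    if max_ is None:
        max_ = x.max(axis=axis, keepdims=axis is not None)
    return (x - min_) / (max_ - min_)

x = np.random.randn(32, 128)
scaled = min_max_normalize(x, axis=1)  # each electrode scaled to [0, 1]
print(scaled.shape)  # (32, 128)
```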
transforms.PickElectrode
- class torcheeg.transforms.PickElectrode(pick_list: List[int])[source]
Bases:
object
Select parts of electrode signals based on a given electrode index list.
transform = PickElectrode(PickElectrode.to_index_list(
    ['FP1', 'AF3', 'F3', 'F7', 'FC5', 'FC1', 'C3', 'T7',
     'CP5', 'CP1', 'P3', 'P7', 'PO3', 'O1', 'FP2', 'AF4',
     'F4', 'F8', 'FC6', 'FC2', 'C4', 'T8', 'CP6', 'CP2',
     'P4', 'P8', 'PO4', 'O2'], DEAP_CHANNEL_LIST))
transform(np.random.randn(32, 128)).shape
>>> (28, 128)
- Parameters
pick_list (list) – Selected electrode list, consisting of integers representing the corresponding electrode indices. to_index_list can be used to obtain an index list when only the electrode names, not their indices, are known.
- __call__(x: ndarray) ndarray [source]
- Parameters
x (np.ndarray) – The input EEG signals in shape of [number of electrodes, number of data points].
- Returns
The output signals with the shape of [number of picked electrodes, number of data points].
- Return type
np.ndarray
- static to_index_list(electrode_list: List[str], dataset_electrode_list: List[str], strict_mode=False) List[int] [source]
- Parameters
electrode_list (list) – Picked electrode names, consisting of strings.
dataset_electrode_list (list) – The electrode names of the EEG signals in the dataset, consisting of strings. For electrode position information, please refer to the constants grouped by dataset in datasets.constants.
strict_mode (bool) – Whether to use strict mode. In strict mode, unmatched picked electrode names raise errors; otherwise, unmatched picked electrode names are automatically ignored. (default: False)
- Returns
Selected electrode list, consisting of integers representing the corresponding electrode indices.
- Return type
list
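The name-to-index lookup and the pick itself reduce to plain list and array indexing. The sketch below uses a hypothetical eight-channel list standing in for a real dataset constant such as DEAP_CHANNEL_LIST; the `to_index_list` re-implementation is illustrative of the documented behavior, not the library's source.

```python
import numpy as np

# Hypothetical subset of a dataset channel list; real lists live in
# torcheeg.datasets.constants.
DATASET_CHANNELS = ['FP1', 'AF3', 'F3', 'F7', 'FC5', 'FC1', 'C3', 'T7']

def to_index_list(electrode_list, dataset_electrode_list, strict_mode=False):
    # Map electrode names to dataset indices; in strict mode an
    # unmatched name raises, otherwise it is silently skipped.
    indices = []
    for name in electrode_list:
        if name in dataset_electrode_list:
            indices.append(dataset_electrode_list.index(name))
        elif strict_mode:
            raise ValueError(f'electrode {name} not found in dataset')
    return indices

idx = to_index_list(['F3', 'C3', 'XX'], DATASET_CHANNELS)  # 'XX' is ignored
eeg = np.random.randn(8, 128)
picked = eeg[idx]  # row selection does the actual picking
print(idx, picked.shape)  # [2, 6] (2, 128)
```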
transforms.To2d
- class torcheeg.transforms.To2d[source]
Bases:
object
Taking the electrode index as the row index and the temporal index as the column index, a two-dimensional EEG signal representation with the size of [number of electrodes, number of data points] is formed. Since PyTorch performs convolution on tensors with an explicit channel dimension, an additional dimension is prepended.
transform = To2d()
transform(np.random.randn(32, 128)).shape
>>> (1, 32, 128)
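In plain NumPy this is a single `expand_dims` call; the snippet is a sketch of the documented behavior, not the library's source.

```python
import numpy as np

eeg = np.random.randn(32, 128)        # [electrodes, data points]
eeg_2d = np.expand_dims(eeg, axis=0)  # prepend the channel dimension
print(eeg_2d.shape)  # (1, 32, 128)
```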
transforms.ToGrid
- class torcheeg.transforms.ToGrid(channel_location: Dict[str, Tuple[int, int]])[source]
Bases:
object
A transform method to project the EEG signals of different channels onto a grid according to the electrode positions, forming a 3D EEG signal representation with the size of [number of data points, width of grid, height of grid]. For the electrode position information, please refer to constants grouped by dataset:
datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT
datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT
datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT
…
transform = ToGrid(DEAP_CHANNEL_LOCATION_DICT)
transform(np.random.randn(32, 128)).shape
>>> (128, 9, 9)
- Parameters
channel_location (dict) – Electrode location information in dictionary form, where each key is an electrode name and the corresponding value is the row index and column index of that electrode on the grid.
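Conceptually, the transform scatters each electrode's time series onto its grid cell and leaves the rest of the grid zero. The location dict below is a hypothetical three-electrode stand-in for a real constant such as DEAP_CHANNEL_LOCATION_DICT, and the helper is a sketch of the documented behavior.

```python
import numpy as np

# Hypothetical electrode locations; real dicts live in
# torcheeg.datasets.constants.
CHANNEL_LOCATION = {'FP1': (0, 3), 'FP2': (0, 5), 'CZ': (4, 4)}

def to_grid(eeg, channel_location, width=9, height=9):
    # Scatter each electrode's samples onto its (row, col) cell;
    # unoccupied cells stay zero. Output: [data points, height, width].
    grid = np.zeros((eeg.shape[1], height, width))
    for i, (row, col) in enumerate(channel_location.values()):
        grid[:, row, col] = eeg[i]
    return grid

eeg = np.random.randn(3, 128)
print(to_grid(eeg, CHANNEL_LOCATION).shape)  # (128, 9, 9)
```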
transforms.ToInterpolatedGrid
- class torcheeg.transforms.ToInterpolatedGrid(channel_location: Dict[str, Tuple[int, int]])[source]
Bases:
object
A transform method to project the EEG signals of different channels onto a grid according to the electrode positions, forming a 3D EEG signal representation with the size of [number of data points, width of grid, height of grid]. For the electrode position information, please refer to constants grouped by dataset:
datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT
datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT
datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT
…
transform = ToInterpolatedGrid(DEAP_CHANNEL_LOCATION_DICT)
transform(np.random.randn(32, 128)).shape
>>> (128, 9, 9)
In particular, missing values on the grid are filled in using cubic interpolation.
- Parameters
channel_location (dict) – Electrode location information in dictionary form, where each key is an electrode name and the corresponding value is the row index and column index of that electrode on the grid.
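A sketch of the interpolated variant using SciPy's `griddata` with cubic interpolation, looping over time points. The five-electrode location dict is hypothetical, and filling cells outside the electrodes' convex hull with zeros is an assumption made to keep the sketch self-contained; the library may handle the boundary differently.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical electrode grid positions; real dicts live in
# torcheeg.datasets.constants.
CHANNEL_LOCATION = {'FP1': (0, 3), 'FP2': (0, 5), 'C3': (4, 2),
                    'C4': (4, 6), 'OZ': (8, 4)}

def to_interpolated_grid(eeg, channel_location, width=9, height=9):
    # Cubic interpolation over the full grid at each time point;
    # cells outside the electrodes' convex hull are set to zero.
    points = np.array(list(channel_location.values()), dtype=float)
    rows, cols = np.mgrid[0:height, 0:width]
    grid = np.empty((eeg.shape[1], height, width))
    for t in range(eeg.shape[1]):
        grid[t] = griddata(points, eeg[:, t], (rows, cols),
                           method='cubic', fill_value=0.0)
    return grid

eeg = np.random.randn(5, 16)
print(to_interpolated_grid(eeg, CHANNEL_LOCATION).shape)  # (16, 9, 9)
```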
transforms.Concatenate
- class torcheeg.transforms.Concatenate(transforms: Sequence[Callable])[source]
Bases:
object
Merge the calculation results of multiple transforms, which are used when feature fusion is required.
transform = Concatenate([
    BandDifferentialEntropy(),
    BandMeanAbsoluteDeviation()
])
transform(np.random.randn(32, 128)).shape
>>> (32, 8)
- Parameters
transforms (list, tuple) – A sequence of transforms whose outputs are concatenated along the feature dimension.
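Feature fusion here amounts to applying each transform and joining the results along the last axis. The two stand-in extractors below each produce four per-electrode features, mirroring how two band transforms with four sub-bands yield a (32, 8) result; they are illustrations, not library transforms.

```python
import numpy as np

def concatenate(transforms, eeg):
    # Apply each transform and join the features along the last axis.
    return np.concatenate([t(eeg) for t in transforms], axis=-1)

# Two stand-in per-electrode feature extractors (4 features each).
t1 = lambda x: np.stack([x.mean(-1), x.std(-1), x.min(-1), x.max(-1)], axis=-1)
t2 = lambda x: np.stack([np.abs(x).mean(-1), np.median(x, -1),
                         x.var(-1), np.ptp(x, -1)], axis=-1)

eeg = np.random.randn(32, 128)
print(concatenate([t1, t2], eeg).shape)  # (32, 8)
```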