BCEncoder¶
- class torcheeg.models.BCEncoder(in_channels: int = 4, grid_size: Tuple[int, int] = (9, 9), hid_channels: int = 64, num_classes: int = 3)[source][source]¶
TorchEEG provides an EEG feature encoder based on a CNN architecture and a CVAE for generating EEG grid representations of different frequency bands conditioned on a given class label. In particular, the expected class labels are provided alongside the input, guiding the encoder to derive the mean and log-variance vectors from the input data and labels.
Related Project: https://github.com/timbmg/VAE-CVAE-MNIST/blob/master/models.py
import torch

from torcheeg.models import BCEncoder

encoder = BCEncoder(in_channels=4, num_classes=3)
y = torch.randint(low=0, high=3, size=(1,))
mock_eeg = torch.randn(1, 4, 9, 9)
mu, logvar = encoder(mock_eeg, y)
- Parameters:
  - in_channels (int) – The feature dimension of each electrode. (default: 4)
  - grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (9, 9))
  - hid_channels (int) – The number of hidden nodes in the first convolutional layer, which is also used as the dimension of the output mu and logvar. (default: 64)
  - num_classes (int) – The number of classes. (default: 3)
- forward(x: Tensor, y: Tensor)[source][source]¶
- Parameters:
  - x (torch.Tensor) – EEG signal representation; the ideal input shape is [n, 4, 9, 9]. Here, n corresponds to the batch size, 4 corresponds to in_channels, and (9, 9) corresponds to grid_size.
  - y (torch.Tensor) – Category labels (int) for a batch of samples. The shape should be [n,]. Here, n corresponds to the batch size.
- Returns:
  The mean and log-variance vectors obtained by the encoder. The shapes of the feature vectors are both [n, 64]. Here, n corresponds to the batch size, and 64 corresponds to hid_channels.
- Return type:
  tuple[2,]
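The mu and logvar returned by forward are typically turned into a latent sample via the standard CVAE reparameterization trick before being passed to a decoder. The sketch below is an assumption-laden illustration: reparameterize is a hypothetical helper (not part of torcheeg), and mocked zero tensors of shape [n, 64] stand in for real encoder outputs.

```python
import torch

# Hypothetical helper illustrating the standard CVAE reparameterization
# step; not part of torcheeg. It converts the encoder's log-variance to a
# standard deviation and draws a latent sample z ~ N(mu, sigma^2).
def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * logvar)  # log-variance -> standard deviation
    eps = torch.randn_like(std)    # noise sampled from N(0, I)
    return mu + eps * std          # latent sample z

# Mocked outputs with the documented shape [n, hid_channels] = [1, 64],
# standing in for the (mu, logvar) tuple returned by BCEncoder.forward.
mu = torch.zeros(1, 64)
logvar = torch.zeros(1, 64)
z = reparameterize(mu, logvar)
print(z.shape)  # torch.Size([1, 64])
```

With zero mu and zero logvar this reduces to sampling from a standard normal, which makes the shape of the latent sample easy to check in isolation.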