BCDecoder¶
- class torcheeg.models.BCDecoder(in_channels: int = 64, out_channels: int = 4, grid_size: Tuple[int, int] = (9, 9), num_classes: int = 3)[source]¶
TorchEEG provides an EEG feature decoder based on a CNN architecture and a CVAE for generating EEG grid representations of different frequency bands from a given class label. In particular, the expected labels are additionally provided to guide the decoder to reconstruct samples of the specified class.
Related Project: https://github.com/timbmg/VAE-CVAE-MNIST/blob/master/models.py
```python
import torch
from torcheeg.models import BCEncoder, BCDecoder

encoder = BCEncoder(in_channels=4, num_classes=3)
decoder = BCDecoder(in_channels=64, out_channels=4, num_classes=3)
y = torch.randint(low=0, high=3, size=(1,))
mock_eeg = torch.randn(1, 4, 9, 9)
mu, logvar = encoder(mock_eeg, y)
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
z = eps * std + mu
fake_X = decoder(z, y)
```
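The reparameterization step used in the snippet above (z = mu + eps * std, with eps drawn from a standard normal) can be sketched independently of torcheeg. A minimal NumPy version, for illustration only (the actual model uses torch tensors so that sampling stays differentiable):

```python
import numpy as np

def reparameterize(mu, logvar, rng=None):
    # z = mu + eps * sigma, eps ~ N(0, I). sigma = exp(0.5 * logvar),
    # since logvar stores the log of the variance.
    rng = np.random.default_rng() if rng is None else rng
    std = np.exp(0.5 * logvar)
    eps = rng.standard_normal(std.shape)
    return mu + eps * std

# Hypothetical encoder outputs: batch of 1, latent dimension 64,
# matching the decoder's in_channels.
mu = np.zeros((1, 64))
logvar = np.zeros((1, 64))
z = reparameterize(mu, logvar)  # shape (1, 64)
```

The same z can then be passed to the decoder together with the label y.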
- Parameters:
  - in_channels (int) – The input feature dimension (of noise vectors). (default: 64)
  - out_channels (int) – The generated feature dimension of each electrode. (default: 4)
  - grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (9, 9))
  - num_classes (int) – The number of classes. (default: 3)
- forward(x: Tensor, y: Tensor)[source]¶
- Parameters:
  - x (torch.Tensor) – The latent feature vector z, obtained from the encoder's mean and log-variance vectors via the reparameterization trick. Its shape should be [n, 64], where n corresponds to the batch size and 64 corresponds to in_channels.
  - y (torch.Tensor) – Category labels (int) for a batch of samples. The shape should be [n,], where n corresponds to the batch size.
- Returns:
  the decoded EEG grid representation of shape [n, 4, 9, 9]. Here, n corresponds to the batch size, 4 corresponds to out_channels, and (9, 9) corresponds to grid_size.
- Return type:
  torch.Tensor of shape [n, 4, 9, 9]
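A CVAE decoder conditions its output on the label by combining z with a representation of y before decoding. The concatenation scheme below is an assumption for illustration, and the random linear projection stands in for BCDecoder's learned layers; only the shapes match the real model:

```python
import numpy as np

def one_hot(y, num_classes):
    # Encode integer labels as one-hot row vectors.
    out = np.zeros((y.shape[0], num_classes))
    out[np.arange(y.shape[0]), y] = 1.0
    return out

def toy_conditional_decode(z, y, num_classes=3, out_channels=4, grid_size=(9, 9)):
    # Concatenate the latent vector with the one-hot label, then project
    # to the target grid shape with a fixed random matrix. This is a toy
    # stand-in for the decoder's learned mapping, not its architecture.
    h = np.concatenate([z, one_hot(y, num_classes)], axis=1)
    rng = np.random.default_rng(0)
    W = rng.standard_normal((h.shape[1], out_channels * grid_size[0] * grid_size[1]))
    return (h @ W).reshape(-1, out_channels, *grid_size)

z = np.random.randn(1, 64)   # latent vector, matching in_channels
y = np.array([2])            # class label for the sample
fake_X = toy_conditional_decode(z, y)  # shape (1, 4, 9, 9)
```

Because the label enters the decoder's input, samples drawn from the same z but different y decode to class-specific outputs.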