BEncoder¶
- class torcheeg.models.BEncoder(in_channels: int = 4, grid_size: Tuple[int, int] = (9, 9), hid_channels: int = 64)[source]¶
The variational autoencoder consists of two parts: an encoder and a decoder. The encoder compresses the input into the latent space. The decoder receives information sampled from the latent space as input and reconstructs an output as close as possible to the ground truth. Via the reparameterization trick, the latent vector is pushed toward a Gaussian distribution under a KL-divergence penalty. This class implements the encoder part.
```python
import torch
from torcheeg.models import BEncoder

encoder = BEncoder(in_channels=4)
mock_eeg = torch.randn(1, 4, 9, 9)
mu, logvar = encoder(mock_eeg)
```
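For context, a minimal sketch (not part of torcheeg itself) of the reparameterization trick mentioned above, written in plain PyTorch. It shows how a latent sample `z` would typically be drawn from the `mu` and `logvar` an encoder like this one returns:

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # z = mu + sigma * eps with eps ~ N(0, I); sampling stays differentiable
    # with respect to mu and logvar.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

# Shapes match the encoder's output: [batch_size, hid_channels].
mu = torch.zeros(1, 64)
logvar = torch.zeros(1, 64)
z = reparameterize(mu, logvar)
# z has the same shape as mu: [1, 64]
```

This sampled `z` is what a matching decoder would receive as input during training.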
- Parameters:
  - in_channels (int) – The feature dimension of each electrode. (default: 4)
  - grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (9, 9))
  - hid_channels (int) – The number of hidden nodes in the first convolutional layer, which is also used as the dimension of the output mu and logvar. (default: 64)
- forward(x: Tensor)[source]¶
- Parameters:
  - x (torch.Tensor) – EEG signal representation; the ideal input shape is [n, 4, 9, 9]. Here, n corresponds to the batch size, 4 corresponds to in_channels, and (9, 9) corresponds to grid_size.
- Returns:
  The mean and log-variance vectors produced by the encoder. Both vectors have shape [n, 64]. Here, n corresponds to the batch size, and 64 corresponds to hid_channels.
- Return type:
  tuple[2,]
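As a usage note (a sketch, not part of the torcheeg API), the returned mean and log-variance vectors are typically combined into the KL-divergence penalty that supervises the latent space, using the standard closed form for a diagonal Gaussian against N(0, I):

```python
import torch

# Stand-ins for the encoder's outputs, shape [batch_size, hid_channels].
mu = torch.randn(8, 64)
logvar = torch.randn(8, 64)

# KL(q(z|x) || N(0, I)) for a diagonal Gaussian, averaged over the batch.
kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
```

This scalar is added to the reconstruction loss when training the full variational autoencoder.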