BGlow¶
- class torcheeg.models.BGlow(in_channels: int = 4, grid_size: tuple = (32, 32), hidden_channels: int = 64, num_steps: int = 32, num_blocks: int = 3, actnorm_scale: float = 1.0, flow_permutation: str = 'invconv', flow_coupling: str = 'affine', LU_decomposed: bool = True, learn_top: bool = True)[source]¶
This class implements a normalizing flow model, which can generate samples close to the true data distribution. A flow-based model trains an encoder that maps the input to a latent variable and constrains that latent variable to follow a standard normal distribution. By design, the encoder is invertible, so once it is trained, the corresponding decoder can generate samples from Gaussian noise via the inverse operation. In particular, Glow is an easy-to-use flow-based model that replaces the channel-axis permutation with an invertible 1x1 convolution.
Paper: Kingma D P, Dhariwal P. Glow: Generative flow with invertible 1x1 convolutions[J]. Advances in neural information processing systems, 2018, 31.
Related Project: https://github.com/y0ast/Glow-PyTorch/
Related Project: https://github.com/ikostrikov/pytorch-flows/
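The key trick mentioned above, the invertible 1x1 convolution, can be illustrated in a few lines. This is a conceptual NumPy sketch, not torcheeg's implementation: every spatial position's channel vector is mixed by one shared invertible matrix W, so the operation can be undone exactly with W's inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, Wd = 4, 8, 8
W = rng.standard_normal((C, C))     # in practice initialized to be well-conditioned
x = rng.standard_normal((C, H, Wd))

# forward: y[:, i, j] = W @ x[:, i, j] at every pixel (a 1x1 convolution)
y = np.einsum('cd,dhw->chw', W, x)

# inverse: recover x exactly by applying W^{-1} channel-wise
x_rec = np.einsum('cd,dhw->chw', np.linalg.inv(W), y)

# the change-of-variables log-determinant is shared by all H*W positions
log_det = H * Wd * np.log(abs(np.linalg.det(W)))

print(np.allclose(x, x_rec))  # reconstruction is exact up to float error
```

Because the log-determinant of a full permutation-plus-mixing matrix is cheap to compute (especially with the LU decomposition enabled by `LU_decomposed`), this layer is both expressive and tractable for likelihood training.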
Below is a recommended suite for use in EEG generation:

import torch

from torcheeg.models.flow.bglow import BGlow

eeg = torch.randn(1, 4, 32, 32)
model = BGlow()
nll_loss = model(eeg)
fake_X = model.sample(num=1, temperature=1.0)
- Parameters:
  - in_channels (int) – The feature dimension of each electrode. (default: 4)
  - grid_size (tuple) – Spatial dimensions of the grid-like EEG representation. (default: (32, 32))
  - hidden_channels (int) – The basic number of hidden channels in the network blocks. (default: 64)
  - num_steps (int) – The number of steps in the flow; each step contains an affine coupling layer, an invertible 1x1 convolution, and an actnorm layer. (default: 32)
  - num_blocks (int) – The number of blocks; each block includes a squeeze, steps of flow, and a split. (default: 3)
  - actnorm_scale (float) – The pre-defined scale factor in the actnorm layer. (default: 1.0)
  - flow_permutation (str) – The flow permutation method; options include invconv, shuffle, and reverse. (default: 'invconv')
  - flow_coupling (str) – The flow coupling method; options include additive and affine. (default: 'affine')
  - LU_decomposed (bool) – Whether to use LU-decomposed 1x1 convolutions. (default: True)
  - learn_top (bool) – Whether to train the top layer (prior). (default: True)
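The `flow_coupling` options can be understood with a small sketch. The following is an assumed simplification in NumPy, not torcheeg's code: in a coupling layer, half of the channels pass through unchanged and parameterize a scale and shift of the other half, so the layer is trivially invertible no matter how complex the parameterizing network is (the `toy_net` below is a hypothetical stand-in for that network).

```python
import numpy as np

def coupling_forward(x, net):
    x1, x2 = np.split(x, 2)
    log_s, t = net(x1)                  # any function of x1; never needs inverting
    y2 = x2 * np.exp(log_s) + t         # 'additive' coupling would use y2 = x2 + t
    return np.concatenate([x1, y2])

def coupling_inverse(y, net):
    y1, y2 = np.split(y, 2)
    log_s, t = net(y1)                  # recompute the same scale/shift from y1 == x1
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2])

# a toy stand-in for the coupling network: scale and shift derived from x1
toy_net = lambda h: (np.tanh(h), h ** 2)

x = np.linspace(-1.0, 1.0, 8)
y = coupling_forward(x, toy_net)
x_rec = coupling_inverse(y, toy_net)
print(np.allclose(x, x_rec))  # expect True: the coupling inverts exactly
```

The `flow_permutation` options then decide how channels are shuffled between coupling layers so that every channel eventually gets transformed, with invconv (the invertible 1x1 convolution) being the learned variant proposed in the Glow paper.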
- forward(x: Tensor) → Tensor [source]¶
- Parameters:
  - x (torch.Tensor) – EEG signal representation. The ideal input shape is [n, 4, 32, 32]. Here, n corresponds to the batch size, 4 corresponds to in_channels, and (32, 32) corresponds to grid_size.
- Returns:
  The latent representation, and the bits-per-dimension (BPD) negative log-likelihood.
- Return type:
torch.Tensor
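The bits-per-dimension figure is the standard way to report flow likelihoods independently of input size. As a hedged sketch of that conversion (the usual convention, assumed here rather than read from torcheeg's source), a negative log-likelihood measured in nats is divided by the number of input dimensions and by ln(2):

```python
import math

def nll_to_bpd(nll_nats, shape=(4, 32, 32)):
    # 4 * 32 * 32 = 4096 dimensions for the default BGlow input shape
    num_dims = math.prod(shape)
    return nll_nats / (num_dims * math.log(2))

print(nll_to_bpd(4096 * math.log(2)))  # 1.0 bit per dimension by construction
```

Lower BPD means the model assigns higher likelihood to the data, which makes the value comparable across models trained on inputs of different shapes.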