FBCNet
- class torcheeg.models.FBCNet(num_electrodes: int = 20, chunk_size: int = 1000, in_channels: int = 9, num_S: int = 32, num_classes: int = 2, temporal: str = 'LogVarLayer', stride_factor: int = 4, weight_norm: bool = True)[source]
An Efficient Multi-view Convolutional Neural Network for Brain-Computer Interface. For more details, please refer to the following information.
Paper: Mane R, Chew E, Chua K, et al. FBCNet: A multi-view convolutional neural network for brain-computer interface[J]. arXiv preprint arXiv:2104.01233, 2021.
Related Project: https://github.com/ravikiran-mane/FBCNet
Below is a recommended configuration for use in emotion recognition tasks:
```python
from torcheeg.datasets import DEAPDataset
from torcheeg import transforms
from torcheeg.models import FBCNet
from torch.utils.data import DataLoader

dataset = DEAPDataset(root_path='./data_preprocessed_python',
                      chunk_size=512,
                      num_baseline=1,
                      baseline_chunk_size=512,
                      offline_transform=transforms.BandSignal(),
                      online_transform=transforms.ToTensor(),
                      label_transform=transforms.Compose([
                          transforms.Select('valence'),
                          transforms.Binary(5.0),
                      ]))

model = FBCNet(num_classes=2,
               num_electrodes=32,
               chunk_size=512,
               in_channels=4,
               num_S=32)

x, y = next(iter(DataLoader(dataset, batch_size=64)))
model(x)
```
- Parameters:
  - num_electrodes (int) – The number of electrodes. (default: 20)
  - chunk_size (int) – Number of data points included in each EEG chunk. (default: 1000)
  - in_channels (int) – The number of channels of the signal corresponding to each electrode. If the original signal is used as input, in_channels is set to 1; if the original signal is split into multiple sub-bands, in_channels is set to the number of bands. (default: 9)
  - num_S (int) – The number of spatial convolution blocks. (default: 32)
  - num_classes (int) – The number of classes to predict. (default: 2)
  - temporal (str) – The temporal layer used, with options including VarLayer, StdLayer, LogVarLayer, MeanLayer, and MaxLayer, used to compute statistics using different techniques in the temporal dimension. (default: 'LogVarLayer')
  - stride_factor (int) – The stride factor, i.e., the number of temporal windows each chunk is split into before the temporal layer is applied. (default: 4)
  - weight_norm (bool) – Whether to use the weight renormalization technique in Conv2dWithConstraint. (default: True)
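To make the temporal layer concrete, here is a minimal pure-Python sketch (an assumption about the computation, not the library's exact code) of the default LogVarLayer statistic: the chunk is split into stride_factor temporal windows and the log-variance of each window becomes one feature.

```python
import math

def log_var_features(signal, stride_factor=4):
    """Sketch of the LogVarLayer statistic: split a 1-D signal into
    `stride_factor` windows and return the log-variance of each window."""
    n = len(signal)
    assert n % stride_factor == 0, "chunk_size must be divisible by stride_factor"
    window = n // stride_factor
    feats = []
    for i in range(stride_factor):
        seg = signal[i * window:(i + 1) * window]
        mean = sum(seg) / window
        var = sum((s - mean) ** 2 for s in seg) / window
        feats.append(math.log(max(var, 1e-6)))  # clamp to avoid log(0)
    return feats

feats = log_var_features([0.0, 1.0, 0.0, -1.0] * 32, stride_factor=4)
print(len(feats))  # one feature per temporal window -> 4
```

Replacing log-variance with the standard deviation, mean, or max of each window yields the StdLayer, MeanLayer, and MaxLayer variants, respectively.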
- forward(x: Tensor) → Tensor[source]
  - Parameters:
    x (torch.Tensor) – EEG signal representation; the ideal input shape is [n, in_channels, num_electrodes, chunk_size], where n corresponds to the batch size.
  - Returns:
    The predicted probability that the samples belong to the classes.
  - Return type:
    torch.Tensor[number of samples, number of classes]
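The shape bookkeeping can be sketched with a small hypothetical helper (not part of torcheeg), under the assumption that the spatial block expands each of the in_channels sub-bands into num_S feature maps, the temporal layer collapses chunk_size into stride_factor statistics, and the classifier therefore sees in_channels * num_S * stride_factor features:

```python
def fbcnet_shapes(batch, num_electrodes=20, chunk_size=1000,
                  in_channels=9, num_S=32, stride_factor=4, num_classes=2):
    """Hypothetical helper: derive FBCNet's input shape, classifier feature
    dimension, and output shape from its constructor arguments."""
    assert chunk_size % stride_factor == 0, \
        "chunk_size must be divisible by stride_factor"
    input_shape = (batch, in_channels, num_electrodes, chunk_size)
    feature_dim = in_channels * num_S * stride_factor  # classifier input size
    output_shape = (batch, num_classes)
    return input_shape, feature_dim, output_shape

# Same settings as the DEAP example above:
inp, feat, out = fbcnet_shapes(64, num_electrodes=32, chunk_size=512,
                               in_channels=4)
print(inp)   # (64, 4, 32, 512)
print(feat)  # 4 * 32 * 4 = 512
print(out)   # (64, 2)
```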