
torcheeg.utils

Utility functions for visualizing data of different structures, useful for debugging and display.

plot_raw_topomap

torcheeg.utils.plot_raw_topomap(tensor: torch.Tensor, channel_list: List[str], sampling_rate: int, plot_second_list: List[int] = [0.0, 0.25, 0.5, 0.75], montage: mne.channels.montage.DigMontage = mne.channels.make_standard_montage('standard_1020'))[source]

Plot a topographic map of the input raw EEG signal as an image.

import torch
from torcheeg.utils import plot_raw_topomap
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LIST

eeg = torch.randn(32, 128)
img = plot_raw_topomap(eeg,
                       channel_list=DEAP_CHANNEL_LIST,
                       sampling_rate=128)
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_raw_topomap

Parameters
  • tensor (torch.Tensor) – The input EEG signal, the shape should be [number of channels, number of data points].

  • channel_list (list) – The list of channel names corresponding to the input EEG signal. If a TorchEEG dataset is used, please refer to the CHANNEL_LIST-related constants in the torcheeg.datasets.constants module.

  • sampling_rate (int) – Sample rate of the data.

  • plot_second_list (list) – The time (second) at which the topographic map is drawn. (default: [0.0, 0.25, 0.5, 0.75])

  • montage (mne.channels.DigMontage) – Channel positions and digitization points defined in mne. (default: mne.channels.make_standard_montage('standard_1020'))

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
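
Outside of a notebook, the returned np.ndarray can be displayed or written to disk like any other image. A minimal sketch using matplotlib (matplotlib is an assumption here, not something this function requires; the constant import path may differ between TorchEEG versions):

import torch
import matplotlib.pyplot as plt
from torcheeg.utils import plot_raw_topomap
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LIST

# Render the topographic map to an in-memory image array.
eeg = torch.randn(32, 128)
img = plot_raw_topomap(eeg, channel_list=DEAP_CHANNEL_LIST, sampling_rate=128)

# Display and save the array (assumes img is a standard RGB(A) image array).
plt.imshow(img)
plt.axis('off')
plt.savefig('raw_topomap.png', bbox_inches='tight')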

plot_feature_topomap

torcheeg.utils.plot_feature_topomap(tensor: torch.Tensor, channel_list: List[str], feature_list: Optional[List[str]] = None, montage: mne.channels.montage.DigMontage = mne.channels.make_standard_montage('standard_1020'))[source]

Plot a topographic map of the input EEG features as an image.

import torch
from torcheeg.utils import plot_feature_topomap
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LIST

eeg = torch.randn(32, 4)
img = plot_feature_topomap(eeg,
                           channel_list=DEAP_CHANNEL_LIST,
                           feature_list=["theta", "alpha", "beta", "gamma"])
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_feature_topomap

Parameters
  • tensor (torch.Tensor) – The input EEG signal, the shape should be [number of channels, dimensions of features].

  • channel_list (list) – The list of channel names corresponding to the input EEG signal. If a TorchEEG dataset is used, please refer to the CHANNEL_LIST-related constants in the torcheeg.datasets.constants module.

  • feature_list (list) – The names of the feature dimensions displayed on the output image; its length should match the number of feature dimensions. If set to None, the dimension index of each feature is used instead. (default: None)

  • montage (mne.channels.DigMontage) – Channel positions and digitization points defined in mne. (default: mne.channels.make_standard_montage('standard_1020'))

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
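
In practice the feature tensor usually comes from a feature transform rather than torch.randn. A minimal sketch that computes band differential entropy with torcheeg.transforms.BandDifferentialEntropy and plots it; the transform's call pattern and its four default bands (theta, alpha, beta, gamma) are assumptions based on TorchEEG's transform API and may differ between versions:

import numpy as np
import torch
from torcheeg.transforms import BandDifferentialEntropy
from torcheeg.utils import plot_feature_topomap
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LIST

# Differential entropy per channel and per frequency band -> [32 channels, 4 bands].
eeg = np.random.randn(32, 128)
de = BandDifferentialEntropy(sampling_rate=128)(eeg=eeg)['eeg']

img = plot_feature_topomap(torch.from_numpy(de),
                           channel_list=DEAP_CHANNEL_LIST,
                           feature_list=["theta", "alpha", "beta", "gamma"])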

plot_signal

torcheeg.utils.plot_signal(tensor: torch.Tensor, channel_list: List[str], sampling_rate: int, montage: mne.channels.montage.DigMontage = mne.channels.make_standard_montage('standard_1020'))[source]

Plot signal values of the input raw EEG as an image.

import torch
from torcheeg.utils import plot_signal
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LIST

eeg = torch.randn(32, 128)
img = plot_signal(eeg,
                  channel_list=DEAP_CHANNEL_LIST,
                  sampling_rate=128)
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_signal

Parameters
  • tensor (torch.Tensor) – The input EEG signal, the shape should be [number of channels, number of data points].

  • channel_list (list) – The list of channel names corresponding to the input EEG signal. If a TorchEEG dataset is used, please refer to the CHANNEL_LIST-related constants in the torcheeg.datasets.constants module.

  • sampling_rate (int) – Sample rate of the data.

  • montage (mne.channels.DigMontage) – Channel positions and digitization points defined in mne. (default: mne.channels.make_standard_montage('standard_1020'))

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
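
To inspect only a few channels, slice both the tensor and the channel list before calling the function. A minimal sketch; the channel subset below is arbitrary and only illustrates keeping the two arguments aligned:

import torch
from torcheeg.utils import plot_signal
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LIST

eeg = torch.randn(32, 128)
# Keep the first four channels; the channel list must stay aligned with the tensor rows.
picks = [0, 1, 2, 3]
img = plot_signal(eeg[picks],
                  channel_list=[DEAP_CHANNEL_LIST[i] for i in picks],
                  sampling_rate=128)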

plot_3d_tensor

torcheeg.utils.plot_3d_tensor(tensor: Tensor, color: Union[Colormap, str] = 'hsv')[source]

Visualize a 3-D tensor in 3-D space.

import torch
from torcheeg.utils import plot_3d_tensor

eeg = torch.randn(128, 9, 9)
img = plot_3d_tensor(eeg)
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_3d_tensor

Parameters
  • tensor (torch.Tensor) – The input 3-D tensor.

  • color (colors.Colormap or str) – The color map used for the face color of the axes. (default: hsv)

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
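
A grid-shaped 3-D tensor is typically produced from raw EEG with the ToGrid transform and then visualized. A minimal sketch, assuming ToGrid's usual call pattern, its [data points, 9, 9] output for DEAP, and the DEAP channel-location constant (all of which may differ between TorchEEG versions):

import numpy as np
import torch
from torcheeg.transforms import ToGrid
from torcheeg.utils import plot_3d_tensor
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_CHANNEL_LOCATION_DICT

# Map 32 channels onto a 9x9 electrode grid -> assumed shape [128 data points, 9, 9].
eeg = np.random.randn(32, 128)
grid = ToGrid(DEAP_CHANNEL_LOCATION_DICT)(eeg=eeg)['eeg']
img = plot_3d_tensor(torch.from_numpy(grid))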

plot_2d_tensor

torcheeg.utils.plot_2d_tensor(tensor: Tensor, color: Union[Colormap, str] = 'hsv')[source]

Visualize a 2-D tensor in 2-D space.

import torch
from torcheeg.utils import plot_2d_tensor

eeg = torch.randn(9, 9)
img = plot_2d_tensor(eeg)
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_2d_tensor

Parameters
  • tensor (torch.Tensor) – The input 2-D tensor.

  • color (colors.Colormap or str) – The color map used for the face color of the axes. (default: hsv)

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
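
A typical 2-D input is a single slice of a grid-shaped sample, for example one time step of a [data points, 9, 9] tensor. A minimal sketch:

import torch
from torcheeg.utils import plot_2d_tensor

# One 9x9 grid slice, e.g. the first time step of a [128, 9, 9] sample.
grid = torch.randn(128, 9, 9)
img = plot_2d_tensor(grid[0])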

plot_adj_connectivity

torcheeg.utils.plot_adj_connectivity(adj: Tensor, channel_list: Optional[list] = None, region_list: Optional[list] = None, num_connectivity: int = 60, linewidth: float = 1.5)[source]

Visualize connectivity between nodes in an adjacency matrix, using circular networks.

import torch
from torcheeg.utils import plot_adj_connectivity
from torcheeg.datasets.constants.emotion_recognition.seed import SEED_CHANNEL_LIST
adj = torch.randn(62, 62)  # relationship between 62 electrodes
img = plot_adj_connectivity(adj, SEED_CHANNEL_LIST)
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_adj_connectivity

Parameters
  • adj (torch.Tensor) – The input 2-D adjacency tensor.

  • channel_list (list) – The electrode names of the rows/columns in the input adjacency matrix, used to label the nodes on the circular network. If set to None, the electrode's index is used instead. (default: None)

  • region_list (list) – The list of brain regions into which the electrodes are divided. If set, electrodes in the same region are aligned on the map and filled with the same color. (default: None)

  • num_connectivity (int) – The number of connections to retain on the circular network; edges with larger weights in the adjacency matrix are retained first, and the rest are omitted. (default: 60)

  • linewidth (float) – Line width to use for connections. (default: 1.5)

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
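
Instead of a random matrix, the adjacency can be derived from the data itself, for example from the channel-wise correlation of a raw signal. A minimal sketch using torch.corrcoef; correlation as the connectivity measure is only an illustration, not the method prescribed by TorchEEG:

import torch
from torcheeg.utils import plot_adj_connectivity
from torcheeg.datasets.constants.emotion_recognition.seed import SEED_CHANNEL_LIST

# Channel-wise Pearson correlation of a [62 channels, 128 points] signal -> [62, 62].
eeg = torch.randn(62, 128)
adj = torch.corrcoef(eeg).abs()
img = plot_adj_connectivity(adj, SEED_CHANNEL_LIST, num_connectivity=60)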

plot_graph

torcheeg.utils.pyg.plot_graph(data: Data, channel_location_dict: Dict[str, List[int]], color: Union[Colormap, str] = 'hsv')[source]

Visualize a graph structure. For the electrode position information, please refer to constants grouped by dataset:

  • datasets.constants.emotion_recognition.deap.DEAP_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.dreamer.DREAMER_CHANNEL_LOCATION_DICT

  • datasets.constants.emotion_recognition.seed.SEED_CHANNEL_LOCATION_DICT

import numpy as np
from torcheeg.transforms.pyg import ToG
from torcheeg.utils.pyg import plot_graph
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_ADJACENCY_MATRIX, DEAP_CHANNEL_LOCATION_DICT
eeg = np.random.randn(32, 128)
g = ToG(DEAP_ADJACENCY_MATRIX)(eeg=eeg)['eeg']
img = plot_graph(g, DEAP_CHANNEL_LOCATION_DICT)
# If using Jupyter, the output image will be drawn in the notebook.
The output image of plot_graph

Parameters
  • data (torch_geometric.data.Data) – The input graph structure represented by torch_geometric.

  • channel_location_dict (dict) – Electrode location information. Represented in dictionary form, where key corresponds to the electrode name and value corresponds to the row index and column index of the electrode on the grid.

  • color (colors.Colormap or str) – The color map used for the face color of the axes. (default: hsv)

Returns

The output image in the form of np.ndarray.

Return type

np.ndarray
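
As with the other utilities, the returned np.ndarray can be written to disk for inspection. A minimal sketch using Pillow, assuming the returned array is a uint8 RGB(A) image (Pillow is an assumption here, not a TorchEEG requirement):

import numpy as np
from PIL import Image
from torcheeg.transforms.pyg import ToG
from torcheeg.utils.pyg import plot_graph
from torcheeg.datasets.constants.emotion_recognition.deap import DEAP_ADJACENCY_MATRIX, DEAP_CHANNEL_LOCATION_DICT

# Build a graph sample from raw EEG and render it.
eeg = np.random.randn(32, 128)
g = ToG(DEAP_ADJACENCY_MATRIX)(eeg=eeg)['eeg']
img = plot_graph(g, DEAP_CHANNEL_LOCATION_DICT)

# Write the rendered image to disk (assumes a uint8 RGB(A) array).
Image.fromarray(img).save('graph.png')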
