Image Reconstruction

Notebook configuration

Choose between an example with a sparse probe and one with a high-density probe for DOT. The notebook will load example data accordingly.

Also specify whether precomputed results of the photon propagation should be used and whether the 3D visualizations should be interactive.

[1]:
# choose between two datasets
DATASET = "fingertappingDOT" # high-density montage
#DATASET = "fingertapping"   # sparse montage

# choose a head model
HEAD_MODEL = "colin27"
# HEAD_MODEL = "icbm152"

# choose a forward model
FORWARD_MODEL = "MCX" # photon Monte Carlo simulation
#FORWARD_MODEL = "NIRFASTER" # finite element method - NOTE: you must have NIRFASTer installed by running <$ bash install_nirfaster.sh CPU # or GPU> from within your cedalion root directory.

# set this flag to False to actually compute the forward model results
PRECOMPUTED_FLUENCE = True

# set this flag to True to enable interactive 3D plots
INTERACTIVE_PLOTS = False



%load_ext autoreload
%autoreload 2
[2]:
import pyvista as pv

if INTERACTIVE_PLOTS:
    pv.set_jupyter_backend('server')
else:
    pv.set_jupyter_backend('static')

import time
import os

import matplotlib.pyplot as p
import numpy as np
import xarray as xr
from IPython.display import Image
import warnings
from pint.errors import UnitStrippedWarning

import cedalion
import cedalion.dataclasses as cdc
import cedalion.datasets
import cedalion.nirs
import cedalion.sigproc.quality as quality
import cedalion.geometry.registration
import cedalion.geometry.segmentation
import cedalion.imagereco.forward_model as fw
import cedalion.imagereco.tissue_properties
import cedalion.io
import cedalion.plots
import cedalion.vis.plot_sensitivity_matrix
from cedalion.imagereco.solver import pseudo_inverse_stacked
from cedalion import units

xr.set_options(display_expand_data=False)
[2]:
<xarray.core.options.set_options at 0x7f72857430d0>

Load a DOT finger-tapping dataset

For this demo we load an example finger-tapping recording through cedalion.datasets.get_fingertappingDOT (or cedalion.datasets.get_fingertapping for the sparse montage). The file contains a single NIRS element with one block of raw amplitude data.

[3]:
if DATASET == "fingertappingDOT":
    rec = cedalion.datasets.get_fingertappingDOT()
elif DATASET == "fingertapping":
    rec = cedalion.datasets.get_fingertapping()
else:
    raise ValueError("unknown dataset")

The location of the probes is obtained from the SNIRF metadata (i.e. /nirs0/probe/).

Note that physical units are adopted from the file (millimeters here) and the coordinate system is named ‘digitized’.

[4]:
geo3d_meas = rec.geo3d
display(geo3d_meas)
<xarray.DataArray (label: 346, digitized: 3)> Size: 8kB
[mm] -77.82 15.68 23.17 -61.91 21.23 56.49 ... 14.23 -38.28 81.95 -0.678 -37.03
Coordinates:
    type     (label) object 3kB PointType.SOURCE ... PointType.LANDMARK
  * label    (label) <U6 8kB 'S1' 'S2' 'S3' 'S4' ... 'FFT10h' 'FT10h' 'FTT10h'
Dimensions without coordinates: digitized
[5]:
cedalion.plots.plot_montage3D(rec["amp"], geo3d_meas)
../../_images/examples_head_models_40_image_reconstruction_8_0.png

The measurement list is a pandas.DataFrame that describes which source-detector pairs form channels.

[6]:
meas_list = rec._measurement_lists["amp"]
display(meas_list.head(5))
sourceIndex detectorIndex wavelengthIndex wavelengthActual wavelengthEmissionActual dataType dataUnit dataTypeLabel dataTypeIndex sourcePower detectorGain moduleIndex sourceModuleIndex detectorModuleIndex channel source detector wavelength chromo
0 1 1 1 None None 1 None raw-DC 1 None None None None None S1D1 S1 D1 760.0 None
1 1 2 1 None None 1 None raw-DC 1 None None None None None S1D2 S1 D2 760.0 None
2 1 4 1 None None 1 None raw-DC 1 None None None None None S1D4 S1 D4 760.0 None
3 1 5 1 None None 1 None raw-DC 1 None None None None None S1D5 S1 D5 760.0 None
4 1 6 1 None None 1 None raw-DC 1 None None None None None S1D6 S1 D6 760.0 None
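
Since the measurement list is a plain DataFrame, the usual pandas operations apply. As a small usage example (not part of the original analysis, using the column names shown above), this counts the distinct channels per wavelength:

[ ]:
# count the distinct channels per wavelength in the measurement list
display(meas_list.groupby("wavelength")["channel"].nunique())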

Event/stimulus information is also stored in a pandas.DataFrame. Here events are given more descriptive names:

[7]:

if DATASET == "fingertappingDOT": rec.stim.cd.rename_events( { "1": "Control", "2": "FTapping/Left", "3": "FTapping/Right", "4": "BallSqueezing/Left", "5": "BallSqueezing/Right" } ) elif DATASET == "fingertapping": rec.stim.cd.rename_events( { "1.0": "Control", "2.0": "FTapping/Left", "3.0": "FTapping/Right" } ) display(rec.stim.groupby("trial_type")[["onset"]].count())
onset
trial_type
BallSqueezing/Left 17
BallSqueezing/Right 16
Control 65
FTapping/Left 16
FTapping/Right 16

Perform pruning, conversion to OD and bandpass filtering

Perform an SNR quality check, prune channels that fail it, transform the CW raw amplitudes to optical density, and bandpass filter the result.

[8]:
## Prune with SNR threshold
snr_thresh = 10 # dB
snr, rec.masks["snr_mask"] = quality.snr(rec["amp"], snr_thresh)
# prune channels using the masks and the operator "all", which keeps only channels that pass all given masks
rec["amp_pruned"], drop_list = quality.prune_ch(rec["amp"], [rec.masks["snr_mask"]], "all")

print(drop_list)

# Convert to OD
rec["od"] = cedalion.nirs.int2od(rec["amp_pruned"])

# bandpass filter the data
rec["od_freqfiltered"] = rec["od"].cd.freq_filter(fmin=0.01, fmax=0.5, butter_order=4)
[]

Calculate block averages in optical density

[9]:
# segment data into epochs
epochs = rec["od_freqfiltered"].cd.to_epochs(
        rec.stim, # stimulus dataframe
        ["FTapping/Left", "FTapping/Right"],  # select events, discard the others
        before=5 * cedalion.units.s, # seconds before stimulus
        after=30 * cedalion.units.s, # seconds after stimulus
)

# calculate baseline
baseline = epochs.sel(reltime=(epochs.reltime < 0)).mean("reltime")
# subtract baseline
epochs_blcorrected = epochs - baseline

# group trials by trial_type. For each group individually average the epoch dimension
blockaverage = epochs_blcorrected.groupby("trial_type").mean("epoch")

# Plot block averages. In the high-density case the subplots may be too small.

noPlts2 = int(np.ceil(np.sqrt(len(blockaverage.channel))))
f,ax = p.subplots(noPlts2,noPlts2, figsize=(12,10))
ax = ax.flatten()
for i_ch, ch in enumerate(blockaverage.channel):
    for ls, trial_type in zip(["-", "--"], blockaverage.trial_type):
        ax[i_ch].plot(blockaverage.reltime, blockaverage.sel(wavelength=760, trial_type=trial_type, channel=ch), "r", lw=2, ls=ls)
        ax[i_ch].plot(blockaverage.reltime, blockaverage.sel(wavelength=850, trial_type=trial_type, channel=ch), "b", lw=2, ls=ls)

    ax[i_ch].grid(1)
    ax[i_ch].set_title(ch.values)
    ax[i_ch].set_ylim(-.02, .02)
    ax[i_ch].set_axis_off()
    ax[i_ch].axhline(0, c="k")
    ax[i_ch].axvline(0, c="k")

p.suptitle("760nm: r | 850nm: b | left: - | right: --")
p.tight_layout()

../../_images/examples_head_models_40_image_reconstruction_16_0.png

Load segmented MRI scan

For this example we use a segmentation of the Colin27 average brain (or of the ICBM152 template, depending on HEAD_MODEL).

[10]:
if HEAD_MODEL == "colin27":
    SEG_DATADIR, mask_files, landmarks_file = cedalion.datasets.get_colin27_segmentation()
elif HEAD_MODEL == "icbm152":
    SEG_DATADIR, mask_files, landmarks_file = cedalion.datasets.get_icbm152_segmentation()
else:
    raise ValueError("unknown head model")

The segmentation masks are stored in individual NIfTI files. The dict mask_files maps mask filenames relative to SEG_DATADIR to short labels. These labels describe the tissue type of each mask.

In principle the user is free to choose these labels. However, they are later used to look up the tissue’s optical properties, so they must map to one of the tabulated tissue types (c.f. cedalion.imagereco.tissue_properties.TISSUE_LABELS). A quick consistency check is sketched below.
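
A minimal sanity check, assuming TISSUE_LABELS is a collection of the recognized label strings (check the module for its exact form), could look like this:

[ ]:
# hypothetical check: every mask label should have tabulated optical properties
from cedalion.imagereco.tissue_properties import TISSUE_LABELS

unknown = set(mask_files) - set(TISSUE_LABELS)
if unknown:
    raise ValueError(f"no tabulated optical properties for labels: {unknown}")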

The variable landmarks_file holds the path to a file containing landmark positions in scanner space (RAS). This file can be created with Slicer3D.

[11]:
display(SEG_DATADIR)
display(mask_files)
display(landmarks_file)
'/home/runner/.cache/cedalion/colin27_segmentation.zip.unzip/colin27_segmentation'
{'csf': 'mask_csf.nii',
 'gm': 'mask_gray.nii',
 'scalp': 'mask_skin.nii',
 'skull': 'mask_bone.nii',
 'wm': 'mask_white.nii'}
'landmarks.mrk.json'

Coordinate systems

Up to now we have geometrical data from three different coordinate reference systems (CRS):

  • The optode positions are in their own space (CRS='digitized') and the coordinates carry physical units (here millimeters). In our example the origin is at the head center and the y-axis points in the superior direction. Other digitization tools may use other units or coordinate systems.

  • The segmentation masks are in voxel space (CRS='ijk') in which the voxel edges are aligned with the coordinate axes. Each voxel has unit edge length, i.e. coordinates are dimensionless. Axis-aligned grids are computationally efficient, which is why the photon simulation code (MCX) uses this coordinate system.

  • The voxel space (CRS='ijk') is related to scanner space (CRS='ras' or CRS='aligned'), in which coordinates have physical units and the coordinate axes point in the (r)ight, (a)nterior and (s)uperior directions. The relation between both spaces is given by an affine transformation (e.g. t_ijk2ras). When loading the segmentation masks in Slicer3D this transformation is automatically applied, so the picked landmark coordinates are exported in RAS space.

    The NIfTI file provides a string label for the scanner space. In this example the RAS space is called ‘aligned’ because the masks are aligned to another MRI scan.

To avoid confusion between these different coordinate systems, cedalion tries to be explicit about which CRS a given point cloud or surface is in.
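The affine transformations are ordinary 4x4 matrices acting on homogeneous coordinates. As a minimal plain-numpy illustration (independent of cedalion's own helpers, which also track units and CRS labels), mapping points between spaces looks like this:

[ ]:
import numpy as np

def apply_affine(t, points):
    """Map an (N, 3) array of points with a 4x4 affine matrix."""
    points = np.asarray(points)
    homog = np.hstack([points, np.ones((len(points), 1))])  # -> (N, 4)
    return (homog @ np.asarray(t).T)[:, :3]

# e.g., with the head model introduced below:
# landmarks_ras = apply_affine(head.t_ijk2ras.values, head.landmarks.values)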

The TwoSurfaceHeadModel

The photon propagation considers the complete MRI scan, in which each voxel is attributed to one tissue type with its respective optical properties. However, the image reconstruction does not attempt to reconstruct absorption changes in each voxel. The inverse problem is simplified by considering only two surfaces (scalp and brain) and reconstructing absorption changes only in voxels close to these surfaces.

The class cedalion.imagereco.forward_model.TwoSurfaceHeadModel groups together the segmentation masks, landmark positions and affine transformations as well as the scalp and brain surfaces. The brain surface is calculated by combining the white and gray matter masks. The scalp surface encloses the whole head.

[12]:
head = fw.TwoSurfaceHeadModel.from_surfaces(
    segmentation_dir=SEG_DATADIR,
    mask_files = mask_files,
    brain_surface_file= os.path.join(SEG_DATADIR, "mask_brain.obj"),
    # scalp_surface_file= os.path.join(SEG_DATADIR, "mask_scalp.obj"), # FIXME!
    landmarks_ras_file=landmarks_file,
    brain_face_count=None,
    scalp_face_count=None
)
[13]:
head.segmentation_masks
[13]:
<xarray.DataArray (segmentation_type: 5, i: 181, j: 217, k: 181)> Size: 36MB
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Coordinates:
  * segmentation_type  (segmentation_type) <U5 100B 'csf' 'gm' ... 'skull' 'wm'
Dimensions without coordinates: i, j, k
[14]:
head.landmarks
[14]:
<xarray.DataArray (label: 4, ijk: 3)> Size: 96B
[] 89.95 205.8 35.86 91.76 25.05 17.14 18.05 109.9 17.34 165.9 113.0 17.92
Coordinates:
  * label    (label) <U3 48B 'Nz' 'Iz' 'LPA' 'RPA'
    type     (label) object 32B PointType.LANDMARK ... PointType.LANDMARK
Dimensions without coordinates: ijk
[15]:
head.brain
[15]:
TrimeshSurface(mesh=<trimesh.Trimesh(vertices.shape=(15002, 3), faces.shape=(29988, 3))>, crs='ijk', units=<Unit('dimensionless')>)
[16]:
head.scalp
[16]:
TrimeshSurface(mesh=<trimesh.Trimesh(vertices.shape=(152572, 3), faces.shape=(304080, 3))>, crs='ijk', units=<Unit('dimensionless')>)

TwoSurfaceHeadModel.from_surfaces converts everything into voxel space (CRS='ijk'):

[17]:
head.crs
[17]:
'ijk'

The transformation matrix to translate from voxel to scanner space:

[18]:
head.t_ijk2ras
[18]:
<xarray.DataArray (aligned: 4, ijk: 4)> Size: 128B
[mm] 1.0 0.0 0.0 -90.0 0.0 1.0 0.0 -126.0 0.0 0.0 1.0 -72.0 0.0 0.0 0.0 1.0
Dimensions without coordinates: aligned, ijk

Changing between coordinate systems:

[19]:
head_ras = head.apply_transform(head.t_ijk2ras)
display(head_ras.crs)
display(head_ras.brain)
'aligned'
TrimeshSurface(mesh=<trimesh.Trimesh(vertices.shape=(15002, 3), faces.shape=(29988, 3))>, crs='aligned', units=<Unit('millimeter')>)

Optode Registration

The optode coordinates from the recording must be aligned with the scalp surface. Currently, cedalion offers a simple registration method which finds an affine transformation (scaling, rotation, translation) that matches the landmark positions of the head model to their digitized counterparts. Afterwards, the optodes are snapped to the nearest vertex on the scalp. A conceptual sketch of these two steps follows.
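
The following is only a conceptual numpy/scipy sketch of these two steps (a least-squares affine fit to the matched landmarks, then nearest-vertex snapping); the actual work is done by head.align_and_snap_to_scalp below:

[ ]:
import numpy as np
from scipy.spatial import cKDTree

def fit_affine(src, dst):
    """Least-squares 4x4 affine mapping matched landmark sets src onto dst."""
    src_h = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coordinates
    coeffs, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3) coefficient matrix
    t = np.eye(4)
    t[:3, :] = coeffs.T
    return t

def snap_to_scalp(points, scalp_vertices):
    """Replace each optode position by its nearest vertex on the scalp surface."""
    _, idx = cKDTree(scalp_vertices).query(points)
    return scalp_vertices[idx]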

[20]:
geo3d_snapped_ijk = head.align_and_snap_to_scalp(geo3d_meas)
display(geo3d_snapped_ijk)
<xarray.DataArray (label: 346, ijk: 3)> Size: 8kB
[] 9.901 129.0 88.0 23.58 135.0 127.0 ... 173.1 140.0 35.0 172.0 126.0 34.98
Coordinates:
    type     (label) object 3kB PointType.SOURCE ... PointType.LANDMARK
  * label    (label) <U6 8kB 'S1' 'S2' 'S3' 'S4' ... 'FFT10h' 'FT10h' 'FTT10h'
Dimensions without coordinates: ijk
[21]:
plt = pv.Plotter()
cedalion.plots.plot_surface(plt, head.brain, color="w")
cedalion.plots.plot_surface(plt, head.scalp, opacity=.1)
cedalion.plots.plot_labeled_points(plt, geo3d_snapped_ijk)
plt.show()
../../_images/examples_head_models_40_image_reconstruction_36_0.png

Simulate light propagation in tissue

cedalion.imagereco.forward_model.ForwardModel is a wrapper around pmcx. Using the data in the head model, it prepares the inputs for either pmcx or NIRFASTer and offers functionality to calculate the sensitivity matrix.

[22]:
fwm = cedalion.imagereco.forward_model.ForwardModel(head, geo3d_snapped_ijk, meas_list)

Run the simulation

The compute_fluence_mcx and compute_fluence_nirfaster methods simulate a light source at each optode position and calculate the fluence in each voxel. Through FORWARD_MODEL, you choose between the pmcx and NIRFASTer packages for this simulation. PLEASE NOTE: if PRECOMPUTED_FLUENCE is True, precomputed example results are downloaded instead; be aware that the file is quite big (~2GB).

[23]:
if PRECOMPUTED_FLUENCE:
    if FORWARD_MODEL == "MCX":
        fluence_all, fluence_at_optodes = cedalion.datasets.get_precomputed_fluence(DATASET, HEAD_MODEL)
    elif FORWARD_MODEL == "NIRFASTER":
        raise NotImplementedError(
            "Currently there are no precomputed NIRFASTER results available"
        )
else:
    if FORWARD_MODEL == "MCX":
        fluence_all, fluence_at_optodes = fwm.compute_fluence_mcx()
    elif FORWARD_MODEL == "NIRFASTER":
        fluence_all, fluence_at_optodes = fwm.compute_fluence_nirfaster()

The photon simulation yields the fluence in each voxel for each wavelength.

[24]:
fluence_all
[24]:
<xarray.DataArray '/fluence_all' (label: 46, wavelength: 2, i: 181, j: 217,
                                  k: 181)> Size: 5GB
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Coordinates:
  * label       (label) object 368B 'S1' 'S2' 'S3' 'S4' ... 'D30' 'D31' 'D32'
    type        (label) object 368B PointType.SOURCE ... PointType.DETECTOR
  * wavelength  (wavelength) float64 16B 760.0 850.0
Dimensions without coordinates: i, j, k

Also, for each combination of two optodes, the fluence in the voxels at the optode positions is calculated.

[25]:
fluence_at_optodes
[25]:
<xarray.DataArray '/fluence_at_optodes' (optode1: 46, optode2: 46, wavelength: 2)> Size: 34kB
0.05344 0.05344 1.021e-06 1.021e-06 ... 2.2e-07 2.2e-07 0.02938 0.02938
Coordinates:
  * optode1     (optode1) object 368B 'S1' 'S2' 'S3' 'S4' ... 'D30' 'D31' 'D32'
  * optode2     (optode2) object 368B 'S1' 'S2' 'S3' 'S4' ... 'D30' 'D31' 'D32'
  * wavelength  (wavelength) float64 16B 760.0 850.0

Plot fluence

To illustrate the tissue probed by light travelling from a source to a detector, the two fluence profiles are multiplied.

[26]:
# for plotting use a geo3d without the landmarks
geo3d_plot = geo3d_snapped_ijk[geo3d_snapped_ijk.type != cdc.PointType.LANDMARK]
[27]:
time.sleep(1)

plt = pv.Plotter()

if DATASET == "fingertappingDOT":
    src, det, wl = "S4", "D2", 760
elif DATASET == "fingertapping":
    src, det, wl = "S2", "D3", 760
else:
    raise ValueError("unknown dataset")

f = fluence_all.loc[src, wl].values * fluence_all.loc[det, wl].values
f[f <= 0] = f[f > 0].min()
f = np.log10(f)
vf = pv.wrap(f)

plt.add_volume(
    vf,
    log_scale=False,
    cmap="plasma_r",
    clim=(-10, 0),
)
cedalion.plots.plot_surface(plt, head.brain, color="w")
cedalion.plots.plot_labeled_points(plt, geo3d_plot, show_labels=False)

cog = head.brain.vertices.mean("label").values
plt.camera.position = cog + [-300, 30, 150]
plt.camera.focal_point = cog
plt.camera.up = [0, 0, 1]

plt.show()
/home/runner/miniconda3/envs/cedalion/lib/python3.11/site-packages/xarray/core/variable.py:338: UnitStrippedWarning: The unit of the quantity is stripped when downcasting to ndarray.
  data = np.asarray(data)
../../_images/examples_head_models_40_image_reconstruction_48_1.png

Calculate the sensitivity matrices

The sensitivity matrix describes, for a given channel and wavelength, the effect of an absorption change at a given surface vertex on the OD recording. The coordinate is_brain holds a mask to distinguish brain and scalp vertices.

[28]:
Adot = fwm.compute_sensitivity(fluence_all, fluence_at_optodes)
Adot
[28]:
<xarray.DataArray (channel: 100, vertex: 167574, wavelength: 2)> Size: 268MB
7.021e-13 7.021e-13 1.155e-13 1.155e-13 2.353e-14 ... 0.0 0.0 0.0 0.0 0.0
Coordinates:
  * channel     (channel) <U6 2kB 'S1D1' 'S1D2' 'S1D4' ... 'S14D31' 'S14D32'
  * wavelength  (wavelength) float64 16B 760.0 850.0
    is_brain    (vertex) bool 168kB True True True True ... False False False
Dimensions without coordinates: vertex
Attributes:
    units:    mm
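
Because is_brain is a coordinate on the vertex dimension, the sensitivity can be split with plain xarray operations. As an optional check (not part of the original analysis), this sums each channel's sensitivity over brain vertices only:

[ ]:
# total sensitivity per channel and wavelength, restricted to brain vertices
brain_sensitivity = Adot.where(Adot.is_brain, 0.0).sum("vertex")
display(brain_sensitivity)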

Plot Sensitivity Matrix

[29]:
plotter = cedalion.vis.plot_sensitivity_matrix.Main(
    sensitivity=Adot,
    brain_surface=head.brain,
    head_surface=head.scalp,
    labeled_points=geo3d_plot,
)
plotter.plot(high_th=0, low_th=-3)
plotter.plt.show()
../../_images/examples_head_models_40_image_reconstruction_52_0.png

The sensitivity Adot has shape (nchannel, nvertex, nwavelength). To solve the inverse problem we need a matrix that relates OD in channel space to absorption in image space. Hence, the sensitivity must include the extinction coefficients to translate between OD and concentrations. Furthermore, the channels at both wavelengths must be stacked into a new dimension (flat_channel), as must the vertices and chromophores (flat_vertex):

\[\begin{split} \left( \begin{matrix} OD_{c_1, \lambda_1} \\ \vdots \\ OD_{c_N,\lambda_1} \\ OD_{c_1,\lambda_2} \\ \vdots \\ OD_{c_N,\lambda_2} \end{matrix}\right) = A \cdot \left( \begin{matrix} \Delta c_{v_1, HbO} \\ \vdots \\ \Delta c_{v_N, HbO} \\ \Delta c_{v_1, HbR} \\ \vdots \\ \Delta c_{v_N, HbR} \end{matrix}\right)\end{split}\]
[30]:
Adot_stacked = fwm.compute_stacked_sensitivity(Adot)
Adot_stacked
[30]:
<xarray.DataArray (flat_channel: 200, flat_vertex: 335148)> Size: 536MB
9.474e-11 1.559e-11 3.175e-12 2.06e-11 5.291e-10 ... 7.612e-17 0.0 0.0 0.0 0.0
Coordinates:
    is_brain    (flat_vertex) bool 335kB True True True ... False False False
    chromo      (flat_vertex) <U3 4MB 'HbO' 'HbO' 'HbO' ... 'HbR' 'HbR' 'HbR'
    vertex      (flat_vertex) int64 3MB 0 1 2 3 ... 167570 167571 167572 167573
    wavelength  (flat_channel) float64 2kB 760.0 760.0 760.0 ... 850.0 850.0
Dimensions without coordinates: flat_channel, flat_vertex
Attributes:
    units:    1 / molar
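
A quick sanity check of the stacking, following the equation above: both flat dimensions should be twice their unstacked counterparts (one block per wavelength, one block per chromophore):

[ ]:
assert Adot_stacked.sizes["flat_channel"] == 2 * Adot.sizes["channel"]
assert Adot_stacked.sizes["flat_vertex"] == 2 * Adot.sizes["vertex"]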

Invert the sensitivity matrix

[31]:
B = pseudo_inverse_stacked(Adot_stacked, alpha = 0.01, alpha_spatial = 0.001)
B
[31]:
<xarray.DataArray (flat_vertex: 335148, flat_channel: 200)> Size: 536MB
1.207e-17 3.591e-16 -3.076e-16 -2.678e-17 -2.313e-15 ... 0.0 0.0 0.0 0.0 0.0
Coordinates:
    is_brain    (flat_vertex) bool 335kB True True True ... False False False
    chromo      (flat_vertex) <U3 4MB 'HbO' 'HbO' 'HbO' ... 'HbR' 'HbR' 'HbR'
    vertex      (flat_vertex) int64 3MB 0 1 2 3 ... 167570 167571 167572 167573
    wavelength  (flat_channel) float64 2kB 760.0 760.0 760.0 ... 850.0 850.0
Dimensions without coordinates: flat_vertex, flat_channel
Attributes:
    units:    molar
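
pseudo_inverse_stacked regularizes the inversion via its alpha and alpha_spatial parameters. Conceptually this is related to a Tikhonov-regularized pseudoinverse; the following is only a minimal sketch of that idea (the cedalion implementation additionally applies the spatial regularization controlled by alpha_spatial, which this sketch omits):

[ ]:
import numpy as np

def tikhonov_pinv(A, alpha):
    """Minimal Tikhonov-regularized pseudoinverse for an underdetermined A:
    B = A^T (A A^T + alpha * s_max * I)^-1, where s_max scales the regularizer."""
    AAt = A @ A.T
    s_max = np.linalg.norm(AAt, 2)  # largest eigenvalue of A A^T
    return A.T @ np.linalg.inv(AAt + alpha * s_max * np.eye(A.shape[0]))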

Calculate concentration changes

  • the optical density has shape (nchannel, nwavelength, time). Additional dimensions like ‘trial_type’ in this example are allowed, too.

[32]:
blockaverage
[32]:
<xarray.DataArray (trial_type: 2, channel: 100, wavelength: 2, reltime: 154)> Size: 493kB
[] -0.0006526 -0.001002 -0.001244 -0.00137 ... -0.0003574 -0.0002813 -0.0001787
Coordinates:
  * reltime     (reltime) float64 1kB -5.038 -4.809 -4.58 ... 29.54 29.77 30.0
  * channel     (channel) object 800B 'S1D1' 'S1D2' 'S1D4' ... 'S14D31' 'S14D32'
    source      (channel) object 800B 'S1' 'S1' 'S1' 'S1' ... 'S14' 'S14' 'S14'
    detector    (channel) object 800B 'D1' 'D2' 'D4' 'D5' ... 'D29' 'D31' 'D32'
  * wavelength  (wavelength) float64 16B 760.0 850.0
  * trial_type  (trial_type) object 16B 'FTapping/Left' 'FTapping/Right'

To apply the inverted sensitivity matrix, the OD wavelength and channel dimensions need to be flattened. Then the inverted sensitivity matrix can be multiplied, which contracts over flat_channel and leaves the flat_vertex dimension. The flat_vertex dimension contains vertices of the scalp and the brain for both chromophores; these need to be separated again. The function fw.apply_inv_sensitivity takes care of all of this.
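
Conceptually, and ignoring units and coordinate bookkeeping, the application is a single matrix product followed by splitting the chromophore blocks. A rough numpy sketch, assuming the wavelength-major stacking shown in the equation above:

[ ]:
import numpy as np

# flatten (wavelength, channel) into one trailing axis, wavelength-major
od = blockaverage.transpose("trial_type", "reltime", "wavelength", "channel")
od_flat = od.values.reshape(*od.shape[:2], -1)   # (..., 2 * n_channel)

# contract over flat_channel; the flat_vertex axis remains
dc_flat = od_flat @ B.values.T                   # (..., 2 * n_vertex)

# first half of flat_vertex is HbO, second half HbR (see the chromo coordinate)
n_vertex = B.sizes["flat_vertex"] // 2
dc_hbo, dc_hbr = dc_flat[..., :n_vertex], dc_flat[..., n_vertex:]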

[33]:
dC_brain, dC_scalp = fw.apply_inv_sensitivity(blockaverage, B)
display(dC_brain)
display(dC_scalp)
<xarray.DataArray (trial_type: 2, reltime: 154, chromo: 2, vertex: 15002)> Size: 74MB
-5.844e-16 -5.904e-17 -3.294e-17 -6.016e-17 ... -2.332e-16 -7.767e-14 -5.56e-18
Coordinates:
  * chromo      (chromo) <U3 24B 'HbO' 'HbR'
  * vertex      (vertex) int64 120kB 0 1 2 3 4 ... 14997 14998 14999 15000 15001
    is_brain    (chromo, vertex) bool 30kB True True True ... True True True
  * reltime     (reltime) float64 1kB -5.038 -4.809 -4.58 ... 29.54 29.77 30.0
  * trial_type  (trial_type) object 16B 'FTapping/Left' 'FTapping/Right'
Attributes:
    units:    molar
<xarray.DataArray (trial_type: 2, reltime: 154, chromo: 2, vertex: 152572)> Size: 752MB
-9.254e-24 0.0 -3.873e-23 0.0 0.0 ... -1.859e-21 0.0 -3.872e-22 -3.289e-23 0.0
Coordinates:
  * chromo      (chromo) <U3 24B 'HbO' 'HbR'
  * vertex      (vertex) int64 1MB 15002 15003 15004 ... 167571 167572 167573
    is_brain    (chromo, vertex) bool 305kB False False False ... False False
  * reltime     (reltime) float64 1kB -5.038 -4.809 -4.58 ... 29.54 29.77 30.0
  * trial_type  (trial_type) object 16B 'FTapping/Left' 'FTapping/Right'
Attributes:
    units:    molar

Plot concentration changes

Using cedalion plot functions to visualize image recon results

Using Scalp Plot Functionality to Create a Gif of Brain Activity on the Scalp

This gives us the activity in channel space across time.

[34]:
from cedalion.plots import scalp_plot_gif

warnings.filterwarnings("ignore", category=UnitStrippedWarning)

# configure the plot
data_ts = blockaverage.sel(wavelength=850, trial_type="FTapping/Right")
data_ts = data_ts.rename({"reltime": "time"})
geo3d = rec.geo3d
filename_scalp = 'scalp_plot_ts'

# call plot function
scalp_plot_gif(
    data_ts,
    geo3d,
    filename = filename_scalp,
    time_range=(-5,30,0.5)*units.s,
    scl=(-0.01, 0.01),
    fps=6,
    optode_size=6,
    optode_labels=True,
    str_title='850 nm'
    )
../../_images/examples_head_models_40_image_reconstruction_63_0.png
[35]:
display(Image(data=open(filename_scalp+'.gif','rb').read(), format='png'))
../../_images/examples_head_models_40_image_reconstruction_64_0.png

Using Image Recon View Functionality to Create a Gif of Activity on the Brain

This gives us activity on the brain from a single view, either as a still image or across time.

[36]:
from cedalion.plots import image_recon_view

filename_view = 'image_recon_view'

X_ts = xr.concat([dC_brain.sel(trial_type="FTapping/Right"), dC_scalp.sel(trial_type="FTapping/Right")], dim="vertex")
X_ts = X_ts.rename({"reltime": "time"})
X_ts = X_ts.transpose("vertex", "chromo", "time")
X_ts = X_ts.assign_coords(is_brain=('vertex', Adot.is_brain.values))

scl = np.percentile(np.abs(X_ts.sel(chromo='HbO').values.reshape(-1)),99)
clim = (-scl,scl)

image_recon_view(
    X_ts,  # time series data; can be 2D (static) or 3D (dynamic)
    head,
    cmap='seismic',
    clim=clim,
    view_type='hbo_brain',
    view_position='left',
    title_str='HbO',
    filename=filename_view,
    SAVE=True,
    time_range=(-5,30,0.5)*units.s,
    fps=6,
    geo3d_plot = geo3d_plot,
    wdw_size = (1024, 768)
)
[37]:
display(Image(data=open(filename_view+'.gif','rb').read(), format='png'))
../../_images/examples_head_models_40_image_reconstruction_67_0.png

Alternatively, we can select a single time point and plot the activity as a still image at that time.

[38]:

# select the nearest time sample at t=4 s in X_ts
X_ts = X_ts.sel(time=4*units.s, method="nearest")

filename_view = 'image_recon_view_still'

image_recon_view(
    X_ts,  # time series data; can be 2D (static) or 3D (dynamic)
    head,
    cmap='seismic',
    clim=clim,
    view_type='hbo_brain',
    view_position='left',
    title_str='HbO',
    filename=filename_view,
    SAVE=True,
    time_range=(-5,30,0.5)*units.s,
    fps=6,
    geo3d_plot = geo3d_plot,
    wdw_size = (1024, 768)
)
[39]:
display(Image(data=open(filename_view+'.png','rb').read(), format='png'))
../../_images/examples_head_models_40_image_reconstruction_70_0.png

Using Image Recon Multi View Functionality to Create a Gif of Activity on the Brain

This gives us activity on the brain after reconstruction, from all angles, either as a still image or across time.

[40]:
from cedalion.plots import image_recon_multi_view

filename_multiview = 'image_recon_multiview'

# prepare data
X_ts = xr.concat([dC_brain.sel(trial_type="FTapping/Right"), dC_scalp.sel(trial_type="FTapping/Right")], dim="vertex")
X_ts = X_ts.rename({"reltime": "time"})
X_ts = X_ts.transpose("vertex", "chromo", "time")
X_ts = X_ts.assign_coords(is_brain=('vertex', Adot.is_brain.values))

scl = np.percentile(np.abs(X_ts.sel(chromo='HbO').values.reshape(-1)),99)
clim = (-scl,scl)


image_recon_multi_view(
    X_ts,  # time series data; can be 2D (static) or 3D (dynamic)
    head,
    cmap='seismic',
    clim=clim,
    view_type='hbo_brain',
    title_str='HbO',
    filename=filename_multiview,
    SAVE=True,
    time_range=(-5,30,0.5)*units.s,
    fps=6,
    geo3d_plot = None, #  geo3d_plot
    wdw_size = (1024, 768)
)
[41]:
display(Image(data=open(filename_multiview+'.gif','rb').read(), format='png'))
../../_images/examples_head_models_40_image_reconstruction_73_0.png

Using Image Recon Multi View Functionality to Create a Gif of Activity on the Scalp

This gives us activity on the scalp after reconstruction, from all angles, either as a still image or across time.

[42]:
from cedalion.plots import image_recon_multi_view

filename_multiview_scalp = 'image_recon_multiview_scalp'

# prepare data
X_ts = xr.concat([dC_brain.sel(trial_type="FTapping/Right"), dC_scalp.sel(trial_type="FTapping/Right")], dim="vertex")
X_ts = X_ts.rename({"reltime": "time"})
X_ts = X_ts.transpose("vertex", "chromo", "time")
X_ts = X_ts.assign_coords(is_brain=('vertex', Adot.is_brain.values))

scl = np.percentile(np.abs(X_ts.sel(chromo='HbO').values.reshape(-1)),99)
clim = (-scl,scl)


image_recon_multi_view(
    X_ts,  # time series data; can be 2D (static) or 3D (dynamic)
    head,
    cmap='seismic',
    clim=clim,
    view_type='hbo_scalp',
    title_str='HbO',
    filename=filename_multiview_scalp,
    SAVE=True,
    time_range=(-5,30,0.5)*units.s,
    fps=6,
    geo3d_plot = geo3d_plot,
    wdw_size = (1024, 768)
)
[43]:
display(Image(data=open(filename_multiview_scalp+'.gif','rb').read(), format='png'))
../../_images/examples_head_models_40_image_reconstruction_76_0.png