Motion Artefact Detection and Correction (WIP / in validation)
The xarray-based masks can be used to indicate motion artifacts. The example below shows how to check channels for motion artifacts using standard thresholds from Homer2/3. The output is a mask that can be handed to motion correction algorithms.
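For orientation, such masks are boolean arrays aligned with the data, where `False` marks a flagged time point; a correction routine can, for example, blank the flagged samples before interpolating them. A minimal numpy sketch with made-up values (variable names are hypothetical):

```python
import numpy as np

# Hypothetical signal and mask over 8 samples; False marks samples
# flagged as motion artifacts (the convention used in this notebook).
signal = np.array([1.0, 1.1, 5.0, 4.8, 1.2, 1.1, 1.0, 0.9])
mask = np.array([True, True, False, False, True, True, True, True])

# One simple use of the mask: blank flagged samples before handing
# the data to an interpolation/correction step.
blanked = np.where(mask, signal, np.nan)
print(blanked)  # the two flagged samples become NaN
```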
[1]:
import matplotlib.pyplot as p
import cedalion
import cedalion.datasets as datasets
import cedalion.nirs
import cedalion.sigproc.motion_correct as motion_correct
import cedalion.sigproc.quality as quality
import cedalion.sim.synthetic_artifact as synthetic_artifact
from cedalion import units
[2]:
# get example finger tapping dataset
rec = datasets.get_fingertapping()
rec["od"] = cedalion.nirs.int2od(rec["amp"])
# Add some synthetic spikes and baseline shifts
artifacts = {
    "spike": synthetic_artifact.gen_spike,
    "bl_shift": synthetic_artifact.gen_bl_shift,
}
timing = synthetic_artifact.random_events_perc(rec["od"].time, 0.01, ["spike"])
timing = synthetic_artifact.add_event_timing(
    [(200, 0), (400, 0)], "bl_shift", None, timing
)
rec["od"] = synthetic_artifact.add_artifacts(rec["od"], timing, artifacts)
# Plot some data for visual validation
f, ax = p.subplots(1, 1, figsize=(12, 4))
ax.plot(
    rec["od"].time, rec["od"].sel(channel="S3D3", wavelength="850"), "r-", label="850nm"
)
ax.plot(
    rec["od"].time, rec["od"].sel(channel="S3D3", wavelength="760"), "g-", label="760nm"
)
# indicate added artefacts
for _, row in timing.iterrows():
    p.axvline(row["onset"], c="k", alpha=.2)
p.legend()
ax.set_xlim(0, 500)
ax.set_xlabel("time / s")
ax.set_ylabel("OD")
display(rec["od"])
<xarray.DataArray (channel: 28, wavelength: 2, time: 23239)> Size: 10MB
<Quantity([[[ 0.04042072  0.04460046  0.04421587 ...  0.08227263  0.08328687  0.07824392]
            [ 0.0238205   0.02007699  0.03480909 ...  0.11612429  0.11917232  0.12386444]]
           ...
           [[ 0.05805322  0.06125157  0.06083507 ... -0.00101062 -0.000856   -0.00219674]
            [ 0.02437702  0.03088664  0.03219055 ...  0.01326252  0.01341195  0.01118119]]], 'dimensionless')>
Coordinates:
  * time        (time) float64 186kB 0.0 0.128 0.256 ... 2.974e+03 2.974e+03
    samples     (time) int64 186kB 0 1 2 3 4 5 ... 23234 23235 23236 23237 23238
  * channel     (channel) object 224B 'S1D1' 'S1D2' 'S1D3' ... 'S8D8' 'S8D16'
    source      (channel) object 224B 'S1' 'S1' 'S1' 'S1' ... 'S8' 'S8' 'S8'
    detector    (channel) object 224B 'D1' 'D2' 'D3' 'D9' ... 'D7' 'D8' 'D16'
  * wavelength  (wavelength) float64 16B 760.0 850.0

[3]:
display(timing)
| | onset | duration | trial_type | value | channel |
|---|---|---|---|---|---|
| 0 | 660.399305 | 0.381871 | spike | 1 | None |
| 1 | 740.372036 | 0.240634 | spike | 1 | None |
| 2 | 777.734641 | 0.162431 | spike | 1 | None |
| 3 | 316.234799 | 0.181865 | spike | 1 | None |
| 4 | 723.513333 | 0.222625 | spike | 1 | None |
| ... | ... | ... | ... | ... | ... |
| 120 | 2420.927808 | 0.275907 | spike | 1 | None |
| 121 | 1789.398557 | 0.218355 | spike | 1 | None |
| 122 | 81.090041 | 0.143824 | spike | 1 | None |
| 123 | 200.000000 | 0.000000 | bl_shift | 1 | None |
| 124 | 400.000000 | 0.000000 | bl_shift | 1 | None |

125 rows × 5 columns
Detecting Motion Artifacts and Generating the MA Mask
[4]:
# we use Optical Density data for motion artifact detection
fnirs_data = rec["od"]
# define parameters for motion artifact detection. We follow the method from Homer2/3:
# "hmrR_MotionArtifactByChannel" and "hmrR_MotionArtifact".
t_motion = 0.5 * units.s # time window for motion artifact detection
t_mask = 1.0 * units.s # time window for masking motion artifacts
# (+- t_mask s before/after detected motion artifact)
stdev_thresh = 7.0 # threshold for std. deviation of the signal used to detect
# motion artifacts. Default is 50. We set it very low to find
# something in our good data for demonstration purposes.
amp_thresh = 5.0 # threshold for amplitude of the signal used to detect motion
# artifacts. Default is 5.
# to identify motion artifacts with these parameters we call the following function
ma_mask = quality.id_motion(fnirs_data, t_motion, t_mask, stdev_thresh, amp_thresh)
# it hands us a boolean mask (xarray) of the input dimension, where False indicates a
# motion artifact at a given time point:
ma_mask
[4]:
<xarray.DataArray (channel: 28, wavelength: 2, time: 23239)> Size: 1MB
array([[[ True,  True,  True, ...,  True,  True,  True],
        [ True,  True,  True, ...,  True,  True,  True]],
       ...
       [[ True,  True,  True, ...,  True,  True,  True],
        [ True,  True,  True, ...,  True,  True,  True]]])
Coordinates:
  * time        (time) float64 186kB 0.0 0.128 0.256 ... 2.974e+03 2.974e+03
    samples     (time) int64 186kB 0 1 2 3 4 5 ... 23234 23235 23236 23237 23238
  * channel     (channel) object 224B 'S1D1' 'S1D2' 'S1D3' ... 'S8D8' 'S8D16'
    source      (channel) object 224B 'S1' 'S1' 'S1' 'S1' ... 'S8' 'S8' 'S8'
    detector    (channel) object 224B 'D1' 'D2' 'D3' 'D9' ... 'D7' 'D8' 'D16'
  * wavelength  (wavelength) float64 16B 760.0 850.0
The output mask is quite detailed and still contains all original dimensions (e.g. single wavelengths) and allows us to combine it with a mask from another motion artifact detection method. This is the same approach as for the channel quality metrics above.
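Because the mask is a plain boolean array, a mask from another detection method can be combined with it elementwise. A minimal numpy sketch with made-up masks (xarray `DataArray` masks combine the same way with `&`, aligning on their shared coordinates):

```python
import numpy as np

# Two hypothetical boolean masks over 6 time points, e.g. from
# id_motion and a second detection method. True = clean sample,
# False = flagged motion artifact (same convention as above).
mask_motion = np.array([True, True, False, False, True, True])
mask_other = np.array([True, False, False, True, True, True])

# A time point counts as clean only if every method agrees: logical AND.
combined = mask_motion & mask_other
print(combined.tolist())  # [True, False, False, False, True, True]
```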
Let us now plot the result for an example channel. Note that a different number of artifacts was identified for each wavelength, which can sometimes happen:
[5]:
p.figure()
p.plot(ma_mask.time, ma_mask.sel(channel="S3D3", wavelength="760"), "b-")
p.plot(ma_mask.time, ma_mask.sel(channel="S3D3", wavelength="850"), "r-")
# indicate added artefacts
for _, row in timing.iterrows():
    p.axvline(row["onset"], c="k", alpha=.2)
p.xlim(0, 500)
p.xlabel("time / s")
p.ylabel("Motion artifact mask")
p.show()

Plotting the mask and the data together (we have to rescale a bit to make both fit):
[6]:
p.figure()
p.plot(fnirs_data.time, fnirs_data.sel(channel="S3D3", wavelength="760"), "r-")
p.plot(ma_mask.time, ma_mask.sel(channel="S3D3", wavelength="760") / 10, "k-")
# indicate added artefacts
for _, row in timing.iterrows():
    p.axvline(row["onset"], c="k", alpha=.2)
p.xlim(0, 500)
p.xlabel("time / s")
p.ylabel("fNIRS Signal / Motion artifact mask")
p.show()

Refining the MA Mask
When it comes to correcting motion artifacts, we usually do not need the level of granularity that the mask provides. For instance, we usually want to treat an artifact detected in either of the two wavelengths or chromophores of a channel as a single artifact that gets flagged for both. We might also want to flag motion artifacts globally, i.e. mask time points for all channels even if only some of them show an artifact. This can easily be done using the “id_motion_refine” function, which also returns useful information about the motion artifacts in each channel in “ma_info”.
[7]:
# refine the motion artifact mask. This function collapses the mask along dimensions
# that are chosen by the "operator" argument. Here we use "by_channel", which will yield
# a mask for each channel by collapsing the masks along either the wavelength or
# concentration dimension.
ma_mask_refined, ma_info = quality.id_motion_refine(ma_mask, "by_channel")
# show the refined mask
ma_mask_refined
[7]:
<xarray.DataArray (channel: 28, time: 23239)> Size: 651kB
array([[ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True],
       ...
       [ True,  True,  True, ...,  True,  True,  True]])
Coordinates:
  * time      (time) float64 186kB 0.0 0.128 0.256 ... 2.974e+03 2.974e+03
    samples   (time) int64 186kB 0 1 2 3 4 5 ... 23234 23235 23236 23237 23238
  * channel   (channel) object 224B 'S1D1' 'S1D2' 'S1D3' ... 'S8D8' 'S8D16'
    source    (channel) object 224B 'S1' 'S1' 'S1' 'S1' ... 'S7' 'S8' 'S8' 'S8'
    detector  (channel) object 224B 'D1' 'D2' 'D3' 'D9' ... 'D7' 'D8' 'D16'
Now the mask no longer has the “wavelength” or “concentration” dimension; the masks along these dimensions have been combined:
[8]:
# plot the figure
p.figure()
p.plot(fnirs_data.time, fnirs_data.sel(channel="S3D3", wavelength="760"), "r-")
p.plot(ma_mask_refined.time, ma_mask_refined.sel(channel="S3D3") / 10, "k-")
# indicate added artefacts
for _, row in timing.iterrows():
    p.axvline(row["onset"], c="k", alpha=.2)
p.xlim(0, 500)
p.xlabel("time / s")
p.ylabel("fNIRS Signal / Refined Motion artifact mask")
p.show()
# show the information about the motion artifacts: we get a pandas dataframe telling us
# 1) for which channels artifacts were detected,
# 2) what fraction of time points was marked as artifact and
# 3) how many artifacts were detected
ma_info

[8]:
| | channel | ma_fraction | ma_count |
|---|---|---|---|
| 0 | S1D1 | 0.086751 | 94 |
| 1 | S1D2 | 0.055940 | 66 |
| 2 | S1D3 | 0.061104 | 73 |
| 3 | S1D9 | 0.042988 | 51 |
| 4 | S2D1 | 0.044365 | 53 |
| 5 | S2D3 | 0.063600 | 75 |
| 6 | S2D4 | 0.029605 | 34 |
| 7 | S2D10 | 0.071432 | 83 |
| 8 | S3D2 | 0.037695 | 46 |
| 9 | S3D3 | 0.058651 | 69 |
| 10 | S3D11 | 0.060459 | 72 |
| 11 | S4D3 | 0.019407 | 24 |
| 12 | S4D4 | 0.029907 | 28 |
| 13 | S4D12 | 0.043978 | 53 |
| 14 | S5D5 | 0.088042 | 95 |
| 15 | S5D6 | 0.060201 | 70 |
| 16 | S5D7 | 0.040750 | 49 |
| 17 | S5D13 | 0.090494 | 97 |
| 18 | S6D5 | 0.052111 | 60 |
| 19 | S6D7 | 0.042170 | 49 |
| 20 | S6D8 | 0.049787 | 55 |
| 21 | S6D14 | 0.052111 | 62 |
| 22 | S7D6 | 0.084384 | 93 |
| 23 | S7D7 | 0.075175 | 84 |
| 24 | S7D15 | 0.049572 | 56 |
| 25 | S8D7 | 0.024657 | 31 |
| 26 | S8D8 | 0.031499 | 33 |
| 27 | S8D16 | 0.037093 | 44 |
Now we look at the “all” operator, which collapses the mask across all dimensions except time, yielding a single motion artifact mask:
[9]:
# "all" yields a mask that flags an artifact at a given time point if it is
# flagged for any channel, wavelength, chromophore, etc.
ma_mask_refined, ma_info = quality.id_motion_refine(ma_mask, 'all')
# show the refined mask
ma_mask_refined
[9]:
<xarray.DataArray (time: 23239)> Size: 23kB
array([ True,  True,  True, ...,  True,  True,  True])
Coordinates:
  * time     (time) float64 186kB 0.0 0.128 0.256 ... 2.974e+03 2.974e+03
    samples  (time) int64 186kB 0 1 2 3 4 5 ... 23234 23235 23236 23237 23238
[10]:
# plot the figure
p.figure()
p.plot(fnirs_data.time, fnirs_data.sel(channel="S3D3", wavelength="760"), "r-")
p.plot(ma_mask_refined.time, ma_mask_refined/10, "k-")
p.xlim(0,500)
p.xlabel("time / s")
p.ylabel("fNIRS Signal / Refined Motion artifact mask")
p.show()
# show the information about the motion artifacts: we get a pandas dataframe telling us
# 1) that the mask covers all channels combined,
# 2) the fraction of time points that were marked as artifacts across all
#    channels, and
# 3) how many artifacts were detected in total
ma_info

[10]:
| | channel | ma_fraction | ma_count |
|---|---|---|---|
| 0 | all channels combined | [0.9977193510908386, 0.9951374844012221] | [3, 6] |
Motion Correction
Illustrate effect of different motion correction methods
[11]:
def compare_raw_cleaned(rec, key_raw, key_cleaned, title):
    chwl = dict(channel="S3D3", wavelength="850")
    f, ax = p.subplots(1, 1, figsize=(12, 4))
    ax.plot(
        rec[key_raw].time,
        rec[key_raw].sel(**chwl),
        "r-",
        label="850nm raw",
    )
    ax.plot(
        rec[key_cleaned].time,
        rec[key_cleaned].sel(**chwl),
        "g-",
        label="850nm cleaned",
    )
    ax.set_xlim(0, 500)
    ax.set_ylabel("OD")
    ax.set_xlabel("time / s")
    ax.set_title(title)
    ax.legend()
    # indicate added artefacts
    for _, row in timing.iterrows():
        p.axvline(row["onset"], c="k", alpha=.2)
SplineSG method:
identifies baseline shifts in the data and uses spline interpolation to correct them
uses a Savitzky-Golay filter to remove spikes
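To build intuition for the Savitzky-Golay step, here is a minimal sketch (not cedalion's implementation; window length and polynomial order are illustrative choices): a low-order polynomial is fit in a sliding window and evaluated at the window centre, which attenuates isolated spikes while leaving slow signal structure almost untouched.

```python
import numpy as np

def savgol_smooth(x, window=11, order=3):
    # Fit an `order`-degree polynomial to each sliding window and take
    # its value at the window centre. On symmetric interior windows this
    # equals classic Savitzky-Golay smoothing; cedalion's SplineSG
    # additionally corrects baseline shifts via splines.
    half = window // 2
    out = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        tt = np.arange(lo, hi) - i          # positions relative to window centre
        coeffs = np.polyfit(tt, x[lo:hi], order)
        out[i] = np.polyval(coeffs, 0.0)    # evaluate the fit at the centre
    return out

# A slow sinusoid with one large spike: smoothing shrinks the spike
# while reproducing the slow signal closely.
t = np.linspace(0, 1, 101)
sig = np.sin(2 * np.pi * t)
sig[50] += 5.0
smoothed = savgol_smooth(sig)
```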
[12]:
frame_size = 10 * units.s
rec["od_splineSG"] = motion_correct.motion_correct_splineSG(
    rec["od"], frame_size=frame_size, p=1
)
compare_raw_cleaned(rec, "od", "od_splineSG", "SplineSG")

TDDR:
Temporal Derivative Distribution Repair (TDDR) is a robust-regression-based motion correction algorithm.
Doesn’t require any user-supplied parameters
See [FLVM19]
[13]:
rec["od_tddr"] = motion_correct.tddr(rec["od"])
compare_raw_cleaned(rec, "od", "od_tddr", "TDDR")

PCA
Apply motion correction using PCA filter on motion artefact segments (identified by mask).
Implementation is based on Homer3 v1.80.2 “hmrR_MotionCorrectPCA.m”
[14]:
rec["od_pca"], nSV_ret, svs = motion_correct.motion_correct_PCA(
    rec["od"], ma_mask_refined
)
compare_raw_cleaned(rec, "od", "od_pca", "PCA")

Recursive PCA
If any active channel exhibits signal change greater than STDEVthresh or AMPthresh, then that segment of data is marked as a motion artefact.
motion_correct_PCA is applied to all segments of data identified as a motion artefact.
This is repeated until maxIter is reached or no further motion artefacts are identified.
[15]:
rec["od_pca_r"], svs, nSV, tInc = motion_correct.motion_correct_PCA_recurse(
    rec["od"], t_motion, t_mask, stdev_thresh, amp_thresh
)
compare_raw_cleaned(rec, "od", "od_pca_r", "Recursive PCA")

Wavelet Motion Correction
Focused on spike artifacts
Can set iqr factor, wavelet, and wavelet decomposition level.
Higher iqr factor leads to more coefficients being discarded, i.e. more drastic correction.
[16]:
rec["od_wavelet"] = motion_correct.motion_correct_wavelet(rec["od"])
compare_raw_cleaned(rec, "od", "od_wavelet", "Wavelet")
