Tip

Download this tutorial as a Jupyter notebook, or as a Python script with code cells. We highly recommend using Visual Studio Code to execute this tutorial. Alternatively, you can run the Python script in a terminal with python basics.py from the folder where the file is located.

Tutorial 1: Basics

This tutorial will walk you through the basics of optimap.

optimap is a package that can be imported and used in Python scripts. Python is a freely available programming language that is also very popular in scientific computing. The simplest way to access the functionality of optimap is to import it at the beginning of a new Python script as follows:

import optimap as om

If optimap was installed correctly, running this command in a terminal / Python shell or as part of a Python script produces no further output. We now have access to all the functions in the optimap package, which can be accessed by typing om. followed by a specific function name (e.g. om.load_video()). If the import produces an error, optimap was not installed correctly; see the Installation Guide for further details.

optimap relies heavily on other open-source software packages and libraries, foremost NumPy, which is a numerical programming library, Matplotlib, which is a library for plotting data, and OpenCV, which is a library for computer vision. In this example, we will import numpy and matplotlib right after optimap as follows:

import numpy as np
import matplotlib.pyplot as plt

You do not necessarily have to import numpy and matplotlib in your own analysis scripts, as they are imported internally by optimap. Here we import them explicitly so that we can work directly with some of their functions and write custom code for analysis and plotting.

If you encounter any issues with the steps above please see the Installation Guide.

Loading a Video file

The following file formats are currently supported by optimap:

  • .tif, .tiff (TIFF) image stacks

  • Folder containing a sequence of TIFF or .png (PNG) images

  • .gsd, .gsh (SciMedia MiCAM 05)

  • .rsh, .rsm, .rsd (SciMedia MiCAM ULTIMA)

  • .dat (MultiRecorder)

  • .npy (numpy array)

  • .mat (MATLAB), loads the first field in the file

We can use the optimap.load_video() function to load a video file, see also Tutorial 13: Import / Export (IO). The code below will automatically download an example file from our website cardiacvision.ucsf.edu and load it into our workspace as a video. Alternatively, you could load your own file by replacing filepath with the path to a video file stored on your computer. The example file shows a fibrillating, weakly contracting rabbit heart stained with voltage-sensitive dye (Di-4-ANEPPS) imaged using a Basler acA720-520um camera at 500 fps. Due to the staining, the action potential wave is inverted, i.e. an upstroke is observed as a negative deflection. The data is from [Chowdhary et al., 2023]; we extracted a short part of the original recording and saved the otherwise unprocessed raw video data as a numpy file (.npy). Experimenters: Jan Lebert, Shrey Chowdhary & Jan Christoph (University of California, San Francisco, USA), 2023.

filepath = om.download_example_data("VF_Rabbit_1.npy")
# alternative if you downloaded the file to your desktop
# filepath = 'VF_Rabbit_1.npy'
video = om.load_video(filepath)

om.print_properties(video)
------------------------------------------------------------------------------------------
array with dimensions: (1000, 390, 300)
datatype of array: uint16
minimum value in entire array: 51
maximum value in entire array: 3884
------------------------------------------------------------------------------------------

optimap imports video data as a three-dimensional NumPy array, where the first dimension is time and the other two dimensions are the x- and y-dimensions, respectively. This convention is used throughout the library. The function print_properties() displays the dimensions and the minimal and maximal intensity values of a video. Our example file has 1000 video frames. See load_video() for additional arguments, e.g. to load only a subset of the frames or to use memory mapping to reduce memory usage.

video = om.load_video('Example.dat', start_frame=100, frames=1000, step=2, use_mmap=True)
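Because the loaded video is an ordinary NumPy array, individual frames and per-pixel time series can also be accessed with standard NumPy indexing. A short illustration of the (time, spatial, spatial) convention described above:

first_frame = video[0]            # first video frame (2D image)
pixel_trace = video[:, 100, 100]  # intensity over time at a single pixel
short_clip = video[100:200]       # frames 100 to 199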

For some file formats, optimap can also load the corresponding metadata using load_metadata(). For example, the following code loads the metadata of a MiCAM ULTIMA recording:

metadata = om.load_metadata('Example.rsh')

To crop, rotate, or flip a video see optimap.video for a list of available functions.
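These dedicated functions are the recommended route; since the video is a plain NumPy array, simple versions of such operations can also be sketched directly with NumPy, for example:

video_cropped = video[:, 50:250, 50:250]           # crop a spatial region
video_flipped = np.flip(video, axis=1)             # flip each frame along one spatial axis
video_rotated = np.rot90(video, k=1, axes=(1, 2))  # rotate each frame by 90 degrees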

Playing Videos

Videos can be viewed using either:

  1. the built-in viewer show_video() based on matplotlib

  2. Monochrome, which is a more advanced and performant viewer with better interactive features

Using the built-in Viewer

om.show_video(video, skip_frame=3);

See API documentation for show_video() for a list of available arguments. For instance, it is possible to specify a title, a value range (vmin=0, vmax=1) and colormap:

om.show_video(video, title="Example video", vmin=0, vmax=1, cmap="gray", interval=20)

Using Monochrome

Monochrome needs to be installed separately. We will provide installation instructions soon.

import monochrome as mc
mc.show(video, "raw video")

See the monochrome documentation and monochrome.show() for more information. For example, click in the video to view time traces at the selected positions.

Viewing and Extracting Traces

Using optimap it is possible to quickly extract optical traces from any location in a video and display them using the built-in player. With select_traces() it is possible to interactively select, extract, and view optical traces. Click on the video image on the left to select a single position or multiple positions. Right-click to remove positions. Close the window to continue.

traces, positions = om.select_traces(video, size=5, fps=500)

Internally, select_traces() uses extract_traces(), see below. Traces can be extracted from a single pixel or from a small window surrounding the pixel. The size parameter controls the dimensions of the window. By default, this window is a rectangle with dimensions (size, size), but it can also be set to 'disc' using the window parameter, which then sets the window to a circular region with diameter size around the position. Use optimap.trace.set_default_trace_window() to change the default window type (e.g. by calling it with 'disc' as input parameter before select_traces(), extract_traces(), or at the beginning of the script). The default size of the window is 5 by 5 pixels (rectangular) or a diameter of 5 pixels (disc). To get the exact pixel values without spatial averaging, set size=1. Note that the traces above include strong motion artifacts. If you would like to display the time axis in seconds rather than frames, use the fps (frames per second) parameter.

traces = om.extract_traces(video, positions, size=1, show=True, fps=500)
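As described above, the default window type can also be changed globally with optimap.trace.set_default_trace_window(); a short sketch using the 'disc' option mentioned in the text:

# make circular ('disc') sampling windows the default for subsequent trace extraction
om.trace.set_default_trace_window('disc')
# traces are now averaged over a disc of diameter 5 pixels around each position
traces = om.extract_traces(video, positions, size=5, show=True, fps=500)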

Note that extract_traces() takes the positions you previously selected or defined as input, see extract_traces() for more information. Internally, extract_traces() uses show_traces() to plot traces. In general, all plotting functions in optimap have an ax parameter which can be used to specify a custom matplotlib axes object. For example, we can create a figure with two subplots and show the positions on the first subplot and the traces on the second subplot with milliseconds as time unit:

fig, axs = plt.subplots(1, 2, figsize=(10, 5))

# show the selected positions on the first video frame in the left subplot
om.trace.show_positions(positions, video[0], ax=axs[0])

# time axis in milliseconds (the recording was acquired at 500 fps)
x_axis_ms = (np.arange(video.shape[0]) / 500.0) * 1000
# plot traces from the first 300 frames in the right subplot
traces = om.extract_traces(video[:300],
                           positions,
                           x=x_axis_ms[:300],
                           size=5,
                           window='disc',
                           show=True,
                           ax=axs[1])
axs[1].set_xlabel('Time [ms]')
plt.show()

Since the optical traces above were sampled from a raw optical mapping video showing a fibrillating, contracting heart (no Blebbistatin was used during the experiment), they contain strong motion artifacts. We will now use optimap to track the motion and to compensate for these motion artifacts.

Motion Compensation

optimap provides automatic routines for tracking and stabilizing motion in fluorescence videos. Here, the motion-stabilized videos are interchangeably referred to as warped videos, see also Christoph and Luther[1], Lebert et al.[2], Christoph et al.[3], Kappadan et al.[4], Kappadan et al.[5], Christoph and Ripplinger[6] for further details. Tracking motion and creating a motion-stabilized or warped video takes just a few lines of code with optimap. The fibrillating heart from our example contracts rapidly and moves slightly. Even though the motion is small, it can have a strong effect on the quality of the optical traces and cause motion artifacts. Motion artifacts can in many cases prevent further analysis of the data. We can use optimap’s motion_compensate() function to compensate for the motion:

video_warped = om.motion_compensate(video,
                                    contrast_kernel=5,
                                    presmooth_spatial=1,
                                    presmooth_temporal=1)

video_warped is the motion-stabilized version of the original video. The other parameters are explained in more detail in Tutorial 4: Motion Compensation. You will also find further background information about the motion tracking and compensation routines in [Christoph and Luther, 2018] and [Lebert et al., 2022]. Let’s view the original video and the motion-compensated video side by side using optimap.show_video_pair():

om.show_video_pair(video,
                   video_warped,
                   title1="with motion",
                   title2="without motion",
                   skip_frame=3);

We can see the effect of the numerical motion-stabilization when plotting the same traces as above but extracting them from the warped video:

traces = om.extract_traces(video_warped, positions, size=5, show=True, fps=500)

Overall, motion artifacts are significantly reduced: the strong baseline fluctuations are gone and action potential waves are less distorted. The residual motion artifacts vary and depend on factors such as contractile strength, fluorescent signal strength, and illumination, among others, see [Christoph and Luther, 2018] and [Lebert et al., 2022] for a more detailed discussion. It is possible to further reduce motion artifacts using ratiometric imaging, see Tutorial 4.

Saving and Rendering Videos

Let’s save the motion-compensated recording as a TIFF stack and also render it as an .mp4 video file.

om.video.save_video('warped_recording.tiff', video_warped)
om.video.export_video('warped_recording.mp4', video_warped, fps=50)
saving video to tiff stack warped_recording.tiff
exporting video:   0%|          | 0/1000 [00:00<?, ?it/s]
exporting video: 100%|██████████| 1000/1000 [00:18<00:00, 53.26it/s]

video exported to warped_recording.mp4
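The saved TIFF stack can be read back with load_video() like any other supported file, which is a quick way to check the export:

# reload the saved TIFF stack and inspect its properties
video_reloaded = om.load_video('warped_recording.tiff')
om.print_properties(video_reloaded)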

The animations created by optimap video-player functions such as optimap.show_video_pair() can also be exported to a video file:

animation = om.show_video_pair(video, video_warped, title1='Raw', title2='Compensated')
animation.save('Example.mp4')

See matplotlib.animation.Animation.save() for more details and Tutorial 13 for more information on how to export video files.

Visualization of Action Potential Waves

In this example, we stained the heart with the voltage-sensitive fluorescent dye Di-4-ANEPPS. Together with an appropriate bandpass filter, Di-4-ANEPPS produces a slight decrease in the measured fluorescence when the tissue depolarizes. Accordingly, in order to visualize action potential waves, this small optical signal needs to be amplified (numerically). One way to achieve this is to normalize the optical traces ‘pixel-wise’: each time-series measured in a single pixel is normalized individually. In simple terms, when normalizing an optical trace one subtracts the minimum of the time-series from each value and divides by the difference between the maximum and minimum. This removes the baseline of the time-series and all values subsequently fluctuate between 0 and 1. In our case, the depolarized phase of the action potential corresponds to values close to 0 and the diastolic interval to values close to 1. Correspondingly, the action potential wave darkens the video image. optimap has several built-in routines which perform these normalization steps automatically. A more detailed explanation of visualizing action potential or calcium waves with different post-processing and normalization functions is given in Tutorial 2.
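For illustration, the pixel-wise normalization described above can be sketched directly with NumPy. This is a minimal sketch over the full time axis, not optimap’s implementation (which, among others, provides the sliding-window variant used below):

# normalize each pixel's time series to the range [0, 1]
video_f = video_warped.astype(np.float32)
vmin = video_f.min(axis=0)  # per-pixel minimum over time
vmax = video_f.max(axis=0)  # per-pixel maximum over time
# note: pixels with a constant time series would cause a division by zero here
video_sketch = (video_f - vmin) / (vmax - vmin)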

Sliding-window Normalization

We can compute a ‘pixel-wise normalized video’ using optimap’s rolling- or sliding-window normalization function (video.normalize_pixelwise_slidingwindow()), see Tutorial 2 for more information. This function is applied to the motion-stabilized or warped video. A short temporal window is slid over each pixel’s time-series and each value is normalized using the local minimum and maximum within that window. The window_size parameter controls the length of the sliding window, which needs to be adjusted to the period of the action potential or calcium waves. Here, we use a window size of 60 frames (120 ms).

video_warped_normalized = om.video.normalize_pixelwise_slidingwindow(video_warped, window_size=60)

Before we display the motion-stabilized warped video, we mask out regions that do not show tissue using optimap.background_mask():

mask = om.background_mask(video_warped[0])
video_warped_normalized[:, mask] = 1.0
Creating mask with detected threshold 401.0

Using a grayscale colormap, action potential waves can be visualized as black/dark waves due to the pixel-wise normalization:

om.show_video(video_warped_normalized, interval=20);

In this tutorial, we demonstrated that one can visualize vortex-like rotating action potential waves across the surface of a moving fibrillating heart using optical mapping. The action potential waves were imaged on the (slightly) contracting, fibrillating heart surface in a co-moving frame of reference.

Further reading: