Content-Aware Computation for Optical Microscopy



NIH.AI Workshop: Optical Microscopy

Matthew Guay, NIH/NIBIB
matthew.guay@nih.gov
leapmanlab.github.io/nihai/jan20/

Current applications

CARE network restoring a low-SNR flatworm image
(Weigert et al., 2018). Restoring stained flatworm nuclei from low-SNR images.
Comparison of flatworm noisy data, CARE restoration, and ground truth
(Weigert et al., 2018). Medium-zoom comparison between a low-SNR flatworm image, CARE restoration, and high-SNR ground truth.
Comparison of flatworm noisy data, CARE restoration, and ground truth
(Weigert et al., 2018). High-zoom comparison between a low-SNR flatworm image, CARE restoration, and high-SNR ground truth.
CARE network restoring a low-SNR beetle embryo image
(Weigert et al., 2018). Restoring stained nuclei in red flour beetle embryos from low-SNR images.
Diffraction-limited structure enhancement: rat INS-1 cell 1
(Weigert et al., 2018). CARE enhancement of diffraction-limited secretory granules and microtubules in rat INS-1 cells.
Diffraction-limited structure enhancement: rat INS-1 cell 2
(Weigert et al., 2018). CARE enhancement of diffraction-limited secretory granules and microtubules in rat INS-1 cells.
Superresolution to correct axial anisotropy in a zebrafish retina image
(Weigert et al., 2018). CARE superresolution to correct axial anisotropy in a zebrafish retina image.
Additional superresolution examples
Additional superresolution examples from the review by Belthangady et al. (2019).
Label-free fluorescence imaging: training
(Ounkomol et al., 2018). Training a label-free prediction model on a fluorescence target.
Label-free fluorescence imaging: prediction
(Ounkomol et al., 2018). Multi-target prediction across a time series by combining individually trained networks.
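
At heart, the label-free approach is supervised image-to-image regression. Below is a minimal PyTorch sketch of that training loop, assuming a loader of paired transmitted-light and fluorescence images and any image-to-image network; all names here are placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

def train_label_free(model: nn.Module, loader, epochs: int = 10):
    """Fit a network to predict fluorescence targets from transmitted light."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # pixelwise regression against real fluorescence
    for _ in range(epochs):
        for transmitted, fluorescence in loader:
            opt.zero_grad()
            pred = model(transmitted)        # predicted fluorescence channel
            loss = loss_fn(pred, fluorescence)
            loss.backward()
            opt.step()
    return model
```
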

Computational tools

Example of a CNN architecture
Source. Example of a small CNN architecture. Convolutional features are learned and transformed into classification predictions about image content.
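
To make the diagram concrete, here is a hypothetical small classification CNN in PyTorch. Layer sizes are illustrative only, assuming 28x28 grayscale inputs; it is not the network from the figure.

```python
import torch.nn as nn

# Stacked convolutions learn image features; a fully-connected head
# turns the final feature map into class scores.
class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```
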
Example of a 2D convolution operation
Source. 2D convolution example. A sharpening kernel is applied to an "image".
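
The same operation as a minimal NumPy/SciPy sketch, with a toy array standing in for the image and the classic 3x3 sharpening kernel:

```python
import numpy as np
from scipy.ndimage import convolve

# Toy 4x4 "image" with a bright central block.
image = np.array([[10., 10., 10., 10.],
                  [10., 50., 50., 10.],
                  [10., 50., 50., 10.],
                  [10., 10., 10., 10.]])
sharpen = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])

# Each output pixel is a weighted sum of its 3x3 neighborhood,
# which exaggerates intensity differences at edges.
print(convolve(image, sharpen, mode="nearest"))
```

A CNN learns such kernel weights from data rather than fixing them by hand.
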
Visualizing layers in a simple CNN
Source. Visualization of layers inside a small image classification CNN.
Example of a U-Net architecture
(Ronneberger et al., 2015). Diagram of the original U-Net architecture.
Segmentation with U-Net
(Ronneberger et al., 2015). Image segmentation example with the U-Net. Blue input tiles provide spatial context for yellow output tiles. Large images can be segmented tile by tile.
Comparison of image-to-image translation networks
(Belthangady et al., 2019). Comparison of image-to-image translation networks. A U-Net builds on an encoder-decoder by adding skip connections between convolution blocks. A GAN pairs a generator with a discriminator that forces the generator's outputs to be indistinguishable from training samples.
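
A minimal one-level U-Net sketch in PyTorch (assuming even input height and width); the single torch.cat is the skip connection that distinguishes a U-Net from a plain encoder-decoder:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, c_in=1, c_out=1):
        super().__init__()
        self.enc = conv_block(c_in, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)      # 32 = 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, c_out, 1)

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = self.dec(torch.cat([skip, x], dim=1))  # the skip connection
        return self.head(x)
```
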

Example: thispersondoesnotexist.com

Additional superresolution examples
(Belthangady et al., 2019). Two superresolution approaches use GANs to force (fast) trainable network outputs to be indistinguishable from (slow) ground truth high-resolution computations.
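
For illustration, a sketch of the adversarial objective behind such approaches, where G and D stand in for any generator and discriminator networks; this is the generic GAN loss, not code from either paper.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_losses(G, D, low_res, high_res):
    fake = G(low_res)
    d_real = D(high_res)
    d_fake = D(fake.detach())          # detach: don't backprop into G here
    # Discriminator learns to score real images 1 and generated images 0.
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake))) / 2
    # Generator learns to make D score its outputs as real.
    g_loss = bce(D(fake), torch.ones_like(d_fake))
    return d_loss, g_loss
```
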

Trust

Network trust example: word restoration
(Belthangady et al., 2019). Demonstration of the pitfalls of data-driven image restoration. An image of the word "Witenagemot" is restored by U-Nets trained on different datasets. Each network produces a visually plausible answer, but only the network trained on the right dataset produces the correct answer.
Quantifying network confidence by predicting probability distributions
(Weigert et al., 2018). Predicting per-pixel probability distributions allows one to build pseudo-confidence intervals for network predictions. Here, even where ground-truth values differ from the point predictions, they fall within the predicted confidence interval. There is still no guarantee that the learned distributions converge to anything physical.
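
One concrete way to realize this, sketched below in the spirit of probabilistic CARE (not the authors' implementation): the network emits a per-pixel Laplace mean and scale, trains on the negative log-likelihood, and intervals come from Laplace quantiles.

```python
import torch

# The network's final layer emits two channels per pixel: a Laplace mean mu
# and a log-scale log_b. Train by minimizing the Laplace negative
# log-likelihood (the constant log 2 is dropped).
def laplace_nll(mu, log_b, target):
    return (torch.abs(target - mu) / log_b.exp() + log_b).mean()

# Laplace quantiles then give a per-pixel pseudo-confidence interval.
def pixel_interval(mu, log_b, coverage=0.9):
    half_width = -log_b.exp() * torch.log(torch.tensor(1.0 - coverage))
    return mu - half_width, mu + half_width
```
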
Using ensemble disagreement to measure prediction confidence
(Weigert et al., 2018). Using ensemble disagreement to measure prediction confidence.
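
A sketch of the ensemble recipe, assuming a list of independently trained models:

```python
import torch

# Run several independently trained networks on the same input and treat
# per-pixel spread as a confidence proxy: high std = low confidence.
@torch.no_grad()
def ensemble_predict(models, x):
    preds = torch.stack([m(x) for m in models])  # (n_models, B, C, H, W)
    return preds.mean(dim=0), preds.std(dim=0)   # consensus, disagreement
```
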
Quantifying network confidence with Bayesian neural network ensembles
(Xue et al., 2019). The authors use ensembles of Bayesian CNNs to quantify uncertainty arising from the data (aleatoric) and from the model (epistemic).
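
For illustration only, the variance decomposition commonly used for this split: each ensemble member predicts a per-pixel mean and variance; the averaged predicted variance estimates data (aleatoric) uncertainty, while the spread of the means across members estimates model (epistemic) uncertainty. This is the generic decomposition, not the authors' exact Bayesian formulation.

```python
import torch

# Each ensemble member outputs per-pixel (mean, log-variance) channels.
@torch.no_grad()
def split_uncertainty(models, x):
    mus, variances = [], []
    for m in models:
        mu, log_var = m(x).chunk(2, dim=1)  # assumes 2-channel output
        mus.append(mu)
        variances.append(log_var.exp())
    mus, variances = torch.stack(mus), torch.stack(variances)
    aleatoric = variances.mean(dim=0)  # noise the data itself carries
    epistemic = mus.var(dim=0)         # disagreement between models
    return mus.mean(dim=0), aleatoric, epistemic
```
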

Thank you!

Please send questions or comments to matthew.guay@nih.gov