Welcome to CellSeg3D!#
CellSeg3D is a toolbox for 3D segmentation of cells in light-sheet microscopy images, using napari. Use CellSeg3D to:
Review labeled cell volumes from whole-brain samples of mice imaged by mesoSPIM microscopy [1].
Train and use segmentation models from the MONAI project [2] or implement your own custom 3D segmentation models using PyTorch.
No labeled data? Try our unsupervised model, based on the WNet model, to automate your data labelling.
The models provided should be adaptable to other 3D object detection tasks outside of whole-brain light-sheet microscopy. This applies to the unsupervised model as well; feel free to try generating labels for your own data!
Requirements#
Important
This package requires PyQt5 or PySide2 to be installed first for napari to run.
If you do not have a Qt backend installed, you can use:
pip install napari[all]
to install PyQt5 by default.
This package depends on PyTorch and certain optional dependencies of MONAI. Both are installed as requirements, but if you need further assistance, please see below.
Note
A CUDA-capable GPU is not required but is strongly recommended, especially for training and, to a lesser extent, for inference.
For help with PyTorch, please see PyTorch’s website for installation instructions, with or without CUDA according to your hardware. Depending on your setup, you might wish to install torch first.
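As a quick sanity check after installation, a minimal sketch like the following (not part of the plugin itself) reports which PyTorch version is installed and whether it can see a CUDA device:

```python
# Sanity check: report the installed PyTorch version and CUDA availability.
# Falls back gracefully if torch is not installed yet.
import importlib.util


def cuda_status() -> str:
    """Return a one-line summary of the local PyTorch/CUDA setup."""
    if importlib.util.find_spec("torch") is None:
        return "PyTorch is not installed; see pytorch.org for instructions."
    import torch

    return f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"


print(cuda_status())
```

If CUDA is reported as unavailable despite a compatible GPU, reinstalling torch with the CUDA-enabled wheel from PyTorch's website usually resolves it.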
If you get errors from MONAI regarding missing readers, please see MONAI’s optional dependencies page for instructions on getting the readers required by your images.
Installation#
CellSeg3D can be run on Windows, Linux, or macOS.
For detailed installation instructions, including installing pre-requisites, please see Installation guide ⚙
Warning
ARM64 macOS users, please refer to the dedicated section
You can install napari-cellseg3d
via pip:
pip install napari-cellseg3d
For local installation after cloning from GitHub, please run the following in the CellSeg3D folder:
pip install -e .
If the installation was successful, you will find the napari-cellseg3d plugin in the Plugins section of napari.
Usage#
To use the plugin, please run:
napari
Then go to Plugins > CellSeg3D
and choose the tool you need:
Labeling🔍: Examine and refine your labels, whether manually annotated or predicted by a pre-trained model.
Training📉: Train segmentation algorithms on your own data.
Inference📊: Use pre-trained segmentation algorithms on volumes to automate cell labelling.
Utilities 🛠: Leverage various utilities, including cropping your volumes and labels, converting semantic to instance labels, and more.
Help/About…: Quick access to version info, GitHub pages, and documentation.
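To illustrate what the Utilities' semantic-to-instance conversion does conceptually, here is a minimal pure-Python sketch (not the plugin's actual implementation, which operates on full image volumes with dedicated libraries): it assigns a distinct integer ID to each 6-connected foreground component of a 3D binary mask.

```python
# Minimal sketch of semantic -> instance label conversion on a 3D binary mask.
# For illustration only; not the plugin's implementation.
from collections import deque


def semantic_to_instance(mask):
    """Label each 6-connected foreground component of a nested-list 3D mask.

    mask[z][y][x] is truthy for foreground; returns an array of the same
    shape where each connected object gets a distinct positive integer ID.
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    next_id = 0
    # 6-connectivity: voxels sharing a face belong to the same instance.
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and labels[z][y][x] == 0:
                    next_id += 1
                    labels[z][y][x] = next_id
                    queue = deque([(z, y, x)])
                    while queue:  # breadth-first flood fill
                        cz, cy, cx = queue.popleft()
                        for dz, dy, dx in neighbors:
                            pz, py, px = cz + dz, cy + dy, cx + dx
                            if (0 <= pz < nz and 0 <= py < ny and 0 <= px < nx
                                    and mask[pz][py][px]
                                    and labels[pz][py][px] == 0):
                                labels[pz][py][px] = next_id
                                queue.append((pz, py, px))
    return labels
```

Two foreground voxels end up with the same instance ID only if they are connected through shared faces; diagonal contact starts a new object.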
Hint
Many buttons have tooltips to help you understand what they do. Simply hover over them to see the tooltip.
Documentation contents#
From this page you can access the guides on the several modules available for your tasks, such as:
- Main modules:
- Utilities:
- Advanced:
Other useful napari plugins#
Important
brainreg-napari : Whole-brain registration in napari
napari-brightness-contrast : Adjust brightness and contrast of your images, visualize histograms and more
napari-pyclesperanto-assistant : Image processing workflows using pyclEsperanto
napari-skimage-regionprops : Compute region properties on your labels
Acknowledgments & References#
This plugin has been developed by Cyril Achard and Maxime Vidal, supervised by Mackenzie Mathis for the Mathis Laboratory of Adaptive Motor Control.
We also greatly thank Timokleia Kousi for her contributions to this project and the Wyss Center for project funding.
The TRAILMAP models and original weights used here were ported to PyTorch but originate from the TRAILMAP project on GitHub. We also provide a model that was trained in-house on mesoSPIM nuclei data in collaboration with Dr. Stephane Pages and Timokleia Kousi.
This plugin mainly uses the following libraries and software:
MONAI project (various models used here are credited on their website)
pyclEsperanto (for the Voronoi Otsu labeling) by Robert Haase
A new unsupervised 3D model based on the WNet by Xia and Kulis [3]
References