These scripts provide a comprehensive OMERO integration for running bioimage analysis workflows on SLURM clusters.
- Multi-format support: TIFF, OME-TIFF, and ZARR
- Automatic data export from OMERO to SLURM clusters
- Intelligent format conversion with optimization
- Comprehensive workflow tracking and monitoring
- Automatic result import back to OMERO
- Configurable output organization options
These scripts work together with the BIOMERO library to enable seamless bioimage analysis workflows directly from OMERO.
For the easiest deployment and integration with other FAIR infrastructure, consider using the NL-BIOMERO stack:
- NL-BIOMERO deployment repo: https://github.com/Cellular-Imaging-Amsterdam-UMC/NL-BIOMERO
- OMERO.biomero OMERO.web plugin: https://github.com/Cellular-Imaging-Amsterdam-UMC/OMERO.biomero
- Pre-built BIOMERO processor container: https://hub.docker.com/r/cellularimagingcf/biomero
The NL-BIOMERO stack provides Docker Compose configurations that automatically set up OMERO.web with the OMERO.biomero plugin, databases, and all necessary dependencies.
In the figure below we show our BIOMERO framework for BioImage analysis in OMERO.
BIOMERO consists of the Python library BIOMERO and the integrations within OMERO through the scripts in this repository.
In addition to these command-line scripts, BIOMERO 2.0 introduces a modern web-based user interface through the OMERO.biomero web plugin. This plugin provides:
- Interactive Workflow Management: Browse and launch workflows with a modern web interface
- Real-time Progress Tracking: Monitor job progress with live updates
- Workflow History: View past executions with full tracking and metadata
- Dashboard Overview: Get an overview of all your workflows at a glance
For new users, we recommend the NL-BIOMERO stack with the web interface for the complete experience. These scripts remain fully supported for advanced users who need custom scripting capabilities.
- `SLURM_Run_Workflow.py`: Primary workflow orchestrator with ZARR support
- `SLURM_Run_Workflow_Batched.py`: Batch processing variant for multiple datasets
- `SLURM_CellPose_Segmentation.py`: Specialized CellPose segmentation workflow
- `_SLURM_Image_Transfer.py`: Export data from OMERO to SLURM (with cleanup)
- `SLURM_Remote_Conversion.py`: Intelligent format conversion on SLURM
- `SLURM_Get_Results.py`: Import workflow results back to OMERO
- `SLURM_Get_Update.py`: Monitor and update workflow status
- `SLURM_Init_environment.py`: Initialize SLURM environment
- `SLURM_check_setup.py`: Validate BIOMERO configuration
- Export: Selected data transferred from OMERO to SLURM cluster
- Convert: Smart format conversion (with ZARR no-op optimization)
- Process: Computational workflows executed on SLURM
- Monitor: Job progress tracking and status updates
- Import: Results imported back to OMERO with configurable organization
- Cleanup: Temporary artifacts automatically removed
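The six phases above can be sketched as a simple orchestration loop. Note that the function below is purely illustrative (it is not the actual BIOMERO API); it only models the order of the phases and the ZARR no-op optimization, where the conversion step is skipped for native ZARR input.

```python
# Illustrative sketch of the six-phase BIOMERO pipeline.
# run_pipeline and its phase entries are hypothetical stand-ins,
# NOT the real BIOMERO API; they model only the order of operations.

def run_pipeline(image_ids, workflow, use_zarr=False):
    log = []
    log.append(("export", image_ids))            # 1. OMERO -> SLURM transfer
    if not use_zarr:                             # 2. conversion is a no-op
        log.append(("convert", "tiff->zarr"))    #    (skipped) for ZARR input
    log.append(("process", workflow))            # 3. run workflow on SLURM
    log.append(("monitor", "poll until done"))   # 4. track job status
    log.append(("import", "results -> OMERO"))   # 5. bring results back
    log.append(("cleanup", "remove temp data"))  # 6. remove artifacts
    return log

phases = run_pipeline([1, 2, 3], "cellpose", use_zarr=True)
print([p[0] for p in phases])
# → ['export', 'process', 'monitor', 'import', 'cleanup']
```

With `use_zarr=False` the same call would include the `convert` phase; this is the efficiency gain the "Use ZARR Format" option provides.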
1. Change into the scripts location of your OMERO installation:

   ```
   cd /opt/omero/server/OMERO.server/lib/scripts/
   ```

2. Clone the repository with a unique name (e.g. "biomero"):

   ```
   git clone https://github.com/NL-BioImaging/biomero-scripts.git biomero
   ```

3. Update your list of installed scripts by examining the list of scripts in OMERO.insight or OMERO.web, or by running the following command:

   ```
   <path>/<to>/<bin>/omero script list
   ```
Install system requirements on the PROCESSOR nodes:
- the Python libraries, e.g.

  ```
  python3 -m pip install biomero ezomero==1.1.1 tifffile==2020.9.3 omero-metadata==0.12.0
  ```

- the OMERO CLI Zarr plugin, e.g.

  ```
  python3 -m pip install omero-cli-zarr==0.5.3 && yum install -y blosc-devel
  ```

- the bioformats2raw-0.7.0, e.g.

  ```
  unzip -d /opt bioformats2raw-0.7.0.zip && export PATH="$PATH:/opt/bioformats2raw-0.7.0/bin"
  ```
These examples work on Linux CentOS (i.e. the official OMERO containers); for Windows, or for other Linux package managers, check the original repositories (OMERO CLI Zarr and bioformats2raw) for installation details.
Just to reiterate: to run all these scripts, you need all of the following requirements installed on the OMERO PROCESSOR node:
- Python libraries:
- biomero (latest version, or at least matching the version number of this repository)
- ezomero==1.1.1
- tifffile==2020.9.3
- omero-metadata==0.12.0
- omero-cli-zarr==0.5.3 (see below)
- the OMERO CLI Zarr plugin, e.g.

  ```
  python3 -m pip install omero-cli-zarr==0.5.3 && yum install -y blosc-devel
  ```

- the bioformats2raw-0.7.0, e.g.

  ```
  unzip -d /opt bioformats2raw-0.7.0.zip && export PATH="$PATH:/opt/bioformats2raw-0.7.0/bin"
  ```
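A quick way to verify a PROCESSOR node before running the scripts is a small pre-flight check like the sketch below. The helper itself is not part of BIOMERO; the module and binary names it checks are taken from the requirement list above.

```python
# Minimal pre-flight check for a PROCESSOR node (not part of BIOMERO itself).
import importlib.util
import shutil

def missing_requirements(modules, binaries):
    """Return the Python modules and executables that cannot be found."""
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    missing += [b for b in binaries if shutil.which(b) is None]
    return missing

# Names corresponding to the requirement list above.
gaps = missing_requirements(
    modules=["biomero", "ezomero", "tifffile"],
    binaries=["bioformats2raw"],
)
print("Missing:", gaps or "nothing - node looks ready")
```

Run it with the PROCESSOR node's `python3`; an empty result means the imports and the `bioformats2raw` binary are resolvable from that environment (it does not check pinned versions).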
1. Change into the repository location cloned into during installation:

   ```
   cd /opt/omero/server/OMERO.server/lib/scripts/<UNIQUE_NAME>
   ```

2. Update the repository to the latest version:

   ```
   git pull --rebase
   ```

3. Update your list of installed scripts by examining the list of scripts in OMERO.insight or OMERO.web, or by running the following command:

   ```
   <path>/<to>/<bin>/omero script list
   ```
This repository provides example OMERO scripts for using BIOMERO. These scripts do not work without the BIOMERO library installed on the OMERO servers/processors that will run them.
Always start by initializing the Slurm environment at least once, for example using `admin/SLURM Init environment`. This might take a while to download all container images if you have configured many.
For example, `__workflows/SLURM Run Workflow` should provide an easy way to send data to Slurm, run the configured and chosen workflow, poll Slurm until the jobs are done (or error out), and retrieve the results when the job is done. This workflow script uses some of the other scripts, like:

- `_data/SLURM Image Transfer`: to export your selected images / dataset / screen as ZARR files to a Slurm dir.
- `_data/SLURM Get Results`: to import your Slurm job results back into OMERO as a zip, dataset or attachment.
Other example OMERO scripts are:
- `_data/SLURM Get Update`: to run while you are waiting on a job to finish on Slurm; it will try to get a % progress from your job's logfile. This depends on your job/workflow logging a %, of course.
- `__workflows/SLURM Run Workflow Batched`: this will allow you to run several `__workflows/SLURM Run Workflow` jobs in parallel, by batching your input images into smaller chunks (e.g. turn 64 images into 2 batches of 32 images each). It will then poll all these jobs.
- `__workflows/SLURM CellPose Segmentation`: this is a more primitive script that only runs the actual workflow `CellPose` (if correctly configured). You will need to manually transfer data first (with `_data/SLURM Image Transfer`) and manually retrieve data afterward (with `_data/SLURM Get Results`).
BIOMERO.scripts already have comprehensive DEBUG logging enabled by default! All scripts are configured with:
- DEBUG level logging to a rotating log file (`biomero.log` in `/opt/omero/server/OMERO.server/var/log/`)
- INFO level logging to stdout (visible in OMERO.web script output)
- Rotating log files (500MB max, 9 backups) to prevent disk space issues
- Pre-silenced verbose libraries (omero.gateway.utils, paramiko.transport, invoke) at WARNING level
Each script automatically configures logging like this:
```python
import logging
import logging.handlers
import os
import sys

# LOGDIR and runScript() are defined elsewhere in each script;
# LOGDIR points at the OMERO log directory (var/log).

if __name__ == '__main__':
    # Comprehensive DEBUG logging to rotating biomero.log file
    stream_handler = logging.StreamHandler(sys.stdout)
    stream_handler.setLevel(logging.INFO)  # Only INFO+ to stdout
    logging.basicConfig(level=logging.DEBUG,  # Full DEBUG to file
                        format="%(asctime)s %(levelname)-5.5s [%(name)40s] "
                               "[%(process)d] (%(threadName)-10s) %(message)s",
                        handlers=[
                            stream_handler,
                            logging.handlers.RotatingFileHandler(
                                os.path.join(LOGDIR, 'biomero.log'),
                                maxBytes=500000000, backupCount=9)
                        ])
    # Silence verbose libraries
    logging.getLogger('omero.gateway.utils').setLevel(logging.WARNING)
    logging.getLogger('paramiko.transport').setLevel(logging.WARNING)
    logging.getLogger('invoke').setLevel(logging.WARNING)

    runScript()
```

If the default DEBUG logging is too verbose, you can modify any script to use less logging:
```python
# Change DEBUG to INFO for less verbose logging
logging.basicConfig(level=logging.INFO, ...)

# Or silence additional libraries
logging.getLogger('biomero').setLevel(logging.INFO)
logging.getLogger('fabric').setLevel(logging.WARNING)
```

- Main logs: `/opt/omero/server/OMERO.server/var/log/biomero.log*`
- OMERO logs: Standard OMERO logging locations
- Rotation: Logs rotate when reaching 500MB, keeping 9 backups
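Note that with a 500MB size cap and 9 backups, the worst-case disk footprint of one rotating log is the active file plus all backups, about 5 GB:

```python
# Worst-case disk usage of the rotating log configuration above:
# the active biomero.log plus up to 9 rotated backups.
max_bytes = 500_000_000   # maxBytes from the handler config
backups = 9               # backupCount from the handler config

worst_case = max_bytes * (backups + 1)
print(f"{worst_case / 1e9:.1f} GB")  # → 5.0 GB
```

Budget disk space on the OMERO server accordingly, especially if several scripts log to the same directory.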
See LICENSE. Note that this is a copyleft license, since we copied from OME's scripts, which carry a copyleft license.
This section provides machine-readable information about your scripts. It will be used to help generate a landing page and links for your work. Please modify all values on each branch to describe your scripts.
BIOMERO.scripts repository
5.6
5.6
T.T. Luik
Amsterdam UMC
https://nl-bioimaging.github.io/biomero/
These scripts are to be used with the BIOMERO library.
They show how to use the library to run workflows directly from OMERO on a Slurm cluster.
- Select your images, datasets, or plates in OMERO
- Run the SLURM Run Workflow script
- Choose your desired workflow (e.g., CellPose, StarDist)
- Configure workflow parameters
- Select output organization options
- Execute - data will be automatically exported, processed, and imported back
- Select your data in OMERO
- Run the SLURM Run Workflow script
- ✅ Check "Use ZARR Format" for workflows that support native ZARR input
- Choose your ZARR-compatible workflow
- Configure parameters and output options
- Execute - conversion step will be skipped for efficiency
For advanced users who need custom processing:
- Use SLURM Image Transfer to export data in your preferred format
- Use SLURM Remote Conversion if format conversion is needed
- Process data using custom workflows on SLURM
- Use SLURM Get Results to import results back to OMERO
- SLURM Check Setup: Validate your BIOMERO configuration
- SLURM Get Update: Monitor job progress and retrieve logs
- SLURM Init Environment: Initialize or update SLURM environment