nf-core containers automation: how it’ll all work behind the curtain
Introduction
In nf-core, we’ve been excited to adopt Wave to automate software container builds, and have been
looking for the right way to do it.
With the announcement of Seqera Containers we felt it was the right time to put in
the effort to migrate our containers to be built using Wave, using Seqera Containers to host the container images for our modules.
You can read more about our motivation for this change in Part 1 of this blog post.
Here, in Part 2, we will dig into the technical details: how it all works behind the curtain.
You don’t need to know or understand any of this as an end-user of nf-core pipelines,
or even as a contributor to nf-core modules, but we thought it would be interesting to share the details.
It’s mostly to serve as an architectural plan for the nf-core maintainers and infrastructure teams.
Too Long; Didn't Read
Module contributors edit environment.yml files to update software dependencies
Containers are automatically built for Docker + Singularity, linux/amd64 + linux/arm64
Conda lock-files are saved for more reproducible and faster conda environments
Details are stored in the module’s meta.yml
Pipelines auto-generate Nextflow config files when modules are updated
Pipeline usage remains basically unchanged
The end goal
Before we dig into the details of how the automation will work, let’s summarise the end goal of this migration.
A key part of that end goal is the new Conda lock files. These pin the exact dependency stack used by each build, not just the top-level tool being requested,
which removes the need for conda to solve the environment at install time and ships md5 hashes for every package.
This will greatly improve the reproducibility of the software environments for conda users and the reliability of Conda CI tests.
They look something like this:
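(The snippet below is only illustrative: the package URLs, versions and md5 hashes are invented, and the exact format Wave emits may differ slightly, but conda’s explicit-spec format is representative of the idea.)

```
# Conda explicit lock file -- illustrative example only
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
@EXPLICIT
https://conda.anaconda.org/bioconda/noarch/fastqc-0.12.1-hdfd78af_0.tar.bz2#ee2379b05e5b4bbbdbcb7b474257da14
https://conda.anaconda.org/conda-forge/linux-64/openjdk-17.0.11-h1234567_0.conda#0123456789abcdef0123456789abcdef
```

Because every package is referenced by an exact URL and checksum, recreating the environment is just a download step, with no dependency resolution involved.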
Singularity: oras or https?
Unfamiliar with oras://? Don’t worry, it’s relatively new in the field.
It’s a new protocol to reference container images, similar to docker:// or shub://.
It allows Singularity to interact with any OCI (Open Container Initiative)
compliant registry to pull images.
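For example, an image reference using this protocol looks something like the following (the image path and tag are illustrative):

```
oras://community.wave.seqera.io/library/fastqc:0.12.1--24b9430827e42a8b
```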
Using oras has some advantages:
Singularity handles pulls in the process task, rather than in the Nextflow head job
This means less resource usage on the head node, and more parallelisation
Singularity can use authentication to pull from private registries
(see Singularity docs for more information).
However, there are some downsides:
Shared cache Nextflow options such as $NXF_SINGULARITY_CACHEDIR and $NXF_SINGULARITY_LIBRARYDIR are not used
Singularity must be installed when downloading images for offline use
oras:// is only supported by recent versions of Singularity / Apptainer
As such, we will continue to use https downloads for Singularity SIF images for now.
However, we will start to provide new -profile singularity_oras profiles for anyone who
would prefer to fetch images using the newer oras protocol.
If you’d like to know more, check out the amazing bytesize talk
by Marco Claudio De La Pierre (@marcodelapierre) from June 2024:
Modules
All nf-core pipelines use a single container per process, and the majority of processes are
encapsulated within shared modules in the nf-core/modules repository.
As such, we must start with containers at the module level.
GitHub issue
For the latest discussion and progress on bulk-updating existing nf-core modules, see GitHub issue
nf-core/modules#6698.
Changes to main.nf
With this switch, we simplify the container declaration, listing only the default container image: Docker, for linux/amd64.
There will no longer be any string interpolation or logic within the container string.
The container string is never edited by hand and is fully handled by the modules automation.
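For example, the FastQC module declarations might end up looking roughly like this. The image tag is invented, and whether the registry host is written out in the string or supplied separately via docker.registry is an implementation detail (we assume the latter here, matching the registry discussion further down):

```groovy
conda "${moduleDir}/environment.yml"
// One plain string: the default Docker image, built for linux/amd64.
// Tag is illustrative; the registry host is assumed to come from docker.registry.
container "library/fastqc:0.12.1--24b9430827e42a8b"
```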
We considered removing both conda and container declarations from the module main.nf file entirely.
However, we see benefit in keeping these in this form because:
It’s clearer to those exploring the code about what the module requires.
The container string is needed to tie the module to the pipeline config files
(see Building config files below)
Removing the container logic from the string should be a big win for readability.
Changes to meta.yml
Through the magic of automation, we will append and then validate the following fields
within the module’s meta.yml file. Following the
FastQC example
from above:
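The exact schema is still being settled, so treat the following as a sketch only: the field names, image tags, build and scan IDs, and lock-file URLs are all invented for illustration.

```yaml
# Sketch only -- field names, tags, IDs and URLs are illustrative
containers:
  docker:
    linux/amd64: library/fastqc:0.12.1--24b9430827e42a8b
    linux/arm64: library/fastqc:0.12.1--8a3b2c1d9e0f4a5b
  singularity:
    linux/amd64: oras://community.wave.seqera.io/library/fastqc:0.12.1--24b9430827e42a8b
    linux/arm64: oras://community.wave.seqera.io/library/fastqc:0.12.1--8a3b2c1d9e0f4a5b
conda_lock:
  linux/amd64: https://wave.seqera.io/v1alpha1/builds/bd-24b9430827e42a8b_1/condalock
  linux/arm64: https://wave.seqera.io/v1alpha1/builds/bd-8a3b2c1d9e0f4a5b_1/condalock
build_id: bd-24b9430827e42a8b_1
scan_id: sc-9f8e7d6c5b4a3210_1
```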
All images are built at the same time, avoiding Conda dependency drift.
The build and scan IDs allow us to trace back to the build logs and security scans for these images.
The Conda lock files are a new addition to the nf-core ecosystem and will help reproducibility for Conda users.
These are generated during the Docker image build and are specific to each architecture.
The lock files can be accessed remotely via the Wave API, so we can treat them much in the same
way that we treat remote container images.
Pipelines
Container information at module-level is great, but it’s not enough.
Nextflow doesn’t know about module meta.yml files (they’re an nf-core invention),
so we need to tie these into the pipeline code where they will run.
The heart of the solution is to auto-generate a config file for each software packaging type (Docker, Singularity, Conda)
and platform (linux/amd64 and linux/arm64).
These will be created by the nf-core/tools CLI and never be edited by hand, so no manual merging will be required.
They’ll simply be regenerated and overwritten every time a version of a module is updated.
Each config file will specify the container or conda directive for every process in the pipeline:
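For example, the generated Docker linux/amd64 config might contain entries along these lines (process names and image tags are illustrative, and we assume the registry host is supplied via docker.registry):

```groovy
// conf/containers/docker.config -- regenerated automatically by nf-core/tools
process {
    withName: 'FASTQC' {
        container = 'library/fastqc:0.12.1--24b9430827e42a8b'
    }
    withName: 'MULTIQC' {
        container = 'library/multiqc:1.25--9c3d2b1a0f8e7d6c'
    }
}
```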
Likewise, the conda config files will point to the lock files for each process:
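Something like this, where each URL resolves to the lock file generated by the corresponding Wave build (the URL shape shown here is illustrative):

```groovy
// conf/containers/conda.config -- regenerated automatically by nf-core/tools
process {
    withName: 'FASTQC' {
        conda = 'https://wave.seqera.io/v1alpha1/builds/bd-24b9430827e42a8b_1/condalock'
    }
    withName: 'MULTIQC' {
        conda = 'https://wave.seqera.io/v1alpha1/builds/bd-1a2b3c4d5e6f7a8b_1/condalock'
    }
}
```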
The main nextflow.config file will import these config files, depending on the profile selected
by the person running the pipeline.
Singularity will have separate config files and associated -profiles for both oras and https containers,
so that users can choose which to use.
Local modules and any edge-case shared modules that cannot use the Seqera Containers automation
will need the pipeline developer to hardcode container names and conda lock files manually.
These can be added to the above config files as long as they remain above the comment line:
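For example (the exact wording and position of the marker comment are still to be decided):

```groovy
// conf/containers/docker.config
process {
    // Manually maintained entries for local / edge-case modules stay up here
    withName: 'LICENSED_TOOL' {
        container = 'quay.io/nf-core/licensed_tool:2.1.0'
    }

    // >>> Everything below this line is managed by nf-core/tools -- do not edit <<<
    withName: 'FASTQC' {
        container = 'library/fastqc:0.12.1--24b9430827e42a8b'
    }
}
```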
We’re also taking this opportunity to update the apptainer and mamba profiles:
they will import the exact same config files as the singularity and conda profiles, respectively.
Here’s roughly how the nextflow.config file with the -profile config includes will look:
nextflow.config
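A sketch only: the conf/ file paths, profile contents and registry value below are illustrative, but the profile names follow the list of changes underneath.

```groovy
profiles {
    docker {
        docker.enabled         = true
        includeConfig 'conf/containers/docker.config'
    }
    docker_arm {
        docker.enabled         = true
        includeConfig 'conf/containers/docker_arm.config'
    }
    singularity {
        singularity.enabled    = true
        singularity.autoMounts = true
        includeConfig 'conf/containers/singularity.config'
    }
    singularity_arm {
        singularity.enabled    = true
        singularity.autoMounts = true
        includeConfig 'conf/containers/singularity_arm.config'
    }
    singularity_oras {
        singularity.enabled    = true
        singularity.autoMounts = true
        includeConfig 'conf/containers/singularity_oras.config'
    }
    apptainer {
        apptainer.enabled      = true
        apptainer.autoMounts   = true
        // Re-uses the Singularity config files
        includeConfig 'conf/containers/singularity.config'
    }
    conda {
        conda.enabled          = true
        // Conda lock files, one per process
        includeConfig 'conf/containers/conda.config'
    }
    conda_env {
        conda.enabled          = true
        // Old behaviour: fall back to each module's environment.yml declaration
    }
    mamba {
        conda.enabled          = true
        conda.useMamba         = true
        includeConfig 'conf/containers/conda.config'
    }
    // (_arm variants of the conda and mamba profiles would follow the same pattern)
}

// Base registry for the bare image names used in the config files above
docker.registry = 'community.wave.seqera.io'
```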
Note
Boilerplate code (e.g. disabling other container engines) has been removed from this
blog post code snippet for clarity. It will still be included in the pipelines.
We may move this whole code block into its own separate config file with includeConfig
so that the main nextflow.config file is easier to read.
Note that there are a few changes here:
New profiles with _arm suffixes for linux/arm64 architectures
New profiles with _oras suffixes for using the oras:// protocol
The apptainer profiles now use the singularity config files
The conda profiles now use Conda lock files instead of environment.yml files
New conda_env profiles for those wanting to keep the old behaviour
New mamba profiles, using the conda config files
Base registries set to Seqera Containers
Because we’re only defining the image name and making use of the base container registry config option,
it should still be simple to mirror containers to custom Docker registries and overwrite only
docker.registry as before.
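For example, a site-specific config that mirrors the images to an internal registry could simply set (hostname is a placeholder):

```groovy
// e.g. in an institutional config file
docker.registry = 'containers.example.org'
```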
Automation - Modules
The nf-core community loves automation.
It’s baked into the core of our community from our shared interest in automating workflows.
We have linting bots, template updates, Slack workflows, pipeline announcements. You name it, we’ve automated it.
In these sections, we’ll cover how we’re going to build all of these shiny new things without manual intervention.
GitHub issue
For the latest updates on modules container automation, see
nf-core/modules#6694.
Updating conda packages
The automation begins when a contributor wants to add a piece of software to a container.
For instance, they decide that they need samtools installed.
The contributor updates the environment.yml and adds a line with samtools:
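For example (pinned versions are illustrative):

```yaml
channels:
  - conda-forge
  - bioconda
dependencies:
  - bioconda::fastqc=0.12.1
  # newly added dependency
  - bioconda::samtools=1.21
```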
That will kick off the container image generation factory
(it could equally be a change to remove a package, or change a pinned version).
A commit will be pushed automatically with an updated meta.yml file pointing to the new containers,
plus new nf-test snapshots for the software version checks.
Only the interesting part needs to be edited by the developer (which tools to use) and all other
steps are fully automated.
Container image creation
As with most automation in nf-core, container creation will happen in GitHub Actions.
Edits to a module’s environment.yml file will trigger a workflow that uses the
wave-cli to build the container images.
GitHub Actions identifies changes in the environment.yml file.
wave-cli is executed on the updated environment file.
Seqera Containers builds new containers for various platforms and architectures.
GitHub Actions runs the stub tests and commits the updated version snapshot.
Once this GitHub Actions run completes it will push the new commits back to the PR
and the regular nf-test CI will run.
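As a rough sketch of how this could be wired up, the workflow might trigger on environment.yml changes and call wave-cli for each platform. Everything here is illustrative: the workflow file name, paths, and the exact wave-cli flags may differ in the final implementation.

```yaml
# .github/workflows/wave-build.yml -- sketch only
on:
  pull_request:
    paths:
      - "modules/**/environment.yml"

jobs:
  wave-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # (install wave-cli here)
      # Build and freeze an image from the changed environment file;
      # the module path below is a placeholder for the file detected as changed.
      - name: Build container with Wave
        run: |
          wave --conda-file modules/nf-core/fastqc/environment.yml \
               --platform linux/amd64 \
               --freeze --await
```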
nf-test versions snapshot
One of the primary reasons that we were so excited to adopt nf-test was the snapshot functionality.
Every test has a snapshot file with the expected outputs, and the outputs are deterministic
(no binary files, and no dates).
In that snapshot, we also capture the versions of the dependencies for the module
(example shown for bowtie2):
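A simplified sketch of the relevant part of a .nf.test.snap file (the surrounding structure is trimmed and the version numbers are invented):

```json
{
    "versions": {
        "content": [
            "BOWTIE2_ALIGN: { bowtie2: 2.5.4, samtools: 1.21, pigz: 2.6 }"
        ]
    }
}
```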
This gives a second level of confirmation that the containers were correctly generated.
When updating containers, the nf-test snapshot is parsed and compared to the snapshot
from before the new containers were built.
The updated snapshot is only accepted if nothing other than the versions key has changed,
so it should only vary in the software versions reported.
If any other changes are detected, the snapshot update is rejected and not committed back to the PR.
Then PR reviewers will see failing tests and need to manually update the snapshot file.
This means that we can automatically commit the updated snapshot file in the PR if
the tool output is unchanged, saving the developer from taking this extra step.
Tip
Note that in the above example, the versions are in the snapshot in plain text,
not using an md5 hash. This is a change that we will roll out for all modules,
as it makes verification in the PR much easier.
Automatic version bumps with Renovate
We’ve recently adopted Renovate, a tool for automated dependency updates.
It’s multi-platform and multi-language and has become pretty popular in the DevOps space.
It’s similar to GitHub’s Dependabot,
but supports more languages and frameworks and, more importantly for nf-core, lets us define custom rules for our own dependency types.
Renovate runs on a schedule and automatically updates software versions for us based on the
specifications we’ve laid out in a common config.
The magic starts with some nf-core automation to add renovate comments to the environment.yml file:
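Something along these lines, using Renovate-style custom manager comments (the exact comment syntax nf-core settles on may differ):

```yaml
channels:
  - conda-forge
  - bioconda
dependencies:
  # renovate: datasource=conda depName=bioconda/fastqc
  - bioconda::fastqc=0.12.1
```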
The comments allow some scary regexes to find the conda dependencies and their versions
in nf-core/modules, and check if there’s a new version available.
If there is a new version available, the Renovate bot will create a PR bumping the version,
which in turn will kick off the container creation GitHub Action.
The process will be very similar to the diagram laid out above, but we can go a step further:
if the new software versions have no effect on the test results, the PR will be merged automatically.
In other words: if all tests pass, the pull request is merged without human intervention.
In case of test failures, the Renovate bot automatically requests a review from the appropriate
module maintainer using the CODEOWNERS file.
The maintainer then steps in to fix failing tests and request a final review before merging.
This efficient process ensures that software dependencies stay current with minimal manual oversight,
reducing noise and streamlining development workflows.
This will hopefully be the end of the “can I get a review on this version bump” requests in #review-requests!
Automation - Pipelines
We now have nice, up-to-date software packaging for the shared nf-core/modules with minimal manual intervention.
However, we need to propagate these changes to the pipelines that use these modules.
There are two main areas that we need to address:
Building config files
As described above, pipelines will have a set of config files automatically generated
that specify the container or conda environment for each process in the pipeline.
Creation of these files will be triggered whenever installing, updating or removing a module, via the nf-core CLI.
The config files will be completely regenerated each time, so there will never be any manual merging required.
The trickiest part of this process is linking the module containers to the pipeline processes.
Modules can be imported into pipelines with any alias or scope.
We need to match this against the values that we find in the module meta.yml files:
Run nextflow inspect to generate the default Docker linux/amd64 config (see the example command below)
Copy this file and replace the container names with the relevant container for each platform,
using the meta.yml files that match the Docker container name
This is why we duplicate the default docker container in both main.nf and meta.yml files -
it allows us to link the module to the pipeline.
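Step one might look roughly like this (the output path and profile names are illustrative):

```bash
# Dry-run the pipeline and emit the container for every process as a config snippet
nextflow inspect . -profile test,docker -format config > conf/containers/docker.config
```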
Warning
Currently, nextflow inspect works by executing a dry-run of the pipeline.
We can run using -profile test, but any processes that are skipped due to pipeline logic
will not be included in the config file.
This is a known limitation and we are currently working hard with the Nextflow team at Seqera on a new version of nextflow inspect
which will return all processes, regardless of whether they are skipped or not.
This will be a requirement for progression with the Seqera Containers migration.
Note
This process of copying files and using string substitution is a bit of a hack.
If you have any ideas on how to improve this, please let us know!
Edge cases and local modules
There will always be edge cases that won’t fit the automation described above.
Not all software tools can be packaged on Bioconda (though we encourage it where possible!).
For example, some tools have licensing restrictions that prevent them from being distributed in this way.
Other edge cases include optional container variants for GPU support or other specialised hardware.
Local modules won’t be able to benefit from the Wave CLI automation to fetch containers from Seqera Containers
and will have to be manually updated by the pipeline developer.
For these reasons, we will still support custom container declarations in modules
without use of Seqera Containers. It will be up to the module contributors to ensure that these
are correctly specified and kept up to date manually.
These can be specified in the main.nf file and added to the autogenerated platform-specific config files,
as long as they remain above the comment line:
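The config entries look just like the earlier example above the marker comment; on the module side, a local module that can’t use the automation might look something like this (tool name, image and paths are all illustrative):

```groovy
// modules/local/local_tool/main.nf -- maintained by hand
process LOCAL_TOOL {
    // Hardcoded container, hosted under the nf-core quay.io account
    container "quay.io/nf-core/local_tool:1.0.0"

    input:
    path input_file

    output:
    path "out.txt", emit: report

    script:
    """
    local_tool --in ${input_file} --out out.txt
    """
}
```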
If at all possible, software should be packaged with Bioconda and Seqera Containers.
Failing that, custom containers should be stored under the nf-core account on quay.io.
Other Docker registries / accounts should only be used where licensing issues
restrict software redistribution.
Custom containers can also be built using Wave in continuous integration;
they just can’t be pushed to the Seqera Containers registry.
However, they can be pushed to quay.io automatically.
We can do this using a similar mechanism to the automation used for changing environment.yml files,
simply replacing it with Dockerfiles (see nf-core/modules#4940).
Downloads
The nf-core CLI has a download command that downloads pipeline code and software for offline use.
This has a lot of hardcoded logic around the previous syntax of container strings and will need a significant rewrite.
By the time we get to running this tool, the pipeline has all containers defined in configuration files.
As such, we should be able to run the new and improved nextflow inspect command to return the container names
for every process for a given platform. Once collected, we can download the images as before.
The advantage of using this approach is that the download logic can be far simpler.
Complex syntax for container strings is not a problem, as we can rely on Nextflow to resolve these to simple strings.
Roadmap
This blog post lays out a future vision for this project.
It serves as a rubber-duck for the authors, a place to request feedback from the community,
and a roadmap for developers.
There are many pieces of work that must come together for its completion,
including but not limited to: