Reproducible computational environments using containers: Introduction to Singularity

Singularity: Getting started

Overview

Teaching: 30 min
Exercises: 20 min
Questions
  • What is Singularity and why might I want to use it?

Objectives
  • Understand what Singularity is and when you might want to use it.

  • Undertake your first run of a simple Singularity container.

The episodes in this lesson will introduce you to the Singularity container platform and demonstrate how to set up and use Singularity.

This material is split into 2 parts:

Part I: Basic usage, working with images

  1. Singularity: Getting started: This introductory episode

Working with Singularity containers:

  2. The Singularity cache: Why, where and how does Singularity cache images locally?
  3. Running commands within a Singularity container: How to run commands within a Singularity container.
  4. Working with files and Singularity containers: Moving files into a Singularity container; accessing files on the host from within a container.
  5. Using Docker images with Singularity: How to run Singularity containers from Docker images.

Part II: Creating images, running parallel codes

  1. Preparing to build Singularity images: Getting started with the Docker Singularity container.
  2. Building Singularity images: Explaining how to build and share your own Singularity images.
  3. Running MPI parallel jobs using Singularity containers: Explaining how to run MPI parallel codes from within Singularity containers.

Work in progress…

This lesson is new material that is under ongoing development. We will introduce Singularity and demonstrate how to work with it. As the tools and best practices continue to develop, elements of this material are likely to evolve. We welcome any comments or suggestions on how the material can be improved or extended.

Singularity - Part I

What is Singularity?

Singularity is another container platform. In some ways it appears similar to Docker from a user perspective, but in others, particularly in the system’s architecture, it is fundamentally different. These differences mean that Singularity is particularly well-suited to running on distributed, High Performance Computing (HPC) infrastructure.

System administrators will not, generally, install Docker on shared computing platforms such as lab desktops, research clusters or HPC platforms because the design of Docker presents potential security issues for shared platforms with multiple users. Singularity, on the other hand, can be run by end-users entirely within “user space”, that is, no special administrative privileges need to be assigned to a user in order for them to run and interact with containers on a platform where Singularity has been installed.

Getting started with Singularity

A little history…

Singularity is open source and was initially developed within the research community. Some months ago, the project was “forked”, something that is not uncommon within the open source software community, with the software effectively splitting into two projects going in different directions. The fork is being developed by a commercial entity, Sylabs.io, who provide both the free, open source SingularityCE (Community Edition) and Pro/Enterprise editions of the software. The original open source Singularity project has recently been renamed to Apptainer and has moved into the Linux Foundation. At the time of writing, Apptainer has recently made its initial release available but this has not yet propagated to many HPC systems. We will generally be working with versions of Singularity released before the fork as part of this course, so these changes are not directly relevant. However, it is useful to be aware of this history and that you may see both Singularity and Apptainer being used within the research community over the coming months and years.

Container technologies on HPC

Singularity/Apptainer are not the only container technologies used on HPC systems - you may also see other container technologies on HPC platforms you have access to (e.g. Podman, CharlieCloud, Sarus). Singularity/Apptainer are currently the most widespread technologies available on HPC systems. However, many of these technologies work in a similar way for users, so what you learn here will help you if you need to use them. Furthermore, all of these technologies provide a way to convert Docker container images to their own format, as we do with Singularity in this lesson.

Part I of this Singularity material is intended to be undertaken on a remote platform where Singularity has been pre-installed.

If you’re attending a taught version of this course, you will be provided with access details for a remote platform made available to you for use for Part I of the Singularity material. This platform will have the Singularity software pre-installed.

Installing Singularity on your own laptop/desktop

If you have a Linux system on which you have administrator access and you would like to install Singularity on this system, some information is provided at the start of Part II of the Singularity material. Unless you are experienced with building software from source code in a Linux environment, we strongly recommend working on a platform with Singularity pre-installed when undertaking this section of the course.

Later in this material, when building Singularity images, we’ll look at running Singularity on your local system through Docker.

Sign in to the remote platform, with Singularity installed, that you’ve been provided with access to. Check that the singularity command is available in your terminal:

Loading a module

HPC systems often use modules to provide access to software on the system so you may need to use the command:

$ module load singularity

before you can use the singularity command on the system. Note: this is not needed on ARCHER2.

$ singularity --version
singularity version 3.7.3

Depending on the version of Singularity installed on your system, you may see a different version.

Images and containers

We’ll start with a brief note on the terminology used in this section of the course. We refer to both images and containers. What is the distinction between these two terms?

Images are bundles of files including an operating system, software and potentially data and other application-related files. They may sometimes be referred to as a disk image or container image and they may be stored in different ways, perhaps as a single file, or as a group of files. Either way, we refer to this file, or collection of files, as an image.

A container is a virtual environment that is based on an image. That is, the files, applications, tools, etc that are available within a running container are determined by the image that the container is started from. It may be possible to start multiple container instances from an image. You could, perhaps, consider an image to be a form of template from which running container instances can be started.

Getting an image and running a Singularity container

If you recall from learning about Docker, Docker images are formed of a set of layers that make up the complete image. When you pull a Docker image from Docker Hub, you see the different layers being downloaded to your system. They are stored in your local Docker repository on your system and you can see details of the available images using the docker command.

Singularity images are a little different. Singularity uses the Singularity Image Format (SIF) and images are provided as single SIF files (with a .sif filename extension). Singularity images can be pulled from Singularity Hub, a registry for container images. Singularity is also capable of running containers based on images pulled from Docker Hub and some other sources. We’ll look at accessing containers from Docker Hub later in the Singularity material.

Singularity Hub

Singularity Hub is a repository for storing Singularity images. You can browse the stored images by visiting the website. Images can be accessed and run via the singularity command.

Let’s begin by creating a test directory, changing into it and pulling a test Hello World image from Singularity Hub:

$ mkdir test
$ cd test
$ singularity pull hello-world.sif shub://vsoch/hello-world
INFO:    Downloading shub image
 59.8 MiB / 59.8 MiB [===============================================================================================================] 100.00% 52.03 MiB/s 1s

What just happened?! We pulled a SIF image from Singularity Hub using the singularity pull command and directed it to store the image file using the name hello-world.sif in the current directory. If you run the ls command, you should see that the hello-world.sif file is now present in the current directory. This is our image and we can now run a container based on this image:

$ singularity run hello-world.sif
RaawwWWWWWRRRR!! Avocado!

The above command ran a singularity container from the hello-world.sif image that we downloaded from Singularity Hub and the resulting output was shown.

How did the container determine what to do when we ran it?! What did running the container actually do to result in the displayed output?

When you run a container from a Singularity image without using any additional command line arguments, the container runs the default run script that is embedded within the image. This is a shell script that can be used to run commands, tools or applications stored within the image on container startup. We can inspect the image’s run script using the singularity inspect command:

$ singularity inspect -r hello-world.sif
#!/bin/sh 

exec /bin/bash /rawr.sh

This shows us the script within the hello-world.sif image configured to run by default when we use the singularity run command.

That concludes this introductory Singularity episode. The next episode looks in more detail at running containers.

Key Points

  • Singularity is another container platform and it is often used in cluster/HPC/research environments.

  • Singularity has a different security model to other container platforms, one of the key reasons that it is well suited to HPC and cluster environments.

  • Singularity has its own container image format (SIF).

  • The singularity command can be used to pull images from Singularity Hub and run a container from an image file.


The Singularity cache

Overview

Teaching: 15 min
Exercises: 0 min
Questions
  • Why does Singularity use a local cache?

  • Where does Singularity store images?

Objectives
  • Learn about Singularity’s image cache.

  • Learn how to manage Singularity images stored locally.

Singularity’s image cache

While Singularity doesn’t have a local image repository in the same way as Docker, it does cache downloaded image files. As we saw in the previous episode, images are simply .sif files stored on your local disk.

If you delete a local .sif image that you have pulled from a remote image repository and then pull it again, and the image is unchanged from the version you previously pulled, you will be given a copy of the image file from your local cache rather than the image being downloaded again from the remote source. This removes unnecessary network transfers and is particularly useful for large images which may take some time to transfer over the network. To demonstrate this, remove the hello-world.sif file stored in your test directory and then issue the pull command again:

$ rm hello-world.sif
$ singularity pull hello-world.sif shub://vsoch/hello-world
INFO:    Use cached image

As we can see in the above output, the image has been returned from the cache and we don’t see the output that we saw previously showing the image being downloaded from Singularity Hub.

How do we know what is stored in the local cache? We can find out using the singularity cache command:

$ singularity cache list
There are 1 container file(s) using 59.75 MiB and 0 oci blob file(s) using 0.00 KiB of space
Total space used: 59.75 MiB

This tells us how many container files are stored in the cache and how much disk space the cache is using but it doesn’t tell us what is actually being stored. To find out more information we can add the -v verbose flag to the list command:

$ singularity cache list -v
NAME                     DATE CREATED           SIZE             TYPE
3bac21df631874e3cbb3f0   2022-01-12 13:20:44    59.75 MB         shub

There are 1 container file(s) using 59.75 MiB and 0 oci blob file(s) using 0.00 KiB of space
Total space used: 59.75 MiB

This provides us with some more useful information about the actual images stored in the cache. In the TYPE column we can see that our image type is shub because it’s a SIF image that has been pulled from Singularity Hub.

Cleaning the Singularity image cache

We can remove images from the cache using the singularity cache clean command. Running the command without any options will display a warning and ask you to confirm that you want to remove everything from your cache.

You can also remove specific images or all images of a particular type. Look at the output of singularity cache clean --help for more information.
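
For example, assuming a recent Singularity 3.x release, you could remove only the images that were pulled from Singularity Hub - check the --help output first as the available options vary between versions:

$ singularity cache clean --help
$ singularity cache clean --type shub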

Cache location

By default, Singularity uses $HOME/.singularity/cache as the location for the cache. You can change the location of the cache by setting the SINGULARITY_CACHEDIR environment variable to the cache location you want to use.
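
For example, on a system where your home directory has a small quota you might point the cache at a larger work or scratch file system before pulling images - the path below is purely illustrative, so adjust it for your own system:

$ export SINGULARITY_CACHEDIR=/work/myproject/$USER/.singularity/cache
$ singularity cache list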

Key Points

  • Singularity caches downloaded images so that an unchanged image isn’t downloaded again when it is requested using the singularity pull command.

  • You can free up space in the cache by removing all locally cached images or by specifying individual images to remove.


Break

Overview

Teaching: min
Exercises: min
Questions
Objectives

Comfort break

Key Points


Using Singularity containers to run commands

Overview

Teaching: 15 min
Exercises: 10 min
Questions
  • How do I run different commands within a container?

  • How do I access an interactive shell within a container?

Objectives
  • Learn how to run different commands when starting a container.

  • Learn how to open an interactive shell within a container environment.

Running specific commands within a container

We saw earlier that we can use the singularity inspect command to see the run script that a container is configured to run by default. What if we want to run a different command within a container?

If we know the path of an executable that we want to run within a container, we can use the singularity exec command. For example, using the hello-world.sif container that we’ve already pulled from Singularity Hub, we can run the following within the test directory where the hello-world.sif file is located:

$ singularity exec hello-world.sif /bin/echo Hello World!
Hello World!

Here we see that a container has been started from the hello-world.sif image and the /bin/echo command has been run within the container, passing the input Hello World!. The command has echoed the provided input to the console and the container has terminated.

Note that the use of singularity exec has overridden any run script set within the image metadata and the command that we specified as an argument to singularity exec has been run instead.

Basic exercise: Running a different command within the “hello-world” container

Can you run a container based on the hello-world.sif image that prints the current date and time?

Solution

$ singularity exec hello-world.sif /bin/date
Tue Jan 18 11:58:53 GMT 2022


The difference between singularity run and singularity exec

Above we used the singularity exec command. In earlier episodes of this course we used singularity run. To clarify, the difference between these two commands is:

  • singularity run: starts a container and runs the default run script embedded in the image (as we saw with the hello-world.sif example earlier).

  • singularity exec: starts a container and runs the command that you specify on the command line, ignoring the image’s default run script.
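
For example, using the hello-world.sif image from earlier, both of the following were shown previously in this lesson; the first runs the embedded run script, the second runs the command we supply instead:

$ singularity run hello-world.sif
RaawwWWWWWRRRR!! Avocado!
$ singularity exec hello-world.sif /bin/echo Hello World!
Hello World!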

Opening an interactive shell within a container

If you want to open an interactive shell within a container, Singularity provides the singularity shell command. Again, using the hello-world.sif image, and within our test directory, we can run a shell within a container from the hello-world image:

$ singularity shell hello-world.sif
Singularity> whoami
[<your username>]
Singularity> ls
hello-world.sif
Singularity> 

As shown above, we have opened a shell in a new container started from the hello-world.sif image. Note that the shell prompt has changed to show we are now within the Singularity container.

Discussion: Running a shell inside a Singularity container

Q: What do you notice about the output of the above commands entered within the Singularity container shell?

Q: Does this differ from what you might see within a Docker container?

Use the exit command to exit from the container shell.

Key Points

  • The singularity exec command is an alternative to singularity run that allows you to start a container running a specific command.

  • The singularity shell command can be used to start a container and run an interactive shell within it.


Files in Singularity containers

Overview

Teaching: 10 min
Exercises: 10 min
Questions
  • How do I make data available in a Singularity container?

  • What data is made available by default in a Singularity container?

Objectives
  • Understand that some data from the host system is usually made available by default within a container

  • Learn more about how Singularity handles users and binds directories from the host filesystem.

The way in which user accounts and access permissions are handled in Singularity containers is very different from that in Docker (where you effectively always have superuser/root access). When running a Singularity container, you only have the same permissions to access files as the user you are running as on the host system.

In this episode we’ll look at working with files in the context of Singularity containers and how this links with Singularity’s approach to users and permissions within containers.

Users within a Singularity container

The first thing to note is that when you ran whoami within the container shell you started at the end of the previous episode, you should have seen the username that you were signed in as on the host system when you ran the container.

For example, if my username were jc1000, I’d expect to see the following:

$ singularity shell hello-world.sif
Singularity> whoami
jc1000

But hang on! I downloaded the standard, public version of the hello-world.sif image from Singularity Hub. I haven’t customised it in any way. How is it configured with my own user details?!

If you have any familiarity with Linux system administration, you may be aware that in Linux, users and their Unix groups are configured in the /etc/passwd and /etc/group files respectively. In order for the shell within the container to know of my user, the relevant user information needs to be available within these files within the container.

Assuming this feature is enabled within the installation of Singularity on your system, when the container is started, Singularity appends the relevant user and group lines from the host system to the /etc/passwd and /etc/group files within the container [1].

This means that the host system can effectively ensure that you cannot access/modify/delete any data you should not be able to on the host system and you cannot run anything that you would not have permission to run on the host system since you are restricted to the same user permissions within the container as you are on the host system.
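
If you want to check this for yourself, you could search for your username in the container’s /etc/passwd file, for example using the hello-world.sif image from earlier (assuming your username is held in the $USER environment variable on the host):

$ singularity exec hello-world.sif grep $USER /etc/passwd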

Files and directories within a Singularity container

Singularity also binds some directories from the host system where you are running the singularity command into the container that you’re starting. Note that this bind process is not copying files into the running container, it is making an existing directory on the host system visible and accessible within the Singularity container environment. If you write files to this directory within the running container, when the container shuts down, those changes will persist in the relevant location on the host system.

There is a default configuration of which files and directories are bound into the container but ultimate control of how things are set up on the system where you are running Singularity is determined by the system administrator. As a result, this section provides an overview but you may find that things are a little different on the system that you’re running on.

One directory that is likely to be accessible within a container that you start is your home directory. You may also find that the directory from which you issued the singularity command (the current working directory) is also mapped.

The mapping of file content and directories from a host system into a Singularity container is illustrated in the example below showing a subset of the directories on the host Linux system and in a Singularity container:

Host system:                                                      Singularity container:
-------------                                                     ----------------------
/                                                                 /
├── bin                                                           ├── bin
├── etc                                                           ├── etc
│   ├── ...                                                       │   ├── ...
│   ├── group  ─> user's group added to group file in container ─>│   ├── group
│   └── passwd ──> user info added to passwd file in container ──>│   └── passwd
├── home                                                          ├── usr
│   └── jc1000 ───> user home directory made available ──> ─┐     ├── sbin
├── usr                 in container via bind mount         │     ├── home
├── sbin                                                    └────────>└── jc1000
└── ...                                                           └── ...

Questions and exercises: Files in Singularity containers

Q1: What do you notice about the ownership of files in a container started from the hello-world image? (e.g. take a look at the ownership of files in the root directory (/))

Exercise 1: In this container, try editing (for example using the editor vi which should be available in the container) the /rawr.sh file. What do you notice?

If you’re not familiar with vi there are many quick reference pages online showing the main commands for using the editor.

Exercise 2: In your home directory within the container shell, try and create a simple text file. Is it possible to do this? If so, why? If not, why not?! If you can successfully create a file, what happens to it when you exit the shell and the container shuts down?

Answers

A1: Use the ls -l command to see a detailed file listing including file ownership and permission details. You should see that most of the files in the / directory are owned by root, as you’d probably expect on any Linux system. If you look at the files in your home directory, they should be owned by you.

A Ex1: We’ve already seen from the previous answer that the files in / are owned by root so we wouldn’t expect to be able to edit them if we’re not the root user. However, if you tried to edit /rawr.sh you probably saw that the file was read only and, if you tried for example to delete the file you would have seen an error similar to the following: cannot remove '/rawr.sh': Read-only file system.

A Ex2: Within your home directory, you should be able to successfully create a file. Since you’re seeing your home directory on the host system which has been bound into the container, when you exit and the container shuts down, the file that you created within the container should still be present when you look at your home directory on the host system.

Binding additional host system directories to the container

You will sometimes need to bind additional host system directories into a container you are using over and above those bound by default. For example, you may need to access a shared dataset, a project directory or a scratch/work file system that is not located under your home directory or the current working directory.

The -B option to the singularity command is used to specify additional binds. For example, to bind the /work/z19/shared directory into a container you could use (note this directory is unlikely to exist on the host system you are using so you’ll need to test this using a different directory):

$ singularity shell -B /work/z19/shared hello-world.sif
Singularity> ls /work/z19/shared
CP2K-regtest	    cube	     eleanor		   image256x192.pgm		kevin		    pblas			    q-e-qe-6.7 
ebe		    evince.simg	     image512x384.pgm	   low_priority.slurm           pblas.tar.gz	                                    q-qe
Q1529568	    edge192x128.pgm  extrae		   image768x1152.pgm		mkdir		    petsc			    regtest-ls-rtp_forCray
adrianj		    edge256x192.pgm  gnuplot-5.4.1.tar.gz  image768x768.pgm		moose.job	    petsc-hypre			    udunits-2.2.28.tar.gz
antlr-2.7.7.tar.gz  edge512x384.pgm  hj			   job-defmpi-cpe-21.03-robust	mrb4cab		    petsc-hypre-cpe21.03	    xios-2.5
cdo-archer2.sif     edge768x768.pgm  image192x128.pgm	   jsindt			paraver		    petsc-hypre-cpe21.03-gcc10.2.0

Note that, by default, a bind is mounted at the same path in the container as on the host system. You can also specify where a host directory is mounted in the container by separating the host path from the container path by a colon (:) in the option:

$ singularity shell -B /work/z19/shared:/shared-data hello-world.sif
Singularity> ls /shared-data
CP2K-regtest	    cube	     eleanor		   image256x192.pgm		kevin		    pblas			    q-e-qe-6.7 
ebe		    evince.simg	     image512x384.pgm	   low_priority.slurm           pblas.tar.gz	                                    q-qe
Q1529568	    edge192x128.pgm  extrae		   image768x1152.pgm		mkdir		    petsc			    regtest-ls-rtp_forCray
adrianj		    edge256x192.pgm  gnuplot-5.4.1.tar.gz  image768x768.pgm		moose.job	    petsc-hypre			    udunits-2.2.28.tar.gz
antlr-2.7.7.tar.gz  edge512x384.pgm  hj			   job-defmpi-cpe-21.03-robust	mrb4cab		    petsc-hypre-cpe21.03	    xios-2.5
cdo-archer2.sif     edge768x768.pgm  image192x128.pgm	   jsindt			paraver		    petsc-hypre-cpe21.03-gcc10.2.0

You can also specify multiple binds to -B by separating them by commas (,).
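
For example, to make both the shared directory used above and a scratch directory available at custom locations within the container, you might use something like the following (the /scratch path is purely illustrative):

$ singularity shell -B /work/z19/shared:/shared-data,/scratch/$USER:/scratch hello-world.sif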

An alternative option to binding a directory from the host system into a container is to copy required data into a container image at build time.

References

[1] Gregory M. Kurzer, Containers for Science, Reproducibility and Mobility: Singularity P2. Intel HPC Developer Conference, 2017. Available at: https://www.intel.com/content/dam/www/public/us/en/documents/presentation/hpc-containers-singularity-advanced.pdf

Key Points

  • Your current directory and home directory are usually available by default in a container.

  • You have the same username and permissions in a container as on the host system.

  • You can specify additional host system directories to be available in the container.


Using Docker images with Singularity

Overview

Teaching: 5 min
Exercises: 10 min
Questions
  • How do I use Docker images with Singularity?

Objectives
  • Learn how to run Singularity containers based on Docker images.

Using Docker images with Singularity

Singularity can also start containers directly from Docker images, opening up access to a huge number of existing container images available on Docker Hub and other registries.

While Singularity doesn’t actually run a container using the Docker image (it first converts it to a format suitable for use by Singularity), the approach used provides a seamless experience for the end user. When you direct Singularity to run a container based on a Docker image, Singularity pulls the slices or layers that make up the Docker image and converts them into a single-file Singularity SIF image.

For example, moving on from the simple Hello World examples that we’ve looked at so far, let’s pull one of the official Docker Python images. We’ll use the image with the tag 3.9.9-slim-buster which has Python 3.9.9 installed on Debian’s Buster (v10) Linux distribution:

$ singularity pull python-3.9.9.sif docker://python:3.9.9-slim-buster
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 72a69066d2fe done  
Copying blob c8da7e1588a5 done  
Copying blob 42005bf1c050 done  
Copying blob cb37373634ff done  
Copying blob dab7c446025c done  
Copying config 786aede17e done  
Writing manifest to image destination
Storing signatures
2022/01/18 12:40:31  info unpack layer: sha256:72a69066d2febc34d8f3dbcb645f7b851a57e9681322ece7ad8007503b783c19
2022/01/18 12:40:32  info unpack layer: sha256:c8da7e1588a5d7907234c843859e39c9897d78b1f9543d48a3462bb0567b80d1
2022/01/18 12:40:32  info unpack layer: sha256:42005bf1c0507e5bb17947782fc9e58762e01c13fe2a2b5317454633d8430b77
2022/01/18 12:40:32  info unpack layer: sha256:cb37373634ff895d3cdfad5b9a6ad810549a230f57072d4d7b7d0fd9580878d2
2022/01/18 12:40:32  info unpack layer: sha256:dab7c446025cb2fab258102db3d936cb74ab6f974711fd7aab2914bfbbcea36c
INFO:    Creating SIF file...

Note how we see singularity saying that it’s “Converting OCI blobs to SIF format”. We then see the layers of the Docker image being downloaded and unpacked and written into a single SIF file. Once the process is complete, we should see the python-3.9.9.sif image file in the current directory.

We can now run a container from this image as we would with any other singularity image.

Running the Python 3.9.9 image that we just pulled from Docker Hub

Try running the Python 3.9.9 image. What happens?

Try running some simple Python statements…

Running the Python 3.9.9 image

$ singularity run python-3.9.9.sif

This should put you straight into a Python interactive shell within the running container:

Python 3.9.9 (main, Dec 21 2021, 10:35:05) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 

Now try running some simple Python statements:

>>> import math
>>> math.pi
3.141592653589793
>>> 

In addition to running a container and having it run the default run script, you could also start a container running a shell in case you want to undertake any configuration prior to running Python. This is covered in the following exercise:

Open a shell within a Python container

Try to run a shell within a singularity container based on the python-3.9.9.sif image. That is, run a container that opens a shell rather than the default Python interactive console as we saw above. See if you can find more than one way to achieve this.

Within the shell, try starting the Python interactive console and running some Python commands.

Solution

Recall from the earlier material that we can use the singularity shell command to open a shell within a container. To open a regular shell within a container based on the python-3.9.9.sif image, we can therefore simply run:

$ singularity shell python-3.9.9.sif
Singularity> echo $SHELL
/bin/bash
Singularity> cat /etc/issue
Debian GNU/Linux 10 \n \l

Singularity> python
Python 3.9.9 (main, Dec 21 2021, 10:35:05) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print('Hello World!')
Hello World!
>>> exit()

Singularity> exit
$ 

It is also possible to use the singularity exec command to run an executable within a container. We could, therefore, use the exec command to run /bin/bash:

$ singularity exec python-3.9.9.sif /bin/bash
Singularity> echo $SHELL
/bin/bash

You can run the Python console from your container shell simply by running the python command.

This concludes the fifth episode and Part I of the Singularity material. Part II contains a further three episodes where we’ll look at creating your own images and then more advanced use of containers for running MPI parallel applications.

Key Points

  • Singularity can start a container from a Docker image which can be pulled directly from Docker Hub.


Lunch

Overview

Teaching: min
Exercises: min
Questions
Objectives

Lunch break

Key Points


Building Singularity images

Overview

Teaching: 20 min
Exercises: 25 min
Questions
  • What environment do I need to build a container image to use with Singularity?

Objectives
  • Recap building Docker container images and pushing to Dockerhub.

  • Demonstrate pulling the container image from Dockerhub onto the remote system using Singularity.

Singularity - Part II

Brief recap

In the five episodes covering Part I of this Singularity material we’ve seen how Singularity can be used on a computing platform where you don’t have any administrative privileges. The software was pre-installed and it was possible to work with existing images such as Singularity image files already stored on the platform or images obtained from a remote image repository such as Singularity Hub or Docker Hub.

It is clear that between Singularity Hub and Docker Hub there is a huge array of images available, pre-configured with a wide range of software applications, tools and services. But what if you want to create your own images or customise existing images?

In this first of three episodes in Part II of the Singularity material, we’ll recap on how to build Docker images and push them to Dockerhub so they can be downloaded for use with Singularity on the remote system.

Building Docker container images - recap

The key things needed to build a Docker container image and push it to Dockerhub are:

  • a system where you have administrative privileges and Docker installed;
  • a Dockerfile describing the container image you want to build;
  • a Dockerhub account to push the built image to.

We will build a simple container image containing a Python 3 installation, based on Ubuntu.

Create a working directory for this image on your local host system where you have administrator privileges and move into it:

mkdir ubuntu-python
cd ubuntu-python

Next, create a simple Dockerfile that starts from Ubuntu, installs Python 3 and, by default, prints the Python version:

FROM ubuntu:20.04
RUN apt-get -y update
RUN apt-get install -y python3
CMD ["python3", "--version"]

Use Docker to build the image and make sure it is compatible with the HPC platform by specifying the x86_64 architecture:

docker image build -t alice/ubuntu-python --platform linux/amd64 .
[+] Building 76.3s (8/8) FINISHED
 => [internal] load build definition from Dockerfile                                               0.0s
 => => transferring dockerfile: 137B                                                               0.0s
 => [internal] load .dockerignore                                                                  0.0s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load metadata for docker.io/library/ubuntu:20.04                                    1.9s
 => [auth] library/ubuntu:pull token for registry-1.docker.io                                      0.0s
 => [1/3] FROM docker.io/library/ubuntu:20.04@sha256:450e066588f42ebe1551f3b1a535034b6aa46cd936fe7f2c6b0d72997ec61dbd  22.6s
 => => resolve docker.io/library/ubuntu:20.04@sha256:450e066588f42ebe1551f3b1a535034b6aa46cd936fe7f2c6b0d72997ec61dbd                        0.0s
 => => sha256:450e066588f42ebe1551f3b1a535034b6aa46cd936fe7f2c6b0d72997ec61dbd 1.42kB / 1.42kB     0.0s
 => => sha256:b25ef49a40b7797937d0d23eca3b0a41701af6757afca23d504d50826f0b37ce 529B / 529B         0.0s
 => => sha256:680e5dfb52c74a1fbc99c2922c8e25b5736e6cd1a3d9430890d52a4f8f44087a 1.46kB / 1.46kB     0.0s
 => => sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83 28.58MB / 28.58MB  21.7s
 => => extracting sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83          0.7s
 => [2/3] RUN apt-get -y update                                                                   27.7s
 => [3/3] RUN apt-get install -y python3                                                          23.9s
 => exporting to image                                                                             0.1s 
 => => exporting layers                                                                            0.1s 
 => => writing image sha256:c58fd754f0e5540b4a9c7768816eea8dcf742bd7bb06a38a37f06390ddc035cb       0.0s 
 => => naming to docker.io/aturnerepcc/ubuntu-python   

Note: if we are on an x86_64 platform (Intel or AMD processor) then the --platform linux/amd64 flag is not strictly needed in this case. You should check what processor architecture your HPC system has to choose the right flag for building container images.
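
One straightforward way to check the processor architecture of the remote system is to run the uname -m command there, for example:

remote> uname -m
x86_64

An output of x86_64 corresponds to the linux/amd64 platform used above; a different result (for example aarch64) would need a different --platform value when building the image.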

Push to Dockerhub:

docker push alice/ubuntu-python
Using default tag: latest
The push refers to repository [docker.io/aturnerepcc/ubuntu-python]
cfd62b5df445: Pushed 
4a912cbded83: Pushed 
f4462d5b2da2: Mounted from library/ubuntu 
latest: digest: sha256:17823c31c6b86f117bf24df4e19a39077ba36a9c6e45010b0a4853de789a245a size: 953

We should now have a Docker container image hosted on Dockerhub with the correct architecture for our HPC system. We will now log into the HPC system and pull and run the image using Singularity:

remote> singularity pull docker://alice/ubuntu-python
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob eaead16dc43b done  
Copying blob ba18c8174437 done  
Copying blob c3ad949686c8 done  
Copying config e395a30cf4 done  
Writing manifest to image destination
Storing signatures
2022/12/07 19:20:02  info unpack layer: sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83
2022/12/07 19:20:02  warn xattr{etc/gshadow} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/12/07 19:20:02  warn xattr{/tmp/build-temp-818601781/rootfs/etc/gshadow} destination filesystem does not support xattrs, further warnings will be suppressed
2022/12/07 19:20:03  info unpack layer: sha256:ba18c8174437bd8247c0b264b8fca14e42ff8eddee632c444ed5da6440432e07
2022/12/07 19:20:03  warn xattr{var/lib/apt/lists/auxfiles} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/12/07 19:20:03  warn xattr{/tmp/build-temp-818601781/rootfs/var/lib/apt/lists/auxfiles} destination filesystem does not support xattrs, further warnings will be suppressed
2022/12/07 19:20:03  info unpack layer: sha256:c3ad949686c81069aaf91b0ec53b6110109e8172e4d067cb2f50b1f155ae5b7c
2022/12/07 19:20:04  warn xattr{usr/local/lib/python3.8} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/12/07 19:20:04  warn xattr{/tmp/build-temp-818601781/rootfs/usr/local/lib/python3.8} destination filesystem does not support xattrs, further warnings will be suppressed
INFO:    Creating SIF file...
remote> singularity run ubuntu-python_latest.sif
Python 3.8.10

Cluster platform configuration for running Singularity containers

For testing purposes, we can run our image that we’ve created on ARCHER2 by copying the image to the platform and running it interactively at the terminal.

However, for a real job, this is not an option and we would be required to submit our job to the job scheduler on ARCHER2 to have it run on the system’s compute nodes.

Doing this requires a number of configuration parameters to be specified. The parameters that need to be set and the values that they need to be set to are specific to the cluster you’re running on. In the next section, where we look at running parallel jobs, we’ll provide platform-specific parameters for ARCHER2. However, if you’re running on your own institutional cluster or another HPC platform that you have access to, it is likely that you’ll need to liaise with the system administrators or look at documentation to identify the parameters that need to be passed to Singularity.

Additional Singularity features

Singularity has a wide range of features. You can find full details in the Singularity User Guide and we highlight a key feature here that may be of use/interest:

Signing containers: If you do want to share container image (.sif) files directly with colleagues or collaborators, how can the people you send an image to be sure that they have received the file without it being tampered with or suffering from corruption during transfer/storage? And how can you be sure that the same goes for any container image file you receive from others? Singularity supports signing containers. This allows a digital signature to be linked to an image file. This signature can be used to verify that an image file has been signed by the holder of a specific key and that the file is unchanged from when it was signed. You can find full details of how to use this functionality in the Singularity documentation on Signing and Verifying Containers.
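
As a brief illustration, and assuming a Singularity 3.x installation, the commands involved look something like the following sketch, where my-container.sif is a placeholder image name (see the documentation linked above for the full key management workflow):

$ singularity key newpair
$ singularity sign my-container.sif
$ singularity verify my-container.sif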

Installing Singularity on your local system (optional) [Advanced task]

If you are running Linux and would like to install Singularity locally on your system, the source code is provided via the Apptainer project’s Singularity repository. See the releases here. You will need to install various dependencies on your system and then build Singularity from source code.

If you are not familiar with building applications from source code in a Linux environment, it is strongly recommended that you use Singularity installed on a remote HPC resource rather than attempting to build and install Singularity yourself. The installation process is an advanced task that is beyond the scope of this session.

However, if you have Linux systems knowledge and would like to attempt a local install of Singularity, you can find details in the INSTALL.md file within the Singularity repository that explains how to install the prerequisites and build and install the software. Singularity is written in the Go programming language and Go is the main dependency that you’ll need to install on your system. The process of installing Go and any other requirements is detailed in the INSTALL.md file.

Note

If you do not have access to a system with Docker installed, or a Linux system where you can build and install Singularity but you have administrative privileges on another system, you could look at installing a virtualisation tool such as VirtualBox on which you could run a Linux Virtual Machine (VM) image. Within the Linux VM image, you will be able to install Singularity. Again this is beyond the scope of the course.

If you are not able to access/run Singularity yourself on a system where you have administrative privileges, you can still follow through this material as it is being taught (or read through it in your own time if you’re not participating in a taught version of the course) since it will be helpful to have an understanding of how Singularity images can be built.

You could also attempt to follow this section of the lesson without using root and instead using the singularity command’s --fakeroot option. However, you may encounter issues with permissions when trying to build images and run your containers and this is why running the commands as root is strongly recommended and is the approach described in this lesson.

Key Points

  • You can build container images using Docker and run them using Singularity.

  • You must build the Docker container image with the correct processor architecture to match your remote HPC system.

  • Dockerhub makes it easier to transfer container images from your local system to a remote HPC system.


Running MPI parallel jobs using Singularity containers

Overview

Teaching: 30 min
Exercises: 40 min
Questions
  • How do I set up and run an MPI job from a Singularity container?

Objectives
  • Learn how MPI applications within Singularity containers can be run on HPC platforms

  • Understand the challenges and related performance implications when running MPI jobs via Singularity

MPI overview

MPI - Message Passing Interface - is a widely used standard for parallel programming. It is used for exchanging messages/data between processes in a parallel application. If you’ve been involved in developing or working with computational science software, you may already be familiar with MPI and running MPI applications.

When working with an MPI code on a large-scale cluster, a common approach is to compile the code yourself, within your own user directory on the cluster platform, building against the supported MPI implementation on the cluster. Alternatively, if the code is widely used on the cluster, the platform administrators may build and package the application as a module so that it is easily accessible by all users of the cluster.

MPI codes with Singularity containers

We’ve already seen that building Singularity containers can be impractical without root access. Since we’re highly unlikely to have root access on a large institutional, regional or national cluster, building a container directly on the target platform is not normally an option.

If our target platform uses OpenMPI, one of the two widely used open source MPI implementations, we can build/install a compatible OpenMPI version on our local build platform, or directly within the image as part of the image build process. We can then build our code that requires MPI, either interactively in an image sandbox or via a definition file.

If the target platform uses a version of MPI based on MPICH, the other widely used open source MPI implementation, there is ABI compatibility between MPICH and several other MPI implementations. In this case, you can build MPICH and your code on a local platform, within an image sandbox or as part of the image build process via a definition file, and you should be able to successfully run containers based on this image on your target cluster platform.

As described in Singularity’s MPI documentation, support for both OpenMPI and MPICH is provided. Instructions are given for building the relevant MPI version from source via a definition file. We will use a modified version of this to build a container image using Docker below.

Container portability and performance on HPC platforms

While building a container on a local system that is intended for use on a remote HPC platform does provide some level of portability, if you’re after the best possible performance, it can present some issues. The version of MPI in the container will need to be built and configured to support the hardware on your target platform if the best possible performance is to be achieved. Where a platform has specialist hardware with proprietary drivers, building on a different platform with different hardware present means that building with the right driver support for optimal performance is not likely to be possible. This is especially true if the version of MPI available is different (but compatible).

Singularity’s MPI documentation highlights two different models for working with MPI codes. The hybrid model that we’ll be looking at here involves using the MPI executable from the MPI installation on the host system to launch singularity and run the application within the container. The application in the container is linked against and uses the MPI installation within the container which, in turn, communicates with the MPI daemon process running on the host system. In the following section we’ll look at building a Singularity image containing a small MPI application that can then be run using the hybrid model.

Building and running a Singularity image for an MPI code

Building and testing an image

This example makes the assumption that you’ll be building a container image on a local platform and then deploying it to a cluster with a different but compatible MPI implementation. See Singularity and MPI applications in the Singularity documentation for further information on how this works.

We’ll build an image from a definition file. Containers based on this image will be able to run MPI benchmarks using the OSU Micro-Benchmarks software.

In this example, the target platform is the ARCHER2 supercomputer (an HPE Cray EX system with 750,080 CPU cores on 5,860 compute nodes). ARCHER2 uses HPE Cray MPICH as the MPI library - a derivative of the open source MPICH MPI distribution modified for use with the HPE Cray Slingshot interconnect on ARCHER2.

Begin by creating an osu-benchmarks directory on your local system:

mkdir osu-benchmarks
cd osu-benchmarks

In the same directory, create a Dockerfile that looks like:

FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get -y update && apt-get -y install curl build-essential libfabric-dev libibverbs-dev gfortran

RUN curl -sSLO http://www.mpich.org/static/downloads/3.4.2/mpich-3.4.2.tar.gz \
   && tar -xzf mpich-3.4.2.tar.gz -C /root \
   && cd /root/mpich-3.4.2 \
   && ./configure --prefix=/usr --with-device=ch4:ofi --disable-fortran \
   && make -j8 install \
   && rm -rf /root/mpich-3.4.2 \
   && rm /mpich-3.4.2.tar.gz

RUN curl -sSLO http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.4.1.tar.gz \
   && tar -xzf osu-micro-benchmarks-5.4.1.tar.gz -C /root \
   && cd /root/osu-micro-benchmarks-5.4.1 \
   && ./configure --prefix=/usr/local CC=/usr/bin/mpicc CXX=/usr/bin/mpicxx \
   && cd mpi \
   && make -j8 install \
   && rm -rf /root/osu-micro-benchmarks-5.4.1 \
   && rm /osu-micro-benchmarks-5.4.1.tar.gz

ENV PATH /usr/local/libexec/osu-micro-benchmarks/mpi/pt2pt:$PATH
ENV PATH /usr/local/libexec/osu-micro-benchmarks/mpi/collective:$PATH

CMD osu_latency

A quick overview of what the above Dockerfile is doing:

  • It starts from the ubuntu:20.04 base image and installs the compilers and supporting libraries (curl, build-essential, libfabric, libibverbs, gfortran) needed to build MPICH and the benchmarks.
  • It downloads, builds and installs MPICH 3.4.2 from source under /usr.
  • It downloads and builds the OSU Micro-Benchmarks 5.4.1 against this MPICH installation, installing them under /usr/local.
  • It adds the point-to-point (pt2pt) and collective benchmark directories to the PATH and sets osu_latency as the default command.

Because the benchmark directories are on the PATH within the image, the command line parameter that you provide when running a container instance based on the image is interpreted as the name of the benchmark executable to run. Example parameters include osu_latency, osu_bw and osu_allgather; if no parameter is given, the default osu_latency benchmark is run.

Build and test the OSU Micro-Benchmarks image

Using the above Dockerfile, build a Docker container image for the linux/amd64 platform named <your-dockerhub-id>/osu-benchmarks.

Note: this will take a long time to build (sometimes over 1 hour)! (We have a prebuilt version for you to use on ARCHER2 so you can carry on with the course without waiting for the build process to complete.)

Once the image has finished building, push it to Dockerhub.

Solution

You should be able to build a Docker container image as follows (for a Dockerhub username alice):

docker image build --platform linux/amd64 -t alice/osu-benchmarks .
[+] Building 3908.4s (9/9) FINISHED                                                 
=> [internal] load build definition from Dockerfile                                               0.0s
=> => transferring dockerfile: 1.06kB                                      0.0s
=> [internal] load .dockerignore                    0.0s
=> => transferring context: 2B                       0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04                1.1s
=> [auth] library/ubuntu:pull token for registry-1.docker.io                      0.0s
=> CACHED [1/4] FROM docker.io/library/ubuntu:20.04@sha256:450e066588f42ebe1551f3b1a535034b6aa46cd936fe7f2c6b0d72997ec61dbd    0.0s
=> [2/4] RUN apt-get -y update && apt-get -y install curl build-essential libfabric-dev libibverbs-dev gfortran               152.4s
=> [3/4] RUN curl -sSLO http://www.mpich.org/static/downloads/3.4.2/mpich-3.4.2.tar.gz    && tar -xzf mpich-3.4.2.tar.gz -C /
root    && cd /root/mpich-3.4.2     3639.3s
=> [4/4] RUN curl -sSLO http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.4.1.tar.gz  && tar -xzf osu-micro-benchmarks-5.4.1.tar.gz -C  114.6s 
=> exporting to image         0.9s 
=> => exporting layers           0.9s 
=> => writing image sha256:a649a8826b6e47b1ae1835bea3a8148bf67bda75ed0fb62bc3412dd361d43871           0.0s 
=> => naming to docker.io/aturnerepcc/osu-benchmarks    

Once it has built, you can push to Dockerhub with (again for Dockerhub username alice):

docker push alice/osu-benchmarks

Running Singularity containers via MPI

We can now try undertaking a parallel run of one of the OSU benchmarking tools within our container image.

This is where things get interesting and we’ll begin by looking at how Singularity containers are run within an MPI environment.

If you’re familiar with running MPI codes, you’ll know that you use mpirun, mpiexec, srun or a similar MPI executable to start your application. This executable may be run directly on the local system or cluster platform that you’re using, or you may need to run it through a job script submitted to a job scheduler. Your MPI-based application code, which will be linked against the MPI libraries, will make MPI API calls into these MPI libraries which in turn talk to the MPI daemon process running on the host system. This daemon process handles the communication between MPI processes, including talking to the daemons on other nodes to exchange information between processes running on different machines, as necessary.

When running code within a Singularity container, we don’t use the MPI executables stored within the container (i.e. we DO NOT run singularity exec mpirun -np <numprocs> /path/to/my/executable). Instead we use the MPI installation on the host system to run Singularity and start an instance of our executable from within a container for each MPI process. Without Singularity support in an MPI implementation, this results in starting a separate Singularity container instance within each process. This can present some overhead if a large number of processes are being run on a host. Where Singularity support is built into an MPI implementation this can address this potential issue and reduce the overhead of running code from within a container as part of an MPI job.

Ultimately, this means that our running MPI code is linking to the MPI libraries from the MPI install within our container and these are, in turn, communicating with the MPI daemon on the host system which is part of the host system’s MPI installation. In the case of MPICH, these two installations of MPI may be different but as long as there is ABI compatibility between the version of MPI installed in your container image and the version on the host system, your job should run successfully.
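
In practice, the hybrid model launch therefore follows a general pattern like the sketch below, where the MPI launcher comes from the host system and singularity is the program it starts for each MPI process. The launcher name, process count, image and executable names here are purely illustrative - the ARCHER2-specific job script is shown later in this episode:

$ mpirun -np 4 singularity run my-mpi-image.sif my_mpi_program

or, on a Slurm-based system:

$ srun -n 4 singularity run my-mpi-image.sif my_mpi_program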

We can now try running a 2-process MPI run of a point to point benchmark osu_latency.

Undertake a parallel run of the osu_latency benchmark

ARCHER2, the UK National Supercomputing Service, uses the Slurm workload manager to manage the submission and running of jobs. We provide you with a template Slurm job submission script in this section for running a parallel job via your Singularity container on ARCHER2.

This version of the exercise, for undertaking a parallel run of the osu_latency benchmark with your Singularity container that contains an MPI build, is specific to this run of the course.

Log into ARCHER2 and move to your work file space:

remote> cd /work/ta089/ta089/$USER

Pull the osu-benchmarks container image from Dockerhub. If your build process finished in time, you can substitute your own Dockerhub ID for aturnerepcc.

remote> singularity pull osu-benchmarks.sif docker://aturnerepcc/osu-benchmarks
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob eaead16dc43b skipped: already exists  
Copying blob 4080d37551cf done  
Copying blob 1e049ea0e2a4 done  
Copying blob 3cfa71560203 done  
Copying config 2ee4e1caa7 done  
Writing manifest to image destination
Storing signatures
2022/12/07 20:47:52  info unpack layer: sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83
2022/12/07 20:47:52  warn xattr{etc/gshadow} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/12/07 20:47:52  warn xattr{/tmp/build-temp-489555399/rootfs/etc/gshadow} destination filesystem does not support xattrs, further warnings will be suppressed
2022/12/07 20:47:53  info unpack layer: sha256:4080d37551cf3ae041eee7c1915aac5c8a034fe4f31a5bf7cbbbbc6c34d25415
2022/12/07 20:47:53  warn xattr{etc/gshadow} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/12/07 20:47:53  warn xattr{/tmp/build-temp-489555399/rootfs/etc/gshadow} destination filesystem does not support xattrs, further warnings will be suppressed
2022/12/07 20:47:56  info unpack layer: sha256:1e049ea0e2a4b7cd0213ace5976336f19748288926768f5d2002118f8641ddea
2022/12/07 20:47:57  info unpack layer: sha256:3cfa71560203aa648cee53b4767aeab39b9fbf2bff3bb356c098368c6f0dc91e
INFO:    Creating SIF file...

It is now necessary to create a Slurm job submission script to run the benchmark example.

Download this template script on ARCHER2 and edit it to suit your configuration. (Copy the link and use a tool such as wget to download the template.)

Submit the job submission script to the Slurm scheduler using the sbatch command.

remote> sbatch osu_latency.slurm

Expected output and discussion

As you will have seen in the provided template job submission script, we call srun on the host system and pass it the singularity executable, whose arguments are the image file and any parameters we want to pass to the image’s run script. In this case, the parameter is the name of the benchmark executable to run - osu_latency.

The following shows an example of the output you should expect to see. You should have latency values shown for message sizes up to 4MB.

# OSU MPI Latency Test v5.8
# Size          Latency (us)
0                       2.23
1                       2.22
2                       2.22
...
4194304               354.06

This has demonstrated that we can successfully run a parallel MPI executable from within a Singularity container. However, depending on the configuration of the target cluster platform, it’s possible that the two processes will have run on the same physical node - if so, this is not testing the performance of the interconnects between nodes.

You could now try running a larger-scale test. You can also try running a benchmark that uses multiple processes, for example try osu_allreduce.
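
As a minimal sketch, assuming the job script shown in the next section and that you stay within your budget and queue limits, you might change the following lines to run osu_allreduce over two full ARCHER2 nodes (128 cores each); the values here are illustrative:

#SBATCH --nodes=2
#SBATCH --tasks-per-node=128

srun --hint=nomultithread --distribution=block:block singularity run osu-benchmarks.sif osu_allreduce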

Using the host MPI with the container

Let’s take a closer look at the job submission script for the OSU benchmarks:

#!/bin/bash

# Slurm job options (name, compute nodes, job time)
#SBATCH --job-name=osu_latency
#SBATCH --time=0:10:0
#SBATCH --nodes=2
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=1

# Specify the partition, QoS and account code to use for the job
#SBATCH --partition=standard
#SBATCH --qos=short
#SBATCH --account=ta089

# Setup the job environment to enable MPI ABI compatibility
module load cray-mpich-abi

# Set the number of threads to 1
#   This prevents any threaded system libraries from automatically 
#   using threading.
export OMP_NUM_THREADS=1

# Set the LD_LIBRARY_PATH environment variable within the Singularity container
# to ensure that it used the correct MPI libraries
export SINGULARITYENV_LD_LIBRARY_PATH=/opt/cray/pe/mpich/8.1.4/ofi/gnu/9.1/lib-abi-mpich:/opt/cray/pe/pmi/6.0.10/lib:/opt/cray/libfabric/1.11.0.4.71/lib64:/usr/lib64/host:/usr/lib/x86_64-linux-gnu/libibverbs:/.singularity.d/libs:/opt/cray/pe/gcc-libs

# Set the BIND options for the Singularity executable.
# This makes sure Cray Slingshot interconnect libraries are available
# from inside the container.
export SINGULARITY_BIND="/opt/cray,/usr/lib64/libibverbs.so.1,/usr/lib64/librdmacm.so.1,/usr/lib64/libnl-3.so.200,/usr/lib64/libnl-route-3.so.200,/usr/lib64/libpals.so.0,/var/spool/slurmd/mpi_cray_shasta,/usr/lib64/libibverbs/libmlx5-rdmav25.so,/etc/libibverbs.d,/opt/gcc"

# Launch the parallel job
srun --hint=nomultithread --distribution=block:block singularity run osu-benchmarks.sif osu_latency

There are three key lines in here that are required to make the MPI in the Singularity container work with the host MPI on ARCHER2 itself:

module load cray-mpich-abi - This loads a special version of the Cray MPICH library on ARCHER2 that contains the ABI compatibility layer to allow it to interact with other compiled versions of MPICH (such as the one in our container image).

export SINGULARITYENV_LD_LIBRARY_PATH - This line sets the LD_LIBRARY_PATH environment variable in the Singularity container and tells the OSU benchmark programs where to find all the libraries required to make them work with MPI on the ARCHER2 host system.

export SINGULARITY_BIND - This line bind mounts the specified paths from the ARCHER2 host system into our Singularity container image at runtime. This makes the host libraries that the container’s MPI needs in order to work with the host MPI installation and the Slingshot interconnect available inside the container. (This could also have been specified using the -B option to the Singularity command.)
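
More generally, in recent Singularity 3.x versions any host environment variable prefixed with SINGULARITYENV_ is set inside the container with the prefix stripped, which is what the LD_LIBRARY_PATH line above relies on. A quick way to see this for yourself, assuming the hello-world.sif image from Part I is still available, would be something like:

$ export SINGULARITYENV_MYVAR=hello
$ singularity exec hello-world.sif sh -c 'echo $MYVAR'
hello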

[Advanced] Investigate performance when using a container image built on a local system and run on a cluster

This is an advanced exercise, we will not cover it during the taught version of the course but provide it as something you could try if you’re interested to investigate potential performance differences between different approaches to building and running MPI code.

To get an idea of any difference in performance between the code within your Singularity image and the same code built natively on the target HPC platform, try building the OSU benchmarks from source, locally on the cluster. Then try running the same benchmark(s) that you ran via the singularity container. Have a look at the outputs you get when running osu_allreduce or one of the other collective benchmarks to get an idea of whether there is a performance difference and how significant it is.

Try running with enough processes that the processes are spread across different physical nodes so that you’re making use of the cluster’s network interconnects.

What do you see?

Discussion

You may find that performance is significantly better with the version of the code built directly on the HPC platform. Alternatively, performance may be similar between the two versions.

How big is the performance difference between the two builds of the code?

What might account for any difference in performance between the two builds of the code?

If performance is an issue for you with codes that you’d like to run via Singularity, you are advised to take a look at using the bind model for building/running MPI applications through Singularity.

Key Points

  • Singularity images containing MPI applications can be built on one platform and then run on another (e.g. an HPC cluster) if the two platforms have compatible MPI implementations.

  • When running an MPI application within a Singularity container, use the MPI executable on the host system to launch a Singularity container for each process.

  • Think about the performance requirements of your parallel application and consider how the platform where you build and run your container image may affect that performance.


Finish

Overview

Teaching: min
Exercises: min
Questions
Objectives

Course ends here, see items in schedule marked “Additional material” for further content.

Key Points