AWS recently announced a preview of its new generation of Amazon EC2 M6g instances, powered by 64-bit ARM-based AWS Graviton2 processors. The projected performance and pricing advantages over the latest generation of AWS x86-64 instances are too impressive to ignore.
While we could simply use standard Docker on ARM to build images for these new AWS Graviton processors, there are many benefits to supporting both architectures rather than abandoning the x86-64 ship:
- Developers need to be able to run their CI/CD-generated Docker images locally. For the foreseeable future, developer machines will continue to use x86-64 CPUs.
- Share common containers across x86-64 and Graviton2 clusters.
- Run staging environments on ARM and production on x86-64 until Graviton2 is out of preview.
- Once Graviton2s are generally available, quickly switch back to x86-64 if a service migration to ARM causes any issues.
Building multi-architecture Docker images is still an experimental feature. However, hosting multi-architecture images is already well supported by Docker’s Registry, both self-hosted and on hub.docker.com. Your mileage may vary with third-party Docker registry implementations.
In this post, we’ll demonstrate how to build and publish multi-architecture Docker images on an ARM Linux host for both x86-64 (AMD64) and ARM64, so you can run a Docker container from the image on either architecture.
Note: if you’re OK building your images on your macOS or Windows desktop, Docker Desktop ships out of the box with support for building multi-architecture Docker images. However, if you run Linux, or want to build your Docker images properly as part of your CI/CD pipeline, read on.
Install Docker 19.03 or Later
To start, we’re going to need an ARM64 Linux host capable of running Docker 19.03 or later. You could use an x86-64 host as well; however, since we’re looking to benefit from the cost savings of ARM, we’ll use an ARM64 build server running Ubuntu 19.10. Ubuntu is a popular Linux distribution supported by multiple cloud providers, but other recent distributions should work fine as well. Whichever you choose, make sure you’re running Linux kernel 5.x or later. On AWS, you can use the Ubuntu 19.10 AMI.
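Before installing anything, it’s worth a quick sanity check that the host kernel is new enough. A minimal sketch that just parses the major version out of `uname -r`:

```shell
# Parse the kernel's major version and warn if it's older than 5.x.
major=$(uname -r | cut -d. -f1)
if [ "$major" -ge 5 ]; then
  echo "Kernel ${major}.x: OK"
else
  echo "Kernel ${major}.x: consider upgrading before building multi-arch images"
fi
```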
On Ubuntu, install docker.io from Ubuntu’s repository. We also install qemu-user-static, which enables a single host to build images for multiple architectures, and binfmt-support, which adds support for additional binary formats to the Linux kernel. Note that binfmt-support version 2.1.43 or later is required.
Add your user to the docker group so you can run Docker commands from your own account. Remember to reboot, or log out and back in, after running:
```shell
#!/bin/bash
# Install Docker and multi-arch dependencies

sudo apt-get install binfmt-support qemu-user-static
sudo apt-get install docker.io
sudo usermod -aG docker $USER
sudo reboot
```
Install Docker Buildx
Next, we need to install the buildx command-line plugin for Docker. Buildx is in technology preview and offers experimental build features such as multi-architecture builds. If you enabled Docker to run as your regular user, you can install the plugin as that user rather than as root. The code below will install the latest release for 64-bit ARM.
```shell
#!/bin/bash
# Install buildx for arm64 and enable the Docker CLI plugin

sudo apt-get install jq
mkdir -p ~/.docker/cli-plugins
BUILDX_URL=$(curl -s https://api.github.com/repos/docker/buildx/releases/latest | jq -r '.assets[].browser_download_url' | grep arm64)
wget $BUILDX_URL -O ~/.docker/cli-plugins/docker-buildx
chmod +x ~/.docker/cli-plugins/docker-buildx
```
Build Multi-Architecture Images
Building multi-architecture images (Docker’s documentation refers to these as multi-platform images) requires a builder backed by the docker-container driver. Buildx supports two strategies for building cross-platform images:
- Using QEMU emulation support in the kernel
- Building on multiple native nodes coordinated by a single builder
Here we’re using the QEMU approach, as it is the cheaper of the two options: it only requires a single build host for all targeted architectures. Note that Docker is not using QEMU here to create a fully functional virtual machine; it uses QEMU user mode, so only system calls need to be emulated.
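If you want to confirm that the QEMU user-mode emulators are actually registered with the kernel, binfmt_misc exposes them under /proc. A minimal check (the exact handler names depend on your qemu-user-static package):

```shell
# List qemu binfmt_misc handlers; each registered entry lets the kernel
# transparently exec binaries for that foreign architecture.
ls /proc/sys/fs/binfmt_misc/ | grep qemu || echo "no qemu handlers registered"
```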
As your CI/CD needs evolve you may wish to invest in a build farm of native nodes to speed up the build process.
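For reference, a native build farm is just a builder with additional nodes appended to it. A minimal sketch, assuming `arm-node` and `x86-node` are Docker contexts you’ve already created for two remote hosts (both names are hypothetical):

```shell
# Create a builder whose arm64 images build natively on the ARM host...
docker buildx create --name native-builder --platform linux/arm64 arm-node
# ...and append the x86-64 host to build amd64 images natively.
docker buildx create --name native-builder --append --platform linux/amd64 x86-node
docker buildx use native-builder
```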
Let’s create and bootstrap the builder; you can give it whatever name you’d like:
```shell
$ docker buildx create --name mbuilder
mbuilder

$ docker buildx use mbuilder

$ docker buildx inspect --bootstrap
Name:   mbuilder
Driver: docker-container

Nodes:
Name:      mbuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/arm64, linux/amd64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
Perfect, we now have a builder capable of targeting linux/arm64, linux/amd64, and other architectures!
Now let’s build an image that can be run on both Linux amd64 and arm64 from a simple Dockerfile.
Note that the image you are pulling from must also support the architectures you plan to target. This can be checked using:
```shell
$ docker buildx imagetools inspect alpine
```
Here’s our simple Dockerfile:

```dockerfile
FROM alpine
RUN apk add util-linux
CMD ["lscpu"]
```

```shell
$ docker buildx build --platform linux/amd64,linux/arm64 -t foo4u/demo-mutliarch:2 --push .
[+] Building 4.7s (9/9) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 31B
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest
 => [linux/amd64 1/2] FROM docker.io/library/alpine@sha256:2171658620155679240babee0a7714f6509fae66898db422ad803b951257db78
 => => resolve docker.io/library/alpine@sha256:2171658620155679240babee0a7714f6509fae66898db422ad803b951257db78
 => CACHED [linux/amd64 2/2] RUN apk add util-linux
 => [linux/arm64 1/2] FROM docker.io/library/alpine@sha256:2171658620155679240babee0a7714f6509fae66898db422ad803b951257db78
 => => resolve docker.io/library/alpine@sha256:2171658620155679240babee0a7714f6509fae66898db422ad803b951257db78
 => CACHED [linux/arm64 2/2] RUN apk add util-linux
 => exporting to image
 => => exporting layers
 => => exporting manifest sha256:cb54200a7c04dded134ca9e3e6a0e434c2fdf851fb3a7226941d0983ad5bfb88
 => => exporting config sha256:307b885367f8ef4dc443dc35d6ed3298b9a3a48a846cf559a676c028a359731b
 => => exporting manifest sha256:6f4fe17def66ef5bc79279448e1cb77a1642d460ed58d5dc60d0e472c023e2eb
 => => exporting config sha256:26e6b092c7c1efffe51ce1d5f68e3359ab44152d33df39e5b85cd4ff6cfed3d4
 => => exporting manifest list sha256:3b4e4135b92017e5214421543b813e83a77fcea759af8067c685b70a5d978497
 => => pushing layers
 => => pushing manifest for docker.io/foo4u/demo-mutliarch:2
```

There’s a lot going on here, so let’s unpack it:
1. Docker transfers the build context to our builder container.
2. The builder builds an image for each architecture we requested with the --platform argument.
3. The images are pushed to Docker Hub.
4. Buildx generates a manifest list (a JSON file describing the images) and pushes it to Docker Hub as the image tag.
We can now use imagetools to inspect the generated Docker image:
```shell
$ docker buildx imagetools inspect foo4u/demo-mutliarch:2
Name:      docker.io/foo4u/demo-mutliarch:2
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:3b4e4135b92017e5214421543b813e83a77fcea759af8067c685b70a5d978497

Manifests:
  Name:      docker.io/foo4u/demo-mutliarch:2@sha256:cb54200a7c04dded134ca9e3e6a0e434c2fdf851fb3a7226941d0983ad5bfb88
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64

  Name:      docker.io/foo4u/demo-mutliarch:2@sha256:6f4fe17def66ef5bc79279448e1cb77a1642d460ed58d5dc60d0e472c023e2eb
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
```
Here we can see that foo4u/demo-mutliarch:2 is a manifest list pointing to the manifests for each of the platforms we targeted during the build. Although it appears on the registry as a single image, it’s actually a manifest containing links to the platform-specific images: buildx built and published an image per architecture and then generated a manifest list linking them together.
Docker uses this information when pulling the image to download the appropriate image for the machine’s runtime architecture.
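You can also override that automatic selection with the --platform flag to docker run (experimental in Docker 19.03). For example, on an arm64 host this pulls the amd64 variant and runs it under QEMU emulation:

```shell
# Explicitly request the amd64 variant of the multi-arch image.
docker run --rm --platform linux/amd64 foo4u/demo-mutliarch:2
```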
Let’s run the image on x86-64 / amd64:
```shell
$ docker run --rm foo4u/demo-mutliarch:2
Unable to find image 'foo4u/demo-mutliarch:2' locally
2: Pulling from foo4u/demo-mutliarch
e6b0cf9c0882: Already exists
Status: Downloaded newer image for foo4u/demo-mutliarch:2
Architecture: x86_64
```
Now let’s run the image on arm64:
```shell
$ docker run --rm foo4u/demo-mutliarch:2
Unable to find image 'foo4u/demo-mutliarch:2' locally
2: Pulling from foo4u/demo-mutliarch
Status: Downloaded newer image for foo4u/demo-mutliarch:2
Architecture: aarch64
```
That’s it! Now we have a fully functioning Docker image that we can run on either our existing x86-64 servers or our shiny new ARM64 servers!
In conclusion, getting started with multi-architecture Docker images on Linux isn’t so hard. We can even use an ARM server to build the images, potentially saving us money on our CI/CD server(s) as well as our staging and production infrastructure.
Bonus: you can further optimize your Docker builds if the language you use has good multi-architecture support (such as Java or Go). For example, you can build a Spring Boot application with a single compile stage that runs on the build platform:
```dockerfile
FROM --platform=$BUILDPLATFORM amazoncorretto:11 as builder

COPY . /srv/
WORKDIR /srv
RUN ./mvnw -DskipTests=true package spring-boot:repackage

FROM amazoncorretto:11

COPY --from=builder /srv/target/my-service-0.0.1-SNAPSHOT.jar /srv/

EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/srv/my-service-0.0.1-SNAPSHOT.jar"]
```
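The build invocation stays the same as before; only the Dockerfile changes. Assuming a hypothetical image name foo4u/my-service:

```shell
# Compile once on the build platform, then package a runtime image per architecture.
docker buildx build --platform linux/amd64,linux/arm64 -t foo4u/my-service:1 --push .
```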