Run Docker commands in Bitbucket Pipelines

Bitbucket Pipelines allows you to build a Docker image from a Dockerfile in your repository and push it to a Docker registry by running Docker commands within your build pipeline. Dive straight in – the pipeline environment is provided by default and you don't need to customize it!

Enable access to Docker

To enable access to the Docker daemon, you can either add docker as a service on the step (recommended) or add the global docker option in your bitbucket-pipelines.yml.

pipelines:
  default:
    - step:
        script:
          - ...
        services:
          - docker

Note that Docker does not need to be declared as a service in the definitions section. It is a default service that is provided by Pipelines without a definition.

Add Docker to all build steps in your repository

options:
  docker: true

Note that even if you declare Docker here, it still counts as a service for Pipelines, has a limit of 1 GB memory, and can only run alongside up to two other services in your build step. This setting is provided for legacy support, and we recommend setting it at the step level so there's no confusion about how many services you can run in your pipeline.

How it works

Configuring Docker as a service will:

  • mount the Docker CLI executable in your build container

  • run a Docker daemon and provide your build with access to it

You can verify this by running docker version:

pipelines:
  default:
    - step:
        script:
          - docker version
        services:
          - docker

You can check your bitbucket-pipelines.yml file with our online validator.

Enable Docker BuildKit

To use Docker BuildKit in a Bitbucket Pipeline, set the DOCKER_BUILDKIT=1 environment variable in the pipeline configuration (bitbucket-pipelines.yml).

For example:

pipelines:
  default:
    - step:
        script:
          - export DOCKER_BUILDKIT=1
          - docker build .
        services:
          - docker

For information on Docker BuildKit, visit: Docker Docs — Build images with BuildKit.

Docker BuildKit restrictions

To protect the security of our users, the following Docker BuildKit features have been disabled in addition to the features listed in Running Docker commands:

  • multi-architecture builds

  • the --platform option (such as docker run --platform linux/arm/v7)

  • the Docker BuildKit Buildx plugin

The following Dockerfile RUN directive options, also known as Dockerfile frontend syntaxes, have been disabled:

  • RUN --mount=type=ssh — To access your Bitbucket Pipelines SSH keys, use the --ssh option with the BITBUCKET_SSH_KEY_FILE variable, such as --ssh default=$BITBUCKET_SSH_KEY_FILE

  • RUN --security=insecure

For information on BuildKit Dockerfile frontend syntaxes, visit: Docker BuildKit GitHub repository — Dockerfile frontend syntaxes.
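For example, a minimal sketch of passing your Pipelines SSH key to a BuildKit build via the allowed --ssh option (this assumes BuildKit is enabled and your step uses the docker service):

# Pass the Pipelines SSH key to the build instead of using RUN --mount=type=ssh
docker build --ssh default=$BITBUCKET_SSH_KEY_FILE .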

Docker BuildKit caching limitations

The predefined docker cache used for caching the layers produced during Docker Build operations does not cache layers produced when using BuildKit.

The RUN --mount=type=cache Docker frontend syntax will only retain the cache until the pipeline step is complete; it will not be available for other steps in the pipeline or new pipeline runs.
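For illustration, a standard BuildKit cache mount such as the one in this sketch (the image and requirements file are placeholders; BuildKit must be enabled for the mount to work) would start empty in every new step and every new pipeline run:

# syntax=docker/dockerfile:1
FROM python:3.12-slim
COPY requirements.txt .
# This pip cache persists only for the duration of the current pipeline step
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt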

If Docker BuildKit is enabled and the build layers need to be cached, we recommend using the Docker Build --cache-from option. This allows one or more tagged images stored in an external image registry to be used as a cache source. This method also avoids the 1 GB size limit of the predefined docker cache.

Currently we do not support BuildKit advanced features such as the --cache-to option.

For example:

docker build \
  --cache-from $IMAGE:latest \
  --tag $IMAGE:$BITBUCKET_BUILD_NUMBER \
  --tag $IMAGE:latest \
  --file ./Dockerfile \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  "."

Where --cache-from $IMAGE:latest points to the previous successful deployment stored on an external registry, such as Docker Hub. For information about using the Docker build --cache-from option, visit: Docker docs — Specifying external cache sources.
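Putting this together, a complete step might look like the following sketch. It assumes $IMAGE, $DOCKER_USERNAME, and $DOCKER_PASSWORD are configured as repository variables, and that a previous run has already pushed $IMAGE:latest with inline cache metadata:

pipelines:
  default:
    - step:
        script:
          - export DOCKER_BUILDKIT=1
          - docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
          # BuildKit pulls the inline cache metadata from the registry as needed
          - docker build --cache-from $IMAGE:latest --build-arg BUILDKIT_INLINE_CACHE=1 --tag $IMAGE:latest .
          - docker push $IMAGE:latest
        services:
          - docker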

Using secrets and secure variables with Docker BuildKit

Do not pass secrets or secure variables (such as passwords and API keys) to BuildKit using the docker build --build-arg option. This will cause the secret to be included in the resulting Docker image and the Pipeline logs.

Docker BuildKit includes secret handling, helping to keep your passwords, API keys, and other sensitive information out of the Docker images you generate. To use BuildKit secrets, use the --secret Docker Build option and the --mount=type=secret BuildKit frontend syntax.

The following examples show how to use BuildKit secrets with Bitbucket Pipelines secure variables and with secrets sourced from external secret managers.

Use Bitbucket Pipelines Secure variables with BuildKit

Bitbucket Pipelines Secure variables can be passed directly to a BuildKit build using the --secret option; the secret can then be used inside the build using the --mount=type=secret BuildKit frontend syntax.

For example, using a Bitbucket Pipelines Secure variable named MY_SECRET, the pipeline step would be:

pipelines:
  default:
    - step:
        name: 'Using secure variables with BuildKit'
        script:
          # Enable Docker BuildKit
          - export DOCKER_BUILDKIT=1
          # Pass the MY_SECRET variable into the BuildKit docker build
          # and prevent it from being cached.
          - docker image build -t latest --secret id=MY_SECRET --progress=plain --no-cache dockerfile
        services:
          - docker

The secure variable can then be mounted on the RUN instruction from the Docker default secret store (/run/secrets/*), such as:

FROM ubuntu:latest

# Mount and print MY_SECRET
RUN --mount=type=secret,id=MY_SECRET \
    cat /run/secrets/MY_SECRET

In this example, the secret appears as a blank line in the pipeline log, but is printed by the cat command inside the container used to generate the image layer.

Use externally sourced secrets with BuildKit in Bitbucket Pipelines

Secrets from external secret managers (such as HashiCorp Vault, Azure Key Vault, and Google Cloud Secret Manager) need to be stored in a file on the pipeline before they can be used. The file is deleted once the pipeline step is complete and the container is removed.

For example, to use MY_OTHER_SECRET from an external provider: get the secret from the external provider, store it in a file, and pass it to the build using the --secret option. This example uses echo 'My secret API key' instead of retrieving a secret from an external provider.

pipelines:
  default:
    - step:
        name: 'Using secrets with BuildKit'
        script:
          # Enable Docker BuildKit
          - export DOCKER_BUILDKIT=1
          # Store the external secret in a file on the pipeline; remember that
          # the file and container are deleted at the end of the step.
          - echo 'My secret API key' > /my_secret_file
          # Pass MY_OTHER_SECRET into the BuildKit docker build
          # and prevent it from being cached.
          - docker image build -t latest --secret id=MY_OTHER_SECRET,src=/my_secret_file --progress=plain --no-cache .
        services:
          - docker

The secure variable can then be mounted on the RUN instruction from the custom secret location (/my_secret_file), such as:

FROM ubuntu:latest

# Mount and print MY_OTHER_SECRET
RUN --mount=type=secret,id=MY_OTHER_SECRET,dst=/my_secret_file \
    cat /my_secret_file

In this example, the secret appears as a blank line in the pipeline log, but is printed by the cat command inside the container used to generate the image layer.

Troubleshooting Docker BuildKit

If you are experiencing Docker build issues due to Docker BuildKit, you can disable BuildKit by setting DOCKER_BUILDKIT=0 before running the docker command.

Such as:

pipelines:
  default:
    - step:
        script:
          - export DOCKER_BUILDKIT=0
          - docker build .
        services:
          - docker

Running Docker commands

Inside your Pipelines script you can run most Docker commands. See the Docker command line reference for information on how to use these commands.

These restrictions, including the restricted commands listed below, only apply to pipelines executed on our cloud infrastructure; they don't apply to self-hosted pipeline runners.

We've had to restrict a few for security reasons, including:

  • Docker Swarm-related commands

  • mapping volumes with a source outside $BITBUCKET_CLONE_DIR

  • the --platform option

  • docker run --privileged

  • docker run --mount
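For instance, a volume mount sourced from inside the clone directory is allowed, while one sourced elsewhere is blocked (a sketch; my-image is a placeholder image name):

# Allowed: source path is inside $BITBUCKET_CLONE_DIR
docker run -v $BITBUCKET_CLONE_DIR/app:/app my-image

# Blocked: source path is outside the clone directory
docker run -v /etc/config:/config my-image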

Full list of restricted commands

The security of your data is really important to us, especially when you are trusting it to the cloud. To keep everybody safe we've restricted the following:

For docker container run/docker run we don't allow:

  • --cap-add

  • --device

  • --ipc

  • --mount

  • --pid

  • --privileged

  • --security-opt

  • --userns

  • --uts

  • --volume, -v (other than /opt/atlassian/bitbucketci/agent/build/.* or /opt/atlassian/pipelines/agent/build/.*)

For docker container update/docker update we don't allow:

  • --devices

For docker container exec/docker exec we don't allow:

  • --privileged

For docker image build / docker build we don't allow:

  • --security-opt

Using Docker Compose

If you'd like to use Docker Compose in your container, you'll need to install a binary that is compatible with your specified build container.
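One common approach is to download a standalone Docker Compose binary in your step. This is a sketch assuming a Linux x86_64 build image with curl available; the pinned version is illustrative, so pick one that suits your image:

pipelines:
  default:
    - step:
        script:
          # Download a standalone Docker Compose binary and make it executable
          - curl -L "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-linux-x86_64" -o docker-compose
          - chmod +x docker-compose
          - ./docker-compose version
        services:
          - docker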

Using an external Docker daemon

If you have configured your build to run commands against your own Docker daemon hosted elsewhere, you can continue to do so. In this case, you should provide your own CLI executable as part of your build image (rather than enabling Docker in Pipelines), so the CLI version is compatible with the daemon version you are running.
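For example, a sketch pointing the CLI at a remote daemon over TLS (docker.example.com is a placeholder; this assumes your build image already includes the docker CLI and the TLS client certificates it needs):

pipelines:
  default:
    - step:
        script:
          # Point the Docker CLI at your own remote daemon instead of the
          # built-in docker service
          - export DOCKER_HOST=tcp://docker.example.com:2376
          - export DOCKER_TLS_VERIFY=1
          - docker version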

Docker layer caching

If you have added Docker as a service, you can also add a Docker cache to your steps. Adding the cache can speed up your build by reusing previously built layers and only creating new dynamic layers as required in the step.

pipelines:
  default:
    - step:
        script:
          - docker build ...
        services:
          - docker
        caches:
          - docker  # adds docker layer caching

A common use case for the Docker cache is when you are building images. However, if you find that performance slows with the cache enabled, check that you are not invalidating the layers in your Dockerfile.

Docker layer caches have the same limitations and behaviors as regular caches as described on Caching Dependencies.

Docker memory limits

By default, the Docker daemon in Pipelines has a total memory limit of 1024 MB. This allocation includes all containers run via docker run commands, as well as the memory needed to execute docker build commands.

To increase the memory available to Docker you can change the memory limit for the built-in docker service. The memory parameter is a whole number of megabytes greater than 128 and not larger than the available memory for the step.

Below is a working example of how you can set memory limits for multiple Docker services and use the appropriate service depending on the step's requirements.

definitions:
  services:
    docker:
      memory: 512
    docker-with-more-memory:
      memory: 2048
      type: docker
    docker-with-large-memory:
      memory: 5120
      type: docker

pipelines:
  custom:
    pipeline1:
      - step:
          services: [docker]
          script:
            - echo "Docker service with 512 MB memory"
    pipeline2:
      - step:
          services: [docker-with-more-memory]
          script:
            - echo "Docker service with 2048 MB memory"
    pipeline3:
      - step:
          services: [docker-with-large-memory]
          size: 2x
          script:
            - echo "Docker service with 5120 MB memory"

In the example below, we are giving the docker service twice the default allocation of 1024 MB (2048 MB). Depending on your other services and whether you have configured large builds for extra memory, you can increase this even further (learn more about memory limits).

pipelines:
  default:
    - step:
        script:
          - docker version
        services:
          - docker

definitions:
  services:
    docker:
      memory: 2048

Authenticate when pushing to a registry

To push images to a registry, you need to use docker login to authenticate prior to calling docker push. You should set your username and password using secure variables.

For example, add this to your pipeline script:

docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
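A typical build-and-push sequence then looks like this sketch (my-app is a placeholder repository name):

docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
docker build --tag $DOCKER_USERNAME/my-app:$BITBUCKET_BUILD_NUMBER .
docker push $DOCKER_USERNAME/my-app:$BITBUCKET_BUILD_NUMBER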

Reserved ports

There are some reserved ports which can't be used:

  • 29418
