Write a pipe for Bitbucket Pipelines

A pipe is a custom Docker image for a container that contains a script to perform a task. There are plenty of pipes already available, but you can also write your own.

A pipe is made up of a few different files:

  • A script, or binary, the code that performs the task.

  • A Dockerfile, which tells us how to build the Docker container that runs your script.

  • (Optional) metadata and readme docs, to make your pipe easy to understand.

  • (Optional) some CI/CD configuration so that you can easily update the pipe.

These files are stored in a single place, usually a pipe repository.

Why write a pipe?

There are a few reasons to write a pipe:

  • to do the same action in several steps of your pipeline

  • to run similar tasks in multiple repositories

  • if you are a vendor, to make your software or service easier to use in pipelines

  • to perform an action which needs dependencies that your main pipeline doesn't have.

By making a pipe you simplify the configuration of pipelines, and make reuse easy and efficient.

The possibilities for pipes are endless, and many already exist.

How to write a pipe

Depending on what you need it for, you can make a simple pipe or a complete pipe. They work in the same way, the only difference is how much detail and configuration you add.



| Simple pipe | Complete pipe |
| --- | --- |
| Get going fast | Best practice |
| Updating later on can be more complex | CI/CD to automate versioning |
| Minimal configuration | Good documentation for others or your future self |
| Private use only | Eligible to be added to our marketplace |
| 3 files: your script, a Dockerfile, a basic pipeline to update Dockerhub | The 3 files already mentioned, plus: a metadata file, a readme, a testing script, and any other files you want to include! |

In this guide, we'll make a simple pipe first, and then show you the steps to make it a complete pipe. We'll build the container and upload it to Dockerhub, so make sure you have an account there (it's free to set up!).


  • Pipes only work with a public image in docker hub.

  • If you are skilled in Docker and want to make a pipe only for private use, you can just make your own Docker container containing all the files required.

Step 1 - Create or import a repository

First, we need a place to put your files, so we start by creating a repository.

There are 3 main ways to make a pipe repository:

  1. create an empty repository (don't worry, we'll guide you as to what to put in it)

  2. import one of our example repositories

  3. use our generator to create a local repository (recommended only for complete pipes)

1. How to create a new repo on Bitbucket

From Bitbucket, click the  +  icon in the global sidebar and select Repository.
Bitbucket displays the Create a new repository page. Take some time to review the dialog's contents. With the exception of the Repository type, everything you enter on this page can be changed later.

We also have 3 example repositories: a simple pipe repository, and 2 complete pipe repositories (for Bash and Python) which you can use as a reference, or import if you like.

2. How to import a repo

  1. Open https://bitbucket.org and make sure you are logged in

  2. Click the + on the left sidebar then, under the Import heading, select Repository

  3. Enter the example repo URL (for a Bash complete pipe: https://bitbucket.org/atlassian/demo-pipe-bash/src/main/)

  4. Give your pipe repo a name (for example, my first bash pipe)

If you already know you want to make a complete pipe, you can use our generator to create the framework, and partially fill out the files.

3. How to get, and use, the complete pipe generator

  1. If you don't have it already, install Node.js on your local machine

  2. Our generator runs using Yeoman, so we need to install that, and our pipe generator:

    npm install -g yo generator-bitbucket-pipe
  3. Then you are ready to go! Run our generator and follow the prompts:

    yo bitbucket-pipe

    You'll be asked:

    • Your Bitbucket account name

    • What you want to call your repository (we generate it locally)

    • Your Dockerhub username

    • What title you'd like to give your pipe

    • Who will be maintaining the pipe

    • If there is an SVG logo hosted elsewhere you'd like to use for your pipe, its URL (otherwise we default to a location in your repo)

    • If there are any metadata tags you'd like to use to describe your pipe, for example 'deployment' or 'kubernetes'

With this info we'll make the files you need, and fill out as much as we can automatically.

Step 2 - Create your script or binary

This is the main part of your pipe, which runs when your pipe is called. It contains all the commands and logic to perform the pipe task.  Use any coding language of your choice to make a script, or binary file.

A simple script might look like:

Example: pipe.sh

```bash
#!/usr/bin/env bash
set -e
echo 'Hello World'
```

But you'll probably want to do more than that! A good next step might be using variables.

You can use any of the default variables available to the pipeline step that calls the pipe (see this list of default variables), and any pipe variables that are provided when the pipe is called. You can only use user-defined variables (account and team, repository, or deployment) if you list them in your pipe.yml (more on this later).

Example: pipe.sh

```bash
#!/usr/bin/env bash
set -e

echo "Hello $BITBUCKET_REPO_OWNER"

# When you call the pipe from your pipeline
# you can provide variables, for example here: GREETING
echo "$GREETING"
```

You can make some variables mandatory, and give others default values that you've specified. There are 2 ways to specify a default value: here we'll show defining it in your script, and later on we'll show you the more powerful way, using a pipe.yml file. We show both in our complete pipe for bash example:

Example pipe.sh with mandatory and suggested variables

In the complete repos we keep the scripts in the pipe directory.

In the script below, we can use 3 variables, but keep things simple by setting sensible defaults for 2 of them. That way the end user of the pipe only has to provide $NAME to get the pipe working.

It's also great to have a debug mode to help with troubleshooting. In this example we just print the commands to the terminal, but you could add all sorts of extra detail here, to help a user track down the source of a problem.

```bash
#
# Required globals:
#   NAME
#
# Optional globals:
#   GREETING (default: "Hello World")
#   DEBUG (default: "false")

source "$(dirname "$0")/common.sh"

info "Executing the pipe..."

enable_debug() {
  if [[ "${DEBUG}" == "true" ]]; then
    info "Enabling debug mode."
    set -x
  fi
}

enable_debug

# required parameters
NAME=${NAME:?'NAME environment variable missing.'}

# default parameters
GREETING=${GREETING:="Hello world"}
DEBUG=${DEBUG:="false"}

run echo "${GREETING} ${NAME}"

if [[ "${status}" == "0" ]]; then
  success "Success!"
else
  fail "Error!"
fi
```

Check out the pipe.yml file for a more powerful way to do this, later in this guide.

To make life easiest for the end user of the pipe, we recommend keeping mandatory variables to a minimum. If there are sensible defaults for a variable, provide those in the script and the end user can choose to override them if needed.
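These mandatory and default-value patterns boil down to bash parameter expansion. Here's a minimal sketch; the greet function and its variables are illustrative, not from the demo repo:

```shell
#!/usr/bin/env bash
# ${VAR:?message} aborts with a clear error if VAR is unset or empty;
# ${VAR:-default} falls back to a default the caller can override.
greet() {
  local name="${NAME:?'NAME environment variable missing.'}"
  local greeting="${GREETING:-Hello world}"
  echo "${greeting} ${name}"
}

NAME="baz" greet                  # prints: Hello world baz
NAME="baz" GREETING="Hi" greet    # prints: Hi baz
```

The end user only has to supply NAME to get the pipe working; GREETING silently falls back to its default unless they override it.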

We also recommend taking the time to add colors to your log output, and provide clickable links to any external output.

As we are going to run the script, it needs to be executable (in your terminal you might run: chmod +x pipe.sh). If you are using our example repositories, this is done for you already.

Step 3 - Configure the Dockerfile

To run the script you just wrote, we need to put it into a Docker container.  The Dockerfile defines the details of how this Docker container should be built. At the most basic it needs to have values for FROM, COPY, and ENTRYPOINT.

More details on FROM, COPY, and ENTRYPOINT


Details and tips

FROM <imagename>

Tells us which image to use as a base for the Docker container.

Example: To use Alpine Linux, version 3.8, use: alpine:3.8


  • Use official Docker images.

  • Use lightweight images where possible. These are often called 'alpine' or 'slim' versions. They are smaller and quicker to download making your pipe faster.

  • Specify an image version rather than using the latest tag, to make sure any image changes don't break your pipe.

For more information on working with images see Docker's best practices documentation.

COPY <source> <destination>

Copies your scripts (or binaries) into the container.

Example: To copy the contents of the pipe directory into the root of the container use: 

COPY pipe /


Only copy the files that your pipe needs to run, to keep your pipe as quick as possible.

ENTRYPOINT ["<path to script/binary>"]

Path to the script or binary to run when the container starts.

Example: To run the script, pipe.sh, that you just copied into the root of the container:

ENTRYPOINT ["/pipe.sh"]

Tip: remember that the path is the file's location in the container, not in the original pipe repo.

The complete pipe for bash example contains a Dockerfile already in its root directory:

```dockerfile
FROM alpine:3.8
RUN apk update && apk add bash
COPY pipe /
ENTRYPOINT ["/pipe.sh"]
```

This means the container will

  • use an Alpine Linux 3.8 image

  • run an update command and install bash into the container

  • have the contents of the pipe directory copied into its root directory

  • start running pipe.sh

You can edit these to suit your needs. Want Alpine Linux 3.9? No problem, just change the FROM command to read FROM alpine:3.9

Want to install more packages into Linux? Add more to the RUN command. Before you do, though, have a look in Dockerhub to see if there is an image that already has those packages installed. It will save you precious time!
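For instance, a variant pinning Alpine 3.9 and adding curl alongside bash might look like the sketch below; curl is just an illustrative extra package:

```dockerfile
FROM alpine:3.9
# bash for the pipe script, plus any extra tools the script calls
RUN apk update && apk add bash curl
COPY pipe /
ENTRYPOINT ["/pipe.sh"]
```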

Sometimes getting the script and the container exactly how you want them can take a few iterations. With this in mind, we recommend installing Docker locally on your machine so you can test building and running your pipe container without using build minutes.

More detail on using docker locally

Once Docker is installed, the first thing you need to do is build your image:

```shell
docker build -t bitbucketpipelines/demo-pipe-bash:0.1.0 .
```


  • -t : Names and tags the image, in the form <account>/<image name>:<tag>.

  • . : Tells Docker to use the current directory as the root of the Docker build (it will automatically look for a file called Dockerfile in your current directory).

Then you can run your freshly built image, passing it variables using -e,

```shell
docker run \
  -e NAME="first last" \
  -e DEBUG="true" \
  -v $(pwd):$(pwd) \
  -w $(pwd) \
  bitbucketpipelines/demo-pipe-bash:0.1.0
```

Learn more about running Docker locally.
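If the number of variables grows, repeating -e flags gets noisy. docker run also accepts an --env-file flag; here's a sketch, where the file name and values are arbitrary examples:

```shell
#!/usr/bin/env bash
# Write the pipe variables to a file, one KEY=value per line.
cat > pipe.env <<'EOF'
NAME=first last
DEBUG=true
EOF

# A single --env-file flag then replaces the individual -e options:
# docker run --env-file pipe.env \
#   -v $(pwd):$(pwd) -w $(pwd) \
#   bitbucketpipelines/demo-pipe-bash:0.1.0
```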

Step 4 - Make a basic pipeline to update your pipe container to Dockerhub

The final step in making a simple pipe is to build your container, and upload it to Dockerhub.

Using a pipeline to do that isn't strictly necessary, but it makes future updates easier, and automatically updates the version number so you can quickly make sure you are using the latest version.

The example bitbucket-pipelines.yml below builds and pushes a new version of your container to Dockerhub whenever you commit. So if you update which image you want to use for your Docker container, or make some changes to your script, this will automatically make sure the version on Dockerhub is up to date. Make sure you have a Dockerhub account, then all you need to do is add 2 variables to your pipe repository, DOCKERHUB_USERNAME and DOCKERHUB_PASSWORD, and enable pipelines.

Example: bitbucket-pipelines.yml

```yaml
image:
  name: atlassian/default-image:2

pipelines:
  default:
    - step:
        name: Build and Push
        script:
          # Build and push image
          - VERSION="1.$BITBUCKET_BUILD_NUMBER"
          - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
          - IMAGE="$DOCKERHUB_USERNAME/$BITBUCKET_REPO_SLUG"
          - docker build -t ${IMAGE}:${VERSION} .
          - docker tag ${IMAGE}:${VERSION} ${IMAGE}:latest
          - docker push ${IMAGE}
          # Push tags
          - git tag -a "${VERSION}" -m "Tagging for release ${VERSION}"
          - git push origin ${VERSION}
        services:
          - docker
```

Congratulations, you've made a simple pipe!

And that's all you need for a simple pipe! You can now refer to your pipe in a step using the syntax:

```yaml
pipe: docker://<DockerAccountName>/<ImageName>:<version>
```
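In context, a step calling a hypothetical pipe image might look like this; the account, image name, and NAME variable are placeholders:

```yaml
pipelines:
  default:
    - step:
        name: Greet
        script:
          - pipe: docker://myaccount/my-first-pipe:1.0.0
            variables:
              NAME: 'world'
```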

The next steps of pipe creation are designed to make your life easier in the long run, and make it simpler for other people to use your pipe. They are required for anyone who wants to make an officially supported pipe.

If you are making a complete pipe, you'll also need to set up:

  • pipe metadata - details of your pipe, for example naming the maintainer

  • a readme - details on how to use your pipe, for example, the variables you need

  • automated testing - to make sure changes haven't broken anything

  • semantic versioning - to make it clear which version of your pipe to use

  • debug logging - to make it easier for end users to troubleshoot if something goes wrong

Don't worry, this is already configured in our example repos (Bash and Python), so take a peek at them and we'll guide you through the next steps!

Step 5 - Make pipe.yml - the metadata file

The pipe.yml file provides handy information to categorize the pipe. It contains the following fields:

name

The name or title of the pipe as we should display it.

image

The pipe Docker image you created on Dockerhub, in the form: account/repo:tag

category

Category of the pipe. Can be one of:

  • Alerting

  • Artifact management

  • Code quality

  • Deployment

  • Feature flagging

  • Monitoring

  • Notifications

  • Security

  • Testing

  • Utilities

  • Workflow automation

description

A short summary describing what the pipe does.

repository

The pipe repository's absolute URL on Bitbucket. Example: https://bitbucket.org/atlassian/demo-pipe-bash

maintainer

Object that contains name, website, and email.

  • name - the name of the maintainer company.

  • website - the maintainer company's website.

  • email - the email address of the team or person who will be maintaining the pipe. (Optional)

vendor

Object that contains name, website, and email. For vendor pipes this field is mandatory.

  • name - the name of the vendor company.

  • website - the vendor company's website.

  • email - an email address for the vendor company, team, or person. (Optional)

tags

Keywords to help users find and categorize the pipe. Options could include the type of function your pipe performs (deploy, notify, test), your product or company name, or specific tools you are integrating with.

variables (Optional)

Object that contains name and default.

  • name - the name of the variable.

  • default - the default value, or the pipeline environment variable to pull the value from.

A pipe.yml file might look like this:

Example pipe.yml

```yaml
name: My demo pipe
image: bitbucketpipelines/demo-pipe-bash:0.1.0
category: Utilities
description: Showing how easy it is to make pipes for Bitbucket Pipelines.
variables:
  - name: ENV_NAME
    default: 'Production'
  - name: SECRET_KEY
    default: '${SECRET_KEY}'  # this pulls the value of $SECRET_KEY from the pipeline
  - name: COMMIT_HASH
    default: "$BITBUCKET_COMMIT"
repository: https://bitbucket.org/atlassian/demo-pipe-bash
maintainer:
  name: Atlassian
  website: https://www.atlassian.com/
  email: contact@atlassian.com
vendor:
  name: Demo
  website: https://example.com/
  email: contact@example.com
tags:
  - helloworld
  - example
```

Step 6 - Write README.md - your readme file

Your readme tells your users how to use your pipe. We can display it in Bitbucket, so it needs to be written in Markdown, in a specific format, with the headings in the order listed:

```markdown
# Bitbucket Pipelines Pipe: <pipe_name>

<pipe_short_description>

## YAML Definition

Add the following snippet to the script section of your `bitbucket-pipelines.yml` file:

<pipe_code_snippet>

## Variables

<pipe_variables_table>

## Details

<pipe_long_description>

## Prerequisites

<pipe_prerequisites>

## Examples

<pipe_code_examples>

## Support

<pipe_support>
```




<pipe_name>

The pipe name.

<pipe_short_description>

Short summary of what the pipe does. We recommend the format "[action verb] (to) [destination | vendor | suite]", for example "Deploy to Dockerhub" or "Notify Opsgenie".

<pipe_code_snippet>

What someone needs to copy and paste into their pipeline to use your pipe.

<pipe_variables_table>

A list of the variables your pipe needs, making it clear whether they are mandatory or optional.

<pipe_long_description>

Detailed explanation of usage, configuration, setup, etc.

<pipe_prerequisites>

Anything people need to have in place before using the pipe, for example: installed packages, accounts on third-party systems, etc.

<pipe_code_examples>

Code snippets with example variables. We recommend covering at least:

  • Basic: with only mandatory variables.

  • Advanced: with all variables.

<pipe_support>

Details on how people can contact you for questions and support.
As a backup you can also point users to our community, so that if others have the same question they can find the answer easily, but this may not give as good an experience to your pipe users!

Check out the following example readme...

Here's the readme from our slack notification pipe.

Bitbucket Pipelines Pipe: Slack Notify

Sends a notification to Slack.

YAML Definition

Add the following snippet to the script section of your bitbucket-pipelines.yml file:

```yaml
- pipe: atlassian/slack-notify:0.2.1
  variables:
    WEBHOOK_URL: '<string>'
    MESSAGE: '<string>'
    # DEBUG: '<boolean>' # Optional.
```





Variables

| Variable | Usage |
| --- | --- |
| WEBHOOK_URL (*) | Incoming Webhook URL. It is recommended to use a secure repository variable. |
| MESSAGE (*) | Notification message. |
| DEBUG | Turn on extra debug information. Default: false. |

(*) = required variable.


Prerequisites

To send notifications to Slack, you need an Incoming Webhook URL. You can follow the instructions here to create one.


Examples

Basic example:

```yaml
script:
  - pipe: atlassian/slack-notify:0.2.1
    variables:
      WEBHOOK_URL: $WEBHOOK_URL
      MESSAGE: 'Hello, world!'
```


Support

If you'd like help with this pipe, or you have an issue or feature request, let us know on Community.

If you’re reporting an issue, please include:

  • the version of the pipe

  • relevant logs and error messages

  • steps to reproduce


License

Copyright (c) 2018 Atlassian and others. Apache 2.0 licensed, see LICENSE.txt file.

Step 7 - Write your tests

It's good practice to add automated integration testing to your pipe, so before you send it out into the world you can make sure it does what you expect it to do. For example, you could test how it deals with variables that are unexpected, or that it can successfully connect to any third-party services it needs to. For any pipes that are going to become officially supported, it's essential that they are tested regularly.

Example tests

In our bash demo pipe, we use bats (Bash Automated Testing System) to run a basic test to make sure that we can run a Docker container with a NAME variable passed to it.

Example: test.bats

```bash
#!/usr/bin/env bats

setup() {
  DOCKER_IMAGE=${DOCKER_IMAGE:="test/demo-pipe-bash"}
  echo "Building image..."
  run docker build -t ${DOCKER_IMAGE} .
}

teardown() {
  echo "Teardown happens after each test."
}

@test "Dummy test" {
  run docker run \
    -e NAME="baz" \
    -v $(pwd):$(pwd) \
    -w $(pwd) \
    ${DOCKER_IMAGE}

  echo "Status: $status"
  echo "Output: $output"
  [ "$status" -eq 0 ]
}
```

Step 8 - Make your pipe easy to debug

In your script, we recommend building in a debug mode that outputs extra information about what is going on in the pipe.

We also recommend that you make any links shown in the logs clickable, and that you use colors in your output to highlight key information.

How you do this will depend on the language you are using to write your script, but you can see an example of this in the common.sh file in our bash demo repo.
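In bash, such helpers might be sketched like this; the function names mirror the style of common.sh, but this is not its actual source:

```shell
#!/usr/bin/env bash
# Minimal colored logging helpers plus a DEBUG switch.
info()    { echo -e "\033[34mINFO: $*\033[0m"; }     # blue
success() { echo -e "\033[32mSUCCESS: $*\033[0m"; }  # green
error()   { echo -e "\033[31mERROR: $*\033[0m"; }    # red

# When DEBUG=true, trace every command so users can see exactly what ran.
if [[ "${DEBUG}" == "true" ]]; then
  info "Enabling debug mode."
  set -x
fi

info "Executing the pipe..."
```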

Step 9 - Set up CI/CD to automate testing and updates

We also recommend using CI/CD to:

  • automate testing

  • automatically upload it to Dockerhub (or a site of your choice)

  • automatically update the version number in

    • the changelog

    • the readme

    • the metadata

Once you have your bitbucket-pipelines.yml file configured, you can enable pipelines: go to your repo's Settings > Pipelines section > Settings > Enable pipelines

An example CI/CD workflow

The workflow we recommend is to do all your pipe development work on a feature branch. Set up your pipeline so that any commits on a feature branch will run the tests for you.

If the tests pass you can then merge to your main branch with confidence. This merge triggers a main-branch-specific pipeline which updates the version of your pipe (we'll talk about how to do that in the next step) and uploads your image to Dockerhub.

Here's how we set up those pipelines in our bash demo repo:

Example: bitbucket-pipelines.yml

```yaml
image:
  name: atlassian/default-image:2

test: &test
  step:
    name: Test
    image: atlassian/default-image:2
    script:
      - npm install -g bats
      - bats test/test.bats
    services:
      - docker

push: &push
  step:
    name: Push and Tag
    image: python:3.6.7
    script:
      - pip install semversioner==0.6.16
      - ./ci-scripts/bump-version.sh
      - ./ci-scripts/docker-release.sh bitbucketpipelines/$BITBUCKET_REPO_SLUG
      - ./ci-scripts/git-push.sh
    services:
      - docker

pipelines:
  default:
    - <<: *test
  branches:
    main:
      - <<: *test
      - <<: *push
```

Step 10 - Set up semantic versioning

We encourage you to use semantic versioning (semver) for your pipe, so that it's clear what version is the latest, which version people should use, and if there is any chance of an update breaking things. The version has 3 parts: <major>.<minor>.<patch>, for example 6.5.2

You increase the version number depending on the changes you made. Update:

  • major: if you make changes that could break existing users, for example, changing the name of a mandatory variable

  • minor: if you add functionality in a backwards-compatible manner

  • patch: if you make backwards-compatible bug fixes
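The bump rules above can be sketched as a small shell function; this is illustrative only, since in practice the semversioner tool computes the next version for you:

```shell
#!/usr/bin/env bash
# Compute the next semantic version from the current one and a change type.
bump() {
  local major minor patch
  IFS='.' read -r major minor patch <<< "$1"
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
    *) echo "unknown change type: $2" >&2; return 1 ;;
  esac
}

bump "6.5.2" major   # -> 7.0.0
bump "6.5.2" minor   # -> 6.6.0
bump "6.5.2" patch   # -> 6.5.3
```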

There are a few places you'd want to update when you change the version, so to simplify that there is a tool called semversioner (https://pypi.org/project/semversioner/). It generates a new entry in your changelog, updates the version number, and commits back to the repository.

More detail on using semversioner

1. Install semversioner on your local machine:

pip install semversioner

2. When you are developing, the changes you are integrating to main will need one or more changeset files. Use semversioner to generate the changeset.

semversioner add-change --type patch --description "Fix security vulnerability."

3. Commit the changeset files generated in the .changes/next-release/ folder with your code. For example:

git add .

git commit -m "BP-234 FIX security issue with authentication"

git push origin

4. If you've imported our demo repo, then that's it! Merge to main and Bitbucket Pipelines will do the rest:

  • Generate a new version number based on the changeset types major, minor, patch.

  • Generate a new file in .changes directory with all the changes for this specific version.

  • (Re)generate the CHANGELOG.md file.

  • Bump the version number in the README.md example and pipe.yml metadata.

  • Commit and push back to the repository.

  • Tag your commit with the new version number.

We achieve this by having the pipeline call 3 scripts that use semversioner and the variables available to the pipe repo. Have a look at our example CI scripts.

Step 11 - Use your new complete pipe!

As with the simple version of the pipe, the last step is to build and push your container to Dockerhub. 

There are 2 ways to refer to this pipe in other repositories. In your bitbucket-pipelines.yml file you can:

  1. refer to the docker image directly 
    pipe: docker://acct/repo:tag (where acct/repo is the Dockerhub account and repo) Note: You must always specify docker:// when referencing public Docker images.

  2. refer to a pipe repo hosted on Bitbucket
    pipe: <BB_acct>/repo:tag (where BB_acct/repo is your Bitbucket account and pipe repo)
    This version of the reference looks in your pipe.yml file to find out where to get the image from.

Advanced tips and secrets

  • You don't have to use Dockerhub if you have another service to host Docker images, but the image does have to be public.

  • If you install Docker on your local machine, you can test that everything works before uploading.

  • If there are variables that you rely on, make sure your script tests that they have been provided and are valid.

  • Try to keep mandatory variables to a minimum, and supply default values where you can!

  • Make sure you have a process in place so you can quickly and efficiently provide pipe support, in case something unexpected happens. You'll get feedback quicker and your pipe users will have a better experience.

  • If you are creating a file you wish to share with other pipes, or use in your main pipeline, you will need to edit the permissions for that file. A common way to do this is to put umask 000 at the beginning of your pipe script. If you'd prefer to modify permissions for just one file, you can use the chown or chmod commands.

  • Check out more advanced pipe writing techniques, including the best way to quote things, and passing array variables.
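The file-permissions tip above can be sketched as follows; output.txt is just an example file name:

```shell
#!/usr/bin/env bash
# umask 000 makes every file the pipe creates world-readable and writable.
umask 000
echo "report" > output.txt   # created with mode 666

# Or adjust a single existing file instead of changing the umask:
chmod 666 output.txt
```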

Contributing an official pipe

You can submit your pipe to be considered for our official list. Make sure you've built it as a complete pipe if you do!

Have a look at the full details of how to contribute a pipe.

Still need help?

The Atlassian Community is here for you.