Write a pipe for Bitbucket Pipelines
A pipe is a custom Docker image for a container, which contains a script to perform a task. There are many pipes already available, but you can write your own too.
A pipe is made up of a few different files:
A script, or binary, the code that performs the task.
A Dockerfile, which tells us how to build the Docker container that runs your script.
(Optional) metadata and readme docs, to make your pipe easy to understand.
(Optional) some CI/CD configuration so that you can easily update the pipe.
These files are stored in a single place, usually a pipe repository.
Why write a pipe?
There are a few reasons to write a pipe:
to do the same action in several steps of your pipeline
to run similar tasks in multiple repositories
if you are a vendor, to make your software or service easier to use in pipelines
to perform an action which needs dependencies that your main pipeline doesn't have.
By making a pipe you simplify the configuration of pipelines, and make re-use easy and efficient.
The possibilities for pipes are endless, and there are already pipes for many common tasks.
How to write a pipe
Depending on what you need it for, you can make a simple pipe or a complete pipe. They work in the same way, the only difference is how much detail and configuration you add.
Simple pipe:
Get going fast
Updating later on can be more complex
Private use only

Complete pipe:
CI/CD to automate versioning
Good documentation for others or your future self
Eligible to be added to our marketplace
Needs the 3 files already mentioned, plus pipe metadata, a readme, and CI/CD configuration
In this guide, we'll make a simple pipe first, and then show you the steps to make it a complete pipe. We'll build the container and upload it to Dockerhub, so make sure you have an account there (it's free to set up!).
Pipes only work with a public image, such as one hosted on Dockerhub.
If you are skilled in Docker and want to make a pipe only for private use, you can just make your own Docker container containing all the files required.
Step 1 - Create or import a repository
First, we need a place to put your files, so we start by creating a repository.
There are 3 main ways to make a pipe repository:
create an empty repository (don't worry, we'll guide you as to what to put in it)
import one of our example repositories
use our generator to create a local repository (recommended only for complete pipes)
1. How to create a new repo on Bitbucket
From Bitbucket, click the + icon in the global sidebar and select Repository.
Bitbucket displays the Create a new repository page. Take some time to review the dialog's contents. With the exception of the Repository type, you can later change everything you enter on this page.
2. How to import a repo
Open up http://bitbucket.org and make sure you are logged in
Click the + on the left sidebar then, under the Import heading, select Repository
Enter the example repo URL (for a bash complete pipe - https://bitbucket.org/atlassian/demo-pipe-bash/src/master/)
Give your pipe repo a name (my first bash pipe)
If you already know you want to make a complete pipe, you can use our generator to create the framework, and partially fill out the files.
3. How to get, and use, the complete pipe generator
If you don't have it already, install node.js on your local machine
Our generator runs using Yeoman, so we need to install that, and our pipe generator:
Then you are ready to go! Run our generator and follow the prompts:
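The install and run commands look something like this (the generator package name is an assumption; check the generator's documentation for the exact name):

```shell
# Install Yeoman and the pipe generator globally
# (package name assumed; verify it before use)
npm install -g yo generator-bitbucket-pipe

# Run the generator and answer the prompts
yo bitbucket-pipe
```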
You'll be asked:
Your Bitbucket account name
What you want to call your repository (we generate it locally)
Your Dockerhub username
What title you'd like to give your pipe
Who will be maintaining the pipe
If there is an SVG logo hosted elsewhere you'd like to use for your pipe, its URL (Otherwise we default to a location in your repo)
If there are any metadata tags you'd like to use to describe your pipe, for example 'deployment' or 'kubernetes'
With this info we'll make the files you need, and fill out as much as we can automatically.
Step 2 - Create your script or binary
This is the main part of your pipe, which runs when your pipe is called. It contains all the commands and logic to perform the pipe task. Use any coding language of your choice to make a script, or binary file.
A simple script might look like:
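For example, in bash (a minimal sketch; the NAME variable is illustrative):

```shell
#!/usr/bin/env bash
# A minimal pipe script sketch: greet whoever is named in the NAME variable.
set -e

# Use a default if the pipe user doesn't provide NAME
NAME=${NAME:="World"}

echo "Hello, ${NAME}!"
```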
But you'll probably want to do more than that! A good next step might be using variables.
You can use any of the default variables available to the pipeline step that calls the pipe (see this list of default variables), and any pipe variables that are provided when the pipe is called. You can only use user-defined variables (account and team, repository, or deployment) if you list them in your pipe.yml (more on this later).
You can make some variables mandatory, and give others default values that you've specified. There are 2 ways to specify a default value: here we'll show defining it in your script, but later on we'll show you the more powerful way, using a pipe.yml file. We show both in our complete pipe for bash example:
Example pipe.sh with mandatory and suggested variables
In the complete repos we keep the scripts in the pipe directory.
In the script below, we can use 3 variables, but keep things simple by setting sensible defaults for 2 of them. That way the end user of the pipe only has to provide $NAME to get the pipe working.
It's also great to have a debug mode to help with troubleshooting. In this example we just print the commands to the terminal, but you could add all sorts of extra detail here, to help a user track down the source of a problem.
Check out the pipe.yml file for a more powerful way to do this, later in this guide.
To make life easiest for the end user of the pipe, we recommend keeping mandatory variables to a minimum. If there are sensible defaults for a variable, provide those in the script and the end user can choose to override them if needed.
We also recommend taking the time to add colors to your log output, and provide clickable links to any external output.
As we are going to be running the script, it needs to be executable (so in your terminal you might run: chmod +x pipe.sh ) . If you are using our example repositories, this is done for you already.
Step 3 - Configure the Dockerfile
To run the script you just wrote, we need to put it into a Docker container. The Dockerfile defines the details of how this Docker container should be built. At the most basic it needs to have values for FROM, COPY, and ENTRYPOINT.
More details on FROM, COPY, and ENTRYPOINT
FROM <image>:<tag>
Tells us which image to use as a base for the Docker container.
Example: To use Alpine Linux, version 3.8, use: alpine:3.8
For more information on working with images see Docker's best practices documentation.
COPY <source> <destination>
Copies your scripts (or binaries) into the container.
Example: To copy the contents of the pipe directory into the root of the container use: COPY pipe /
Only copy the files that your pipe needs to run, to keep your pipe as quick as possible.
ENTRYPOINT ["<path to script/binary>"]
Path to the script or binary to run when the container is made.
Example: To run the script, pipe.sh, that you just copied into the root of the container: ENTRYPOINT ["/pipe.sh"]
Tip: remember that the path is where it is in the container not in the original pipe repo
The complete pipe for bash example contains a Dockerfile already in its root directory:
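A Dockerfile matching the description below would look roughly like this (a sketch; see the demo repo for the exact file):

```Dockerfile
FROM alpine:3.8

RUN apk update && apk add bash

COPY pipe /
ENTRYPOINT ["/pipe.sh"]
```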
This means the container will
use an Alpine Linux 3.8 image
run an update command and install bash into the container
have the contents of the pipe directory copied into its root directory
start running pipe.sh
You can edit these to suit your needs. Want Alpine Linux 3.9? No problem, just change the FROM command to read FROM alpine:3.9
Want to install more packages into Linux? Add more to the RUN command. Before you do, though, have a look in Dockerhub to see if there is an image that already has those packages installed. It will save you precious time!
Sometimes getting the script and the container exactly how you want it can take a few iterations. With this in mind we recommend installing Docker locally on your machine, so you can test building your pipe container and running it, without using build minutes.
More detail on using docker locally
Once Docker is installed first you need to build your image:
-t : Tags the image being built with the account, image name, and tag provided.
. : Tells Docker to use the current directory as the root of the Docker build (it will automatically look for a file called Dockerfile in your current directory).
Then you can run your freshly built image, passing variables to it using -e:
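For example (account, image name, tag, and variables are placeholders):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t your-dockerhub-account/your-pipe:1.0 .

# Run it locally, passing the variables your script expects with -e
docker run -e NAME="local test" your-dockerhub-account/your-pipe:1.0
```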
Learn more about running Docker locally.
Step 4 - Make a basic pipeline to upload your pipe container to Dockerhub
The final step in making a simple pipe is to build your container, and upload it to Dockerhub.
Using a pipeline to do that isn't strictly necessary, but it makes future updates easier, and automatically updates the version number so you can quickly make sure you are using the latest version.
The example bitbucket-pipelines.yml below builds and pushes a new version of your container to Dockerhub whenever you commit. So if you update which image you want to use for your Docker container, or make some changes to your script, this will automatically make sure the version on Dockerhub is up to date. Make sure you have a Dockerhub account, then all you need to do is add 2 variables to your pipe repository, DOCKERHUB_USERNAME and DOCKERHUB_PASSWORD, and enable pipelines.
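A sketch of such a pipeline (the image name and version scheme are placeholders, not the exact file from our repos):

```yaml
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        name: Build and push to Dockerhub
        script:
          # Replace with your own Dockerhub account/repo
          - IMAGE="your-dockerhub-account/your-pipe"
          - VERSION="1.0.${BITBUCKET_BUILD_NUMBER}"
          - echo "${DOCKERHUB_PASSWORD}" | docker login --username "${DOCKERHUB_USERNAME}" --password-stdin
          - docker build -t "${IMAGE}:${VERSION}" .
          - docker push "${IMAGE}:${VERSION}"
        services:
          - docker
```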
Congratulations, you've made a simple pipe!
And that's all you need for a simple pipe! You can now refer to your pipe in a step using the syntax:
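For example (account, image name, tag, and variable are placeholders):

```yaml
script:
  - pipe: docker://your-dockerhub-account/your-pipe:1.0
    variables:
      NAME: "foo"
```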
The next steps of pipe creation are designed to make your life easier in the long run, and make it simpler for other people to use your pipe. They are required for anyone who wants to make an officially supported pipe.
If you are making a complete pipe, you'll also need to set up:
pipe metadata - details of your pipe, for example naming the maintainer
a readme - details on how to use your pipe, for example, the variables you need
automated testing - to make sure changes haven't broken anything
semantic versioning - to make it clear which version of your pipe to use
debug logging - to make it easier for end users to troubleshoot if something goes wrong
Step 5 - Make pipe.yml - the metadata file
The pipe.yml file provides handy information to categorize the pipe.
name: The name or title of the pipe, as we should display it.
image: The pipe Docker image you created on Dockerhub, in the form: account/repo:tag
category: Category of the pipe, one of the supported categories.
description: A short summary describing what the pipe does.
repository: Bitbucket pipe repository absolute URL. Example: https://bitbucket.org/atlassian/demo-pipe-bash
maintainer: Object that contains name, website and email.
vendor: Object that contains name, website and email. For vendor pipes this field is mandatory.
tags: Keywords to help users find and categorize the pipe. Options could include the type of function your pipe performs (deploy, notify, test), your product or company name, or specific tools you are integrating with.
A pipe.yml file might look like this:
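Here's a sketch (all values are placeholders; the field set follows the descriptions above):

```yaml
name: Demo Pipe Bash
image: your-dockerhub-account/demo-pipe-bash:1.0
category: Utilities
description: A demo pipe that shows how to write a bash pipe.
repository: https://bitbucket.org/your-account/demo-pipe-bash
maintainer:
  name: Your Team
  website: https://example.com
  email: team@example.com
tags:
  - bash
  - helloworld
```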
Step 6 - Write README.md - your readme file
Your readme is how your users know how to use your pipe. We can display this in Bitbucket, so it needs to be written with markdown, in a specific format, with the headings listed in order:
The pipe name
Short summary of what the pipe does - we recommend using the format "[action verb] (to) [destination | vendor | suite]", for example "Deploy to Dockerhub" or "Notify Opsgenie"
YAML Definition - what someone needs to copy and paste into their pipeline to use your pipe
Variables - a list of the variables your pipe needs, making it clear if they are mandatory or optional
Details - a detailed explanation of usage, configuration, setup, etc.
Prerequisites - anything that people need to have in place before using the pipe, for example: installed packages, accounts on third-party systems, etc.
Examples - code snippets with example variables; it's recommended to cover at least the most common configurations
Support - details on how people can contact you for questions and support
Check out the following example readme...
Here's the readme from our slack notification pipe.
Bitbucket Pipelines Pipe: Slack Notify
Sends a notification to Slack.
Add the following snippet to the script section of your bitbucket-pipelines.yml file:
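For the Slack Notify pipe, that snippet looks something like this (the version tag shown is illustrative; check the pipe's repository for the current one):

```yaml
script:
  - pipe: atlassian/slack-notify:0.2.0
    variables:
      WEBHOOK_URL: '<string>'
      MESSAGE: '<string>'
      # DEBUG: '<boolean>' # Optional
```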
WEBHOOK_URL (*): Incoming Webhook URL. It is recommended to use a secure repository variable.
DEBUG: Turn on extra debug information. Default: false.
(*) = required variable.
To send notifications to Slack, you need an Incoming Webhook URL. You can follow the instructions here to create one.
If you’d like help with this pipe, or you have an issue or feature request, let us know on Community.
If you’re reporting an issue, please include:
the version of the pipe
relevant logs and error messages
steps to reproduce
Copyright (c) 2018 Atlassian and others. Apache 2.0 licensed, see LICENSE.txt file.
Step 7 - Write your tests
It's good practice to add automated integration testing to your pipe, so before you send it out into the world you can make sure it does what you expect it to do. For example, you could test how it deals with variables that are unexpected, or that it can successfully connect to any third-party services it needs to. For any pipes that are going to become officially supported, it's essential that they are tested regularly.
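For a bash pipe, a minimal integration test might run the container locally and assert on its output (the image name and variable are placeholders):

```shell
#!/usr/bin/env bash
# Integration test sketch: run the pipe image locally and check its output.
set -e

output=$(docker run -e NAME="tester" your-dockerhub-account/your-pipe:test)

if [[ "${output}" == *"tester"* ]]; then
  echo "Test passed"
else
  echo "Test failed, output was: ${output}" >&2
  exit 1
fi
```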
Step 8 - Make your pipe easy to debug
In your script we recommend writing in a debug mode which will output extra information about what is going on in the pipe.
We also recommend that you make any links shown in the logs clickable, and that you use colors in your output to highlight key information.
How you do this will depend on the language you are using to write your script, but you can see an example of this in the common.sh file in our bash demo repo.
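For a bash pipe, a helper along these lines works (the function name is illustrative, not the exact common.sh code):

```shell
# Sketch of a debug toggle, similar in spirit to the common.sh helper
# in the bash demo repo.
enable_debug() {
  if [ "${DEBUG}" = "true" ]; then
    echo "Enabling debug mode."
    set -x   # print every command before it runs
  fi
}
```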
Step 9 - Set up CI/CD to automate testing and updates
We also recommend using CI/CD to:
automatically upload it to Dockerhub (or a site of your choice)
automatically update the version number in your README and pipe.yml
Once you have your bitbucket-pipelines.yml file configured, you can enable pipelines: your repo's Settings > Pipelines section > Settings > Enable pipelines
An example CI/CD workflow
The workflow we recommend is to do all your pipe development work on a feature branch. Set up your pipeline so that any commits on a feature branch will run the tests for you.
If the tests pass you can then merge to your master branch with confidence. This merge triggers a master branch specific pipeline which updates the version of your pipe (we'll talk about how to do that in the next step) and uploads your image to Docker.
Here's how we set up those pipelines in our bash demo repo:
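A sketch of that two-branch setup (script paths are placeholders, not the exact demo repo files):

```yaml
pipelines:
  default:
    - step:
        name: Run tests
        script:
          - ./test/test.sh        # illustrative path to your test script
        services:
          - docker
  branches:
    master:
      - step:
          name: Test, version, and push
          script:
            - ./test/test.sh
            - ./ci-scripts/bump-version.sh   # illustrative: semversioner release
            - ./ci-scripts/docker-release.sh # illustrative: build and push image
          services:
            - docker
```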
Step 10 - Set up semantic versioning
We encourage you to use semantic versioning (semver) for your pipe, so that it's clear what version is the latest, which version people should use, and if there is any chance of an update breaking things. The version has 3 parts: <major>.<minor>.<patch>, for example 6.5.2
You increase the version number depending on the changes you made. Update:
major: if you make changes that could break existing users, for example, changing the name of a mandatory variable
minor: if you add functionality in a backwards-compatible manner
patch: if you make backwards-compatible bug fixes
There are a few places you would want to update when you change the version, so to simplify that there is a tool called semversioner (https://pypi.org/project/semversioner/). This will generate a new entry in your changelog, update the version number, and commit back to the repository.
More detail on using semversioner
1. Install semversioner on your local machine:
pip install semversioner
2. When you are developing, the changes you are integrating to master will need one or more changeset files. Use semversioner to generate the changeset.
semversioner add-change --type patch --description "Fix security vulnerability."
3. Commit the changeset files generated in the .changes/next-release/ folder with your code. For example:
git add .
git commit -m "BP-234 FIX security issue with authentication"
git push origin
4. If you've imported our demo repo, then that's it! Merge to master and Bitbucket Pipelines will do the rest:
Generate a new version number based on the changeset types major, minor, patch.
Generate a new file in .changes directory with all the changes for this specific version.
(Re)generate the CHANGELOG.md file.
Bump the version number in the README.md example and pipe.yml metadata.
Commit and push back to the repository.
Tag your commit with the new version number.
We achieve this by having the pipeline call 3 scripts that use semversioner and the variables available to the pipe repo. Have a look at our example CI scripts.
Step 11 - Use your new complete pipe!
As with the simple version of the pipe, the last step is to build and push your container to Dockerhub.
There are 2 ways to refer to this pipe in other repositories. In your bitbucket-pipelines.yml file you can:
refer to the docker image directly
pipe: docker://acct/repo:tag (where acct/repo is the Dockerhub account and repo)
refer to a pipe repo hosted on Bitbucket
pipe: <BB_acct>/repo:tag (where BB_acct/repo is your Bitbucket account and pipe repo)
This version of the reference looks in your pipe.yml file to find out where to get the image from.
Advanced tips and secrets
You don't have to use Dockerhub, if you have another service to host Docker images, but the image does have to be public.
If you install Docker to your local machine you can test out everything works well, before uploading.
If there are variables that you rely on, make sure your script tests that they have been provided and are valid.
Try and keep mandatory variables to a minimum, supply default values if you can!
Make sure you have a process in place so you can quickly and efficiently provide pipe support, in case something unexpected happens. You'll get feedback quicker and your pipe users will have a better experience.
If you are creating a file you wish to share with other pipes, or use in your main pipeline, you will need to edit the permissions for that file. A common way to do this would be to put umask 000 at the beginning of your pipe script. If you'd prefer to just modify values for one file you could also use the chown or chmod command.
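For example (a sketch; file names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: make a file created by the pipe writable by later steps or other pipes.
set -e

# umask 000 means files created from here on get permissive modes (666 for files)
umask 000
tmpdir=$(mktemp -d)
echo "shared result" > "${tmpdir}/output.txt"

# Alternatively, relax permissions on a single file only:
chmod 666 "${tmpdir}/output.txt"
```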
Check out more advanced pipe writing techniques, including the best way to quote things, and passing array variables.
Contributing an official pipe
You can submit your pipe to be considered for our official list - make sure you've done it in the complete way if you do!
Have a look at the full details of how to contribute a pipe.