Bitbucket Pipelines allows you to run multiple Docker containers from your build pipeline. You'll want to start additional containers if your pipeline requires extra services when testing and operating your application. These extra services may include data stores, code analytics tools, and stub web services.
You define these additional services (and other resources) in the definitions section of the bitbucket-pipelines.yml file. These services can then be referenced in the configuration of any pipeline that needs them.
When a pipeline runs, services referenced in a step of your bitbucket-pipelines.yml will be scheduled to run with your pipeline step. These services share a network adapter with your build container and all open their ports on localhost, so no port mapping or hostnames are required. For example, if you were using Postgres, your tests just connect to port 5432 on localhost. The service logs are also visible in the Pipelines UI if you need to debug anything.
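For instance, here is a minimal sketch of a step that verifies it can reach a Postgres service over localhost. It assumes the build image provides the psql client and that a postgres service is defined in the definitions section:
pipelines:
  default:
    - step:
        script:
          # The service shares the build container's network, so no port mapping is needed
          - psql -h localhost -p 5432 -U postgres -c 'SELECT 1;'
        services:
          - postgres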
Pipelines enforces a maximum of 5 service containers per build step. See the sections below for how memory is allocated to service containers.
In the following tutorial you’ll learn how to define a service and how to use it in a pipeline.
Services in Pipelines have the following limitations:
Maximum of 5 services for a step
Memory limits as described below
No REST API for accessing services and logs under pipeline results
No mechanism to wait for service startup (a common workaround is sketched after this list)
If you want to run a larger number of small services, use docker run or docker-compose
Port 29418 can’t be used
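Since Pipelines does not wait for services to finish starting, a common workaround is to poll the service's port from your script before running tests. Here is a minimal sketch, assuming nc (netcat) is available in the build image and a postgres service is defined:
default:
  - step:
      script:
        # Retry for up to 30 seconds until the service accepts TCP connections
        - for i in $(seq 1 30); do nc -z localhost 5432 && break; sleep 1; done
        - npm test
      services:
        - postgres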
Services are defined in the definitions section of the bitbucket-pipelines.yml file.
For example, the following defines two services: one named redis that uses the library image redis from Docker Hub (version 3.2), and another named mysql that uses the official Docker Hub MySQL image (version 5.7).
The variables section allows you to define variables, either as literal values or as existing Pipelines variables.
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
Each service definition can also define a custom memory limit for the service container, by using the memory keyword (in megabytes).
The relevant memory limits and default allocations are as follows:
Regular steps have 4096 MB of memory in total, large build steps (which you can define using size: 2x) have 8192 MB in total.
The total memory allocated to the build step is distributed to the build container and any service containers defined in the build step. The build container executes the scripts defined in the build step. The service containers run the services, if there are any.
The remaining memory after the allocation to the service containers is allocated to the build container (see the examples below). If no services are defined, all of the memory allocated to the build step goes to the build container.
The build container requires a minimum of 1024 MB of memory. This build container memory covers your build process and some Pipelines overhead (agent container, logging, etc.). This leaves a maximum of 3072 MB (1x steps) or 7168 MB (2x steps) to be allocated to service containers.
Service containers get 1024 MB of memory by default, but can be configured to use between 128 MB and the step maximum (3072/7168 MB).
The Docker service used for Docker operations in Pipelines has a default memory of 1024 MB, but this can be adjusted to any value between 128 MB and the step maximum (3072/7168 MB) by changing the memory setting on the built-in docker service in the definitions section. Note: all pipes require the Docker service, even if it is not explicitly specified.
In the example below, the build container is allocated 2048 MB of memory from the total memory available to the build step (4096 MB):
The Docker service is allocated 512 MB of memory
The Redis service is allocated 512 MB of memory
The MySQL service is allocated the default memory of 1024 MB
default:
  # "Build step is allocated 4096 MB of memory"
  - step:
      services:
        - redis
        - mysql
        - docker
      script:
        - echo "Build container is allocated 2048 MB of memory"
        - echo "Services are allocated the memory configured: docker 512 MB, redis 512 MB, mysql 1024 MB"
definitions:
  services:
    redis:
      image: redis:3.2
      memory: 512
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1 GB to 512 MB
    mysql:
      image: mysql:5.7
      # memory: 1024  # default value
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
In the example below, no services are defined, so the build container is allocated all the memory available to the build step (4096 MB):
default:
  # "Build step is allocated 4096 MB of memory"
  - step:
      script:
        - echo "Build container allocated 4096 MB of memory"
In the example below, the pipe is treated as a service container and is allocated the default memory of 1024 MB. The build container is allocated the remaining 3072 MB of the total memory available to the build step (4096 MB):
default:
  # "Build step is allocated 4096 MB of memory"
  - step:
      script:
        - echo "Build container is allocated 3072 MB of memory"
        - echo "The pipe uses the Docker service, which uses 1024 MB of memory"
        - pipe: atlassian/scp-deploy:1.4.1
          variables:
            USER: 'ec2-user'
            SERVER: '127.0.0.1'
            REMOTE_PATH: '/var/www/build/'
            LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
If a service has been defined in the 'definitions' section of the bitbucket-pipelines.yml file, you can reference that service in any of your pipeline steps.
For example, the following causes the redis service to run with the step:
default:
  - step:
      image: node
      script:
        - npm install
        - npm test
      services:
        - redis
You can define a service that uses an image with restricted access (for example, a private image that requires authentication), as in the following example:
services:
  redis:
    image:
      name: redis:3.2
      username: username@organisation.com
      password: $DOCKER_PASSWORD
For a more complete example of using Docker images from different registries and in different formats, see Use Docker images as build environments.
This example bitbucket-pipelines.yml file shows both the definition of a service and its use in a pipeline step. A breakdown of how it works is presented below.
pipelines:
  branches:
    main:
      - step:
          image: redis
          script:
            - redis-cli -h localhost ping
          services:
            - redis
            - mysql
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
When testing with a database, we recommend that you use service containers to run database services in a linked container. Docker has a number of official images of popular databases on Docker Hub.
This page has example bitbucket-pipelines.yml files showing how to connect to the following database types.
You can check your bitbucket-pipelines.yml file with our online validator.
See also Use services and databases in Bitbucket Pipelines.
Alternatively, you can use a Docker image that contains the database you need – see Use a Docker image configured with a database on this page.
Using the Mongo image on Docker Hub.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mongo
definitions:
  services:
    mongo:
      image: mongo
MongoDB will be available on 127.0.0.1:27017 without authentication. As you connect to a database, MongoDB will create it for you.
Note that MongoDB's default configuration only listens for connections on IPv4, whereas some platforms (like Ruby) default to connecting via IPv6 if your Mongo connection is configured to use localhost. This is why we recommend connecting on 127.0.0.1 rather than localhost.
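As an illustration, here is a rough sketch of a connectivity check you could add to the step's script. It assumes the mongodb driver is already a dependency of your project:
default:
  - step:
      script:
        - npm install
        # Connect over IPv4 explicitly to avoid the IPv6 issue described above
        - node -e "require('mongodb').MongoClient.connect('mongodb://127.0.0.1:27017/pipelines', function (err) { if (err) throw err; console.log('connected'); process.exit(0); })"
        - npm test
      services:
        - mongo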
Using the MySQL image on Docker Hub with a dedicated test user.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mysql
definitions:
  services:
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: 'pipelines'
        MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
        MYSQL_USER: 'test_user'
        MYSQL_PASSWORD: 'test_user_password'
If you use the example above, MySQL (version 5.7) will be available on:
Host name: 127.0.0.1 (avoid using localhost, as some clients will attempt to connect via a local "Unix socket", which will not work in Pipelines)
Port: 3306
Default database: pipelines
User: test_user, password: test_user_password. (The root user of MySQL will not be accessible.)
You will need to populate the pipelines database with your tables and schema. If you need to configure the underlying database engine further, refer to the official Docker Hub image for details.
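For example, here is a sketch of loading a schema before the tests run. It assumes the build image provides the mysql client and that the repository contains a schema.sql file:
default:
  - step:
      script:
        # Load tables into the 'pipelines' database created by the service
        - mysql -h 127.0.0.1 -u test_user -ptest_user_password pipelines < schema.sql
        - npm test
      services:
        - mysql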
Using the MySQL image on Docker Hub with the root user.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mysql
definitions:
  services:
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: 'pipelines'
        MYSQL_ROOT_PASSWORD: 'let_me_in'
If you use the example above, MySQL (version 5.7) will be available on:
Host name: 127.0.0.1 (avoid using localhost, as some clients will attempt to connect via a local "Unix socket", which will not work in Pipelines)
Port: 3306
Default database: pipelines
User: root, password: let_me_in
You will need to populate the pipelines database with your tables and schema. If you need to configure the underlying database engine further, refer to the official Docker Hub image for details.
Using the Postgres image on Docker Hub with its default configuration.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - postgres
definitions:
  services:
    postgres:
      image: postgres
PostgreSQL will be available on localhost:5432 with the default database 'postgres', user 'postgres', and no password. You will need to populate the postgres database with your tables and schema, or create a second database for your use. If you need to configure the underlying database engine further, please refer to the official Docker Hub image for details.
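For example, here is a sketch of creating a separate database before the tests run, assuming the build image provides the psql client:
default:
  - step:
      script:
        # Create a dedicated database rather than relying on the default 'postgres' one
        - psql -h localhost -U postgres -c 'CREATE DATABASE my_test_db;'
        - npm test
      services:
        - postgres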
Using the Postgres image on Docker Hub with a custom database, user, and password.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - postgres
definitions:
  services:
    postgres:
      image: postgres
      variables:
        POSTGRES_DB: 'pipelines'
        POSTGRES_USER: 'test_user'
        POSTGRES_PASSWORD: 'test_user_password'
PostgreSQL will be available on localhost:5432 with a default database named 'pipelines', user 'test_user', and password 'test_user_password'. You will need to populate the pipelines database with your tables and schema. If you need to configure the underlying database engine further, please refer to the official Docker Hub image for details.
Using the Redis image on Docker Hub.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - redis
definitions:
  services:
    redis:
      image: redis
Redis will be available on localhost:6379 without authentication.
Using the Cassandra image on Docker Hub.
image: node:10.15.0
pipelines:
  default:
    - step:
        script:
          - npm install
          - sleep 10 # wait for cassandra
          - npm test
        services:
          - cassandra
definitions:
  services:
    cassandra:
      image: cassandra
      variables:
        MAX_HEAP_SIZE: '512M' # Need to restrict the heap size or else Cassandra will OOM
        HEAP_NEWSIZE: '128M'
Cassandra will be available on localhost:9042.
As an alternative to running a separate container for the database (which is our recommended approach), you can use a Docker image that already has the database installed. The following images for Node and Ruby contain databases, and can be extended or modified for other languages and databases.
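As a sketch of that approach (the image name below is hypothetical, and how the bundled database is started depends on how the image was built):
# Hypothetical image with Node and MongoDB preinstalled; build and publish your own
image: my-dockerhub-account/node-with-mongodb:latest
pipelines:
  default:
    - step:
        script:
          # Start the bundled database in the background before running tests
          - mongod --fork --logpath /var/log/mongodb.log
          - npm install
          - npm test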
You can also use a custom name for the Docker service by defining a service under your custom name (for example, 'docker-custom') and setting its 'type' to 'docker'; see the example below.
Docker service with a custom name:
definitions:
  services:
    docker-custom:
      type: docker
      image: docker:dind
pipelines:
  default:
    - step:
        runs-on:
          - 'self.hosted'
          - 'my.custom.label'
        services:
          - docker-custom
        script:
          - docker info