Bitbucket Pipelines allows you to run multiple Docker containers from your build pipeline. Start additional containers when your pipeline requires extra services for testing and operating your application. These extra services may include data stores, code analytics tools, and stub web services.
You define these additional services (and other resources) in the definitions section of the bitbucket-pipelines.yml file. These services can then be referenced in the configuration of any pipeline that needs them.
When a pipeline runs, services referenced in a step of your bitbucket-pipelines.yml will be scheduled to run alongside your pipeline step. These services share a network adapter with your build container and all open their ports on localhost, so no port mapping or hostnames are required. For example, if you were using Postgres, your tests would simply connect to port 5432 on localhost. The service logs are also visible in the Pipelines UI if you need to debug anything.
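For instance, here is a minimal sketch of a step whose script connects to a Postgres service over localhost. This is only an illustration: it assumes a postgres service is defined in the definitions section (as shown later on this page) and that the build image includes the psql client.

pipelines:
  default:
    - step:
        services:
          - postgres
        script:
          # The service opens port 5432 on localhost; no hostname or port mapping is needed.
          - psql -h localhost -p 5432 -U postgres -c 'SELECT 1;'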
Pipelines enforces a maximum of 5 service containers per build step. See sections below for how memory is allocated to service containers.
In the following tutorial you’ll learn how to define a service and how to use it in a pipeline.
Services in Pipelines have the following limitations:
Maximum of 5 services for a step
Memory limits as described below
No REST API for accessing services and logs under pipeline results
No mechanism to wait for service startup (a script-level workaround is sketched after this list)
If you want to run a larger number of small services, use docker run or docker-compose.
Port 29418 can’t be used.
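Because there is no built-in way to wait for a service to finish starting, a common workaround is to poll the service from your script before running tests. The following is only a sketch: it assumes a mysql service defined as in the examples later on this page, and a build image that includes the mysql client.

pipelines:
  default:
    - step:
        services:
          - mysql
        script:
          # Poll MySQL for up to ~30 seconds before running the tests.
          - for i in $(seq 1 30); do mysql -h 127.0.0.1 -u test_user -ptest_user_password -e 'SELECT 1' && break; sleep 1; done
          - npm test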
Services are defined in the definitions section of the bitbucket-pipelines.yml file.
For example, the following defines two services: one named redis that uses the library image redis from Docker Hub (version 3.2), and another named mysql that uses the official Docker Hub MySQL image (version 5.7).
The variables section allows you to define variables, using either literal values or existing Pipelines variables.
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
Each service definition can also define a custom memory limit for the service container, by using the memory keyword (in megabytes).
The relevant memory limits and default allocations are as follows:
Regular steps have 4096 MB of memory in total, large build steps (which you can define using size: 2x) have 8192 MB in total.
The build container is given 1024 MB of the total memory, which covers your build process and some Pipelines overheads (agent container, logging, etc).
The total memory for services on each pipeline step must not exceed the remaining memory, which is 3072 MB for 1x steps and 7168 MB for 2x steps.
Service containers get 1024 MB of memory by default, but can be configured to use between 128 MB and the step maximum (3072/7168 MB).
The Docker-in-Docker daemon used for Docker operations in Pipelines is treated as a service container, and so has a default memory limit of 1024 MB. This can also be adjusted to any value between 128 MB and 3072/7168 MB by changing the memory setting on the built-in docker service in the definitions section.
In the example below, the build container has a memory limit of 2048 MB:
Docker has 512 MB
Redis has 512 MB
MySQL uses the default memory limit (1024 MB)
pipelines:
  default:
    - step:
        services:
          - redis
          - mysql
          - docker
        script:
          - echo "This step is only allowed to consume 2048 MB of memory"
          - echo "Services are consuming the rest. docker 512 MB, redis 512 MB, mysql 1024 MB"

definitions:
  services:
    redis:
      image: redis:3.2
      memory: 512
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1 GB to 512 MB
    mysql:
      image: mysql:5.7
      # memory: 1024  # default value
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
If a service has been defined in the 'definitions' section of the bitbucket-pipelines.yml file, you can reference that service in any of your pipeline steps.
For example, the following causes the redis service to run with the step:
pipelines:
  default:
    - step:
        image: node
        script:
          - npm install
          - npm test
        services:
          - redis
You can define a service that uses an image with restricted access by supplying credentials, as in the following example:
services:
  redis:
    image:
      name: redis:3.2
      username: username@organisation.com
      password: $DOCKER_PASSWORD
For a more complete example of using Docker images from different registries and in different formats, see Use Docker images as build environments.
This example bitbucket-pipelines.yml file shows both the definition of a service and its use in a pipeline step. A breakdown of how it works is presented below.
pipelines:
  branches:
    main:
      - step:
          image: redis
          script:
            - redis-cli -h localhost ping
          services:
            - redis
            - mysql

definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password

Here is how it works: a push to the main branch triggers the step, whose build container uses the redis image (which includes the redis-cli client). The redis and mysql services defined in the definitions section are scheduled to run alongside the step, and because they share the build container's network adapter, redis-cli reaches the Redis service simply by connecting to localhost.
When testing with a database, we recommend that you use service containers to run database services in a linked container. Docker has a number of official images of popular databases on Docker Hub.
This page has example bitbucket-pipelines.yml files showing how to connect to the following database types.
You can check your bitbucket-pipelines.yml file with our online validator.
See also Use services and databases in Bitbucket Pipelines.
Alternatively, you can use a Docker image that contains the database you need – see Use a Docker image configured with a database on this page.
Using the Mongo image on Docker Hub.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mongo

definitions:
  services:
    mongo:
      image: mongo
MongoDB will be available on 127.0.0.1:27017 without authentication. As you connect to a database, MongoDB will create it for you.
Note that MongoDB's default configuration only listens for connections on IPv4, whereas some platforms (like Ruby) default to connecting via IPv6 if your Mongo connection is configured to use localhost. This is why we recommend connecting on 127.0.0.1 rather than localhost.
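As an illustration, here is a hypothetical connectivity check over IPv4; it assumes the build image includes the mongo shell, which the node image does not provide by default.

pipelines:
  default:
    - step:
        services:
          - mongo
        script:
          # Connect via 127.0.0.1 explicitly to avoid the IPv6 localhost pitfall described above.
          - mongo --host 127.0.0.1 --eval 'db.runCommand({ ping: 1 })'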
Using the MySQL image on Docker Hub.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mysql

definitions:
  services:
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: 'pipelines'
        MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
        MYSQL_USER: 'test_user'
        MYSQL_PASSWORD: 'test_user_password'
If you use the example above, MySQL (version 5.7) will be available on:
Host name: 127.0.0.1 (avoid using localhost, as some clients will attempt to connect via a local "Unix socket", which will not work in Pipelines)
Port: 3306
Default database: pipelines
User: test_user, password: test_user_password. (The root user of MySQL will not be accessible.)
You will need to populate the pipelines database with your tables and schema. If you need to configure the underlying database engine further, refer to the official Docker Hub image for details.
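For example, a schema could be loaded at the start of the step's script. This is only a sketch: schema.sql is a hypothetical file in your repository, and it assumes the build image includes the mysql client (the node image does not by default).

pipelines:
  default:
    - step:
        services:
          - mysql
        script:
          # Load a hypothetical schema file into the 'pipelines' database before testing.
          - mysql -h 127.0.0.1 -u test_user -ptest_user_password pipelines < schema.sql
          - npm test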
Using the MySQL image on Docker Hub, connecting as the root user.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mysql

definitions:
  services:
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: 'pipelines'
        MYSQL_ROOT_PASSWORD: 'let_me_in'
If you use the example above, MySQL (version 5.7) will be available on:
Host name: 127.0.0.1 (avoid using localhost, as some clients will attempt to connect via a local "Unix socket", which will not work in Pipelines)
Port: 3306
Default database: pipelines
User: root, password: let_me_in
You will need to populate the pipelines database with your tables and schema. If you need to configure the underlying database engine further, refer to the official Docker Hub image for details.
Using the Postgres image on Docker Hub.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - postgres

definitions:
  services:
    postgres:
      image: postgres
PostgreSQL will be available on localhost:5432 with the default database 'postgres', user 'postgres' and no password. You will need to populate the postgres database with your tables and schema, or create a second database for your use. If you need to configure the underlying database engine further, please refer to the official Docker Hub image for details.
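If your tests need their own database, here is a sketch of creating one before the tests run; it assumes the build image includes the psql client.

pipelines:
  default:
    - step:
        services:
          - postgres
        script:
          # Create a dedicated database rather than using the default 'postgres' one.
          - psql -h localhost -U postgres -c 'CREATE DATABASE pipelines;'
          - npm test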
Using the Postgres image on Docker Hub with a custom database, user, and password.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - postgres

definitions:
  services:
    postgres:
      image: postgres
      variables:
        POSTGRES_DB: 'pipelines'
        POSTGRES_USER: 'test_user'
        POSTGRES_PASSWORD: 'test_user_password'
PostgreSQL will be available on localhost:5432 with a default database named 'pipelines', user 'test_user' and password 'test_user_password'. You will need to populate the pipelines database with your tables and schema. If you need to configure the underlying database engine further, please refer to the official Docker Hub image for details.
Using the Redis image on Docker Hub.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - redis

definitions:
  services:
    redis:
      image: redis
Redis will be available on localhost:6379 without authentication.
Using the Cassandra image on Docker Hub.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - sleep 10 # wait for cassandra
          - npm test
        services:
          - cassandra

definitions:
  services:
    cassandra:
      image: cassandra
      variables:
        MAX_HEAP_SIZE: '512M' # Need to restrict the heap size or else Cassandra will OOM
        HEAP_NEWSIZE: '128M'
Cassandra will be available on localhost:9042.
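The sleep 10 in the example above is a crude way to wait for Cassandra to finish starting. A slightly more robust sketch polls the service instead; it assumes the build image includes the cqlsh client, which the node image does not provide by default.

pipelines:
  default:
    - step:
        services:
          - cassandra
        script:
          # Poll Cassandra for up to ~60 seconds instead of sleeping for a fixed time.
          - for i in $(seq 1 60); do cqlsh 127.0.0.1 9042 -e 'DESCRIBE KEYSPACES' && break; sleep 1; done
          - npm test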
As an alternative to running a separate container for the database (which is our recommended approach), you can use a Docker image that already has the database installed. The following images for Node and Ruby contain databases, and can be extended or modified for other languages and databases.
You can also use a custom name for the Docker service by defining a service with your custom name (for example, docker-custom) and setting its type to docker – see the example below.
Docker service with a custom name:
definitions:
  services:
    docker-custom:
      type: docker
      image: docker:dind

pipelines:
  default:
    - step:
        runs-on:
          - 'self.hosted'
          - 'my.custom.label'
        services:
          - docker-custom
        script:
          - docker info