These options are used to define the steps in a pipeline, including the required script property, which is used to run commands, such as compilers and tests.
Steps accept the following options.
The step option is used to define a build execution unit. Steps are executed in the order that they appear in the bitbucket-pipelines.yml file. A single pipeline can have up to 100 steps.
Each step in your pipeline will start a separate Docker container to run the commands configured in the script option. Each step can be customized using options such as:
runtime – to customize the runtime environment of the step.
cloud
atlassian-ip-ranges
image — to use a different Docker image.
max-time — to set the maximum time allowed for a step.
caches and services — for specific caches and services.
artifacts — to retain artifacts that subsequent steps can consume.
clone — to customize the Git clone operation for the step.
trigger — to set the step to manual, requiring a manual trigger before the pipeline can continue.
Property — step
Required — Yes
Data type — Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — branches, custom, default, pull-requests, steps, and tags.
Allowed child properties — Requires the script property. The following properties are optional: after-script, artifacts, caches, clone, condition, deployment, fail-fast, image, name, oidc, runs-on, runtime, services, size, and trigger.
pipelines:
  default:
    - step:
        script:
          - echo "Hello, World!"
pipelines:
  default:
    - step: # sequential step
        name: Build
        script:
          - ./build.sh
    - parallel: # these 2 steps will run in parallel
        steps:
          - step:
              name: Integration 1
              script:
                - ./integration-tests.sh --batch 1
          - step:
              name: Integration 2
              script:
                - ./integration-tests.sh --batch 2
    - step: # non-parallel step
        script:
          - ./deploy.sh
The script property is used to specify a list of commands to be run for a step. The list of commands will be run in the order they are listed, without any automatic clean-up operations between the commands. We recommend that you move large scripts to a separate script file and call it from the bitbucket-pipelines.yml.
Property — script
Required — Yes
Data type — A list of Strings and/or pipe properties (YAML spec - Sequence)
Allowed parent properties — step
Allowed child properties — pipe (optional)
pipelines:
  default:
    - step:
        script:
          - echo "Hello, World!"
pipelines:
  default:
    - step:
        script:
          - ./long-build-script.sh
pipelines:
  default:
    - step:
        script:
          - echo "Hello,"
          - echo "World!"
          - ./build.sh
pipelines:
  default:
    - step:
        name: Alert Opsgenie
        script:
          - echo "Sending an alert through Opsgenie"
          - pipe: atlassian/opsgenie-send-alert:latest
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: "Danger, Will Robinson!"
              DESCRIPTION: "An Opsgenie alert sent from Bitbucket Pipelines"
              SOURCE: "Bitbucket Pipelines"
              PRIORITY: "P1"
The max-time option sets the maximum length of time a step can run before timing out (in minutes). The max-time option can be set using both the global options property and on individual pipeline steps. The default maximum time for pipeline steps is 120 minutes.
Property — max-time
Required — No
Data type — Integer
Allowed values — A positive integer between 1 and 720
Default value — 120
Allowed parent properties — options and step
options:
  max-time: 30
pipelines:
  default:
    - step:
        name: Sleeping step
        script:
          - sleep 120m # This step will timeout after 30 minutes
options:
  max-time: 60
pipelines:
  default:
    - step:
        name: Sleeping step
        script:
          - sleep 120m # This step will timeout after 60 minutes
    - step:
        name: quick step
        max-time: 5
        script:
          - sleep 120m # This step will timeout after 5 minutes
The size option allocates additional resources to a step or a whole pipeline, when running on Bitbucket Cloud infrastructure or Linux Docker self-hosted runners.
This option has no effect on shell-based runners, such as Windows PowerShell, macOS shell, and Linux shell runners, which use all available resources on the host machine.
By default, a step running on Bitbucket Cloud infrastructure or a Linux Docker self-hosted runner has access to 4GB of memory, 4 CPUs (which might be shared with other tasks), and 64 GB of disk space per step for mounting volumes.
By specifying a size of 2x, your step or pipeline will have double the memory available. Note that the memory allocated is shared by both the script in the step and any services on the step.
Choosing a size of 4x or above also grants additional CPU resources and disk space. A step assigned a size of 4x or greater is guaranteed dedicated access to the relevant number of CPUs, and more disk space for mounting volumes.
4x steps use four times the number of build minutes of 1x steps, 2x steps use twice the number of build minutes of 1x steps, and so on.
Size | CPU | Memory | Volume size |
---|---|---|---|
1x | 4 (shared) | 4 GB | 64 GB |
2x | 4 (shared) | 8 GB | 64 GB |
4x | 8 (dedicated) | 16 GB | 256 GB |
8x | 16 (dedicated) | 32 GB | 256 GB |
Property — size
Required — No
Data type — String
Allowed values — Either:
1x, 2x, 4x, or 8x for pipeline steps run on Bitbucket Cloud.
1x, 2x, 4x, or 8x for pipeline steps run on a self-hosted pipeline runner.
4x and 8x pipeline size options are only available for builds running under a paid Bitbucket Cloud plan (Standard or Premium).
Default value — 1x
Allowed parent properties — options and step
options:
  size: 2x
pipelines:
  default:
    - step:
        script:
          - echo "2x memory is probably not needed for an echo command"
pipelines:
  default:
    - step:
        size: 2x
        script:
          - echo "This step gets double the memory!"
The runtime configuration to be applied to the step.
Property — runtime
Required — No
Data type — Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — options and step
Allowed child properties — cloud (required)
options:
  runtime:
    cloud:
      atlassian-ip-ranges: true
pipelines:
  default:
    - step:
        size: 4x
        script:
          - echo "I use atlassian-ip-ranges"
pipelines:
  default:
    - step:
        size: 4x
        runtime:
          cloud:
            atlassian-ip-ranges: true
        script:
          - echo "I use atlassian-ip-ranges"
The runtime configuration to be applied to cloud steps.
Property — cloud
Required — No
Data type — Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — runtime
Allowed child properties — atlassian-ip-ranges (required)
This option indicates whether to use the default aws-ip-ranges or the atlassian-ip-ranges when executing your step for ingress/egress traffic.
Property — atlassian-ip-ranges
Required — Yes
Data type — Boolean
Allowed values — true or false
Default value — false
Allowed parent properties — cloud
The after-script option lists the commands that will run after the script in the step completes, regardless of whether the script succeeds or fails. This can be useful for clean-up commands, test coverage, notifications, or rollbacks you might want to run. The BITBUCKET_EXIT_CODE pipeline variable can be used to determine whether the script in the step succeeded or failed.
If any commands in the after-script fail:
we won't run any more commands listed in the after-script of that step.
it will not affect the reported status of the step.
Property — after-script
Required — No
Data type — A list of Strings and/or pipe properties (YAML spec - Sequence)
Allowed parent properties — step
Allowed child properties — pipe (optional)
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
        after-script:
          - echo "after script has run!"
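As an illustrative sketch (not part of the property reference above), the BITBUCKET_EXIT_CODE variable can be checked inside the after-script to act only when the step failed; the echo here stands in for a real notification command:

```yaml
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
        after-script:
          # BITBUCKET_EXIT_CODE is 0 if the step's script succeeded, non-zero if it failed
          - if [ "$BITBUCKET_EXIT_CODE" -ne 0 ]; then echo "Tests failed, send an alert here"; fi
```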
A name for the step or stage. The name will be shown in the Bitbucket Pipeline logs and the Bitbucket UI. Names should be unique (within the pipeline) and describe the step or the steps in the stage.
Property — name
Required — No
Data type — String
Allowed parent properties — step and stage
pipelines:
  default:
    - stage:
        name: Build and test
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh
    - step:
        script:
          - pipe: atlassian/slack-notify:latest
            variables:
              WEBHOOK_URL: $SLACK_WEBHOOK
              PRETEXT: 'Hello, Slack!'
              MESSAGE: 'Hello, Slack!!'
fail-fast can be applied to all parallel steps or to a specific step in a parallel group:
If a step has fail-fast: false, then the step can fail without the whole parallel group stopping.
If a step has fail-fast: true, then the whole parallel group will stop if the step fails.
Property — fail-fast
Required — No
Data type — Boolean
Allowed values — true or false
Default value — false
Allowed parent properties — step and parallel
pipelines:
  default:
    - step:
        name: Build
        script:
          - ./build.sh
    - parallel:
        # this option allows a force stop on all running steps if any step fails
        fail-fast: true
        steps:
          - step:
              name: Integration 1
              script:
                - ./integration-tests.sh --batch 1
          - step:
              name: Integration 2
              script:
                - ./integration-tests.sh --batch 2
pipelines:
  default:
    - step:
        name: Build
        script:
          - ./build.sh
    - parallel:
        # this option allows a force stop on all running steps if any step fails
        fail-fast: true
        steps:
          - step:
              name: Integration 1
              script:
                - ./integration-tests.sh --batch 1
          - step:
              name: Integration 2
              script:
                - ./integration-tests.sh --batch 2
          - step:
              # option can be disabled for a step
              # and its failure won't stop other steps in a group
              fail-fast: false
              name: Upload metadata
              script:
                - ./upload-metadata.sh
The step caches option is used to indicate steps where dependencies are downloaded from external sources (such as package repositories like npm, and PyPI). This allows the previously defined cache to be created, updated, or reused to avoid re-downloading external build dependencies. You can use a defined custom cache, or use one of the predefined caches. For a complete list of predefined caches, see Caches — Predefined caches.
For information on using the caches option, see Caches.
Property — caches
Required — No
Data type — A list of Strings (YAML spec - Sequence)
Allowed values — Names of the Pre-defined Pipelines caches and the names of custom caches.
Allowed parent properties — step
definitions:
  caches:
    my-bundler-cache: vendor/bundle
pipelines:
  default:
    - step:
        caches:
          - my-bundler-cache # Cache is defined above in the definitions section
          - node # Pre-defined Pipelines cache
        script:
          - bundle install --path vendor/bundle
          - ruby -e 'print "Hello, World\n"'
The artifacts option is used to list the files or directories that contain build artifacts that are required for steps later in the pipeline. The artifact paths are relative to the BITBUCKET_CLONE_DIR variable and can be defined using glob patterns.
For details on artifacts, see using artifacts in steps.
Property — artifacts
Required — No
Data type — Either:
A list of file paths (glob patterns are allowed) (YAML spec - Sequence)
Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — step
Allowed child properties — download and paths
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Test code from build step stored in the dist/ directory
        script:
          - npm test
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm run build
        artifacts: # Store build artifacts for use in the following steps
          - dist/**
    - step:
        name: lint code and store results
        script:
          - npm lint > results.txt
        artifacts:
          download: false # Block artifacts downloading, they're not needed for this step
          paths: # Store the linting result (in addition to the dist/ directory)
            - results.txt
    - step:
        name: Test code from build step stored in the dist/ directory
        script:
          - npm test
The artifacts download option is used to control whether artifacts from previous steps are downloaded at the start of the step.
download: true — (default behavior) the artifacts from previous steps will be downloaded at the start of the step and will be available to the scripts in the step.
download: false — artifacts from previous steps will not be available in this step.
For details on using artifacts, see using artifacts in steps.
Property — download
Required — No
Data type — Boolean
Allowed values — true or false
Default value — true
Allowed parent properties — artifacts
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm run build
        artifacts: # Store build artifacts for use in the following steps
          - dist/**
    - step:
        name: lint code and store results
        script:
          - npm lint > results.txt
        artifacts:
          download: false # Block artifacts downloading, they're not needed for this step
    - step:
        name: Test code from build step stored in the dist/ directory
        script:
          - npm test
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm run build
        artifacts: # Store build artifacts for use in the following steps
          - dist/**
    - step:
        name: lint code and store results
        script:
          - npm lint > results.txt
        artifacts:
          download: false # Block artifacts downloading, they're not needed for this step
          paths: # Store the linting result (in addition to the dist/ directory)
            - results.txt
    - step:
        name: Test code from build step stored in the dist/ directory
        script:
          - npm test
The artifact paths option is used to list the files or directories that contain build artifacts that are required for steps later in the pipeline. The paths option is only needed if the artifacts download option is defined, otherwise these paths can be listed under the artifacts option. The artifact paths are relative to the BITBUCKET_CLONE_DIR variable and can be defined using glob patterns.
For details on artifacts, see using artifacts in steps.
Property — paths
Required — No
Data type — A list of file paths (glob patterns are allowed) (YAML spec - Sequence)
Allowed parent properties — artifacts
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm run build
        artifacts: # Store build artifacts for use in the following steps
          - dist/**
    - step:
        name: lint code and store results
        script:
          - npm lint > results.txt
        artifacts:
          download: false # Block artifacts downloading, they're not needed for this step
          paths: # Store the linting result (in addition to the dist/ directory)
            - results.txt
    - step:
        name: Test code from build step stored in the dist/ directory
        script:
          - npm test
Pipes make complex tasks easier, by doing a lot of the work behind the scenes. This means you can just select which pipe you want to use, and supply the necessary variables. You can look at the repository for the pipe to see what commands it is running.
For information on Pipes, including how to create and use custom Pipes, see Use pipes in Bitbucket Pipelines.
For a list of available Pipes and instructions on how to use them, see Bitbucket Pipes Integrations.
Property — pipe
Required — No
Data type — String
Allowed values — Address of a Docker-based pipe.
Allowed parent properties — script and after-script
Allowed child properties — variables (required for most pipes)
pipelines:
  default:
    - step:
        name: Alert Opsgenie
        script:
          - pipe: atlassian/opsgenie-send-alert:latest
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: "Danger, Will Robinson!"
              DESCRIPTION: "An Opsgenie alert sent from Bitbucket Pipelines"
              SOURCE: "Bitbucket Pipelines"
              PRIORITY: "P1"
pipelines:
  default:
    - step:
        name: Alert everyone!
        script:
          - pipe: atlassian/opsgenie-send-alert:latest
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: 'Wake up!'
          - pipe: atlassian/slack-notify:latest
            name: Send alert to Slack
            variables:
              WEBHOOK_URL: $SLACK_WEBHOOK
              PRETEXT: 'Alert Everyone!'
              MESSAGE: 'We have a problem!'
pipelines:
  default:
    - step:
        name: Running my custom pipe
        script:
          - pipe: docker://<DockerAccountName>/<ImageName>:<version>
            variables:
              USERNAME: $My_username
              PASSWORD: $Password
The pipe variables option is used to configure the environment variables of a pipe. The variables that are required or available vary between pipes.
For information on Pipes, including how to create and use custom Pipes, see Use pipes in Bitbucket Pipelines.
For a list of available Pipes and instructions on how to use them, see Bitbucket Pipes Integrations.
Secrets and login credentials should be stored as user-defined pipeline variables to avoid being leaked. For details, see Variables and secrets — User-defined variables.
Property — variables
Required — Varies between pipes
Data type — Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — pipe
The following example shows the Opsgenie Send Alert pipe and the Slack Notify pipe.
pipelines:
  default:
    - step:
        name: Alert everyone!
        script:
          - pipe: atlassian/opsgenie-send-alert:latest
            name: Send alert to Opsgenie
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: 'Wake up!'
          - pipe: atlassian/slack-notify:latest
            name: Send alert to Slack
            variables:
              WEBHOOK_URL: $SLACK_WEBHOOK
              PRETEXT: 'Alert Everyone!'
              MESSAGE: 'We have a problem!'
Only available for self-hosted pipeline runners.
To run a pipeline step on a self-hosted runner, add the runs-on option to the step. When the pipeline is run, the step will run on the next available runner that has all the listed labels. If all matching runners are busy, your step will wait until one becomes available again. If you don't have any online runners in your repository that match all labels, the step will fail.
For information on:
Self-hosted pipeline runners, see Runners.
Configuring your pipeline steps to use a runner, see Configure your runner in bitbucket-pipelines.yml.
Property — runs-on
Required — No
Data type — Either:
A String
A list of Strings (YAML spec - Sequence)
Allowed values — Any Label assigned to a self-hosted Repository or Workspace Pipeline runner (such as self.hosted).
Allowed parent properties — step
pipelines:
  default:
    - step:
        name: Step 1
        runs-on:
          - 'self.hosted'
          - 'my.custom.label'
        script:
          - echo "This step will run on a self-hosted runner with the 'my.custom.label' and 'self.hosted' labels.";
    - step:
        name: Step 2
        script:
          - echo "This step will run on Atlassian's infrastructure as usual.";
For details on the image options, including using the image options for steps, see Docker image options.
For details on the clone options, including using the clone options for steps, see Git clone behavior.
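As a brief sketch of how those two options combine at the step level (the node:20 image and the clone depth of 1 are illustrative values; see the linked pages for the full option lists):

```yaml
pipelines:
  default:
    - step:
        name: Build with a custom image and shallow clone
        image: node:20 # illustrative public Docker image
        clone:
          depth: 1 # shallow clone; see Git clone behavior for other clone options
        script:
          - npm install
          - npm run build
```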
The condition option prevents a step or stage from running unless a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step or stage only if one of the modified files matches the expression in includePaths. The file match pattern specified in the includePaths is relative to the $BITBUCKET_CLONE_DIR directory.
In a pull-requests pipeline, all commits are taken into account, and if you provide an includePath list of patterns, the step or stage will be executed when at least one commit change matches one of the conditions. The format for pattern matching follows the glob patterns.
For other types of pipelines, only the last commit is considered. If you push multiple commits to a branch at the same time, or push multiple times to a given branch, you might experience non-intuitive behavior when failing pipelines turn green only because the failing step or stage is skipped on the next run.
Property — condition
Required — No
Data type — Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — step and stage
Allowed child properties — changesets (required)
pipelines:
  default:
    - step:
        name: step1
        script:
          - echo "failing paths"
          - exit 1
        condition:
          changesets:
            includePaths:
              # only xml files directly under path1 directory
              - "path1/*.xml"
              # any changes in deeply nested directories under path2
              - "path2/**"
pipelines:
  default:
    - stage:
        name: Build and test
        condition:
          changesets:
            includePaths:
              # only xml files directly under path1 directory
              - "path1/*.xml"
              # any changes in deeply nested directories under path2
              - "path2/**"
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh
The changesets option is used to indicate that the condition for a step or stage is a change in a particular file or files (includePaths).
Property — changesets
Required — Required when using the condition option.
Data type — Block of new-line separated key-value pairs (YAML spec - Block Mapping)
Allowed parent properties — condition
Allowed child properties — includePaths (required)
pipelines:
  default:
    - step:
        name: step1
        script:
          - echo "failing paths"
          - exit 1
        condition:
          changesets:
            includePaths:
              # only xml files directly under path1 directory
              - "path1/*.xml"
              # any changes in deeply nested directories under path2
              - "path2/**"
pipelines:
  default:
    - stage:
        name: Build and test
        condition:
          changesets:
            includePaths:
              # only xml files directly under path1 directory
              - "path1/*.xml"
              # any changes in deeply nested directories under path2
              - "path2/**"
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh
When used with the condition and changesets options, the includePaths option allows you to provide a list of files or directories to check for changes. If a file in the list is changed by a commit, the step or stage will run, otherwise the step will be skipped.
Property — includePaths
Required — No
Data type — A list of file paths (glob patterns are allowed) (YAML spec - Sequence)
Allowed parent properties — changesets
pipelines:
  default:
    - step:
        name: step1
        script:
          - echo "failing paths"
          - exit 1
        condition:
          changesets:
            includePaths:
              # only xml files directly under path1 directory
              - "path1/*.xml"
              # any changes in deeply nested directories under path2
              - "path2/**"
pipelines:
  default:
    - stage:
        name: Build and test
        condition:
          changesets:
            includePaths:
              # only xml files directly under path1 directory
              - "path1/*.xml"
              # any changes in deeply nested directories under path2
              - "path2/**"
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh
Sets the step or stage to run automatically (default behavior) or only when manually triggered by a user in the Bitbucket user interface. To set a whole pipeline to run manually, use a custom pipeline trigger. Manual steps and stages:
Can’t be the first step or stage in a pipeline.
Can only be executed in the order that they are configured. You cannot skip a manual step or stage.
Can only be executed if the previous step or stage has successfully completed.
Can only be triggered by users with write access to the repository.
Are triggered through the Pipelines web interface.
If your build uses both manual steps and artifacts, the artifacts are stored for 14 days following the execution of the step that produced them. After this time, the artifacts expire and any manual steps and manual stages in the pipeline can no longer be executed.
Property — trigger
Required — No
Data type — String
Allowed values — automatic and manual
Default value — automatic
Allowed parent properties — step and stage
pipelines:
  default:
    - step:
        name: Build
        script:
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        trigger: manual
        script:
          - ./deploy.sh
pipelines:
  default:
    - stage:
        name: Linting
        steps:
          - step:
              script:
                - sh ./run-linter.sh
    - stage:
        name: Build and test
        trigger: manual
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh
The oidc option enables the use of OpenID Connect to connect a pipeline step to a resource server. The oidc value must be set to true to set up and configure OpenID Connect. For details on using OIDC with pipelines, see Integrate Pipelines with resource servers using OIDC.
Property — oidc
Required — No
Data type — Boolean
Allowed values — true or false
Default value — false
Allowed parent properties — step
pipelines:
  default:
    - step:
        oidc: true
        script:
          - echo "I can access data through OpenID Connect!"
          - aws sts assume-role-with-web-identity --role-arn arn:aws:iam::XXXXXX:role/projectx-build --role-session-name build-session --web-identity-token "$BITBUCKET_STEP_OIDC_TOKEN" --duration-seconds 1000
Bitbucket Pipelines can create separate Docker containers for services, which results in faster builds and easy service editing. For details on creating services, see Databases and service containers. The services option is used to indicate which steps require previously defined services.
Property — services
Required — No
Data type — A list of Strings (YAML spec - Sequence)
Allowed values — Names of services defined under definitions > services
Allowed parent properties — step
definitions:
  services:
    my-service-name:
      image: mariadb:latest
      variables:
        MARIADB_USER: $MY_MARIADB_USER
        MARIADB_PASSWORD: $MY_MARIADB_PASSWORD
        MARIADB_ROOT_PASSWORD: $MARIADB_ADMIN_PASSWORD
pipelines:
  default:
    - step:
        name: Hello world example
        services:
          - my-service-name
        script:
          - echo "Hello, World"
Sets the environment for a Deployment stage or step and is used to organize the Deployment dashboard. All steps that belong to a Deployment stage are Deployment steps. The default environments are test, staging, and production. To set the deployment environment of a step or stage, include the environment name.
For details on:
deployment stages, see Deployment stages.
creating and configuring deployment environments, see Set up and monitor deployments.
Property — deployment
Required — No
Data type — String
Allowed values — The name of a Deployment environment
Allowed parent properties — step and stage
pipelines:
  default:
    - step:
        name: Deploy to production
        deployment: production env 1
        script:
          - python deploy.py prod_env_1
pipelines:
  default:
    - stage:
        name: Build and test
        deployment: staging
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh
    - stage:
        name: Deploy to Production
        deployment: prod
        trigger: manual
        steps:
          - step:
              name: Build app
              script:
                - sh ./build-app.sh
          - step:
              name: Run unit tests
              script:
                - sh ./run-tests.sh