Variables used in pipes get passed between many different systems, and if you are not careful this can lead to unexpected results. For example, if someone using your pipe writes MY_VARIABLE: "\\\\" then the value of MY_VARIABLE your pipe receives is '\'!
(You can read more about issues with escaping characters on Community.)
Fortunately, there are some best practices that will help you avoid this kind of peril.
If you need to use a command with multiple variables, pass the variables separately; don't join them into a single string.
In a shell script, always use double quotes when referencing a variable.
do:
command --option1 "$VAR1" --option2 "$VAR2" --option3 "$VAR3"
don't do:
ARGS= "--option1 $VAR1 --option2 $VAR2 --option3 $VAR3"
command $ARGS
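To see why the joined form misbehaves, consider a value that contains a space (the value here is illustrative). The unquoted $ARGS expansion is split on whitespace, so the command receives the pieces of the value as separate arguments:
VAR1="two words"
ARGS="--option1 $VAR1"
command $ARGS                # command receives 3 arguments: --option1, two, words
command --option1 "$VAR1"    # command receives 2 arguments: --option1, "two words"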
If you need to use a variable containing a list of values (for example, a list of extra arguments), you can define it as an array directly in the YAML, for example:
variables:
  EXTRA_ARGS: ['--description', 'text containing spaces', '--verbose']
Inside the pipe this results in the following variables:
EXTRA_ARGS_COUNT: 3
EXTRA_ARGS_0: --description
EXTRA_ARGS_1: text containing spaces
EXTRA_ARGS_2: --verbose
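If the number of arguments is fixed and known in advance, you could also reference the indexed variables directly (a minimal sketch; command stands in for your real tool):
command "$EXTRA_ARGS_0" "$EXTRA_ARGS_1" "$EXTRA_ARGS_2"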
If you then need to turn this set of variables into an array variable within the pipe, you could use a Bash function like this:
# Reconstruct an array variable from the indexed variables the pipe receives.
# For example, init_array_var 'EXTRA_ARGS' builds EXTRA_ARGS[0..n] from
# EXTRA_ARGS_COUNT and EXTRA_ARGS_0, EXTRA_ARGS_1, ...
init_array_var() {
  local array_var=${1}
  local count_var=${array_var}_COUNT
  # ${!count_var} is indirect expansion; :=0 defaults the count to 0 if unset.
  for (( i = 0; i < ${!count_var:=0}; i++ ))
  do
    # eval performs an assignment such as: EXTRA_ARGS[0]=$EXTRA_ARGS_0
    eval ${array_var}[$i]='$'${array_var}_${i}
  done
}
Then, to use the function, you might write something like this:
init_array_var 'EXTRA_ARGS'
command --option1 "$VAR1" --option2 "$VAR2" --option3 "$VAR3" "${EXTRA_ARGS[@]}"
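To check what the array actually contains, you could print each element before passing it on (a quick sanity check, not part of the official example):
init_array_var 'EXTRA_ARGS'
printf 'arg: %s\n' "${EXTRA_ARGS[@]}"
# arg: --description
# arg: text containing spaces
# arg: --verbose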
If you want pipes to be able to share state or other information between pipes, there are two built-in variables that exist for the duration of a pipeline:
BITBUCKET_PIPE_STORAGE_DIR – use this directory to pass information or artifacts between subsequent runs of your pipe.
BITBUCKET_PIPE_SHARED_STORAGE_DIR – if you need to get information from another pipe, you can read it from this directory.
So, if you want to store data for your own pipe's consumption in a later step in the pipeline, you might write a script that looks like this:
#!/bin/sh
set -e
echo "{'key': 'value'}" >> "$BITBUCKET_PIPE_STORAGE_DIR/<filename>"
Then to read this data back in a later step:
#!/bin/sh
set -e
cat "$BITBUCKET_PIPE_STORAGE_DIR/<filename>"
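Because the earlier step may not have run (for example, if it was skipped or failed), a defensive read could check for the file first. This is a sketch; my-data.json is a hypothetical filename:
#!/bin/sh
set -e
if [ -f "$BITBUCKET_PIPE_STORAGE_DIR/my-data.json" ]; then
  cat "$BITBUCKET_PIPE_STORAGE_DIR/my-data.json"
else
  echo "no stored data found" >&2
fi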
You can even read data from another pipe!
To do this you need to know the key for the pipe you want to use, and the account that owns it.
#!/bin/sh
set -e
cat "$BITBUCKET_PIPE_SHARED_STORAGE_DIR/<pipeAccount>/<pipeName>/metadata.json"
For example, to access the aws-lambda-deploy-env file from the aws-lambda-deploy pipe, you would write:
#!/bin/sh
set -e
cat "$BITBUCKET_PIPE_SHARED_STORAGE_DIR/atlassian/aws-lambda-deploy/aws-lambda-deploy-env"
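The shared file only exists after the other pipe has run in the same pipeline, so a guard like the following can give a clearer error (a sketch, not part of the official example):
#!/bin/sh
set -e
SHARED_FILE="$BITBUCKET_PIPE_SHARED_STORAGE_DIR/atlassian/aws-lambda-deploy/aws-lambda-deploy-env"
if [ ! -f "$SHARED_FILE" ]; then
  echo "aws-lambda-deploy has not run yet in this pipeline" >&2
  exit 1
fi
cat "$SHARED_FILE"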