Bitbucket Pipeline build failed with 'An error occurred while processing an artifact'
Platform Notice: Cloud Only - This article only applies to Atlassian apps on the cloud platform.
Summary
Learn what to do if a Pipeline build fails with the error 'An error occurred while processing an artifact'.
Platform: Bitbucket Pipeline Self-hosted Runners - All
Possible causes for the error are:
An outdated version of the Runner is currently in use.
The server where the runner is running is behind a firewall, and the IPs or domain name used for artifact upload haven't been whitelisted on this firewall.
You use a runner version >= 3.0.0 and < 3.17.0, and the artifact takes more than thirty seconds to upload. Thirty seconds is the default artifact upload timeout in these runner versions.
Solution
Solution 1
First, verify the runner version: open the build log of a failed build, expand the Runner section, and note the current and latest runner versions shown there. If the runner version you are using is not the latest, update it following the steps in this article. Keep in mind that artifacts no longer work with runner versions below 3.0.0; a runner version of 3.0.0 or greater is required.
Solution 2
The runner communicates directly with file storage in AWS S3 to upload and download artifacts. If your self-hosted runners are operating behind a firewall that filters connections based on IP addresses or URLs, it is essential to ensure that you unblock the following for both incoming and outgoing traffic when upgrading to version 3.0.0 or above:
If you use IP-based blocking
AWS publishes a comprehensive list of the IP ranges its services use at https://ip-ranges.amazonaws.com/ip-ranges.json. From that list, you need the records where the service equals S3 in the us-east-1 and us-west-2 regions.
To simplify the filtering, the following command retrieves the latest list of AWS S3 IPs used by the self-hosted runners:
curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select((.region=="us-east-1" or .region=="us-west-2") and .service=="S3") | .ip_prefix'
Please note that it is necessary to allowlist all these IPs irrespective of the step size.
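To see what the jq selection above does without hitting the network, you can run the same filter against a small, hand-made sample of ip-ranges.json (the prefixes below are illustrative only, not real current AWS ranges):

```shell
# Create a trimmed, illustrative sample of ip-ranges.json (not real data).
cat > /tmp/ip-ranges-sample.json <<'EOF'
{"prefixes":[
  {"ip_prefix":"3.5.0.0/19","region":"us-east-1","service":"S3"},
  {"ip_prefix":"18.34.0.0/19","region":"us-west-2","service":"S3"},
  {"ip_prefix":"52.95.0.0/20","region":"eu-west-1","service":"S3"},
  {"ip_prefix":"13.34.0.0/24","region":"us-east-1","service":"EC2"}
]}
EOF

# The same selection as above: S3 records in us-east-1 or us-west-2 only.
# The eu-west-1 and EC2 entries are dropped.
jq -r '.prefixes[] | select((.region=="us-east-1" or .region=="us-west-2") and .service=="S3") | .ip_prefix' /tmp/ip-ranges-sample.json
```

Against the real ip-ranges.json the same filter returns the full set of S3 prefixes for those two regions.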
If you use domain-based blocking
You will need to allowlist the domain:
micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com
Solution 3
For runner versions equal to or greater than 3.0.0 and less than 3.17.0, the default timeout for artifact upload is 30 seconds. If your artifacts need more time than that to get uploaded, you will get an error. You can upgrade the runner to the latest version to prevent this error (see Solution 1 of this article).
If you don't want to update the runner version yet, you can override the default timeout by modifying the command that you use to start the runner.
If you are using Docker-based runners, please add this to the command that initiates the runner:
-e S3_READ_TIMEOUT_SECONDS=<secondsvalue>
For Linux Shell and MacOS Shell runners, please add this to the command that initiates the runner:
--s3ReadTimeoutSeconds <secondsvalue>
For Windows Shell runners, please add this to the command that initiates the runner:
-s3ReadTimeoutSeconds "<secondsvalue>"
Replace <secondsvalue> with a value in seconds based on your estimation of how long the artifact upload will take, keeping in mind the expected size of the artifact. For example, you may start with 600 (equivalent to 10 minutes) and adjust it to a lower or higher value as necessary.
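As an illustration for a Docker-based runner, the start command with a 10-minute timeout might look like the sketch below. The surrounding flags are placeholders: keep the exact start command Bitbucket generated for your runner and add only the extra environment variable.

```shell
# Your existing runner start command, exactly as generated by Bitbucket,
# with one extra -e flag added before the image name.
# "..." stands for all the other flags from your original command.
docker container run -it \
  -e ACCOUNT_UUID="{<your-account-uuid>}" \
  ... \
  -e S3_READ_TIMEOUT_SECONDS=600 \
  docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
```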