HTTP timeouts and errors 502, 503, 504, 499 logged by reverse proxy with Bitbucket Data Center

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

When accessing Bitbucket over HTTP, the reverse proxy sometimes reports HTTP errors 502, 503, 504, or 499, as if there was no reply from Bitbucket or the reply was so slow that the HTTP connection timed out.

Environment

The solution has been validated in Bitbucket Data Center 8.9.9 but is applicable to other versions.

This affects Bitbucket Data Center running behind a reverse proxy (Nginx or any other).

Diagnosis

When accessing Bitbucket over HTTP, the connection appears to time out and the reverse proxy reports HTTP errors 502, 503, 504, or 499, as if there was no reply from Bitbucket or the reply was so slow that the HTTP connection timed out. At the same time, SSH access to Bitbucket may work just fine. The problem goes away after a Bitbucket restart, but reappears after some time. The Bitbucket log files don't reveal much, except for "The remote client has aborted the connection" messages logged while serving HTTP requests, like this one:

2024-11-25 19:26:10,763 INFO  [http-nio-7990-exec-5042 url: /rest/SOME/URL; user: XXXXX] XXXXX @DVLK10x432x123022x198 1a9bjlm X.X.X.X,0:0:0:0:0:0:0:1 "GET /rest/SOME/URL HTTP/1.0" c.a.s.i.w.filters.StreamGuardFilter The remote client has aborted the connection
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
    at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:309)
    at com.atlassian.stash.internal.web.util.web.FilterServletOutputStream.flush(FilterServletOutputStream.java:28)
    at java.base/sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:320)
    at java.base/sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:160)
    at java.base/java.io.OutputStreamWriter.flush(OutputStreamWriter.java:248)
    at java.base/java.io.BufferedWriter.flush(BufferedWriter.java:257)
    at com.atlassian.applinks.core.rest.context.ContextFilter.doFilter(ContextFilter.java:24)
    ...
Caused by: java.io.IOException: Broken pipe
    at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    ...

The setup uses a reverse proxy in front of Bitbucket, and its log files contain references to various HTTP errors, such as 502, 503, 504, and 499.

For example, in the case of the Nginx reverse proxy, the log files may contain lines like these (a quick way to tally such errors is sketched after the examples):

  • Access log:

    1.2.3.4 - - [25/Nov/2024:20:07:01 +0200] "GET /projects/PROJ HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"
    1.5.6.7 - - [25/Nov/2024:20:07:29 +0200] "GET /scm/some/repo-url.git/info/refs?service=git-upload-pack HTTP/1.1" 504 167 "-" "git/2.27.0"
    1.2.3.4 - - [25/Nov/2024:20:07:50 +0200] "POST /rest/analytics/1.0/publish/bulk HTTP/1.1" 504 167 "https://bitbucket.server.name/projects/PROJ/repos/REPO/pull-requests/185/overview" "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"
    1.9.5.100 - - [25/Nov/2024:20:08:12 +0200] "GET /rest/ui/latest/dashboard/pull-request-suggestions?changesSince=86400&limit=3 HTTP/1.1" 502 157 "https://bitbucket.server.name/dashboard" "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"
    1.10.9.8 - autobuild [25/Nov/2024:20:08:15 +0200] "GET /rest/api/1.0/projects/PROJ/repos/REPO/pull-requests/186 HTTP/1.1" 499 0 "-" "Apache-HttpClient/4.5.14 (Java/17.0.11)"
  • Error log:

    2024/11/25 20:08:02 [error] 579262#579262: *1104558 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 1.2.3.4, server: bitbucket.server.name, request: "GET /projects/PROJ HTTP/1.1", upstream: "http://127.0.0.1:7990/projects/PROJ", host: "bitbucket.server.name"
    2024/11/25 20:07:29 [error] 579262#579262: *1104548 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 1.5.6.7, server: bitbucket.server.name, request: "GET /scm/some/repo-url.git/info/refs?service=git-upload-pack HTTP/1.1", upstream: "http://127.0.0.1:7990//scm/some/repo-url.git/info/refs?service=git-upload-pack", host: "bitbucket.server.name"
    2024/11/25 20:07:50 [error] 579263#579263: *1104525 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 1.2.3.4, server: bitbucket.server.name, request: "POST /rest/analytics/1.0/publish/bulk HTTP/1.1", upstream: "http://127.0.0.1:7990/rest/analytics/1.0/publish/bulk", host: "bitbucket.server.name", referrer: "https://bitbucket.server.name/projects/PROJ/repos/REPO/pull-requests/185/overview"
    2024/11/25 20:08:12 [error] 579262#579262: *1104564 no live upstreams while connecting to upstream, client: 1.9.5.100, server: bitbucket.server.name, request: "GET /rest/ui/latest/dashboard/pull-request-suggestions?changesSince=86400&limit=3 HTTP/1.1", upstream: "http://localhost/rest/ui/latest/dashboard/pull-request-suggestions?changesSince=86400&limit=3", host: "bitbucket.server.name", referrer: "https://bitbucket.server.name/dashboard"
    2024/11/25 20:08:15 [error] 579262#579262: *1104560 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 1.10.9.8, server: bitbucket.server.name, request: "GET /rest/api/1.0/projects/PROJ/repos/REPO/pull-requests/186 HTTP/1.1", upstream: "http://127.0.0.1:7990/rest/api/1.0/projects/PROJ/repos/REPO/pull-requests/186", host: "bitbucket.server.name"
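To gauge how often these errors occur, you can tally them directly in the Nginx access log. The following is a minimal sketch, assuming the default combined log format (the status code is the ninth whitespace-separated field) and the common log path /var/log/nginx/access.log; adjust both to your installation:

    # count 499/502/503/504 responses per status code in the Nginx access log
    awk '$9 ~ /^(499|502|503|504)$/ { count[$9]++ } END { for (s in count) print s, count[s] }' /var/log/nginx/access.log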

Cause

When checking the reverse proxy configuration, the timeout settings look fine, for example, the default 60 seconds in the case of Nginx. The Bitbucket access logs show no long-running connections (all requests complete within a few seconds), so the timeout itself is not the problem.
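If you want to confirm that no custom timeout overrides are in play, you can search the Nginx configuration for the relevant proxy timeout directives. This is a sketch that assumes the configuration lives under /etc/nginx/; all three directives default to 60 seconds when not set explicitly:

    # look for explicit proxy timeout overrides in the Nginx configuration
    grep -RniE 'proxy_(connect|send|read)_timeout' /etc/nginx/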

However, looking more closely at the Bitbucket access logs, we can see that the number of simultaneous connections is high, around 200:

  • The page How to read the Bitbucket Data Center Log Formats describes how to decipher Bitbucket access logs.

  • To check the number of simultaneously processed HTTP requests, you can use the command below to parse the access logs:

    $ cat atlassian-bitbucket-access{-*,}.log | grep 'o@' | grep -i http | awk -vFS="|" '{ print $3 }' | awk -vFS="x" '{print $4}' | sort -n | tail -10
    202
    203
    203
    203
    204
    204
    204
    204
    206
    207
  • As shown in the example above, the number of simultaneous connections reaches and exceeds 200; a rough breakdown of which URLs make up this traffic is sketched after this list.

  • Our page Scaling Bitbucket Data Center - HTTPS states:

    By default, Tomcat allows up to 200 threads to process incoming requests. Bitbucket 7.4 introduced the use of asynchronous requests to move processing for hosting operations to a background threadpool, freeing up Tomcat's threads to handle other requests (like web UI or REST requests). The background threadpool allows 250 threads by default. If the background threadpool is fully utilized, subsequent HTTPS hosting operations are handled directly on Tomcat's threads. When all 200 Tomcat threads are in use, a small number of additional requests are allowed to queue (50 by default) before subsequent requests are rejected.

  • The strange "499" HTTP error logged by Nginx means, as many web sites explain, that the client (web browser) closed the connection while the HTTP transaction was still ongoing.

    Usually, this happens when an impatient user refreshes the browser before the content is loaded or before the reverse proxy produces an HTTP 50x error.
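To see what kind of traffic occupies those connections, it can help to group the access-log entries by request method and URL. The sketch below reuses the same filtering as the command above, strips query strings, and should be treated as a rough approximation rather than an exact report:

    # rough breakdown of the most frequently requested URLs (query strings stripped)
    cat atlassian-bitbucket-access{-*,}.log | grep 'o@' | grep -i http \
        | grep -oE '"(GET|POST|PUT|DELETE) [^ ?"]+' | sort | uniq -c | sort -rn | head -20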

In this case, the various HTTP 502, 503, 504, and 499 errors are caused by the large number of simultaneous connections reaching Bitbucket.

Solution

As established in the analysis above, the reason for the HTTP 502, 503, 504, and 499 errors is the large number of simultaneous connections coming to Bitbucket.

Usually, this happens when CI/CD systems poll Bitbucket for data instead of relying on webhook notifications or integration plugins.

The steps to solve the issue include a bit of investigation:

  1. Check the Git clients: are they mainly CI/CD systems making that huge number of HTTP requests? If yes, prefer webhook notifications or integration plugins over having those systems poll for data.

  2. If a large number of the HTTP requests are directed towards the Bitbucket REST API, you can improve instance stability with rate limiting.

  3. If you are using Jenkins as a CI/CD tool, check the Bitbucket access log and count how many requests target the Jenkinsfile relative to the total number of requests (a small ratio-calculation sketch follows the commands below).

    If there is a lot of polling for Jenkinsfile, consider using Jenkins - Bitbucket Server Integration.

    # check total number of requests coming to Bitbucket
    cat atlassian-bitbucket-access{-*,}.log | grep 'o@' | grep -i http | wc -l

    # check number of requests for "Jenkinsfile" coming to Bitbucket
    cat atlassian-bitbucket-access{-*,}.log | grep 'o@' | grep -i http | grep 'Jenkinsfile' | wc -l
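    To turn those two counts into a ratio without manual arithmetic, you can use a small shell sketch like the one below; it assumes the same log location as above and that bash and awk are available:

    # compute the share of Jenkinsfile requests among all HTTP requests
    total=$(cat atlassian-bitbucket-access{-*,}.log | grep 'o@' | grep -i http | wc -l)
    jenkins=$(cat atlassian-bitbucket-access{-*,}.log | grep 'o@' | grep -i http | grep -c 'Jenkinsfile')
    awk -v t="$total" -v j="$jenkins" 'BEGIN { printf "Jenkinsfile requests: %d of %d (%.1f%%)\n", j, t, (t ? 100 * j / t : 0) }'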
