Bamboo fails to start when JDK is patched from 1.8.0_291 to 1.8.0_361
Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.
Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product, however they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
Bamboo instance fails to start post-patching JDK from 1.8.0_291 to 1.8.0_361.
Environment
Any applicable version
Diagnosis
After upgrading Oracle JDK from 1.8.0_291 to 1.8.0_361 on the Bamboo server, Bamboo may fail to start. Either nothing gets logged in the atlassian-bamboo.log file, or, if you delete the symlink you created to the JDK, you will see the below message in the catalina.out file:
/app/bamboo/bamboo/current/bin/catalina.sh: line 504: /app/java/current/bin/java: No such file or directory
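Before looking at cgroups, it is worth ruling out a genuinely broken symlink. The sketch below checks the path from the error above; /app/java/current is taken from that log line and should be replaced with your own JDK symlink location:

```shell
#!/bin/sh
# Check whether the JDK path from the catalina.out error resolves to a
# real java binary. /app/java/current is the path from the log above;
# substitute your own JDK symlink location.
JAVA_LINK="/app/java/current/bin/java"
if [ -x "$JAVA_LINK" ]; then
    echo "java binary found: $(readlink -f "$JAVA_LINK")"
else
    echo "broken or missing JDK symlink: $JAVA_LINK"
fi
```

If this reports a broken symlink, fix the link target first; if the binary resolves and Bamboo still fails, continue with the cgroup checks below.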
This can be caused by Control Groups (cgroups): on some systems the JDK's cgroup detection logic is incorrect, or the failure occurs because the system does not use cgroups v2. Cgroups are a feature of the Linux kernel that allows you to limit the access processes and containers have to system resources such as CPU, RAM, IOPS, and network. To understand this, please read more here - https://dockerlabs.collabnix.com/advanced/security/cgroups/
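If you prefer a single command, the filesystem type mounted at /sys/fs/cgroup also reveals the cgroup version. This is an alternative check, assumed equivalent to the mount/ll commands in the next section:

```shell
#!/bin/sh
# Infer the cgroup version from the filesystem type mounted at
# /sys/fs/cgroup: cgroup2fs means the unified v2 hierarchy, tmpfs
# means the legacy v1 (or hybrid) layout.
detect_cgroup_version() {
    fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)
    case "$fstype" in
        cgroup2fs) echo "v2" ;;
        tmpfs)     echo "v1" ;;
        *)         echo "unknown" ;;
    esac
}

detect_cgroup_version
```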
To identify whether this is the cause, run the test commands below to see whether the system uses cgroup v1 or cgroup v2.
Check if the filesystem is mounted with cgroup v2 with the following command:
mount -l | grep cgroup
You should see a result similar to the following:
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
Confirm that cgroup v2 is mounted with the ll command; the output should be similar to the listing below:
ll /sys/fs/cgroup/
total 0
dr-xr-xr-x 13 root root 0 jan 23  2035 ./
drwxr-xr-x  9 root root 0 jan 23  2035 ../
-r--r--r--  1 root root 0 jan 23  2035 cgroup.controllers
-rw-r--r--  1 root root 0 jan 26 09:19 cgroup.max.depth
-rw-r--r--  1 root root 0 jan 26 09:19 cgroup.max.descendants
-rw-r--r--  1 root root 0 jan 23  2035 cgroup.procs
-r--r--r--  1 root root 0 jan 26 09:19 cgroup.stat
-rw-r--r--  1 root root 0 jan 23  2035 cgroup.subtree_control
-rw-r--r--  1 root root 0 jan 26 09:19 cgroup.threads
-rw-r--r--  1 root root 0 jan 26 09:19 cpu.pressure
-r--r--r--  1 root root 0 jan 26 09:19 cpuset.cpus.effective
-r--r--r--  1 root root 0 jan 26 09:19 cpuset.mems.effective
-r--r--r--  1 root root 0 jan 26 09:19 cpu.stat
drwxr-xr-x  2 root root 0 jan 26 09:39 dev-hugepages.mount/
drwxr-xr-x  2 root root 0 jan 26 09:39 dev-mqueue.mount/
drwxr-xr-x  2 root root 0 jan 23  2035 init.scope/
-rw-r--r--  1 root root 0 jan 26 09:19 io.cost.model
-rw-r--r--  1 root root 0 jan 26 09:19 io.cost.qos
-rw-r--r--  1 root root 0 jan 26 09:19 io.pressure
-rw-r--r--  1 root root 0 jan 26 09:19 io.prio.class
-r--r--r--  1 root root 0 jan 26 09:19 io.stat
-r--r--r--  1 root root 0 jan 26 09:19 memory.numa_stat
-rw-r--r--  1 root root 0 jan 26 09:19 memory.pressure
-r--r--r--  1 root root 0 jan 26 09:19 memory.stat
-r--r--r--  1 root root 0 jan 26 09:19 misc.capacity
drwxr-xr-x  2 root root 0 jan 26 09:39 proc-fs-nfsd.mount/
drwxr-xr-x  2 root root 0 jan 26 09:39 proc-sys-fs-binfmt_misc.mount/
drwxr-xr-x  2 root root 0 jan 26 09:39 sys-fs-fuse-connections.mount/
drwxr-xr-x  2 root root 0 jan 26 09:39 sys-kernel-config.mount/
drwxr-xr-x  2 root root 0 jan 26 09:39 sys-kernel-debug.mount/
drwxr-xr-x  2 root root 0 jan 26 09:39 sys-kernel-tracing.mount/
drwxr-xr-x 89 root root 0 feb  8 13:03 system.slice/
drwxr-xr-x  4 root root 0 feb  8 05:23 user.slice/
The /sys/fs/cgroup/ directory, also called the root control group, contains interface files (whose names start with cgroup) and controller-specific files such as cpuset.cpus.effective.
If the output of command 1 in the above section contains the string "cgroup2" and the output of command 2 contains cpuset.cpus.effective, then the system is mounted with cgroup v2.
If the output of command 1 does not contain "cgroup2", the system is not mounted with cgroup v2.
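The two manual checks above can be combined into one small script. This sketch reads /proc/mounts instead of calling mount -l, which is assumed equivalent for this purpose:

```shell
#!/bin/sh
# Combined check: cgroup v2 is considered mounted only if both
# conditions from the manual steps hold - the cgroup2 filesystem
# appears in the mount table AND the v2 interface file
# cpuset.cpus.effective exists in the root control group.
if grep -q cgroup2 /proc/mounts && [ -e /sys/fs/cgroup/cpuset.cpus.effective ]; then
    echo "cgroup v2 is mounted"
else
    echo "cgroup v2 is NOT mounted"
fi
```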
Cause
If the system is not mounted with cgroup v2, CgroupV2Subsystem.initSubsystem will fail, triggering the bug https://bugs.java.com/view_bug.do?bug_id=8230305, and you will see the below error in the log file:
Caused by: java.lang.ArrayIndexOutOfBoundsException: 4
    at jdk.internal.platform.cgroupv2.CgroupV2Subsystem.initSubsystem(CgroupV2Subsystem.java:73)
If the system is mounted with cgroup v2 and the failure still occurs, then you are hitting the bug JDK-8245543: Cgroups: Incorrect detection logic on some systems.
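On JDK 8u191 and later (on Linux) you can also ask the JVM itself what it detects about the operating system and container limits. If even this trivial invocation crashes with the ArrayIndexOutOfBoundsException above, the JVM's cgroup detection code is confirmed as the failing component:

```shell
#!/bin/sh
# Print the JVM's view of the OS / container environment. The "system"
# category of -XshowSettings is Linux-only and was added in 8u191.
if command -v java >/dev/null 2>&1; then
    java -XshowSettings:system -version 2>&1
else
    echo "java not found on PATH"
fi
```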
Solution
The solution for this is to upgrade the JDK to 8u372 or later. If upgrading is not possible, there is a workaround. Please follow the below steps:
If you want to keep JDK 1.8.0_361-b26, you will have to disable the JDK's UseContainerSupport feature:
: ${JVM_SUPPORT_RECOMMENDED_ARGS="-XX:-UseContainerSupport"}
⚠️ Note: This will prevent the JVM from adjusting the maximum heap size when running in a Docker container.
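In Bamboo, the JVM_SUPPORT_RECOMMENDED_ARGS variable is normally set in <bamboo-install>/bin/setenv.sh. Because disabling container support stops the JVM from sizing its heap from container limits, it is prudent to pin the heap explicitly as well; in the sketch below, -Xmx2g is an illustrative assumption, not a recommendation:

```shell
# Fragment for <bamboo-install>/bin/setenv.sh (sketch).
# -Xmx2g is an illustrative heap size - size it for your own instance.
: ${JVM_SUPPORT_RECOMMENDED_ARGS="-XX:-UseContainerSupport -Xmx2g"}
export JVM_SUPPORT_RECOMMENDED_ARGS
```

The `: ${VAR="default"}` idiom assigns the value only if the variable is not already set, so an existing JVM_SUPPORT_RECOMMENDED_ARGS in the environment is left untouched. Restart Bamboo after editing setenv.sh.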