Jira pod on Kubernetes gets terminated because of a low memory resource limit configuration

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

When Jira is deployed to Kubernetes using the Atlassian Data Center Helm Charts, the Jira pod might get terminated if the container memory resource limit is not set properly.

Environment

Jira deployments on Kubernetes cluster using Atlassian Data Center Helm Charts

Diagnosis

If the Jira pod gets terminated repeatedly, this issue can be identified from the output of the following command:

$ kubectl describe pod jira-0
Name:             jira-0
Namespace:        default
Priority:         0
Service Account:  jira
...
Status:           Running
...
Containers:
  jira:
    ...
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137

This shows that the pod was terminated because the container was OOMKilled (killed by the out-of-memory killer), indicated by exit code 137.

Cause

When this issue happens, the Jira pod starts without any issue but gets terminated once its memory utilization reaches the limit set in the values.yaml file. By default, Kubernetes enforces the memory limit and terminates the pod if usage goes beyond that value. Unlike memory, CPU usage is throttled to the limit set in the values.yaml file rather than causing termination.

Pod resource usage can be monitored with the following command:

$ kubectl top pod jira-0
NAME     CPU(cores)   MEMORY(bytes)
jira-0   1804m        539Mi
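As a rough illustration with assumed values, you can compare the MEMORY(bytes) column from kubectl top against the configured limit; the usage figure below is taken from the sample output above, and the 2G limit is hypothetical:

```shell
# Illustrative check, assumed values: how close is observed usage to the limit?
USAGE_MI=539    # MEMORY(bytes) from `kubectl top pod jira-0`, in MiB
LIMIT_MI=2048   # hypothetical container memory limit of 2G (2048 MiB)
PCT=$(( USAGE_MI * 100 / LIMIT_MI ))
echo "memory usage: ${PCT}% of limit"   # the container is OOM-killed only at 100%
```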

Solution

You can either remove the limits configuration or set the container memory limit according to the JVM maximum heap and reserved code cache settings, using the following rule of thumb:

(maxHeap + reservedCodeCache) * 1.5

resources:
  jvm:
    maxHeap: "2G"
    minHeap: "2G"
    reservedCodeCache: "512m"
  container:
    requests:
      cpu: "2"
      memory: "2G"
    limits:
      cpu: "2"
      memory: "4G"
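The sizing rule above can be sketched in plain shell arithmetic; the MiB values below mirror the example values.yaml (2G heap, 512m reserved code cache):

```shell
# Sketch of the (maxHeap + reservedCodeCache) * 1.5 rule,
# computed in MiB to stay in integer arithmetic.
MAX_HEAP_MI=2048      # maxHeap: "2G"
CODE_CACHE_MI=512     # reservedCodeCache: "512m"
LIMIT_MI=$(( (MAX_HEAP_MI + CODE_CACHE_MI) * 3 / 2 ))
echo "suggested memory limit: ${LIMIT_MI}Mi"   # 3840Mi, rounded up to 4G in the example
```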

Updated on February 24, 2025
