Jira is slow or unresponsive with many DEBUG messages in the log

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product, however they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

Jira becomes unresponsive or very slow, and there are far too many DEBUG messages in one or more of the log files.

Environment

All versions of Jira Core/Software 7 and 8.

All versions of Jira Service Management 3 and 4.

Diagnosis

Check the main log files (access_log, atlassian-jira.log, atlassian-jira-http-dump.log, and atlassian-jira-sql.log) to see if there are out-of-order log messages:

$ cat atlassian-jira.log | cut -d" " -f-2
2020-11-12 10:50:54,684-0300
2020-11-12 10:50:54,684-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,684-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,684-0300

Notice there are log entries from millisecond 684 written after entries from millisecond 705. Timestamps should always appear in ascending order.
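A quicker sanity check is to let sort -c report the first out-of-order timestamp. This is a minimal sketch along the same lines as the command above; the grep assumes that the relevant entries start with the timestamp and filters out continuation lines such as stack traces:

$ grep -E -a "^20[0-9]{2}-[0-9]{2}" atlassian-jira.log | cut -d" " -f-2 | sort -c

If the timestamps are fully ordered, the command prints nothing; otherwise GNU sort reports the first offending line, for example sort: -:4: disorder: 2020-11-12 10:50:54,684-0300.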

The following command (for Linux or other Unix-like systems) prints the out-of-order lines and the total count:

$ grep -E -a -h "^20[0-9]{2}-[0-9]{2}" application-logs/atlassian-jira.log | awk 'BEGIN {prevdate=$1" "$2; prevline=$0; ordercount=0}; {currdate=$1" "$2; currline=$0; if (prevdate > currdate) {{printf "%s\n%s\n\n", substr(prevline, 1, 100)"...", substr(currline, 1, 100)"..."}; ordercount++}; prevdate=currdate; prevline=currline}; END {print "Number of out-of-order entries: "ordercount}'

Sample output:

2024-03-20 14:56:06,070+0000 https-jsse-nio-8443-exec-47 url: /browse/JIRA-123456, /secure/ProjectIs...
2024-03-20 14:56:06,069+0000 Caesium-1-4 DEBUG ServiceRunner [c.o.scriptrunner.runner.ScriptBind...

2024-03-20 15:46:19,573+0000 https-jsse-nio-8443-exec-34 url: /rest/scriptrunner/behaviours/latest/v...
2024-03-20 15:46:19,572+0000 https-jsse-nio-8443-exec-122 url: /rest/scriptrunner/behaviours/latest/...

2024-03-20 15:58:50,748+0000 https-jsse-nio-8443-exec-123 url: /rest/wrm/2.0/resources; user: admini...
2024-03-20 15:58:50,747+0000 https-jsse-nio-8443-exec-146 url: /rest/scriptrunner/behaviours/latest/...

Number of out-of-order entries: 3

The following command (for Linux or other Unix-like systems) groups the log entries by verbosity level and package, and prints the top 10:

$ grep -a -E " (DEBUG|INFO|WARN|ERROR|SEVERE|FATAL) " -h atlassian-jira.log* | awk '{match($0, / (DEBUG|INFO|WARN|ERROR|SEVERE|FATAL) /); my_level=substr($0, RSTART+1, RLENGTH-2); match($0, /\[[a-zA-Z0-9\-_ \.]+\]/); my_package=substr($0, RSTART, RLENGTH); print my_level, my_package}' | sort | uniq -c | sort -nr | column -t | head -10
86455  DEBUG  [c.o.scriptrunner.runner.ScriptBindingsManager]
46103  DEBUG  [c.o.jira.groovy.Behaviour]
43116  WARN   [c.o.scriptrunner.runner.ScriptBindingsManager]
28296  WARN   [c.a.j.plugin.editor.RteDataProvider]
23922  DEBUG  [c.o.jira.behaviours.BehaviourManagerImpl]
4557   DEBUG  [c.o.j.groovy.jql.ScriptedFunctionClauseFactory]
3034   ERROR  [c.o.scriptrunner.customfield.GroovyCustomField]
2459   INFO   [c.a.j.p.webhooks.matcher.JqlEventMatcher_SLOW]
704    WARN   [c.a.j.issue.index.DefaultIssueIndexer]
672    INFO   [c.a.jira.index.QueueingIndexStats]

You can replace head -10 with a higher number to list more entries.

In this example, most entries come from ScriptRunner packages (com.onresolve, abbreviated as c.o in the log entries) logging at DEBUG level.
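To gauge how fast these messages are being written, a similar pipeline can count DEBUG entries per minute. This is a sketch using the same log files as above; cut -c1-16 truncates each timestamp down to the minute:

$ grep -a " DEBUG " -h atlassian-jira.log* | cut -c1-16 | sort | uniq -c | sort -nr | head -10

A rate of thousands of DEBUG entries per minute is consistent with the logging contention described in the Cause section below.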

Cause

I/O operations such as writing to a file are exclusively locked, so only one thread at a time can write to a given file. When something causes too much concurrency on these writes, threads get blocked waiting for the lock too often, and the application hangs or runs slower than usual.
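If thread dumps were captured while the problem was happening (for example with jstack), counting the threads in each state is a quick way to confirm this contention; the dump file name below is only a placeholder. A large number of BLOCKED threads, especially ones with logging or appender frames in their stacks, supports this cause:

$ grep "java.lang.Thread.State" jira_thread_dump.txt | sort | uniq -c | sort -nr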

This is usually because DEBUG logging was enabled in Jira, or because some process started logging far too much information; in the latter case, the flood may consist of INFO or WARN messages instead of DEBUG.

The timestamp is captured almost at the same moment the message is written to the file, so entries should normally appear in order. When a message with an earlier timestamp shows up after one with a later timestamp, it means the thread had to wait to write its entry, which is exactly the contention that makes the system hang or slow down.

Solution

If you have recently enabled DEBUG logging or profiling on the Logging and Profiling admin page, revert the change or restart Jira, since any logging changes made through the UI are reverted on restart.

If you have recently edited the log4j configuration file to increase the verbosity of any package, revert that edit as well and restart Jira.

If no recent changes were made, or a restart didn't fix the problem, lower the log level of the specific packages that are logging so many messages and check whether that resolves the issue.
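As a sketch of that last step, assuming the packages identified in the Diagnosis section (com.onresolve in the example above) and the Jira 7/8 log4j configuration file at <jira-install>/atlassian-jira/WEB-INF/classes/log4j.properties, you could lower the verbosity of the package to WARN. The appender names below follow Jira's default log4j.properties and may differ in your file:

log4j.logger.com.onresolve = WARN, console, filelog
log4j.additivity.com.onresolve = false

Changes to log4j.properties only take effect after a restart. The same package can also be set to WARN without a restart through Administration > System > Logging and Profiling, but changes made through the UI are lost on the next restart.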

Updated on March 17, 2025
