Too many open files error in Jira server

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

Symptoms

A Jira application experiences a general loss of functionality in several areas.

The following error appears in the atlassian-jira.log:

java.io.IOException: java.io.IOException: Too many open files

To identify the current soft limit on open file descriptors, run:

ulimit -aS | grep open

Then look at the "open files" row, which looks like this:

open files (-n) 2560
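The value reported by ulimit -aS is the soft limit. As a quick cross-check, the hard limit (the ceiling to which the soft limit can be raised without root privileges) can be inspected the same way:

ulimit -aH | grep open

Alternatively, ulimit -Sn and ulimit -Hn print just the soft and hard values for open files.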

Diagnosis

Option 1: lsof command

To identify open files that are completely unlinked (deleted on disk but still held open), the lsof +L1 command can be used, for example:

lsof +L1 > open_files.txt

COMMAND  PID USER   FD TYPE DEVICE SIZE/OFF NLINK     NODE NAME
java    2565 dos  534r  REG   8,17    11219     0 57809485 /home/dos/deploy/applinks-jira/temp/jar_cache3983695525155383469.tmp (deleted)
java    2565 dos  536r  REG   8,17    29732     0 57809486 /home/dos/deploy/applinks-jira/temp/jar_cache5041452221772032513.tmp (deleted)
java    2565 dos  537r  REG   8,17   197860     0 57809487 /home/dos/deploy/applinks-jira/temp/jar_cache6047396568660382237.tmp (deleted)
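lsof can also give a rough total of every descriptor held by a single process. A minimal sketch, assuming the Jira JVM's PID can be found with pgrep (the jira match pattern is illustrative; adjust it to your service account or start-up command):

# Find the Jira JVM's process ID (pattern is an example; adjust for your installation)
JIRA_PID=$(pgrep -f jira | head -n 1)

# Count every file descriptor currently held by that process
lsof -p "$JIRA_PID" | wc -l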

Option 2: /proc examination

The Linux kernel exports information about all running processes via the proc pseudo file system, usually mounted at /proc.

In the /proc pseudo file system, the open file descriptors can be found under /proc/<pid>/fd/, where <pid> is the Java process ID.

To count the files opened by the Java process, run:

ls -U /proc/<JAVA-PID>/fd | wc -l

or, to list the files themselves:

ls -lU /proc/<JAVA-PID>/fd | tail -n +2
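To see at a glance how close the process is to its ceiling, the live count can be read alongside the per-process limit. A short sketch, reusing the JIRA_PID variable from the earlier example (or substitute the real process ID):

# Live descriptor count for the Jira JVM
ls -U /proc/$JIRA_PID/fd | wc -l

# Soft and hard limits that apply to that same process
grep 'Max open files' /proc/$JIRA_PID/limits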

If the resolution below does not work, Atlassian Support can investigate further.

To request support assistance, please provide the following:

  1. The output of Option 1 and/or Option 2.

    ⚠️ The commands must be executed by the Jira user, or a user who can view the files. For example, if Jira is running as root (which is not at all recommended), executing this command as jira will not show the open files.

  2. A heap dump taken at the time of the exception being thrown, as per Generating a Heap Dump.

  3. A Jira application Support ZIP.

Cause

UNIX systems have a limit on the number of files that can be concurrently open by any one process. The default for most distributions is only 1024 files, and for certain configurations of Jira applications this is too small a number. When that limit is hit, the above exception is generated and Jira applications can fail to function, as they cannot open the files required to complete the current operation.

Certain known bugs in the application can cause this behavior (see, for example, JRASERVER-29587 in the Solution below), and there are open improvement requests to handle this better in Jira applications.

Solution

These changes only work on an installation that uses the built-in init.d script to start Jira. For installations that use a custom systemd service (common on recent Linux distributions), the changes must be applied directly in the systemd service configuration (i.e., update the /usr/lib/systemd/system/jira.service file, followed by systemctl daemon-reload) in the form of:

[Service]
LimitNOFILE=20000
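After reloading, the value systemd will apply can be confirmed before restarting (a sketch, assuming the unit is named jira.service as above; the running JVM only picks up the new limit once the service is restarted):

sudo systemctl daemon-reload
systemctl show jira --property=LimitNOFILE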

In most cases, setting the ulimit in Jira's setenv.sh should work:

  1. If the $JIRA_HOME/caches/indexes folder is mounted over NFS, move it to a local mount (i.e. storage on the same server as the Jira instance). NFS is not supported, as per our Jira application Supported Platforms, and will cause this problem to occur at a much higher frequency.

  2. Stop the Jira application.

  3. Edit $JIRA_INSTALL/bin/setenv.sh to include the following at the top of the file:

    ulimit -n 16384

    This sets the value each time the Jira application is started; however, it will need to be manually reapplied when upgrading Jira applications.

  4. Start your Jira application.

  5. The changes can be verified by reading /proc/<pid>/limits, where <pid> is the application process ID, as shown below.
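    For example (substitute the actual Jira process ID for <pid>):

    grep 'Max open files' /proc/<pid>/limits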

If you are using a Jira application version with the bug in JRASERVER-29587, upgrade it to the latest version. If using NFS, migrate to a local storage mount.

Note that some operating systems may require additional configuration to set these limits.

For most Linux systems you would modify the limits.conf file:

  1. To modify the limits.conf file, use the following command:

    sudo vim /etc/security/limits.conf

  2. Add/Edit the following for the user that runs the Jira application. If you have used the bundled installer, this will be jira.

    limits.conf

    #<domain>      <type>  <item>         <value>
    #
    #*             soft    core           0
    #root          hard    core           100000
    #*             hard    rss            10000
    #@student      hard    nproc          20
    #@faculty      soft    nproc          20
    #@faculty      hard    nproc          50
    #ftp           hard    nproc          0
    #ftp           -       chroot         /ftp
    #@student      -       maxlogins      4
    jira           soft    nofile         16384
    jira           hard    nofile         32768
  3. Modify the common-session file with the following:

    sudo vim /etc/pam.d/common-session

    ℹ️ common-session is a file only available in Debian/Ubuntu.

  4. Add the following line (a quick verification sketch follows this list):

    common-session

    # The following changes were made from the JIRA KB (https://confluence.atlassian.com/display/JIRAKB/Loss+of+Functionality+due+to+Too+Many+Open+Files+Error):
    session required pam_limits.so
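Once limits.conf and pam_limits are in place, the new limits can be verified by opening a fresh session as the Jira user (assuming the service account is named jira, as above):

# A fresh login session is needed for pam_limits to apply the new values
su - jira -c 'ulimit -Sn; ulimit -Hn'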

In some circumstances it is necessary to check and configure the limits globally. This is done in two different places; it is advised to check which values are in effect on your system and configure accordingly:

  1. The first place to check is the sysctl.conf file.

    1. Modify the sysctl.conf file with the following command:

      sudo vim /etc/sysctl.conf

    2. Add the following to the bottom of the file or modify the existing value if present:

      sysctl.conf

      fs.file-max=16384

    3. Changes to sysctl.conf are applied at boot. To apply the new limit immediately, run the following command:

      sysctl -p
  2. The other place to configure this is within the sysctl.d directory. The filename can be anything you like as long as it ends with ".conf", e.g. "30-jira.conf"; the numeric prefix determines the order in which the files are applied (a verification sketch follows this list).

    1. To create/edit the file, use the following command:

      sudo vim /etc/sysctl.d/30-jira.conf

    2. Add the following to the bottom of the file or modify the existing value if present:

      30-jira.conf

      fs.file-max=16384
  3. ℹ️ Note: for RHEL/CentOS/Fedora/Scientific Linux, you'll need to modify the login file with the following:

    sudo vim /etc/pam.d/login

  4. Add the following line to the bottom of the file:

    login

    # The following changes were made from the JIRA KB (https://confluence.atlassian.com/display/JIRAKB/Loss+of+Functionality+due+to+Too+Many+Open+Files+Error):
    session required pam_limits.so
  5. See this external blog post for a more detailed write-up on configuring the open file limits in Linux: Linux Increase The Maximum Number Of Open Files / File Descriptors (FD)
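Once the global settings are in place, the system-wide ceiling and current usage can be checked directly; /proc/sys/fs/file-nr reports the number of allocated handles, the number of unused handles, and the maximum, in that order:

# System-wide maximum and current file-handle usage
sysctl fs.file-max
cat /proc/sys/fs/file-nr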

Note: for any other operating system, please see your operating system's manual for details.

Lastly, restart the Jira application for the changes to take effect.

For exceedingly large instances, we recommend consulting with our partners for scaling Jira applications. 
