XML Import Failed with 'Too many open files' Error

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15th, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Atlassian recommends disabling the XML backup, for both performance and reliability reasons. Setting up a test server and implementing the Production Backup Strategy are better done with an SQL dump, and upgrading Confluence is better done without the XML backup.

The one operation for which an XML backup is required is database migration; for this we recommend a commercial database migration tool. Vote for CONF-12599 to request a more robust strategy for large-implementation migrations. Atlassian does not support migrating to a new database.

Symptoms

The XML import fails, with a stack trace like the following in the logs:

    2009-08-20 16:09:52,753 ERROR [Importing data task] [confluence.importexport.impl.BackupImporter] restoreDirectory Couldn't restore directory from backup!
    Caused by: java.io.IOException: Too many open files
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createNewFile(File.java:883)
        at com.atlassian.core.util.FileUtils.copyFile(FileUtils.java:465)
        at com.atlassian.core.util.FileUtils.copyDirectory(FileUtils.java:351)
        at com.atlassian.confluence.importexport.impl.FileBackupImporter.restoreDirectory(FileBackupImporter.java:457)
        ... 20 more

Often you will also see "Import failed. Error finding and producing a stream of the export properties file" in the XML import dialog.

Cause

This error means the Confluence process has reached the operating system's limit on the number of files it can have open at once.

This can occur in Confluence instances with particularly large heap allocations. On each search, a lock file is created on the file system and then deleted; however, the underlying file handle is not released by the JVM until a full garbage collection removes the dereferenced objects and closes each handle through the object's finalize() method. If no full collection has run for some time, the accumulated handles can trigger this error.
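
To make the mechanism concrete, here is a minimal, hypothetical Java sketch (illustrative only, not Confluence's actual code) showing that dropped-but-unclosed file streams keep their OS descriptors until the collector reclaims them:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class FileHandleDemo {
        public static void main(String[] args) throws IOException {
            // Use a scratch file; run this on a test machine only, as it
            // deliberately exhausts the per-process file descriptor limit.
            File lockFile = File.createTempFile("demo", ".lock");
            lockFile.deleteOnExit();
            List<FileInputStream> streams = new ArrayList<>();
            try {
                while (true) {
                    // Each un-closed stream holds an OS file descriptor.
                    streams.add(new FileInputStream(lockFile));
                }
            } catch (IOException e) {
                // On Linux this typically reports "Too many open files"
                // once the ulimit -n ceiling is reached.
                System.out.println("Failed after " + streams.size()
                        + " open streams: " + e.getMessage());
            }
            // Dropping the references is not enough: the descriptors are
            // only returned to the OS once the collector reclaims the
            // unreachable stream objects (System.gc() is just a hint).
            streams.clear();
            System.gc();
            System.runFinalization();
            try (FileInputStream retry = new FileInputStream(lockFile)) {
                System.out.println("Descriptor available again after collection.");
            }
        }
    }

This is why a long gap between full collections can exhaust the descriptor limit even though the lock files themselves have already been deleted from disk.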

Please note that there is also a bug (CONF-16067) in Confluence 3.x and earlier, where com.atlassian.core.util.FileUtils.copyDirectory does not close the input stream properly.

Resolution

There are a few suggestions to try, in no particular order:

  1. On Linux, raise the open file limit for the user Confluence runs as, for example with ulimit -n 10000, and make sure the new limit is set in that user's environment (see the descriptor-limit sketch after this list).

  2. If you are importing a complete site backup, perform the index creation separately rather than as part of the import.

  3. This issue can also affect space imports that involve a large number of attachments. If this happens to you, you can try:

    1. Importing the backup without attachments. The attachments can be copied into the Confluence home directory instead (see the attachment-copy sketch after this list).

    2. Increasing the total number of file descriptors in your OS (either the per-process limit or the system-wide limit can work; the descriptor-limit sketch after this list shows both). The way to increase file descriptors varies by operating system, so please consult your OS documentation for detailed instructions.
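
As a rough descriptor-limit sketch for suggestions 1 and 3.2 above, on Linux (the limit value and user name are examples; other operating systems differ):

    # Check the per-process open file limit for the current user
    ulimit -n

    # Raise it for this shell session (and any Confluence process
    # started from it); 10000 is an example value
    ulimit -n 10000

    # To make the change persistent on most Linux distributions, add
    # lines like these to /etc/security/limits.conf, assuming Confluence
    # runs as the "confluence" user, then log in again as that user:
    #   confluence  soft  nofile  10000
    #   confluence  hard  nofile  10000

    # The system-wide ceiling can be checked (and raised) via sysctl:
    sysctl fs.file-max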
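
For suggestion 3.1, an attachment-copy sketch, assuming both instances store attachments on the file system rather than in the database (all paths are placeholders):

    # Run on the target server after the attachment-less import completes.
    # /path/to/source-home and /path/to/target-home stand in for the
    # Confluence home directories of the old and new instances.
    rsync -a /path/to/source-home/attachments/ /path/to/target-home/attachments/

    # Ensure the copied files are owned by the user Confluence runs as
    # (here assumed to be "confluence"):
    chown -R confluence:confluence /path/to/target-home/attachments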

