Disk quota exceeded with AWS EFS

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

Problem

When running Fisheye and Crucible in AWS where $FISHEYE_INST is mounted to an AWS Elastic File System (EFS), indexing of repositories may stop when certain EFS limits are reached.

The following appears in the atlassian-fisheye-YYYY-MM-DD.log:

Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/efs/var/cache/glass-100/idx2/write.lock: java.io.IOException: Disk quota exceeded

Diagnosis

Environment

  • Fisheye and Crucible running in AWS using EFS for $FISHEYE_INST

Diagnostic Steps

  • The number of locks currently acquired can be seen with lslocks | wc -l

  • The number of open files can be checked with sysctl fs.file-nr
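The two counts above can be compared against the EFS per-instance limits in one script. Below is a minimal sketch, assuming the host exposes lslocks and sysctl; the 8,192-lock and 32,768-file figures come from the AWS limits described in the Cause section, and the 90% warning threshold is an arbitrary example value. Note that lslocks and fs.file-nr count activity for the whole host, not just the EFS mount, so treat the results as upper bounds:

```shell
#!/bin/sh
# Compare current lock and open-file counts against the AWS EFS
# per-instance limits (values from the AWS documentation).
EFS_LOCK_LIMIT=8192        # locks per unique mount
EFS_OPEN_FILE_LIMIT=32768  # open files per instance

# Warn when a count reaches 90% of its limit (example threshold).
check_limit() {
  name=$1 current=$2 limit=$3
  if [ "$current" -ge $(( limit * 90 / 100 )) ]; then
    echo "WARNING: $name at $current of $limit"
  else
    echo "OK: $name at $current of $limit"
  fi
}

# lslocks prints a header line, so subtract 1 from the line count.
if command -v lslocks >/dev/null 2>&1; then
  locks=$(( $(lslocks | wc -l) - 1 ))
  check_limit "file locks" "$locks" "$EFS_LOCK_LIMIT"
fi

# fs.file-nr reports three fields: allocated, free, and maximum;
# the first field is the number of open file handles.
if command -v sysctl >/dev/null 2>&1; then
  open_files=$(sysctl -n fs.file-nr | awk '{print $1}')
  check_limit "open files" "$open_files" "$EFS_OPEN_FILE_LIMIT"
fi
```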

Cause

AWS EFS has limits on the number of active users that can have files open, the number of open files, and the number of locks that can be acquired. If one of these limits is exceeded, a Disk quota exceeded message appears in the logs. With Fisheye and Crucible, the open file limit and the lock limit are the ones most likely to be hit. The limits are:

  • Up to 128 active user accounts can have files open at once for an instance.

  • Up to 32,768 files can be open at once for an instance.

  • Each unique mount on the instance can acquire up to a total of 8,192 locks across 256 unique file-process pairs. For example, a single process can acquire one or more locks on 256 separate files, or eight processes can each acquire one or more locks on 32 files.

Information on the limits of EFS is available in AWS's documentation at https://docs.aws.amazon.com/efs/latest/ug/troubleshooting-efs-fileop-errors.html#diskquotaerror
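To see which processes contribute most to the per-instance open file count, open files on the EFS mount can be grouped by process ID. A minimal sketch, assuming lsof is installed; the /efs mount point matches the path in the log excerpt above, so adjust it to wherever your $FISHEYE_INST volume is mounted:

```shell
#!/bin/sh
# Count open files on the EFS mount, grouped by process ID.
# /efs is the mount point from the log excerpt; adjust as needed.
MOUNT_POINT=/efs

# lsof lists open files on the given filesystem; column 2 is the PID.
# Skip the header row, then count occurrences of each PID, largest first.
lsof "$MOUNT_POINT" 2>/dev/null \
  | awk 'NR > 1 { count[$2]++ } END { for (pid in count) print count[pid], pid }' \
  | sort -rn \
  | head
```

Processes near the top of this list are the first candidates when deciding which repositories to disable or which workloads to move off the EFS mount.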

Solution

Workaround

Since these are hard limits in EFS, options for working around them are limited. You may be able to reduce the number of open files, or disable unused repositories to free up open file handles and locks.

Resolution

Due to the way that Fisheye and Crucible use locks and open files, larger instances that are running into these limits will need to migrate to AWS Elastic Block Store (EBS), which does not impose the same limits.

Updated on April 2, 2025
