Messages about database inserts violating cluster_lock_name_idx are safe to ignore

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

Problem

An error message indicating an insert failure due to a cluster_lock_name_idx violation may occasionally appear in logs associated with Jira. This issue may be observed:

  • In the logs of some database servers, either with default logging settings (for example, PostgreSQL) or with customized ones.

  • In the Jira log, in specific situations.

Example of error in Postgres log

2022-08-18 06:15:17.304 CEST [28256] ERROR:  duplicate key value violates unique constraint "cluster_lock_name_idx"
2022-08-18 06:15:17.304 CEST [28256] DETAIL:  Key (lock_name)=(sla_issue_update_103176) already exists.
2022-08-18 06:15:17.304 CEST [28256] STATEMENT:  insert into public.clusterlockstatus (lock_name, update_time, id)     values ($1, $2, $3)

Examples appearing in the Jira log with non-default logging configuration

Duplicate entry 'xray.class com.xpandit.raven.customfield.TestRunStatusCustomFiel' for key 'clusterlockstatus.cluster_lock_name_idx'
"SQLServerException: Cannot insert duplicate key row in object 'dbo.clusterlockstatus' with unique index 'cluster_lock_name_idx'. The duplicate key value is (sla_issue_update_xxxxxxxx)"

Solution

Impact on Jira

This error message may be safely disregarded. It arises during a normal phase of a Cluster Lock's lifecycle, specifically when a node synchronizes its state with the clusterlockstatus table.

More details on why this happens

Cluster Locks in Jira Data Center are a mechanism that gives one thread running on a single node exclusive access to a resource or exclusive rights to perform an operation (e.g. fetching e-mail from a remote server and creating new issues based on it). The information on which node holds a Cluster Lock is kept in the clusterlockstatus database table. In addition, the node holding the lock has to be aware of this database record's state, effectively holding its own copy of that state. Each lock is identified solely by a unique name. To guarantee uniqueness, the clusterlockstatus table has a unique index, cluster_lock_name_idx, defined on the column storing that value: lock_name.
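The effect of this unique index can be reproduced with an in-memory SQLite database as a stand-in for the real Jira schema (the actual column types and DDL vary by database; this is only a sketch of the constraint's behavior):

```python
import sqlite3

# In-memory stand-in for the Jira schema; real column types differ per database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clusterlockstatus (id INTEGER, lock_name TEXT, update_time INTEGER)")
# The unique index named in the error message, defined on lock_name.
conn.execute("CREATE UNIQUE INDEX cluster_lock_name_idx ON clusterlockstatus (lock_name)")

conn.execute("INSERT INTO clusterlockstatus (lock_name, update_time, id) VALUES (?, ?, ?)",
             ("sla_issue_update_103176", 0, 1))
try:
    # A second insert with the same lock_name violates the unique index,
    # just like the duplicate-key errors shown in the logs above.
    conn.execute("INSERT INTO clusterlockstatus (lock_name, update_time, id) VALUES (?, ?, ?)",
                 ("sla_issue_update_103176", 0, 2))
except sqlite3.IntegrityError as e:
    print(e)  # the SQLite equivalent of the duplicate key value error
```

Only one row per lock name can ever exist, which is exactly the guarantee Cluster Locks rely on.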

As can be expected, a duplicate entry error due to the violation of cluster_lock_name_idx happens when a node attempts to insert a record into the clusterlockstatus table when that table already contains a lock record with the given name. A node needs to perform this operation to guarantee that a lock record with a given name exists before attempting to claim it in a non-blocking way. To do that with a minimal number of database queries, cluster nodes perform a blind insert the first time each lock is used during the node's runtime. Regardless of the outcome of this operation, the Cluster Lock claim attempt proceeds as long as the record is confirmed to exist.
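The "ensure the record exists, then claim it" pattern described above can be sketched as follows. The function names and the locked_by_node column are hypothetical simplifications; the real Jira DAO logic differs, but the key point survives: the duplicate-key error is swallowed, and the claim proceeds.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clusterlockstatus (lock_name TEXT, locked_by_node TEXT)")
conn.execute("CREATE UNIQUE INDEX cluster_lock_name_idx ON clusterlockstatus (lock_name)")

def ensure_lock_record(lock_name):
    """Blind insert: a duplicate-key error just means the record already exists."""
    try:
        conn.execute("INSERT INTO clusterlockstatus (lock_name, locked_by_node) VALUES (?, NULL)",
                     (lock_name,))
    except sqlite3.IntegrityError:
        pass  # Benign: another node (or an earlier attempt) created the record first.

def try_claim(lock_name, node_id):
    """Non-blocking claim: succeeds only if the lock is currently unheld."""
    ensure_lock_record(lock_name)
    cur = conn.execute(
        "UPDATE clusterlockstatus SET locked_by_node = ? "
        "WHERE lock_name = ? AND locked_by_node IS NULL",
        (node_id, lock_name))
    return cur.rowcount == 1

print(try_claim("sla_issue_update_103176", "node1"))  # True: node1 acquires the lock
print(try_claim("sla_issue_update_103176", "node2"))  # False: node1 still holds it
```

Whether the blind insert succeeds or collides with an existing row, both nodes end up with the guarantee they need: the record exists, and exactly one of them holds the lock.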

Error message occurrence

This error isn't logged in Jira logs as part of the default logging configuration, and it's not escalated to caller processes when it happens. It's only expected to appear in some database servers' logs, either with default logging settings (for example, PostgreSQL) or with customized ones.

With the (off-by-default) TRACE level enabled for the JiraClusterLockQueryDSLDao class, Jira does log the occurrence of this error, but only in the following form:

"Lock <lockname> already exists, skipping insert."

FAQ

How do I know the error happening on my instance doesn’t indicate some unexpected app behavior after all?

Cluster Locks are designed with the assumption that any other node can acquire them at any point during an acquisition attempt, so they gracefully handle situations such as missing records unexpectedly appearing or becoming claimed. Even if, hypothetically, some external process were to insert records into this table (which is highly discouraged), the affected Cluster Locks would remain usable and unaffected.

Note: We highly discourage any manipulation (especially deletions or updates) of the clusterlockstatus table used by an active cluster.

Should I perform operations advised in the knowledge base for duplicate constraint violations just in case?

Our knowledge base contains some articles with general recommendations on how to proceed in case unique constraint violations or duplicate entry errors are visible in Jira logs, for example:

https://confluence.atlassian.com/jirakb/duplicated-entry---unique-constraint-violated-397808094.html

https://confluence.atlassian.com/jirakb/duplicate-key-value-errors-in-logs-in-jira-server-using-postgresql-958771364.html

These recommendations aren't expected to reveal anything meaningful in the case of errors related to the cluster_lock_name_idx index or the lock_name column. As explained in more detail above, Cluster Locks are designed to expect and gracefully handle this situation.

Why is this error logged on some Jira instances more often than on others?

Cluster Locks are used by many functionalities built into core Jira, Jira Software, Jira Service Management, and apps from both Atlassian and third-party developers. Some of these functionalities use a finite number of locks, which greatly decreases the frequency of this error message and limits its occurrence mostly to a period shortly after a node has started. Other functionalities, however, need to create unique locks as part of their lifecycle (the most recognizable example being SLA updates in Jira Service Management), for example locks tied to an issue key or some other source of non-finite values. If a Jira cluster uses such functionalities, they may produce these logs in proportion to traffic and throughout the entire runtime of cluster nodes.
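This frequency difference can be illustrated with a toy model. The `known_to_node` cache below is a simplification of the behavior described above (a node only attempts the blind insert the first time it uses a given lock name); all names here are illustrative:

```python
# Toy model: a finite lock set stops producing duplicate-key attempts soon after
# startup, while per-issue lock names keep producing them as traffic arrives.
existing_in_db = set()   # lock records already present cluster-wide
known_to_node = set()    # lock names this node has already ensured exist

def use_lock(name):
    """Returns True if the blind insert would collide with cluster_lock_name_idx."""
    if name in known_to_node:
        return False                      # no insert attempted at all
    known_to_node.add(name)
    duplicate = name in existing_in_db    # another node inserted the record first
    existing_in_db.add(name)
    return duplicate

# Suppose another node already created these records:
existing_in_db.update({"mail_fetch", "sla_issue_update_1", "sla_issue_update_2"})

print(use_lock("mail_fetch"))          # True:  first use of a finite lock collides
print(use_lock("mail_fetch"))          # False: finite lock, no further inserts
print(use_lock("sla_issue_update_1"))  # True:  per-issue locks keep colliding...
print(use_lock("sla_issue_update_2"))  # True:  ...one collision per new issue
```

A finite lock set exhausts its first-use collisions quickly, while an unbounded name source (one lock per issue) generates new first uses, and therefore potential collisions, for as long as the node runs.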

Updated on April 2, 2025
