Duplicate Key error after Bitbucket Data Center database copy
Platform Notice: Data Center Only - This article only applies to Atlassian apps on the Data Center platform.
Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product, however they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
Bitbucket throws a java.lang.IllegalStateException: Duplicate key com.atlassian.crowd.manager.directory.monitor.poller.DirectoryPollerManager.<directoryid> error at startup, and every two minutes thereafter, after its database has been copied from another Bitbucket instance. The error does not occur on the original instance from which the data was copied.
Most commonly, this occurs when setting up a staging environment with production data, or when migrating the database to a new Bitbucket server.
Environment
This was observed on Bitbucket Data Center 8.19.20 but may apply to other versions.
Full error log
The atlassian-bitbucket.log file displays the following error message:
2025-08-28 18:52:28,878 ERROR [Caesium-1-2] c.a.scheduler.core.JobLauncher Scheduled job with ID 'com.atlassian.crowd.manager.directory.monitor.DirectoryMonitorRefresherStarter-job' failed
java.lang.IllegalStateException: Duplicate key com.atlassian.crowd.manager.directory.monitor.poller.DirectoryPollerManager.32770 (attempted merging values com.atlassian.crowd.directory.DbCachingRemoteDirectory@c91f88 and com.atlassian.crowd.directory.DbCachingRemoteDirectory@60eed73c)
at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:135)
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:182)
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:411)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at com.atlassian.crowd.manager.directory.monitor.DirectoryMonitorRefresherJob.runJob(DirectoryMonitorRefresherJob.java:115)
at com.atlassian.scheduler.core.JobLauncher.runJob(JobLauncher.java:134)
at com.atlassian.scheduler.core.JobLauncher.launchAndBuildResponse(JobLauncher.java:106)
at com.atlassian.scheduler.core.JobLauncher.launch(JobLauncher.java:90)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.launchJob(CaesiumSchedulerService.java:518)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJob(CaesiumSchedulerService.java:513)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJobWithRecoveryGuard(CaesiumSchedulerService.java:537)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeQueuedJob(CaesiumSchedulerService.java:433)
at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeJob(SchedulerQueueWorker.java:66)
at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeNextJob(SchedulerQueueWorker.java:60)
at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.run(SchedulerQueueWorker.java:35)
    at java.base/java.lang.Thread.run(Thread.java:840)
Cause
The cwd_directory table contains rows with duplicate values in the ID column. The DirectoryMonitorRefresherJob builds a map keyed by directory ID, and the duplicate IDs cause that map construction to fail with the exception above.
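To confirm this is the cause, you can check the cwd_directory table for duplicate IDs directly. The query below is a generic sketch that should work on any supported database; on a healthy instance it returns no rows.

```sql
-- List directory IDs that appear more than once in cwd_directory.
-- A healthy Bitbucket database returns zero rows here.
SELECT id, COUNT(*) AS occurrences
FROM cwd_directory
GROUP BY id
HAVING COUNT(*) > 1;
```

If this query returns any rows, the copied database contains the duplicates described above.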
Solution
Bitbucket enforces the unique constraints sys_pk_10127 and sys_idx_sys_ct_10128_10130, which prevent duplicate id and lower_directory_name values in the cwd_directory table. However, if the data becomes duplicated during the database dump or import process, and these constraints are either missing from the table definition or not enforced by the database during the import, this error can occur.
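You can also verify whether the expected constraints survived the import. The exact query is database-specific; on PostgreSQL, for example, the following lists all constraints defined on cwd_directory (note that constraint names such as sys_pk_10127 can differ between databases and versions):

```sql
-- PostgreSQL example: list constraints on cwd_directory.
-- contype 'p' = primary key, 'u' = unique constraint.
SELECT conname, contype
FROM pg_constraint
WHERE conrelid = 'cwd_directory'::regclass;
```

If no primary key or unique constraint covering the id column appears, the import did not recreate the constraints, which is how the duplicate rows were able to load.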
Attempt a fresh database dump and import
The issue is introduced outside of Bitbucket, during the database dump or import process. The best approach is to redo the procedure, as duplicates might exist in multiple tables, making it impractical to identify and fix them individually.
If the issue arises while creating the staging environment, please redo the procedure by following How to setup staging or test server environments for Bitbucket Data Center. Alternatively, if it occurs during the migration of Bitbucket Data Center to another server, redo the procedure by following Migrating Bitbucket Data Center to another server.