
What are the requirements and limits of Data Manager?

Assets Data Manager for Jira Service Management Cloud is currently rolling out in Open Beta and will be available to all Premium + Enterprise sites by end of October 2024.

Although Assets Data Manager is built to handle very large data sets, it does have a maximum load.

Each of these limits has potential workarounds, which may involve modifying your local system or modifying how your data is structured in Data Manager.

What is the minimum disk space required?

The minimum disk space required can be calculated by: (the total size of local files) * (number of days of storage).

The Cleanse and Import Client runs on your local system. Because this is a middleware client running locally, the amount of data it can handle is entirely dependent on your system hardware and configuration.

If you are loading large files or adding data frequently, you will need to ensure that you have enough disk space to hold your data. To save disk space, consider removing files that were processed more than 14 days ago.

If you are using data from tools or databases, but not files, the disk space requirements may be reduced.

For example:

If you have 10 CSV files totalling less than 100 MB, you will need 100 MB * 14 days = 1.4 GB, so allow roughly 1.5 GB of available disk space (on top of any OS requirements).
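The disk-space formula above can be sketched as a short calculation (the function name and defaults are illustrative, not part of the product):

```python
def required_disk_space_mb(total_local_file_size_mb: float, days_of_storage: int = 14) -> float:
    """Estimate disk space needed to retain processed files for the given
    number of days: (total size of local files) * (days of storage)."""
    return total_local_file_size_mb * days_of_storage

# e.g. 100 MB of CSV files retained for the default 14 days:
print(required_disk_space_mb(100))  # 1400 (MB), i.e. ~1.4 GB
```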

What is the minimum memory required?

The Cleanse and Import Client runs locally on your system. If you load large files or add data frequently, you will need to ensure that you have enough RAM to run the client.

The memory requirements may be reduced if you are using data from tools or databases rather than flat files such as CSV.

For example:

If you are using Flat Files and have 75,000 objects, we recommend having at least 8 GB of memory available.

If you are using SQL or APIs and have 175,000 objects, we recommend having at least 8 GB of memory available.

Some adapters load all data into memory, so the impact on memory should be reviewed during the first load and adjusted as required.

The sizing guide below should help you determine the amount of storage you’ll need to account for.

Data Manager Sizing Guide

Flat Files

| Number of compute objects | Number of flat file sources | Memory on local server (GB) | Additional storage on local server (GB) |
|---|---|---|---|
| 1,000 | 3 | 8 | 50 |
| 25,000 | 4 | 8 | 50 |
| 50,000 | 5 | 8 | 50 |
| 100,000 | 6 | 8 | 50 |
| 250,000 | 8 | 16 | 80 |
| 500,000 | 10 | 16 | 80 |
| 1,000,000 | 12 | 16 | 100 |

  • Assumes 15% growth in the number of objects over the next 3 years.

SQL & ODBC or APIs

| Number of compute objects | Number of direct connections to source | Memory on local server (GB) | Additional storage on local server (GB) |
|---|---|---|---|
| 1,000 | 3 | 8 | N/A |
| 25,000 | 4 | 8 | N/A |
| 50,000 | 5 | 8 | N/A |
| 100,000 | 6 | 8 | N/A |
| 250,000 | 8 | 16 | N/A |
| 500,000 | 10 | 16 | N/A |
| 1,000,000 | 12 | 16 | N/A |

  • Assumes 15% growth in the number of objects over the next 3 years.

  • The table above applies when connecting to data sources via SQL & ODBC or API, except for the Qualys, ServiceNow, and FNMS/Flexera One adapters, which load all data into memory.

  • Please monitor memory usage when first running the Adapters Client job to ensure that there is sufficient memory. If you are using a combination of the Flat Files and SQL & ODBC or API connectors, please take the higher of the two recommendations.
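The sizing tables and the "take the higher of the two" rule can be sketched as a simple lookup. The tier values are copied from the tables above (memory recommendations are the same for both connector types); the function names are illustrative:

```python
import bisect

# Sizing tiers from the tables above: object count -> recommended memory (GB).
OBJECT_TIERS = [1_000, 25_000, 50_000, 100_000, 250_000, 500_000, 1_000_000]
MEMORY_GB = [8, 8, 8, 8, 16, 16, 16]

def memory_for(objects: int) -> int:
    """Return the memory recommendation for the smallest tier covering the count."""
    i = bisect.bisect_left(OBJECT_TIERS, objects)
    return MEMORY_GB[min(i, len(MEMORY_GB) - 1)]

def recommended_memory_gb(flat_file_objects: int, sql_api_objects: int) -> int:
    """Combined deployments take the higher of the two recommendations."""
    return max(memory_for(flat_file_objects), memory_for(sql_api_objects))

print(recommended_memory_gb(50_000, 250_000))  # 16
```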

What is the maximum import size?

Assets Data Manager runs in the cloud within Jira Service Management. Because this is a cloud-based system, there are some hard limits to ensure that it remains available and efficient for all users.

The maximum import size based on ImportScore is 40,000.

The maximum import size can be calculated by determining the ImportScore for that import. Other factors, such as the number of existing data sources in the system, the cardinality of data across various data sources, the complexity of attributes, and the number of users currently using the UI may also affect the speed or success of jobs.

The ImportScore is calculated by multiplying the number of mapped attributes by the sum of Cleansed Records for the Object Class, then dividing by 1,000.

ImportScore (per object class) = (numOfMappedAttributes × sumOfCleansedRecords) / 1000

  • An import with 7 Attributes, 200k Cleansed Records would have an ImportScore of 1400.

  • An import with 16 Attributes, 80k Cleansed Records would have an ImportScore of 1280.
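The formula and the worked examples above can be checked with a few lines of Python (the function name is illustrative):

```python
def import_score(num_mapped_attributes: int, sum_cleansed_records: int) -> float:
    """ImportScore (per object class) = (mapped attributes x cleansed records) / 1000."""
    return num_mapped_attributes * sum_cleansed_records / 1000

MAX_IMPORT_SCORE = 40_000  # maximum import size stated above

print(import_score(7, 200_000))  # 1400.0 -- within the limit
print(import_score(16, 80_000))  # 1280.0 -- within the limit
```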

How do I reduce the ImportScore of a job?

If you attempt an Import that is larger than the maximum ImportScore, you will see the following message within your Import Results screen:

We're unable to process this import request because its ImportScore is greater than the maximum limit. Learn about ImportScore and how it can be reduced.

To resolve this, reduce the ImportScore of the import and try it again. There are three potential ways to reduce the ImportScore of an import; in most cases, you only need one of them:

  1. Reduce the number of attributes - In most cases, not all attributes from all data sources are needed. To reduce your ImportScore you can reduce the total number of attributes that are Imported by assigning them to <ignore> when you Configure your attribute mapping.

  2. Add more cleansing rules - Cleansing rules help reduce the size of an import by removing duplicate or erroneous records.

  3. Reduce the number of data sources - In some cases, not all Data Sources may be needed. Review your list of Data Sources and remove any that are not required.

What is the maximum number of Important attributes that are allowed?

56 Important attributes are allowed for each object class.

Important attributes are attributes that are compared against data sources to ensure that they are correctly and consistently represented in your data. You may select any attribute as important by checking the Important checkbox when editing that attribute in the Attributes screen.

When an important attribute is Imported, it is compared against data sources. If the data for an attribute matches all the compared sources, it is given the Verified flag.

