Find all Assets object custom fields in Jira Service Management Cloud
Platform Notice: Cloud Only - This article only applies to Atlassian products on the cloud platform.
Summary
This article provides an alternative solution that uses a Python script to export a list of all Assets object custom fields in your Jira Cloud site. This can help Jira admins better manage the existing fields linked to Assets.
Solution
It's not possible to sort and filter custom fields by type
Jira Cloud doesn't have the option to sort and filter custom fields by field type (yet). We have a feature request for this: JRACLOUD-64995 - Ability to sort and filter by type on custom field admin page.
Export a list of Assets object custom fields using a script
You can use the Get fields paginated endpoint of the Jira Cloud REST API to filter for Assets object custom fields.
Prerequisites
Replace the placeholders in the script with your actual Jira Cloud domain, email, and API token (see the example configuration after this list):
JIRA_BASE_URL: Your Jira Cloud instance URL (https://<your-site>.atlassian.net).
AUTH_EMAIL: Your Jira account email.
API_TOKEN: Your Jira API token, which you can generate from your Atlassian account security settings.
Install required Python libraries:
The script uses the requests library to make API calls. Install it via pip if it isn't already installed:
pip install requests
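For example, the configuration block at the top of the script might look like the following once the placeholders are filled in (the values below are illustrative only, not real credentials):
JIRA_BASE_URL = "https://your-site.atlassian.net"  # your Jira Cloud domain
AUTH_EMAIL = "jira-admin@example.com"              # your Jira account email
API_TOKEN = "your-api-token"                       # token generated from your Atlassian account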
How it works
The script queries the Jira Cloud REST API to retrieve fields in batches (maxResults=50).
It checks each field's schema to determine if it is of type "Assets object."
Matching fields are written to a CSV file.
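For reference, an Assets object custom field returned by the API has a schema similar to the following trimmed, illustrative entry from the response's values array (the field id and name will differ on your site; the schema values are the ones the script matches on):
{
    "id": "customfield_10055",
    "name": "Affected hardware",
    "schema": {
        "type": "array",
        "items": "cmdb-object-field",
        "custom": "com.atlassian.jira.plugins.cmdb:cmdb-object-cftype"
    }
}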
Example Output
If the script finds Assets object custom fields, it writes them to a CSV file named custom_fields_assets_object.csv in the same directory where the script is run.
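For example, the resulting CSV might look like this (the field IDs and names below are purely illustrative):
id,name
customfield_10055,Affected hardware
customfield_10102,Impacted services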
Here is an example of a Python script that achieves this:
This script is provided as-is without warranties and is unsupported. Use at your own discretion and test in a non-production environment. Scripts and custom development are not in the scope of Atlassian Support Offerings.
import requests
import csv
import time
import logging
from requests.exceptions import HTTPError
from requests.auth import HTTPBasicAuth

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s"
)

# Jira Cloud configuration
JIRA_BASE_URL = "https://[REPLACE-ME].atlassian.net"  # Replace with your Jira Cloud domain
API_ENDPOINT = "/rest/api/3/field/search"
AUTH_EMAIL = "[REPLACE-ME]"  # Replace with your Jira account email
API_TOKEN = "[REPLACE-ME]"  # Replace with your API token

# Headers for the API request
HEADERS = {
    "Accept": "application/json"
}
AUTH = HTTPBasicAuth(AUTH_EMAIL, API_TOKEN)

# Output CSV file
OUTPUT_CSV = "custom_fields_assets_object.csv"

# Function to check if a field is of type "Assets object"
def is_assets_object_field(field):
    try:
        schema = field.get("schema", {})
        return (
            schema.get("type") == "array"
            and schema.get("items") == "cmdb-object-field"
            and schema.get("custom") == "com.atlassian.jira.plugins.cmdb:cmdb-object-cftype"
        )
    except Exception as e:
        logging.error(f"Error checking field type: {e}")
        return False

# Function to fetch fields using the paginated API
def fetch_custom_fields():
    custom_fields = []
    start_at = 0
    max_results = 50  # Maximum allowed by the API

    while True:
        try:
            # Make API request with pagination
            response = requests.get(
                f"{JIRA_BASE_URL}{API_ENDPOINT}",
                headers=HEADERS,
                auth=AUTH,
                params={"startAt": start_at, "maxResults": max_results}
            )
            response.raise_for_status()  # Raise exception for HTTP errors

            data = response.json()
            fields = data.get("values", [])

            # Filter for "Assets object" custom fields
            for field in fields:
                if is_assets_object_field(field):
                    custom_fields.append({
                        "id": field["id"],
                        "name": field["name"]
                    })

            # Log progress
            logging.info(f"Fetched {len(fields)} fields (startAt={start_at}).")

            # Check if this is the last page
            if data.get("isLast"):
                break

            # Move to the next page
            start_at += len(fields)
        except HTTPError as http_err:
            logging.error(f"HTTP error occurred: {http_err}")
            break
        except Exception as err:
            logging.error(f"An error occurred: {err}")
            break

        # Respect rate limits (if necessary)
        time.sleep(1)

    return custom_fields

# Function to write custom fields to a CSV file
def write_to_csv(fields):
    try:
        with open(OUTPUT_CSV, mode="w", newline="", encoding="utf-8") as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=["id", "name"])
            writer.writeheader()
            writer.writerows(fields)
        logging.info(f"Successfully wrote {len(fields)} fields to {OUTPUT_CSV}.")
    except Exception as e:
        logging.error(f"Error writing to CSV: {e}")

# Main function
def main():
    logging.info("Starting script to fetch custom fields of type 'Assets object'.")
    custom_fields = fetch_custom_fields()
    logging.info(f"Found {len(custom_fields)} 'Assets object' custom fields.")
    write_to_csv(custom_fields)
    logging.info("Script completed.")

if __name__ == "__main__":
    main()
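To run the script, save it to a file (for example, export_assets_fields.py; any filename works) and execute it with Python 3:
python export_assets_fields.py
Progress is logged to the console, and the matching fields are written to custom_fields_assets_object.csv in the current working directory.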