Bulk edit custom field options via CSV in Jira Cloud
Platform Notice: Cloud Only - This article only applies to Atlassian products on the cloud platform.
Summary
Manually updating multiple options in a Jira select list custom field is time-consuming. Learn how to streamline this process using a CSV file.
Solution
The Python script below uses the Jira REST API to bulk-update custom field options based on a CSV mapping file.
Retrieve the Context ID:
You can use the "Get custom field contexts API" to find the context ID associated with the target custom field.
API Endpoint:
https://your-jira-site.atlassian.net/rest/api/3/field/customfield_{custom_field_id}/context/
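As a starting point, the context lookup can be sketched as a small helper. This is a sketch, not part of the provided script: the function name `get_context_ids` and the example credentials are assumptions; only the endpoint path comes from the article.

```python
import requests

def get_context_ids(site_url, custom_field_id, auth):
    """Return (id, name) pairs for every context of a custom field.

    Calls the "Get custom field contexts" endpoint:
    GET /rest/api/3/field/customfield_{id}/context
    """
    url = f"{site_url}/rest/api/3/field/customfield_{custom_field_id}/context"
    resp = requests.get(url, headers={"Accept": "application/json"}, auth=auth)
    resp.raise_for_status()
    return [(c["id"], c.get("name")) for c in resp.json().get("values", [])]

# Example call (replace with your real site and API token):
# print(get_context_ids("https://your-jira-site.atlassian.net", 10241,
#                       ("you@example.com", "your-api-token")))
```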
Fetch Existing Options:
Use the "Get custom field options (context) API" to retrieve all current options for the identified context.
API Endpoint:
https://your-jira-site.atlassian.net/rest/api/3/field/customfield_{custom_field_id}/context/{context_id}/option
Note: This API is paginated, returning a maximum of 100 records per request. Implement pagination to retrieve all options.
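The pagination loop can be sketched independently of the HTTP call. Here `get_page` is a stand-in (an assumption for illustration) for a GET request to the options endpoint; the Jira response shape with `values` and `isLast` is as documented above.

```python
def fetch_all_options(get_page, page_size=100):
    """Collect every option from a paginated endpoint.

    `get_page(start_at, max_results)` must return a dict shaped like the
    Jira response: {"values": [...], "isLast": bool}.
    """
    options, start_at = [], 0
    while True:
        data = get_page(start_at, page_size)
        options.extend(data.get("values", []))
        if data.get("isLast", True):
            break
        start_at += page_size
    return options

# Fake pager holding 250 options, served 100 per page:
fake = [{"id": str(i), "value": f"opt{i}"} for i in range(250)]
pager = lambda s, m: {"values": fake[s:s + m], "isLast": s + m >= len(fake)}
print(len(fetch_all_options(pager)))  # → 250
```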
Prepare CSV Mapping:
Create a CSV file named update_mapping.csv with two columns: OLD_NAME and NEW_NAME. This file maps the existing option values to their new values.
Update Options via Script:
The provided Python script reads the CSV file, retrieves existing options from Jira, identifies options to update based on the CSV mapping, and uses the "Update custom field options (context) API" to update the options in bulk.
The script chunks the updates to avoid API timeouts.
API Endpoint:
https://your-jira-site.atlassian.net/rest/api/3/field/customfield_{custom_field_id}/context/{context_id}/option
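The chunking behaviour can be sketched on its own; the chunk size of 50 matches the script's choice, and the PUT request is represented by a placeholder comment rather than a real call.

```python
def chunks(items, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

updated_options = [{"id": str(n), "value": f"v{n}"} for n in range(120)]
for batch in chunks(updated_options, 50):
    payload = {"options": batch}
    # requests.put(api_url_base, json=payload, auth=auth) would go here
    print(len(batch))  # → 50, 50, 20
```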
APIs Used
Get custom field contexts
Get custom field options (context)
Update custom field options (context)
Code Snippet
bulk_edit_options.py
import requests
import csv
import json

# Configuration
site_url = "https://some-site.atlassian.net"
custom_field_id = 10241
context_id = 10438
api_url_base = f"{site_url}/rest/api/3/field/customfield_{custom_field_id}/context/{context_id}/option"
emailID = ""
apiToken = ""
auth = (emailID, apiToken)  # Replace with your actual credentials
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json"
}

# Load the mapping from CSV. The file must be named update_mapping.csv, live
# in the same directory as this script, and have the headers OLD_NAME,NEW_NAME.
update_map = {}
with open("update_mapping.csv", mode="r", encoding="utf-8") as csv_file:
    reader = csv.DictReader(csv_file)
    for row in reader:
        old_name = row["OLD_NAME"].strip()
        new_name = row["NEW_NAME"].strip()
        update_map[old_name] = new_name

# Fetch all current options (the API returns at most 100 per page)
start_at = 0
max_results = 100
all_options = []
while True:
    params = {"startAt": start_at, "maxResults": max_results}
    response = requests.get(api_url_base, headers=headers, params=params, auth=auth)
    response.raise_for_status()
    data = response.json()
    all_options.extend(data.get("values", []))
    if data.get("isLast", True):
        break
    start_at += max_results

# Prepare the payload with updated values
updated_options = []
for option in all_options:
    old_value = option["value"]
    if old_value in update_map:
        updated_options.append({
            "id": option["id"],
            "value": update_map[old_value],
            "disabled": option["disabled"]
        })

# Chunk the updates to avoid API timeouts from a large payload
chunk_size = 50
for i in range(0, len(updated_options), chunk_size):
    chunk = updated_options[i:i + chunk_size]
    payload = {"options": chunk}
    put_response = requests.put(api_url_base, headers=headers, auth=auth, data=json.dumps(payload))
    put_response.raise_for_status()
    print(f"Updated options {i + 1} to {i + len(chunk)}")

if not updated_options:
    print("No matching values")
else:
    print("All matching values have been updated.")
CSV Format
OLD_NAME,NEW_NAME
Scranton,Scranton City
Manhattan,Manhattan Island
The Electric City,Electric Dreams
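To see how the script turns this file into its lookup dict, here is a minimal sketch that inlines the sample rows above with io.StringIO instead of reading a file:

```python
import csv
import io

# The sample mapping from the CSV Format section, embedded as a string.
sample = """OLD_NAME,NEW_NAME
Scranton,Scranton City
Manhattan,Manhattan Island
The Electric City,Electric Dreams
"""

update_map = {row["OLD_NAME"].strip(): row["NEW_NAME"].strip()
              for row in csv.DictReader(io.StringIO(sample))}
print(update_map["Scranton"])  # → Scranton City
```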