
Create a custom detection

Beacon is now Guard Detect, which is part of Atlassian Guard.

Custom content detections work the same way as other types of content scanning detections. When a user publishes or updates a Confluence page, we scan the text and generate an alert if any of it matches the terms defined in your custom detection.

For example, if you want to check for mentions of sensitive project codenames, such as Project Ursus or Project Orion, you could create a custom detection that generates an alert whenever “ursus” or “orion” is used on a Confluence page.
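The matching behavior described above can be sketched in a few lines: an alert fires when any of the configured terms appears in the page text, regardless of case. This is an illustrative sketch only, not Atlassian's implementation; the function name and signature are assumptions for the example.

```python
def matches_custom_detection(page_text: str, terms: list[str]) -> bool:
    """Return True if any configured term appears in the page text.

    Matching is case-insensitive, mirroring the behavior described above.
    Hypothetical helper for illustration; not Atlassian's implementation.
    """
    lowered = page_text.lower()
    return any(term.lower() in lowered for term in terms)

# A page mentioning "Project Ursus" matches the terms ["ursus", "orion"]:
print(matches_custom_detection("Status update for Project Ursus", ["ursus", "orion"]))  # True
```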

Who can do this?
Role: Organization admin, Guard Detect admin
Plan: Atlassian Guard Premium

Create a custom detection

To create a custom content scanning detection:

  1. In Guard Detect, select Detections > Content Scanning from the header.

  2. Select Add custom detection.

  3. Enter a Name for the detection. This will be included in the alert title.

  4. Enter a Description (optional). This appears in the detection details, and can provide additional context about why the terms are sensitive to your organization.

  5. Enter the terms or phrases to detect, separated by commas. An alert is generated when ANY of the terms is detected.

  6. Save the detection.

[Image: Custom detection form showing the name, description, and terms to detect fields.]

Limits

  • You can enter up to 50 terms or phrases in one detection.

  • Terms must be at least 4 characters long and are not case-sensitive.

  • A single term or phrase must be less than 256 characters.

  • You can create a maximum of 100 custom detections.
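The limits above can be expressed as a small validation routine for a comma-separated term list. This is a sketch of the documented constraints, not part of the product; the function name and error messages are assumptions.

```python
# Limits as documented: up to 50 terms, each at least 4 and fewer than
# 256 characters, matched case-insensitively.
MAX_TERMS = 50
MIN_TERM_LENGTH = 4
MAX_TERM_LENGTH = 255  # "less than 256 characters"

def validate_terms(raw: str) -> list[str]:
    """Split a comma-separated term list and check it against the limits.

    Hypothetical helper for illustration only.
    """
    terms = [t.strip() for t in raw.split(",") if t.strip()]
    if len(terms) > MAX_TERMS:
        raise ValueError(f"A detection can contain at most {MAX_TERMS} terms")
    for term in terms:
        if len(term) < MIN_TERM_LENGTH:
            raise ValueError(f"Term too short (minimum {MIN_TERM_LENGTH} characters): {term!r}")
        if len(term) > MAX_TERM_LENGTH:
            raise ValueError(f"Term too long (maximum {MAX_TERM_LENGTH} characters): {term!r}")
    return terms

print(validate_terms("ursus, orion"))  # ['ursus', 'orion']
```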

Test your custom detection

The best way to test your custom detection is to create a page that contains the terms and check that an alert is generated. Try out a few variations, and refine the terms to detect until you’re satisfied that the detection will catch all variations of the term.

It can be a balancing act to get this right, especially if your terms contain commonly used words. A custom detection that generates too many false positives risks being ignored over time, so it is worth spending some time to fine-tune your detection.

We recommend you monitor your new detection for a few days (or weeks, if your organization doesn’t create a lot of content) and adjust the terms if necessary. You can also exclude pages if there are places where it’s acceptable to use the sensitive terms.
