OpenAI
Flow Designer has a built-in OpenAI Chat Completions step to help you integrate OpenAI into your alert management and incident response flows.
OpenAI Steps
The following steps are available:
- Chat Completions: send a prompt to an OpenAI model and get back a response.

To add an OpenAI step to your flow:
- Go to the Apps tab of the palette, expand the OpenAI section, and drag the Chat Completions step onto the canvas.
- Connect the step to the previous step in the flow. This gives you access to the alert properties and outputs of previous steps when you configure this step's inputs.
- Double-click the step to edit it, and use the Setup tab to configure the inputs. You can use plain text, input variables, or a combination of both. See the following section for detailed information on the inputs, including which are required.
- On the Endpoint tab, configure the step to point to the OpenAI API; the sketch after these steps shows the equivalent HTTP request.
- You can select a pre-existing endpoint or configure a new endpoint with the following information:
- Name: Type a name that will identify your endpoint.
- Base URL: Enter https://api.openai.com.
- Endpoint Type: Select 'Token' (selected by default).
- Header: Select 'Authorization'.
- Token Prefix: Type 'Bearer'.
- Token: Enter your organization's OpenAI API key.
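For reference, the endpoint settings above describe a standard bearer-token HTTPS request. Flow Designer makes the call for you; the following minimal Python sketch only shows what the configuration amounts to (it assumes the requests library and a hypothetical OPENAI_API_KEY environment variable):

```python
import os
import requests

base_url = "https://api.openai.com"  # Base URL
headers = {
    # Header ("Authorization") + Token Prefix ("Bearer") + Token (your API key)
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",
}

response = requests.post(
    f"{base_url}/v1/chat/completions",
    headers=headers,
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(response.json())
```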
Chat Completions
Use the OpenAI Chat Completions step to send a prompt to an OpenAI model and get back a response. Map outputs from previous steps to the step's inputs to build your prompts and control how the model responds.

Inputs
Inputs with an asterisk* are required. The sketch after the table shows how these inputs map to the underlying API request.
Name | Description
---|---
Frequency Penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Leave blank to use the model's default value.
Maximum Completion Tokens | Maximum number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. Leave blank to use the model's default value.
Model* | The model to use. For example: gpt-3.5-turbo, gpt-4, gpt-4-turbo.
Presence Penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Leave blank to use the model's default value.
Temperature | Number between 0 and 2 that specifies how focused or random the output should be. Higher values increase the randomness of the output, while lower values make it more focused. Do not alter the Temperature if you plan to alter the Top P value. Leave blank to use the model's default value.
Top P | An alternative to temperature sampling: the model considers only the results of the tokens with Top P probability mass. For example, setting Top P to 0.1 means only the tokens comprising the top 10% probability mass are considered. Do not alter Top P if you plan to alter the Temperature value. Leave blank to use the model's default value.
System Prompt | The system prompt defines what the model does, how it generally behaves, and how it should respond.
User Prompt* | The user prompt contains the instructions that request an output from the specified model. These prompts are similar to requests an end user might type into ChatGPT.
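To show how these inputs relate to the underlying API, here is a hedged sketch of the request body the step plausibly sends. Field names follow the public OpenAI Chat Completions API; the exact mapping Flow Designer uses internally is an assumption, and the values shown are placeholders:

```python
# Assumed mapping from step inputs to OpenAI Chat Completions API fields.
payload = {
    "model": "gpt-4",  # Model*
    "messages": [
        # System Prompt: how the model should generally behave
        {"role": "system", "content": "You are an incident response assistant."},
        # User Prompt*: the instructions requesting an output
        {"role": "user", "content": "Summarize this alert for the on-call team: ..."},
    ],
    "frequency_penalty": 0.5,      # Frequency Penalty (-2.0 to 2.0)
    "presence_penalty": 0.0,       # Presence Penalty (-2.0 to 2.0)
    "max_completion_tokens": 512,  # Maximum Completion Tokens
    "temperature": 0.2,            # Temperature (0 to 2); alter this or Top P, not both
}
# Inputs left blank in the step are omitted from the request,
# so the model's default values apply.
```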
Outputs
Name | Description
---|---
Completion ID | Unique identifier for the chat completion.
Finish Reason | Reason why the response finished. Available values are: stop, length, content_filter, tool_calls, function_call. The step returns 'error' if any issues are encountered.
Model | Model used for the chat completion.
Response | Response that matches the specified input parameters.
Result | Result returned by OpenAI. Available values are: Success, Failure.
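For orientation, each output corresponds to a field in the API response. Here is a minimal sketch of that correspondence, assuming a standard Chat Completions response body (the step's actual extraction logic is an assumption):

```python
# Example Chat Completions response body (abridged) and the assumed
# mapping to the step's outputs.
response_json = {
    "id": "chatcmpl-abc123",
    "model": "gpt-4-0613",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Here is a summary of the alert..."},
            "finish_reason": "stop",
        }
    ],
}

outputs = {
    "Completion ID": response_json["id"],
    "Model": response_json["model"],
    "Finish Reason": response_json["choices"][0]["finish_reason"],
    "Response": response_json["choices"][0]["message"]["content"],
    "Result": "Success",  # "Failure" when the call to OpenAI does not succeed
}
```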