Google Gemini
Flow Designer has built-in Google Gemini Text Generation steps to help you integrate Gemini into your alert management and incident response flows.
Gemini Steps
The following steps are available:
- Text Generation: send a prompt to a Google Gemini model and get back a response.
- Text Generation (API Key): send a prompt to a Google Gemini model and get back a response using API key authentication. This step uses the same Bearer token scheme as OpenAI-compatible APIs.
To add a Google Gemini step to your flow:
- Go to the Apps tab of the palette, expand the Google Gemini section, and drag the Text Generation step onto the canvas.
- Connect the step to the previous step in the flow. This gives you access to the alert properties and outputs of previous steps when you configure this step's inputs.
- Double-click the step to edit it, and use the Setup tab to configure the inputs. You can use plain text, input variables, or a combination of both. See the following section for detailed information on the inputs, including which are required.
- On the Endpoint tab, configure the step to point to your Google Gemini instance.
- You can select a pre-existing endpoint or configure a new endpoint. See each step for individual endpoint configuration settings.
Text Generation
Use the Text Generation step to send a prompt to a Google Gemini model and get back a response. Map outputs from previous steps to the inputs to build the prompt and tailor the request.
Use the following authentication settings to set up a new endpoint to work with Google Gemini.
- Endpoint type: OAuth 2.0 (Authorization Code)
- Name: Type a name that will identify your endpoint.
- Base URL: Enter https://generativelanguage.googleapis.com.
- Authorization URL: https://accounts.google.com/o/oauth2/auth?access_type=offline
- Access Token URL: https://oauth2.googleapis.com/token
- Client ID: The client ID obtained when registering xMatters in your Google Cloud project.
- Client Secret: The client secret.
- Token Prefix: Type 'Bearer'.
- Endpoint scopes:
- https://www.googleapis.com/auth/cloud-platform
- https://www.googleapis.com/auth/generative-language.retriever
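For context, once the OAuth flow completes, the requests the step sends look roughly like the sketch below. This is illustrative only: the access token, model name, and prompt are placeholder values, and Flow Designer performs the token exchange and refresh for you.

```python
import requests

# Placeholder values: Flow Designer obtains and refreshes the OAuth
# access token for you once the endpoint is authorized.
ACCESS_TOKEN = "ya29.example-access-token"
BASE_URL = "https://generativelanguage.googleapis.com"
MODEL = "gemini-2.0-flash"

response = requests.post(
    f"{BASE_URL}/v1beta/models/{MODEL}:generateContent",
    # Token Prefix 'Bearer' plus the access token, sent in the
    # Authorization header.
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"contents": [{"role": "user", "parts": [{"text": "Summarize this alert."}]}]},
    timeout=30,
)
response.raise_for_status()
print(response.json()["candidates"][0]["content"]["parts"][0]["text"])
```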
Inputs
Inputs with an asterisk* are required.
| Name | Description |
|---|---|
| Frequency Penalty | Number between -2.0 and 2.0. Positive values reduce repeated content by penalizing frequent tokens. Leave blank to use the model’s default value. Some models do not support this parameter. |
| Maximum Output Tokens | Maximum number of tokens to include in a response. Leave blank to use model’s default value (defaults vary by model). |
| Model | The model to use. For example: gemini-2.0-flash, gemini-2.0-flash-lite, gemini-1.5-pro. |
| Presence Penalty | Number between -2.0 and 2.0. Positive values penalize reuse of tokens already in the response, increasing the likelihood of new content. Leave blank to use the model’s default value. Some models do not support this parameter. |
| Safety Setting | Comma-separated list of Google Gemini “harm category:filter threshold” pairs. For example: HARM_CATEGORY_1:BLOCK_LOW,HARM_CATEGORY_2:BLOCK_MEDIUM. Leave blank to use the model’s default settings. |
| System Prompt | System prompts define what the model does, how it generally behaves, and how it should respond. |
| Temperature | Floating-point number between 0 and 2 that controls how focused or random the output is. Higher values increase the randomness of the output, while lower values make it more focused. Leave blank to use the model’s default value. |
| Top P | Controls token selection by probability: the model chooses from the most likely tokens until their combined probability meets the Top P value. For example, with Top P = 0.5, only the top tokens whose probabilities sum to 0.5 are considered. Number between 0.0 and 1.0 (defaults vary by model). Lower values reduce randomness; higher values increase it. Leave blank to use the model’s default value. |
| User Prompt | User prompts contain the instructions that request an output from the specified model. These prompts are similar to requests an end user might type into Google Gemini. |
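To make the inputs above more concrete, here is a hedged sketch of how they might map onto a generateContent request body. Field names follow Google's public REST schema; the step's actual payload may differ.

```python
def build_request_body(
    user_prompt: str,
    system_prompt: str | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    max_output_tokens: int | None = None,
    presence_penalty: float | None = None,
    frequency_penalty: float | None = None,
    safety_setting: str | None = None,  # e.g. "HARM_CATEGORY_1:BLOCK_LOW,HARM_CATEGORY_2:BLOCK_MEDIUM"
) -> dict:
    """Sketch: assemble a generateContent request body from the step's inputs.

    Inputs left blank (None) are omitted so the model's defaults apply,
    mirroring the "Leave blank to use the model's default value" behavior.
    """
    body: dict = {"contents": [{"role": "user", "parts": [{"text": user_prompt}]}]}

    if system_prompt:
        body["systemInstruction"] = {"parts": [{"text": system_prompt}]}

    # Only include the generation parameters that were actually set.
    generation_config = {
        "temperature": temperature,
        "topP": top_p,
        "maxOutputTokens": max_output_tokens,
        "presencePenalty": presence_penalty,
        "frequencyPenalty": frequency_penalty,
    }
    generation_config = {k: v for k, v in generation_config.items() if v is not None}
    if generation_config:
        body["generationConfig"] = generation_config

    # Split the comma-separated "category:threshold" pairs into the
    # structured safetySettings list the API expects.
    if safety_setting:
        body["safetySettings"] = [
            {"category": category, "threshold": threshold}
            for category, threshold in
            (pair.strip().split(":") for pair in safety_setting.split(","))
        ]
    return body
```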
Outputs
| Name | Description |
|---|---|
| Finish Reason | Reason why the response finished. The step will return ‘ERROR’ if any issues are encountered. |
| Model | Model used for the text generation. |
| Response | Response that matches the specified input parameters. |
| Result | Result returned by Google Gemini. Available values are: Success, Failure. |
| Safety Probability Levels | JSON array of harm categories and the probability level of the response being unsafe. Available probability levels are: HIGH, MEDIUM, LOW, or NEGLIGIBLE. Only returned if safety filtering is set by the user or the model. |
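As a rough guide to where these outputs come from in the raw generateContent response (field names per Google's REST schema; the step computes the Success/Failure Result itself):

```python
def extract_outputs(response_json: dict) -> dict:
    """Sketch: map a raw generateContent response to the step's outputs."""
    candidate = response_json["candidates"][0]
    return {
        "Finish Reason": candidate.get("finishReason"),  # e.g. STOP, MAX_TOKENS, SAFETY
        "Model": response_json.get("modelVersion"),
        "Response": "".join(
            part.get("text", "") for part in candidate["content"]["parts"]
        ),
        # Present only when safety filtering applies; each rating pairs a
        # harm category with a probability: HIGH, MEDIUM, LOW, or NEGLIGIBLE.
        "Safety Probability Levels": candidate.get("safetyRatings", []),
    }
```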
Text Generation (API Key)
Use the Text Generation (API Key) step to send a prompt to a Google Gemini model and get back a response using API key authentication. Map outputs from previous steps to the inputs to build the prompt and tailor the request.
Use the following authentication settings to set up a new endpoint to work with Google Gemini.
- Endpoint type: Token
- Header: Authorization
- Name: Type a name that will identify your endpoint.
- Token Prefix: Type 'Bearer'.
- Token: Enter your organization's Google Gemini API key.
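Outside Flow Designer, the same authentication scheme works against Gemini's OpenAI-compatible endpoint, which accepts the API key as a Bearer token; the native Gemini endpoints instead expect the key in an x-goog-api-key header or key query parameter. A minimal sketch, assuming the OpenAI-compatible path:

```python
import requests

API_KEY = "your-gemini-api-key"  # placeholder

response = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions",
    # Matches the endpoint settings above: the Authorization header with
    # a 'Bearer' prefix followed by the API key.
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gemini-2.0-flash",
        "messages": [{"role": "user", "content": "Summarize this alert."}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```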
Inputs
Inputs with an asterisk* are required.
| Name | Description |
|---|---|
| Maximum Output Tokens | Maximum number of tokens to include in a response. Leave blank to use model’s default value (defaults vary by model). |
| Model | The model to use. For example: gemini-2.0-flash, gemini-2.0-flash-lite, gemini-1.5-pro. |
| Presence Penalty | Number between -2.0 and 2.0. Positive values penalize reuse of tokens already in the response, increasing the likelihood of new content. Leave blank to use the model’s default value. Some models do not support this parameter. |
| System Prompt | System prompts define what the model does, how it generally behaves, and how it should respond. |
| Temperature | Floating-point number between 0 and 2 that controls how focused or random the output is. Higher values increase the randomness of the output, while lower values make it more focused. Leave blank to use the model’s default value. |
| Top P | Top P controls token selection by probability. The model chooses from the most likely tokens until their combined probability meets the Top P value. For example, with Top P = 0.5, only the top tokens whose probabilities sum to 0.5 are considered. Number between 0.0 and 1.0 (defaults vary by model). Lower values reduce randomness; higher values increase it. Leave blank to use model’s default value. |
| User Prompt | User prompts contain the instructions that request an output from the specified model. These prompts are similar to requests an end user might type into Google Gemini. |
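If you reproduce this step against the OpenAI-compatible endpoint shown earlier, the inputs above map onto chat-completion fields roughly as in this sketch (field names are OpenAI's; blank inputs are omitted so model defaults apply):

```python
def build_chat_body(
    user_prompt: str,
    model: str = "gemini-2.0-flash",
    system_prompt: str | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    max_output_tokens: int | None = None,
    presence_penalty: float | None = None,
) -> dict:
    """Sketch: map the step's inputs onto OpenAI-style chat-completion fields."""
    messages = []
    if system_prompt:
        # The System Prompt becomes a system-role message.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})

    body: dict = {"model": model, "messages": messages}
    optional = {
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_output_tokens,
        "presence_penalty": presence_penalty,
    }
    body.update({k: v for k, v in optional.items() if v is not None})
    return body
```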
Outputs
| Name | Description |
|---|---|
| Finish Reason | Reason why the response finished. The step will return ‘ERROR’ if any issues are encountered. |
| Model | Model used for the text generation. |
| Response | Response that matches the specified input parameters. |
| Result | Result returned by Google Gemini. Available values are: Success, Failure. |
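And a matching sketch of how these outputs line up with an OpenAI-format response (the step derives Result from the call's success or failure rather than reading it from the payload):

```python
def extract_outputs(response_json: dict) -> dict:
    """Sketch: map an OpenAI-format response to the step's outputs."""
    choice = response_json["choices"][0]
    return {
        "Finish Reason": choice.get("finish_reason"),  # e.g. stop, length
        "Model": response_json.get("model"),
        "Response": choice["message"]["content"],
    }
```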