Google Gemini
Flow Designer has a built-in Google Gemini Text Generation step to help you integrate Gemini into your alert management and incident response flows.
Gemini Steps
The following steps are available:
- Text Generation (AI): send a prompt to a Google Gemini model and get back a response using API key authentication. This step uses Gemini's OpenAI-compatible API.
To add a Google Gemini step to your flow:
- Go to the Apps tab of the palette, expand the Google Gemini section, and drag the Text Generation step onto the canvas.
- Connect the step to the previous step in the flow. This gives you access to the alert properties and outputs of previous steps when you configure this step's inputs.
- Double-click the step to edit it, and use the Setup tab to configure the inputs. You can use plain text and input variables (or both). See the following section for detailed information on the inputs, including which are required.
- On the Endpoint tab, configure the step to point to Google Gemini.
- You can select a pre-existing endpoint or configure a new endpoint with the following information:
- Name: Type a name that will identify your endpoint.
- Base URL: Enter https://generativelanguage.googleapis.com.
- Header: Select 'Authorization'.
- Token Prefix: Type 'Bearer'.
- Token: Enter your organization's Google Gemini API key.
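To see how these endpoint settings fit together, here is a minimal sketch of the HTTP request the step assembles from them. It assumes Gemini's OpenAI-compatible path (`/v1beta/openai/chat/completions`) behind the Base URL above; the API key is a placeholder.

```python
import json
import urllib.request

# Values from the Endpoint tab. The token below is a placeholder --
# substitute your organization's Google Gemini API key.
BASE_URL = "https://generativelanguage.googleapis.com"
TOKEN_PREFIX = "Bearer"
TOKEN = "YOUR_GEMINI_API_KEY"

def build_request(prompt: str, model: str = "gemini-2.0-flash") -> urllib.request.Request:
    """Assemble the request using the Header, Token Prefix, and Token settings."""
    headers = {
        # Header 'Authorization' + Token Prefix 'Bearer' + the API key
        "Authorization": f"{TOKEN_PREFIX} {TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1beta/openai/chat/completions",
        data=body, headers=headers, method="POST",
    )

req = build_request("Summarize this alert in one sentence.")
```

The step performs the equivalent call for you; this sketch only shows where each Endpoint field ends up in the request.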
Text Generation (AI)
Use the Text Generation step to send a prompt to a Google Gemini model and get back a response. Map outputs from previous steps to the inputs to build the prompt and give the model context from your flow.
Inputs
Inputs with an asterisk* are required.
| Name | Description |
|---|---|
| Maximum Output Tokens | Maximum number of tokens to include in a response. Leave blank to use model’s default value (defaults vary by model). |
| Model | The model to use. For example: gemini-2.0-flash, gemini-2.0-flash-lite, gemini-1.5-pro. |
| Presence Penalty | Number between -2.0 and 2.0. Positive values penalize reuse of tokens already in the response, increasing the likelihood of new content. Leave blank to use model’s default value. Some models do not support this parameter. |
| System Prompt | System prompts define what the model does and specify how it should generally behave and respond. |
| Temperature | Floating-point number between 0 and 2 that specifies how focused or random the output should be. Higher values increase the randomness of the output, while lower values make it more focused. Leave blank to use model’s default value. |
| Top P | Top P controls token selection by probability. The model chooses from the most likely tokens until their combined probability meets the Top P value. For example, with Top P = 0.5, only the top tokens whose probabilities sum to 0.5 are considered. Number between 0.0 and 1.0 (defaults vary by model). Lower values reduce randomness; higher values increase it. Leave blank to use model’s default value. |
| User Prompt | User prompts contain the instructions that request an output from the specified model. These prompts are similar to requests an end user might type in to Google Gemini. |
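As a rough illustration of how these inputs map onto a request, the sketch below builds an OpenAI-style request body from them. The field names (`max_tokens`, `top_p`, and so on) assume Gemini's OpenAI-compatible API; inputs left blank are omitted so the model's defaults apply, matching the "Leave blank" behavior described above.

```python
from typing import Optional

def build_payload(model: str,
                  user_prompt: str,
                  system_prompt: Optional[str] = None,
                  temperature: Optional[float] = None,
                  top_p: Optional[float] = None,
                  presence_penalty: Optional[float] = None,
                  max_output_tokens: Optional[int] = None) -> dict:
    """Map the step's inputs onto a request body, skipping blank inputs."""
    messages = []
    if system_prompt:
        # System Prompt: defines what the model does and how it behaves
        messages.append({"role": "system", "content": system_prompt})
    # User Prompt: the instructions that request an output
    messages.append({"role": "user", "content": user_prompt})

    payload = {"model": model, "messages": messages}
    optional = {
        "temperature": temperature,            # 0 to 2
        "top_p": top_p,                        # 0.0 to 1.0
        "presence_penalty": presence_penalty,  # -2.0 to 2.0
        "max_tokens": max_output_tokens,       # Maximum Output Tokens
    }
    # Omit anything left blank so the model's default value is used
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload
```

For example, `build_payload("gemini-2.0-flash", "Summarize this alert.", temperature=0.2)` sends only the model, messages, and temperature, leaving Top P, Presence Penalty, and token limits at their model defaults.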
Outputs
| Name | Description |
|---|---|
| Finish Reason | Reason why the response finished. The step will return ‘ERROR’ if any issues are encountered. |
| Model | Model used for the text generation. |
| Response | Response that matches the specified input parameters. |
| Result | Result returned by Google Gemini. Available values are: Success, Failure. |
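When wiring later steps to these outputs, it helps to branch on Result and Finish Reason before using Response. The sketch below is one hypothetical way to do that, treating the outputs as a plain dictionary keyed by the names above.

```python
def handle_outputs(outputs: dict) -> str:
    """Return the Response if the step succeeded; raise otherwise.

    The step sets Result to 'Success' or 'Failure', and Finish Reason
    to 'ERROR' if any issues are encountered.
    """
    if outputs.get("Result") != "Success" or outputs.get("Finish Reason") == "ERROR":
        raise RuntimeError(f"Text generation failed: {outputs!r}")
    return outputs["Response"]
```

A downstream step could use the returned Response directly, while the raised error gives a failure branch of the flow something concrete to act on.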