Gemini Enterprise for Customer Experience API . projects . locations . apps . guardrails

Instance Methods

close()

Close httplib2 connections.

create(parent, body=None, guardrailId=None, x__xgafv=None)

Creates a new guardrail in the given app.

delete(name, etag=None, force=None, x__xgafv=None)

Deletes the specified guardrail.

get(name, x__xgafv=None)

Gets details of the specified guardrail.

list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)

Lists guardrails in the given app.

list_next()

Retrieves the next page of results.

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates the specified guardrail.
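
The examples in this reference use the Google API Python client. As a minimal setup sketch (the discovery service name and version strings below are placeholders, not confirmed values):

  from googleapiclient.discovery import build

  # "geminienterprise" / "v1" are hypothetical; take the actual service
  # name and version from your discovery document.
  service = build("geminienterprise", "v1")
  guardrails = service.projects().locations().apps().guardrails()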

Method Details

close()
Close httplib2 connections.
create(parent, body=None, guardrailId=None, x__xgafv=None)
Creates a new guardrail in the given app.

Args:
  parent: string, Required. The resource name of the app to create a guardrail in. (required)
  body: object, The request body.
    The object takes the form of:

{ # Guardrail contains a list of checks and balances to keep the agents safe and secure.
  "action": { # Action that is taken when a certain precondition is met. # Optional. Action to take when the guardrail is triggered.
    "generativeAnswer": { # The agent will immediately respond with a generative answer. # Optional. Respond with a generative answer.
      "prompt": "A String", # Required. The prompt to use for the generative answer.
    },
    "respondImmediately": { # The agent will immediately respond with a preconfigured response. # Optional. Immediately respond with a preconfigured response.
      "responses": [ # Required. The canned responses for the agent to choose from. The response is chosen randomly.
        { # Represents a response from the agent.
          "disabled": True or False, # Optional. Whether the response is disabled. Disabled responses are not used by the agent.
          "text": "A String", # Required. Text for the agent to respond with.
        },
      ],
    },
    "transferAgent": { # The agent will transfer the conversation to a different agent. # Optional. Transfer the conversation to a different agent.
      "agent": "A String", # Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: `projects/{project}/locations/{location}/apps/{app}/agents/{agent}`
    },
  },
  "codeCallback": { # Guardrail that blocks the conversation based on the code callbacks provided. # Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution.
    "afterAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "afterModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
  },
  "contentFilter": { # Guardrail that bans certain content from being used in the conversation. # Optional. Guardrail that bans certain content from being used in the conversation.
    "bannedContents": [ # Optional. List of banned phrases. Applies to both user inputs and agent responses.
      "A String",
    ],
    "bannedContentsInAgentResponse": [ # Optional. List of banned phrases. Applies only to agent responses.
      "A String",
    ],
    "bannedContentsInUserInput": [ # Optional. List of banned phrases. Applies only to user inputs.
      "A String",
    ],
    "disregardDiacritics": True or False, # Optional. If true, diacritics are ignored during matching.
    "matchType": "A String", # Required. Match type for the content filter.
  },
  "createTime": "A String", # Output only. Timestamp when the guardrail was created.
  "description": "A String", # Optional. Description of the guardrail.
  "displayName": "A String", # Required. Display name of the guardrail.
  "enabled": True or False, # Optional. Whether the guardrail is enabled.
  "etag": "A String", # Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "llmPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
    "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
    "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
    "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
    "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
      "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
      "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
    },
    "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
    "prompt": "A String", # Required. Policy prompt.
  },
  "llmPromptSecurity": { # Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification. # Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification.
    "customPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Use a user-defined LlmPolicy to configure the security guardrail.
      "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
      "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
      "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
      "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
        "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
        "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
      },
      "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
      "prompt": "A String", # Required. Policy prompt.
    },
    "defaultSettings": { # Configuration for default system security settings. # Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
      "defaultPromptTemplate": "A String", # Output only. The default prompt template used by the system. This field is for display purposes to show the user what prompt the system uses by default. It is OUTPUT_ONLY.
    },
    "failOpen": True or False, # Optional. Determines the behavior when the guardrail encounters an LLM error. - If true: the guardrail is bypassed. - If false (default): the guardrail triggers/blocks. Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
  },
  "modelSafety": { # Model safety settings overrides. When this is set, it will override the default settings and trigger the guardrail if the response is considered unsafe. # Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings.
    "safetySettings": [ # Required. List of safety settings.
      { # Safety setting.
        "category": "A String", # Required. The harm category.
        "threshold": "A String", # Required. The harm block threshold.
      },
    ],
  },
  "name": "A String", # Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}`
  "updateTime": "A String", # Output only. Timestamp when the guardrail was last updated.
}

  guardrailId: string, Optional. The ID to use for the guardrail, which will become the final component of the guardrail's resource name. If not provided, a unique ID will be automatically assigned for the guardrail.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Guardrail contains a list of checks and balances to keep the agents safe and secure.
  "action": { # Action that is taken when a certain precondition is met. # Optional. Action to take when the guardrail is triggered.
    "generativeAnswer": { # The agent will immediately respond with a generative answer. # Optional. Respond with a generative answer.
      "prompt": "A String", # Required. The prompt to use for the generative answer.
    },
    "respondImmediately": { # The agent will immediately respond with a preconfigured response. # Optional. Immediately respond with a preconfigured response.
      "responses": [ # Required. The canned responses for the agent to choose from. The response is chosen randomly.
        { # Represents a response from the agent.
          "disabled": True or False, # Optional. Whether the response is disabled. Disabled responses are not used by the agent.
          "text": "A String", # Required. Text for the agent to respond with.
        },
      ],
    },
    "transferAgent": { # The agent will transfer the conversation to a different agent. # Optional. Transfer the conversation to a different agent.
      "agent": "A String", # Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: `projects/{project}/locations/{location}/apps/{app}/agents/{agent}`
    },
  },
  "codeCallback": { # Guardrail that blocks the conversation based on the code callbacks provided. # Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution.
    "afterAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "afterModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
  },
  "contentFilter": { # Guardrail that bans certain content from being used in the conversation. # Optional. Guardrail that bans certain content from being used in the conversation.
    "bannedContents": [ # Optional. List of banned phrases. Applies to both user inputs and agent responses.
      "A String",
    ],
    "bannedContentsInAgentResponse": [ # Optional. List of banned phrases. Applies only to agent responses.
      "A String",
    ],
    "bannedContentsInUserInput": [ # Optional. List of banned phrases. Applies only to user inputs.
      "A String",
    ],
    "disregardDiacritics": True or False, # Optional. If true, diacritics are ignored during matching.
    "matchType": "A String", # Required. Match type for the content filter.
  },
  "createTime": "A String", # Output only. Timestamp when the guardrail was created.
  "description": "A String", # Optional. Description of the guardrail.
  "displayName": "A String", # Required. Display name of the guardrail.
  "enabled": True or False, # Optional. Whether the guardrail is enabled.
  "etag": "A String", # Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "llmPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
    "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
    "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
    "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
    "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
      "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
      "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
    },
    "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
    "prompt": "A String", # Required. Policy prompt.
  },
  "llmPromptSecurity": { # Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification. # Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification.
    "customPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Use a user-defined LlmPolicy to configure the security guardrail.
      "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
      "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
      "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
      "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
        "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
        "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
      },
      "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
      "prompt": "A String", # Required. Policy prompt.
    },
    "defaultSettings": { # Configuration for default system security settings. # Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
      "defaultPromptTemplate": "A String", # Output only. The default prompt template used by the system. This field is for display purposes to show the user what prompt the system uses by default. It is OUTPUT_ONLY.
    },
    "failOpen": True or False, # Optional. Determines the behavior when the guardrail encounters an LLM error. - If true: the guardrail is bypassed. - If false (default): the guardrail triggers/blocks. Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
  },
  "modelSafety": { # Model safety settings overrides. When this is set, it will override the default settings and trigger the guardrail if the response is considered unsafe. # Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings.
    "safetySettings": [ # Required. List of safety settings.
      { # Safety setting.
        "category": "A String", # Required. The harm category.
        "threshold": "A String", # Required. The harm block threshold.
      },
    ],
  },
  "name": "A String", # Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}`
  "updateTime": "A String", # Output only. Timestamp when the guardrail was last updated.
}
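
For illustration, a hedged sketch of a create call that configures a content filter with a canned response. The `service` object is assumed to come from the setup sketch above; the parent path, guardrail ID, and matchType enum value are placeholders.

  body = {
      "displayName": "Banned phrases",
      "enabled": True,
      "contentFilter": {
          "bannedContents": ["forbidden phrase"],
          "matchType": "MATCH_TYPE_UNSPECIFIED",  # placeholder; use a real enum value
      },
      "action": {
          "respondImmediately": {
              "responses": [{"text": "Sorry, I can't help with that."}],
          },
      },
  }
  guardrail = (
      service.projects()
      .locations()
      .apps()
      .guardrails()
      .create(
          parent="projects/my-project/locations/global/apps/my-app",
          guardrailId="banned-phrases",
          body=body,
      )
      .execute()
  )
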
delete(name, etag=None, force=None, x__xgafv=None)
Deletes the specified guardrail.

Args:
  name: string, Required. The resource name of the guardrail to delete. (required)
  etag: string, Optional. The current etag of the guardrail. If an etag is not provided, the deletion will overwrite any concurrent changes. If an etag is provided and does not match the current etag of the guardrail, deletion will be blocked and an ABORTED error will be returned.
  force: boolean, Optional. Indicates whether to forcefully delete the guardrail, even if it is still referenced by app/agents. * If `force = false`, the deletion fails if any apps/agents still reference the guardrail. * If `force = true`, all existing references from apps/agents will be removed and the guardrail will be deleted.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}
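
A hedged sketch of a guarded delete that passes the current etag for optimistic concurrency; the resource name below is a placeholder.

  name = "projects/my-project/locations/global/apps/my-app/guardrails/banned-phrases"
  guardrails = service.projects().locations().apps().guardrails()
  current = guardrails.get(name=name).execute()
  guardrails.delete(
      name=name,
      etag=current.get("etag"),  # deletion is blocked if the guardrail changed meanwhile
  ).execute()
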
get(name, x__xgafv=None)
Gets details of the specified guardrail.

Args:
  name: string, Required. The resource name of the guardrail to retrieve. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Guardrail contains a list of checks and balances to keep the agents safe and secure.
  "action": { # Action that is taken when a certain precondition is met. # Optional. Action to take when the guardrail is triggered.
    "generativeAnswer": { # The agent will immediately respond with a generative answer. # Optional. Respond with a generative answer.
      "prompt": "A String", # Required. The prompt to use for the generative answer.
    },
    "respondImmediately": { # The agent will immediately respond with a preconfigured response. # Optional. Immediately respond with a preconfigured response.
      "responses": [ # Required. The canned responses for the agent to choose from. The response is chosen randomly.
        { # Represents a response from the agent.
          "disabled": True or False, # Optional. Whether the response is disabled. Disabled responses are not used by the agent.
          "text": "A String", # Required. Text for the agent to respond with.
        },
      ],
    },
    "transferAgent": { # The agent will transfer the conversation to a different agent. # Optional. Transfer the conversation to a different agent.
      "agent": "A String", # Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: `projects/{project}/locations/{location}/apps/{app}/agents/{agent}`
    },
  },
  "codeCallback": { # Guardrail that blocks the conversation based on the code callbacks provided. # Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution.
    "afterAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "afterModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
  },
  "contentFilter": { # Guardrail that bans certain content from being used in the conversation. # Optional. Guardrail that bans certain content from being used in the conversation.
    "bannedContents": [ # Optional. List of banned phrases. Applies to both user inputs and agent responses.
      "A String",
    ],
    "bannedContentsInAgentResponse": [ # Optional. List of banned phrases. Applies only to agent responses.
      "A String",
    ],
    "bannedContentsInUserInput": [ # Optional. List of banned phrases. Applies only to user inputs.
      "A String",
    ],
    "disregardDiacritics": True or False, # Optional. If true, diacritics are ignored during matching.
    "matchType": "A String", # Required. Match type for the content filter.
  },
  "createTime": "A String", # Output only. Timestamp when the guardrail was created.
  "description": "A String", # Optional. Description of the guardrail.
  "displayName": "A String", # Required. Display name of the guardrail.
  "enabled": True or False, # Optional. Whether the guardrail is enabled.
  "etag": "A String", # Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "llmPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
    "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
    "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
    "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
    "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
      "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
      "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
    },
    "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
    "prompt": "A String", # Required. Policy prompt.
  },
  "llmPromptSecurity": { # Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification. # Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification.
    "customPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Use a user-defined LlmPolicy to configure the security guardrail.
      "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
      "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
      "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
      "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
        "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
        "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
      },
      "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
      "prompt": "A String", # Required. Policy prompt.
    },
    "defaultSettings": { # Configuration for default system security settings. # Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
      "defaultPromptTemplate": "A String", # Output only. The default prompt template used by the system. This field is for display purposes to show the user what prompt the system uses by default. It is OUTPUT_ONLY.
    },
    "failOpen": True or False, # Optional. Determines the behavior when the guardrail encounters an LLM error. - If true: the guardrail is bypassed. - If false (default): the guardrail triggers/blocks. Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
  },
  "modelSafety": { # Model safety settings overrides. When this is set, it will override the default settings and trigger the guardrail if the response is considered unsafe. # Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings.
    "safetySettings": [ # Required. List of safety settings.
      { # Safety setting.
        "category": "A String", # Required. The harm category.
        "threshold": "A String", # Required. The harm block threshold.
      },
    ],
  },
  "name": "A String", # Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}`
  "updateTime": "A String", # Output only. Timestamp when the guardrail was last updated.
}
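
The codeCallback fields in the schema above describe a decision/reason contract for callback code. The exact function signature the runtime expects is not documented on this page, so the following pythonCode payload is a hypothetical illustration only.

  # Hypothetical callback body; the context parameter and its keys are assumptions.
  CALLBACK_CODE = """
  def before_model_callback(context):
      # Trigger the guardrail when the user input looks like it contains PII.
      if "account number" in context.get("user_input", "").lower():
          return {"decision": "TRIGGER", "reason": "Possible PII in user input."}
      return {"decision": "OK", "reason": "No issues detected."}
  """
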
list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists guardrails in the given app.

Args:
  parent: string, Required. The resource name of the app to list guardrails from. (required)
  filter: string, Optional. Filter to be applied when listing the guardrails. See https://google.aip.dev/160 for more details.
  orderBy: string, Optional. Field to sort by. Only "name" and "create_time" are supported. See https://google.aip.dev/132#ordering for more details.
  pageSize: integer, Optional. Requested page size. Server may return fewer items than requested. If unspecified, server will pick an appropriate default.
  pageToken: string, Optional. The next_page_token value returned from a previous list AgentService.ListGuardrails call.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for AgentService.ListGuardrails.
  "guardrails": [ # The list of guardrails.
    { # Guardrail contains a list of checks and balances to keep the agents safe and secure.
      "action": { # Action that is taken when a certain precondition is met. # Optional. Action to take when the guardrail is triggered.
        "generativeAnswer": { # The agent will immediately respond with a generative answer. # Optional. Respond with a generative answer.
          "prompt": "A String", # Required. The prompt to use for the generative answer.
        },
        "respondImmediately": { # The agent will immediately respond with a preconfigured response. # Optional. Immediately respond with a preconfigured response.
          "responses": [ # Required. The canned responses for the agent to choose from. The response is chosen randomly.
            { # Represents a response from the agent.
              "disabled": True or False, # Optional. Whether the response is disabled. Disabled responses are not used by the agent.
              "text": "A String", # Required. Text for the agent to respond with.
            },
          ],
        },
        "transferAgent": { # The agent will transfer the conversation to a different agent. # Optional. Transfer the conversation to a different agent.
          "agent": "A String", # Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: `projects/{project}/locations/{location}/apps/{app}/agents/{agent}`
        },
      },
      "codeCallback": { # Guardrail that blocks the conversation based on the code callbacks provided. # Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution.
        "afterAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
          "description": "A String", # Optional. Human-readable description of the callback.
          "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
          "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
          "pythonCode": "A String", # Required. The python code to execute for the callback.
        },
        "afterModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
          "description": "A String", # Optional. Human-readable description of the callback.
          "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
          "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
          "pythonCode": "A String", # Required. The python code to execute for the callback.
        },
        "beforeAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
          "description": "A String", # Optional. Human-readable description of the callback.
          "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
          "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
          "pythonCode": "A String", # Required. The python code to execute for the callback.
        },
        "beforeModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
          "description": "A String", # Optional. Human-readable description of the callback.
          "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
          "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
          "pythonCode": "A String", # Required. The python code to execute for the callback.
        },
      },
      "contentFilter": { # Guardrail that bans certain content from being used in the conversation. # Optional. Guardrail that bans certain content from being used in the conversation.
        "bannedContents": [ # Optional. List of banned phrases. Applies to both user inputs and agent responses.
          "A String",
        ],
        "bannedContentsInAgentResponse": [ # Optional. List of banned phrases. Applies only to agent responses.
          "A String",
        ],
        "bannedContentsInUserInput": [ # Optional. List of banned phrases. Applies only to user inputs.
          "A String",
        ],
        "disregardDiacritics": True or False, # Optional. If true, diacritics are ignored during matching.
        "matchType": "A String", # Required. Match type for the content filter.
      },
      "createTime": "A String", # Output only. Timestamp when the guardrail was created.
      "description": "A String", # Optional. Description of the guardrail.
      "displayName": "A String", # Required. Display name of the guardrail.
      "enabled": True or False, # Optional. Whether the guardrail is enabled.
      "etag": "A String", # Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
      "llmPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
        "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
        "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
        "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
        "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
          "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
          "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
        },
        "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
        "prompt": "A String", # Required. Policy prompt.
      },
      "llmPromptSecurity": { # Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification. # Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification.
        "customPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Use a user-defined LlmPolicy to configure the security guardrail.
          "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
          "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
          "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
          "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
            "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
            "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
          },
          "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
          "prompt": "A String", # Required. Policy prompt.
        },
        "defaultSettings": { # Configuration for default system security settings. # Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
          "defaultPromptTemplate": "A String", # Output only. The default prompt template used by the system. This field is for display purposes to show the user what prompt the system uses by default. It is OUTPUT_ONLY.
        },
        "failOpen": True or False, # Optional. Determines the behavior when the guardrail encounters an LLM error. - If true: the guardrail is bypassed. - If false (default): the guardrail triggers/blocks. Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
      },
      "modelSafety": { # Model safety settings overrides. When this is set, it will override the default settings and trigger the guardrail if the response is considered unsafe. # Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings.
        "safetySettings": [ # Required. List of safety settings.
          { # Safety setting.
            "category": "A String", # Required. The harm category.
            "threshold": "A String", # Required. The harm block threshold.
          },
        ],
      },
      "name": "A String", # Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}`
      "updateTime": "A String", # Output only. Timestamp when the guardrail was last updated.
    },
  ],
  "nextPageToken": "A String", # A token that can be sent as ListGuardrailsRequest.page_token to retrieve the next page. Absence of this field indicates there are no subsequent pages.
}
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
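
Example (not part of the generated reference): a minimal sketch that pages through all guardrails with list() and list_next(). The discovery name 'ces', the version 'v1', and the project/app identifiers are placeholder assumptions; substitute the values for the Gemini Enterprise for Customer Experience API.

  # Hedged pagination sketch; discovery id/version and resource names are assumptions.
  from googleapiclient import discovery

  service = discovery.build('ces', 'v1')  # hypothetical discovery name/version
  guardrails = service.projects().locations().apps().guardrails()

  parent = 'projects/my-project/locations/global/apps/my-app'  # placeholder names
  request = guardrails.list(parent=parent, pageSize=50)
  while request is not None:
      response = request.execute()
      for guardrail in response.get('guardrails', []):
          print(guardrail['name'], guardrail.get('displayName', ''))
      # list_next() returns None once a response carries no nextPageToken.
      request = guardrails.list_next(previous_request=request, previous_response=response)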
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates the specified guardrail.

Args:
  name: string, Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}` (required)
  body: object, The request body.
    The object takes the form of:

{ # Guardrail contains a list of checks and balances to keep the agents safe and secure.
  "action": { # Action that is taken when a certain precondition is met. # Optional. Action to take when the guardrail is triggered.
    "generativeAnswer": { # The agent will immediately respond with a generative answer. # Optional. Respond with a generative answer.
      "prompt": "A String", # Required. The prompt to use for the generative answer.
    },
    "respondImmediately": { # The agent will immediately respond with a preconfigured response. # Optional. Immediately respond with a preconfigured response.
      "responses": [ # Required. The canned responses for the agent to choose from. The response is chosen randomly.
        { # Represents a response from the agent.
          "disabled": True or False, # Optional. Whether the response is disabled. Disabled responses are not used by the agent.
          "text": "A String", # Required. Text for the agent to respond with.
        },
      ],
    },
    "transferAgent": { # The agent will transfer the conversation to a different agent. # Optional. Transfer the conversation to a different agent.
      "agent": "A String", # Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: `projects/{project}/locations/{location}/apps/{app}/agents/{agent}`
    },
  },
  "codeCallback": { # Guardrail that blocks the conversation based on the code callbacks provided. # Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution.
    "afterAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "afterModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
  },
  "contentFilter": { # Guardrail that bans certain content from being used in the conversation. # Optional. Guardrail that bans certain content from being used in the conversation.
    "bannedContents": [ # Optional. List of banned phrases. Applies to both user inputs and agent responses.
      "A String",
    ],
    "bannedContentsInAgentResponse": [ # Optional. List of banned phrases. Applies only to agent responses.
      "A String",
    ],
    "bannedContentsInUserInput": [ # Optional. List of banned phrases. Applies only to user inputs.
      "A String",
    ],
    "disregardDiacritics": True or False, # Optional. If true, diacritics are ignored during matching.
    "matchType": "A String", # Required. Match type for the content filter.
  },
  "createTime": "A String", # Output only. Timestamp when the guardrail was created.
  "description": "A String", # Optional. Description of the guardrail.
  "displayName": "A String", # Required. Display name of the guardrail.
  "enabled": True or False, # Optional. Whether the guardrail is enabled.
  "etag": "A String", # Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "llmPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
    "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
    "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
    "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
    "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
      "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
      "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
    },
    "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
    "prompt": "A String", # Required. Policy prompt.
  },
  "llmPromptSecurity": { # Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification. # Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification.
    "customPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Use a user-defined LlmPolicy to configure the security guardrail.
      "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
      "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
      "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
      "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
        "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
        "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
      },
      "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
      "prompt": "A String", # Required. Policy prompt.
    },
    "defaultSettings": { # Configuration for default system security settings. # Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
      "defaultPromptTemplate": "A String", # Output only. The default prompt template used by the system. This field is for display purposes to show the user what prompt the system uses by default. It is OUTPUT_ONLY.
    },
    "failOpen": True or False, # Optional. Determines the behavior when the guardrail encounters an LLM error. - If true: the guardrail is bypassed. - If false (default): the guardrail triggers/blocks. Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
  },
  "modelSafety": { # Model safety settings overrides. When this is set, it will override the default settings and trigger the guardrail if the response is considered unsafe. # Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings.
    "safetySettings": [ # Required. List of safety settings.
      { # Safety setting.
        "category": "A String", # Required. The harm category.
        "threshold": "A String", # Required. The harm block threshold.
      },
    ],
  },
  "name": "A String", # Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}`
  "updateTime": "A String", # Output only. Timestamp when the guardrail was last updated.
}

  updateMask: string, Optional. Field mask is used to control which fields get updated. If the mask is not present, all fields will be updated.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Guardrail contains a list of checks and balances to keep the agents safe and secure.
  "action": { # Action that is taken when a certain precondition is met. # Optional. Action to take when the guardrail is triggered.
    "generativeAnswer": { # The agent will immediately respond with a generative answer. # Optional. Respond with a generative answer.
      "prompt": "A String", # Required. The prompt to use for the generative answer.
    },
    "respondImmediately": { # The agent will immediately respond with a preconfigured response. # Optional. Immediately respond with a preconfigured response.
      "responses": [ # Required. The canned responses for the agent to choose from. The response is chosen randomly.
        { # Represents a response from the agent.
          "disabled": True or False, # Optional. Whether the response is disabled. Disabled responses are not used by the agent.
          "text": "A String", # Required. Text for the agent to respond with.
        },
      ],
    },
    "transferAgent": { # The agent will transfer the conversation to a different agent. # Optional. Transfer the conversation to a different agent.
      "agent": "A String", # Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: `projects/{project}/locations/{location}/apps/{app}/agents/{agent}`
    },
  },
  "codeCallback": { # Guardrail that blocks the conversation based on the code callbacks provided. # Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution.
    "afterAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "afterModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeAgentCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
    "beforeModelCallback": { # A callback defines the custom logic to be executed at various stages of agent interaction. # Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback will be executed multiple times. Each callback function is expected to return a structure (e.g., a dict or object) containing at least: - 'decision': Either 'OK' or 'TRIGGER'. - 'reason': A string explaining the decision. A 'TRIGGER' decision may halt further processing.
      "description": "A String", # Optional. Human-readable description of the callback.
      "disabled": True or False, # Optional. Whether the callback is disabled. Disabled callbacks are ignored by the agent.
      "proactiveExecutionEnabled": True or False, # Optional. If enabled, the callback will also be executed on intermediate model outputs. This setting only affects after model callback. **ENABLE WITH CAUTION**. Typically after model callback only needs to be executed after receiving all model responses. Enabling proactive execution may have negative implication on the execution cost and latency, and should only be enabled in rare situations.
      "pythonCode": "A String", # Required. The python code to execute for the callback.
    },
  },
  "contentFilter": { # Guardrail that bans certain content from being used in the conversation. # Optional. Guardrail that bans certain content from being used in the conversation.
    "bannedContents": [ # Optional. List of banned phrases. Applies to both user inputs and agent responses.
      "A String",
    ],
    "bannedContentsInAgentResponse": [ # Optional. List of banned phrases. Applies only to agent responses.
      "A String",
    ],
    "bannedContentsInUserInput": [ # Optional. List of banned phrases. Applies only to user inputs.
      "A String",
    ],
    "disregardDiacritics": True or False, # Optional. If true, diacritics are ignored during matching.
    "matchType": "A String", # Required. Match type for the content filter.
  },
  "createTime": "A String", # Output only. Timestamp when the guardrail was created.
  "description": "A String", # Optional. Description of the guardrail.
  "displayName": "A String", # Required. Display name of the guardrail.
  "enabled": True or False, # Optional. Whether the guardrail is enabled.
  "etag": "A String", # Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "llmPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
    "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
    "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
    "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
    "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
      "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
      "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
    },
    "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
    "prompt": "A String", # Required. Policy prompt.
  },
  "llmPromptSecurity": { # Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification. # Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification.
    "customPolicy": { # Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification. # Optional. Use a user-defined LlmPolicy to configure the security guardrail.
      "allowShortUtterance": True or False, # Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.
      "failOpen": True or False, # Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
      "maxConversationMessages": 42, # Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
      "modelSettings": { # Model settings contains various configurations for the LLM model. # Optional. Model settings.
        "model": "A String", # Optional. The LLM model that the agent should use. If not set, the agent will inherit the model from its parent agent.
        "temperature": 3.14, # Optional. If set, this temperature will be used for the LLM model. Temperature controls the randomness of the model's responses. Lower temperatures produce responses that are more predictable. Higher temperatures produce responses that are more creative.
      },
      "policyScope": "A String", # Required. Defines when to apply the policy check during the conversation. If set to `POLICY_SCOPE_UNSPECIFIED`, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
      "prompt": "A String", # Required. Policy prompt.
    },
    "defaultSettings": { # Configuration for default system security settings. # Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
      "defaultPromptTemplate": "A String", # Output only. The default prompt template used by the system. This field is for display purposes to show the user what prompt the system uses by default. It is OUTPUT_ONLY.
    },
    "failOpen": True or False, # Optional. Determines the behavior when the guardrail encounters an LLM error. - If true: the guardrail is bypassed. - If false (default): the guardrail triggers/blocks. Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
  },
  "modelSafety": { # Model safety settings overrides. When this is set, it will override the default settings and trigger the guardrail if the response is considered unsafe. # Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings.
    "safetySettings": [ # Required. List of safety settings.
      { # Safety setting.
        "category": "A String", # Required. The harm category.
        "threshold": "A String", # Required. The harm block threshold.
      },
    ],
  },
  "name": "A String", # Identifier. The unique identifier of the guardrail. Format: `projects/{project}/locations/{location}/apps/{app}/guardrails/{guardrail}`
  "updateTime": "A String", # Output only. Timestamp when the guardrail was last updated.
}
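
Example (not part of the generated reference): the 'codeCallback' fields above state that each callback returns a structure with at least 'decision' ('OK' or 'TRIGGER') and 'reason'. Below is a hedged sketch of code that could be supplied in 'pythonCode'; the function name and the specific check are invented for illustration, and only the returned shape is taken from the field documentation.

  # Hypothetical callback body; only the {'decision', 'reason'} shape is documented.
  def before_model_callback(user_input: str) -> dict:
      if 'account number' in user_input.lower():
          # Per the documentation, a 'TRIGGER' decision may halt further processing.
          return {'decision': 'TRIGGER', 'reason': 'Possible account number in user input.'}
      return {'decision': 'OK', 'reason': 'No sensitive content detected.'}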
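
Example (not part of the generated reference): a read-modify-write sketch for patch(), reusing the hypothetical 'guardrails' collection object from the list example above. Sending back the etag read via get() protects against concurrent edits (an empty etag overwrites them), and updateMask restricts the write to the named field.

  # Hedged sketch: update only the content filter of an existing guardrail.
  # Resource names are placeholders; assumes the guardrail already has a
  # contentFilter, whose required matchType is carried over unchanged.
  name = 'projects/my-project/locations/global/apps/my-app/guardrails/my-guardrail'
  current = guardrails.get(name=name).execute()

  body = {
      'etag': current.get('etag', ''),  # guards the read-modify-write cycle
      'contentFilter': {
          'matchType': current['contentFilter']['matchType'],  # required field
          'bannedContents': ['example banned phrase'],
      },
  }
  # Only paths named in updateMask are written; other fields keep their values.
  updated = guardrails.patch(name=name, body=body, updateMask='contentFilter').execute()
  print(updated['updateTime'])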