Gemini Enterprise for Customer Experience API: projects.locations.apps.scheduledEvaluationRuns

Instance Methods

close()

Close httplib2 connections.

create(parent, body=None, scheduledEvaluationRunId=None, x__xgafv=None)

Creates a scheduled evaluation run.

delete(name, etag=None, x__xgafv=None)

Deletes a scheduled evaluation run.

get(name, x__xgafv=None)

Gets details of the specified scheduled evaluation run.

list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)

Lists all scheduled evaluation runs in the given app.

list_next()

Retrieves the next page of results.

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates a scheduled evaluation run.
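
A minimal sketch of obtaining this resource collection with the google-api-python-client library. The discovery service name and version below ("ces", "v1") are placeholders, not confirmed identifiers for this API; substitute the values from your discovery document. Note that build() falls back to Application Default Credentials unless credentials are passed explicitly.

  from googleapiclient import discovery

  # Build the top-level service client. "ces" and "v1" are assumed placeholders
  # for the actual serviceName and version of the Gemini Enterprise for
  # Customer Experience API.
  service = discovery.build("ces", "v1")

  # Navigate to the scheduledEvaluationRuns collection used by the methods
  # documented below.
  runs = service.projects().locations().apps().scheduledEvaluationRuns()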

Method Details

close()
Close httplib2 connections.
create(parent, body=None, scheduledEvaluationRunId=None, x__xgafv=None)
Creates a scheduled evaluation run.

Args:
  parent: string, Required. The app to create the scheduled evaluation run for. Format: `projects/{project}/locations/{location}/apps/{app}` (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a scheduled evaluation run configuration.
  "active": True or False, # Optional. Whether this config is active.
  "createTime": "A String", # Output only. Timestamp when the scheduled evaluation run was created.
  "createdBy": "A String", # Output only. The user who created the scheduled evaluation run.
  "description": "A String", # Optional. User-defined description of the scheduled evaluation run.
  "displayName": "A String", # Required. User-defined display name of the scheduled evaluation run config.
  "etag": "A String", # Output only. Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "lastCompletedRun": "A String", # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
  "lastUpdatedBy": "A String", # Output only. The user who last updated the evaluation.
  "name": "A String", # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  "nextScheduledExecutionTime": "A String", # Output only. The next time this is scheduled to execute.
  "request": { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
    "app": "A String", # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
    "appVersion": "A String", # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
    "config": { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
      "evaluationChannel": "A String", # Optional. The channel to evaluate.
      "inputAudioConfig": { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
        "audioEncoding": "A String", # Required. The encoding of the input audio data.
        "noiseSuppressionLevel": "A String", # Optional. The level of noise suppression to apply to the input audio. Available values are "low", "moderate", "high", "very_high".
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the input audio data.
      },
      "outputAudioConfig": { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
        "audioEncoding": "A String", # Required. The encoding of the output audio data.
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the output audio data.
      },
      "toolCallBehaviour": "A String", # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
    },
    "displayName": "A String", # Optional. The display name of the evaluation run.
    "evaluationDataset": "A String", # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
    "evaluations": [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
      "A String",
    ],
    "generateLatencyReport": True or False, # Optional. Whether to generate a latency report for the evaluation run.
    "goldenRunMethod": "A String", # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
    "optimizationConfig": { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
      "assistantSession": "A String", # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
      "errorMessage": "A String", # Output only. The error message if the optimization run failed.
      "generateLossReport": True or False, # Optional. Whether to generate a loss report.
      "lossReport": { # Output only. The generated loss report.
        "a_key": "", # Properties of the object.
      },
      "reportSummary": "A String", # Output only. The summary of the loss report.
      "shouldSuggestFix": True or False, # Output only. Whether to suggest a fix for the losses.
      "status": "A String", # Output only. The status of the optimization run.
    },
    "personaRunConfigs": [ # Optional. The configuration to use for the run per persona.
      { # Configuration for running an evaluation for a specific persona.
        "persona": "A String", # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
        "taskCount": 42, # Optional. The number of tasks to run for the persona.
      },
    ],
    "runCount": 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
    "scheduledEvaluationRun": "A String", # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
  },
  "schedulingConfig": { # Eval scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    "daysOfWeek": [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    "frequency": "A String", # Required. The frequency with which to run the eval.
    "startTime": "A String", # Required. Timestamp when the eval should start.
  },
  "totalExecutions": 42, # Output only. The total number of times this run has been executed.
  "updateTime": "A String", # Output only. Timestamp when the evaluation was last updated.
}

  scheduledEvaluationRunId: string, Optional. The ID to use for the scheduled evaluation run, which will become the final component of the scheduled evaluation run's resource name. If not provided, a unique ID will be automatically assigned.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a scheduled evaluation run configuration.
  "active": True or False, # Optional. Whether this config is active.
  "createTime": "A String", # Output only. Timestamp when the scheduled evaluation run was created.
  "createdBy": "A String", # Output only. The user who created the scheduled evaluation run.
  "description": "A String", # Optional. User-defined description of the scheduled evaluation run.
  "displayName": "A String", # Required. User-defined display name of the scheduled evaluation run config.
  "etag": "A String", # Output only. Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "lastCompletedRun": "A String", # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
  "lastUpdatedBy": "A String", # Output only. The user who last updated the evaluation.
  "name": "A String", # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  "nextScheduledExecutionTime": "A String", # Output only. The next time this is scheduled to execute.
  "request": { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
    "app": "A String", # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
    "appVersion": "A String", # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
    "config": { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
      "evaluationChannel": "A String", # Optional. The channel to evaluate.
      "inputAudioConfig": { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
        "audioEncoding": "A String", # Required. The encoding of the input audio data.
        "noiseSuppressionLevel": "A String", # Optional. The level of noise suppression to apply to the input audio. Available values are "low", "moderate", "high", "very_high".
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the input audio data.
      },
      "outputAudioConfig": { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
        "audioEncoding": "A String", # Required. The encoding of the output audio data.
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the output audio data.
      },
      "toolCallBehaviour": "A String", # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
    },
    "displayName": "A String", # Optional. The display name of the evaluation run.
    "evaluationDataset": "A String", # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
    "evaluations": [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
      "A String",
    ],
    "generateLatencyReport": True or False, # Optional. Whether to generate a latency report for the evaluation run.
    "goldenRunMethod": "A String", # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
    "optimizationConfig": { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
      "assistantSession": "A String", # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
      "errorMessage": "A String", # Output only. The error message if the optimization run failed.
      "generateLossReport": True or False, # Optional. Whether to generate a loss report.
      "lossReport": { # Output only. The generated loss report.
        "a_key": "", # Properties of the object.
      },
      "reportSummary": "A String", # Output only. The summary of the loss report.
      "shouldSuggestFix": True or False, # Output only. Whether to suggest a fix for the losses.
      "status": "A String", # Output only. The status of the optimization run.
    },
    "personaRunConfigs": [ # Optional. The configuration to use for the run per persona.
      { # Configuration for running an evaluation for a specific persona.
        "persona": "A String", # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
        "taskCount": 42, # Optional. The number of tasks to run for the persona.
      },
    ],
    "runCount": 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
    "scheduledEvaluationRun": "A String", # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
  },
  "schedulingConfig": { # Eval scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    "daysOfWeek": [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    "frequency": "A String", # Required. The frequency with which to run the eval.
    "startTime": "A String", # Required. Timestamp when the eval should start.
  },
  "totalExecutions": 42, # Output only. The total number of times this run has been executed.
  "updateTime": "A String", # Output only. Timestamp when the evaluation was last updated.
}
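
A hedged example of calling create() with a minimal body, assuming the `runs` collection object from the build sketch above. The project, location, app, evaluation, ID, and frequency values are placeholders, not values taken from this reference.

  # `runs` is the scheduledEvaluationRuns collection from the build sketch above.
  parent = "projects/my-project/locations/us-central1/apps/my-app"  # placeholder names

  body = {
      "displayName": "Nightly regression eval",
      "active": True,
      "request": {
          "app": parent,
          # Placeholder evaluation resource name.
          "evaluations": [parent + "/evaluations/my-evaluation"],
      },
      "schedulingConfig": {
          # "WEEKLY" is an assumed enum value for a weekly cadence; confirm the
          # accepted frequency values for this API before relying on it.
          "frequency": "WEEKLY",
          "startTime": "2025-01-06T08:00:00Z",
          "daysOfWeek": [1, 3, 5],  # Monday, Wednesday, Friday
      },
  }

  created = runs.create(
      parent=parent,
      body=body,
      scheduledEvaluationRunId="nightly-eval",  # optional; if omitted, an ID is assigned
  ).execute()
  print(created["name"], created.get("nextScheduledExecutionTime"))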
delete(name, etag=None, x__xgafv=None)
Deletes a scheduled evaluation run.

Args:
  name: string, Required. The resource name of the scheduled evaluation run to delete. (required)
  etag: string, Optional. The etag of the ScheduledEvaluationRun. If provided, it must match the server's etag.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}
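
A minimal delete() sketch, again assuming the `runs` collection object and a placeholder resource name. The etag argument is optional and shown only to illustrate concurrency control.

  name = ("projects/my-project/locations/us-central1/apps/my-app"
          "/scheduledEvaluationRuns/nightly-eval")  # placeholder name

  # Passing the etag from a prior get() makes the delete fail if the resource
  # changed in the meantime; omit it to delete unconditionally.
  current = runs.get(name=name).execute()
  runs.delete(name=name, etag=current.get("etag")).execute()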
get(name, x__xgafv=None)
Gets details of the specified scheduled evaluation run.

Args:
  name: string, Required. The resource name of the scheduled evaluation run to retrieve. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a scheduled evaluation run configuration.
  "active": True or False, # Optional. Whether this config is active.
  "createTime": "A String", # Output only. Timestamp when the scheduled evaluation run was created.
  "createdBy": "A String", # Output only. The user who created the scheduled evaluation run.
  "description": "A String", # Optional. User-defined description of the scheduled evaluation run.
  "displayName": "A String", # Required. User-defined display name of the scheduled evaluation run config.
  "etag": "A String", # Output only. Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "lastCompletedRun": "A String", # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
  "lastUpdatedBy": "A String", # Output only. The user who last updated the evaluation.
  "name": "A String", # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  "nextScheduledExecutionTime": "A String", # Output only. The next time this is scheduled to execute.
  "request": { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
    "app": "A String", # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
    "appVersion": "A String", # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
    "config": { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
      "evaluationChannel": "A String", # Optional. The channel to evaluate.
      "inputAudioConfig": { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
        "audioEncoding": "A String", # Required. The encoding of the input audio data.
        "noiseSuppressionLevel": "A String", # Optional. The level of noise suppression to apply to the input audio. Available values are "low", "moderate", "high", "very_high".
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the input audio data.
      },
      "outputAudioConfig": { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
        "audioEncoding": "A String", # Required. The encoding of the output audio data.
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the output audio data.
      },
      "toolCallBehaviour": "A String", # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
    },
    "displayName": "A String", # Optional. The display name of the evaluation run.
    "evaluationDataset": "A String", # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
    "evaluations": [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
      "A String",
    ],
    "generateLatencyReport": True or False, # Optional. Whether to generate a latency report for the evaluation run.
    "goldenRunMethod": "A String", # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
    "optimizationConfig": { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
      "assistantSession": "A String", # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
      "errorMessage": "A String", # Output only. The error message if the optimization run failed.
      "generateLossReport": True or False, # Optional. Whether to generate a loss report.
      "lossReport": { # Output only. The generated loss report.
        "a_key": "", # Properties of the object.
      },
      "reportSummary": "A String", # Output only. The summary of the loss report.
      "shouldSuggestFix": True or False, # Output only. Whether to suggest a fix for the losses.
      "status": "A String", # Output only. The status of the optimization run.
    },
    "personaRunConfigs": [ # Optional. The configuration to use for the run per persona.
      { # Configuration for running an evaluation for a specific persona.
        "persona": "A String", # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
        "taskCount": 42, # Optional. The number of tasks to run for the persona.
      },
    ],
    "runCount": 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
    "scheduledEvaluationRun": "A String", # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
  },
  "schedulingConfig": { # Eval scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    "daysOfWeek": [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    "frequency": "A String", # Required. The frequency with which to run the eval.
    "startTime": "A String", # Required. Timestamp when the eval should start.
  },
  "totalExecutions": 42, # Output only. The total number of times this run has been executed.
  "updateTime": "A String", # Output only. Timestamp when the evaluation was last updated.
}
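
A minimal get() sketch, assuming the `runs` collection object from the build sketch above and a placeholder resource name.

  name = ("projects/my-project/locations/us-central1/apps/my-app"
          "/scheduledEvaluationRuns/nightly-eval")  # placeholder name

  run = runs.get(name=name).execute()
  print(run["displayName"], run.get("active"), run.get("nextScheduledExecutionTime"))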
list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists all scheduled evaluation runs in the given app.

Args:
  parent: string, Required. The resource name of the app to list scheduled evaluation runs from. (required)
  filter: string, Optional. Filter to be applied when listing the scheduled evaluation runs. See https://google.aip.dev/160 for more details. Currently supports filtering by: * request.evaluations:evaluation_id * request.evaluation_dataset:evaluation_dataset_id
  orderBy: string, Optional. Field to sort by. Supported fields are: "name" (ascending), "create_time" (descending), "update_time" (descending), "next_scheduled_execution" (ascending), and "last_completed_run.create_time" (descending). If not included, "update_time" will be the default. See https://google.aip.dev/132#ordering for more details.
  pageSize: integer, Optional. Requested page size. Server may return fewer items than requested. If unspecified, server will pick an appropriate default.
  pageToken: string, Optional. The next_page_token value returned from a previous list EvaluationService.ListScheduledEvaluationRuns call.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for EvaluationService.ListScheduledEvaluationRuns.
  "nextPageToken": "A String", # A token that can be sent as ListScheduledEvaluationRunsRequest.page_token to retrieve the next page. Absence of this field indicates there are no subsequent pages.
  "scheduledEvaluationRuns": [ # The list of scheduled evaluation runs.
    { # Represents a scheduled evaluation run configuration.
      "active": True or False, # Optional. Whether this config is active.
      "createTime": "A String", # Output only. Timestamp when the scheduled evaluation run was created.
      "createdBy": "A String", # Output only. The user who created the scheduled evaluation run.
      "description": "A String", # Optional. User-defined description of the scheduled evaluation run.
      "displayName": "A String", # Required. User-defined display name of the scheduled evaluation run config.
      "etag": "A String", # Output only. Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
      "lastCompletedRun": "A String", # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
      "lastUpdatedBy": "A String", # Output only. The user who last updated the evaluation.
      "name": "A String", # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
      "nextScheduledExecutionTime": "A String", # Output only. The next time this is scheduled to execute.
      "request": { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
        "app": "A String", # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
        "appVersion": "A String", # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
        "config": { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
          "evaluationChannel": "A String", # Optional. The channel to evaluate.
          "inputAudioConfig": { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
            "audioEncoding": "A String", # Required. The encoding of the input audio data.
            "noiseSuppressionLevel": "A String", # Optional. The level of noise suppression to apply to the input audio. Available values are "low", "moderate", "high", "very_high".
            "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the input audio data.
          },
          "outputAudioConfig": { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
            "audioEncoding": "A String", # Required. The encoding of the output audio data.
            "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the output audio data.
          },
          "toolCallBehaviour": "A String", # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
        },
        "displayName": "A String", # Optional. The display name of the evaluation run.
        "evaluationDataset": "A String", # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
        "evaluations": [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
          "A String",
        ],
        "generateLatencyReport": True or False, # Optional. Whether to generate a latency report for the evaluation run.
        "goldenRunMethod": "A String", # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
        "optimizationConfig": { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
          "assistantSession": "A String", # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
          "errorMessage": "A String", # Output only. The error message if the optimization run failed.
          "generateLossReport": True or False, # Optional. Whether to generate a loss report.
          "lossReport": { # Output only. The generated loss report.
            "a_key": "", # Properties of the object.
          },
          "reportSummary": "A String", # Output only. The summary of the loss report.
          "shouldSuggestFix": True or False, # Output only. Whether to suggest a fix for the losses.
          "status": "A String", # Output only. The status of the optimization run.
        },
        "personaRunConfigs": [ # Optional. The configuration to use for the run per persona.
          { # Configuration for running an evaluation for a specific persona.
            "persona": "A String", # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
            "taskCount": 42, # Optional. The number of tasks to run for the persona.
          },
        ],
        "runCount": 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
        "scheduledEvaluationRun": "A String", # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
      },
      "schedulingConfig": { # Eval scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
        "daysOfWeek": [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
          42,
        ],
        "frequency": "A String", # Required. The frequency with which to run the eval.
        "startTime": "A String", # Required. Timestamp when the eval should start.
      },
      "totalExecutions": 42, # Output only. The total number of times this run has been executed.
      "updateTime": "A String", # Output only. Timestamp when the evaluation was last updated.
    },
  ],
}
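
A hedged list() sketch using the documented filter and orderBy fields. The parent and evaluation ID are placeholders, and `runs` is the collection object from the build sketch above.

  parent = "projects/my-project/locations/us-central1/apps/my-app"  # placeholder

  response = runs.list(
      parent=parent,
      filter="request.evaluations:my-evaluation",  # placeholder evaluation ID
      orderBy="update_time",
      pageSize=50,
  ).execute()
  for run in response.get("scheduledEvaluationRuns", []):
      print(run["name"], run.get("nextScheduledExecutionTime"))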
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
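
The standard google-api-python-client pagination loop, sketched for this collection; the parent value is a placeholder and `runs` is the collection object from the build sketch above.

  request = runs.list(parent="projects/my-project/locations/us-central1/apps/my-app",
                      pageSize=50)
  while request is not None:
      response = request.execute()
      for run in response.get("scheduledEvaluationRuns", []):
          print(run["name"])
      # list_next() returns None once every page has been consumed.
      request = runs.list_next(previous_request=request, previous_response=response)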
        
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates a scheduled evaluation run.

Args:
  name: string, Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId} (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a scheduled evaluation run configuration.
  "active": True or False, # Optional. Whether this config is active.
  "createTime": "A String", # Output only. Timestamp when the scheduled evaluation run was created.
  "createdBy": "A String", # Output only. The user who created the scheduled evaluation run.
  "description": "A String", # Optional. User-defined description of the scheduled evaluation run.
  "displayName": "A String", # Required. User-defined display name of the scheduled evaluation run config.
  "etag": "A String", # Output only. Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "lastCompletedRun": "A String", # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
  "lastUpdatedBy": "A String", # Output only. The user who last updated the evaluation.
  "name": "A String", # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  "nextScheduledExecutionTime": "A String", # Output only. The next time this is scheduled to execute.
  "request": { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
    "app": "A String", # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
    "appVersion": "A String", # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
    "config": { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
      "evaluationChannel": "A String", # Optional. The channel to evaluate.
      "inputAudioConfig": { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
        "audioEncoding": "A String", # Required. The encoding of the input audio data.
        "noiseSuppressionLevel": "A String", # Optional. The level of noise suppression to apply to the input audio. Available values are "low", "moderate", "high", "very_high".
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the input audio data.
      },
      "outputAudioConfig": { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
        "audioEncoding": "A String", # Required. The encoding of the output audio data.
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the output audio data.
      },
      "toolCallBehaviour": "A String", # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
    },
    "displayName": "A String", # Optional. The display name of the evaluation run.
    "evaluationDataset": "A String", # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
    "evaluations": [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
      "A String",
    ],
    "generateLatencyReport": True or False, # Optional. Whether to generate a latency report for the evaluation run.
    "goldenRunMethod": "A String", # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
    "optimizationConfig": { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
      "assistantSession": "A String", # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
      "errorMessage": "A String", # Output only. The error message if the optimization run failed.
      "generateLossReport": True or False, # Optional. Whether to generate a loss report.
      "lossReport": { # Output only. The generated loss report.
        "a_key": "", # Properties of the object.
      },
      "reportSummary": "A String", # Output only. The summary of the loss report.
      "shouldSuggestFix": True or False, # Output only. Whether to suggest a fix for the losses.
      "status": "A String", # Output only. The status of the optimization run.
    },
    "personaRunConfigs": [ # Optional. The configuration to use for the run per persona.
      { # Configuration for running an evaluation for a specific persona.
        "persona": "A String", # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
        "taskCount": 42, # Optional. The number of tasks to run for the persona.
      },
    ],
    "runCount": 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
    "scheduledEvaluationRun": "A String", # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
  },
  "schedulingConfig": { # Eval scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    "daysOfWeek": [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    "frequency": "A String", # Required. The frequency with which to run the eval.
    "startTime": "A String", # Required. Timestamp when the eval should start.
  },
  "totalExecutions": 42, # Output only. The total number of times this run has been executed.
  "updateTime": "A String", # Output only. Timestamp when the evaluation was last updated.
}

  updateMask: string, Optional. Field mask is used to control which fields get updated. If the mask is not present, all fields will be updated.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a scheduled evaluation run configuration.
  "active": True or False, # Optional. Whether this config is active.
  "createTime": "A String", # Output only. Timestamp when the scheduled evaluation run was created.
  "createdBy": "A String", # Output only. The user who created the scheduled evaluation run.
  "description": "A String", # Optional. User-defined description of the scheduled evaluation run.
  "displayName": "A String", # Required. User-defined display name of the scheduled evaluation run config.
  "etag": "A String", # Output only. Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
  "lastCompletedRun": "A String", # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
  "lastUpdatedBy": "A String", # Output only. The user who last updated the evaluation.
  "name": "A String", # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  "nextScheduledExecutionTime": "A String", # Output only. The next time this is scheduled to execute.
  "request": { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
    "app": "A String", # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
    "appVersion": "A String", # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
    "config": { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
      "evaluationChannel": "A String", # Optional. The channel to evaluate.
      "inputAudioConfig": { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
        "audioEncoding": "A String", # Required. The encoding of the input audio data.
        "noiseSuppressionLevel": "A String", # Optional. The level of noise suppression to apply to the input audio. Available values are "low", "moderate", "high", "very_high".
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the input audio data.
      },
      "outputAudioConfig": { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
        "audioEncoding": "A String", # Required. The encoding of the output audio data.
        "sampleRateHertz": 42, # Required. The sample rate (in Hertz) of the output audio data.
      },
      "toolCallBehaviour": "A String", # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
    },
    "displayName": "A String", # Optional. The display name of the evaluation run.
    "evaluationDataset": "A String", # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
    "evaluations": [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
      "A String",
    ],
    "generateLatencyReport": True or False, # Optional. Whether to generate a latency report for the evaluation run.
    "goldenRunMethod": "A String", # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
    "optimizationConfig": { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
      "assistantSession": "A String", # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
      "errorMessage": "A String", # Output only. The error message if the optimization run failed.
      "generateLossReport": True or False, # Optional. Whether to generate a loss report.
      "lossReport": { # Output only. The generated loss report.
        "a_key": "", # Properties of the object.
      },
      "reportSummary": "A String", # Output only. The summary of the loss report.
      "shouldSuggestFix": True or False, # Output only. Whether to suggest a fix for the losses.
      "status": "A String", # Output only. The status of the optimization run.
    },
    "personaRunConfigs": [ # Optional. The configuration to use for the run per persona.
      { # Configuration for running an evaluation for a specific persona.
        "persona": "A String", # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
        "taskCount": 42, # Optional. The number of tasks to run for the persona.
      },
    ],
    "runCount": 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
    "scheduledEvaluationRun": "A String", # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
  },
  "schedulingConfig": { # Eval scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    "daysOfWeek": [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    "frequency": "A String", # Required. The frequency with which to run the eval.
    "startTime": "A String", # Required. Timestamp when the eval should start.
  },
  "totalExecutions": 42, # Output only. The total number of times this run has been executed.
  "updateTime": "A String", # Output only. Timestamp when the evaluation was last updated.
}
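
A hedged patch() sketch that deactivates a schedule, assuming the `runs` collection object from the build sketch above and a placeholder resource name; updateMask limits the write to the listed fields.

  name = ("projects/my-project/locations/us-central1/apps/my-app"
          "/scheduledEvaluationRuns/nightly-eval")  # placeholder name

  updated = runs.patch(
      name=name,
      body={"active": False, "description": "Paused pending dataset refresh."},
      updateMask="active,description",
  ).execute()
  print(updated["name"], updated.get("active"))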