Cloud Document AI API . projects . locations . processors . processorVersions . evaluations

Instance Methods

close()

Close httplib2 connections.

get(name, x__xgafv=None)

Retrieves a specific evaluation.

list(parent, pageSize=None, pageToken=None, x__xgafv=None)

Retrieves a set of evaluations for a given processor version.

list_next()

Retrieves the next page of results.

Method Details

close()
Close httplib2 connections.
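
For illustration, a minimal sketch of releasing the underlying connections with the google-api-python-client library. The `documentai` service name and `v1` API version are assumptions for the example, not values taken from this reference:

  from googleapiclient.discovery import build

  service = build("documentai", "v1")
  try:
      # ... issue get()/list() calls on
      # service.projects().locations().processors().processorVersions().evaluations() ...
      pass
  finally:
      # Releases the httplib2 connections held by the client once it is no longer needed.
      service.close()

In recent versions of the library the built resource can also be used as a context manager (`with build(...) as service:`), which calls close() on exit.
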
get(name, x__xgafv=None)
Retrieves a specific evaluation.

Args:
  name: string, Required. The resource name of the Evaluation to get. `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}/evaluations/{evaluation}` (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An evaluation of a ProcessorVersion's performance.
  "allEntitiesMetrics": { # Metrics across multiple confidence levels. # Metrics for all the entities in aggregate.
    "auprc": 3.14, # The calculated area under the precision recall curve (AUPRC), computed by integrating over all confidence thresholds.
    "auprcExact": 3.14, # The AUPRC for metrics with fuzzy matching disabled, i.e., exact matching only.
    "confidenceLevelMetrics": [ # Metrics across confidence levels with fuzzy matching enabled.
      { # Evaluation metrics at a specific confidence level.
        "confidenceLevel": 3.14, # The confidence level.
        "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
          "f1Score": 3.14, # The calculated F1 score.
          "falseNegativesCount": 42, # The number of false negatives.
          "falsePositivesCount": 42, # The number of false positives.
          "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
          "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
          "precision": 3.14, # The calculated precision.
          "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
          "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
          "recall": 3.14, # The calculated recall.
          "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
          "truePositivesCount": 42, # The number of true positives.
        },
      },
    ],
    "confidenceLevelMetricsExact": [ # Metrics across confidence levels with only exact matching.
      { # Evaluation metrics at a specific confidence level.
        "confidenceLevel": 3.14, # The confidence level.
        "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
          "f1Score": 3.14, # The calculated F1 score.
          "falseNegativesCount": 42, # The number of false negatives.
          "falsePositivesCount": 42, # The number of false positives.
          "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
          "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
          "precision": 3.14, # The calculated precision.
          "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
          "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
          "recall": 3.14, # The calculated recall.
          "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
          "truePositivesCount": 42, # The number of true positives.
        },
      },
    ],
    "estimatedCalibrationError": 3.14, # The Estimated Calibration Error (ECE) of the confidence of the predicted entities.
    "estimatedCalibrationErrorExact": 3.14, # The ECE for the predicted entities with fuzzy matching disabled, i.e., exact matching only.
    "metricsType": "A String", # The metrics type for the label.
  },
  "createTime": "A String", # The time that the evaluation was created.
  "documentCounters": { # Evaluation counters for the documents that were used. # Counters for the documents used in the evaluation.
    "evaluatedDocumentsCount": 42, # How many documents were used in the evaluation.
    "failedDocumentsCount": 42, # How many documents were not included in the evaluation as Document AI failed to process them.
    "inputDocumentsCount": 42, # How many documents were sent for evaluation.
    "invalidDocumentsCount": 42, # How many documents were not included in the evaluation as they didn't pass validation.
  },
  "entityMetrics": { # Metrics across confidence levels, for different entities.
    "a_key": { # Metrics across multiple confidence levels.
      "auprc": 3.14, # The calculated area under the precision recall curve (AUPRC), computed by integrating over all confidence thresholds.
      "auprcExact": 3.14, # The AUPRC for metrics with fuzzy matching disabled, i.e., exact matching only.
      "confidenceLevelMetrics": [ # Metrics across confidence levels with fuzzy matching enabled.
        { # Evaluation metrics at a specific confidence level.
          "confidenceLevel": 3.14, # The confidence level.
          "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
            "f1Score": 3.14, # The calculated F1 score.
            "falseNegativesCount": 42, # The number of false negatives.
            "falsePositivesCount": 42, # The number of false positives.
            "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
            "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
            "precision": 3.14, # The calculated precision.
            "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
            "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
            "recall": 3.14, # The calculated recall.
            "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
            "truePositivesCount": 42, # The number of true positives.
          },
        },
      ],
      "confidenceLevelMetricsExact": [ # Metrics across confidence levels with only exact matching.
        { # Evaluation metrics at a specific confidence level.
          "confidenceLevel": 3.14, # The confidence level.
          "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
            "f1Score": 3.14, # The calculated F1 score.
            "falseNegativesCount": 42, # The number of false negatives.
            "falsePositivesCount": 42, # The number of false positives.
            "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
            "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
            "precision": 3.14, # The calculated precision.
            "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
            "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
            "recall": 3.14, # The calculated recall.
            "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
            "truePositivesCount": 42, # The number of true positives.
          },
        },
      ],
      "estimatedCalibrationError": 3.14, # The Estimated Calibration Error (ECE) of the confidence of the predicted entities.
      "estimatedCalibrationErrorExact": 3.14, # The ECE for the predicted entities with fuzzy matching disabled, i.e., exact matching only.
      "metricsType": "A String", # The metrics type for the label.
    },
  },
  "kmsKeyName": "A String", # The KMS key name used for encryption.
  "kmsKeyVersionName": "A String", # The KMS key version with which data is encrypted.
  "name": "A String", # The resource name of the evaluation. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}/evaluations/{evaluation}`
}
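
For reference, a minimal sketch of calling this method with the google-api-python-client library. The `documentai` service name, the `v1` API version, and the resource IDs below are illustrative assumptions, not values taken from this reference:

  from googleapiclient.discovery import build

  # Assumes Application Default Credentials are configured.
  service = build("documentai", "v1")

  # Hypothetical resource name -- substitute your own project, location,
  # processor, processor version, and evaluation IDs.
  evaluation_name = (
      "projects/my-project/locations/us/processors/my-processor"
      "/processorVersions/my-version/evaluations/my-evaluation"
  )

  evaluation = (
      service.projects()
      .locations()
      .processors()
      .processorVersions()
      .evaluations()
      .get(name=evaluation_name)
      .execute()
  )
  print(evaluation["name"], evaluation.get("createTime"))
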
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Retrieves a set of evaluations for a given processor version.

Args:
  parent: string, Required. The resource name of the ProcessorVersion to list evaluations for. `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}` (required)
  pageSize: integer, The standard list page size. If unspecified, at most `5` evaluations are returned. The maximum value is `100`. Values above `100` are coerced to `100`.
  pageToken: string, A page token, received from a previous `ListEvaluations` call. Provide this to retrieve the subsequent page.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response from `ListEvaluations`.
  "evaluations": [ # The evaluations requested.
    { # An evaluation of a ProcessorVersion's performance.
      "allEntitiesMetrics": { # Metrics across multiple confidence levels. # Metrics for all the entities in aggregate.
        "auprc": 3.14, # The calculated area under the precision recall curve (AUPRC), computed by integrating over all confidence thresholds.
        "auprcExact": 3.14, # The AUPRC for metrics with fuzzy matching disabled, i.e., exact matching only.
        "confidenceLevelMetrics": [ # Metrics across confidence levels with fuzzy matching enabled.
          { # Evaluation metrics at a specific confidence level.
            "confidenceLevel": 3.14, # The confidence level.
            "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
              "f1Score": 3.14, # The calculated F1 score.
              "falseNegativesCount": 42, # The number of false negatives.
              "falsePositivesCount": 42, # The number of false positives.
              "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
              "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
              "precision": 3.14, # The calculated precision.
              "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
              "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
              "recall": 3.14, # The calculated recall.
              "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
              "truePositivesCount": 42, # The number of true positives.
            },
          },
        ],
        "confidenceLevelMetricsExact": [ # Metrics across confidence levels with only exact matching.
          { # Evaluation metrics at a specific confidence level.
            "confidenceLevel": 3.14, # The confidence level.
            "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
              "f1Score": 3.14, # The calculated F1 score.
              "falseNegativesCount": 42, # The number of false negatives.
              "falsePositivesCount": 42, # The number of false positives.
              "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
              "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
              "precision": 3.14, # The calculated precision.
              "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
              "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
              "recall": 3.14, # The calculated recall.
              "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
              "truePositivesCount": 42, # The number of true positives.
            },
          },
        ],
        "estimatedCalibrationError": 3.14, # The Estimated Calibration Error (ECE) of the confidence of the predicted entities.
        "estimatedCalibrationErrorExact": 3.14, # The ECE for the predicted entities with fuzzy matching disabled, i.e., exact matching only.
        "metricsType": "A String", # The metrics type for the label.
      },
      "createTime": "A String", # The time that the evaluation was created.
      "documentCounters": { # Evaluation counters for the documents that were used. # Counters for the documents used in the evaluation.
        "evaluatedDocumentsCount": 42, # How many documents were used in the evaluation.
        "failedDocumentsCount": 42, # How many documents were not included in the evaluation as Document AI failed to process them.
        "inputDocumentsCount": 42, # How many documents were sent for evaluation.
        "invalidDocumentsCount": 42, # How many documents were not included in the evaluation as they didn't pass validation.
      },
      "entityMetrics": { # Metrics across confidence levels, for different entities.
        "a_key": { # Metrics across multiple confidence levels.
          "auprc": 3.14, # The calculated area under the precision recall curve (AUPRC), computed by integrating over all confidence thresholds.
          "auprcExact": 3.14, # The AUPRC for metrics with fuzzy matching disabled, i.e., exact matching only.
          "confidenceLevelMetrics": [ # Metrics across confidence levels with fuzzy matching enabled.
            { # Evaluation metrics at a specific confidence level.
              "confidenceLevel": 3.14, # The confidence level.
              "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
                "f1Score": 3.14, # The calculated F1 score.
                "falseNegativesCount": 42, # The number of false negatives.
                "falsePositivesCount": 42, # The number of false positives.
                "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
                "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
                "precision": 3.14, # The calculated precision.
                "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
                "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
                "recall": 3.14, # The calculated recall.
                "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
                "truePositivesCount": 42, # The number of true positives.
              },
            },
          ],
          "confidenceLevelMetricsExact": [ # Metrics across confidence levels with only exact matching.
            { # Evaluation metrics at a specific confidence level.
              "confidenceLevel": 3.14, # The confidence level.
              "metrics": { # Evaluation metrics, either in aggregate or about a specific entity. # The metrics at the specific confidence level.
                "f1Score": 3.14, # The calculated F1 score.
                "falseNegativesCount": 42, # The number of false negatives.
                "falsePositivesCount": 42, # The number of false positives.
                "groundTruthDocumentCount": 42, # The number of documents with a ground truth occurrence.
                "groundTruthOccurrencesCount": 42, # The number of occurrences in ground truth documents.
                "precision": 3.14, # The calculated precision.
                "predictedDocumentCount": 42, # The number of documents with a predicted occurrence.
                "predictedOccurrencesCount": 42, # The number of occurrences in predicted documents.
                "recall": 3.14, # The calculated recall.
                "totalDocumentsCount": 42, # The number of documents that had an occurrence of this label.
                "truePositivesCount": 42, # The number of true positives.
              },
            },
          ],
          "estimatedCalibrationError": 3.14, # The Estimated Calibration Error (ECE) of the confidence of the predicted entities.
          "estimatedCalibrationErrorExact": 3.14, # The ECE for the predicted entities with fuzzy matching disabled, i.e., exact matching only.
          "metricsType": "A String", # The metrics type for the label.
        },
      },
      "kmsKeyName": "A String", # The KMS key name used for encryption.
      "kmsKeyVersionName": "A String", # The KMS key version with which data is encrypted.
      "name": "A String", # The resource name of the evaluation. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}/evaluations/{evaluation}`
    },
  ],
  "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
}
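
For reference, a minimal sketch of listing evaluations with the google-api-python-client library, under the same assumptions as the get() example above; the parent resource name is a placeholder:

  from googleapiclient.discovery import build

  service = build("documentai", "v1")

  # Hypothetical parent -- substitute your own project, location, processor,
  # and processor version IDs.
  parent = (
      "projects/my-project/locations/us/processors/my-processor"
      "/processorVersions/my-version"
  )

  evaluations = (
      service.projects()
      .locations()
      .processors()
      .processorVersions()
      .evaluations()
  )

  response = evaluations.list(parent=parent, pageSize=100).execute()
  for evaluation in response.get("evaluations", []):
      print(evaluation["name"])
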
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
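
A sketch of walking every page with list_next(), reusing the placeholder `evaluations` collection and `parent` from the list() example above:

  request = evaluations.list(parent=parent, pageSize=100)
  while request is not None:
      response = request.execute()
      for evaluation in response.get("evaluations", []):
          print(evaluation["name"])
      # list_next() returns None once there are no more pages.
      request = evaluations.list_next(previous_request=request, previous_response=response)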