classifyContent(body=None, x__xgafv=None)
Analyze a piece of content with the provided set of policies.
close()
Close httplib2 connections.
classifyContent(body=None, x__xgafv=None)
Analyze a piece of content with the provided set of policies.

Args:
  body: object, The request body.
    The object takes the form of:

{ # Request proto for ClassifyContent RPC.
  "classifierVersion": "A String", # Optional. Version of the classifier to use. If not specified, the latest version will be used.
  "context": { # Context about the input that will be used to help with the classification. # Optional. Context about the input that will be used to help with the classification.
    "prompt": "A String", # Optional. Prompt that generated the model response.
  },
  "input": { # Content to be classified. # Required. Content to be classified.
    "textInput": { # Text input to be classified. # Content in text format.
      "content": "A String", # Actual piece of text to be classified.
      "languageCode": "A String", # Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it.
    },
  },
  "policies": [ # Required. List of policies to classify against.
    { # List of policies to classify against.
      "policyType": "A String", # Required. Type of the policy.
      "threshold": 3.14, # Optional. Score threshold to use when deciding if the content is violative or non-violative. If not specified, the default threshold of 0.5 for the policy will be used.
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response proto for ClassifyContent RPC.
  "policyResults": [ # Results of the classification for each policy.
    { # Result for one policy against the corresponding input.
      "policyType": "A String", # Type of the policy.
      "score": 3.14, # Final score for the results of this policy.
      "violationResult": "A String", # Result of the classification for the policy.
    },
  ],
}
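For illustration, a minimal sketch of calling this method through google-api-python-client. The service name ("checks"), version ("v1alpha"), resource name (aisafety()), and the "DANGEROUS_CONTENT" policy type are assumptions based on this page's context, not guarantees; configured credentials (e.g., application default credentials) are also assumed.

    from googleapiclient.discovery import build

    # Assumed service name and version; adjust to match your API.
    service = build("checks", "v1alpha")

    request_body = {
        "input": {
            "textInput": {
                "content": "Some user-provided text to screen.",
                "languageCode": "en",  # Optional; detected if omitted or invalid.
            }
        },
        "policies": [
            {
                "policyType": "DANGEROUS_CONTENT",  # Hypothetical policy type.
                "threshold": 0.6,  # Optional; defaults to 0.5 per policy.
            }
        ],
    }

    response = service.aisafety().classifyContent(body=request_body).execute()

    # Each entry in policyResults carries the policy type, the final
    # score, and the violation verdict for that policy.
    for result in response.get("policyResults", []):
        print(result["policyType"], result["score"], result["violationResult"])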
close()
Close httplib2 connections.
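As a usage note, close() releases the underlying httplib2 connections when you are done with the service object; recent versions of google-api-python-client also let the service object act as a context manager, which calls close() on exit. A small sketch, reusing the assumed service name and version from the example above:

    from googleapiclient.discovery import build

    # Explicit cleanup:
    service = build("checks", "v1alpha")  # Assumed service name/version.
    try:
        ...  # Issue classifyContent calls here.
    finally:
        service.close()  # Release httplib2 connections.

    # Or as a context manager, which closes the connections on exit:
    with build("checks", "v1alpha") as service:
        ...  # Issue classifyContent calls here.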