Discovery Engine API . projects . locations . notebooks . sources

Instance Methods

batchCreate(parent, body=None, x__xgafv=None)

Creates a list of Sources.

batchDelete(parent, body=None, x__xgafv=None)

Deletes multiple sources.

close()

Close httplib2 connections.

get(name, x__xgafv=None)

Gets a Source.

uploadFile(parent, sourceId, body=None, x__xgafv=None)

Uploads a file for NotebookLM to use. Creates a Source.
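
The sketches under Method Details below are editorial additions, not part of the generated reference. They assume the google-api-python-client library, Application Default Credentials, and the "v1alpha" version string; all project, location, notebook, and source IDs are placeholders.

  # Minimal setup sketch (version string and IDs are assumptions, not values from this reference).
  from googleapiclient.discovery import build

  service = build("discoveryengine", "v1alpha")
  sources = service.projects().locations().notebooks().sources()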

Method Details

batchCreate(parent, body=None, x__xgafv=None)
Creates a list of Sources.

Args:
  parent: string, Required. The parent resource where the sources will be created. Format: projects/{project}/locations/{location}/notebooks/{notebook} (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for SourceService.BatchCreateSources method.
  "userContents": [ # Required. The UserContents to be uploaded.
    { # The "Content" messages refer to data the user wants to upload.
      "agentspaceContent": { # Agentspace content uploaded as source. # Agentspace content uploaded as source.
        "documentName": "A String", # Optional. The full document name in Agentspace.
        "engineName": "A String", # Optional. Engine to verify the permission of the document.
        "ideaforgeIdeaName": "A String", # Optional. The full idea name for IdeaForge.
      },
      "googleDriveContent": { # The content from Google Drive. # The content from Google Drive.
        "documentId": "A String", # The document id of the selected document.
        "mimeType": "A String", # The mime type of the selected document. This can be used to differentiate type of content selected in the drive picker. Use application/vnd.google-apps.document for Google Docs or application/vnd.google-apps.presentation for Google Slides.
        "sourceName": "A String", # The name to be displayed for the source.
      },
      "textContent": { # The text content uploaded as source. # The text content uploaded as source.
        "content": "A String", # The name to be displayed for the source.
        "sourceName": "A String", # The display name of the text source.
      },
      "videoContent": { # Video content uploaded as source. # The video content uploaded as source.
        "youtubeUrl": "A String", # The youtube url of the video content.
      },
      "webContent": { # The web content uploaded as source. # The web content uploaded as source.
        "sourceName": "A String", # The name to be displayed for the source.
        "url": "A String", # If URL is supplied, will fetch the webpage in the backend.
      },
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response for SourceService.BatchCreateSources method.
  "sources": [ # The Sources.
    { # Source represents a single source of content.
      "metadata": { # Represents the metadata of a source and some additional information. # Output only. Metadata about the source.
        "agentspaceMetadata": { # Metadata about an agentspace source. # Metadata for an agentspace source.
          "documentName": "A String", # Output only. The full document name in Agentspace.
          "documentTitle": "A String", # Output only. The title of the document.
        },
        "googleDocsMetadata": { # Metadata about a google doc source. # Metadata for a google doc source.
          "documentId": "A String", # Output only. The document id of the google doc.
          "revisionId": "A String", # Output only. Revision id for the doc.
        },
        "sourceAddedTimestamp": "A String", # The timestamp the source was added.
        "tokenCount": 42, # The number of tokens in the source.
        "wordCount": 42, # The word count of the source.
        "youtubeMetadata": { # Metadata about a youtube video source. # Metadata for a youtube video source.
          "channelName": "A String", # Output only. The channel name of the youtube video.
          "videoId": "A String", # Output only. The id of the youtube video.
        },
      },
      "name": "A String", # Identifier. The full resource name of the source. Format: `projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source_id}`. This field must be a UTF-8 encoded string with a length limit of 1024 characters.
      "settings": { # Allows extension of Source Settings in the BatchCreateSources (Formerly AddSource request). # Output only. Status of the source, and any failure reasons.
        "failureReason": { # Failure reason containing details about why a source failed to ingest. # Failure reason containing details about why a source failed to ingest.
          "audioTranscriptionError": { # An audio file transcription specific error. # An audio file transcription specific error.
            "languageDetectionFailed": { # Could not detect language of the file (it may not be speech). # Could not detect language of the file (it may not be speech).
            },
            "noAudioDetected": { # No audio was detected in the input file. # No audio was detected in the input file (it may have been a video).
            },
          },
          "domainBlocked": { # Error to indicate that the source was removed because the domain was blocked. # Error if the user tries to add a source from a blocked domain.
          },
          "googleDriveError": { # A google drive specific error. # A google drive specific error.
            "downloadPrevented": { # The user was prevented from downloading the file. # The user was prevented from downloading the file.
            },
          },
          "ingestionError": { # Indicates an error occurred while ingesting the source. # Indicates an error occurred while ingesting the source.
          },
          "paywallError": { # Indicates that the source is paywalled and cannot be ingested. # Indicates that the source is paywalled and cannot be ingested.
          },
          "sourceEmpty": { # Indicates that the source is empty. # Indicates that the source is empty.
          },
          "sourceLimitExceeded": { # Indicates that the user does not have space for this source. # Error if the user tries to update beyond their limits.
          },
          "sourceTooLong": { # Indicates source word count exceeded the user's limit. # Indicates source word count exceeded the user's limit.
            "wordCount": 42, # The number of words in the source.
            "wordLimit": 42, # The word count limit for the current user at the time of the upload.
          },
          "sourceUnreachable": { # Indicates that the source is unreachable. This is primarily used for sources that are added via URL. # Indicates that the source is unreachable.
            "errorDetails": "A String", # Describes why the source is unreachable.
          },
          "unknown": { # Indicates an unknown error occurred. # Indicates an unknown error occurred.
          },
          "uploadError": { # Indicates an error occurred while uploading the source. # Indicates an error occurred while uploading the source.
          },
          "youtubeError": { # A youtube specific error. # A youtube specific error.
            "videoDeleted": { # Error to indicate that the source was removed because the video was deleted. # Error to indicate that the source was removed because the video was deleted.
            },
          },
        },
        "status": "A String", # Status of the source.
      },
      "sourceId": { # SourceId is the last segment of the source's resource name. # Optional. Output only. Source id, which is the last segment of the source's resource name.
        "id": "A String", # The id of the source.
      },
      "title": "A String", # Optional. Title of the source.
    },
  ],
}
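
A hedged usage sketch for batchCreate (placeholder IDs; `sources` is the resource object built in the setup sketch above):

  # Hypothetical call: adds one web source and one pasted-text source in a single batch.
  parent = "projects/my-project/locations/global/notebooks/my-notebook"  # placeholder
  body = {
      "userContents": [
          {"webContent": {"url": "https://example.com/article", "sourceName": "Example article"}},
          {"textContent": {"content": "Raw meeting notes ...", "sourceName": "Meeting notes"}},
      ]
  }
  response = sources.batchCreate(parent=parent, body=body).execute()
  for source in response.get("sources", []):
      print(source.get("name"), source.get("settings", {}).get("status"))
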
batchDelete(parent, body=None, x__xgafv=None)
Deletes multiple sources.

Args:
  parent: string, Required. The parent resource where the sources will be deleted. Format: projects/{project}/locations/{location}/notebooks/{notebook} (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for SourceService.BatchDeleteSources method.
  "names": [ # Required. Names of sources to be deleted. Format: projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source}
    "A String",
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}
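
A hedged usage sketch for batchDelete (placeholder names; `sources` as built in the setup sketch above). The method returns an empty message on success:

  # Hypothetical call: deletes two sources by full resource name.
  parent = "projects/my-project/locations/global/notebooks/my-notebook"  # placeholder
  body = {
      "names": [
          f"{parent}/sources/source-id-1",
          f"{parent}/sources/source-id-2",
      ]
  }
  sources.batchDelete(parent=parent, body=body).execute()
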
close()
Close httplib2 connections.
get(name, x__xgafv=None)
Gets a Source.

Args:
  name: string, Required. The resource name of the source. Format: projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source} (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Source represents a single source of content.
  "metadata": { # Represents the metadata of a source and some additional information. # Output only. Metadata about the source.
    "agentspaceMetadata": { # Metadata about an agentspace source. # Metadata for an agentspace source.
      "documentName": "A String", # Output only. The full document name in Agentspace.
      "documentTitle": "A String", # Output only. The title of the document.
    },
    "googleDocsMetadata": { # Metadata about a google doc source. # Metadata for a google doc source.
      "documentId": "A String", # Output only. The document id of the google doc.
      "revisionId": "A String", # Output only. Revision id for the doc.
    },
    "sourceAddedTimestamp": "A String", # The timestamp the source was added.
    "tokenCount": 42, # The number of tokens in the source.
    "wordCount": 42, # The word count of the source.
    "youtubeMetadata": { # Metadata about a youtube video source. # Metadata for a youtube video source.
      "channelName": "A String", # Output only. The channel name of the youtube video.
      "videoId": "A String", # Output only. The id of the youtube video.
    },
  },
  "name": "A String", # Identifier. The full resource name of the source. Format: `projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source_id}`. This field must be a UTF-8 encoded string with a length limit of 1024 characters.
  "settings": { # Allows extension of Source Settings in the BatchCreateSources (Formerly AddSource request). # Output only. Status of the source, and any failure reasons.
    "failureReason": { # Failure reason containing details about why a source failed to ingest. # Failure reason containing details about why a source failed to ingest.
      "audioTranscriptionError": { # An audio file transcription specific error. # An audio file transcription specific error.
        "languageDetectionFailed": { # Could not detect language of the file (it may not be speech). # Could not detect language of the file (it may not be speech).
        },
        "noAudioDetected": { # No audio was detected in the input file. # No audio was detected in the input file (it may have been a video).
        },
      },
      "domainBlocked": { # Error to indicate that the source was removed because the domain was blocked. # Error if the user tries to add a source from a blocked domain.
      },
      "googleDriveError": { # A google drive specific error. # A google drive specific error.
        "downloadPrevented": { # The user was prevented from downloading the file. # The user was prevented from downloading the file.
        },
      },
      "ingestionError": { # Indicates an error occurred while ingesting the source. # Indicates an error occurred while ingesting the source.
      },
      "paywallError": { # Indicates that the source is paywalled and cannot be ingested. # Indicates that the source is paywalled and cannot be ingested.
      },
      "sourceEmpty": { # Indicates that the source is empty. # Indicates that the source is empty.
      },
      "sourceLimitExceeded": { # Indicates that the user does not have space for this source. # Error if the user tries to update beyond their limits.
      },
      "sourceTooLong": { # Indicates source word count exceeded the user's limit. # Indicates source word count exceeded the user's limit.
        "wordCount": 42, # The number of words in the source.
        "wordLimit": 42, # The word count limit for the current user at the time of the upload.
      },
      "sourceUnreachable": { # Indicates that the source is unreachable. This is primarily used for sources that are added via URL. # Indicates that the source is unreachable.
        "errorDetails": "A String", # Describes why the source is unreachable.
      },
      "unknown": { # Indicates an unknown error occurred. # Indicates an unknown error occurred.
      },
      "uploadError": { # Indicates an error occurred while uploading the source. # Indicates an error occurred while uploading the source.
      },
      "youtubeError": { # A youtube specific error. # A youtube specific error.
        "videoDeleted": { # Error to indicate that the source was removed because the video was deleted. # Error to indicate that the source was removed because the video was deleted.
        },
      },
    },
    "status": "A String", # Status of the source.
  },
  "sourceId": { # SourceId is the last segment of the source's resource name. # Optional. Output only. Source id, which is the last segment of the source's resource name.
    "id": "A String", # The id of the source.
  },
  "title": "A String", # Optional. Title of the source.
}
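
A hedged usage sketch for get (placeholder resource name; `sources` as built in the setup sketch above):

  # Hypothetical call: fetches one source and prints a few of the documented fields.
  name = "projects/my-project/locations/global/notebooks/my-notebook/sources/source-id-1"  # placeholder
  source = sources.get(name=name).execute()
  print(source.get("title"))
  print(source.get("metadata", {}).get("wordCount"))
  print(source.get("settings", {}).get("status"))
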
uploadFile(parent, sourceId, body=None, x__xgafv=None)
Uploads a file for NotebookLM to use. Creates a Source.

Args:
  parent: string, Required. The parent resource where the sources will be created. Format: projects/{project}/locations/{location}/notebooks/{notebook} (required)
  sourceId: string, The source id of the associated file. If not set, a source id will be generated and a new tentative source will be created. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for the SourceService.UploadSourceFile method.
  "blob": { # A reference to data stored on the filesystem, on GFS or in blobstore. # Information about the file being uploaded.
    "algorithm": "A String", # Deprecated, use one of explicit hash type fields instead. Algorithm used for calculating the hash. As of 2011/01/21, "MD5" is the only possible value for this field. New values may be added at any time.
    "bigstoreObjectRef": "A String", # Use object_id instead.
    "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
    "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
      "blobGeneration": "A String", # The blob generation id.
      "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
      "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
      "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
      "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
    },
    "compositeMedia": [ # A composite media composed of one or more media objects, set if reference_type is COMPOSITE_MEDIA. The media length field must be set to the sum of the lengths of all composite media objects. Note: All composite media must have length specified.
      { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
    ],
    "contentType": "A String", # MIME type of the data
    "contentTypeInfo": { # Detailed Content-Type information from Scotty. The Content-Type of the media will typically be filled in by the header or Scotty's best_guess, but this extended information provides the backend with more information so that it can make a better decision if needed. This is only used on media upload requests from Scotty. # Extended content type information provided for Scotty uploads.
      "bestGuess": "A String", # Scotty's best guess of what the content type of the file is.
      "fromBytes": "A String", # The content type of the file derived by looking at specific bytes (i.e. "magic bytes") of the actual file.
      "fromFileName": "A String", # The content type of the file derived from the file extension of the original file name used by the client.
      "fromHeader": "A String", # The content type of the file as specified in the request headers, multipart headers, or RUPIO start request.
      "fromUrlPath": "A String", # The content type of the file derived from the file extension of the URL path. The URL path is assumed to represent a file name (which is typically only true for agents that are providing a REST API).
    },
    "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
    "crc32cHash": 42, # For Scotty Uploads: Scotty-provided hashes for uploads For Scotty Downloads: (WARNING: DO NOT USE WITHOUT PERMISSION FROM THE SCOTTY TEAM.) A Hash provided by the agent to be used to verify the data being downloaded. Currently only supported for inline payloads. Further, only crc32c_hash is currently supported.
    "diffChecksumsResponse": { # Backend response for a Diff get checksums response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_CHECKSUMS_RESPONSE.
      "checksumsLocation": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # Exactly one of these fields must be populated. If checksums_location is filled, the server will return the corresponding contents to the user. If object_location is filled, the server will calculate the checksums based on the content there and return that to the user. For details on the format of the checksums, see http://go/scotty-diff-protocol.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
      "chunkSizeBytes": "A String", # The chunk size of checksums. Must be a multiple of 256KB.
      "objectLocation": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # If set, calculate the checksums based on the contents and return them to the caller.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
      "objectSizeBytes": "A String", # The total size of the server object.
      "objectVersion": "A String", # The object version of the object the checksums are being returned for.
    },
    "diffDownloadResponse": { # Backend response for a Diff download response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_DOWNLOAD_RESPONSE.
      "objectLocation": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The original object location.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
    },
    "diffUploadRequest": { # A Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_REQUEST.
      "checksumsInfo": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the checksums for the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received. For details on the format of the checksums, see http://go/scotty-diff-protocol.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
      "objectInfo": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
      "objectVersion": "A String", # The object version of the object that is the base version the incoming diff script will be applied to. This field will always be filled in.
    },
    "diffUploadResponse": { # Backend response for a Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_RESPONSE.
      "objectVersion": "A String", # The object version of the object at the server. Must be included in the end notification response. The version in the end notification response must correspond to the new version of the object that is now stored at the server, after the upload.
      "originalObject": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the original file for a diff upload request. Must be filled in if responding to an upload start notification.
        "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          "blobGeneration": "A String", # The blob generation id.
          "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
          "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
        "crc32cHash": 42, # crc32.c hash for the payload.
        "inline": "A String", # Media data, set if reference_type is INLINE
        "length": "A String", # Size of the data, in bytes
        "md5Hash": "A String", # MD5 hash for the payload.
        "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          "bucketName": "A String", # The name of the bucket to which this object belongs.
          "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          "objectName": "A String", # The name of the object.
        },
        "path": "A String", # Path to the data, set if reference_type is PATH
        "referenceType": "A String", # Describes what the field reference contains.
        "sha1Hash": "A String", # SHA-1 hash for the payload.
      },
    },
    "diffVersionResponse": { # Backend response for a Diff get version response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_VERSION_RESPONSE.
      "objectSizeBytes": "A String", # The total size of the server object.
      "objectVersion": "A String", # The version of the object stored at the server.
    },
    "downloadParameters": { # Parameters specific to media downloads. # Parameters for a media download.
      "allowGzipCompression": True or False, # A boolean to be returned in the response to Scotty. Allows/disallows gzip encoding of the payload content when the server thinks it's advantageous (hence, does not guarantee compression) which allows Scotty to GZip the response to the client.
      "ignoreRange": True or False, # Determining whether or not Apiary should skip the inclusion of any Content-Range header on its response to Scotty.
    },
    "filename": "A String", # Original file name
    "hash": "A String", # Deprecated, use one of explicit hash type fields instead. These two hash related fields will only be populated on Scotty based media uploads and will contain the content of the hash group in the NotificationRequest: http://cs/#google3/blobstore2/api/scotty/service/proto/upload_listener.proto&q=class:Hash Hex encoded hash value of the uploaded media.
    "hashVerified": True or False, # For Scotty uploads only. If a user sends a hash code and the backend has requested that Scotty verify the upload against the client hash, Scotty will perform the check on behalf of the backend and will reject it if the hashes don't match. This is set to true if Scotty performed this verification.
    "inline": "A String", # Media data, set if reference_type is INLINE
    "isPotentialRetry": True or False, # |is_potential_retry| is set false only when Scotty is certain that it has not sent the request before. When a client resumes an upload, this field must be set true in agent calls, because Scotty cannot be certain that it has never sent the request before due to potential failure in the session state persistence.
    "length": "A String", # Size of the data, in bytes
    "md5Hash": "A String", # Scotty-provided MD5 hash for an upload.
    "mediaId": "A String", # Media id to forward to the operation GetMedia. Can be set if reference_type is GET_MEDIA.
    "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
      "bucketName": "A String", # The name of the bucket to which this object belongs.
      "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
      "objectName": "A String", # The name of the object.
    },
    "path": "A String", # Path to the data, set if reference_type is PATH
    "referenceType": "A String", # Describes what the field reference contains.
    "sha1Hash": "A String", # Scotty-provided SHA1 hash for an upload.
    "sha256Hash": "A String", # Scotty-provided SHA256 hash for an upload.
    "timestamp": "A String", # Time at which the media data was last updated, in milliseconds since UNIX epoch
    "token": "A String", # A unique fingerprint/version id for the media data
  },
  "mediaRequestInfo": { # Extra information added to operations that support Scotty media requests. # Media upload request metadata.
    "currentBytes": "A String", # The number of current bytes uploaded or downloaded.
    "customData": "A String", # Data to be copied to backend requests. Custom data is returned to Scotty in the agent_state field, which Scotty will then provide in subsequent upload notifications.
    "diffObjectVersion": "A String", # Set if the http request info is diff encoded. The value of this field is the version number of the base revision. This is corresponding to Apiary's mediaDiffObjectVersion (//depot/google3/java/com/google/api/server/media/variable/DiffObjectVersionVariable.java). See go/esf-scotty-diff-upload for more information.
    "finalStatus": 42, # The existence of the final_status field indicates that this is the last call to the agent for this request_id. http://google3/uploader/agent/scotty_agent.proto?l=737&rcl=347601929
    "notificationType": "A String", # The type of notification received from Scotty.
    "physicalHeaders": "A String", # The physical headers provided by RequestReceivedParameters in Scotty request. type is uploader_service.KeyValuePairs.
    "requestId": "A String", # The Scotty request ID.
    "requestReceivedParamsServingInfo": "A String", # The partition of the Scotty server handling this request. type is uploader_service.RequestReceivedParamsServingInfo LINT.IfChange(request_received_params_serving_info_annotations) LINT.ThenChange()
    "totalBytes": "A String", # The total size of the file.
    "totalBytesIsEstimated": True or False, # Whether the total bytes field contains an estimated data.
  },
  "sourceId": "A String", # The source id of the associated file. If not set, a source id will be generated and a new tentative source will be created.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response for the SourceService.UploadSourceFile method.
  "mediaResponseInfo": { # This message is for backends to pass their scotty media specific fields to ESF. Backend will include this in their response message to ESF. Example: ExportFile is an rpc defined for upload using scotty from ESF. rpc ExportFile(ExportFileRequest) returns (ExportFileResponse) Message ExportFileResponse will include apiserving.MediaResponseInfo to tell ESF about data like dynamic_dropzone it needs to pass to Scotty. message ExportFileResponse { optional gdata.Media blob = 1; optional apiserving.MediaResponseInfo media_response_info = 2 } # Media upload response metadata.
    "customData": "A String", # Data to copy from backend response to the next backend requests. Custom data is returned to Scotty in the agent_state field, which Scotty will then provide in subsequent upload notifications.
    "dataStorageTransform": "A String", # Specifies any transformation to be applied to data before persisting it or retrieving from storage. E.g., encryption options for blobstore2. This should be of the form uploader_service.DataStorageTransform.
    "destinationBlobMintIndex": 42, # For the first notification of a |diff_encoded| HttpRequestInfo, this is the index of the blob mint that Scotty should use when writing the resulting blob. This field is optional. It's not required ever, even if `original_object_blob_mint_index` is set. In situations like that, we will use the destination blob's mint for the destination blob and regular blob ACL checks for the original object. Note: This field is only for use by Drive API for diff uploads.
    "dynamicDropTarget": "A String", # Specifies the Scotty Drop Target to use for uploads. If present in a media response, Scotty does not upload to a standard drop zone. Instead, Scotty saves the upload directly to the location specified in this drop target. Unlike drop zones, the drop target is the final storage location for an upload. So, the agent does not need to clone the blob at the end of the upload. The agent is responsible for garbage collecting any orphaned blobs that may occur due to aborted uploads. For more information, see the drop target design doc here: http://goto/ScottyDropTarget This field will be preferred to dynamicDropzone. If provided, the identified field in the response must be of the type uploader.agent.DropTarget.
    "dynamicDropzone": "A String", # Specifies the Scotty dropzone to use for uploads.
    "mediaForDiff": { # A reference to data stored on the filesystem, on GFS or in blobstore. # Diff Updates must respond to a START notification with this Media proto to tell Scotty to decode the diff encoded payload and apply the diff against this field. If the request was diff encoded, but this field is not set, Scotty will treat the encoding as identity. This is corresponding to Apiary's DiffUploadResponse.original_object (//depot/google3/gdata/rosy/proto/data.proto?l=413). See go/esf-scotty-diff-upload for more information.
      "algorithm": "A String", # Deprecated, use one of explicit hash type fields instead. Algorithm used for calculating the hash. As of 2011/01/21, "MD5" is the only possible value for this field. New values may be added at any time.
      "bigstoreObjectRef": "A String", # Use object_id instead.
      "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
      "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
        "blobGeneration": "A String", # The blob generation id.
        "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
        "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
        "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
        "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
      },
      "compositeMedia": [ # A composite media composed of one or more media objects, set if reference_type is COMPOSITE_MEDIA. The media length field must be set to the sum of the lengths of all composite media objects. Note: All composite media must have length specified.
        { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
      ],
      "contentType": "A String", # MIME type of the data
      "contentTypeInfo": { # Detailed Content-Type information from Scotty. The Content-Type of the media will typically be filled in by the header or Scotty's best_guess, but this extended information provides the backend with more information so that it can make a better decision if needed. This is only used on media upload requests from Scotty. # Extended content type information provided for Scotty uploads.
        "bestGuess": "A String", # Scotty's best guess of what the content type of the file is.
        "fromBytes": "A String", # The content type of the file derived by looking at specific bytes (i.e. "magic bytes") of the actual file.
        "fromFileName": "A String", # The content type of the file derived from the file extension of the original file name used by the client.
        "fromHeader": "A String", # The content type of the file as specified in the request headers, multipart headers, or RUPIO start request.
        "fromUrlPath": "A String", # The content type of the file derived from the file extension of the URL path. The URL path is assumed to represent a file name (which is typically only true for agents that are providing a REST API).
      },
      "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
      "crc32cHash": 42, # For Scotty Uploads: Scotty-provided hashes for uploads For Scotty Downloads: (WARNING: DO NOT USE WITHOUT PERMISSION FROM THE SCOTTY TEAM.) A Hash provided by the agent to be used to verify the data being downloaded. Currently only supported for inline payloads. Further, only crc32c_hash is currently supported.
      "diffChecksumsResponse": { # Backend response for a Diff get checksums response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_CHECKSUMS_RESPONSE.
        "checksumsLocation": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # Exactly one of these fields must be populated. If checksums_location is filled, the server will return the corresponding contents to the user. If object_location is filled, the server will calculate the checksums based on the content there and return that to the user. For details on the format of the checksums, see http://go/scotty-diff-protocol.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
        "chunkSizeBytes": "A String", # The chunk size of checksums. Must be a multiple of 256KB.
        "objectLocation": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # If set, calculate the checksums based on the contents and return them to the caller.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
        "objectSizeBytes": "A String", # The total size of the server object.
        "objectVersion": "A String", # The object version of the object the checksums are being returned for.
      },
      "diffDownloadResponse": { # Backend response for a Diff download response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_DOWNLOAD_RESPONSE.
        "objectLocation": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The original object location.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
      },
      "diffUploadRequest": { # A Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_REQUEST.
        "checksumsInfo": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the checksums for the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received. For details on the format of the checksums, see http://go/scotty-diff-protocol.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
        "objectInfo": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
        "objectVersion": "A String", # The object version of the object that is the base version the incoming diff script will be applied to. This field will always be filled in.
      },
      "diffUploadResponse": { # Backend response for a Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_RESPONSE.
        "objectVersion": "A String", # The object version of the object at the server. Must be included in the end notification response. The version in the end notification response must correspond to the new version of the object that is now stored at the server, after the upload.
        "originalObject": { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the original file for a diff upload request. Must be filled in if responding to an upload start notification.
          "blobRef": "A String", # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          "blobstore2Info": { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            "blobGeneration": "A String", # The blob generation id.
            "blobId": "A String", # The blob id, e.g., /blobstore/prod/playground/scotty
            "downloadReadHandle": "A String", # Read handle passed from Bigstore -> Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            "readToken": "A String", # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            "uploadMetadataContainer": "A String", # Metadata passed from Blobstore -> Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          "cosmoBinaryReference": "A String", # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn't support JavaScript. This prevents us from including the actual type of this field.
          "crc32cHash": 42, # crc32.c hash for the payload.
          "inline": "A String", # Media data, set if reference_type is INLINE
          "length": "A String", # Size of the data, in bytes
          "md5Hash": "A String", # MD5 hash for the payload.
          "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            "bucketName": "A String", # The name of the bucket to which this object belongs.
            "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            "objectName": "A String", # The name of the object.
          },
          "path": "A String", # Path to the data, set if reference_type is PATH
          "referenceType": "A String", # Describes what the field reference contains.
          "sha1Hash": "A String", # SHA-1 hash for the payload.
        },
      },
      "diffVersionResponse": { # Backend response for a Diff get version response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_VERSION_RESPONSE.
        "objectSizeBytes": "A String", # The total size of the server object.
        "objectVersion": "A String", # The version of the object stored at the server.
      },
      "downloadParameters": { # Parameters specific to media downloads. # Parameters for a media download.
        "allowGzipCompression": True or False, # A boolean to be returned in the response to Scotty. Allows/disallows gzip encoding of the payload content when the server thinks it's advantageous (hence, does not guarantee compression) which allows Scotty to GZip the response to the client.
        "ignoreRange": True or False, # Determining whether or not Apiary should skip the inclusion of any Content-Range header on its response to Scotty.
      },
      "filename": "A String", # Original file name
      "hash": "A String", # Deprecated, use one of explicit hash type fields instead. These two hash related fields will only be populated on Scotty based media uploads and will contain the content of the hash group in the NotificationRequest: http://cs/#google3/blobstore2/api/scotty/service/proto/upload_listener.proto&q=class:Hash Hex encoded hash value of the uploaded media.
      "hashVerified": True or False, # For Scotty uploads only. If a user sends a hash code and the backend has requested that Scotty verify the upload against the client hash, Scotty will perform the check on behalf of the backend and will reject it if the hashes don't match. This is set to true if Scotty performed this verification.
      "inline": "A String", # Media data, set if reference_type is INLINE
      "isPotentialRetry": True or False, # |is_potential_retry| is set false only when Scotty is certain that it has not sent the request before. When a client resumes an upload, this field must be set true in agent calls, because Scotty cannot be certain that it has never sent the request before due to potential failure in the session state persistence.
      "length": "A String", # Size of the data, in bytes
      "md5Hash": "A String", # Scotty-provided MD5 hash for an upload.
      "mediaId": "A String", # Media id to forward to the operation GetMedia. Can be set if reference_type is GET_MEDIA.
      "objectId": { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
        "bucketName": "A String", # The name of the bucket to which this object belongs.
        "generation": "A String", # Generation of the object. Generations are monotonically increasing across writes, allowing them to be be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
        "objectName": "A String", # The name of the object.
      },
      "path": "A String", # Path to the data, set if reference_type is PATH
      "referenceType": "A String", # Describes what the field reference contains.
      "sha1Hash": "A String", # Scotty-provided SHA1 hash for an upload.
      "sha256Hash": "A String", # Scotty-provided SHA256 hash for an upload.
      "timestamp": "A String", # Time at which the media data was last updated, in milliseconds since UNIX epoch
      "token": "A String", # A unique fingerprint/version id for the media data
    },
    "originalObjectBlobMintIndex": 42, # For the first notification of a |diff_encoded| HttpRequestInfo, this is the index of the blob mint that Scotty should use when reading the original blob. This field is optional. It's not required ever, even if `destination_blob_mint_index` is set. In situations like that, we will use the destination blob's mint for the destination blob and regular blob ACL checks for the original object. Note: This field is only for use by Drive API for diff uploads.
    "requestClass": "A String", # Request class to use for all Blobstore operations for this request.
    "scottyAgentUserId": "A String", # Requester ID passed along to be recorded in the Scotty logs
    "scottyCustomerLog": "A String", # Customer-specific data to be recorded in the Scotty logs type is logs_proto_scotty.CustomerLog
    "trafficClassField": "A String", # Specifies the TrafficClass that Scotty should use for any RPCs to fetch the response bytes. Will override the traffic class GTOS of the incoming http request. This is a temporary field to facilitate whitelisting and experimentation by the bigstore agent only. For instance, this does not apply to RTMP reads. WARNING: DO NOT USE WITHOUT PERMISSION FROM THE SCOTTY TEAM.
    "verifyHashFromHeader": True or False, # Tells Scotty to verify hashes on the agent's behalf by parsing out the X-Goog-Hash header.
  },
  "sourceId": { # SourceId is the last segment of the source's resource name. # The source id of the uploaded source.
    "id": "A String", # The id of the source.
  },
}
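
The schema above documents field shapes only. As a minimal sketch of how this resource's uploadFile method might be called with the google-api-python-client: the "discoveryengine" service name, the "v1alpha" version, and every project, notebook, and source identifier below are assumptions for illustration, not values taken from this reference.

  from googleapiclient.discovery import build

  # Build the Discovery Engine client (uses Application Default Credentials).
  service = build("discoveryengine", "v1alpha")

  # Hypothetical resource name: substitute your own project, location, and notebook.
  parent = "projects/my-project/locations/global/notebooks/my-notebook"

  request = (
      service.projects()
      .locations()
      .notebooks()
      .sources()
      .uploadFile(
          parent=parent,
          sourceId="my-source-id",  # hypothetical source id
          body={},                  # populate with the request fields documented above
      )
  )
  response = request.execute()

  # If the response carries a sourceId (see the schema above), read the id of the created Source.
  print(response.get("sourceId", {}).get("id"))
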