Usage metadata about response(s).

cached_content_token_count
Optional. Number of tokens in the cached part of the prompt (the cached content).

cache_tokens_details
Optional. List of modalities that were processed in the cache input.

prompt_token_count
Optional. Number of tokens in the prompt. When cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.

prompt_tokens_details
Optional. List of modalities that were processed in the request input.

response_token_count
Optional. Total number of tokens across all the generated response candidates.

response_tokens_details
Optional. List of modalities that were returned in the response.

thoughts_token_count
Optional. Number of tokens of thoughts for thinking models.

tool_use_prompt_token_count
Optional. Number of tokens present in the tool-use prompt(s).

tool_use_prompt_tokens_details
Optional. List of modalities that were processed in the tool-use prompt.

total_token_count
Optional. Total token count for the prompt, response candidates, and tool-use prompts (if present).

traffic_type
Optional. Traffic type. Indicates whether a request consumes Pay-As-You-Go or Provisioned Throughput quota.
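Because every field above is optional, a consumer should treat any of them as possibly unset. The following is a minimal sketch, not part of any SDK: the helper names and the stand-in object are hypothetical, and the only assumption is an object exposing the attribute names documented above. It shows how the counts relate, for example that the uncached portion of the prompt is prompt_token_count minus cached_content_token_count, since prompt_token_count already includes the cached content.

```python
# Minimal sketch, not part of any SDK: summarize_usage and modality_breakdown
# are hypothetical helpers. They only assume an object exposing the optional
# attributes documented above, any of which may be None.

def summarize_usage(usage) -> dict:
    """Collapse a usage-metadata object into a small report dict."""

    def count(name: str) -> int:
        # Unset or missing optional fields are treated as 0 for arithmetic.
        return getattr(usage, name, None) or 0

    prompt = count("prompt_token_count")
    cached = count("cached_content_token_count")
    return {
        # prompt_token_count already includes the cached content, so the
        # freshly processed part of the prompt is the difference.
        "uncached_prompt_tokens": prompt - cached,
        "cached_prompt_tokens": cached,
        "response_tokens": count("response_token_count"),
        "thought_tokens": count("thoughts_token_count"),
        "tool_use_prompt_tokens": count("tool_use_prompt_token_count"),
        "total_tokens": count("total_token_count"),
        "traffic_type": getattr(usage, "traffic_type", None),
    }


def modality_breakdown(details) -> dict:
    """Turn a *_tokens_details list into a {modality: token_count} dict.

    Each entry is assumed to expose `modality` and `token_count` attributes,
    both of which may be None.
    """
    return {str(item.modality): (item.token_count or 0) for item in (details or [])}


if __name__ == "__main__":
    class FakeUsage:
        # Stand-in object with made-up values, for illustration only.
        prompt_token_count = 120
        cached_content_token_count = 80
        response_token_count = 45
        total_token_count = 165

    print(summarize_usage(FakeUsage()))
    # {'uncached_prompt_tokens': 40, 'cached_prompt_tokens': 80,
    #  'response_tokens': 45, 'thought_tokens': 0,
    #  'tool_use_prompt_tokens': 0, 'total_tokens': 165, 'traffic_type': None}
```

The same breakdown helper applies to prompt_tokens_details, cache_tokens_details, response_tokens_details, and tool_use_prompt_tokens_details, since each is a list of per-modality token counts.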