Fitness API . users . dataSources . datasets

Instance Methods

close()

Close httplib2 connections.

delete(userId, dataSourceId, datasetId, x__xgafv=None)

Performs an inclusive delete of all data points whose start and end times have any overlap with the time range specified by the dataset ID. For most data types, the entire data point will be deleted. For data types where the time span represents a consistent value (such as com.google.activity.segment), and a data point straddles either end point of the dataset, only the overlapping portion of the data point will be deleted.

get(userId, dataSourceId, datasetId, limit=None, pageToken=None, x__xgafv=None)

Returns a dataset containing all data points whose start and end times overlap with the specified range of the dataset minimum start time and maximum end time. Specifically, it returns any data point whose start time is less than or equal to the dataset end time and whose end time is greater than or equal to the dataset start time.

get_next()

Retrieves the next page of results.

patch(userId, dataSourceId, datasetId, body=None, x__xgafv=None)

Adds data points to a dataset. The dataset need not be previously created. All points within the given dataset will be returned with subsequent calls to retrieve this dataset. Data points can belong to more than one dataset. This method does not use patch semantics: the data points provided are merely inserted, with no existing data replaced.

patch_next()

Retrieves the next page of results.
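
These methods are called through a service object built with the google-api-python-client library. The sketch below shows the setup assumed by the examples in the Method Details section; obtaining an authorized credentials object (creds) with an appropriate Fitness scope, such as https://www.googleapis.com/auth/fitness.activity.write, is treated here as a given.

  from googleapiclient.discovery import build

  # creds is assumed to be an authorized google.auth credentials object
  # carrying a Fitness API scope.
  service = build("fitness", "v1", credentials=creds)

  # Resource collection exposing the methods documented on this page.
  datasets = service.users().dataSources().datasets()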

Method Details

close()
Close httplib2 connections.
delete(userId, dataSourceId, datasetId, x__xgafv=None)
Performs an inclusive delete of all data points whose start and end times have any overlap with the time range specified by the dataset ID. For most data types, the entire data point will be deleted. For data types where the time span represents a consistent value (such as com.google.activity.segment), and a data point straddles either end point of the dataset, only the overlapping portion of the data point will be deleted.

Args:
  userId: string, Delete a dataset for the person identified. Use me to indicate the authenticated user. Only me is supported at this time. (required)
  dataSourceId: string, The data stream ID of the data source that created the dataset. (required)
  datasetId: string, Dataset identifier that is a composite of the minimum data point start time and maximum data point end time represented as nanoseconds from the epoch. The ID is formatted like: "startTime-endTime", where startTime and endTime are 64-bit integers. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
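
A minimal sketch of calling delete() with the service object from the setup sketch above; the data stream ID is a placeholder and the time range is arbitrary.

  # Dataset IDs are "startTime-endTime" in nanoseconds since the epoch.
  start_ns = 1_700_000_000 * 1_000_000_000
  end_ns = start_ns + 24 * 60 * 60 * 1_000_000_000  # one day later
  dataset_id = "%d-%d" % (start_ns, end_ns)

  # Placeholder stream ID; use one returned by service.users().dataSources().list().
  data_source_id = "raw:com.google.step_count.delta:1234567890:example"

  datasets.delete(
      userId="me",
      dataSourceId=data_source_id,
      datasetId=dataset_id,
  ).execute()
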
get(userId, dataSourceId, datasetId, limit=None, pageToken=None, x__xgafv=None)
Returns a dataset containing all data points whose start and end times overlap with the specified range of the dataset minimum start time and maximum end time. Specifically, it returns any data point whose start time is less than or equal to the dataset end time and whose end time is greater than or equal to the dataset start time.

Args:
  userId: string, Retrieve a dataset for the person identified. Use me to indicate the authenticated user. Only me is supported at this time. (required)
  dataSourceId: string, The data stream ID of the data source that created the dataset. (required)
  datasetId: string, Dataset identifier that is a composite of the minimum data point start time and maximum data point end time represented as nanoseconds from the epoch. The ID is formatted like: "startTime-endTime", where startTime and endTime are 64-bit integers. (required)
  limit: integer, If specified, no more than this many data points will be included in the dataset. If there are more data points in the dataset, nextPageToken will be set in the dataset response. The limit is applied from the end of the time range. That is, if pageToken is absent, the most recent data points will be returned, up to the specified limit.
  pageToken: string, The continuation token, which is used to page through large datasets. To get the next page of a dataset, set this parameter to the value of nextPageToken from the previous response. Each subsequent call will yield a partial dataset with data point end timestamps that are strictly smaller than those in the previous partial response.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A dataset represents a projection container for data points. It does not carry any information of its own. Datasets represent a set of data points from a particular data source. A data point can be found in more than one dataset.
  "dataSourceId": "A String", # The data stream ID of the data source that created the points in this dataset.
  "maxEndTimeNs": "A String", # The largest end time of all data points in this possibly partial representation of the dataset. Time is in nanoseconds from epoch. This should also match the second part of the dataset identifier.
  "minStartTimeNs": "A String", # The smallest start time of all data points in this possibly partial representation of the dataset. Time is in nanoseconds from epoch. This should also match the first part of the dataset identifier.
  "nextPageToken": "A String", # This token will be set when a dataset is received in response to a GET request and the dataset is too large to be included in a single response. Provide this value in a subsequent GET request to return the next page of data points within this dataset.
  "point": [ # A partial list of data points contained in the dataset, ordered by endTimeNanos. This list is considered complete when retrieving a small dataset and partial when patching a dataset or retrieving a dataset that is too large to include in a single response.
    { # Represents a single data point, generated by a particular data source. A data point holds a value for each field, an end timestamp and an optional start time. The exact semantics of each of these attributes are specified in the documentation for the particular data type. A data point can represent an instantaneous measurement, reading or input observation, as well as averages or aggregates over a time interval. Check the data type documentation to determine which is the case for a particular data type. Data points always contain one value for each field of the data type.
      "computationTimeMillis": "A String", # DO NOT USE THIS FIELD. It is ignored, and not stored.
      "dataTypeName": "A String", # The data type defining the format of the values in this data point.
      "endTimeNanos": "A String", # The end time of the interval represented by this data point, in nanoseconds since epoch.
      "modifiedTimeMillis": "A String", # Indicates the last time this data point was modified. Useful only in contexts where we are listing the data changes, rather than representing the current state of the data.
      "originDataSourceId": "A String", # If the data point is contained in a dataset for a derived data source, this field will be populated with the data source stream ID that created the data point originally. WARNING: do not rely on this field for anything other than debugging. The value of this field, if it is set at all, is an implementation detail and is not guaranteed to remain consistent.
      "rawTimestampNanos": "A String", # The raw timestamp from the original SensorEvent.
      "startTimeNanos": "A String", # The start time of the interval represented by this data point, in nanoseconds since epoch.
      "value": [ # Values of each data type field for the data point. It is expected that each value corresponding to a data type field will occur in the same order that the field is listed with in the data type specified in a data source. Only one of integer and floating point fields will be populated, depending on the format enum value within data source's type field.
        { # Holder object for the value of a single field in a data point. A field value has a particular format and is only ever set to one of an integer or a floating point value.
          "fpVal": 3.14, # Floating point value. When this is set, other values must not be set.
          "intVal": 42, # Integer value. When this is set, other values must not be set.
          "mapVal": [ # Map value. The valid key space and units for the corresponding value of each entry should be documented as part of the data type definition. Keys should be kept small whenever possible. Data streams with large keys and high data frequency may be down sampled.
            {
              "key": "A String",
              "value": { # Holder object for the value of an entry in a map field of a data point. A map value supports a subset of the formats that the regular Value supports.
                "fpVal": 3.14, # Floating point value.
              },
            },
          ],
          "stringVal": "A String", # String value. When this is set, other values must not be set. Strings should be kept small whenever possible. Data streams with large string values and high data frequency may be down sampled.
        },
      ],
    },
  ],
}
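
A sketch of retrieving a dataset with get(), again assuming the service setup above; the derived step-count stream ID is only an example and should be replaced with a data source from your own account.

  dataset_id = "1700000000000000000-1700086400000000000"  # startTime-endTime in nanoseconds

  # Example stream ID; list your data sources to find real ones.
  data_source_id = (
      "derived:com.google.step_count.delta:"
      "com.google.android.gms:estimated_steps"
  )

  dataset = datasets.get(
      userId="me",
      dataSourceId=data_source_id,
      datasetId=dataset_id,
  ).execute()

  for point in dataset.get("point", []):
      print(point["dataTypeName"], point["startTimeNanos"], point["endTimeNanos"])
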
get_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
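
For datasets too large to fit in one response, the pattern below pages through the points; it assumes the datasets collection, data_source_id, and dataset_id from the sketches above, and process() stands in for your own handling code.

  request = datasets.get(
      userId="me",
      dataSourceId=data_source_id,
      datasetId=dataset_id,
      limit=1000,
  )
  while request is not None:
      response = request.execute()
      for point in response.get("point", []):
          process(point)  # hypothetical per-point handler
      request = datasets.get_next(request, response)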
        
patch(userId, dataSourceId, datasetId, body=None, x__xgafv=None)
Adds data points to a dataset. The dataset need not be previously created. All points within the given dataset will be returned with subsequent calls to retrieve this dataset. Data points can belong to more than one dataset. This method does not use patch semantics: the data points provided are merely inserted, with no existing data replaced.

Args:
  userId: string, Patch a dataset for the person identified. Use me to indicate the authenticated user. Only me is supported at this time. (required)
  dataSourceId: string, The data stream ID of the data source that created the dataset. (required)
  datasetId: string, This field is not used, and can be safely omitted. (required)
  body: object, The request body.
    The object takes the form of:

{ # A dataset represents a projection container for data points. It does not carry any information of its own. Datasets represent a set of data points from a particular data source. A data point can be found in more than one dataset.
  "dataSourceId": "A String", # The data stream ID of the data source that created the points in this dataset.
  "maxEndTimeNs": "A String", # The largest end time of all data points in this possibly partial representation of the dataset. Time is in nanoseconds from epoch. This should also match the second part of the dataset identifier.
  "minStartTimeNs": "A String", # The smallest start time of all data points in this possibly partial representation of the dataset. Time is in nanoseconds from epoch. This should also match the first part of the dataset identifier.
  "nextPageToken": "A String", # This token will be set when a dataset is received in response to a GET request and the dataset is too large to be included in a single response. Provide this value in a subsequent GET request to return the next page of data points within this dataset.
  "point": [ # A partial list of data points contained in the dataset, ordered by endTimeNanos. This list is considered complete when retrieving a small dataset and partial when patching a dataset or retrieving a dataset that is too large to include in a single response.
    { # Represents a single data point, generated by a particular data source. A data point holds a value for each field, an end timestamp and an optional start time. The exact semantics of each of these attributes are specified in the documentation for the particular data type. A data point can represent an instantaneous measurement, reading or input observation, as well as averages or aggregates over a time interval. Check the data type documentation to determine which is the case for a particular data type. Data points always contain one value for each field of the data type.
      "computationTimeMillis": "A String", # DO NOT USE THIS FIELD. It is ignored, and not stored.
      "dataTypeName": "A String", # The data type defining the format of the values in this data point.
      "endTimeNanos": "A String", # The end time of the interval represented by this data point, in nanoseconds since epoch.
      "modifiedTimeMillis": "A String", # Indicates the last time this data point was modified. Useful only in contexts where we are listing the data changes, rather than representing the current state of the data.
      "originDataSourceId": "A String", # If the data point is contained in a dataset for a derived data source, this field will be populated with the data source stream ID that created the data point originally. WARNING: do not rely on this field for anything other than debugging. The value of this field, if it is set at all, is an implementation detail and is not guaranteed to remain consistent.
      "rawTimestampNanos": "A String", # The raw timestamp from the original SensorEvent.
      "startTimeNanos": "A String", # The start time of the interval represented by this data point, in nanoseconds since epoch.
      "value": [ # Values of each data type field for the data point. It is expected that each value corresponding to a data type field will occur in the same order that the field is listed with in the data type specified in a data source. Only one of integer and floating point fields will be populated, depending on the format enum value within data source's type field.
        { # Holder object for the value of a single field in a data point. A field value has a particular format and is only ever set to one of an integer or a floating point value.
          "fpVal": 3.14, # Floating point value. When this is set, other values must not be set.
          "intVal": 42, # Integer value. When this is set, other values must not be set.
          "mapVal": [ # Map value. The valid key space and units for the corresponding value of each entry should be documented as part of the data type definition. Keys should be kept small whenever possible. Data streams with large keys and high data frequency may be down sampled.
            {
              "key": "A String",
              "value": { # Holder object for the value of an entry in a map field of a data point. A map value supports a subset of the formats that the regular Value supports.
                "fpVal": 3.14, # Floating point value.
              },
            },
          ],
          "stringVal": "A String", # String value. When this is set, other values must not be set. Strings should be kept small whenever possible. Data streams with large string values and high data frequency may be down sampled.
        },
      ],
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A dataset represents a projection container for data points. It does not carry any information of its own. Datasets represent a set of data points from a particular data source. A data point can be found in more than one dataset.
  "dataSourceId": "A String", # The data stream ID of the data source that created the points in this dataset.
  "maxEndTimeNs": "A String", # The largest end time of all data points in this possibly partial representation of the dataset. Time is in nanoseconds from epoch. This should also match the second part of the dataset identifier.
  "minStartTimeNs": "A String", # The smallest start time of all data points in this possibly partial representation of the dataset. Time is in nanoseconds from epoch. This should also match the first part of the dataset identifier.
  "nextPageToken": "A String", # This token will be set when a dataset is received in response to a GET request and the dataset is too large to be included in a single response. Provide this value in a subsequent GET request to return the next page of data points within this dataset.
  "point": [ # A partial list of data points contained in the dataset, ordered by endTimeNanos. This list is considered complete when retrieving a small dataset and partial when patching a dataset or retrieving a dataset that is too large to include in a single response.
    { # Represents a single data point, generated by a particular data source. A data point holds a value for each field, an end timestamp and an optional start time. The exact semantics of each of these attributes are specified in the documentation for the particular data type. A data point can represent an instantaneous measurement, reading or input observation, as well as averages or aggregates over a time interval. Check the data type documentation to determine which is the case for a particular data type. Data points always contain one value for each field of the data type.
      "computationTimeMillis": "A String", # DO NOT USE THIS FIELD. It is ignored, and not stored.
      "dataTypeName": "A String", # The data type defining the format of the values in this data point.
      "endTimeNanos": "A String", # The end time of the interval represented by this data point, in nanoseconds since epoch.
      "modifiedTimeMillis": "A String", # Indicates the last time this data point was modified. Useful only in contexts where we are listing the data changes, rather than representing the current state of the data.
      "originDataSourceId": "A String", # If the data point is contained in a dataset for a derived data source, this field will be populated with the data source stream ID that created the data point originally. WARNING: do not rely on this field for anything other than debugging. The value of this field, if it is set at all, is an implementation detail and is not guaranteed to remain consistent.
      "rawTimestampNanos": "A String", # The raw timestamp from the original SensorEvent.
      "startTimeNanos": "A String", # The start time of the interval represented by this data point, in nanoseconds since epoch.
      "value": [ # Values of each data type field for the data point. It is expected that each value corresponding to a data type field will occur in the same order that the field is listed with in the data type specified in a data source. Only one of integer and floating point fields will be populated, depending on the format enum value within data source's type field.
        { # Holder object for the value of a single field in a data point. A field value has a particular format and is only ever set to one of an integer or a floating point value.
          "fpVal": 3.14, # Floating point value. When this is set, other values must not be set.
          "intVal": 42, # Integer value. When this is set, other values must not be set.
          "mapVal": [ # Map value. The valid key space and units for the corresponding value of each entry should be documented as part of the data type definition. Keys should be kept small whenever possible. Data streams with large keys and high data frequency may be down sampled.
            {
              "key": "A String",
              "value": { # Holder object for the value of an entry in a map field of a data point. A map value supports a subset of the formats that the regular Value supports.
                "fpVal": 3.14, # Floating point value.
              },
            },
          ],
          "stringVal": "A String", # String value. When this is set, other values must not be set. Strings should be kept small whenever possible. Data streams with large string values and high data frequency may be down sampled.
        },
      ],
    },
  ],
}
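
A minimal sketch of inserting a single step-count point with patch(), reusing the service setup above; the raw data stream ID is a placeholder for a data source you have already created, and the step value is arbitrary.

  import time

  end_ns = int(time.time() * 1e9)
  start_ns = end_ns - 60 * 1_000_000_000  # a one-minute interval

  data_source_id = "raw:com.google.step_count.delta:1234567890:example"  # placeholder

  body = {
      "dataSourceId": data_source_id,
      "minStartTimeNs": str(start_ns),
      "maxEndTimeNs": str(end_ns),
      "point": [
          {
              "dataTypeName": "com.google.step_count.delta",
              "startTimeNanos": str(start_ns),
              "endTimeNanos": str(end_ns),
              "value": [{"intVal": 120}],
          }
      ],
  }

  datasets.patch(
      userId="me",
      dataSourceId=data_source_id,
      datasetId="%d-%d" % (start_ns, end_ns),
      body=body,
  ).execute()
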
patch_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.