API Reference

The main concepts in this API are:

  • Client manages connections to the BigQuery API. Use the client methods to run jobs (such as a QueryJob via query()) and manage resources.
  • Dataset represents a collection of tables.
  • Table represents a single “relation”.

Client

client.Client([project, credentials, _http, …]) Client to bundle configuration needed for API requests.

Job

Job Configuration

job.QueryJobConfig(**kwargs) Configuration options for query jobs.
job.CopyJobConfig(**kwargs) Configuration options for copy jobs.
job.LoadJobConfig(**kwargs) Configuration options for load jobs.
job.ExtractJobConfig(**kwargs) Configuration options for extract jobs.

Job Classes

job.QueryJob(job_id, query, client[, job_config]) Asynchronous job: query tables.
job.CopyJob(job_id, sources, destination, client) Asynchronous job: copy data into a table from other tables.
job.LoadJob(job_id, source_uris, …[, …]) Asynchronous job for loading data into a table.
job.ExtractJob(job_id, source, …[, job_config]) Asynchronous job: extract data from a table into Cloud Storage.
job.UnknownJob(job_id, client) A job whose type cannot be determined.

Dataset

dataset.Dataset(dataset_ref) Datasets are containers for tables.
dataset.DatasetListItem(resource) A read-only dataset resource from a list operation.
dataset.DatasetReference(project, dataset_id) DatasetReferences are pointers to datasets.
dataset.AccessEntry(role, entity_type, entity_id) Represents grant of an access role to an entity.

Table

table.Table(table_ref[, schema]) Tables represent a set of rows whose values correspond to a schema.
table.TableListItem(resource) A read-only table resource from a list operation.
table.TableReference(dataset_ref, table_id) TableReferences are pointers to tables.
table.Row(values, field_to_index) A BigQuery row.
table.RowIterator(client, api_request, path, …) A class for iterating through HTTP/JSON API row list responses.
table.EncryptionConfiguration([kms_key_name]) Custom encryption configuration (e.g., Cloud KMS keys).
table.TimePartitioning([type_, field, …]) Configures time-based partitioning for a table.
table.TimePartitioningType Specifies the type of time partitioning to perform.

Model

model.Model(model_ref) Model represents a machine learning model resource.
model.ModelReference() ModelReferences are pointers to models.

Routine

routine.Routine(routine_ref, **kwargs) Resource representing a user-defined routine.
routine.RoutineArgument(**kwargs) Input/output argument of a function or a stored procedure.
routine.RoutineReference() A pointer to a routine.

Schema

schema.SchemaField(name, field_type[, mode, …]) Describe a single field within a table schema.

Query

query.ArrayQueryParameter(name, array_type, …) Named / positional query parameters for array values.
query.ScalarQueryParameter(name, type_, value) Named / positional query parameters for scalar values.
query.StructQueryParameter(name, *sub_params) Named / positional query parameters for struct values.
query.UDFResource(udf_type, value) Describe a single user-defined function (UDF) resource.

Retries

retry.DEFAULT_RETRY The default retry object.

External Configuration

external_config.ExternalSourceFormat The format for external data files.
external_config.ExternalConfig(source_format) Description of an external data source.
external_config.BigtableOptions() Options that describe how to treat Bigtable tables as BigQuery tables.
external_config.BigtableColumnFamily() Options for a Bigtable column family.
external_config.BigtableColumn() Options for a Bigtable column.
external_config.CSVOptions() Options that describe how to treat CSV files as BigQuery tables.
external_config.GoogleSheetsOptions() Options that describe how to treat Google Sheets as BigQuery tables.

Additional Types

Protocol buffer classes for working with the Models API.