Python Client for BigQuery Storage API (Beta)

The BigQuery Storage API provides fast access to BigQuery-managed table data over an RPC-based protocol.

Quick Start

In order to use this library, you first need to go through the following steps:

  1. Select or create a Cloud Platform project.
  2. Enable billing for your project.
  3. Enable the BigQuery Storage API.
  4. Set up authentication (a minimal sketch follows this list).
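
For step 4, one common approach is to authenticate with a service account key. Once the library is installed (see Installation below), the snippet below is a minimal sketch of that approach; the key path is a placeholder, and it relies on the from_service_account_file constructor exposed by generated Google Cloud clients.

from google.cloud import bigquery_storage_v1beta1

# Path to a downloaded service-account key file (placeholder).
key_path = "/path/to/service-account-key.json"

# Construct a client that authenticates with the service account.
client = bigquery_storage_v1beta1.BigQueryStorageClient.from_service_account_file(
    key_path
)

Alternatively, set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the key file's path and construct the client with no arguments.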

Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Supported Python Versions

Python >= 3.5

Deprecated Python Versions

Python == 2.7. Python 2.7 support will be removed on January 1, 2020.

Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-bigquery-storage

Windows

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-cloud-bigquery-storage

Optional Dependencies

Several features of google-cloud-bigquery-storage require additional dependencies.

  • Parse Avro blocks in a read_rows() stream using fastavro.

    pip install google-cloud-bigquery-storage[fastavro]

  • Write rows to a pandas DataFrame (see the sketch after this list).

    pip install google-cloud-bigquery-storage[pandas,fastavro]
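
With both extras installed, rows from a read stream can be collected into a pandas DataFrame. The snippet below is a minimal sketch that continues the Example Usage further down, reusing the reader and session objects created there; the to_dataframe() call requires the extras above.

# `reader` comes from client.read_rows() and `session` from
# client.create_read_session() in the Example Usage below.
dataframe = reader.to_dataframe(session)
print(dataframe.head())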

Next Steps

Example Usage

from google.cloud import bigquery_storage_v1beta1

# TODO(developer): Set the project_id variable.
# project_id = 'your-project-id'
#
# The read session is created in this project. This project can be
# different from that which contains the table.

client = bigquery_storage_v1beta1.BigQueryStorageClient()

# This example reads baby name data from the public datasets.
table_ref = bigquery_storage_v1beta1.types.TableReference()
table_ref.project_id = "bigquery-public-data"
table_ref.dataset_id = "usa_names"
table_ref.table_id = "usa_1910_current"

# We limit the output columns to a subset of those allowed in the table,
# and set a simple filter to only report names from the state of
# Washington (WA).
read_options = bigquery_storage_v1beta1.types.TableReadOptions()
read_options.selected_fields.append("name")
read_options.selected_fields.append("number")
read_options.selected_fields.append("state")
read_options.row_restriction = 'state = "WA"'

# Set a snapshot time if it's been specified. snapshot_millis is the point in
# time (milliseconds since the epoch) at which to read the table; leave it at
# 0 to read the table's current data.
snapshot_millis = 0
modifiers = None
if snapshot_millis > 0:
    modifiers = bigquery_storage_v1beta1.types.TableModifiers()
    modifiers.snapshot_time.FromMilliseconds(snapshot_millis)

parent = "projects/{}".format(project_id)
session = client.create_read_session(
    table_ref, parent, table_modifiers=modifiers, read_options=read_options
)  # API request.

# We'll use only a single stream for reading data from the table. Because
# of dynamic sharding, this will yield all the rows in the table. However,
# if you wanted to fan out multiple readers, you could do so by having a
# reader process each individual stream (a multi-stream sketch follows this
# example).
reader = client.read_rows(
    bigquery_storage_v1beta1.types.StreamPosition(stream=session.streams[0])
)

# The read stream contains blocks of Avro-encoded bytes. The rows() method
# uses the fastavro library to parse these blocks as an iterable of Python
# dictionaries. Install fastavro with the following command:
#
# pip install google-cloud-bigquery-storage[fastavro]
rows = reader.rows(session)

# Do any local processing by iterating over the rows. The
# google-cloud-bigquery-storage client reconnects to the API after any
# transient network errors or timeouts.
names = set()
states = set()

for row in rows:
    names.add(row["name"])
    states.add(row["state"])

print("Got {} unique names in states: {}".format(len(names), states))