Python Client for BigQuery Storage API (Beta)#

BigQuery Storage API:

Quick Start#

In order to use this library, you first need to go through the following steps:

  1. Select or create a Cloud Platform project.

  2. Enable billing for your project.

  3. Enable the BigQuery Storage API.

  4. Setup Authentication.
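Steps 3 and 4 can also be done from the command line. A sketch using the gcloud CLI, assuming it is installed and a project is already selected:

```shell
# Enable the BigQuery Storage API for the current project.
gcloud services enable bigquerystorage.googleapis.com

# Create Application Default Credentials for local development.
gcloud auth application-default login
```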


Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Supported Python Versions#

Python >= 3.5

Deprecated Python Versions#

Python == 2.7. Python 2.7 support will be removed on January 1, 2020.


Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-bigquery-storage

Windows

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\pip.exe install google-cloud-bigquery-storage

Optional Dependencies#

Several features of google-cloud-bigquery-storage require additional dependencies.

  • Parse Avro blocks in a read_rows() stream using fastavro.

    pip install google-cloud-bigquery-storage[fastavro]

  • Write rows to a pandas dataframe.

    pip install google-cloud-bigquery-storage[pandas,fastavro]
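With the pandas extra installed, parsed rows can be loaded into a dataframe. The snippet below is a minimal sketch of the shape of that result; the hand-built list of dictionaries stands in for the rows an actual `read_rows()` stream would yield:

```python
import pandas

# Stand-in for the dictionaries that reader.rows() would yield;
# a real stream produces one dict per row, keyed by column name.
rows = [
    {"name": "Olivia", "state": "WA"},
    {"name": "Liam", "state": "WA"},
]

# Each row dictionary becomes one dataframe row, with one
# column per key.
df = pandas.DataFrame(rows)
print(df)
```

The stream object returned by `read_rows()` also exposes a `to_dataframe()` helper that does this conversion for you directly from the Avro blocks.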

Next Steps#

Example Usage#

from google.cloud import bigquery_storage_v1beta1

# TODO(developer): Set the project_id variable.
# project_id = 'your-project-id'
# The read session is created in this project. This project can be
# different from that which contains the table.

client = bigquery_storage_v1beta1.BigQueryStorageClient()

# This example reads baby name data from the public datasets.
table_ref = bigquery_storage_v1beta1.types.TableReference()
table_ref.project_id = "bigquery-public-data"
table_ref.dataset_id = "usa_names"
table_ref.table_id = "usa_1910_current"

# We limit the output columns to a subset of those allowed in the table,
# and set a simple filter to only report names from the state of
# Washington (WA).
read_options = bigquery_storage_v1beta1.types.TableReadOptions()
read_options.selected_fields.append("name")
read_options.selected_fields.append("state")
read_options.row_restriction = 'state = "WA"'

# Set a snapshot time if it's been specified.
# TODO(developer): Set the snapshot_millis variable (0 reads current data).
modifiers = None
if snapshot_millis > 0:
    modifiers = bigquery_storage_v1beta1.types.TableModifiers()
    modifiers.snapshot_time.FromMilliseconds(snapshot_millis)

parent = "projects/{}".format(project_id)
session = client.create_read_session(
    table_ref,
    parent,
    table_modifiers=modifiers,
    read_options=read_options,
    # This API can also deliver data serialized in Apache Arrow format.
    # This example leverages Apache Avro.
    format_=bigquery_storage_v1beta1.enums.DataFormat.AVRO,
    # We use a LIQUID strategy in this example because we only read from a
    # single stream. Consider BALANCED if you're consuming multiple streams
    # concurrently and want more consistent stream sizes.
    sharding_strategy=(bigquery_storage_v1beta1.enums.ShardingStrategy.LIQUID),
)  # API request.

# We'll use only a single stream for reading data from the table. Because
# of dynamic sharding, this will yield all the rows in the table. However,
# if you wanted to fan out multiple readers you could do so by having a
# reader process each individual stream.
reader = client.read_rows(
    bigquery_storage_v1beta1.types.StreamPosition(stream=session.streams[0])
)

# The read stream contains blocks of Avro-encoded bytes. The rows() method
# uses the fastavro library to parse these blocks as an iterable of Python
# dictionaries. Install fastavro with the following command:
# pip install google-cloud-bigquery-storage[fastavro]
rows = reader.rows(session)

# Do any local processing by iterating over the rows. The
# google-cloud-bigquery-storage client reconnects to the API after any
# transient network errors or timeouts.
names = set()
states = set()

for row in rows:
    names.add(row["name"])
    states.add(row["state"])

print("Got {} unique names in states: {}".format(len(names), states))