
Date Partitioning and Table Clustering in Google BigQuery (and Looker PDTs)

Google BigQuery is a data warehousing-oriented “table-as-a-service” product from Google Cloud Platform that, like Oracle Exadata, optimizes for full-table scans rather than selective row access, and stores data organized into columns rather than rows to align better with the filtering and aggregation patterns typical of data warehousing workloads.

BigQuery charges by the amount of data stored in your tables and the amount of data read when executing SQL queries against those tables, so BI tools such as Looker that work efficiently with BigQuery request only the columns they need when querying BigQuery datasets, rather than running a SELECT * and throwing away what’s not needed.

To illustrate this, let’s run a query that requests all columns (“SELECT * FROM …”) from a BigQuery table; as you can see from the screenshot below, it reads through all 1.47 GB of table data to return that full set of columns to the user.

[Screenshot: SELECT * query scanning the full 1.47 GB of table data]

If the user’s query only really needed two of those columns, requesting just those brings the amount of data read down to just under 10 MB, as most of that table’s data sits in columns that aren’t needed for the query.

[Screenshot: two-column query scanning just under 10 MB]
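To make the difference concrete, here’s a sketch of the two queries; the table and column names are hypothetical stand-ins for the table in the screenshots:

```sql
-- Scans every column: the full 1.47 GB of table storage is read and billed
SELECT *
FROM `mydataset.pageviews`;

-- Scans only the two columns the query actually needs, so only the
-- storage behind those columns (a few MB here) is read and billed
SELECT page_url, view_ts
FROM `mydataset.pageviews`;
```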

BigQuery has historically supported table partitioning based on the date you loaded data into the table, which sounds great until you realise that it’s the transaction date, not the ingest date, that most of your users’ queries filter against.
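With ingestion-date partitioning you prune partitions by filtering on the _PARTITIONTIME pseudo-column; a minimal sketch, again with hypothetical names, shows why this falls short when users filter on a transaction date instead:

```sql
-- _PARTITIONTIME records when each row was *loaded*, not the
-- transaction date the user actually cares about
SELECT page_url, view_ts
FROM `mydataset.pageviews`
WHERE _PARTITIONTIME >= TIMESTAMP('2018-06-01')
  AND _PARTITIONTIME <  TIMESTAMP('2018-07-01');
```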

You could also use table decorators in Legacy SQL to point to the particular day partition your data was stored within, but this only went back a maximum of seven days and required your query tool to support this non-standard SQL feature. Earlier this year, though, Google introduced a more flexible form of date partitioning as a beta-release feature that lets you choose the date column your table is partitioned by, and more recently introduced a feature called table clustering that stores data within a table sorted by the columns you most commonly filter on when running queries against it.

To show how date partitioning and table clustering work, I’ll start by running a query that returns just a month’s data from the five years held in my table; as you can see in the screenshot below, BigQuery performs a full table scan and reads through all 1.37 GB of data in the table to return just the month my query requested.

[Screenshot: full table scan reading 1.37 GB to return one month of data]

Standard SQL now supports DDL commands such as CREATE TABLE and CREATE TABLE … AS SELECT, along with a PARTITION BY clause that lets you specify a timestamp or date column to partition the table by. I’ll use these new features to create a copy of that same table, this time partitioned by the timestamp column I’m filtering on in my query predicate …
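The statement I ran looked roughly like this sketch, with placeholder table and column names:

```sql
-- Copy the table, partitioned by day on the timestamp column
-- that my query predicates filter on
CREATE TABLE `mydataset.pageviews_partitioned`
PARTITION BY DATE(view_ts)
AS
SELECT *
FROM `mydataset.pageviews`;
```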

[Screenshot: CREATE TABLE … PARTITION BY statement failing with a partition-limit error]

… and the DDL statement fails. What’s happened there then?

It turns out that BigQuery tables are currently limited to 2,500 partitions, with any single load operation limited to 2,000 (hence the error). With partitioning limited to date and timestamp columns, and partitions a day in length, any one table can only hold around five years’ worth of data; beyond that you’ll need to create multiple date-partitioned tables and UNION them together through a SQL view.
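That workaround might look something like this sketch, assuming two hypothetical tables each holding five years of history:

```sql
-- Split history across multiple date-partitioned tables, then
-- present them as one logical table through a view
CREATE VIEW `mydataset.pageviews_all` AS
SELECT * FROM `mydataset.pageviews_2009_2013`
UNION ALL
SELECT * FROM `mydataset.pageviews_2014_2018`;
```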

For now, though, I load my table with just five years of data and then re-run the query that requests a single month from that overall five years; now BigQuery reads and processes only 57 MB of data, and it would be a fraction of that if I requested just the two columns I needed rather than all columns in the table.

[Screenshot: same query against the date-partitioned table now reading just 57 MB]
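The shape of that query, with the same hypothetical names as before: because the predicate is on the partitioning column, BigQuery can prune to just the partitions in the date range rather than scanning the whole table.

```sql
-- The filter on the partitioning column lets BigQuery skip every
-- partition that falls outside the requested month
SELECT *
FROM `mydataset.pageviews_partitioned`
WHERE view_ts >= TIMESTAMP('2018-06-01')
  AND view_ts <  TIMESTAMP('2018-07-01');
```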

But what about queries that filter on the other columns in this table? We can’t set up table partitioning on STRING, INTEGER or any other column datatype, so my original query, re-run against the date-partitioned table, reads just as much data as it did before.

[Screenshot: query filtering on non-partition columns still scanning the full table]

What we can do is re-create the table with its data pre-sorted by those two particular columns, using another new feature called table clustering, so that queries filtering on those columns find the blocks of data they’re looking for faster and can skip entirely the blocks that don’t contain relevant data.

If, like me, you’re more familiar with databases such as Oracle, note that Oracle’s “table clustering” is about storing data from tables sharing a common cluster key together in the same data blocks, so that queries against that group of tables filtering on that key return data faster.

Table clustering in BigQuery is more analogous to loading regular Oracle tables from a pre-sorted file, and it comes with the same benefits and limitations; in BigQuery’s case the pre-sorting and table configuration are taken care of for you, but the same limitations apply around how you filter the table and what happens when you load more data afterwards.

Let’s set up a clustered table now that stores its data ordered by the two columns used in the query I ran a few moments ago.
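Roughly, and again with placeholder names, the DDL looks like this; note that at the time of writing a clustered table also has to be date-partitioned:

```sql
-- CLUSTER BY sorts the data within each daily partition by these
-- two columns, so filters on them can skip unrelated blocks
CREATE TABLE `mydataset.pageviews_clustered`
PARTITION BY DATE(view_ts)
CLUSTER BY page_url, referrer_url
AS
SELECT *
FROM `mydataset.pageviews`;
```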

[Screenshot: CREATE TABLE statement defining the partitioned, clustered table]

Now when I run a query filtering on those columns against this partitioned, clustered table, the amount of data read goes down compared to before and results are returned a little faster; if I also included the partition key column in the query, returning just a few days’ data, it would be faster still.

[Screenshot: query against the clustered table reading less data and returning faster]
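A sketch of the kind of query that benefits, with the same placeholder names:

```sql
-- Filtering on the clustered columns skips non-matching blocks;
-- adding a partition-key filter prunes whole days as well
SELECT COUNT(*)
FROM `mydataset.pageviews_clustered`
WHERE page_url = '/blog/'
  AND view_ts >= TIMESTAMP('2018-06-01')
  AND view_ts <  TIMESTAMP('2018-06-04');
```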

But queries that filter on any of the other columns don’t benefit from this clustering, and moreover the benefit of the initial sort is lost over time as you load more (unsorted) data into the table; to really benefit from clustering, you have to rebuild the table regularly using a CREATE TABLE … AS SELECT.
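A periodic rebuild might look like this sketch; CREATE OR REPLACE TABLE re-runs the sort over the whole table, folding newer unsorted rows back into cluster order:

```sql
-- Rebuild the table in place so newly loaded rows are re-sorted
-- into cluster order along with the existing data
CREATE OR REPLACE TABLE `mydataset.pageviews_clustered`
PARTITION BY DATE(view_ts)
CLUSTER BY page_url, referrer_url
AS
SELECT *
FROM `mydataset.pageviews_clustered`;
```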

Table clustering in BigQuery is nice to have, but much like pre-sorting a direct-path load into Oracle database tables, it takes a lot of rebuilding and careful querying to get the most benefit from, and given the size of most BigQuery tables I doubt that rebuilding will happen much in practice.

BI tools such as Looker can take advantage of table partitioning and clustering straight away, as no changes are required to the query SQL you write; everything is handled in the table definition. Where you might want to set up partitioning yourself as a Looker developer is for the persistent derived tables (PDTs) that Looker can create to materialize the results of a view defined in your Looker model, for example to derive a dimension attribute using a subquery, such as calculating the sequence number of a user’s order for retention analysis on an eCommerce dashboard, as shown in the screenshot below.

[Screenshot: LookML derived table calculating order sequence numbers for retention analysis]

Looker has for some time shipped database-specific settings for particular source database types, such as those used for Redshift in the example above, and now supports date partitioning for persistent derived tables through a new partition_keys setting, as announced in this recent Looker forum post.

[Screenshot: partition_keys setting in a LookML derived table definition]
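In LookML the setting goes inside the derived_table definition; here’s a minimal sketch with hypothetical view, trigger and column names:

```
view: user_order_facts {
  derived_table: {
    sql:
      SELECT user_id,
             order_id,
             created_at,
             ROW_NUMBER() OVER (PARTITION BY user_id
                                ORDER BY created_at) AS order_sequence
      FROM orders ;;
    # persist the PDT and rebuild it once a day
    sql_trigger_value: SELECT CURRENT_DATE() ;;
    # date-partition the PDT table in BigQuery on this column
    partition_keys: ["created_at"]
  }
}
```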

Finally, if you’re interested in how partitioning is developing as a feature within BigQuery, and in some of the edge cases and requests for additional functionality that users of partitioning are asking for, this feature request on the BigQuery issue tracker is worth reading through.

Extending support beyond 2,500 partitions seems to be the most popular request, along with allowing integer and string columns to be used as the partition key; but also look out for issues around re-loading data into historic partitions, and the cost and work involved in rebuilding large tables to re-cluster their data or apply partitioning for the first time.