
16 SQL Techniques Every Beginner Needs to Know

by Mike ShakhomirovFebruary 11th, 2023

Too Long; Didn't Read

This blog post explains the most intricate data warehouse SQL techniques in detail, using the BigQuery standard SQL dialect to scribble down a few thoughts on the topic.

On a scale from 1 to 10, how good are your data warehousing skills?

Want to go above 7/10? Then this article is for you.


How good is your SQL? Want to get ready for a job interview ASAP?


This blog post explains the most intricate data warehouse SQL techniques in detail. I will use BigQuery standard SQL dialect to scribble down a few thoughts on this topic.

1. Incremental tables and MERGE

Updating tables is important. It is important indeed. The ideal situation is when your transactions have a PRIMARY key that is a unique, auto-incrementing integer. The table update in this case is simple:
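
A minimal sketch of that simple case, assuming hypothetical production.user_transactions and staging.user_transactions tables: insert only the rows whose transaction_id is above the highest one already loaded.

    INSERT INTO production.user_transactions (transaction_id, user_id, total_cost, dt)
    SELECT transaction_id, user_id, total_cost, dt
    FROM staging.user_transactions s
    -- only rows newer than the watermark already in production:
    WHERE s.transaction_id > (
      SELECT MAX(transaction_id) FROM production.user_transactions
    );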

That is not always the case when working with denormalized star-schema datasets in modern data warehouses. You might be tasked to create sessions with SQL and/or incrementally update datasets with just a portion of the data. transaction_id might not exist; instead you will have to deal with a data model where the unique key depends on the latest transaction_id (or timestamp) known. For example, user_id in the last_online dataset depends on the latest known connection timestamp. In this case you would want to update existing users and insert the new ones.

MERGE and incremental updates

You can use MERGE, or you can split the operation into two actions: one to update existing records with new ones, and one to insert completely new records that don't exist yet (a LEFT JOIN situation).

MERGE is a statement that is generally used in relational databases. The Google BigQuery MERGE command is one of the Data Manipulation Language (DML) statements. It is often used to perform three main functions atomically in one single statement: UPDATE, INSERT, and DELETE.


  • The UPDATE or DELETE clause can be used when a source row matches a row in the target.
  • The INSERT clause can be used when a source row has no match in the target.
  • The UPDATE or DELETE clause can also be used when a target row has no match in the source.


This means that the BigQuery MERGE command enables you to merge data by updating, inserting, and deleting rows in your BigQuery tables.

Consider this SQL:
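
A hedged sketch of such a MERGE, assuming hypothetical production.last_online and staging.last_online tables keyed by user_id:

    MERGE production.last_online t
    USING staging.last_online s
    ON t.user_id = s.user_id
    WHEN MATCHED THEN
      -- existing user: refresh the connection timestamp
      UPDATE SET last_online = s.last_online
    WHEN NOT MATCHED THEN
      -- brand-new user: insert the whole record
      INSERT (user_id, last_online) VALUES (s.user_id, s.last_online);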

2. Counting words

Doing UNNEST() and checking if the word you need is in the list might be useful in many situations, e.g. data warehouse sentiment analysis:
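
A minimal sketch, counting keyword occurrences in a hypothetical comments table (the table and the words are illustrative):

    WITH comments AS (
      SELECT 'I loved the product it is a great product' AS notes UNION ALL
      SELECT 'terrible delivery and terrible support'
    )
    SELECT word, COUNT(*) AS occurrences
    FROM comments, UNNEST(SPLIT(LOWER(notes), ' ')) AS word
    -- keep only the sentiment words we care about:
    WHERE word IN ('great', 'terrible', 'product')
    GROUP BY word;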

3. Using IF() statement outside of the SELECT statement

This gives us an opportunity to save some lines of code and be more eloquent code-wise. Normally you would put this into a sub-query and add a filter in the WHERE clause, but you can do this instead:
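
A minimal sketch, assuming a hypothetical project.dataset.transactions table with a dt date column: the IF() sits directly in the WHERE clause and applies a different cost threshold on weekends, no sub-query needed:

    SELECT transaction_id, dt, total_cost
    FROM project.dataset.transactions
    -- weekend rows must clear a higher threshold than weekday rows:
    WHERE IF(EXTRACT(DAYOFWEEK FROM dt) IN (1, 7),
             total_cost > 100,
             total_cost > 50);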

Another example is how NOT to use it with partitioned or wildcard tables. Don't do this. This is a bad example because, since the matching table suffixes are probably determined dynamically (based on something in your table), you will be charged for a full table scan.
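
A sketch of the anti-pattern, assuming hypothetical date-sharded project.dataset.events_* tables: because the suffix is computed from a sub-query at run time, BigQuery cannot prune the shards and scans them all:

    SELECT *
    FROM `project.dataset.events_*`
    -- the suffix depends on data, so every shard gets scanned and billed:
    WHERE _TABLE_SUFFIX = IF(
      (SELECT MAX(dt) FROM project.dataset.watermark) = CURRENT_DATE(),
      FORMAT_DATE('%Y%m%d', CURRENT_DATE()),
      FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
    );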

You can also use it in the HAVING clause and in AGGREGATE functions:
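
For example (same hypothetical transactions table), IF() inside an aggregate and again in HAVING:

    SELECT
      payment_type,
      -- aggregate only the gift transactions:
      SUM(IF(is_gift, total_cost, 0)) AS gift_revenue
    FROM project.dataset.transactions
    GROUP BY payment_type
    -- a different cutoff per payment type:
    HAVING IF(payment_type = 'card', gift_revenue > 100, gift_revenue > 0);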

4. Using GROUP BY ROLLUP

The ROLLUP function is used to perform aggregation at multiple levels. This is useful when you have to work with dimension graphs.

Image by author

The following query returns the total credit spend per day by the transaction type (is_gift) specified in the WHERE clause, and it also shows the total spend for each day and the total spend across all the dates available.
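
A sketch of that query against the hypothetical transactions table; the NULL grouping rows carry the subtotals and the grand total:

    SELECT
      dt,
      is_gift,
      SUM(total_cost) AS total_spend
    FROM project.dataset.transactions
    WHERE payment_type = 'credit'
    -- NULL is_gift row = total for that day; NULL dt row = grand total:
    GROUP BY ROLLUP (dt, is_gift)
    ORDER BY dt, is_gift;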

5. Convert table to JSON

Imagine you are required to convert your table into a JSON object where each record is an element of a nested array. This is where the to_json_string() function becomes useful:
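
A minimal sketch with mocked rows: ARRAY_AGG of STRUCTs builds the nested array, and TO_JSON_STRING() serialises it:

    WITH users AS (
      SELECT 1 AS user_id, 'alice' AS name UNION ALL
      SELECT 2, 'bob'
    )
    SELECT TO_JSON_STRING(ARRAY_AGG(STRUCT(user_id, name))) AS json_payload
    FROM users;
    -- [{"user_id":1,"name":"alice"},{"user_id":2,"name":"bob"}]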

Then you can use it anywhere: dates, marketing funnels, indices, histogram graphs, etc.

6. Using PARTITION BY

Given user_id, date and total_cost columns: for EACH date, how do you show the total revenue value for EACH customer while keeping all the rows? You can achieve this like so:
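
A sketch against the hypothetical transactions table: the window SUM attaches the per-user, per-date total to every row without collapsing them:

    SELECT
      user_id,
      dt,
      total_cost,
      -- the user's total for that date, repeated on each of their rows:
      SUM(total_cost) OVER (PARTITION BY user_id, dt) AS revenue_per_user_day
    FROM project.dataset.transactions;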

7. Moving average

Very often BI developers are tasked to add a moving average to their reports and fantastic dashboards. This might be a 7-, 14- or 30-day (or even month or year) MA line graph. So how do we do it?
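
A sketch of a 7-day moving average over daily revenue, same hypothetical table. The ROWS frame assumes no gaps in the dates (see the date arrays technique below for filling gaps):

    WITH daily AS (
      SELECT dt, SUM(total_cost) AS revenue
      FROM project.dataset.transactions
      GROUP BY dt
    )
    SELECT
      dt,
      revenue,
      -- the current day plus the six days before it:
      AVG(revenue) OVER (
        ORDER BY dt
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
      ) AS revenue_ma7
    FROM daily
    ORDER BY dt;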

8. Date arrays

This becomes really handy when you work with user retention or want to check some dataset for missing values, i.e. dates. BigQuery has a function called GENERATE_DATE_ARRAY:
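
A minimal sketch: generate a calendar and LEFT JOIN a hypothetical daily metrics table onto it to surface the missing dates:

    WITH calendar AS (
      SELECT dt
      FROM UNNEST(GENERATE_DATE_ARRAY('2023-01-01', '2023-01-31')) AS dt
    )
    SELECT c.dt
    FROM calendar c
    LEFT JOIN project.dataset.daily_metrics m
      ON c.dt = m.dt
    WHERE m.dt IS NULL;  -- dates with no data at all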

9. Row_number()

This is useful to get the latest record from your data, i.e. the latest updated record, etc., or even to remove duplicates:
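
A sketch deduplicating a hypothetical user updates table, keeping only the most recent record per user:

    SELECT * EXCEPT (rn)
    FROM (
      SELECT
        *,
        ROW_NUMBER() OVER (
          PARTITION BY user_id
          ORDER BY updated_at DESC
        ) AS rn
      FROM project.dataset.user_updates
    )
    WHERE rn = 1;  -- one (latest) row per user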

10. NTILE()

Another numbering function. It is really useful to monitor things like login duration in seconds if you have a mobile app. For example, I have my app connected to Firebase, and when users log in I can see how long it took them.

Image by author

This function divides the rows into constant_integer_expression buckets based on row ordering and returns the 1-based bucket number that is assigned to each row. The number of rows in the buckets can differ by at most 1. The remainder values (the remainder of number of rows divided by buckets) are distributed one for each bucket, starting with bucket 1. If constant_integer_expression evaluates to NULL, 0 or negative, an error is provided.
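
A sketch splitting hypothetical login durations into quartiles:

    SELECT
      user_id,
      login_duration_sec,
      -- 1 = fastest quarter of logins, 4 = slowest:
      NTILE(4) OVER (ORDER BY login_duration_sec) AS duration_quartile
    FROM project.dataset.logins;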

11. Rank / dense_rank

They are also called numbering functions. I tend to use DENSE_RANK as the default ranking function, as it doesn't skip the next available ranking whereas RANK would; it returns consecutive rank values. You can use it with a partition clause which divides the results into distinct buckets. Rows in each partition receive the same rank if they have the same values. Example:
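
A sketch ranking users by spend in the hypothetical transactions table, showing where RANK and DENSE_RANK diverge on ties:

    SELECT
      user_id,
      total_cost,
      RANK()       OVER (ORDER BY total_cost DESC) AS r,  -- skips ranks after ties
      DENSE_RANK() OVER (ORDER BY total_cost DESC) AS dr  -- stays consecutive
    FROM project.dataset.transactions;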

Another example with product prices:
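
A sketch with mocked product prices, so the tie behaviour is visible:

    WITH products AS (
      SELECT 'a' AS product, 2.0 AS price UNION ALL
      SELECT 'b', 2.0 UNION ALL
      SELECT 'c', 5.0
    )
    SELECT
      product,
      price,
      DENSE_RANK() OVER (ORDER BY price) AS price_rank  -- yields 1, 1, 2
    FROM products;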

12. Pivot / unpivot

Pivot changes rows to columns. That's all it does. Unpivot does the opposite.
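
A minimal sketch with mocked install counts; PIVOT turns the platform rows into columns (UNPIVOT would reverse it):

    WITH installs AS (
      SELECT DATE '2023-01-01' AS dt, 'ios' AS platform, 10 AS cnt UNION ALL
      SELECT DATE '2023-01-01', 'android', 20
    )
    SELECT *
    FROM installs
    PIVOT (SUM(cnt) FOR platform IN ('ios', 'android'));
    -- dt         | ios | android
    -- 2023-01-01 | 10  | 20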

13. First_value / last_value

That's another useful function which helps you get a delta for each row against the first / last value in that particular partition.
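
A sketch against the hypothetical transactions table: the delta of each purchase against the user's first one. Note that LAST_VALUE needs an explicit frame, because the default frame stops at the current row:

    SELECT
      user_id,
      dt,
      total_cost,
      total_cost - FIRST_VALUE(total_cost) OVER (
        PARTITION BY user_id ORDER BY dt
      ) AS delta_vs_first,
      LAST_VALUE(total_cost) OVER (
        PARTITION BY user_id ORDER BY dt
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
      ) AS latest_cost
    FROM project.dataset.transactions;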

14. Convert a table into Array of structs and pass them to UDF

This is useful when you need to apply a user-defined function (UDF) with some complex logic to each row or table. You can always consider your table as an array of TYPE STRUCT objects and then pass each one of them to a UDF. It depends on your logic. For example, I use it to calculate purchase expire times:
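
A sketch of the idea with a mocked purchases table; the 30-day expiry rule and all names here are illustrative, the point is passing ARRAY_AGG(STRUCT(...)) into the UDF:

    CREATE TEMP FUNCTION get_expire_time(
      purchases ARRAY<STRUCT<product STRING, ts TIMESTAMP>>
    ) AS (
      -- your complex logic goes here; illustrated as "latest purchase + 30 days":
      (SELECT MAX(TIMESTAMP_ADD(p.ts, INTERVAL 30 DAY)) FROM UNNEST(purchases) p)
    );

    WITH purchases AS (
      SELECT 1 AS user_id, 'sku1' AS product, TIMESTAMP '2023-01-01' AS ts UNION ALL
      SELECT 1, 'sku2', TIMESTAMP '2023-01-15'
    )
    SELECT
      user_id,
      get_expire_time(ARRAY_AGG(STRUCT(product, ts))) AS expires_at
    FROM purchases
    GROUP BY user_id;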

In a similar way you can create tables with no need to use UNION ALL, as sketched below. For example, I use it to mock some test data for unit tests. You can do it very fast just by using Alt+Shift+Down in your editor.
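
A sketch of the UNION ALL-free mock: UNNEST an array of STRUCTs, where only the first element needs field names:

    SELECT *
    FROM UNNEST([
      STRUCT(1 AS user_id, 'alice' AS name),
      (2, 'bob'),    -- later tuples inherit the struct type
      (3, 'carol')
    ]);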

15. Creating event funnels using FOLLOWING AND UNBOUNDED FOLLOWING

A good example might be marketing funnels. Your dataset might contain continuously repeating events of the same type, but ideally you would want to chain each event with the next one of a different type. This might be useful when you need to get a list of something, i.e. events, purchases, etc., in order to build a funnel dataset. Working with PARTITION BY, it gives you the opportunity to group all the following events no matter how many of them exist in each partition.
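
A sketch over a mocked events table: for every event, collect everything that happens afterwards within the same user's partition:

    WITH events AS (
      SELECT 1 AS user_id, TIMESTAMP '2023-01-01 10:00:00' AS ts, 'view' AS event UNION ALL
      SELECT 1, TIMESTAMP '2023-01-01 10:05:00', 'view' UNION ALL
      SELECT 1, TIMESTAMP '2023-01-01 10:10:00', 'purchase'
    )
    SELECT
      user_id,
      ts,
      event,
      -- every event after the current one, however many there are:
      ARRAY_AGG(event) OVER (
        PARTITION BY user_id
        ORDER BY ts
        ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING
      ) AS following_events
    FROM events;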

16. Regexp

You would want to use it if you need to extract something from unstructured data, i.e. fx rates, custom groupings, etc.

Working with currency exchange rates using regexp

Consider this example with exchange rates data:
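
A sketch with mocked rate strings; REGEXP_EXTRACT pulls the currency pair and the numeric rate out of the free text:

    WITH rates AS (
      SELECT 'GBP/USD: 1.2468' AS payload UNION ALL
      SELECT 'EUR/USD: 1.0843'
    )
    SELECT
      REGEXP_EXTRACT(payload, r'^([A-Z]{3}/[A-Z]{3})') AS pair,
      CAST(REGEXP_EXTRACT(payload, r'([0-9]+\.[0-9]+)') AS FLOAT64) AS rate
    FROM rates;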

Working with App versions using regexp

Sometimes you might want to use regexp to get the major, release or mod version of your app and create a custom report:
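
A sketch with mocked semantic version strings:

    WITH app_installs AS (
      SELECT '2.5.10' AS app_version UNION ALL
      SELECT '3.0.1'
    )
    SELECT
      app_version,
      CAST(REGEXP_EXTRACT(app_version, r'^(\d+)') AS INT64) AS major,
      CAST(REGEXP_EXTRACT(app_version, r'^\d+\.(\d+)') AS INT64) AS release,
      CAST(REGEXP_EXTRACT(app_version, r'^\d+\.\d+\.(\d+)') AS INT64) AS mod_version
    FROM app_installs;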

Conclusion

SQL is a powerful tool that helps to manipulate data. Hopefully these SQL use cases from digital marketing will be useful for you. It's a handy skill indeed and can help you with many projects. These SQL snippets made my life a lot easier, and I use them at work almost every day. What's more, SQL and modern data warehouses are essential tools for data science. Their robust dialect features allow you to model and visualize data with ease. Because SQL is the language that data warehouse and business intelligence professionals use, it's an excellent choice if you want to share data with them. It is the most common way to communicate with almost every data warehouse / lake solution on the market.


Originally published at mydataschool.com by datamike


Mike is a passionate and digitally focused individual with an abundance of drive and enthusiasm, loving the challenges that the full mix of digital marketing throws up. He lives in the UK and completed an MBA at Newcastle University in 2015.