A Closer Look at BigQuery Aggregate Functions


Discover the capabilities of BigQuery's aggregate functions to transform your data analysis process. This guide dives deep into the syntax, usage, and strategic application of essential aggregate functions, empowering you to tap into their full potential for insightful data processing.

For professionals working with Google's BigQuery, mastering aggregate functions is akin to wielding a magic wand that transforms raw data into insightful, actionable information.

Whether you're a Data Analyst, BI Professional, SQL Developer, or IT Consultant, this guide will take you through the intricacies of BigQuery's aggregate functions, ensuring you emerge with a toolkit brimming with analytical prowess.

Introduction to Aggregate Functions in BigQuery

Aggregate functions in BigQuery serve as the backbone for data summarization and statistical analysis. They allow for the processing of large datasets to derive meaningful patterns, trends, and metrics. Understanding these functions is crucial for anyone looking to dive deep into data analysis, offering a gateway to advanced reporting and data-driven decision-making.

Essential Aggregate Functions for BigQuery Analysts

From calculating averages with AVG to concatenating strings with STRING_AGG, aggregate functions are pivotal for efficient data analysis and reporting. Their proper utilization can significantly enhance the quality and speed of data-driven decision-making.

ANY_VALUE Function

The ANY_VALUE function in BigQuery is a versatile tool that returns an arbitrary value from a group. It's useful when a column must appear in the SELECT list of a GROUP BY query but doesn't need its own aggregate, letting you avoid the aggregation errors such queries otherwise raise.

This function is handy when you're indifferent to which value is returned from each group, or when ensuring query compliance with SQL standards. Its simplicity and utility make it a go-to choice for data analysts looking to streamline their queries without compromising on performance.

ANY_VALUE(expression)

  • ANY_VALUE: Returns one value from each group in GROUP BY queries, sidestepping the requirement that every selected column be aggregated or grouped.
  • expression: This is the column or calculation from which ANY_VALUE returns a value. The expression can be of any data type.

The syntax is straightforward, where the expression is the only parameter, highlighting the function's simplicity and versatility.


Consider a dataset, employee_records, with multiple entries for each employee, including their employee_id, department, and salary.

SELECT department, ANY_VALUE(salary) as sample_salary
FROM employee_records
GROUP BY department;


  • department is the column by which the data is grouped.
  • ANY_VALUE(salary) as sample_salary selects any one salary value from each department group to represent sample_salary.

This query demonstrates how ANY_VALUE satisfies SQL's requirement for aggregated or grouped columns by selecting an arbitrary salary from each department without affecting the group's integrity.
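Outside SQL, ANY_VALUE's semantics can be sketched in a few lines of Python. The rows below are hypothetical stand-ins for the employee_records example above; like ANY_VALUE, any member of the group would be an acceptable answer (here we simply keep the first one seen):

```python
# Hypothetical rows mirroring the employee_records example above.
rows = [
    {"department": "Sales", "salary": 50000},
    {"department": "Sales", "salary": 55000},
    {"department": "IT", "salary": 70000},
]

def any_value_per_group(rows, group_key, value_key):
    """Return one value per group (SQL makes no ordering promise; we keep the first seen)."""
    result = {}
    for row in rows:
        result.setdefault(row[group_key], row[value_key])
    return result

sample_salaries = any_value_per_group(rows, "department", "salary")
```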


ARRAY_CONCAT_AGG Function

The ARRAY_CONCAT_AGG function in BigQuery concatenates arrays from multiple rows into a single array, which facilitates analyses that require collective array operations. It simplifies the handling of array-type columns across rows, making it easier to compile and assess related data points for grouped data.

ARRAY_CONCAT_AGG(expression [ORDER BY key [, ...]] [LIMIT n])

  • expression: The array expression to be aggregated. This expression must return an ARRAY data type. NULL input arrays are ignored.
  • ORDER BY / LIMIT (optional): Order the input arrays before concatenation, and cap how many input arrays are concatenated.


Consider a dataset of book transactions (here, a hypothetical table book_transactions), where each row represents a transaction and an array column named authors lists all authors involved in that transaction. To compile a list of all authors involved in transactions within each category, we can use the ARRAY_CONCAT_AGG function.

SELECT category, 
ARRAY_CONCAT_AGG(authors) AS all_authors
FROM book_transactions
GROUP BY category;


  • category: The column by which we are grouping the transactions (e.g., fiction, non-fiction).
  • ARRAY_CONCAT_AGG(authors) AS all_authors: Concatenates all authors arrays from the transactions within each category into a single array named all_authors.

This query effectively aggregates all authors from transactions within each book category into a single array. This allows for an easy overview of which authors are involved in transactions for each category, showcasing the ARRAY_CONCAT_AGG function's capability to optimize data aggregation processes.
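The concatenation step above amounts to extending one running list per group. A minimal Python sketch, with hypothetical transaction rows standing in for the book dataset:

```python
from collections import defaultdict

# Hypothetical transactions: each row carries a category and an array of authors.
transactions = [
    {"category": "fiction", "authors": ["Lee", "King"]},
    {"category": "fiction", "authors": ["Atwood"]},
    {"category": "non-fiction", "authors": ["Kahneman"]},
]

all_authors = defaultdict(list)
for t in transactions:
    # ARRAY_CONCAT_AGG appends each row's array onto the group's running array.
    all_authors[t["category"]].extend(t["authors"])
```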

AVG Function

The AVG function in BigQuery is a powerful tool for data analysts, allowing them to calculate the average value of a numeric column. This function is crucial for summarizing large datasets, identifying trends, and making comparisons across data points.

AVG(expression)

  • AVG: The function that computes the average.
  • expression: The column or calculation from which the average is computed. This must be numeric.


In this real-life example, we're calculating the average salary of all employees from the employee_data table.

SELECT AVG(salary) avg_salary FROM employee_data;


  • AVG(salary): This calculates the mean salary value.
  • FROM employee_data: This specifies the table from which the data is retrieved.

This query is incredibly useful for HR departments to understand compensation trends and ensure equitable salary distributions. It's a straightforward yet powerful example of how aggregate functions like AVG can provide critical insights into organizational data.
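AVG is the plain arithmetic mean; the Python equivalent, with hypothetical salary values in place of the employee_data.salary column:

```python
from statistics import mean

# Hypothetical salaries standing in for the employee_data.salary column.
salaries = [48000, 52000, 62000]

avg_salary = mean(salaries)  # AVG(salary) computes the same arithmetic mean
```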

ARRAY_AGG Function

The ARRAY_AGG function in BigQuery is a powerful tool designed to aggregate values from multiple rows into an array. It serves as a bridge between individual data points and grouped data structures, enabling analysts to create compact representations of data for further analysis.

This function is useful in scenarios where understanding the collection of items per group is essential, such as listing all products purchased by a customer or all skills possessed by an employee.


ARRAY_AGG(expression [ORDER BY clause] [LIMIT clause])

  • expression: The column or calculation you wish to aggregate into an array.
  • ORDER BY clause (optional): Specifies the order in which the items in the resulting array should appear.
  • LIMIT clause (optional): Limits the number of items in the resulting array.


Consider a scenario where you have a table of sales with columns customer_id and product_name. You want to list all products purchased by each customer in a single array.

SELECT customer_id, 
ARRAY_AGG(product_name ORDER BY product_name) 
AS products_purchased
FROM sales
GROUP BY customer_id;


  • customer_id: Identifies the customer.
  • ARRAY_AGG(product_name ORDER BY product_name): Aggregates all product_name values per customer_id into an array, ordered alphabetically.
  • AS products_purchased: The name assigned to the output array column.
  • GROUP BY customer_id: Groups the results by customer_id, ensuring that the ARRAY_AGG function aggregates products for each customer individually.

This query would produce a list where each row contains a customer_id and an array of products_purchased by that customer, providing a clear, concise view of customer behavior.
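The grouping-plus-ordering behavior above can be sketched in Python; the (customer_id, product_name) tuples are hypothetical sales rows:

```python
from collections import defaultdict

# Hypothetical sales rows (customer_id, product_name).
sales = [
    (1, "keyboard"), (1, "monitor"), (2, "mouse"), (1, "cable"),
]

products_purchased = defaultdict(list)
for customer_id, product_name in sales:
    products_purchased[customer_id].append(product_name)

# Mirror the ORDER BY product_name inside ARRAY_AGG.
for customer_id in products_purchased:
    products_purchased[customer_id].sort()
```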


STRING_AGG Function

The STRING_AGG function in BigQuery is a powerful tool for data analysts and developers, allowing the concatenation of string values from multiple rows into a single string. This function is particularly useful when you need to aggregate textual data for reporting, summarization, or visualization purposes.


STRING_AGG([DISTINCT] expression [, delimiter] [ORDER BY key [, ...]] [LIMIT n])

  • expression: The column or expression whose values you want to concatenate.
  • delimiter (optional): The string used to separate each value in the result; if not specified, the default delimiter is a comma.
  • DISTINCT (optional): Concatenates only distinct values.
  • ORDER BY clause (optional): Specifies the order of the concatenated elements.


The scenario involves aggregating employee names from a specific department within an organization or dataset.

SELECT STRING_AGG(employeeName, '; ') 
AS employees 
FROM dataset.department 
WHERE departmentID = 1;


  • SELECT: This keyword initiates a query to retrieve data from a database.
  • STRING_AGG(employeeName, '; '): This function aggregates (combines) the values from the employeeName column into a single string, with each name separated by '; ' (a semicolon followed by a space).
  • AS employees: This part of the query renames the output of the STRING_AGG function to employees for easier reference in the results.
  • FROM dataset.department: Specifies the table from which to retrieve the data.
  • WHERE departmentID = 1: Filters the rows to include only those where the departmentID column has a value of 1.

This query produces a single string that concatenates the names of all employees working in the department with departmentID = 1, with each name separated by a semicolon and a space.
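At its core this is a delimiter join; a Python sketch with hypothetical names standing in for the departmentID = 1 result set:

```python
# Hypothetical employee names from the department with departmentID = 1.
employee_names = ["Avery", "Blake", "Casey"]

# STRING_AGG(employeeName, '; ') boils down to a delimiter join.
employees = "; ".join(employee_names)
```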

COUNT Function

The COUNT function is a fundamental tool in SQL for BigQuery, allowing users to count the number of items in a column or expression. This function provides a straightforward way to quantify records that meet specific criteria. Whether you're assessing the volume of transactions or the frequency of certain events, COUNT offers a simple yet powerful way to derive insights from your data.

COUNT(expression)

  • expression: This is the column or expression over which the count will be performed. It can be a specific column name, an asterisk (*), or a condition.


This SQL query counts the rows in the sales table that have an associated customer ID, giving a quick tally of sales transactions with a non-null customer_id.

SELECT COUNT(customer_id) customers FROM sales;


  • SELECT: This keyword starts a query to retrieve data from a database.
  • COUNT(customer_id): This function counts the number of rows that have a non-null value in the customer_id column.
  • FROM sales: Specifies the table from which to retrieve the data.

This query returns the number of rows in the sales table with a non-null customer_id.
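The NULL-skipping distinction between COUNT(column) and COUNT(*) is easy to show in Python, with None standing in for SQL NULL and a hypothetical customer_id column:

```python
# Hypothetical customer_id column; None stands in for SQL NULL.
customer_ids = ["c1", "c2", None, "c2", None]

# COUNT(customer_id) counts only non-NULL values; COUNT(*) counts all rows.
count_customer_id = sum(1 for c in customer_ids if c is not None)
count_star = len(customer_ids)
```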

COUNTIF Function

The COUNTIF function in BigQuery is a versatile tool that allows analysts to count the number of rows that match a specific condition within a dataset. This function is particularly useful for conditional aggregation, enabling users to perform more nuanced analyses and derive insights based on specific criteria.

COUNTIF(condition)

  • condition: This is the specific condition or criteria that rows must meet to be counted. It is typically expressed as a logical expression involving columns of the dataset.


SELECT COUNTIF(score > 80) students 
FROM student_grades;


  • SELECT: Initiates a query to retrieve data from a database.
  • COUNTIF(score > 80): This function counts the number of rows where the condition (score > 80) is true.
  • FROM student_grades: Specifies the table from which to retrieve the data.

This query calculates the total number of entries in the student_grades table where the score exceeds 80.
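COUNTIF is a conditional tally; a Python analogue over hypothetical scores (note that 80 itself fails the strict `> 80` test):

```python
# Hypothetical score column from student_grades.
scores = [95, 67, 81, 80, 73]

# COUNTIF(score > 80) counts rows where the condition holds.
high_scorers = sum(1 for s in scores if s > 80)
```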


COUNT DISTINCT Function

The COUNT DISTINCT function is a powerful tool in SQL, specifically in BigQuery, for eliminating duplicate values in a specified column before counting the unique entries. This function is invaluable for data analysis tasks where understanding the variety or uniqueness within a dataset is crucial.

For example, it can help determine the number of unique visitors to a website or the variety of products sold in a store. Its primary benefit is enhancing data accuracy and integrity by ensuring that only unique items contribute to the count.


COUNT(DISTINCT expression)

  • DISTINCT: A keyword that instructs BigQuery to consider only unique values for counting.
  • expression: The column or expression over which the distinct count is performed.


In this real-life scenario, suppose you are analyzing sales data and need to know how many unique customers made purchases. The customer_id column identifies customers, and you're interested in the count of distinct customer_id values in the sales_data table.

SELECT COUNT(DISTINCT customer_id) 
AS unique_customers FROM sales_data;


  • SELECT: Specifies what to return to the query result.
  • COUNT(DISTINCT customer_id): Counts the unique customer IDs in the sales_data table.
  • AS unique_customers: Renames the output of the count operation to unique_customers for clarity.

This query returns the number of unique customers who have made purchases, providing valuable insight into the size and diversity of the customer base.
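Deduplicate-then-count is exactly what a set gives you in Python; the IDs below are hypothetical:

```python
# Hypothetical customer_id values with duplicates.
customer_ids = ["c1", "c2", "c1", "c3", "c2"]

# COUNT(DISTINCT customer_id) deduplicates before counting.
unique_customers = len(set(customer_ids))
```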

SUM Function

The SUM function is a powerful tool in BigQuery that allows for the addition of numerical values in a column or expression across a dataset. It's invaluable for financial analysis, inventory management, performance tracking, and any scenario where total values are crucial.

SUM(expression)

  • expression: The column or calculation whose values you want to add together.


In this example, we're calculating the total sales from the monthly_sales_summary table.

SELECT SUM(total_sales) total_sales 
FROM monthly_sales_summary;


  • SELECT SUM(total_sales): This part of the query tells BigQuery to add up all the values found in the total_sales column.
  • FROM monthly_sales_summary: Specifies the table from which to retrieve the sales data.

The output is the sum of the total revenue generated.
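SUM maps directly onto a running total; in Python, with hypothetical monthly figures:

```python
# Hypothetical total_sales values from monthly_sales_summary.
monthly_totals = [12000, 15500, 9800]

total_sales = sum(monthly_totals)  # SUM(total_sales) over the rows
```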

MAX Function

The MAX function is a powerful tool in BigQuery that finds the maximum value in a set of values. It's widely used across various types of data analysis to identify the highest number in a numerical dataset or the latest date in a series of dates. This function is invaluable for data comparison, trend analysis, and highlighting peak performance or values within your data.

MAX(expression)

  • MAX: The function that calculates the maximum value.
  • expression: The column or calculation from which the MAX function finds the highest value.


In this real-life example, we are querying a database of company employees to find the highest salary.

SELECT MAX(salary) max_salary 
FROM employee_records;


  • SELECT MAX(salary): This part of the query applies the MAX function to the 'salary' column.
  • FROM employee_records: Specifies the table employee_records from which to retrieve the data.

By executing this query, you would obtain the maximum salary value from the employee_records table, providing a clear view of the highest salary in the company.

MIN Function

The MIN function is a cornerstone of data analysis, particularly in the BigQuery environment. It identifies the smallest value in a dataset or a specific column, making it invaluable for comparative analysis, trend spotting, and outlier detection.

MIN(expression)

  • MIN: Keyword to initiate the minimum value function.
  • expression: The column or calculation from which the minimum value is sought.


In this example, we're looking to find the lowest salary within the employee_records table.

SELECT MIN(salary) min_salary 
FROM employee_records;


  • SELECT MIN(salary): This part of the statement is directing BigQuery to select the minimum value found in the salary column.
  • FROM employee_records: Here, we specify the table employee_records as the data source for our query.

The query scans the salary column and returns the smallest value found, providing essential insights into the salary range and helping to evaluate financial distributions within the company.
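The MAX and MIN examples above each reduce to a single comparison pass over the column; in Python, with hypothetical salaries:

```python
# Hypothetical salary column from employee_records.
salaries = [48000, 91000, 62000, 55000]

max_salary = max(salaries)  # MAX(salary)
min_salary = min(salaries)  # MIN(salary)
```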

BIT_AND Function

The BIT_AND function in BigQuery combines all the numbers in a column using a bitwise AND operation. It's useful for finding common bits in a set of numbers. For instance, it can help in identifying shared permissions or settings among a group of users or systems. Its ability to consolidate data into a single value that represents commonalities across a column simplifies decision-making processes.

BIT_AND(expression)

  • expression: This is the column or calculation on which the BIT_AND operation will be applied. The expression must return an integer type, as bitwise operations are only meaningful on binary (bit-based) data.


Consider a table user_permissions_data with a column permissions that stores integer values representing user permissions in a binary format. You want to find out which permissions are common across all users.

SELECT BIT_AND(permissions) AS common_permissions
FROM user_permissions_data;


  • BIT_AND(permissions): This calculates the bitwise AND across all values in the permissions column.
  • common_permissions: This is the alias for the result of the BIT_AND operation, representing the permissions shared by all users in the user_permissions_data table.

The result identifies the bit flags that are common across all entries in the dataset, essentially showing which permissions are shared by all users.
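The aggregation is a fold with the bitwise AND operator; a Python sketch with hypothetical permission bitmasks:

```python
from functools import reduce
from operator import and_

# Hypothetical permission bitmasks: 0b0111, 0b0101, 0b1101.
permissions = [0b0111, 0b0101, 0b1101]

# BIT_AND keeps only the bits set in every row.
common_permissions = reduce(and_, permissions)
```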

BIT_OR Function

The BIT_OR function in BigQuery is an aggregate function that performs a bitwise OR operation across all non-null input values of a column. This function is particularly useful when you need to aggregate flags or binary indicators stored across multiple records. The beauty of BIT_OR lies in its ability to compactly summarize binary data.

BIT_OR(expression)

  • expression: This is the column or expression over which the BIT_OR operation is applied. It must yield an INT64 value.


In this real-world example, suppose each employee in a company has an access_flags field represented as an integer, where each bit represents a specific access permission.

SELECT department, BIT_OR(access_flags) as department_access 
FROM employee_access GROUP BY department;


  • department: Column by which the results are grouped.
  • access_flags: The binary indicators for each employee's access permissions.
  • employee_access: The table containing records of employees' access permissions.
  • GROUP BY department: This clause groups the results by department, allowing the BIT_OR operation to be applied to each group separately.

The query aggregates the access flags across all employees in each department. The department_access column then represents the combined access permissions for the entire department, with each bit in the result set if any employee in the department has the corresponding permission.
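Per-group OR-folding looks like this in Python, with hypothetical (department, access_flags) rows:

```python
from collections import defaultdict

# Hypothetical (department, access_flags) rows.
rows = [("eng", 0b0001), ("eng", 0b0100), ("hr", 0b0010)]

department_access = defaultdict(int)
for department, flags in rows:
    # BIT_OR sets a bit if any row in the group has it set.
    department_access[department] |= flags
```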

BIT_XOR Function

The BIT_XOR function in BigQuery is a powerful tool for performing bitwise exclusive OR operations across all input values for a column. This function is particularly useful in scenarios where you need to find odd occurrences of bits across a dataset or combine flags in a bitwise manner.

BIT_XOR(expression)

  • expression: The column or expression over which the BIT_XOR operation will be applied. It must be an integer (INT64) type, since bitwise operations apply to binary data.


Consider a table employee_access with a column permissions that stores bitwise flags representing user permissions. The following query finds the cumulative permission settings across all users.

SELECT BIT_XOR(permissions) AS cumulative_permissions 
FROM employee_access;


  • permissions: The column in employee_access containing bitwise flags.
  • cumulative_permissions: The alias for the result of the BIT_XOR operation.
  • employee_access: The name of the table containing user permissions data.

The result will combine the permissions from all entries, where each bit is set to 1 only if it is set to 1 in an odd number of entries. Essentially, it will show the cumulative effect of toggling permissions across all users, identifying unique or non-repeated permissions.
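The odd-occurrence behavior is easiest to see with a tiny XOR fold. In the hypothetical bitmasks below, bit 0b0001 appears in all three rows (odd count) and survives, while bit 0b0010 appears twice (even count) and cancels out:

```python
from functools import reduce
from operator import xor

# Hypothetical permission bitmasks.
permissions = [0b0011, 0b0001, 0b0011]

# BIT_XOR keeps a bit only when it is set in an odd number of rows.
cumulative_permissions = reduce(xor, permissions)
```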


GROUPING Function

The GROUPING function is a helper in BigQuery that tells you whether the row you're looking at is a subtotal or grand total rolled up from other rows, or just regular, individual data. It returns a simple 0 or 1: 1 when the value is aggregated across a rollup level (like a total sales number) and 0 for a regular row (like sales from one department).

This is useful for making reports that are easy to understand because you can tell which numbers are totals and which are specifics.

GROUPING(name_of_column)

  • name_of_column: This is the part of your data you're focusing on. The function checks if this part is a total sum or just regular data.


Let's say you have a list of sales with the areas they're from, the departments, and how much was sold. You want a report that shows sales for each area and department, plus totals for each area and a big total for everything.

SELECT region, department, 
GROUPING(region) AS grouped_region, 
GROUPING(department) AS grouped_department, 
SUM(sales_amount) AS total_sales
FROM sales_data GROUP BY ROLLUP(region, department);


  • GROUPING(region) and GROUPING(department): These return 1 for subtotal and grand-total rows and 0 for regular detail rows, letting you tell the totals apart from the department-level numbers.
  • ROLLUP(region, department): This sorts your sales into layers: first by each department within a region, then totals for each region, and finally a grand total.
  • SUM(sales_amount) AS total_sales: This adds up the sales for each group.

The query helps you get a detailed report that's easy to read because you can clearly see which numbers are totals and which are detailed sales figures.
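The three levels produced by ROLLUP(region, department) can be sketched in Python; the rows are hypothetical, and None plays the role of the "rolled up" placeholder that GROUPING would flag with 1:

```python
from collections import defaultdict

# Hypothetical (region, department, sales_amount) rows.
rows = [("East", "Toys", 100), ("East", "Books", 50), ("West", "Toys", 70)]

totals = defaultdict(int)
for region, department, amount in rows:
    # ROLLUP(region, department) produces three grouping levels:
    totals[(region, department)] += amount  # detail row: both GROUPING(...) = 0
    totals[(region, None)] += amount        # region subtotal: GROUPING(department) = 1
    totals[(None, None)] += amount          # grand total: both GROUPING(...) = 1
```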


LOGICAL_AND Function

The LOGICAL_AND function in BigQuery checks if a condition is true for all rows. If every row meets the condition, it returns TRUE; if any row doesn't, it returns FALSE. This is handy for making sure all records meet certain criteria, simplifying data checks into one simple result.

LOGICAL_AND(expression)

  • expression: A Boolean expression that evaluates to TRUE or FALSE.

The LOGICAL_AND function takes a single Boolean expression as input and processes it across all rows within the specified aggregation group. It simplifies the evaluation of multiple records against a common condition, streamlining data analysis workflows.


In this example, the LOGICAL_AND function checks whether all items identified by product_id in the inventory table are in stock (is_in_stock is TRUE).

SELECT product_id, LOGICAL_AND(is_in_stock) 
AS all_in_stock 
FROM inventory GROUP BY product_id;


  • product_id: Identifier for each product in the inventory.
  • is_in_stock: A Boolean column in the inventory table indicating whether the product is in stock.
  • LOGICAL_AND(is_in_stock) AS all_in_stock: Evaluates to TRUE if every item with the same product_id is in stock, FALSE otherwise.

This query returns a result for each product ID, indicating with all_in_stock as TRUE if all associated records are in stock; otherwise, it returns FALSE.
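LOGICAL_AND per group corresponds to Python's all() over each group's flags; the inventory rows below are hypothetical:

```python
from collections import defaultdict

# Hypothetical (product_id, is_in_stock) rows.
rows = [("p1", True), ("p1", True), ("p2", True), ("p2", False)]

groups = defaultdict(list)
for product_id, in_stock in rows:
    groups[product_id].append(in_stock)

# LOGICAL_AND is TRUE only when every row in the group is TRUE.
all_in_stock = {pid: all(flags) for pid, flags in groups.items()}
```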


LOGICAL_OR Function

The LOGICAL_OR function in BigQuery is a boolean aggregate function that returns TRUE if at least one TRUE value exists within the input expression across all input values in the group. For example, it can be used to determine if any record in a dataset meets a specific criterion, such as any sales transaction being flagged for review. The LOGICAL_OR is invaluable for quick, conditional data checks and summaries.

LOGICAL_OR(expression)

  • expression: A boolean expression that evaluates to TRUE or FALSE.

The expression is evaluated for each row in a dataset. If any row evaluates to TRUE, the LOGICAL_OR function will return TRUE. If all rows are evaluated as FALSE, then it will return FALSE.


Let's consider a real-world example where we have a dataset of online orders, and we want to check if any orders in a particular month were marked as fraudulent.

SELECT month, LOGICAL_OR(is_fraudulent) 
AS any_fraudulent_orders 
FROM orders GROUP BY month;


  • month: The column by which the data is being grouped.
  • is_fraudulent: A boolean column that indicates whether an order was marked as fraudulent (TRUE) or not (FALSE).
  • LOGICAL_OR(is_fraudulent): It checks across each group of orders (grouped by month) to see if any order was marked as fraudulent.
  • AS any_fraudulent_orders: It assigns a name to the output column of the LOGICAL_OR function.

This query effectively summarizes the data by month and indicates whether there were any fraudulent orders in each month, simplifying the process of identifying problematic periods for further analysis.
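Symmetrically, LOGICAL_OR per group is Python's any(); the order rows below are hypothetical:

```python
from collections import defaultdict

# Hypothetical (month, is_fraudulent) order rows.
orders = [("Jan", False), ("Jan", True), ("Feb", False)]

flags_by_month = defaultdict(list)
for month, is_fraudulent in orders:
    flags_by_month[month].append(is_fraudulent)

# LOGICAL_OR is TRUE if at least one row in the group is TRUE.
any_fraudulent_orders = {m: any(f) for m, f in flags_by_month.items()}
```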

MAX_BY Function

The MAX_BY function in BigQuery is a powerful tool for retrieving the value from one column that corresponds to the maximum value in another column. It is particularly useful in scenarios where you need to identify a specific data point that holds the highest value in a dataset, such as finding the employee with the highest sales in a particular month or the product with the highest rating.


MAX_BY(value_expression, order_expression)

  • value_expression: The column or expression that you want to retrieve the corresponding value from.
  • order_expression: The column or expression used to determine the maximum value.


Let's say we have a table sales_data with columns employee_name, sales_amount, and month. To find the name of the employee who achieved the highest sales in March, we can use the following query.

SELECT MAX_BY(employee_name, sales_amount) AS top_seller
FROM sales_data
WHERE month = 'March';


  • SELECT MAX_BY(employee_name, sales_amount) AS top_seller: This part of the query identifies the employee with the highest sales_amount for the given condition, labeling the result as top_seller.
  • FROM sales_data: Specifies the table named sales_data as the source of the data for the query.
  • WHERE month = 'March': Filters the records to only include those where the month column matches 'March', ensuring the query only considers sales data from March.

The output of this query will be the name of the top-performing salesperson for March based on the sales amount.
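MAX_BY's "value at the row where another column is maximal" maps onto Python's max() with a key function; the March rows below are hypothetical:

```python
# Hypothetical March rows from sales_data.
march_sales = [
    {"employee_name": "Kim", "sales_amount": 1200},
    {"employee_name": "Ola", "sales_amount": 2100},
    {"employee_name": "Raj", "sales_amount": 1800},
]

# MAX_BY(employee_name, sales_amount): take the name from the row
# where sales_amount is highest.
top_seller = max(march_sales, key=lambda r: r["sales_amount"])["employee_name"]
```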

MIN_BY Function

The MIN_BY function in BigQuery enables analysts and data scientists to identify and extract a value from one column corresponding to the minimum value in another. This function is valuable when you need to pinpoint critical data points, such as finding the product with the lowest cost or the employee with the least hours worked in a given period.

Utilizing MIN_BY can streamline data analysis, making it easier to identify areas for cost reduction, efficiency improvement, or other optimizations based on minimum metrics.


MIN_BY(value_expression, order_expression)

  • value_expression: The column or expression from which you want to retrieve the corresponding value.
  • order_expression: The column or expression used to determine the minimum value.


Consider a scenario where we have a dataset project_hours with columns employee_name, hours_worked, and project_id. To find the name of the employee who worked the least hours on a specific project, say project ID 1234, we can use the following query.

SELECT MIN_BY(employee_name, hours_worked) 
AS least_hours_employee
FROM project_hours
WHERE project_id = '1234';


  • SELECT MIN_BY(employee_name, hours_worked) AS least_hours_employee: This part of the query finds the employee with the minimum hours_worked for a specified condition, and labels the result as least_hours_employee.
  • FROM project_hours: Specifies the table named project_hours as the source of the data for the query. This table presumably contains records of the hours worked by employees on various projects.
  • WHERE project_id = '1234': Filters the records to only include those where the project_id column matches '1234', ensuring the query only considers hours worked on the project with ID 1234.

The query identifies the employee requiring the least hours on the project, providing valuable insights for project management and workload distribution.
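MIN_BY is the mirror image, Python's min() with a key; the project rows are hypothetical:

```python
# Hypothetical rows from project_hours for project_id 1234.
project_rows = [
    {"employee_name": "Kim", "hours_worked": 32},
    {"employee_name": "Ola", "hours_worked": 18},
]

# MIN_BY(employee_name, hours_worked): take the name from the row
# with the smallest hours_worked.
least_hours_employee = min(project_rows, key=lambda r: r["hours_worked"])["employee_name"]
```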



Differentially Private Aggregate Functions in BigQuery

Ensuring privacy while analyzing data is paramount. BigQuery's differentially private aggregate functions allow analysts to perform statistical analysis while preserving individual data privacy.


AVG (DIFFERENTIAL_PRIVACY) Function

The AVG (DIFFERENTIAL_PRIVACY) function in BigQuery is designed to compute the average of a set of values while ensuring the privacy of individual data entries. This approach allows analysts to generate insights from sensitive data without compromising the confidentiality of the subjects involved. By incorporating differential privacy, this function adds random noise to the average calculation, making it statistically impossible to reverse-engineer the data to identify individual contributions.



SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, k_threshold = k_value])
  AVG(expression [, contribution_bounds_per_row => (lower, upper)])

  • expression: The column or expression for which the average is calculated.
  • OPTIONS: Specifies the differential privacy options:
  1. epsilon: A non-negative value that determines the privacy guarantee level, with lower values providing stronger privacy.
  2. delta (optional): A value close to zero that provides a probabilistic privacy guarantee.
  3. k_threshold (optional): A positive integer that sets a minimum number of rows required for the calculation.


In the given scenario, we have a table employee_salaries with columns for department and salary. To compute the average salary per department while preserving the privacy of individual salary data, we utilize a SQL query with differential privacy controls. This approach ensures that the salary details of individuals are protected while calculating the aggregate information.

SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (
    epsilon = 1000,
    delta = 0.01,
    max_groups_contributed = 1,
    privacy_unit_column = salary)
  department,
  AVG(salary) average_quantity
FROM `owox-analytics.dataset.employee_salaries`
GROUP BY department;


  • SELECT ... department, AVG(salary) average_quantity: This part of the query selects the department and computes the average salary, referred to here as average_quantity, for each department.
  • FROM `owox-analytics.dataset.employee_salaries`: Specifies the dataset and table from which to retrieve the data.
  • GROUP BY department: Groups the results by the department to calculate the average salary for each department.
  • WITH DIFFERENTIAL_PRIVACY OPTIONS (...): Applies differential privacy to the aggregation, ensuring individual salary data remains confidential.
    • epsilon=1000: The epsilon parameter configures the level of privacy protection; a higher epsilon (1000 in this case) provides less privacy but more accurate results.
    • delta=0.01: Sets the delta parameter, defining the probability of exceeding the privacy budget; here, it's set to 1%.
    • max_groups_contributed=1: Restricts the number of groups each individual's data can contribute to, further protecting privacy.
    • privacy_unit_column=salary: Specifies that the salary column is used to determine the privacy units, ensuring each salary's privacy is maintained.

Each row in the output represents a different department, alongside the calculated average salary for that department, ensuring that the salary information is aggregated and privacy-protected.
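The core idea behind these functions, adding calibrated random noise to the true aggregate, can be sketched with the Laplace mechanism. This is an illustration only: dp_average, its parameters, and the final clamping step are hypothetical and do not reproduce BigQuery's exact algorithm.

```python
import math
import random

def dp_average(values, epsilon, lower, upper):
    """Toy Laplace-mechanism average (illustration only, not BigQuery's algorithm)."""
    # Clamp each contribution into the assumed bounds, as contribution bounding does.
    clamped = [min(max(v, lower), upper) for v in values]
    true_avg = sum(clamped) / len(clamped)
    # Sensitivity of a bounded mean is (upper - lower) / n; Laplace scale = sensitivity / epsilon.
    scale = (upper - lower) / (len(clamped) * epsilon)
    # Sample Laplace noise via the inverse CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    # Clamp the noisy answer back into the plausible range for readability.
    return min(max(true_avg + noise, lower), upper)

random.seed(42)
noisy = dp_average([52000, 61000, 70000], epsilon=1.0, lower=0, upper=100000)
```

Smaller epsilon values widen the noise scale and so give stronger privacy at the cost of accuracy, which is why the example query's epsilon=1000 yields near-exact averages.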


COUNT (DIFFERENTIAL_PRIVACY) Function

The COUNT (DIFFERENTIAL_PRIVACY) function in BigQuery is an advanced tool that enables data analysts to count occurrences within a dataset while adhering to differential privacy principles. This ensures that the count does not compromise the privacy of individual data entries by introducing a layer of privacy protection that masks the exact numbers.

It is useful in sensitive domains where privacy is crucial, such as patient records in healthcare, personal information in surveys, or user behavior data in tech companies.



SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, k_threshold = k_value])
  COUNT(expression [, contribution_bounds_per_row => (lower, upper)])

  • expression: The column or expression to count.
  • OPTIONS: A STRUCT containing the differential privacy parameters which may include:
    • epsilon: A required parameter representing the privacy budget. Smaller values provide stronger privacy.
    • delta (optional): Represents the probability of differential privacy not being achieved, typically a very small number close to zero.
    • k_threshold (optional): The minimum number of occurrences of a value for it to be included in the count. Values occurring fewer than k_threshold times are grouped into a separate bucket.


Consider a scenario where you have a dataset user_interactions with columns interaction_type and user_id. To count the number of unique interactions per type with differential privacy applied, ensuring that the count does not reveal individual user activity, you could use the following query.

SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS(epsilon=10, delta=.01, max_groups_contributed=1,
    privacy_unit_column=user_id) interaction_type,
  COUNT(*, contribution_bounds_per_group=>(0, 100)) times_requested
FROM `owox-analytics.dataset.user_interactions`
GROUP BY interaction_type;


  • SELECT WITH DIFFERENTIAL_PRIVACY: Initiates a selection that applies differential privacy to the query results, ensuring individual data points cannot be re-identified.
  • OPTIONS(epsilon=10, delta=.01, max_groups_contributed=1, privacy_unit_column=user_id): Configures the differential privacy settings, including privacy budget (epsilon), acceptable privacy loss (delta), the maximum number of groups a single user can contribute to (max_groups_contributed), and the column identifying individual users (privacy_unit_column).
  • interaction_type: The field representing the type of user interaction, grouped by in the query results.
  • COUNT(*, contribution_bounds_per_group=>(0, 100)): Counts the number of rows for each interaction_type, limiting the contribution of each user per group to the range 0 to 100.
  • FROM `owox-analytics.dataset.user_interactions`: Specifies the dataset and table from which to retrieve the data.
  • GROUP BY interaction_type: Groups the result set by the interaction_type field to count the occurrences of each type.

This query allows for analyzing user engagement patterns across different types of interactions while safeguarding user privacy.


The PERCENTILE_CONT (DIFFERENTIAL_PRIVACY) function in BigQuery allows analysts to calculate continuous percentile values within their datasets while integrating differential privacy. This ensures that the calculation of percentiles does not compromise individual data privacy.

It's particularly beneficial in scenarios requiring the analysis of sensitive data distributions, such as income levels, patient health metrics, or exam scores, where understanding the distribution without revealing specific data points is crucial.



SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, k_threshold = k_value]),
  PERCENTILE_CONT(value_expression, percentile, contribution_bounds_per_row)

  • value_expression: This is the column or expression for which the percentile is calculated. It represents the data points from which the percentile value will be derived.
  • percentile: A decimal value between 0 and 1 that specifies the desired percentile to compute.
  • contribution_bounds_per_row: This argument specifies contribution bounds. It clamps the contribution for each row separately before conducting intermediate grouping based on the privacy unit column.
  • OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, k_threshold = k_value]): This clause sets the differential privacy parameters:
    • epsilon: A non-negative value that defines the degree of privacy. A smaller epsilon value provides stronger privacy.
    • delta (optional): A value between 0 and 1, exclusive, representing the probability that the privacy guarantee does not hold.
    • k_threshold (optional): A non-negative value that defines a threshold where any output below this threshold will be considered too sensitive and therefore suppressed.


Imagine a dataset named employee_salaries, with columns for department and salary. To calculate the median salary (50th percentile) within each department while ensuring differential privacy, you could use the following query.

SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS(epsilon=10, delta=.01, max_groups_contributed=1, privacy_unit_column=salary) department,
  PERCENTILE_CONT(salary, 0.5, contribution_bounds_per_row => (0,100)) percentile_requested
FROM `owox-analytics.dataset.employee_salaries`
GROUP BY department;


  • WITH DIFFERENTIAL_PRIVACY: This clause specifies that the query will apply differential privacy mechanisms to protect the individual privacy of the salary data.
  • OPTIONS(epsilon=10, delta=.01, max_groups_contributed=1, privacy_unit_column=salary): Sets the parameters for differential privacy. epsilon and delta control privacy loss, max_groups_contributed limits the number of groups an individual can contribute to, and privacy_unit_column indicates the column containing the private data.
  • PERCENTILE_CONT(salary, 0.5, contribution_bounds_per_row => (0,100)): Calculates the continuous percentile (median in this case) of the salary, with contribution bounds set per row to limit the influence of each row on the result.
  • GROUP BY department: Groups the results by the department column, so the median salary is calculated for each department separately.

This query outputs each department alongside the differentially private median salary for that department, ensuring privacy while providing useful statistical information.


The SUM (DIFFERENTIAL_PRIVACY) function in BigQuery is a robust tool designed to calculate the sum of a set of values while ensuring the privacy of individual data points. This function is invaluable for analyzing sensitive numerical data - such as financial records, health data, or user activity metrics - where it's essential to maintain individual confidentiality.



SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, k_threshold = k_value]),
  SUM(expression, contribution_bounds_per_row)

  • expression: The column or expression to be summed.
  • OPTIONS clause: This is used to specify the differential privacy parameters:
    • epsilon: A required parameter that defines the privacy budget. Smaller values provide stronger privacy.
    • delta (optional): Defines the probability that the privacy guarantee might not hold. A smaller delta provides stronger privacy guarantees.
    • k_threshold (optional): Specifies a threshold that the result must meet or exceed in order to be returned; otherwise, the result is suppressed.


This SQL query utilizes differential privacy techniques to aggregate donation amounts for different campaigns from the owox-analytics.dataset.donations table. Differential privacy is ensured through specified parameters, such as epsilon and delta, to provide a balance between data utility and privacy. The query calculates the sum of donation amounts for each campaign while adhering to privacy constraints.

SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS(epsilon=100, delta=.01, max_groups_contributed=1,
    privacy_unit_column=name) campaign_id,
  SUM(donation_amount, contribution_bounds_per_group => (0, 100)) donation_amount
FROM `owox-analytics.dataset.donations`
GROUP BY campaign_id;


  • WITH DIFFERENTIAL_PRIVACY: Indicates that the query will execute under differential privacy constraints to protect individual data points.
  • OPTIONS(epsilon=100, delta=.01, max_groups_contributed=1, privacy_unit_column=name): Configures the differential privacy settings, where epsilon and delta control privacy loss, max_groups_contributed limits the number of groups an individual can contribute to, and privacy_unit_column specifies the column used to define individual contributors.
  • campaign_id: Selected as the grouping column, indicating that results will be aggregated by campaign.
  • SUM(donation_amount, contribution_bounds_per_group => (0, 100)): Calculates the total donation amount for each campaign, with bounds set for individual contributions to prevent outlier influence.
  • FROM owox-analytics.dataset.donations: Specifies the dataset and table from which the data is retrieved.
  • GROUP BY campaign_id: Directs the aggregation of donation amounts by each unique campaign identifier.

This query outputs each campaign ID alongside the total donations for that campaign, adjusted for differential privacy to ensure the anonymity of donors while providing aggregated financial insights.

Clamp explicitly in the DIFFERENTIAL_PRIVACY Clause

The technique of explicitly clamping in the DIFFERENTIAL_PRIVACY clause in BigQuery is a method to specify bounds for the values of an expression before applying differential privacy. This is crucial for accurately applying differential privacy to data while ensuring the results remain within a realistic and meaningful range.

By setting explicit lower and upper bounds, analysts can prevent the distortion of results due to the addition of differential privacy noise, especially in cases where the data has known physical, logical, or business constraints.



SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, privacy_unit_column = column_name]),
  FUNCTION(expression, contribution_bounds_per_group => (lower_bound, upper_bound))

  • OPTIONS (...): This is where you configure the differential privacy parameters:
    • epsilon = epsilon_value: Specifies the privacy budget (epsilon).
    • delta = delta_value (optional): Provides an additional privacy parameter (delta), when necessary.
    • k_threshold = k_value (optional): Establishes a threshold that the query result must meet or exceed to be included in the output.
  • FUNCTION(expression, contribution_bounds_per_group => (lower_bound, upper_bound)): The aggregate function you're applying differential privacy to, like SUM or AVG, with explicit clamping bounds:
    • lower_bound: Sets the explicit lower bound for the values in the expression.
    • upper_bound: Sets the explicit upper bound for the values in the expression.


Imagine we have a dataset patient_heart_rates with columns patient_id and heart_rate. To calculate the average heart rate across patients while applying differential privacy and ensuring that the heart rate falls within a medically plausible range (40 to 180 beats per minute), you could use the following query.

SELECT WITH
  DIFFERENTIAL_PRIVACY OPTIONS ( epsilon = 10, delta = .01,
    privacy_unit_column=patient_id )
  AVG(heart_rate, contribution_bounds_per_group=>(40, 180)) AS average_quantity
FROM `owox-analytics.dataset.patient_heart_rates`;


  • SELECT: Initiates the query to retrieve data.
  • WITH DIFFERENTIAL_PRIVACY: Applies differential privacy to the query, a technique to protect individual privacy while allowing for statistical analysis.
  • OPTIONS: Specifies the configuration for differential privacy.
    • epsilon = 10: Sets the privacy loss parameter, controlling the trade-off between privacy and accuracy.
    • delta = 0.01: Defines the probability of differential privacy providing less-than-optimal protection.
    • privacy_unit_column=patient_id: Identifies the column that contains the unique identifier for privacy units, in this case, the patient IDs.
  • AVG(heart_rate, contribution_bounds_per_group=>(40, 180)): Calculates the average heart rate, clamping each group's contribution to the range 40 to 180 before aggregation.
  • AS average_quantity: Names the result of the average calculation as average_quantity.
  • FROM: Specifies the dataset to query.
  • owox-analytics.dataset.patient_heart_rates: Identifies the specific dataset containing the heart rate data of patients to be analyzed.

This query outputs the average heart rate across all patients in the patient_heart_rates dataset, adjusted for differential privacy and constrained to medically plausible values.

Clamp implicitly in the DIFFERENTIAL_PRIVACY Clause

Clamping implicitly in the DIFFERENTIAL_PRIVACY clause is a method used in BigQuery to automatically restrict the range of data values considered in a query, based on differential privacy. This approach simplifies applying differential privacy by letting BigQuery determine the appropriate bounds for the data.

Implicit clamping benefits analysts who may not know the exact range of their data or prefer a more automated approach to applying differential privacy, thus safeguarding data privacy without requiring manual specification of bounds.


The general syntax for applying differential privacy with implicit clamping does not explicitly mention clamping parameters; instead, it relies on BigQuery to automatically determine these bounds:


SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = epsilon_value [, delta = delta_value] [, privacy_unit_column = column_name] [, kappa = kappa_value]),
  FUNCTION(expression)


    • FUNCTION(expression): Represents the differential privacy aggregate function (like SUM, AVG, etc.) applied to the data. The expression is the column or calculation on which the aggregate function operates.
    • OPTIONS (epsilon = epsilon_value, [delta = delta_value,] [kappa = kappa_value]): Specifies the differential privacy parameters within the function:
      • epsilon (required): A non-negative value providing the privacy budget's strength, with smaller values indicating stronger privacy.
      • delta (optional): A value, usually close to zero, representing the probability of exceeding the privacy budget defined by epsilon.
      • kappa (optional): A positive integer that limits the number of groups to which each privacy unit can contribute; BigQuery uses this limit when deriving the implicit clamping bounds from the data's distribution.


    Assuming you have a dataset health_records with columns patient_id and blood_pressure.

    To calculate the average blood pressure across all patients while applying differential privacy with implicit clamping, you might use:

    SELECT WITH DIFFERENTIAL_PRIVACY
      OPTIONS ( epsilon = 1.0, delta = .01, privacy_unit_column=patient_id )
      AVG(blood_pressure) AS average_quantity
    FROM `owox-analytics.dataset.health_records`;


    • WITH DIFFERENTIAL_PRIVACY: Specifies the use of differential privacy to protect individual data points in the aggregation.
    • OPTIONS (epsilon = 1.0, delta = .01, privacy_unit_column=patient_id): Configures the differential privacy settings, where epsilon and delta are privacy parameters, and privacy_unit_column identifies the column used to define privacy units (in this case, patient_id).
    • AVG(blood_pressure) AS average_quantity: Calculates the average blood pressure from the dataset and labels the result as average_quantity.
    • FROM `owox-analytics.dataset.health_records`: Specifies the dataset and table from which the data is retrieved.

    This query outputs the average blood pressure across all patients in the health_records dataset, adjusted for differential privacy.



    Best Practices for Using Aggregate Functions for Data Processing

    When it comes to utilizing aggregate functions in BigQuery for data processing, adopting best practices can significantly enhance the efficiency, accuracy, and overall performance of your data analysis tasks.

    Here’s a concise guide to ensure you make the most out of aggregate functions in BigQuery:

    Understanding Each Function's Purpose

    Familiarize yourself with the primary aggregate functions like COUNT, SUM, AVG, MIN, and MAX to effectively analyze data. Each function addresses specific analytical needs - COUNT for tallying records, SUM for total values, AVG for calculating average values, MIN for identifying the smallest, and MAX for the largest value within a dataset.
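To illustrate, a single query can apply all five functions at once. The following sketch assumes a hypothetical orders table with order_id and amount columns:

```sql
-- Summarize an orders table with the five core aggregates
-- (table and column names are illustrative)
SELECT
  COUNT(order_id) AS order_count,      -- tally of non-NULL order IDs
  SUM(amount)     AS total_revenue,    -- total of all order amounts
  AVG(amount)     AS avg_order_value,  -- mean order amount
  MIN(amount)     AS smallest_order,   -- lowest order amount
  MAX(amount)     AS largest_order     -- highest order amount
FROM `owox-analytics.dataset.orders`;
```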

    Strategies for Handling NULL Values

    Understanding how NULL values affect aggregate functions is crucial. Most aggregate functions exclude NULLs from their calculations, which could skew your analysis if not accounted for.

    For instance, when calculating an average (AVG), records with NULL values won't be considered, potentially leading to higher or lower averages than expected if NULLs represent meaningful data points.
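One way to make this effect visible is to compare AVG over the raw column with AVG over a NULL-substituted version; the exam_results table and score column below are hypothetical:

```sql
-- AVG skips NULL scores; IFNULL(score, 0) counts them as zero,
-- so the two averages diverge whenever NULLs are present
SELECT
  AVG(score)            AS avg_ignoring_nulls,
  AVG(IFNULL(score, 0)) AS avg_nulls_as_zero
FROM `owox-analytics.dataset.exam_results`;
```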

    Effective GROUP BY Applications

    Utilize GROUP BY clauses with aggregate functions to segment data analysis more finely. This combination enables data aggregation based on specific column values, facilitating detailed insights. For example, using GROUP BY with SUM can reveal total sales per region, offering a clear view of performance across different geographic areas.
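The sales-per-region example might look like the following sketch, assuming a hypothetical sales table with region and sales_amount columns:

```sql
-- Total sales per region (illustrative table and columns)
SELECT
  region,
  SUM(sales_amount) AS total_sales
FROM `owox-analytics.dataset.sales`
GROUP BY region;
```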

    Replacing Application-Level Operations with Queries

    Leveraging BigQuery's processing capabilities to perform data aggregations can significantly enhance query performance over handling these operations at the application level. Structuring your data and queries in alignment with BigQuery's managed services optimizes performance, ensuring efficient data retrieval and analysis.

    Avoiding Nested Queries

    Nested queries can lead to increased complexity and resource consumption. Opt for JOINs or Common Table Expressions (CTEs) to simplify your SQL statements, enhance readability, and improve performance. For example, replacing a nested query with a CTE can make your query more efficient by processing the data in a more streamlined way.
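As a sketch of this rewrite, assuming the same hypothetical sales table, the aggregation is named once as a CTE and then filtered, instead of being buried in a subquery:

```sql
-- CTE form: name the aggregation once, then filter the result
WITH regional_totals AS (
  SELECT
    region,
    SUM(sales_amount) AS total_sales
  FROM `owox-analytics.dataset.sales`
  GROUP BY region
)
SELECT region, total_sales
FROM regional_totals
WHERE total_sales > 10000;
```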

    If you are looking into advancing your skills in Google BigQuery, it's essential to understand its more complex functions and capabilities.

    • Date Functions: Master date functions in BigQuery to manipulate and analyze time-series data more effectively.

    • Conversion Functions: Learn conversion functions to efficiently transform data types and formats within your BigQuery datasets.

    • Array Functions: Explore array functions to handle and process arrays within your queries for more dynamic data structuring.

    • String Functions: Utilize string functions to manipulate and analyze textual data efficiently in your BigQuery queries.

    • DML (Data Manipulation Language): Gain proficiency in DML to perform updates, deletions, and insertions that modify data stored in BigQuery efficiently.

    Troubleshooting Common Errors with Aggregate Functions in BigQuery

    Even the most experienced analysts may encounter errors when working with aggregate functions. Common challenges include dealing with NULL values in aggregate calculations, managing data types correctly, and understanding the intricacies of GROUP BY clauses.

    When errors arise, checking for syntax correctness, ensuring data type compatibility, and verifying GROUP BY operations' logic are critical first steps. Identifying and addressing these issues promptly can save significant time and frustration in data analysis projects.

    Note: The specific error messages you might encounter in BigQuery for the scenarios described can vary based on the context of the query execution and the nature of the error.

    NULL Element Error in ARRAY Function

    When using the ARRAY function in BigQuery, NULL elements can lead to unexpected results or errors, especially if the intention is to have a continuous array without gaps.

    ✅ Solution: One strategy to handle this is by filtering out NULLs using a subquery or WHERE clause before aggregation.

    For example, to create an array of employee names excluding NULLs, you might use:
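One possible form uses ARRAY_AGG with the IGNORE NULLS modifier (the table and column names here are illustrative):

```sql
-- Aggregate employee names into an array, skipping NULL entries
SELECT ARRAY_AGG(employee_name IGNORE NULLS) AS employee_names
FROM `owox-analytics.dataset.employees`;
```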


    This ensures your array contains only valid names, avoiding the pitfalls of NULL values disrupting your data set.

    Note: When attempting to aggregate data into an array where NULL values are not handled explicitly, you might not receive a direct error message from BigQuery indicating a "NULL element error" because BigQuery allows NULL values in arrays.

    However, if your logic expects arrays without NULLs and this isn't handled, it could lead to logical errors in your analysis rather than explicit error messages from BigQuery.

    Handling NULL Elements in Array Concatenation Function

    Combining arrays with NULL elements using ARRAY_CONCAT can also present challenges.

    ✅ Solution: To concatenate arrays while managing NULLs effectively, consider using the IFNULL function to substitute any NULL array with an empty array.

    For instance, to safely concatenate two arrays without losing elements due to NULLs, you could use:

    SELECT ARRAY_CONCAT(IFNULL(array1, []), IFNULL(array2, [])) FROM dataset;

    This approach guarantees that the concatenation process is seamless, preventing NULL values from interrupting the array formation and ensuring data integrity.

    Note: For array concatenation involving NULLs, BigQuery does not generate a specific error message. Instead, the operation might result in an array that includes NULLs if not managed explicitly. The emphasis here would be ensuring your data handling logic accommodates or filters out NULLs as needed, rather than expecting an error message from BigQuery.

    Numeric Overflow Error in SUM Function of Non-NULL Values

    For numeric overflow errors when summing large numbers, BigQuery could return an error message similar to:

    ⚠️ Error: "Arithmetic overflow error"

    Note: This message indicates that the operation exceeded the numeric limits of the data type used in the calculation. The exact wording might vary, but the key is that the message will point towards an overflow issue, prompting you to consider data type limits in your aggregations.

    ✅ Solution:

    Consider casting your numbers to a larger data type, such as NUMERIC or BIGNUMERIC, before summation to mitigate overflow in your aggregate functions.

    For example, to avoid overflow when summing large sales figures, you might use:

    SELECT
      SUM(CAST(sales_amount AS NUMERIC)) AS total_sales_amount
    FROM `owox-analytics.dataset.sales_data`;

    This query calculates the total of the sales_amount values from the sales_data table and casts the outcome as NUMERIC type to accommodate large figures without compromising on precision.



    Overcoming Common Errors with Differentially Private Aggregate Functions in BigQuery

    Differentially private aggregate functions add a layer of privacy to data analysis but also introduce unique challenges, such as syntax errors and value clamping issues. To navigate these, analysts must pay careful attention to the specific requirements of differential privacy options, like epsilon values and bounds for clamping.

    Understanding the nuances of these options and the impact of privacy parameters on the accuracy of results is crucial. Effective troubleshooting in this context requires a balance between preserving privacy and maintaining the integrity and usefulness of the data analysis.

    Syntax Errors

    Encountering syntax errors in queries that involve differentially private aggregate functions can be a common hurdle.

    ✅ Solution:

    The key to resolving these errors is meticulously checking your query to ensure it aligns with the syntax requirements specific to differentially private operations. This entails correctly implementing the DIFFERENTIAL_PRIVACY clause and utilizing the appropriate functions - such as AVG, COUNT, PERCENTILE_CONT, and SUM with the required parameters.

    When applying differential privacy to an average calculation, ensure your syntax matches the expected format (example below) to avoid syntax-related issues:

    SELECT WITH DIFFERENTIAL_PRIVACY
      OPTIONS (epsilon = epsilon_value [, delta = delta_value]),
      AVG(expression, contribution_bounds_per_group => (lower_bound, upper_bound))

    Value Clamping Errors

    Value clamping errors arise when there is confusion about how to appropriately limit the impact of outliers in queries.

    ✅ Solution:

    Differentiating between explicit and implicit clamping is vital for resolution. Explicit clamping requires setting contribution_bounds_per_group or contribution_bounds_per_row, dictating the data's lower and upper bounds. BigQuery resorts to implicit clamping without these specifications, automatically determining these bounds.

    For clarity and control over your data's range, explicitly define these bounds whenever possible, using syntax like the following within your differentially private aggregate function.

    FUNCTION(column_name, contribution_bounds_per_row => (<min_value>, <max_value>))

    Implicit Bounding Flaws in Small Datasets

    Small datasets may yield less accurate results under implicit bounding due to the method's reliance on data volume to establish bounds.

    ✅ Solution:

    Opt for explicit clamping in cases with limited data, setting bounds based on external knowledge or data characteristics. This can be crucial for maintaining result accuracy in analyses involving smaller datasets. An explicit approach allows for tailored bounds that reflect the dataset's specifics, enhancing the reliability of your differentially private analyses.

    Clamping Decision Challenge in Data Analysis

    In data analysis, managing outliers or extreme values is crucial for accuracy. However, the choice between maintaining strict, consistent boundaries (explicit clamping) and allowing flexibility based on the dataset's characteristics (implicit clamping) presents a dilemma.

    This decision impacts the analysis's comparability, accuracy, and relevance, making it challenging to decide which method aligns best with specific analysis goals.

    ✅ Solution:

    To address this challenge, consider the nature of your dataset and the objectives of your analysis. If consistency and comparability across datasets are paramount, opt for explicit clamping with predefined bounds. This ensures uniformity and can enhance result accuracy. If your analysis benefits from adaptability, and you're dealing with diverse or dynamic datasets, implicit clamping is preferable.

    Handling Type Restrictions

    Differentially private aggregate functions do not directly support NUMERIC and BIGNUMERIC types, presenting a challenge for their incorporation into privacy-focused analyses.

    ✅ Solution:

    To circumvent this, cast these types to FLOAT64 before utilizing them in your aggregate functions. This ensures compatibility with differential privacy requirements and allows for successfully applying functions like AVG, COUNT, PERCENTILE_CONT, and SUM.

    An example of this casting would be the following:

    CAST(column_name AS FLOAT64)
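Put together, a differentially private sum over a NUMERIC column might look like the following sketch; the table, columns, and parameter values are illustrative assumptions:

```sql
-- Cast the NUMERIC column to FLOAT64 so the DP SUM accepts it
SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS (epsilon = 10, delta = .01, privacy_unit_column = customer_id)
  store_id,
  SUM(CAST(purchase_total AS FLOAT64),
      contribution_bounds_per_group => (0, 500)) AS total_spend
FROM `owox-analytics.dataset.purchases`
GROUP BY store_id;
```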

    Diving into BigQuery's aggregate functions reveals their key role in enhancing data analysis. By summarizing data and identifying trends, these functions pave the way for deeper insights and more informed decisions.

    Imagine a solution that brings BigQuery's aggregate functions right into Google Sheets, easing the complexity of data analysis. This integration would meld advanced data analysis with the simplicity of spreadsheets, unlocking insights more quickly.

    Build Powerful Reports with OWOX BI BigQuery Reports Extension

    The OWOX BI BigQuery Reports Extension is a dynamic tool designed to enhance the reporting capabilities of users working extensively with Google BigQuery. This extension allows for the seamless integration of BigQuery data into customizable reports and dashboards, providing a comprehensive view of analytics and metrics directly within your favorite spreadsheet application.



    With its intuitive interface and powerful data processing capabilities, the OWOX BI BigQuery Reports Add-on simplifies data analysis, making it easier for data analysts, marketers, and business intelligence professionals to derive actionable insights from their data. By leveraging this tool, users can automate data import, perform advanced data manipulations, and visualize results through interactive reports and dashboards, without leaving their spreadsheet environment.


    • What Is an Aggregate Function in BigQuery?

      An aggregate function in BigQuery is a type of function used to perform a calculation on a set of values, returning a single value that summarizes the input data. These functions are crucial for data analysis tasks, allowing users to perform operations such as summing up numbers, calculating averages, finding minimum or maximum values, and more, across rows of a dataset. Aggregate functions help in deriving statistical insights and summaries from large datasets efficiently.
    • How Do I Use ARRAY_AGG in BigQuery, and What Is Its Purpose?

      ARRAY_AGG in BigQuery aggregates values from multiple rows into an array. It combines values from a specified column in multiple rows into a single array, making it easier to process and analyze groups of data collectively. To use ARRAY_AGG, specify the column you wish to aggregate as an argument to the function within your SELECT query. Its purpose is beneficial in scenarios where you must collect multiple values into a single field for each group defined in your query, such as collecting all items purchased by a single customer into one array.
    • What Is the Difference Between COUNT(), COUNTIF(), and COUNT DISTINCT in BigQuery?

      • COUNT(): Counts the number of rows in a specified column, ignoring NULL values. It provides the total number of non-NULL records.
      • COUNTIF(): Performs a conditional count. It counts the number of rows that match a specified condition, allowing for more nuanced data analysis.
      • COUNT DISTINCT: Counts the number of unique non-NULL values in a column. This is useful for finding the number of distinct entries in a dataset, eliminating duplicates from the count.
    • How Does STRING_AGG Work in BigQuery, and When Should I Use It?

      STRING_AGG in BigQuery concatenates values from multiple rows into a single string. You specify the column to aggregate and an optional separator between each value. STRING_AGG is useful when you want to combine text values from multiple rows related to a specific key or group into a single, easy-to-read string. It's especially handy for creating summarized lists or descriptions that combine multiple records.
    • How Does GROUP BY Interact With Aggregate Functions in BigQuery?

      GROUP BY in BigQuery is used in conjunction with aggregate functions to divide the dataset into groups, each consisting of rows that have the same values in specified columns. Aggregate functions are then applied to each group independently, rather than the whole dataset. This allows for aggregate calculations, such as sums, averages, or counts, to be performed on subsets of the data based on the grouping criteria, providing insights into each group's characteristics.
    • Is There a Way to Handle Null Values in Aggregate Functions in BigQuery?

      Yes, BigQuery provides mechanisms to handle NULL values in aggregate functions. Most aggregate functions automatically ignore NULL values in their calculations. However, if you need to include NULL values in your analysis or replace them with another value, you can use functions like IFNULL, COALESCE, or NULLIF to manipulate NULL values before applying the aggregate function. This allows for flexible handling of NULLs according to the specific requirements of your analysis.
