What Is the DATE_FROM_UNIX_DATE Function in SQL?

DATE_FROM_UNIX_DATE is a BigQuery SQL function that converts an integer Unix date (days since 1970-01-01) into a standard DATE value. It’s handy when your tables store dates as Unix integers and you need human-readable calendar dates for filters, joins, and reporting in BI dashboards.

What is DATE_FROM_UNIX_DATE in SQL?

When date fields arrive as plain integers, analysis gets messy fast. DATE_FROM_UNIX_DATE solves that by converting those integers into standard calendar dates that analysts can actually work with.

Basic idea: converting Unix integers to dates

In BigQuery, a Unix date is not a timestamp. It is the number of days since 1970-01-01. DATE_FROM_UNIX_DATE takes that day count and returns a DATE value such as 2024-05-01.

This is especially useful when source systems export compact numeric date fields to save space or follow legacy schemas. Instead of interpreting those numbers manually, you can convert them directly in SQL and keep your logic clear.

Syntax and return type

The syntax is simple: DATE_FROM_UNIX_DATE(integer_expression). The input must represent the number of days from the Unix epoch, and the output is a BigQuery DATE type.

That return type matters because DATE fields work cleanly in WHERE clauses, GROUP BY logic, and joins with other date-based tables. Once converted, the field behaves like any other standard date column in BigQuery.
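As a minimal sketch of that behavior, the result of the function can feed directly into other date functions. The literal 19723 used below is the Unix day count for 2024-01-01; the aliases are illustrative:

```sql
-- DATE_FROM_UNIX_DATE returns a DATE, so standard date functions apply directly.
SELECT
  DATE_FROM_UNIX_DATE(19723) AS d,                                     -- 2024-01-01
  EXTRACT(YEAR FROM DATE_FROM_UNIX_DATE(19723)) AS yr,                 -- 2024
  DATE_ADD(DATE_FROM_UNIX_DATE(19723), INTERVAL 7 DAY) AS next_week;  -- 2024-01-08
```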

How DATE_FROM_UNIX_DATE Works in BigQuery

This function is straightforward, but only if you understand what kind of Unix value you are dealing with. That is where many date bugs begin.

Input format: days since Unix epoch

The input is an integer counting full days from 1970-01-01. So 0 becomes 1970-01-01, 1 becomes 1970-01-02, and so on. Negative values are also possible for dates before the epoch.

The key point is that DATE_FROM_UNIX_DATE expects day-based storage. If your table contains a field like order_date_unix = 19800, BigQuery interprets that as 19,800 days after 1970-01-01, not seconds or milliseconds.
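A quick sketch makes the day-count behavior concrete, including negative values and the 19800 example from above:

```sql
-- Day 0 is the epoch itself; negative day counts fall before it.
SELECT
  DATE_FROM_UNIX_DATE(0)     AS epoch_day,   -- 1970-01-01
  DATE_FROM_UNIX_DATE(1)     AS day_after,   -- 1970-01-02
  DATE_FROM_UNIX_DATE(-1)    AS day_before,  -- 1969-12-31
  DATE_FROM_UNIX_DATE(19800) AS order_date;  -- 2024-03-18
```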

Difference from Unix timestamps in seconds or milliseconds

This is where analysts can get tripped up. Unix timestamps usually represent seconds from 1970-01-01 00:00:00 UTC, while some systems store milliseconds. DATE_FROM_UNIX_DATE does not handle either of those formats.

If your value is in seconds, you would typically use TIMESTAMP_SECONDS. If it is in milliseconds, TIMESTAMP_MILLIS is the better fit. DATE_FROM_UNIX_DATE is only for day-level integers. No time component. No timezone handling. Just the date.
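To make the contrast concrete, here is the same calendar day expressed in all three units. The literals are illustrative: 19723 is the day count for 2024-01-01, and 1704067200 is midnight UTC on that date in seconds:

```sql
-- The same calendar day expressed in three different Unix units.
SELECT
  DATE_FROM_UNIX_DATE(19723)      AS from_days,     -- 2024-01-01 (a DATE)
  TIMESTAMP_SECONDS(1704067200)   AS from_seconds,  -- 2024-01-01 00:00:00 UTC (a TIMESTAMP)
  TIMESTAMP_MILLIS(1704067200000) AS from_millis;   -- 2024-01-01 00:00:00 UTC (a TIMESTAMP)
```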

Common pitfalls and off‑by‑one errors

The biggest mistake is applying DATE_FROM_UNIX_DATE to the wrong unit. If you pass a seconds-based Unix timestamp into it, BigQuery treats that huge number as a day count. For any recent timestamp the result falls far outside the supported DATE range (0001-01-01 to 9999-12-31), so the query fails with an out-of-range error instead of returning a plausible-looking date.

Another issue appears when teams mix DATE and TIMESTAMP logic. A timestamp can shift when converted between timezones, while DATE_FROM_UNIX_DATE always returns the same date regardless of timezone. If one source records days in UTC and another derives dates from local-time timestamps, the reporting day can drift by one at the boundaries of the day.

To avoid off-by-one errors, confirm three things before converting:

  • Whether the source field is stored in days, seconds, or milliseconds
  • Whether the business logic needs a DATE or a TIMESTAMP
  • Whether downstream joins and dashboards expect a timezone-adjusted date
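Once the unit is confirmed, the first check can be sketched as a normalization step. The table name is hypothetical, and the magnitude thresholds are a heuristic assumption, not a universal rule: day counts are small (tens of thousands), seconds sit around 10^9, and milliseconds around 10^12 for modern dates:

```sql
-- Hypothetical sketch: normalize a mixed-unit integer field to a DATE.
SELECT
  raw_value,
  CASE
    WHEN raw_value < 100000       THEN DATE_FROM_UNIX_DATE(raw_value)       -- day counts
    WHEN raw_value < 100000000000 THEN DATE(TIMESTAMP_SECONDS(raw_value))   -- seconds
    ELSE                               DATE(TIMESTAMP_MILLIS(raw_value))    -- milliseconds
  END AS normalized_date
FROM analytics.mixed_date_exports;  -- hypothetical table
```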

Practical Examples of DATE_FROM_UNIX_DATE

This function shines in everyday analytics work, especially when raw export tables are not yet shaped for reporting.

Simple conversion example

A direct conversion is the easiest place to start:

SELECT
  DATE_FROM_UNIX_DATE(20000) AS report_date;

This returns 2024-10-04, the date 20,000 days after 1970-01-01. It is a small function, but it can instantly make a cryptic integer field readable.

If you are speeding up query writing by using AI to build SQL queries, this is exactly the kind of detail worth specifying clearly: tell the model whether your source value is stored in days, seconds, or milliseconds.

Using in WHERE filters and GROUP BY

Suppose an events table stores event_day as a Unix date integer. You can convert it inside a filter or aggregation:

SELECT
  DATE_FROM_UNIX_DATE(event_day) AS event_date,
  COUNT(*) AS sessions
FROM analytics.events
WHERE DATE_FROM_UNIX_DATE(event_day) BETWEEN DATE '2024-01-01' AND DATE '2024-01-31'
GROUP BY event_date
ORDER BY event_date;

This makes the data immediately usable for daily reporting. Instead of grouping by raw integers, you group by actual dates that BI tools understand more naturally.

Converting for joins with standard DATE columns

Joins are another common use case. Imagine ad spend data uses a standard DATE column, but web events store the day as a Unix integer. You can align them in the join condition:

SELECT
  e.user_id,
  s.campaign,
  s.spend_date
FROM analytics.events e
JOIN marketing.spend s
  ON DATE_FROM_UNIX_DATE(e.event_day) = s.spend_date;

This is practical, but it is also a signal that your model may need cleanup. Repeated conversions inside joins can add complexity, so many teams standardize the field earlier in the pipeline.
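One way to reduce that repetition, sketched here using the same table names as above, is to convert once in a CTE before joining:

```sql
-- Convert once, up front, so the join condition stays plain.
WITH events_dated AS (
  SELECT
    user_id,
    DATE_FROM_UNIX_DATE(event_day) AS event_date
  FROM analytics.events
)
SELECT
  e.user_id,
  s.campaign,
  s.spend_date
FROM events_dated e
JOIN marketing.spend s
  ON e.event_date = s.spend_date;
```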

DATE_FROM_UNIX_DATE vs Other Date Conversion Functions

BigQuery gives you several tools for working with temporal data. Picking the right one depends entirely on the type of value you start with.

DATE vs DATETIME vs TIMESTAMP

DATE stores only the calendar date. DATETIME stores date and time, but without timezone. TIMESTAMP stores a precise moment in time, typically interpreted in UTC.

DATE_FROM_UNIX_DATE returns DATE, so it is ideal when your analysis is daily and does not require hours, minutes, or timezone logic. That makes it a strong fit for campaign reporting, daily active users, and date-based dimensional joins.

When building models with clear relationships, date type consistency helps as much as structure. The same discipline you apply to key constraints in SQL also matters for date fields: define them consistently, and your joins become far more reliable.

When to use DATE_FROM_UNIX_DATE vs TIMESTAMP_SECONDS / TIMESTAMP_MILLIS

Use DATE_FROM_UNIX_DATE when the source value is an integer representing whole days since 1970-01-01. Use TIMESTAMP_SECONDS when the source is a Unix timestamp in seconds. Use TIMESTAMP_MILLIS when it is in milliseconds.

If your final output needs only the date but the source starts as a timestamp, you would usually convert the timestamp first and then extract the date. In contrast, if the source is already stored as a Unix day count, DATE_FROM_UNIX_DATE is the clean direct route.
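Both routes can be sketched side by side. The column and table names here (created_at_sec, signup_day, crm.users) are hypothetical stand-ins:

```sql
-- Two routes to a reporting date, depending on the source unit.
SELECT
  DATE(TIMESTAMP_SECONDS(created_at_sec), 'UTC') AS created_date,  -- timestamp first, then date
  DATE_FROM_UNIX_DATE(signup_day)                AS signup_date    -- already a day count
FROM crm.users;  -- hypothetical table
```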

DATE_FROM_UNIX_DATE in Reporting and Data Marts

In reporting pipelines, date normalization is not optional. It is one of the basic moves that keeps dashboards trustworthy.

Cleaning raw event data for analytics

Raw event exports often come with inconsistent field formats. One table has DATE, another has STRING, another has integer Unix dates. Converting those fields early makes reporting logic simpler and easier to maintain.

Many teams put this transformation into repeatable SQL steps or SQL stored procedures so every refresh applies the same date rules. That reduces manual fixes and helps avoid dashboard surprises after schema changes.
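A hedged sketch of that repeatable step, reusing the events table from the earlier examples, is a view that bakes the conversion in so every refresh and every report sees the same date rule:

```sql
-- Hypothetical sketch: expose a converted DATE column in a reusable view.
CREATE OR REPLACE VIEW analytics.events_clean AS
SELECT
  DATE_FROM_UNIX_DATE(event_day) AS event_date,
  * EXCEPT (event_day)
FROM analytics.events;
```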

If those raw tables also include sensitive fields, standardization work often happens alongside governance tasks such as data masking in SQL, so analysts can use cleaned date dimensions without exposing restricted values.

Aligning date fields across multiple sources

Marketing and product data rarely arrive in one neat format. Ad platforms, CRM exports, transaction systems, and event logs can all represent dates differently. DATE_FROM_UNIX_DATE helps normalize one of those formats into something consistent.

That consistency becomes critical when you connect fact tables and dimensions through well-defined relationships like primary and foreign keys. Even when the join key is not the date itself, mismatched date fields can still break attribution windows, budget pacing views, or period comparisons.

OWOX Data Marts Context

Inside analytical data marts, small conversion functions can have a big impact. Clean date fields make marts easier to query, validate, and visualize.

Using DATE_FROM_UNIX_DATE in analytical data marts

When a source lands in BigQuery with Unix day integers, converting them during mart preparation gives analysts a stable reporting layer. Instead of repeating the same conversion in every dashboard query, teams can expose a ready-to-use DATE column in the mart.

That approach keeps SQL cleaner and reduces the chance of inconsistent logic between reports. It also makes the mart more friendly for analysts who need fast answers without rechecking raw storage formats every time.

Why consistent date formats matter for marketing and BI reports

Consistent dates keep reporting periods aligned across acquisition, conversion, retention, and revenue views. Without that consistency, two dashboards can show different answers for the same day simply because one used a raw integer and the other used a converted date.

For BI specialists and marketers, that is not just annoying. It can throw off budget decisions, campaign pacing, and stakeholder trust. Standardizing date fields with functions like DATE_FROM_UNIX_DATE helps build reports that are clear, comparable, and ready for action.

Need a cleaner way to work with reporting-ready dates and unified datasets? Build structured data marts, organize your marketing data, and get your BI reporting into shape with OWOX Data Marts.
