
What Is the UNIX_DATE Function in SQL?

The UNIX_DATE function converts a DATE value to an integer representing the number of days since the Unix epoch (1970-01-01). It’s commonly used in SQL (for example, in BigQuery) to normalize dates, simplify comparisons, and join tables that store dates as Unix-based integers for analytics and reporting.


What Is the UNIX_DATE Function?

UNIX_DATE is a SQL date utility (commonly used in BigQuery-style SQL) that turns a calendar date like 2026-03-11 into a single integer that represents “how many whole days since 1970-01-01.” That’s it—no time of day, no timezone drama, no formatting quirks.

Why analysts love it: integers are fast to compare, easy to join on, and consistent across tools. If one table stores dates as DATE values and another stores an integer date key (often for dimensional modeling or legacy reasons), UNIX_DATE becomes the bridge.

How UNIX_DATE Works Under the Hood

Unix epoch and day-based timestamps

The Unix epoch is the reference start point: 1970-01-01. UNIX_DATE counts the number of whole days between your input date and that reference date.

Think of it as a “day index.” If your date is exactly 1970-01-01, the result is 0. If it’s the next day, the result is 1. Dates before 1970-01-01 produce negative numbers. That behavior is useful when you need consistent ordering or want to create rolling windows that don’t care about month lengths.

Crucially, UNIX_DATE is day-granular. It is not a timestamp conversion. There is no time component to round or truncate—your input is already a date, and the output is a whole number of days.
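The day-index behavior is easy to sanity-check outside the warehouse. Here is a minimal Python sketch of the same arithmetic (the `unix_date` helper is an illustration, not a SQL function):

```python
from datetime import date

EPOCH = date(1970, 1, 1)

def unix_date(d: date) -> int:
    """Whole days between d and the Unix epoch (mirrors UNIX_DATE)."""
    return (d - EPOCH).days

# Epoch itself is day 0, the next day is day 1,
# and pre-epoch dates go negative.
print(unix_date(date(1970, 1, 1)))    # 0
print(unix_date(date(1970, 1, 2)))    # 1
print(unix_date(date(1969, 12, 31)))  # -1
```

Because Python's `date` subtraction already returns whole days, the sketch matches the SQL behavior exactly: no time component, no timezone involved.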

Supported input types and return type

UNIX_DATE takes a DATE value as input and returns an INT64 (an integer). In practice, your “DATE value” might come from:

  • A literal date (like DATE '2026-03-11')
  • A DATE column
  • A conversion from other types (for example, converting a timestamp or string into DATE first, then applying UNIX_DATE)

The output is always an integer day count. This is perfect for joining to date dimension keys, building partitions/buckets, or normalizing messy upstream date representations into a single comparable value.

UNIX_DATE Syntax and Basic Usage

Function syntax in SQL (BigQuery-style)

In BigQuery-style SQL, the basic form is straightforward: UNIX_DATE(date_expression). The important part is that date_expression must evaluate to a DATE.

If you’re drafting queries quickly (or iterating on variations for cohorts, windows, and joins), it can help to speed up the “first draft” with tools or prompts for generating SQL date queries with AI—then you validate the logic and edge cases like any responsible analyst.

Simple examples with static and column values

Here are a few practical patterns you’ll actually use.

1) Convert a static DATE literal to a Unix day integer:

SELECT UNIX_DATE(DATE '1970-01-01') AS unix_day;

2) Convert a DATE column for normalization:

SELECT order_id, UNIX_DATE(order_date) AS order_unix_day FROM dataset.orders;

3) Convert then compare using integers (useful when another table stores date keys as integers):

SELECT * FROM dataset.orders WHERE UNIX_DATE(order_date) BETWEEN 20000 AND 20030;

Even if you don’t memorize what day 20000 is, the integer range is stable and easy to compute from another date value in the same query.
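If you prefer not to hard-code integers like 20000, you can derive the bounds from dates. A small Python sketch of the equivalent arithmetic (in SQL you would compute the same bounds by applying UNIX_DATE to date literals):

```python
from datetime import date

EPOCH = date(1970, 1, 1)

def unix_date(d: date) -> int:
    # Whole days since the Unix epoch, mirroring UNIX_DATE
    return (d - EPOCH).days

# Derive the integer bounds for a date-range filter
# instead of memorizing raw day numbers.
start = unix_date(date(2024, 10, 4))
end = unix_date(date(2024, 11, 3))
print(start, end)  # 20000 20030 (the BETWEEN bounds from the example above)
```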

Practical Use Cases for Analysts

Joining fact tables with integer date keys

A classic: one dataset has a proper DATE column, another has an integer “date key” (often from a date dimension or an older warehouse design). UNIX_DATE lets you join without messy string formatting or fragile “YYYYMMDD” hacks.

Common join pattern:

  • Fact table A: event_date as DATE
  • Fact table B: date_key as INT64 (days since epoch)

Then you join on: UNIX_DATE(event_date) = date_key.
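As a quick illustration of that bridge, here is a Python stand-in for the join; the table contents and field names are made up for the example:

```python
from datetime import date

EPOCH = date(1970, 1, 1)

def unix_date(d: date) -> int:
    return (d - EPOCH).days

# Fact table A stores real dates; fact table B stores integer day keys.
fact_a = [{"event_date": date(2024, 10, 4), "orders": 12}]
fact_b = {20000: {"spend": 150.0}}  # keyed by days since epoch

# The bridge: UNIX_DATE(event_date) = date_key
joined = [
    {**row, **fact_b[unix_date(row["event_date"])]}
    for row in fact_a
    if unix_date(row["event_date"]) in fact_b
]
print(joined)  # one matched row with orders and spend merged
```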

If you’re managing these relationships in a more governed way, it helps to understand key constraints for managing date keys, especially when you want the warehouse to catch broken joins (like missing dates or duplicate keys) before your dashboard catches fire.

Filtering and bucketing data by Unix date

UNIX_DATE is great when you want stable buckets that don’t depend on month boundaries or localized formatting. For example:

  • Rolling windows: filter to “last 30 days” by comparing Unix day integers.
  • Daily bucketing: group by Unix day when your upstream data arrives as day keys.
  • Incremental processing: store the max processed unix day and resume from there.

Because the unit is “day,” your logic stays clean. You’re not worrying about hours, minutes, daylight saving time shifts, or timestamp rounding. It’s just day arithmetic.
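For instance, a “last 30 days” rolling window reduces to integer arithmetic. A minimal Python sketch, using illustrative dates:

```python
from datetime import date

EPOCH = date(1970, 1, 1)

def unix_date(d: date) -> int:
    return (d - EPOCH).days

# "Last 30 days" as pure day arithmetic: no month lengths, no timezones.
today_key = unix_date(date(2024, 11, 3))
window = set(range(today_key - 29, today_key + 1))  # 30 day keys, today inclusive

assert unix_date(date(2024, 10, 5)) in window      # 30th day back: included
assert unix_date(date(2024, 10, 4)) not in window  # 31 days back: excluded
```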

And if you’re encapsulating these patterns into reusable warehouse routines, you’ll often end up using SQL functions inside stored procedures so the team can run consistent transformations without copying query fragments across notebooks and BI tools.

Converting back from Unix date for reporting

Dashboards, exports, and stakeholder reports usually want human-readable dates, not “19743.” So you’ll often convert the integer back to a DATE for presentation or for joining to a full date dimension with attributes like week start, fiscal period, holiday flags, and so on.

In BigQuery-style SQL, the reverse function is DATE_FROM_UNIX_DATE, which adds the integer day offset back to the epoch date. Conceptually, it’s: epoch_date + day_offset. Once you’re back to a DATE, you can format it, group by month, or map it to reporting calendars.
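The reverse conversion is the same epoch arithmetic run backwards. A Python sketch of it (the helper name mirrors BigQuery’s DATE_FROM_UNIX_DATE but is just an illustration here):

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def date_from_unix_date(day_offset: int) -> date:
    """Reverse of UNIX_DATE: epoch_date + day_offset."""
    return EPOCH + timedelta(days=day_offset)

# Turn an integer day key back into a readable date for reporting.
print(date_from_unix_date(19743))  # 2024-01-21
print(date_from_unix_date(0))      # 1970-01-01
```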

One more real-world note: date fields sometimes become sensitive when combined with user-level data (think signup date, last purchase date, or “date of first seen”). If you’re sharing datasets broadly, look into data masking for sensitive date-related fields so analysts can do their work without exposing what shouldn’t be exposed.

Example: Using UNIX_DATE in a Marketing Dashboard

Preparing date keys for data marts

Scenario: you’re building a daily marketing performance dashboard. Your ad costs come in with a DATE. Your web analytics events land as timestamps. Your CRM exports a daily snapshot table keyed by an integer day number (days since 1970-01-01). You want one consistent “date key” across everything.

A practical approach is to create a single daily key field:

  • For tables with DATE: compute UNIX_DATE(date)
  • For tables with TIMESTAMP: convert to DATE first, then UNIX_DATE(DATE(timestamp))
  • For tables already using day integers: keep as-is, but validate it matches the same definition
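The three rules above can be sketched in Python (helper names and the plausibility bound are illustrative assumptions, not part of any SQL dialect):

```python
from datetime import date, datetime, timezone

EPOCH = date(1970, 1, 1)

def day_key_from_date(d: date) -> int:
    # UNIX_DATE(date)
    return (d - EPOCH).days

def day_key_from_timestamp(ts: datetime) -> int:
    # UNIX_DATE(DATE(timestamp)): drop the time part first
    return (ts.date() - EPOCH).days

def validate_day_key(key: int) -> int:
    # Already an integer day key: keep as-is, but sanity-check the range
    assert 0 <= key < 100_000, "day key outside plausible range"
    return key

# All three sources land on the same definition of "day".
a = day_key_from_date(date(2024, 1, 21))
b = day_key_from_timestamp(datetime(2024, 1, 21, 23, 59, tzinfo=timezone.utc))
c = validate_day_key(19743)
assert a == b == c
```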

This gives you a stable join key for your daily grain data mart. It also pairs nicely with dimensional modeling practices where a date dimension owns the definition of “day,” and facts point to it via a key. If you want a refresher on how those relationships should be structured, review primary and foreign keys for date dimensions.

Combining ad, web, and CRM data by Unix date

Let’s say you have:

  • ads_daily(date DATE, campaign_id STRING, cost NUMERIC)
  • web_events(event_timestamp TIMESTAMP, campaign_id STRING, sessions INT64)
  • crm_daily(unix_day INT64, campaign_id STRING, leads INT64)

You can normalize them to a shared key and combine at a daily, campaign level:

WITH ads AS (
  SELECT
    UNIX_DATE(date) AS unix_day,
    campaign_id,
    SUM(cost) AS cost
  FROM ads_daily
  GROUP BY 1, 2
),
web AS (
  SELECT
    UNIX_DATE(DATE(event_timestamp)) AS unix_day,
    campaign_id,
    SUM(sessions) AS sessions
  FROM web_events
  GROUP BY 1, 2
),
crm AS (
  SELECT
    unix_day,
    campaign_id,
    SUM(leads) AS leads
  FROM crm_daily
  GROUP BY 1, 2
)
SELECT
  COALESCE(ads.unix_day, web.unix_day, crm.unix_day) AS unix_day,
  COALESCE(ads.campaign_id, web.campaign_id, crm.campaign_id) AS campaign_id,
  ads.cost,
  web.sessions,
  crm.leads
FROM ads
FULL OUTER JOIN web USING (unix_day, campaign_id)
FULL OUTER JOIN crm USING (unix_day, campaign_id);

The hype part: this is the kind of normalization that turns “three noisy feeds” into “one reliable daily truth table.” Your BI layer gets simpler, your joins get more predictable, and your comparisons across sources stop being a guessing game.

UNIX_DATE vs Other Date and Time Functions

UNIX_DATE vs UNIX_SECONDS / UNIX_MILLIS

UNIX_DATE returns an integer count of days. UNIX_SECONDS and UNIX_MILLIS return integers based on time (seconds or milliseconds) since the epoch.

  • Use UNIX_DATE when your reporting grain is daily and you want day-level keys.
  • Use UNIX_SECONDS / UNIX_MILLIS when you need time ordering within a day, event sequencing, latency calculations, or precise windows.

If you accidentally use seconds when you meant days, your joins will fail silently (no matches) or explode cardinality (too many matches). So pick the function that matches your grain.
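The scale mismatch is easy to see in plain arithmetic. A hedged Python illustration (dates and values are examples only):

```python
from datetime import date, datetime, timezone

EPOCH_DATE = date(1970, 1, 1)
SECONDS_PER_DAY = 86_400

d = date(2024, 1, 21)
unix_day = (d - EPOCH_DATE).days  # UNIX_DATE grain: days

# UNIX_SECONDS grain: seconds since epoch for midnight UTC of that day
unix_seconds = int(datetime(2024, 1, 21, tzinfo=timezone.utc).timestamp())

# Same calendar day, wildly different magnitudes: a join between
# these two columns would never match.
assert unix_seconds == unix_day * SECONDS_PER_DAY  # holds only at midnight UTC
assert unix_seconds != unix_day
```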

UNIX_DATE vs standard DATE functions

Standard DATE functions help you extract parts (year, month, day), truncate (to week/month), add/subtract intervals, and format dates for display. UNIX_DATE is different: it’s a normalization move that converts a DATE into a single comparable integer.

In practice, they work together:

  • Use DATE functions to define the business logic (like “week starts Monday” or “month to date”).
  • Use UNIX_DATE to create stable keys and simplify joins/filters where integers are expected.

If your warehouse design leans on a date dimension, you may only need UNIX_DATE at ingestion/standardization time, and then rely on the date dimension for everything else downstream.

OWOX Data Marts Context Note

In data mart workflows, UNIX_DATE is a handy way to enforce a single daily key across sources that disagree on date formats or storage types. Whether your upstream tables arrive with DATEs, timestamps, or legacy integer day keys, the goal is the same: one clean join field that makes daily reporting fearless.

Want to build clean, join-ready reporting tables faster? Try OWOX Data Marts to structure your warehouse outputs for analysis, and keep your data marts consistent as your sources grow.
