5 min read · 2026-05-09

Unix timestamp explained for developers

Understand Unix timestamps, seconds vs milliseconds, timezone display, and common API or log debugging mistakes.

What a Unix timestamp represents

A Unix timestamp is the number of seconds that have elapsed since 1970-01-01 00:00:00 UTC (the Unix epoch), not counting leap seconds. Many APIs, databases, logs, queues, and tokens use timestamps because they are compact and easy for systems to compare and sort.

The timestamp itself is not tied to any timezone; a timezone only affects how the value is displayed to humans. This distinction matters when a backend, a frontend, a database, and a monitoring tool all show the same event differently.
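For example, the same timestamp renders as different strings depending on the timezone chosen for display. A minimal sketch in TypeScript; the example value and timezone names are illustrative:

```ts
const timestampSeconds = 1715212800; // illustrative value

// JavaScript's Date constructor expects milliseconds, so convert first.
const date = new Date(timestampSeconds * 1000);

console.log(date.toISOString()); // 2024-05-09T00:00:00.000Z (UTC)
console.log(date.toLocaleString("en-US", { timeZone: "America/New_York" })); // evening of May 8
console.log(date.toLocaleString("en-US", { timeZone: "Asia/Tokyo" }));       // morning of May 9
// Three different strings, one underlying point in time.
```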

Seconds vs milliseconds

One of the most common timestamp bugs is mixing seconds and milliseconds. A 10-digit value is usually seconds. A 13-digit value is usually milliseconds. Passing milliseconds to code that expects seconds can produce a date far in the future; passing seconds to code that expects milliseconds can produce a date near 1970.
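A minimal sketch of the digit-count heuristic, assuming a hypothetical helper that normalizes both units to milliseconds:

```ts
// Values below 1e12 are treated as seconds (10-digit seconds cover dates up
// to the year 2286); larger values are treated as milliseconds. The function
// name and cutoff are illustrative, not a standard API.
function toMilliseconds(timestamp: number): number {
  return timestamp < 1e12 ? timestamp * 1000 : timestamp;
}

console.log(new Date(toMilliseconds(1715212800)).toISOString());    // seconds in
console.log(new Date(toMilliseconds(1715212800000)).toISOString()); // milliseconds in
// Both print the same instant: 2024-05-09T00:00:00.000Z
```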

JWT exp and iat claims are expressed in seconds (the spec's NumericDate). JavaScript's Date.now() and getTime() return milliseconds. Logs and databases vary. Always check the expected unit before assuming the value itself is wrong.
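The unit mismatch is easy to hit when validating tokens. A minimal sketch, assuming a decoded payload object (the interface here is hypothetical):

```ts
interface JwtPayload {
  iat: number; // issued-at, seconds since the Unix epoch
  exp: number; // expiry, seconds since the Unix epoch
}

function isExpired(payload: JwtPayload): boolean {
  // Date.now() returns milliseconds, so divide before comparing with exp.
  const nowSeconds = Math.floor(Date.now() / 1000);
  return nowSeconds >= payload.exp;
}

// Comparing payload.exp directly against Date.now() would make every token
// look long expired, because the two values differ by a factor of 1000.
```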

Debugging workflow

Convert the timestamp to a readable UTC date first, then compare it with local time if needed. Write both the raw value and readable date in bug notes so teammates in different timezones can follow the issue.
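A minimal sketch of that first step, printing the raw value alongside its UTC and local renderings (the helper name is illustrative):

```ts
function describeTimestamp(seconds: number): string {
  const date = new Date(seconds * 1000);
  return [
    `raw=${seconds}`,
    `utc=${date.toISOString()}`,
    `local=${date.toString()}`, // rendered in the machine's local timezone
  ].join("  ");
}

console.log(describeTimestamp(1715212800));
// raw=1715212800  utc=2024-05-09T00:00:00.000Z  local=<depends on the machine>
```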

If a problem is intermittent, check clock skew, token expiration, cache lifetime, and timezone conversion. Many date bugs are not parsing failures; they are boundary problems around expiry, daylight saving, or environment differences.
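For clock skew specifically, a common mitigation is to allow a small leeway when checking expiry. A minimal sketch, with an illustrative 30-second tolerance:

```ts
const CLOCK_SKEW_SECONDS = 30; // illustrative leeway between machines

function isExpiredWithLeeway(expSeconds: number): boolean {
  const nowSeconds = Math.floor(Date.now() / 1000);
  // Only treat the value as expired once it is past exp by more than the leeway.
  return nowSeconds > expSeconds + CLOCK_SKEW_SECONDS;
}
```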
