Introduction
Every relational database lives and dies by its transaction log. In SQL Server, that's the transaction log file; in PostgreSQL, it's the WAL (Write-Ahead Log). It's the beating heart that guarantees durability, recovery, and replication. Without the log, you wouldn't have consistency after a crash, point-in-time restore, or reliable replicas.
So the idea of skipping the log sounds almost reckless. Why would anyone in their right mind want to avoid logging?
PostgreSQL gives you exactly that option with UNLOGGED tables. It's a feature that flips the script: the table still persists on disk, but its writes don't go into the WAL. That means much less overhead, often much faster bulk operations, but with a big trade-off—no durability if the database crashes.
For SQL Server DBAs, this feels strange. We don't really have a one-to-one feature. You might think about BULK INSERT with minimal logging, temp tables in tempdb, or even memory-optimized SCHEMA_ONLY tables. Each of those captures pieces of the UNLOGGED behavior, but not the whole story.
In this article, we'll dive deep into what UNLOGGED tables are, why they exist, how you can use them, and the caveats that keep them firmly in the "special tool" category.
What Is an UNLOGGED Table?
In PostgreSQL, normal tables write every change into the WAL (Write-Ahead Log). The WAL is how PostgreSQL guarantees durability: if the system crashes, it replays the log to bring the table back to a consistent state. Other platforms have the same idea under different names. In SQL Server it's the transaction log (.ldf file). In Oracle it's the redo log. In MySQL with InnoDB, it's the redo/undo logs. All of them serve the same purpose—before the data page is updated on disk, the change is recorded in the log so it can be recovered.
An UNLOGGED table in PostgreSQL skips that step. The table looks and behaves like any other—you can insert, update, and query rows—but the changes are not written to the WAL. This makes operations faster, but there's no durability if the server crashes. On restart, PostgreSQL clears the table to avoid leaving it in an unknown state.
The syntax is simple:
CREATE UNLOGGED TABLE fast_table (
id INT,
payload TEXT
);
The key difference is that while a normal table will survive a crash thanks to WAL, an unlogged table will be truncated. It persists across sessions and even clean reboots, but it does not survive an unexpected shutdown.
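If you ever need to check which tables in a database are unlogged, the catalog will tell you: the relpersistence column in pg_class is 'u' for unlogged tables, 'p' for ordinary permanent tables, and 't' for temporary ones. A quick catalog query along those lines looks like this:

-- List the unlogged tables in the current database
SELECT relname
FROM pg_class
WHERE relkind = 'r'          -- ordinary tables only
  AND relpersistence = 'u';  -- 'u' = unlogged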
Why Would You Use This?
The answer is speed. Logging every row change has a cost, and skipping that cost makes inserts and updates noticeably faster. In many workloads, especially those that involve bulk loads, staging areas, or scratch work, durability isn't always required.
Some real-world examples:
- Staging data loads: If you're pulling in a nightly batch from a CSV or external system, you often just reload if something goes wrong. The staging step doesn't need crash recovery.
- ETL pipelines: When raw data is transformed and then discarded, you don't need to pay the logging tax.
- Intermediate analysis: Analysts sometimes create intermediate tables to hold results during a complex report run. If those vanish after a crash, you can simply re-run.
- Caching or queues: If the table is acting as a buffer for logs, metrics, or data that's duplicated elsewhere, durability doesn't matter.
Think of UNLOGGED tables as a tool for situations where losing data is acceptable, but waiting around for logging overhead isn't.
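To make the staging scenario above concrete, here's a minimal sketch of a nightly load. The staging_orders and orders tables and the CSV path are hypothetical; the point is simply that the unlogged table absorbs the bulk load and only the cleaned rows land in a durable table.

-- Hypothetical staging table; cheap to load because writes skip the WAL
CREATE UNLOGGED TABLE IF NOT EXISTS staging_orders (
    order_id INT,
    amount   NUMERIC
);

TRUNCATE staging_orders;

-- Bulk load the raw file into the unlogged staging area
-- (or use psql's \copy to read the file from the client side)
COPY staging_orders (order_id, amount) FROM '/path/to/nightly.csv' WITH (FORMAT csv);

-- Move only the validated rows into the durable production table
INSERT INTO orders (order_id, amount)
SELECT order_id, amount
FROM staging_orders
WHERE amount IS NOT NULL;

If the job dies halfway through, or the server crashes and the staging table comes back empty, the fix is the same: re-run the load.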
Demo: Logged vs. Unlogged
The easiest way to understand the benefit is to test it yourself. I created two tables: one regular logged table, and one unlogged.
-- A regular logged table
CREATE TABLE normal_table (id INT, payload TEXT);

-- An unlogged table
CREATE UNLOGGED TABLE fast_table (id INT, payload TEXT);
Then I inserted 8 million rows into each, wrapping the insert between SELECT now() calls to measure duration.
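If you're following along in psql, the \timing meta-command is a lighter-weight alternative to the SELECT now() bookends; once it's switched on, psql prints the elapsed time after every statement.

postgres=# \timing on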
Normal/Logged table load
The query simply generates 8 million rows using generate_series, assigns each number as the id, and creates a random string with md5(random()::text) as the payload before inserting into normal_table. The INSERT is wrapped between SELECT now() calls to record the start and end times, which gives the load duration for each table and makes the logged vs. unlogged comparison direct.
postgres=# SELECT now();
postgres=# INSERT INTO normal_table SELECT g, md5(random()::text) FROM generate_series(1, 8000000) g;
postgres=# SELECT now();
This run took about 22 seconds, as shown in the screenshot below.

Unlogged table load
For the unlogged table, the same INSERT … SELECT was used with generate_series and md5(random()::text), again wrapped with SELECT now() calls to capture duration.
postgres=# SELECT now();
postgres=# INSERT INTO fast_table SELECT g, md5(random()::text) FROM generate_series(1, 8000000) g;
postgres=# SELECT now();
The insert above finished in about 10 seconds, as shown in the screenshot below:

Results
On the same machine, inserting 8 million rows into the unlogged table was more than 2× faster than into the logged table. That difference comes entirely from skipping the WAL.
The benefit grows as you scale. With tens of millions of rows, or repeated ETL cycles, that's the kind of speed-up that can shave hours off a load window.
It's also worth noting that indexes on unlogged tables skip WAL as well, so building or rebuilding them is quicker too.
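You can verify the index claim yourself with \timing (or the same SELECT now() bookends). The index names below are just for illustration, and the exact numbers will depend on your hardware, but the build on the unlogged table should finish noticeably sooner.

postgres=# \timing on
postgres=# CREATE INDEX idx_normal_payload ON normal_table (payload);
postgres=# CREATE INDEX idx_fast_payload ON fast_table (payload);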
The Durability Trade-Off
The speed of UNLOGGED tables comes with a clear cost: durability. If you insert rows into an unlogged table and the database crashes (without a clean shutdown), the table will still exist after restart, but it will be empty. PostgreSQL automatically truncates it on startup because there are no WAL records to restore its state. This behavior is by design, and it's documented: "Unlogged tables are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown." (PostgreSQL Docs)
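If you want to watch this happen without a real outage, an immediate shutdown is a reasonable stand-in for a crash. A rough sketch (the pg_ctl invocation depends on how your instance is installed and managed):

-- Put some rows into the unlogged table
INSERT INTO fast_table SELECT g, md5(random()::text) FROM generate_series(1, 1000) g;

-- From the shell, simulate a crash and restart, e.g.:
--   pg_ctl stop -m immediate
--   pg_ctl start

-- Back in psql: the table still exists, but it comes back empty
SELECT count(*) FROM fast_table;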
That means unlogged tables are safe to use only for data you can easily reload or regenerate. If you can't afford to lose rows in the event of a crash, you can't afford to use UNLOGGED.
Comparisons to SQL Server
For SQL Server DBAs, here's how it lines up:
- Minimal logging in bulk operations: SQL Server can minimize log activity during bulk inserts with TABLOCK and the simple recovery model. That's helpful, but it's only for the insert itself—the table remains fully logged afterward. PostgreSQL UNLOGGED applies to all operations on the table.
- tempdb: Temp tables don't log as heavily, and they're cleared on restart, but they also don't persist beyond the session. UNLOGGED tables do persist, unless there's a crash.
- Memory-optimized SCHEMA_ONLY tables: SQL Server's In-Memory OLTP allows non-durable tables that vanish on crash or restart. This is the closest equivalent. The big difference is that PostgreSQL's UNLOGGED tables live on disk, not in memory.
So while SQL Server has pieces of the puzzle, there's nothing that matches the simplicity of just declaring a permanent table "unlogged" and trading durability for speed.
Benchmarks and Maintenance
UNLOGGED tables don't just speed up inserts. Since they avoid writing to the WAL, other operations get faster too: creating indexes takes less time, bulk deletes and updates run with less overhead, and even VACUUM has less WAL work to do. This makes them handy in workflows where you create, fill, and drop tables often.

But skipping WAL also changes how replication and backups behave. Streaming replication won't copy unlogged tables, and logical replication ignores them as well, so if you need the data on replicas, they aren't an option. Backups are a mixed story: pg_basebackup skips the contents of unlogged tables (only the empty init fork is copied, so the table is recreated empty on restore), while pg_dump includes their data by default and only leaves it out if you pass --no-unlogged-table-data. These differences are important to keep in mind before putting unlogged tables into a process.
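If an unlogged table later needs to be durable and replicated (say, a staging table that quietly became part of a production feed), you don't have to rebuild it by hand: ALTER TABLE can switch it in place, at the cost of rewriting the table and WAL-logging its contents. And if you want leaner logical dumps, pg_dump has a switch for that; the database name below is just a placeholder.

-- Convert in place; PostgreSQL rewrites the table and starts WAL-logging it
ALTER TABLE fast_table SET LOGGED;

-- ...and back again, if durability is no longer worth the overhead
ALTER TABLE fast_table SET UNLOGGED;

-- From the shell: keep the schema but leave out unlogged table contents
--   pg_dump --no-unlogged-table-data mydb > mydb.sql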
A Story From the Field
At one client site, we had nightly batch jobs pulling in millions of rows from external systems. The pipeline always loaded data into staging tables, transformed it, and pushed the clean rows into production tables.
The staging tables were recreated each night, and if a job failed, the easiest fix was just to re-run. Once we switched the staging tables to UNLOGGED, the nightly load times dropped by 20–30%. That was a real win, because it shaved hours off the ETL window and meant less overlap with the reporting queries in the morning.
We never worried about the durability risk because the staging data was never authoritative—it could always be reloaded. That's exactly the sweet spot for UNLOGGED.
When Not to Use Them
UNLOGGED tables are not a good fit for every workload. You should avoid them for core business tables where losing rows could mean lost revenue or compliance problems. They also don't participate in replication, so the data won't appear on standby servers, which makes them unsuitable for high-availability setups. If a failover occurs, the missing data can cause unexpected issues. Finally, if the data is difficult or impossible to regenerate, you shouldn't rely on an unlogged table because a crash will wipe it out and leave you with nothing to recover.
Conclusion
UNLOGGED tables are one of those quirky PostgreSQL features that make you pause the first time you hear about them. A table that persists, but skips the log? It feels dangerous. And it can be, if you apply it blindly. But in the right scenarios—staging loads, ETL scratchpads, disposable caches—they're incredibly practical. They strip away logging overhead, speed up bulk operations, and reduce I/O.
For SQL Server DBAs, the closest comparison is probably memory-optimized SCHEMA_ONLY tables, but even those don't map perfectly. UNLOGGED lives in its own space: fast, disk-based, persistent-but-fragile.
The key takeaway is simple: don't be afraid of the feature, but be smart about where you use it. It's not for your orders table or your customer master. It's for the scratch areas, the throwaway steps, and the pipelines where speed matters more than durability. And once you try it in the right place, you'll probably find yourself wishing SQL Server had something this straightforward.