Pre-deploy safety

Stop bad migrations at the PR, not at 3 AM.

Every migration in every PR is parsed, run against a clone with your live schema shape, and checked for lock risk, data loss, and query-plan regressions. Datapace comments on the PR with the risk, proposes a safe alternative as a drop-in diff, and can block the merge when your policy requires it.

Patterns detected: 60+
Typical scan time: <30s
Postgres versions: 13 to 17

The problem

One bad ALTER takes an ACCESS EXCLUSIVE lock on your biggest table. Writes stall. Users wait. You roll back. The engineer who wrote the migration did not know that the ALTER would queue behind a long-running transaction, and that every read and write on the table would then queue behind the ALTER.

This is not a knowledge problem you fix with internal docs. Your team ships on Friday. Your staging clone does not have 50M rows. The migration works locally. It works in CI. It times out in production.

Platform teams spend their week reviewing SQL they did not write, trying to catch patterns that only bite at scale. That review time goes away when the review is automated.
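The failure mode above is easy to reproduce. A sketch of the lock-queue hazard, assuming a users table and any long-running transaction (table and column names here are illustrative):

```sql
-- session 1: a long-running transaction holds an ACCESS SHARE lock on users
BEGIN;
SELECT count(*) FROM users;   -- transaction left open for minutes

-- session 2: the migration requests ACCESS EXCLUSIVE and waits behind session 1
ALTER TABLE users ADD COLUMN verified bool NOT NULL DEFAULT false;

-- session 3: even a simple point read now queues behind the waiting ALTER
SELECT email FROM users WHERE id = 42;   -- blocked until sessions 1 and 2 finish
```

Nothing here is slow on its own; the ALTER is fast once it acquires the lock. The outage is the queue that forms while it waits.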

How Datapace solves this

The fix, automated.

01

Reads the migration the way a senior DBA would

Every SQL file in the PR is parsed and evaluated against your live schema shape, row counts, index inventory, and query workload. Datapace knows that ADD COLUMN verified bool NOT NULL DEFAULT false on a 50M row table needs an ACCESS EXCLUSIVE lock, and that one long-running transaction ahead of it in the lock queue stalls every read and write behind it. It knows that CREATE INDEX without CONCURRENTLY blocks writes for the entire build and is never acceptable on a table that serves traffic. It knows which DROP COLUMN statements will silently break live queries that still reference the column.

live impact projection: ACCESS EXCLUSIVE
users · 52,340,118 rows
  id          bigserial    PK
  email       text         NOT NULL
  created_at  timestamptz  DEFAULT now()
+ verified    bool         NOT NULL DEFAULT false
projected lock: ~8 min against a 2s policy limit
writes to users blocked · queue: +4.2k/s
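Concretely, the kinds of statements step 01 flags, and why (table and index names are illustrative):

```sql
-- flagged: needs ACCESS EXCLUSIVE on a hot table; queued traffic stalls behind it
ALTER TABLE users ADD COLUMN verified bool NOT NULL DEFAULT false;

-- flagged: plain CREATE INDEX takes a SHARE lock, blocking all writes
-- on the table for the entire index build
CREATE INDEX idx_users_email ON users (email);

-- safe equivalent: builds without blocking writes (cannot run inside a transaction)
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- flagged: silently breaks any live query still selecting the column
ALTER TABLE orders DROP COLUMN legacy_status;
```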
02

Proposes a safe alternative inline

When a risky migration is detected, Datapace does more than warn. It writes the safe version and posts it as an inline diff in the PR comment. For the NOT NULL case it proposes the three-step pattern: add the column nullable, backfill in batches, then add the constraint as NOT VALID and VALIDATE it. You accept the suggested diff with one click and the check re-runs.

prod impact · lock footprint (writes blocked)
original migration: ~480s
three-step safe pattern: ~3s
1. add nullable · 2. backfill in batches · 3. NOT VALID · VALIDATE

160x less write blocking. Same end state.
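Spelled out as SQL, the three-step pattern looks roughly like this (batch size and constraint name are illustrative, not the exact diff Datapace emits):

```sql
-- 1. add the column nullable: instant, metadata-only
ALTER TABLE users ADD COLUMN verified bool;
ALTER TABLE users ALTER COLUMN verified SET DEFAULT false;  -- applies to new rows only

-- 2. backfill existing rows in batches so no single statement holds locks for long
UPDATE users SET verified = false
WHERE id IN (
    SELECT id FROM users WHERE verified IS NULL LIMIT 10000
);
-- ...repeat until the UPDATE reports 0 rows

-- 3. enforce NOT NULL as a constraint validated without blocking traffic
ALTER TABLE users
    ADD CONSTRAINT users_verified_not_null CHECK (verified IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT users_verified_not_null;
```

VALIDATE takes only a SHARE UPDATE EXCLUSIVE lock, so reads and writes continue while the full-table scan runs. On Postgres 12 and later you can then SET NOT NULL on the column, which reuses the validated constraint instead of rescanning, and drop the CHECK.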

03

Blocks the merge when policy requires

You define the policy. Block any migration that would hold an exclusive lock longer than 2 seconds. Block any DROP or TRUNCATE without an explicit guard. Block any migration that touches tables tagged high traffic. Everything else passes through with an advisory comment. Policy lives in a YAML file in your repo, versioned and reviewable like any other code.

datapace · policy engine: blocked

Migration in PR

ALTER TABLE users

ADD COLUMN verified bool NOT NULL DEFAULT false;

tags: high_traffic, core_auth

.datapace/policy.yaml

rules:
  - name: high_traffic_table
    match: tags includes high_traffic
    block_if: max_exclusive_lock_ms > 2000

merge blocked

this migration would hold an exclusive lock for ~480s; policy threshold is 2s
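A slightly fuller policy file in the same shape might look like the following. Only the first rule's fields appear in the product example above; the other rule shapes and field names are assumptions for illustration, not documented Datapace options:

```yaml
# .datapace/policy.yaml -- illustrative sketch
rules:
  - name: high_traffic_table
    match: tags includes high_traffic
    block_if: max_exclusive_lock_ms > 2000

  - name: destructive_without_guard     # assumed rule shape
    match: statement in [DROP, TRUNCATE]
    block_if: guard_comment missing

  - name: everything_else               # assumed rule shape
    match: "*"
    action: advisory
```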

Who this is for

Teams running PostgreSQL in production with non-trivial write volume
Engineering orgs without a dedicated DBA reviewing every migration
Platform teams tired of owning prod migrations that someone else wrote
Anyone who has watched a migration run for 45 minutes in production

Example workflow

See it in action.

Stop regressions before they ship.

2-minute setup. Read-only Postgres connection. Results delivered in your repo and Slack.