Schema hygiene

Unused indexes, duplicates, bloated tables. Cleaned up every month.

Your index inventory rots. Indexes get added, nothing removes them. Duplicates accumulate. Bloat grows. Datapace audits on your schedule and opens cleanup PRs you can merge when ready.

Audit cadence

monthly by default

Bloat detection

pgstattuple + pg_stats

Output

DROP or REINDEX PRs

The problem

Every index costs writes. Every bloated table costs reads. Nobody audits. The cost compounds until queries slow down, the database is 3x the size it needs to be, and everyone blames the data instead of the indexes.

REINDEX CONCURRENTLY exists. DROP INDEX exists. They require someone to know which indexes are unused, which are duplicates, and which tables are bloated beyond the threshold. That someone does not exist on your team.

How Datapace solves this

The fix, automated.

01

Continuous index usage tracking

Datapace reads pg_stat_user_indexes on a rolling window and identifies indexes that have not been used in 30, 60, or 90 days. It also detects duplicate indexes, where two distinct indexes cover the same columns, or where one is a strict prefix of another. Every flagged index includes its size, last-used timestamp, and scan count so you can decide with full context.
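The unused-index check can be approximated with a query like the following. This is a sketch against the standard Postgres statistics views, not Datapace's actual implementation: the last_idx_scan column requires PostgreSQL 16 or newer, and real 30/60/90-day windows come from comparing snapshots over time rather than a single read.

```sql
-- Sketch: surface never-scanned indexes, largest first.
-- last_idx_scan exists in pg_stat_user_indexes from PostgreSQL 16;
-- drop that column on older versions.
SELECT s.schemaname,
       s.relname       AS table_name,
       s.indexrelname  AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan,
       s.last_idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique   -- unique indexes enforce constraints even when never scanned
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Note that pg_stat counters reset on pg_stat_reset() or crash recovery, which is one reason a rolling window beats a single point-in-time read.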

index audit · pg_stat_user_indexes · 3.8 GB reclaimable

index                     size     scans 30d   last use   verdict
idx_users_email           1.2 GB   48.2k       just now   keep
idx_users_email_lower     2.1 GB   0           91d ago    drop
idx_orders_user_id        890 MB   12.1k       just now   keep
idx_orders_user           890 MB   3           60d ago    duplicate
idx_sessions_created_at   520 MB   0           never      drop

drop: 2 · 2.6 GB · duplicate: 1 · 890 MB · scanned: 89 total
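Strict-prefix duplicates like idx_orders_user above can be sketched by comparing indkey column lists in pg_index. This simplified version ignores opclasses, expression indexes, and partial predicates, all of which a real audit must also compare before calling two indexes redundant.

```sql
-- Sketch: flag indexes whose key columns are a strict prefix of (or identical
-- to) another index on the same table. indkey::text is a space-separated
-- list of column numbers, so a LIKE prefix match works.
SELECT a.indexrelid::regclass AS redundant_index,
       b.indexrelid::regclass AS covering_index
FROM pg_index a
JOIN pg_index b
  ON  a.indrelid = b.indrelid
  AND a.indexrelid <> b.indexrelid
WHERE b.indkey::text LIKE a.indkey::text || ' %'   -- strict prefix
   OR (b.indkey::text = a.indkey::text             -- exact duplicate:
       AND a.indexrelid > b.indexrelid);           -- report one direction only
```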

02

Bloat analysis per table

Using pgstattuple where available and pg_stats otherwise, Datapace estimates dead tuple ratio and physical bloat per table. Tables that cross your configured threshold are flagged with a reclaim estimate, a proposed remediation (REINDEX, VACUUM FULL with guard, or pg_repack), and an off-peak window suggestion based on your traffic profile.
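The pgstattuple path can be sketched as below. This assumes the pgstattuple extension is installable; note that pgstattuple() reads every page of the relation, so on large tables the cheaper sampling variant pgstattuple_approx() is usually the better choice.

```sql
-- Sketch: dead-tuple ratio per ordinary table in the public schema.
-- pgstattuple() performs a full scan of each relation it inspects.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS table_size,
       round((pgstattuple(c.oid)).dead_tuple_percent::numeric, 1) AS dead_pct
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
ORDER BY pg_relation_size(c.oid) DESC;
```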

bloat analysis · dead tuple ratio · 2 tables over threshold

table      size      dead tuple ratio
sessions   12.4 GB   61%
events     48 GB     42%
orders     8.1 GB    18%
users      2.4 GB    8%

threshold (configurable): 30% · proposed fix: REINDEX CONCURRENTLY
03

Automated cleanup PRs

Each audit produces a small set of PRs: one to drop unused indexes (with size reclaimed and last-used dates), one to consolidate duplicates, one to REINDEX CONCURRENTLY the worst bloated indexes. Each PR is small, reviewable, and safe to merge or skip at your own cadence. Every proposed DROP is revalidated against the live workload right before it lands, so no index that just started serving a new query gets removed.
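The pre-merge revalidation step can be sketched as a snapshot comparison: record idx_scan at audit time, then re-read it just before the DROP lands and abort if it moved. The audit_snapshot table here is hypothetical bookkeeping, not a Datapace artifact.

```sql
-- Sketch: abort the drop if the index has been scanned since the audit.
-- audit_snapshot(index_name, idx_scan_at_audit) is a hypothetical table
-- populated when the audit ran.
SELECT s.indexrelname,
       s.idx_scan - a.idx_scan_at_audit AS scans_since_audit
FROM pg_stat_user_indexes s
JOIN audit_snapshot a ON a.index_name = s.indexrelname
WHERE s.indexrelname = 'idx_users_email_lower'
  AND s.idx_scan > a.idx_scan_at_audit;   -- any row returned means: keep the index
```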

monthly cleanup · 3 PRs opened · ~22.7 GB reclaimable

#612 · chore: drop unused indexes

DROP INDEX CONCURRENTLY idx_users_email_lower;

DROP INDEX CONCURRENTLY idx_sessions_created_at;

4 indexes dropped · 3.8 GB reclaimed

#613 · chore: consolidate duplicate indexes

DROP INDEX CONCURRENTLY idx_orders_user;

-- covered by idx_orders_user_id, status

2 pairs consolidated · 890 MB reclaimed

#614 · chore: reindex bloated tables

REINDEX TABLE CONCURRENTLY sessions;

REINDEX TABLE CONCURRENTLY events;

3 tables · ~18 GB reclaimed

Who this is for

Teams running PostgreSQL without a dedicated DBA
Platform teams trying to reduce cloud database costs
Any long-lived schema that has not been audited in months
Engineers who have stared at a database size chart and wondered where the gigabytes went

Example workflow

See it in action.

Stop regressions before they ship.

2-minute setup. Read-only Postgres connection. Results delivered in your repo and Slack.