A Developer's Guide to EXPLAIN: 3 Levels of Mastering Supabase Query Performance
As a developer, you live by the performance of your API. When an endpoint is slow, your first instinct might be to check your application code, but more often than not, the real bottleneck is hiding in your database query.
But how do you prove it? How do you look inside PostgreSQL and ask, "What are you really doing, and why is it taking so long?"
The answer is `EXPLAIN`, the most important command for any developer working on a Postgres-backed service like Supabase. This guide will demystify `EXPLAIN` in three levels, showing you how to diagnose the exact cause of a slow query and turn your API from sluggish to snappy.
TL;DR:

- Level 1 (`EXPLAIN`): Shows the estimated plan and helps you spot obvious issues like a `Seq Scan` (full table scan).
- Level 2 (`EXPLAIN ANALYZE`): Executes the query and shows you the actual time taken, confirming whether the query is truly slow.
- Level 3 (`EXPLAIN (ANALYZE, BUFFERS)`): Reveals the I/O cost, showing how many data blocks were read from disk, a key driver of your cloud bill.
What is an Execution Plan?
When you send a query to Postgres, its sophisticated planner first decides on the most efficient way to fetch the data. This step-by-step recipe is called an execution plan. Think of it as the GPS navigation for your data request. The `EXPLAIN` command simply asks the planner to show you the map without starting the car.
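Because the plan is a choice the planner makes, not a fixed property of the query, you can even nudge it to reveal alternatives. A minimal sketch (the `orders` table is the example used throughout this post, and `enable_seqscan` is a standard per-session planner setting):

```sql
-- Ask the planner for its preferred plan (the query is not executed).
EXPLAIN SELECT * FROM orders WHERE user_id = 'some-user-id';

-- Discourage sequential scans for this session, then ask again.
-- If an alternative access path exists, the plan will change;
-- if not, you'll see the same Seq Scan with an artificially huge cost.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM orders WHERE user_id = 'some-user-id';
RESET enable_seqscan;
```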
Level 1: The Blueprint (`EXPLAIN`)
Let's start with a common query on a large `orders` table.
```sql
EXPLAIN SELECT * FROM orders WHERE user_id = 'some-user-id';

-- Seq Scan on orders  (cost=0.00..25855.00 rows=10 width=120)
--   Filter: (user_id = 'some-user-id'::uuid)
```
How to Read This:

- `Seq Scan`: This is your smoking gun. It means PostgreSQL is performing a Sequential Scan (a full table scan).
- `cost=0.00..25855.00`: The planner's estimated "work units." A higher number is bad, and 25,855 is very high.
- Takeaway: This immediately tells us we're doing a full table scan, almost certainly due to a missing index on `user_id`.
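Before changing anything, you can confirm the suspicion by listing the indexes that already exist on the table. A quick check using the standard `pg_indexes` system view:

```sql
-- List every index currently defined on the orders table.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'orders';
```

If nothing covering `user_id` comes back, the `Seq Scan` is explained.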
Level 2: The Reality Check (`EXPLAIN ANALYZE`)
The basic `EXPLAIN` is just an estimate. To see what actually happens, we add `ANALYZE`.
Warning: This command executes the query, so do not use it with `UPDATE` or `DELETE` on production data.
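If you ever do need to profile a write, a common safety net is to wrap it in a transaction and roll it back: the statement really executes (so `EXPLAIN ANALYZE` can time it), but its changes are discarded. A sketch, with a hypothetical `status` column; note that locks are still held until the rollback, so keep it brief:

```sql
BEGIN;
-- The UPDATE actually runs here and gets timed...
EXPLAIN ANALYZE UPDATE orders SET status = 'shipped' WHERE user_id = 'some-user-id';
-- ...but ROLLBACK discards its changes.
ROLLBACK;
```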
```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 'some-user-id';

-- Seq Scan on orders  (cost=0.00..25855.00 rows=10 width=120) (actual time=0.025..350.450 rows=10 loops=1)
--   Filter: (user_id = 'some-user-id'::uuid)
-- Planning Time: 0.150 ms
-- Execution Time: 350.500 ms
```
What's New:

- `actual time=0.025..350.450`: The real-world timing. It took 350.5 milliseconds to execute. This is the latency your user feels.
- Takeaway: `EXPLAIN ANALYZE` gives you the ground truth. It confirms our suspicion: the query is slow because of the full table scan.
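It's also worth comparing the planner's estimate (`rows=10`) with the actual count (`rows=10` inside the `actual` parentheses). Here they agree, but when they diverge badly the planner is working from stale statistics, and refreshing them is a cheap first fix:

```sql
-- Recompute the planner's statistics for the orders table.
ANALYZE orders;
```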
Level 3: The I/O Deep Dive (`EXPLAIN (ANALYZE, BUFFERS)`)
This is the most advanced version, revealing the query's interaction with memory and disk—a direct link to your cloud bill.
```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE user_id = 'some-user-id';

-- Seq Scan on orders  (cost=0.00..25855.00 rows=10 width=120) (actual time=0.025..350.450 rows=10 loops=1)
--   Filter: (user_id = 'some-user-id'::uuid)
--   Buffers: shared read=15830
-- Planning Time: 0.150 ms
-- Execution Time: 350.500 ms
```
What's New:

- `Buffers: shared read=15830`: This is the goldmine. It shows Postgres had to read 15,830 blocks from disk. Disk I/O is slow and expensive.
- Takeaway: We've proven the query is slow, inefficient, and costly.
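To make that number concrete: Postgres blocks are 8 KB by default, so 15,830 blocks is roughly 124 MB read from disk just to return ten rows. You can do the conversion in SQL using the server's actual block size:

```sql
-- Convert blocks read into megabytes using this server's block size.
SELECT 15830 * current_setting('block_size')::int / (1024.0 * 1024.0) AS mb_read;
-- ≈ 123.7 with the default 8 KB block size
```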
The Fix: From `Seq Scan` to `Index Scan`
Now that we have a full diagnosis, we add an index.
```sql
CREATE INDEX idx_orders_user_id ON orders (user_id);
```
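One production note (a general Postgres caveat, not something this trace tells us): a plain `CREATE INDEX` blocks writes to the table while it builds. On a busy table you would typically use the concurrent variant instead, which trades a slower build for no write lock:

```sql
-- Builds the index without blocking concurrent writes to orders.
-- Caveat: cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
```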
```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE user_id = 'some-user-id';

-- Index Scan using idx_orders_user_id on orders  (cost=0.42..8.44 rows=10 width=120) (actual time=0.035..0.045 rows=10 loops=1)
--   Index Cond: (user_id = 'some-user-id'::uuid)
--   Buffers: shared hit=4 read=3
-- Planning Time: 0.250 ms
-- Execution Time: 0.065 ms
```
The Transformation:

- The plan is now an `Index Scan`.
- Execution Time dropped from 350.5 ms to 0.065 ms (over 5,000x faster).
- Disk reads dropped from 15,830 blocks to just 3, with 4 more blocks served from cache (`shared hit=4`).
From Manual Analysis to an Automated Co-Pilot
This three-level workflow is incredibly powerful, but it's also manual and reactive. You often don't think to run `EXPLAIN` until a user is already complaining.
This is where datapace becomes your performance co-pilot. Our platform is designed to automate this entire three-level analysis for you:
- Instead of you guessing which query to check, datapace uses signals like `pg_stat_statements` to automatically surface your most impactful queries (you can sample that raw signal yourself; see the sketch after this list).
- It then automatically runs the `EXPLAIN` analysis, highlighting key bottlenecks like `Seq Scan`s and high I/O from `Buffers`.
- Finally, it provides a concrete recommendation (like "Add this index"), turning a 30-minute manual deep dive into a proactive insight.
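For a feel of what that first signal looks like, here is a minimal sketch querying `pg_stat_statements` directly, assuming the extension is enabled (Supabase projects ship with it) and a Postgres 13+ column layout:

```sql
-- Ten statements with the highest average execution time,
-- plus how many blocks each had to read from disk.
SELECT query,
       calls,
       mean_exec_time,
       shared_blks_read
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```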
Conclusion
Mastering `EXPLAIN` is a superpower for any developer building on Supabase. This three-level approach gives you a complete picture of your query's performance.
Start with `EXPLAIN` for the blueprint, use `EXPLAIN ANALYZE` for the reality check, and add `BUFFERS` to understand the I/O cost. While this manual skill is invaluable, using an automated platform like datapace is the key to applying this power continuously and proactively across your entire application.
Want to optimize your database performance?
Get AI-powered recommendations for your specific database setup.