
Santa doesn’t judge your SQL by intent. Santa judges it by execution plans, logical I/O, CPU utilization, temp usage, and response time.
This is a practical conversion guide: common “naughty” query patterns and the simplest ways to turn each into a “nice list” version that is faster, more predictable, and less likely to ruin your on-call holidays.
1) Naughty: SELECT * on wide tables (~100 columns)
Why it’s naughty
- Wider tuples cost more everywhere: more memory bandwidth, more cache misses, bigger sort/hash entries, more network payload.
- Index-only becomes impossible: if you request 100 columns, the planner can’t satisfy the query from a narrow index, so you force heap fetches.
- You pay a bigger “spill tax”: wide rows make sorts/aggregations spill to disk sooner.
Extra naughty in the HTAP era
Postgres is increasingly “one database, multiple workloads.” Columnar/analytics options (e.g., OrioleDB, Hydra Columnar, DuckDB-backed columnstore/engine integrations) make projection discipline even more decisive: columnar execution reads only the referenced columns, so selecting fewer columns directly reduces I/O and CPU.
Nice list fixes
- Always select only what you need.
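A minimal before/after sketch (table and column names are hypothetical):
-- Naughty: drags every column through the executor, sorts, and the wire
SELECT * FROM orders WHERE customer_id = $1;
-- Nice: project only what the caller uses; with a suitable covering index
-- this can even become an index-only scan
SELECT id, created_at, total
FROM orders
WHERE customer_id = $1;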
Santa verdict: If you didn’t need the column, don’t fetch it, don’t carry it, don’t ship it.
2) Naughty: WHERE tenant_id = $1 ... ORDER BY x LIMIT N that hits the “ORDER BY + LIMIT optimizer quirk”
Why it’s naughty
This is a classic planner trap: the optimizer tries to be clever and exploit an index that matches ORDER BY, so it can stop early with LIMIT. But if the filtering predicate is satisfied “elsewhere” (joins, correlations, distribution skew), Postgres may end up scanning far more rows than expected before it finds the first N that qualify. That’s the performance cliff.
When it goes bad, you see:
- Long response time (scanning deep into an index to find qualifying rows)
- High temp usage if the alternative plan sorts a large intermediate set and spills
Nice list fixes
A. “Make the smart optimizer more stupid” (the + 0 trick)
In some cases, you can deliberately prevent the planner from matching the ORDER BY to an index by turning it into an expression, e.g. ORDER BY x + 0, which can force a different (often better) join order / plan. This is a common workaround used in the wild.
SELECT id, tenant_id, x
FROM events
WHERE tenant_id = $1
ORDER BY x+0
LIMIT 200;
B. Create a covering index, or raise the statistics target on the skewed column so the planner can see the distribution (per-column SET STATISTICS here; default_statistics_target globally):
ALTER TABLE events
ALTER COLUMN tenant_id SET STATISTICS 1666;
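For the covering-index half, a hedged sketch (assuming the events query from Fix A): an index leading with tenant_id and then x keeps the scan inside the tenant and returns rows already in x order, so LIMIT can stop early; INCLUDE makes it index-only for the projected columns.
CREATE INDEX CONCURRENTLY ON events (tenant_id, x) INCLUDE (id);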
Santa verdict: LIMIT is only a turbo button if the engine can find the first rows without searching the entire parking lot.
3) Naughty: idle-in-transaction (refcursor + client think time)
Why it’s naughty (and this one is pure coal)
Idle-in-transaction is a vacuum’s worst enemy:
- It pins an old snapshot, so autovacuum can’t reclaim dead tuples that still might be visible.
- Bloat grows, indexes bloat, and eventually your “mystery slowness” appears.
- Meanwhile the backend is doing nothing, just holding the database hostage.
The refcursor pattern often creates this by design:
- Begin transaction
- Open cursor
- Fetch some rows
- Client does slow work (several minutes to hours)
- Cursor stays open, transaction stays open
Nice list fixes
- Don’t do application think-time inside a DB transaction. Fetch quickly, commit, process outside, then write back in a new transaction (sketched below the guardrails).
- If you must stream: keep the stream continuous, not “fetch then pause.”
- Enforce guardrails:
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '30s';
ALTER ROLE app_user SET statement_timeout = '2min';
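A minimal sketch of the first fix above (hypothetical table; fetch, commit, process, write back):
BEGIN;
SELECT id, payload FROM work_queue WHERE status = 'new' LIMIT 100;
COMMIT; -- snapshot released; autovacuum is free to clean up
-- application does its slow processing here, with no transaction open
BEGIN;
UPDATE work_queue SET status = 'done' WHERE id = ANY($1);
COMMIT;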
Quick detector:
SELECT pid, usename, wait_event, now() - xact_start AS tx_age, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY 4 DESC;
Santa verdict: A transaction is not a tote bag. Don’t carry it around while you do errands.
4) Naughty: mega-CTE chains (WITH a AS (...), b AS (...), c AS (...) ...)
Why it’s naughty
Long WITH pipelines can compound rowcount estimation errors. Once estimates are off, everything downstream is at risk: join order, join type, memory sizing, spill behavior. They are also a big headache for production support teams debugging functional or data-quality issues.
Best nice-list option: split into temp tables
This gives you:
- a clean pipeline boundary
- the ability to ANALYZE intermediate results (real stats)
- tactical indexes on stages that matter
Pattern:
DROP TABLE IF EXISTS stage1;
CREATE TEMP TABLE stage1 AS
SELECT ...;
ANALYZE stage1;
CREATE INDEX ON stage1 (join_key);
DROP TABLE IF EXISTS stage2;
CREATE TEMP TABLE stage2 AS
SELECT ...
FROM stage1
JOIN ...;
ANALYZE stage2;
DROP TABLE IF EXISTS stage3;
CREATE TEMP TABLE stage3 AS
SELECT ...
FROM stage2
JOIN ...;
ANALYZE stage3;
-- ... repeat for as many stages as the pipeline needs
On the “pg_catalog bloat” concern: in practice, I have seen bigger wins from good autovacuum tuning (cost limit/delay, scale factors, number of workers) than from treating catalog bloat as a primary performance constraint.
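For illustration, a hedged example of the per-table knobs (table name and values are placeholders, not recommendations):
ALTER TABLE big_events SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_vacuum_cost_delay = 2
);
-- cluster-wide: autovacuum_max_workers, autovacuum_vacuum_cost_limit, etc. in postgresql.conf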
Also: yes, many of us are still waiting for “true global temp tables someday.” (pgtt extension doesn’t do anything to reduce catalog bloat)
Santa verdict: If the planner can’t see the pipeline clearly, give it stages and statistics.
5) Naughty: wildcard search LIKE '%foo%' (and friends)
Why it’s naughty
A normal B-tree index can only help when the pattern is anchored at the start (e.g., col LIKE 'foo%'), not when it starts with %.
Nice list fixes
Fix A: pg_trgm
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX CONCURRENTLY ON docs USING gin (title gin_trgm_ops);
pg_trgm provides GIN/GiST operator classes for fast similarity and pattern searches.
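With the trigram index above in place, an unanchored pattern can use a bitmap index scan instead of a full sequential scan; verify it on your own data:
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, title
FROM docs
WHERE title ILIKE '%foo%';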
Fix B: Biscuit (newer alternative)
Biscuit is an index access method designed specifically for fast LIKE/ILIKE pattern matching; it claims to reduce trigram “recheck overhead” for wildcard-heavy queries and is available on GitHub. Evaluate it on your data and query mix.
Santa verdict: Leading % turns your index into holiday decoration: pretty, not load-bearing.
6) Naughty: functions on indexed columns (WHERE lower(email) = ...)
Why it’s naughty
You turned an indexable predicate into an expression; without an expression index, Postgres often can’t use your B-tree index effectively.
Nice list fixes
CREATE INDEX CONCURRENTLY ON users ((lower(email)));
SELECT ...
FROM users
WHERE lower(email) = lower($1);
Santa verdict: If you wrap the column, index the wrapper.
7) Naughty: OR conditions that derail index usage (a = 1 OR b = 2)
Why it’s naughty
OR predicates can push the planner into a compromise plan: either scan too much, or pick a strategy that’s great for one branch and terrible for the other.
Nice list fixes
Split with UNION ALL (careful to preserve semantics):
SELECT ... FROM t WHERE a = 1
UNION ALL
SELECT ... FROM t WHERE b = 2;
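If a row can satisfy both branches, one sketch that preserves semantics is to exclude branch-1 matches from branch 2 (IS DISTINCT FROM keeps it NULL-safe), instead of paying for a dedup with plain UNION; column names here are hypothetical:
SELECT id, a, b FROM t WHERE a = 1
UNION ALL
SELECT id, a, b FROM t WHERE b = 2 AND a IS DISTINCT FROM 1;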
Santa verdict: OR is two queries wearing one trench coat.
8) Naughty: mismatched join types / implicit casts on join keys
Why it’s naughty
If join keys don’t match types, you can:
- prevent index usage
- trigger repeated casting on large relations
- get unexpected nested loops
Nice list fixes
- Fix schema types where possible (sketch below).
- Otherwise cast the parameter, not the column:
WHERE t.uuid_col = $1::uuid
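For the first option, fixing the column type itself, a hedged sketch with hypothetical tables; note this takes a strong lock and can rewrite the table, so schedule it:
ALTER TABLE orders
  ALTER COLUMN customer_id TYPE bigint USING customer_id::bigint;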
Santa verdict: If your join keys can’t agree on a type, they shouldn’t be meeting in production.
9) Naughty: missing indexes for join keys / foreign keys (especially on large tables)
Why it’s naughty
- Joins devolve into large hash builds or repeated scans.
- Deletes/updates on referenced tables become expensive due to referential checks.
Nice list fixes
CREATE INDEX CONCURRENTLY ON child (parent_id);
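Until then, a rough catalog query can flag single-column foreign keys with no index whose leading column matches (an ad-hoc sketch, not exhaustive):
SELECT c.conrelid::regclass AS table_name,
       c.conname AS fk_name,
       a.attname AS fk_column
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid AND a.attnum = c.conkey[1]
WHERE c.contype = 'f'
  AND cardinality(c.conkey) = 1
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.conrelid
      AND i.indkey[0] = c.conkey[1]
  );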
Experimental (not for prod yet): I’m working on an extension, FKHunter, to stop unindexed foreign keys from reaching production. Any improvement suggestions are welcome.
Santa verdict: Foreign keys without supporting indexes are a gift you gave yourself… with no receipt.
10) Naughty: SELECT DISTINCT as a band-aid for join explosions
Why it’s naughty
DISTINCT often hides:
- missing join predicates
- unintended many-to-many relationships
- data modeling issues
It forces a sort or hash aggregate on an inflated intermediate result.
Nice list fixes
- Fix the join: add the missing predicate, or express the filter as an EXISTS instead of joining and deduplicating (sketch below).
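A before/after sketch with hypothetical tables: ask whether a matching row exists instead of joining, inflating, and then deduplicating:
-- Naughty: the join multiplies rows, DISTINCT papers over it
SELECT DISTINCT o.id, o.total
FROM orders o
JOIN order_items i ON i.order_id = o.id
WHERE i.sku = 'GIFT-001';
-- Nice: no row multiplication, nothing to deduplicate
SELECT o.id, o.total
FROM orders o
WHERE EXISTS (
  SELECT 1 FROM order_items i
  WHERE i.order_id = o.id AND i.sku = 'GIFT-001'
);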
Santa verdict: DISTINCT is not deodorant.
If Santa finds your SQL on the Naughty List, don’t take it personally; take it as a to-do list. Pick one offender this week, run EXPLAIN (ANALYZE, BUFFERS), apply the smallest “Nice List” fix, and measure the win. Do that ten times and you won’t just get better latency; you’ll get a calmer pager and a database that actually feels like it’s on holiday too.
PS: Extra content for holiday season
SQL-Clause Before:

SQL-Clause Now:

AI-generated SQL-Clause last Christmas vs. AI-generated SQL-Clause today. Some would say AI lost its soul in the rat race of larger models; the previous SQL-Clause meme had character.