PostgreSQL is powerful, flexible, and impressively stable.
But even Postgres has internal limits — some hard, some soft.
Most of the time, you’ll never encounter them.
But if you push the system hard enough, or build at serious scale, you might crash into these hidden ceilings.
Let's explore the key limits inside PostgreSQL, why they exist, and what happens when you hit them! 🚀
🔹 1. Maximum Columns Per Table
- Limit: 1600 columns per table
- Why: Each row header carries a NULL bitmap with one bit per column, and the tuple header format caps how large that bitmap can grow.
What Happens:
- PostgreSQL throws:
ERROR: tables can have at most 1600 columns
- Even "dropped" columns count unless a full table rewrite occurs.
Tip: Prefer JSONB for highly dynamic schemas.
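To see the ceiling for yourself, you can generate a too-wide table; the sketch below is illustrative (the table and column names are made up):

```sql
-- Build a CREATE TABLE statement with 1601 columns to trigger the error.
DO $$
DECLARE
    cols text;
BEGIN
    SELECT string_agg('c' || i || ' int', ', ') INTO cols
    FROM generate_series(1, 1601) AS i;
    EXECUTE 'CREATE TABLE too_wide (' || cols || ')';
    -- Fails with: ERROR: tables can have at most 1600 columns
END $$;

-- A single JSONB column holds arbitrarily many sparse attributes instead.
CREATE TABLE flexible (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    data jsonb NOT NULL DEFAULT '{}'
);
```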
🔹 2. Maximum Row Size
- Limit: 1.6 TB per row (theoretically: 1600 columns × 1 GB max field size)
- Practical Limit: About 2 KB per row before TOAST kicks in.
What Happens:
- Large fields are automatically moved into a TOAST table (compressed external storage).
- You usually won't see errors until a single field value approaches the 1 GB field-size limit.
Tip: Don't cram huge objects into a single row unless necessary. Use bytea or external storage for big files.
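You can watch TOAST at work with the built-in size functions; a minimal sketch (the docs table is made up):

```sql
-- A value far larger than 2 KB gets compressed and stored out-of-line.
CREATE TABLE docs (id int, body text);
INSERT INTO docs VALUES (1, repeat('PostgreSQL ', 100000));

-- Compare on-disk size with logical length to see TOAST compression.
SELECT pg_column_size(body) AS stored_bytes,
       octet_length(body)   AS logical_bytes
FROM docs;

-- Each table with TOAST-able columns has a companion TOAST table.
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'docs';
```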
🔹 3. Maximum Table Size
- Limit: 32 TB (default block size 8 KB)
What Happens:
- Hitting this limit requires a massive dataset.
- On disk, PostgreSQL splits every table into 1 GB segment files, so you're more likely to exhaust the filesystem itself than hit a per-file OS limit.
Tip: Use table partitioning for massive tables to scale horizontally.
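Declarative partitioning splits one logical table across many physical ones, each with its own 32 TB ceiling. A minimal sketch with made-up names:

```sql
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- Each partition is a separate physical table on disk.
CREATE TABLE events_2024 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
CREATE TABLE events_2025 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
```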
🔹 4. Maximum Database Size
- Limit: No hard limit (only by disk size)
What Happens:
- Practically, performance degrades when catalogs grow huge (millions of tables, indexes, etc.).
- Backups, restores, and planning become slower.
Tip: Archive old data, partition aggressively, and monitor catalog growth.
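A few catalog queries make that monitoring concrete (a minimal sketch):

```sql
-- Total size of the current database.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- How many relations the catalogs are tracking.
SELECT count(*) AS relations FROM pg_class;

-- The ten largest tables, indexes and TOAST included.
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```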
🔹 5. Maximum Index Size
- Limit: 32 TB
What Happens:
- Same as the table-size limit: reaching 32 TB would take extraordinary data volumes.
Tip: Composite indexes and partial indexes help optimize index size and usage.
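For example (a sketch with a made-up orders table), a composite index serves several query shapes at once, while a partial index stores only the rows hot queries touch:

```sql
-- Composite index: one structure covers filters on (tenant_id, created_at).
CREATE INDEX orders_tenant_created_idx ON orders (tenant_id, created_at);

-- Partial index: only 'open' orders are indexed, keeping it small.
CREATE INDEX orders_open_idx ON orders (created_at)
    WHERE status = 'open';
```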
🔹 6. Maximum Number of Tables in a Database
- Soft Limit: Hundreds of thousands in practice (the theoretical cap on relations per database is over a billion)
What Happens:
- System catalogs (pg_class, pg_attribute) grow massive.
- Query planning and autovacuum slow down drastically.
Tip: Organize multi-tenant apps carefully. Consider schemas or a shared-table design with a tenant key instead of a table set per tenant, as sketched below.
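A shared table keyed by tenant keeps the catalog a constant size no matter how many tenants you add; a minimal sketch with made-up names:

```sql
-- One physical table serves every tenant.
CREATE TABLE tenant_orders (
    tenant_id  bigint NOT NULL,
    order_id   bigint NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (tenant_id, order_id)
);

-- Every query filters on the tenant key, so the index stays selective.
SELECT count(*) FROM tenant_orders WHERE tenant_id = 42;
```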
🔹 7. Maximum Number of Connections
- Default Limit: 100 concurrent connections
- Hard Limit: No strict maximum, but each connection is a full backend process, so too many will exhaust memory and CPU.
What Happens:
- New connections are rejected with:
FATAL: sorry, too many clients already
Tip: Use connection pooling (e.g., PgBouncer) to manage thousands of app connections efficiently.
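Checking and raising the ceiling looks like this (the value 300 is illustrative); a pooler like PgBouncer then multiplexes thousands of client connections onto those slots:

```sql
-- Current ceiling and live usage.
SHOW max_connections;
SELECT count(*) AS in_use FROM pg_stat_activity;

-- Raising the ceiling takes effect after a restart,
-- and every extra backend costs memory.
ALTER SYSTEM SET max_connections = 300;
```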
🌎 Visual Summary: PostgreSQL Core Limits
Resource | Limit | Reaction When Hit |
---|---|---|
Columns per table | 1600 | Hard ERROR |
Row size | 1.6 TB | TOAST handling |
Table size | 32 TB | Storage error |
Database size | Disk space | Performance hit |
Index size | 32 TB | Storage error |
Tables per DB | Hundreds of thousands | Catalog bloat |
Connections | 100+ | FATAL error |
🌟 Final Thoughts
PostgreSQL is incredibly scalable — but no system is limitless.
If you know where the walls are, you can design around them safely and creatively!
💊 The best PostgreSQL architects aren't the ones who avoid the limits — they're the ones who plan for them.
Happy scaling! 🚀📈