Vector search is becoming a core workload for AI-driven applications. But do you really need to introduce a new system just to handle it?
We ran a performance benchmark to find out, comparing PostgreSQL (using pgvector + pgvectorscale) with Qdrant on 50 million embeddings.
The results at 99% recall:
- Sub-100ms query latencies
- 471 queries per second (QPS) on Postgres—11x higher throughput than Qdrant (41 QPS)
Head to the full write-up for a deep dive into our vector database comparison.
For vectors, Postgres is all you need.
At 99% recall, Postgres delivers sub-100ms query latencies and handles 11x more query throughput than Qdrant (471 QPS vs. 41 QPS).
The results show that, thanks to pgvectorscale, Postgres can keep up with specialized vector databases, delivering performance as good as, if not better than, theirs at scale. Learn more about why Postgres wins for AI and vector workloads.
Turning PostgreSQL Into a High-Performance Vector Search Engine
How? We built pgvectorscale to push Postgres to its limits for vector workloads—without compromising recall, latency, or cost-efficiency. It turns your favorite relational database into a high-performance vector search engine.
- ✅ No extra systems.
- ✅ No new query languages.
- ✅ Just Postgres.
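To make "just Postgres" concrete, here is a minimal sketch of what setting this up looks like. It assumes the pgvector and pgvectorscale extensions are installed, uses a hypothetical `documents` table, and picks an illustrative embedding dimension of 768—adapt names and dimensions to your own schema.

```sql
-- Enable pgvector and pgvectorscale (CASCADE pulls in pgvector as a dependency)
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;

-- A hypothetical table holding text chunks and their embeddings
CREATE TABLE documents (
    id        BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    contents  TEXT,
    embedding VECTOR(768)  -- dimension must match your embedding model
);

-- Build a StreamingDiskANN index for fast approximate nearest-neighbor search
CREATE INDEX documents_embedding_idx ON documents
    USING diskann (embedding vector_cosine_ops);

-- Query: the 10 documents closest to a query embedding by cosine distance
SELECT id, contents
FROM documents
ORDER BY embedding <=> '[...]'::vector  -- substitute your query vector
LIMIT 10;
```

Because this is ordinary SQL on an ordinary table, the same query can join against your relational data, filter on metadata columns, and run inside transactions—no second system to sync.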
We used RTABench to run a transparent, reproducible evaluation—designed for real-world, high-scale workloads.
Curious about the architecture behind it all?
👉 Read our whitepaper on building Timescale for real-time and AI workloads
It dives into how we engineered Timescale to handle time-series, vector, and relational data—all in one Postgres-native platform.
TL;DR: For many vector workloads, Postgres is all you need.
Have you used Postgres or Qdrant for vector search?
What’s your stack look like today—and where do you feel the friction?
👉 Postgres vs Qdrant: which side are you on? Comment down below!