Recommendation engines, image retrieval platforms, document matching services, and RAG pipelines all rely on finding the nearest neighbors to a given query vector in high-dimensional space. This is where vector similarity search comes in.
But this similarity search becomes a massive challenge when datasets grow to millions or billions of vectors, each potentially having hundreds or thousands of dimensions. Storing these vectors in raw 32-bit floating-point format becomes expensive, both in memory consumption and search latency.
A solution to this problem is Product Quantization (paper), which compresses vectors into short codes while preserving enough structure for distance calculations.
In this piece, we’ll walk through how Product Quantization actually works.
Memory Problem in Vector Search
Consider a dataset of one million 128-dimensional vectors, each stored as 32-bit floats. The memory requirement is straightforward to calculate:
1,000,000 vectors x 128 dimensions x 4 bytes = 512 MB
This seems manageable until we scale to one billion vectors, where the same calculation yields 512 GB. Factor in the overhead of index structures like HNSW graphs, and we quickly exceed the memory capacity of commodity hardware.
The situation worsens with modern embedding models. OpenAI’s text-embedding-3-large produces 3072-dimensional vectors. A billion such vectors would require approximately 12 TB of memory just for the raw vectors.
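As a quick sanity check, the arithmetic can be expressed in a few lines of Python (the helper function is purely illustrative):

def raw_size_gb(num_vectors, dims, bytes_per_dim=4):
    # Size of storing raw float32 vectors, in (decimal) gigabytes
    return num_vectors * dims * bytes_per_dim / 1e9

print(raw_size_gb(1_000_000, 128))       # 0.512 GB
print(raw_size_gb(1_000_000_000, 128))   # 512 GB
print(raw_size_gb(1_000_000_000, 3072))  # 12288 GB, roughly 12 TB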
Traditional dimensionality reduction techniques like PCA can help, but they discard information and distort distances in the embedding space. Product Quantization takes a different approach: it compresses vectors without reducing their dimensionality, preserving enough of the structure of the original embedding space for distance estimates while dramatically reducing storage requirements.
Vector Quantization
Before diving into Product Quantization, let’s go through vector quantization. Vector quantization maps continuous vectors to a finite set of representative vectors called centroids. These centroids form a codebook.
Given a set of k centroids C = {c_1, c_2, ..., c_k}, any input vector x is quantized by finding its nearest centroid:
q(x) = argmin_i ||x - c_i||^2
The vector is then represented by the index of that centroid rather than its full coordinates. Thus, if k = 256, we need only 8 bits to represent any vector.
Every input vector gets mapped to whichever centroid is closest to it, so an entire dataset of potentially millions of unique vectors ends up represented by just k distinct values.
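A minimal sketch of this quantization step, assuming a (k, D) array of centroids has already been learned (for example with k-means), might look like:

import numpy as np

def quantize(x, centroids):
    # q(x) = argmin_i ||x - c_i||^2
    distances = np.sum((centroids - x) ** 2, axis=1)
    return int(np.argmin(distances))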
This simple idea breaks down once we look at the number of centroids required. For a 128-dimensional vector space, we might need on the order of k = 2^64 centroids, depending on the accuracy required. That is infeasible: k-means training needs orders of magnitude more data than the codebook size, and storing the codebook itself becomes impossible.
Product Quantization
Product Quantization resolves this by decomposing the vector space into a Cartesian product of lower-dimensional subspaces, so that the representable vectors are all possible combinations of centroids across those subspaces. Rather than learning one massive codebook for the entire space, PQ learns multiple smaller codebooks, one for each subspace.
Here’s the basic idea. Take a D-dimensional vector x and split it into m equal parts, creating m subvectors of dimension D/m each:
x = [x_1, x_2, ..., x_m]
Each subvector x_j lives in its own D/m-dimensional subspace. For each subspace, train a separate quantizer with k centroids. The total number of possible reconstructed vectors becomes k^m, achieved with only m x k centroids to store.
For example, suppose we use D = 128, m = 8, and k = 256. Each subvector then has 16 dimensions, and we maintain 8 codebooks, each with 256 centroids of 16 dimensions. The total codebook storage is:
8 codebooks x 256 centroids x 16 dimensions x 4 bytes = 128 KB
Yet this represents 256^8 = 2^64 possible reconstructed vectors, matching the precision that would require impossibly large storage with naive vector quantization.
Here’s a concrete example - say total dim = 128, m = 2 and k = 256. Codebook 1 has 256 centroids covering dimensions 1-64. Codebook 2 has 256 centroids covering dimensions 65-128.
The Cartesian product of these two codebooks is every possible pairing:
(centroid 0 from codebook 1, centroid 0 from codebook 2)
(centroid 0 from codebook 1, centroid 1 from codebook 2)
(centroid 0 from codebook 1, centroid 2 from codebook 2)
...
(centroid 255 from codebook 1, centroid 255 from codebook 2)
That’s 256 × 256 = 65,536 unique reconstructed vectors, yet we only stored 256 + 256 = 512 centroids, which we combine to approximately reconstruct any original vector.
Scale this to m = 8 codebooks with k = 256 each: we get 256^8 ≈ 18 quintillion possible reconstructions from just 8 × 256 = 2,048 centroids.
The Cartesian product structure lets us represent an exponentially large set of possible vectors using only a linear amount of storage. Without this structure, we would need to explicitly store every possible reconstruction, which is impossible at these scales.
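To make the Cartesian product concrete, here is a toy illustration with two tiny codebooks of 2 centroids each (the values are made up purely for demonstration):

import itertools
import numpy as np

codebook_1 = np.array([[0.0, 0.0], [1.0, 1.0]])  # covers dims 1-2
codebook_2 = np.array([[5.0, 5.0], [9.0, 9.0]])  # covers dims 3-4

# Every pairing of one centroid from each codebook is a possible reconstruction
reconstructions = [np.concatenate(pair)
                   for pair in itertools.product(codebook_1, codebook_2)]
print(len(reconstructions))  # 4 = 2^2 reconstructions from only 4 stored centroids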
Training Codebooks
Product Quantization codebooks are trained with k-means clustering applied independently to each subspace. The training process proceeds as follows:
- Collect a representative sample of training vectors from our dataset. Between 10,000 and 100,000 vectors typically suffice.
- Split each training vector into m subvectors according to the chosen subspace decomposition.
- For each subspace j, gather all corresponding subvectors from the training set and run k-means clustering to find k centroids.
- Store the resulting m codebooks, each containing k centroids.
Here is a quick Python implementation:
import numpy as np
from sklearn.cluster import KMeans

class ProductQuantizer:
    def __init__(self, m, k):
        self.m = m  # number of subspaces
        self.k = k  # number of centroids per subspace
        self.codebooks = None

    def fit(self, X):
        n, d = X.shape
        assert d % self.m == 0, "Dimension must be divisible by m"
        self.d_sub = d // self.m
        self.codebooks = np.zeros((self.m, self.k, self.d_sub))
        for j in range(self.m):
            # Extract subvectors for this subspace
            start_idx = j * self.d_sub
            end_idx = (j + 1) * self.d_sub
            subvectors = X[:, start_idx:end_idx]
            # Run k-means
            kmeans = KMeans(n_clusters=self.k, random_state=42)
            kmeans.fit(subvectors)
            self.codebooks[j] = kmeans.cluster_centers_
        return self
We will get better results if the vectors we train on are representative of the full dataset; a random subset of the actual vectors typically works well.
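For instance, training the quantizer above on a sample could look like this (the data here is synthetic and only a placeholder for real embeddings; the parameter values are illustrative):

X_train = np.random.randn(10_000, 128).astype(np.float32)  # placeholder embeddings
pq = ProductQuantizer(m=8, k=256)
pq.fit(X_train)
print(pq.codebooks.shape)  # (8, 256, 16): m codebooks, k centroids, D/m dims each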
Vectors to PQ codes
Once codebooks are trained, encoding a vector means finding the nearest centroid in each subspace and recording its index. The result is a PQ code: a sequence of m integers, each ranging from 0 to k-1.
def encode(self, X):
    n = X.shape[0]
    codes = np.zeros((n, self.m), dtype=np.uint8)
    for j in range(self.m):
        start_idx = j * self.d_sub
        end_idx = (j + 1) * self.d_sub
        subvectors = X[:, start_idx:end_idx]
        # Find nearest centroid for each subvector
        distances = np.sum(
            (subvectors[:, np.newaxis, :] -
             self.codebooks[j][np.newaxis, :, :]) ** 2,
            axis=2
        )
        codes[:, j] = np.argmin(distances, axis=1)
    return codes
With k = 256 centroids per subspace, each index fits in a single byte. A 128-dimensional vector that originally required 512 bytes now requires only m = 8 bytes, achieving 64x compression.
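Continuing the illustrative example from above, the compression is easy to verify:

codes = pq.encode(X_train)
print(codes.shape, codes.dtype)  # (10000, 8) uint8
print(X_train.nbytes)            # 10,000 x 128 x 4 bytes = 5,120,000 bytes
print(codes.nbytes)              # 10,000 x 8 x 1 byte   =    80,000 bytes (64x smaller)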
Reconstructing Vectors from PQ codes
A PQ code can be decoded back to an approximate vector by concatenating the corresponding centroids:
def decode(self, codes):
    n = codes.shape[0]
    X_reconstructed = np.zeros((n, self.m * self.d_sub))
    for j in range(self.m):
        start_idx = j * self.d_sub
        end_idx = (j + 1) * self.d_sub
        X_reconstructed[:, start_idx:end_idx] = self.codebooks[j][codes[:, j]]
    return X_reconstructed
The reconstruction is lossy. Increasing k reduces error but increases codebook size. Increasing m reduces error (finer-grained quantization) but increases code size.
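One way to see this tradeoff is to measure the reconstruction error directly (again on the illustrative synthetic data from above):

X_reconstructed = pq.decode(codes)
mse = np.mean((X_train - X_reconstructed) ** 2)
print(mse)  # nonzero: quantization is lossy; shrinks as m or k grow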
Concrete Example
Say we have 128-dimensional vectors. We split them into m = 4 subspaces, so each subspace covers 32 dimensions. Each subspace has its own codebook with k = 3 centroids.
Codebook 1 (dims 1-32): centroid A, centroid B, centroid C
Codebook 2 (dims 33-64): centroid D, centroid E, centroid F
Codebook 3 (dims 65-96): centroid G, centroid H, centroid I
Codebook 4 (dims 97-128): centroid J, centroid K, centroid L
A PQ code like [0, 2, 1, 0] means:
- Take centroid A from codebook 1 (index 0)
- Take centroid F from codebook 2 (index 2)
- Take centroid H from codebook 3 (index 1)
- Take centroid J from codebook 4 (index 0)
Concatenate them: [A | F | H | J] gives a 128-dimensional reconstructed vector. Reconstruction always works this way: pick one centroid from each codebook and join them in order.
The number of distinct vectors we can reconstruct is:
- Codebook 1: 3 choices
- Codebook 2: 3 choices
- Codebook 3: 3 choices
- Codebook 4: 3 choices
Total combinations: 3 × 3 × 3 × 3 = 3^4 = 81 distinct reconstructed vectors.
In general, with m codebooks of k centroids each, there are k choices per codebook, so the total is k × k × k × … (m times) = k^m combinations.
We just stored 4 codebooks × 3 centroids = 12 centroids total. But we could reconstruct 81 different vectors from them.
Distances with PQ codes
The power of Product Quantization lies in computing approximate distances without decoding. There are two distance computation methods: Symmetric Distance Computation (SDC) and Asymmetric Distance Computation (ADC).
Symmetric Distance Computation
SDC computes distances between two PQ codes by summing inter-centroid distances across subspaces; in practice these centroid-to-centroid distances are precomputed into a k x k table per subspace:
def symmetric_distance(self, codes_a, codes_b):
    distance = 0
    for j in range(self.m):
        c_a = self.codebooks[j][codes_a[j]]
        c_b = self.codebooks[j][codes_b[j]]
        distance += np.sum((c_a - c_b) ** 2)
    return distance
SDC introduces quantization error in both vectors being compared.
Asymmetric Distance Computation
ADC keeps the query vector unquantized while comparing against quantized database vectors. This reduces error because only one side is approximated.
The key insight is precomputing a distance table. For a query vector q, compute distances from each query subvector to all centroids in the corresponding codebook:
def compute_distance_table(self, query):
    distance_table = np.zeros((self.m, self.k))
    for j in range(self.m):
        start_idx = j * self.d_sub
        end_idx = (j + 1) * self.d_sub
        query_sub = query[start_idx:end_idx]
        # Distance from query subvector to each centroid
        distance_table[j] = np.sum(
            (query_sub - self.codebooks[j]) ** 2,
            axis=1
        )
    return distance_table
Given this table, computing the distance to any PQ code requires only m lookups and additions:
def adc_distance(self, distance_table, code):
    distance = 0
    for j in range(self.m):
        distance += distance_table[j, code[j]]
    return distance
This is very efficient: computing distances to a million database vectors requires only m table lookups and additions per vector, and the small distance table typically stays in CPU cache.
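As a sketch, the scan over an entire array of PQ codes can also be vectorized in NumPy. The method below is an assumed extension of the ProductQuantizer class sketched above (not part of the original implementation), taking the (n, m) codes array produced by encode:

def adc_search(self, query, codes, top_k=10):
    table = self.compute_distance_table(query)  # (m, k) lookup table
    # For each database code, look up its per-subspace distances and sum them
    distances = table[np.arange(self.m), codes].sum(axis=1)  # shape (n,)
    return np.argsort(distances)[:top_k]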
Product Quantization in Action
Vector databases integrate PQ as a core feature.
Milvus IVF_PQ
index_params = {
    "index_type": "IVF_PQ",
    "metric_type": "L2",
    "params": {
        "nlist": 1024,  # number of clusters
        "m": 8,         # number of subspaces
        "nbits": 8      # bits per subspace
    }
}

search_params = {
    "nprobe": 10  # partitions to search
}
Weaviate PQ
collection.config.update(
    vector_config=Reconfigure.Vectors.update(
        vector_index_config=Reconfigure.VectorIndex.hnsw(
            quantizer=Reconfigure.VectorIndex.Quantizer.pq(
                segments=8,            # equivalent to m
                training_limit=50000   # vectors for training
            )
        )
    )
)
Qdrant
quantization_config = {
    "product": {
        "compression": "x16",  # compression ratio
        "always_ram": True     # keep quantized vectors in RAM
    }
}
Applications in RAG
RAG systems use vector search to find relevant documents for grounding LLM responses. PQ enables RAG at scale by compressing document embeddings.
Consider a knowledge base of 10 million documents, each embedded as a 1536-dimensional vector (typical for OpenAI embeddings). Raw storage requires:
10,000,000 x 1536 x 4 bytes = 61.4 GB
With PQ using m = 48 and nbits = 8:
10,000,000 x 48 bytes = 480 MB
This 128x reduction makes the entire index fit in RAM on modest hardware.
The tradeoff is lower recall compared to storing full-precision vectors. RAG systems compensate for this by:
- Using higher nprobe values during search
- Implementing rescoring with the original vectors (sketched after this list)
- Combining PQ with hybrid search (keyword + semantic)
- Over-retrieving candidates before reranking
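A minimal sketch of the rescoring idea, assuming we keep both the PQ codes and the original full-precision vectors (the function name and the overfetch factor are illustrative):

def search_with_rescoring(pq, query, codes, original_vectors, top_k=10, overfetch=10):
    # Over-retrieve candidates using fast approximate ADC distances
    table = pq.compute_distance_table(query)
    approx = table[np.arange(pq.m), codes].sum(axis=1)
    candidates = np.argsort(approx)[: top_k * overfetch]
    # Rerank those candidates with exact distances on the original vectors
    exact = np.sum((original_vectors[candidates] - query) ** 2, axis=1)
    return candidates[np.argsort(exact)[:top_k]]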
Footnote
In short, PQ splits each vector into chunks and replaces each chunk with the index of the closest centroid. This gives us big compression gains while still letting us compare vectors using lookup tables. Paired with an inverted index, it scales to billions of vectors on standard hardware, with recall being the main tradeoff.