If you’ve worked with embeddings or vector databases like Qdrant, Pinecone, or FAISS, you’ve probably heard terms like dense vectors, sparse vectors, and lately, multi-vectors.
They all represent data as numbers in some high-dimensional space — but how they do it and why it matters is where things get interesting.
Dense Vectors
Here’s the thing: most embeddings you deal with — from OpenAI, Azure OpenAI, HuggingFace, or even Google — are dense vectors.
A dense vector is basically a list of numbers that represents your data — a sentence, an image, or even an audio clip — in a high-dimensional space.
Think of it like this: Every sentence or image becomes a coordinate in a multi-dimensional world where similar things live closer together.
So if you embed:
- “I love playing football”
- “Soccer is my favourite sport”
Their vectors will end up close to each other because they roughly convey the same meaning.
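You can see this for yourself with a few lines of code. Here’s a minimal sketch using the open-source sentence-transformers library — the model name is just an illustrative choice, and the exact similarity score will vary by model:

```python
# Minimal sketch: embed two sentences and compare them.
# Assumes the sentence-transformers package; "all-MiniLM-L6-v2" is an illustrative model choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode both sentences into dense vectors
embeddings = model.encode([
    "I love playing football",
    "Soccer is my favourite sport",
])

# Cosine similarity: the closer to 1.0, the closer the meanings
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity)  # a fairly high score, since both sentences mean roughly the same thing
```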
A dense vector might look like this:
[0.6, 0.3, 0.1, 0.9, 0.4, …]
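The example is shortened for readability — real embedding models output hundreds or even thousands of dimensions. Continuing the sketch above (the 384 figure applies to that particular model, not to embeddings in general):

```python
# Each sentence is one long list of floats; this model produces 384 dimensions.
print(len(embeddings[0]))  # 384 for all-MiniLM-L6-v2
```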