Overview
This overview covers text-based embedding models. LangChain does not currently support multimodal embeddings.
Embedding models transform raw text—such as a sentence, paragraph, or tweet—into a fixed-length vector of numbers that captures its semantic meaning. These vectors allow machines to compare and search text based on meaning rather than exact words.
In practice, this means that texts with similar ideas are placed close together in the vector space. For example, instead of matching only the phrase “machine learning”, embeddings can surface documents that discuss related concepts even when different wording is used.
How it works
- Vectorization — The model encodes each input string as a high-dimensional vector.
- Similarity scoring — Vectors are compared using mathematical metrics to measure how closely related the underlying texts are.
Similarity metrics
Several metrics are commonly used to compare embeddings:
- Cosine similarity — measures the angle between two vectors.
- Euclidean distance — measures the straight-line distance between points.
- Dot product — measures how much one vector projects onto another.
Here’s an example of computing cosine similarity between two vectors:
import numpy as np

def cosine_similarity(vec1, vec2):
    # Cosine similarity: dot product divided by the product of the vector norms
    dot = np.dot(vec1, vec2)
    return dot / (np.linalg.norm(vec1) * np.linalg.norm(vec2))

# query_embedding and document_embedding are vectors produced by an embedding model
similarity = cosine_similarity(query_embedding, document_embedding)
print("Cosine Similarity:", similarity)
Embedding Interface in LangChain
LangChain provides a standard interface for text embedding models (e.g., OpenAI, Cohere, Hugging Face) via the Embeddings interface.
Two main methods are available:
- embed_documents(texts: List[str]) → List[List[float]]: Embeds a list of documents.
- embed_query(text: str) → List[float]: Embeds a single query.
The interface allows queries and documents to be embedded with different strategies, though most providers handle them the same way in practice.
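As an illustration, here is a short sketch of both methods using the DeterministicFakeEmbedding class from langchain_core (the "Fake" integration listed below), which needs no API key and is handy for tests; real providers expose the same interface:

from langchain_core.embeddings import DeterministicFakeEmbedding

# Deterministic fake embeddings with a fixed vector size (256 chosen arbitrarily here)
embeddings = DeterministicFakeEmbedding(size=256)

# Embed a batch of documents -> one vector per document
document_vectors = embeddings.embed_documents(["Hello, world!", "Goodbye, world!"])

# Embed a single query -> one vector
query_vector = embeddings.embed_query("Hello, world!")

print(len(document_vectors), len(query_vector))  # 2 256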
Top integrations
- OpenAI
- Azure
- Google Gemini
- Google Vertex
- AWS
- HuggingFace
- Ollama
- Cohere
- Mistral AI
- Nomic
- NVIDIA
- Voyage AI
- IBM watsonx
- Fake
- xAI
- Perplexity
- DeepSeek
For example, to get started with OpenAI embeddings, install the provider package and set your API key:
pip install -qU langchain-openai

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
embeddings.embed_query("Hello, world!")
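Continuing that example, embed_documents embeds several texts in one call; a minimal sketch (the document strings are illustrative):

documents = [
    "LangChain provides a standard interface for embedding models.",
    "Embeddings capture the semantic meaning of text.",
]

# One vector per input document
document_vectors = embeddings.embed_documents(documents)
print(len(document_vectors), len(document_vectors[0]))  # number of documents, embedding dimension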
Caching
Embeddings can be stored or temporarily cached to avoid recomputing them.
Caching is handled by the CacheBackedEmbeddings wrapper, which stores embeddings in a key-value store: each text is hashed, and the hash is used as the cache key.
The main supported way to initialize a CacheBackedEmbeddings is from_bytes_store. It takes the following parameters:
- underlying_embedder: The embedder to use for embedding.
- document_embedding_cache: Any ByteStore for caching document embeddings.
- batch_size: (optional, defaults to None) The number of documents to embed between store updates.
- namespace: (optional, defaults to "") The namespace to use for the document cache. Helps avoid collisions (e.g., set it to the embedding model name).
- query_embedding_cache: (optional, defaults to None) A ByteStore for caching query embeddings, or True to reuse the same store as document_embedding_cache.
import time
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_core.vectorstores import InMemoryVectorStore
# Create your underlying embeddings model
underlying_embeddings = ... # e.g., OpenAIEmbeddings(), HuggingFaceEmbeddings(), etc.
# Store persists embeddings to the local filesystem.
# This isn't suitable for production use, but is useful for local development and testing.
store = LocalFileStore("./cache/")

cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings,
    store,
    namespace=underlying_embeddings.model,
    # Query embeddings are not cached by default; True reuses the document store
    query_embedding_cache=True,
)
# Example: caching a query embedding
tic = time.time()
print(cached_embedder.embed_query("Hello, world!"))
print(f"First call took: {time.time() - tic:.2f} seconds")
# Subsequent calls use the cache
tic = time.time()
print(cached_embedder.embed_query("Hello, world!"))
print(f"Second call took: {time.time() - tic:.2f} seconds")
In production, you would typically use a more robust persistent store, such as a database or cloud storage. See the stores integrations page for available options.
All integrations