MemoryVectorStore

LangChain offers an in-memory, ephemeral vector store that stores embeddings in memory and performs an exact, linear search for the most similar embeddings. The default similarity metric is cosine similarity, but this can be changed to any of the similarity metrics supported by ml-distance.
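For example, assuming you have the ml-distance package installed, you can pass one of its similarity functions via the similarity option when creating the store. A minimal sketch using Pearson correlation:

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { similarity } from "ml-distance";

// Use Pearson correlation instead of the default cosine similarity.
const customStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye"],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings(),
  { similarity: similarity.pearson }
);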

As it is intended for demos, it does not yet support ids or deletion.

This guide provides a quick overview for getting started with in-memory vector stores. For detailed documentation of all MemoryVectorStore features and configurations head to the API reference.

Overview

Integration details

| Class | Package | PY support | Package latest |
| --- | --- | --- | --- |
| MemoryVectorStore | langchain |  | NPM - Version |

Setup

To use in-memory vector stores, you'll need to install the langchain package.

This guide will also use OpenAI embeddings, which require you to install the @langchain/openai integration package. You can also use other supported embeddings models if you wish.

yarn add langchain @langchain/openai
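
If you use npm or pnpm instead of yarn, the equivalent commands are:

npm install langchain @langchain/openai

pnpm add langchain @langchain/openai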

Credentials

There are no required credentials to use in-memory vector stores.

If you are using OpenAI embeddings for this guide, you’ll need to set your OpenAI key as well:

process.env.OPENAI_API_KEY = "YOUR_API_KEY";

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

// process.env.LANGCHAIN_TRACING_V2="true"
// process.env.LANGCHAIN_API_KEY="your-api-key"

Instantiation

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const vectorStore = new MemoryVectorStore(embeddings);

Manage vector store

Add items to vector store

import type { Document } from "@langchain/core/documents";

const document1: Document = {
  pageContent: "The powerhouse of the cell is the mitochondria",
  metadata: { source: "https://example.com" },
};

const document2: Document = {
  pageContent: "Buildings are made out of brick",
  metadata: { source: "https://example.com" },
};

const document3: Document = {
  pageContent: "Mitochondria are made out of lipids",
  metadata: { source: "https://example.com" },
};

const documents = [document1, document2, document3];

await vectorStore.addDocuments(documents);
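
Alternatively, you can create and populate the store in one step with the static MemoryVectorStore.fromDocuments factory, which embeds and adds the given documents immediately. A sketch reusing the documents array and embeddings from above:

const vectorStoreFromDocs = await MemoryVectorStore.fromDocuments(
  documents,
  embeddings
);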

Query vector store

Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while your chain or agent is running.

Query directly

Performing a simple similarity search can be done as follows:

const filter = (doc) => doc.metadata.source === "https://example.com";

const similaritySearchResults = await vectorStore.similaritySearch(
  "biology",
  2,
  filter
);

for (const doc of similaritySearchResults) {
  console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);
}
* The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* Mitochondria are made out of lipids [{"source":"https://example.com"}]

The filter is optional: it must be a predicate function that takes a document as input and returns true or false depending on whether the document should be returned.

If you want to execute a similarity search and receive the corresponding scores you can run:

const similaritySearchWithScoreResults =
  await vectorStore.similaritySearchWithScore("biology", 2, filter);

for (const [doc, score] of similaritySearchWithScoreResults) {
  console.log(
    `* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(
      doc.metadata
    )}]`
  );
}
* [SIM=0.165] The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* [SIM=0.148] Mitochondria are made out of lipids [{"source":"https://example.com"}]

Query by turning into retriever

You can also transform the vector store into a retriever for easier usage in your chains:

const retriever = vectorStore.asRetriever({
  // Optional filter
  filter: filter,
  k: 2,
});

await retriever.invoke("biology");
[
  Document {
    pageContent: 'The powerhouse of the cell is the mitochondria',
    metadata: { source: 'https://example.com' },
    id: undefined
  },
  Document {
    pageContent: 'Mitochondria are made out of lipids',
    metadata: { source: 'https://example.com' },
    id: undefined
  }
]

Maximal marginal relevance

This vector store also supports maximal marginal relevance (MMR), a technique that first fetches a larger number of results (given by searchKwargs.fetchK) using classic similarity search, then reranks them for diversity and returns the top k results. This helps guard against redundant information:

const mmrRetriever = vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 10,
  },
  // Optional filter
  filter: filter,
  k: 2,
});

await mmrRetriever.invoke("biology");
[
  Document {
    pageContent: 'The powerhouse of the cell is the mitochondria',
    metadata: { source: 'https://example.com' },
    id: undefined
  },
  Document {
    pageContent: 'Buildings are made out of brick',
    metadata: { source: 'https://example.com' },
    id: undefined
  }
]

Usage for retrieval-augmented generation

For guides on how to use this vector store for retrieval-augmented generation (RAG), see the RAG tutorials and how-to guides in these docs.
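
As a quick illustration, here is a minimal RAG sketch that pipes the retriever from above into a prompt and a chat model. The prompt wording and the gpt-4o-mini model name are illustrative assumptions, not part of this integration:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";

const ragPrompt = ChatPromptTemplate.fromTemplate(
  `Answer the question using only the context below.

Context:
{context}

Question: {question}`
);

const ragChain = RunnableSequence.from([
  {
    // Retrieve documents and join their contents into one context string.
    context: retriever.pipe((docs) =>
      docs.map((doc) => doc.pageContent).join("\n")
    ),
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  new ChatOpenAI({ model: "gpt-4o-mini" }), // assumed model name
  new StringOutputParser(),
]);

await ragChain.invoke("What is the powerhouse of the cell?");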

API reference

For detailed documentation of all MemoryVectorStore features and configurations head to the API reference.

