# About vector search
Memorystore for Valkey supports storing and querying vector data. This page provides
information about vector search on Memorystore for Valkey.
Vector search on Memorystore for Valkey is compatible with the open-source LLM
framework [LangChain](https://python.langchain.com/docs/get_started/introduction).
Using vector search with LangChain lets you build solutions for the following
use cases:
- Retrieval Augmented Generation (RAG)
- LLM cache
- Recommendation engine
- Semantic search
- Image similarity search
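As a concrete illustration of the semantic-search use case, the sketch below shows how a KNN vector query might be issued with the `redis-py` client, which speaks the same protocol as Valkey. The index name, field name, host address, and embedding values are hypothetical, and the `FT.SEARCH` syntax follows the common open-source search-command form; verify the exact supported syntax against the Memorystore for Valkey command reference.

```python
# Hypothetical sketch of a KNN vector query. Index name, field name, and
# connection details are placeholders, not part of the Memorystore docs.
import struct

def to_float32_blob(vector):
    # Query vectors are passed to FT.SEARCH as a packed little-endian
    # float32 byte string.
    return struct.pack(f"<{len(vector)}f", *vector)

def build_knn_query(k, field="embedding"):
    # Common KNN query form: return the k nearest neighbors to the vector
    # bound to the $query_vec parameter.
    return f"*=>[KNN {k} @{field} $query_vec]"

# Shape of a query against a running instance (shown for illustration only):
# import redis
# client = redis.Redis(host="10.0.0.2", port=6379)  # placeholder address
# results = client.execute_command(
#     "FT.SEARCH", "doc_index", build_knn_query(3),
#     "PARAMS", "2", "query_vec", to_float32_blob([0.1, 0.2, 0.3, 0.4]),
#     "DIALECT", "2",
# )
```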
The advantage of using Memorystore to store your Gen AI data, as opposed
to other Google Cloud databases, is Memorystore's speed. Vector
search on Memorystore for Valkey uses multi-threaded query execution, resulting in
high query throughput (QPS) at low latency.
Memorystore also provides two distinct search approaches to help you find the right balance between speed and accuracy. The HNSW (Hierarchical Navigable Small World) approach delivers fast, approximate results, and is ideal for large datasets where a close match is sufficient. If you require exact results, the FLAT approach produces precise answers, though it may take slightly longer to process.
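To make the trade-off concrete, the sketch below builds the argument lists for creating an HNSW index and a FLAT index over the same vector field. The index and field names are placeholders, and the attribute syntax (TYPE, DIM, DISTANCE_METRIC) follows the common open-source `FT.CREATE` form; check the Memorystore for Valkey documentation for the exact supported parameters.

```python
# Hypothetical sketch: index/field names and the attribute syntax are
# assumptions based on the common open-source FT.CREATE command form.
def vector_index_args(index_name, algorithm, dim, metric="COSINE"):
    if algorithm not in ("HNSW", "FLAT"):
        raise ValueError("algorithm must be 'HNSW' or 'FLAT'")
    # The count "6" says three attribute pairs follow:
    # TYPE, DIM, and DISTANCE_METRIC.
    return [
        "FT.CREATE", index_name,
        "SCHEMA", "embedding", "VECTOR", algorithm, "6",
        "TYPE", "FLOAT32",
        "DIM", str(dim),
        "DISTANCE_METRIC", metric,
    ]

# Fast, approximate search over a large dataset:
hnsw_args = vector_index_args("docs_hnsw", "HNSW", 768)
# Exact (brute-force) search when precision matters more than speed:
flat_args = vector_index_args("docs_flat", "FLAT", 768)
```

Either argument list could then be passed to a client's raw command interface (for example, `redis-py`'s `execute_command`) against a Memorystore for Valkey instance.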
If you want to optimize your application for the fastest vector data read and write
speeds, Memorystore for Valkey is likely the best option for you.
Important: To use vector search, your instance must be created after the feature launch date of September 13, 2024. If your instance was created prior to this date, you need to [create](/memorystore/docs/valkey/create-instances) a new instance to use this feature.

Last updated 2025-08-28 UTC.