About vector search
Memorystore for Valkey supports storing and querying vector data. This page provides
information about vector search on Memorystore for Valkey.
Important: To use vector search, your instance must be created after the feature launch date of September 13, 2024. If your instance was created prior to this date, you will need to create a new instance to use this feature.
Vector search on Memorystore for Valkey is compatible with the open-source LLM
framework LangChain.
Using vector search with LangChain lets you build solutions for the following use cases:
Retrieval-Augmented Generation (RAG)
LLM cache
Recommendation engine
Semantic search
Image similarity search
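As an illustration of the data flow behind these use cases: Valkey-style vector fields store embeddings as raw little-endian float32 byte blobs (for example, in a hash field written with HSET). The helper names below are hypothetical; this is a minimal encoding sketch, not a complete client integration:

```python
import struct

def to_vector_blob(embedding):
    """Pack a list of floats into the little-endian float32 byte
    blob that a vector field expects (e.g. as an HSET field value)."""
    return struct.pack(f"<{len(embedding)}f", *embedding)

def from_vector_blob(blob):
    """Inverse of to_vector_blob, useful for inspecting stored vectors."""
    count = len(blob) // 4
    return list(struct.unpack(f"<{count}f", blob))

embedding = [1.0, 2.0, 3.0, 4.0]
blob = to_vector_blob(embedding)
print(len(blob))  # 4 bytes per float32 component -> 16
```

The same blob format is what a KNN query passes as its parameter when searching the index.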
The advantage of using Memorystore to store your generative AI data, as opposed
to other Google Cloud databases, is Memorystore's speed. Vector search on
Memorystore for Valkey leverages multi-threaded queries, resulting in
high query throughput (QPS) at low latency.
Memorystore also provides two distinct search approaches to help you find the right balance between speed and accuracy. The HNSW (Hierarchical Navigable Small World) option delivers fast, approximate results, ideal for large datasets where a close match is sufficient. If you require absolute precision, the FLAT approach produces exact answers, though it may take slightly longer to process.
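To make the HNSW versus FLAT choice concrete, here is a hedged sketch of how each index type might be declared with the FT.CREATE command. The index name, key prefix, and field name are hypothetical, and the argument list would be sent through a client such as redis-py's `execute_command`:

```python
def create_index_args(index_name, dim, algorithm="HNSW"):
    """Build an FT.CREATE argument list for a vector index over hash
    keys prefixed with 'doc:'. algorithm is 'HNSW' (fast, approximate)
    or 'FLAT' (exact, slower on large datasets). The '6' counts the
    attribute tokens that follow (three name/value pairs)."""
    if algorithm not in ("HNSW", "FLAT"):
        raise ValueError("algorithm must be 'HNSW' or 'FLAT'")
    return [
        "FT.CREATE", index_name,
        "ON", "HASH",
        "PREFIX", "1", "doc:",
        "SCHEMA", "embedding", "VECTOR", algorithm, "6",
        "TYPE", "FLOAT32",
        "DIM", str(dim),
        "DISTANCE_METRIC", "COSINE",
    ]

# e.g. with redis-py: client.execute_command(*create_index_args("idx", 768))
```

Switching between the two approaches is then a one-argument change at index-creation time; queries against either index look the same.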
If you want to optimize your application for the fastest vector data read and write speeds, Memorystore for Valkey is likely the best option for you.
Last updated 2025-08-19 UTC.