Last updated (UTC): 2025-08-25.

# Build LLM-powered applications using LangChain

| **Preview
| --- LangChain**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
| of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA features are available "as is" and might have limited support.
|
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

This page introduces how to build
LLM-powered applications using
[LangChain](https://www.langchain.com/). The overviews on this
page link to procedure guides in GitHub.

What is LangChain?
------------------

LangChain is an LLM orchestration framework that helps developers build
generative AI applications and retrieval-augmented generation (RAG) workflows.
It provides the structure, tools, and components to streamline complex large
language model (LLM) workflows. For more information about LLMs, their use
cases, and the specific models and services that Google offers, see an
[overview of LLM concepts and services in Google Cloud](/ai/llms).

For more information about LangChain, see the [Google
LangChain](https://python.langchain.com/docs/integrations/platforms/google)
page. For more information about the LangChain framework, see the
[LangChain](https://python.langchain.com/docs/get_started/introduction)
product documentation.

LangChain components for AlloyDB
--------------------------------

- [Vector store](#vector-store)
- [Document loader](#document-loader)
- [Chat message history](#chat-message-history)

Learn how to use LangChain with the
[LangChain Quickstart for AlloyDB](https://github.com/googleapis/langchain-google-alloydb-pg-python/blob/main/samples/langchain_quick_start.ipynb).
This quickstart creates an application that accesses a Netflix movie dataset so
that users can interact with movie data.

Vector store for AlloyDB
------------------------

A vector store retrieves and stores documents and metadata from a vector
database. It gives an application the ability to perform semantic searches that
interpret the meaning of a user query. This type of search is called a vector
search, and it can find topics that match the query conceptually. At query
time, the vector store retrieves the embedding vectors that are most similar to
the embedding of the search request.
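As a minimal sketch of this flow, the following example uses the `AlloyDBVectorStore` class from the `langchain-google-alloydb-pg` package. It assumes a reachable AlloyDB instance; the project, cluster, instance, database, and table names are placeholders you would replace with your own, and the embedding model name is only an illustration:

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore
from langchain_google_vertexai import VertexAIEmbeddings

# Configure a connection pool to a (hypothetical) AlloyDB instance.
engine = AlloyDBEngine.from_instance(
    project_id="my-project",
    region="us-central1",
    cluster="my-cluster",
    instance="my-instance",
    database="my-database",
)

# Create a table with a vector column sized for the embedding model.
engine.init_vectorstore_table(table_name="movie_vectors", vector_size=768)

# Use Vertex AI to embed both stored documents and incoming queries.
embedding = VertexAIEmbeddings(
    model_name="textembedding-gecko@003", project="my-project"
)

store = AlloyDBVectorStore.create_sync(
    engine=engine,
    table_name="movie_vectors",
    embedding_service=embedding,
)

# Store embedded texts, then run a semantic (vector) search against them.
store.add_texts(["A heist thriller set inside dreams.", "A space western."])
docs = store.similarity_search("movies about dreaming", k=1)
```

Because the search compares embeddings rather than keywords, the query "movies about dreaming" can match the dream-heist description even though the exact words differ.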
In LangChain, a vector store takes care of storing embedded data and performing
the vector search for you.

To work with the vector store in AlloyDB, use the `AlloyDBVectorStore` class.

For more information, see the
[LangChain vector stores](https://python.langchain.com/docs/how_to/#vector-stores)
product documentation.

### Vector store procedure guide

The [AlloyDB guide for vector store](https://github.com/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/vector_store.ipynb) shows you how to do the following:

- Install the integration package and LangChain
- Create an `AlloyDBEngine` object and configure a connection pool to your AlloyDB database
- Initialize a table for the vector store
- Set up an embedding service using `VertexAIEmbeddings`
- Initialize `AlloyDBVectorStore`
- Add and delete documents
- Search for similar documents
- Add a vector index to improve search performance
- Create a custom vector store to connect to a pre-existing AlloyDB for PostgreSQL database that has a table with vector embeddings

Document loader for AlloyDB
---------------------------

The document loader saves, loads, and deletes LangChain `Document`
objects. For example, you can load data for processing into embeddings and
either store it in the vector store or use it as a tool to provide specific
context to chains.

To load documents from AlloyDB, use the `AlloyDBLoader` class. `AlloyDBLoader`
returns a list of documents from a table, using the first column as page
content and all other columns as metadata. The default table has the first
column as page content and the second column as JSON metadata. Each row becomes
a document.
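A minimal sketch of that default behavior, assuming the same placeholder AlloyDB resource names as before and a hypothetical `movies` table:

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBLoader

# Connection pool to a (hypothetical) AlloyDB instance.
engine = AlloyDBEngine.from_instance(
    project_id="my-project",
    region="us-central1",
    cluster="my-cluster",
    instance="my-instance",
    database="my-database",
)

loader = AlloyDBLoader.create_sync(
    engine=engine,
    table_name="movies",
)

# Each row becomes one Document: the first column supplies page_content,
# and the remaining columns become metadata.
documents = loader.load()
```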
Instructions for customizing these settings are in the
[procedure guide](#langchain-procedures).

Use the `AlloyDBSaver` class to save and delete documents.

For more information, see the [LangChain Document
loaders](https://python.langchain.com/docs/how_to/#document-loaders) topic.

### Document loader procedure guide

The [AlloyDB guide for document loader](https://github.com/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/document_loader.ipynb) shows you how to do the following:

- Install the integration package and LangChain
- Load documents from a table
- Add a filter to the loader
- Customize the connection and authentication
- Customize `Document` construction by specifying custom content and metadata
- Use and customize an `AlloyDBSaver` to store and delete documents

Chat message history for AlloyDB
--------------------------------

Question-and-answer applications require a history of the things said in the
conversation to give the application context for answering further questions
from the user. The LangChain `ChatMessageHistory` class lets the application
save messages to a database and retrieve them when needed to formulate further
answers.
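AlloyDB provides this through the `AlloyDBChatMessageHistory` class. A minimal sketch of the save-and-retrieve pattern, again with placeholder resource names and a hypothetical session ID:

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBChatMessageHistory

# Connection pool to a (hypothetical) AlloyDB instance.
engine = AlloyDBEngine.from_instance(
    project_id="my-project",
    region="us-central1",
    cluster="my-cluster",
    instance="my-instance",
    database="my-database",
)

# Create a table to hold the stored messages.
engine.init_chat_history_table(table_name="message_store")

# Messages are grouped by session_id, one session per conversation.
history = AlloyDBChatMessageHistory.create_sync(
    engine=engine,
    session_id="user-1234",
    table_name="message_store",
)

# Persist both sides of the exchange so later turns have context.
history.add_user_message("Which movies are about dreams?")
history.add_ai_message("One example is a heist thriller set inside dreams.")
```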
A message can be a question, an answer, a statement, a greeting, or any other
piece of text that the user or application gives during the conversation.
`ChatMessageHistory` stores each message and chains messages together for each
conversation.

AlloyDB extends this class with `AlloyDBChatMessageHistory`.

### Chat message history procedure guide

The [AlloyDB guide for chat message history](https://github.com/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/chat_message_history.ipynb) shows you how to do the following:

- Install the integration package and LangChain
- Create an `AlloyDBEngine` object and configure a connection pool to your AlloyDB database
- Initialize a table
- Initialize the `AlloyDBChatMessageHistory` class to add and delete messages
- Create a chain for message history using the LangChain Expression Language (LCEL)

What's next
-----------

- [Migrate data from a vector database to AlloyDB using LangChain](/alloydb/docs/ai/migrate-data-from-langchain-vector-stores-to-alloydb)