As of April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have not used them before, including new projects. For details, see Model versions and lifecycle.
# Use a LlamaIndex Query Pipeline agent

| **Preview**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
| of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA features are available "as is" and might have limited support.
|
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

In addition to the general instructions for [using an agent](/vertex-ai/generative-ai/docs/agent-engine/use),
this page describes features that are specific to `LlamaIndexQueryPipelineAgent`.

Before you begin
----------------

This tutorial assumes that you have read and followed the instructions in:

- [Develop a LlamaIndexQueryPipeline agent](/vertex-ai/generative-ai/docs/agent-engine/develop/llama-index/query-pipeline): to develop `agent` as an instance of `LlamaIndexQueryPipelineAgent`.
- [User authentication](/vertex-ai/generative-ai/docs/agent-engine/set-up#authentication): to authenticate as a user for querying the agent.

Supported operations
--------------------

The following operations are supported for `LlamaIndexQueryPipelineAgent`:

- [`query`](/vertex-ai/generative-ai/docs/agent-engine/use#query-agent): for getting a response to a query synchronously.

The `query` method supports the following type of argument:

- [`input`](#input-messages): the messages to be sent to the agent.

Query the agent
---------------

The command:

    agent.query(input="What is Paul Graham's life in college?")

is equivalent to the following (in full form):

    agent.query(input={"input": "What is Paul Graham's life in college?"})

To customize the input dictionary, see
[Customize the prompt template](/vertex-ai/generative-ai/docs/agent-engine/develop/llama-index/query-pipeline#prompt-template).

You can also customize the agent's behavior beyond `input` by passing additional keyword arguments to `query()`:

    response = agent.query(
        input={
            "input": [
                "What is Paul Graham's life in college?",
                "How did Paul Graham's college experience shape his career?",
                "How did Paul Graham's college experience shape his entrepreneurial mindset?",
            ],
        },
        batch=True,  # Run the pipeline in batch mode and pass a list of inputs.
    )
    print(response)

See the [`QueryPipeline.run` code](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/query_pipeline/query.py#L392) for a complete list of available parameters.

What's next
-----------

- [Use an agent](/vertex-ai/generative-ai/docs/agent-engine/use).
- [Evaluate an agent](/vertex-ai/generative-ai/docs/agent-engine/evaluate).
- [Manage deployed agents](/vertex-ai/generative-ai/docs/agent-engine/manage).
- [Get support](/vertex-ai/generative-ai/docs/agent-engine/support).

Last updated 2025-08-25 UTC.
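As an aside, the string/dict equivalence described under "Query the agent" can be sketched with a small, purely illustrative normalization helper. This is a hypothetical sketch, not the actual Vertex AI SDK implementation: `normalize_query_input` is an invented name used only to show how a bare string maps onto the full `{"input": ...}` dict form.

```python
# Hypothetical sketch of the kind of input normalization a query() wrapper
# might perform. Not part of the Vertex AI SDK; for illustration only.

def normalize_query_input(input_value):
    """Wrap a bare string into the dict form the query pipeline expects."""
    if isinstance(input_value, str):
        return {"input": input_value}
    # Assume anything else is already a mapping of pipeline inputs
    # (e.g. a dict whose "input" key holds a list when batch=True).
    return dict(input_value)

print(normalize_query_input("What is Paul Graham's life in college?"))
# -> {'input': "What is Paul Graham's life in college?"}
```

Under this sketch, both call forms shown above resolve to the same dictionary before the pipeline runs, which is why they return the same response.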