Query Blob Storage data
=======================

Last updated (UTC): 2025-03-04.

This document describes how to query data stored in an
[Azure Blob Storage BigLake table](/bigquery/docs/omni-azure-create-external-table).

Before you begin
----------------

Ensure that you have a [Blob Storage BigLake table](/bigquery/docs/omni-azure-create-external-table).

### Required roles

To query Blob Storage BigLake tables, ensure
that the caller of the BigQuery API has the following roles:

- BigQuery Connection User (`roles/bigquery.connectionUser`)
- BigQuery Data Viewer (`roles/bigquery.dataViewer`)
- BigQuery User (`roles/bigquery.user`)

The caller can be your account or a
[Blob Storage connection service account](/bigquery/docs/omni-azure-create-connection#create_an_azure_connection).
Depending on your permissions, you can
grant these roles to yourself or ask your administrator
to grant them to you. For more information about granting roles, see
[Viewing the grantable roles on resources](/iam/docs/viewing-grantable-roles).

The following permissions are required to query
Blob Storage BigLake tables:

#### Required permissions

- `bigquery.connections.use`
- `bigquery.jobs.create`
- `bigquery.readsessions.create` (only required if you are [reading data with the
  BigQuery Storage Read API](/bigquery/docs/reference/storage))
- `bigquery.tables.get`
- `bigquery.tables.getData`

You might also be able to get these permissions with [custom roles](/iam/docs/creating-custom-roles)
or other [predefined roles](/iam/docs/understanding-roles).

Query Blob Storage BigLake tables
---------------------------------

After creating a Blob Storage BigLake table, you can [query it using
GoogleSQL syntax](/bigquery/docs/running-queries), the same as if
it were a standard BigQuery table.

The [cached query results](/bigquery/docs/cached-results)
are stored in a BigQuery temporary table. To query a temporary
BigLake table, see
[Query a temporary BigLake table](#query-temp-biglake-table).
For more information about BigQuery Omni limitations and quotas, see
[limitations](/bigquery/docs/omni-introduction#limitations)
and [quotas](/bigquery/docs/omni-introduction#quotas_and_limits).

When creating a reservation in a BigQuery Omni region, use the
Enterprise edition. To learn how to create a reservation with an edition, see
[Create reservations](/bigquery/docs/reservations-tasks#create_reservations).

Run a query on the Blob Storage BigLake table:

1. In the Google Cloud console, go to the **BigQuery** page.

   [Go to BigQuery](https://console.cloud.google.com/bigquery)

2. In the query editor, enter the following statement:

   ```googlesql
   SELECT * FROM DATASET_NAME.TABLE_NAME;
   ```

   Replace the following:

   - DATASET_NAME: the name of the dataset that you created
   - TABLE_NAME: the name of the BigLake table that you created

3. Click **Run**.

For more information about how to run queries, see [Run an interactive query](/bigquery/docs/running-queries#queries).

Query a temporary table
-----------------------

BigQuery creates temporary tables to store query results.
To retrieve query results from temporary tables, you can use the Google Cloud console
or the [BigQuery API](/bigquery/docs/reliability-read#read_with_api).

Select one of the following options:

### Console

When you [query a BigLake table](#query-biglake-table) that
references external cloud data, you can view the query results displayed
in the Google Cloud console.

### API

To query a BigLake table using the API, follow these steps:

1. Create a [Job object](/bigquery/docs/reference/rest/v2/Job).
2. Call the [`jobs.insert` method](/bigquery/docs/reference/rest/v2/jobs/insert) to run the query asynchronously or the [`jobs.query` method](/bigquery/docs/reference/rest/v2/jobs/query) to run the query synchronously, passing in the `Job` object.
3. Read the rows by calling the [`jobs.getQueryResults` method](/bigquery/docs/reference/rest/v2/jobs/getQueryResults) with the job reference, or the [`tabledata.list` method](/bigquery/docs/reference/rest/v2/tabledata/list) with the table reference of the query result.

Query the `_FILE_NAME` pseudocolumn
-----------------------------------

Tables based on external data sources provide a pseudocolumn named `_FILE_NAME`. This
column contains the fully qualified path to the file to which the row belongs. This column is
available only for tables that reference external data stored in
**Cloud Storage**, **Google Drive**,
**Amazon S3**, and **Azure Blob Storage**.

The `_FILE_NAME` column name is reserved, which means that you cannot create a column
by that name in any of your tables. To select the value of `_FILE_NAME`, you must use
an alias. The following example query demonstrates selecting `_FILE_NAME` by assigning
the alias `fn` to the pseudocolumn.

```bash
bq query \
--project_id=PROJECT_ID \
--use_legacy_sql=false \
'SELECT
   name,
   _FILE_NAME AS fn
 FROM
   `DATASET.TABLE_NAME`
 WHERE
   name LIKE "%Alex%"'
```

Replace the following:

- PROJECT_ID: a valid project ID (this flag is not required if you use Cloud Shell or if you set a default project in the Google Cloud CLI)
- DATASET: the name of the dataset that stores the permanent external table
- TABLE_NAME: the name of the permanent external table

When the query has a filter predicate on the `_FILE_NAME` pseudocolumn,
BigQuery attempts to skip reading files that do not satisfy the filter. Similar
recommendations to
[querying ingestion-time partitioned tables using pseudocolumns](/bigquery/docs/querying-partitioned-tables#query_an_ingestion-time_partitioned_table)
apply when constructing query predicates with the `_FILE_NAME` pseudocolumn.

What's next
-----------

- Learn about [using SQL in BigQuery](/bigquery/docs/introduction-sql).
- Learn about [BigQuery Omni](/bigquery/docs/omni-introduction).
- Learn about [BigQuery quotas](/bigquery/quotas).
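Putting the pieces together, the query flow described in this document can also be sketched with the BigQuery Python client library (`google-cloud-bigquery`). This is a minimal sketch, not the method prescribed by this page: the dataset, table, and `azure://` path prefix below are hypothetical placeholders, and actually running the query requires a project with a BigQuery Omni connection and valid credentials.

```python
def build_file_filter_query(dataset: str, table: str, file_prefix: str) -> str:
    """Build a GoogleSQL query over a BigLake table that filters on the
    _FILE_NAME pseudocolumn.

    _FILE_NAME is reserved, so it is aliased (here as `fn`) to be selected.
    A prefix predicate on _FILE_NAME lets BigQuery attempt to skip files
    that do not satisfy the filter.
    """
    return (
        f"SELECT name, _FILE_NAME AS fn "
        f"FROM `{dataset}.{table}` "
        f"WHERE _FILE_NAME LIKE '{file_prefix}%'"
    )


def run_query(sql: str):
    """Run the query synchronously and return the result rows.

    Deferred import so the query builder above is usable without the
    client library installed.
    """
    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials
    return list(client.query(sql).result())  # blocks until the job finishes


if __name__ == "__main__":
    # Hypothetical names; replace with your own dataset, table, and prefix.
    sql = build_file_filter_query(
        "my_dataset",
        "my_biglake_table",
        "azure://myaccount.blob.core.windows.net/mycontainer/logs/",
    )
    print(sql)
    # run_query(sql)  # uncomment in an authenticated environment
```

`client.query(...)` corresponds to the synchronous `jobs.query`-style flow described in the API section; `result()` pages through the rows of the temporary result table for you.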