Last updated (UTC): 2025-08-18.

# Amazon Athena

Looker supports connections to [Amazon Athena](https://aws.amazon.com/athena/), an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Amazon Athena is serverless, so there is no infrastructure to manage. You are charged only for the queries that you run.

Encrypting network traffic
--------------------------

It is a best practice to encrypt network traffic between the Looker application and your database. Consider one of the options described on the [Enabling secure database access](/looker/docs/enabling-secure-db-access) documentation page.

Configuring an Amazon Athena connection
---------------------------------------

This page describes how to connect Looker to an Amazon Athena instance.

1. Ensure that you have the following:

   - A pair of Amazon AWS access keys.
   - The S3 bucket containing the data that you want to query in Looker with Amazon Athena. The Amazon AWS access keys must have read-write access to this bucket.

     > Amazon Athena must have access to this S3 bucket through either a role or a permission set, as well as through firewall rules. Do not add security rules to the S3 bucket for Looker's IP, because doing so can inadvertently block Amazon Athena's access to the S3 bucket. (For dialects other than Amazon Athena, you may want to limit access to the data at the network layer with an IP allowlist, as described on the [Enabling secure database access](/looker/docs/enabling-secure-db-access) documentation page.)
   - Knowledge of where your Amazon Athena instance data is located.
     The region name appears in the upper-right portion of the Amazon console.

2. In the **Admin** section of Looker, select **Connections**, and then click **Add Connection**.

3. Fill out the connection details:

   > **Note:** The [Simba Athena JDBC Driver with SQL Connector Installation and Configuration Guide](https://s3.amazonaws.com/athena-downloads/drivers/JDBC/SimbaAthenaJDBC_2.0.2/docs/Simba+Athena+JDBC+Driver+Install+and+Configuration+Guide.pdf) provides more details about custom JDBC configurations and other configuration information.

   - **Name**: Specify the name of the connection. This is how you will refer to the connection in LookML projects.
   - **Dialect**: Select **Amazon Athena**.
   - **Host** and **Port**: Specify the host and port as described in the [Athena documentation on the JDBC URL format](http://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html#jdbc-url-format). The host must be a valid Amazon endpoint (such as `athena.eu-west-1.amazonaws.com`), and the port should remain `443`. An up-to-date list of endpoints that support Athena is available on the [AWS General Reference](http://docs.aws.amazon.com/general/latest/gr/rande.html#athena) page.
   - **Database**: Specify the default database that you would like modeled. Other databases can be accessed, but Looker treats this database as the default.
   - **Username**: Specify the AWS access key ID.
   - **Password**: Specify the AWS secret access key.
   - **Enable PDTs**: Use this toggle to enable [persistent derived tables (PDTs)](/looker/docs/derived-tables#persistent-derived-tables). Enabling PDTs reveals additional PDT fields and the [**PDT Overrides**](/looker/docs/connecting-to-your-db#pdt-overrides) section for the connection.
   - **Temp Database**: Specify the name of the output directory in your S3 bucket where you want Looker to write your PDTs.
     The full path to your output directory must be specified in the **Additional JDBC parameters** field; see the [Specifying your S3 bucket for query results output and PDTs](#staging_bucket) section on this page.
   - **Max number of PDT builder connections**: Specify the number of possible concurrent PDT builds on this connection. Setting this value too high can negatively affect query times. For more information, see the [Connecting Looker to your database](/looker/docs/connecting-to-your-db#max_pdt_builder_connections) documentation page.
   - **Additional JDBC parameters**: Specify additional parameters for the connection:
     - The `s3_staging_dir` parameter sets the S3 bucket that Looker should use for query results output and PDTs; see the [Specifying your S3 bucket for query results output and PDTs](#staging_bucket) section on this page.
     - A flag for streaming results. If you have the [`athena:GetQueryResultsStream` policy](https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html#migration-from-previous-jdbc-driver) attached to your Athena user, you can add `;UseResultsetStreaming=1` to the end of your additional JDBC parameters to significantly improve the performance of large result set extraction. This parameter is set to `0` by default.
     - Optional additional parameters to add to the JDBC connection string.
   - **SSL**: Ignore this field; by default, all connections to the AWS API are encrypted.
   - **Max connections per node**: By default, this is set to 5. You can increase it to as many as 20 if Looker is the main query engine running against Athena. See [the Athena service limits documentation](http://docs.aws.amazon.com/athena/latest/ug/service-limits.html) for more details about the service limits, and the [Connecting Looker to your database](/looker/docs/connecting-to-your-db#max_connections) documentation page for more information.
   - **Connection Pool Timeout**: Specify the connection pool timeout. By default, the timeout is set to 120 seconds.
     See the [Connecting Looker to your database](/looker/docs/connecting-to-your-db#connection_pool_timeout) documentation page for more information.
   - **SQL Runner Precache**: Unselect this option if you prefer SQL Runner to load table information only when a table is selected. See the [Connecting Looker to your database](/looker/docs/connecting-to-your-db#sql_runner_precache) documentation page for more information.
   - **Database Time Zone**: Specify the time zone used in the database. Leave this field blank if you do not want time zone conversion. See the [Using time zone settings](/looker/docs/using-time-zone-settings) documentation page for more information.

To verify that the connection is successful, click **Test**. See the [Testing database connectivity](/looker/docs/testing-db-connectivity) documentation page for troubleshooting information.

To save these settings, click **Connect**.

Specifying your S3 bucket for query results output and PDTs
-----------------------------------------------------------

Use the [**Additional JDBC parameters**](#additional_params) field of the **Connections** page to configure the path to the S3 bucket that Looker will use for storing query results output, and to specify the name of the output directory in the S3 bucket where you want Looker to write PDTs. Specify this information with the `s3_staging_dir` parameter.

The `s3_staging_dir` JDBC parameter is an alternative way to configure the Amazon Athena `S3OutputLocation` property, which is required for Athena JDBC connections.
See [the Athena documentation on JDBC driver options](http://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html#jdbc-options) for more information and a list of all available JDBC driver options.

In the **Additional JDBC parameters** field, specify the `s3_staging_dir` parameter in the following format:

`s3_staging_dir=s3://<s3-bucket>/<output-path>`

Where:

- `<s3-bucket>` is the name of the S3 bucket.
- `<output-path>` is the path where Looker will write query results output.

> The AWS access key pair must have write permissions on the `<s3-bucket>` directory.

To configure the directory where Looker will write PDTs, enter the *path of the directory in the S3 bucket* in the **Temp Database** field. For example, if you want Looker to write PDTs into `s3://<s3-bucket>/looker_scratch`, enter the following in the **Temp Database** field:

`looker_scratch`

Enter only the *path of the directory*. Looker gets the S3 bucket name from the `s3_staging_dir` parameter that you enter in the **Additional JDBC parameters** field.

### S3 bucket considerations

It is recommended that you configure [Amazon S3 object lifecycles](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html#lifecycle-config-overview-what) to periodically clean out unneeded files in your specified S3 bucket, for two reasons:

- Athena stores query results for every query in an S3 bucket.
  See [Athena Querying](https://docs.aws.amazon.com/athena/latest/ug/querying.html#querying-identifying-output-files).
- If you have PDTs enabled, metadata about each created table is stored in the S3 bucket when the PDT is built.

Resources
---------

- [Amazon Athena documentation](http://docs.aws.amazon.com/athena/latest/ug/what-is.html)
- [Amazon Web Services Console for Athena](https://console.aws.amazon.com/athena/home) (requires an AWS login)
- [Amazon Athena SQL and HiveQL reference](https://docs.aws.amazon.com/athena/latest/ug/ddl-sql-reference.html)

Debugging
---------

Amazon provides `LogLevel` and `LogPath` JDBC driver options for debugging connections. To use them, add `;LogLevel=DEBUG;LogPath=/tmp/athena_debug.log` to the end of the **Additional JDBC parameters** field and test the connection again.

If Looker is hosting the instance, then Looker Support or your analyst will need to retrieve this file to continue debugging.

Feature support
---------------

For Looker to support some features, your database dialect must also support them.

Amazon Athena supports the following features as of Looker 25.14:

Next steps
----------

After you have completed the database connection, [configure authentication options](/looker/docs/getting-started-with-users).
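As a recap of the configuration described on this page, the following Python sketch shows how the **Additional JDBC parameters** value and the matching **Temp Database** entry fit together. The function name, bucket, and directory names are hypothetical examples for illustration, not values defined by Looker or Athena:

```python
def athena_looker_settings(bucket: str, results_path: str, scratch_dir: str,
                           streaming: bool = False) -> dict:
    """Assemble the two Looker connection fields described in this guide.

    bucket and results_path form the s3_staging_dir JDBC parameter.
    scratch_dir is entered on its own in the Temp Database field;
    Looker reads the bucket name from s3_staging_dir.
    """
    jdbc_params = f"s3_staging_dir=s3://{bucket}/{results_path}"
    if streaming:
        # Valid only if the athena:GetQueryResultsStream policy is
        # attached to the Athena user.
        jdbc_params += ";UseResultsetStreaming=1"
    return {"additional_jdbc_params": jdbc_params, "temp_database": scratch_dir}


# Example with made-up bucket and directory names:
settings = athena_looker_settings("example-bucket", "looker-results",
                                  "looker_scratch", streaming=True)
print(settings["additional_jdbc_params"])
# s3_staging_dir=s3://example-bucket/looker-results;UseResultsetStreaming=1
print(settings["temp_database"])
# looker_scratch
```

With these values, PDTs would be written under `s3://example-bucket/looker_scratch`, while query results go to `s3://example-bucket/looker-results`.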