# BigQuery API - Class Google::Cloud::Bigquery::ExtractJob::Updater (v1.55.0)

Reference documentation and code samples for the BigQuery API class
Google::Cloud::Bigquery::ExtractJob::Updater.

Yielded to a block to accumulate changes for an API request.

## Inherits

- [Google::Cloud::Bigquery::ExtractJob](./Google-Cloud-Bigquery-ExtractJob)

## Methods

### #cancel

    def cancel()

### #compression=

    def compression=(value)

Sets the compression type. Not applicable when extracting models.

**Parameter**

- **value** (String) — The compression type to use for exported
  files. Possible values include `GZIP` and `NONE`. The default
  value is `NONE`.
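For example, a gzip-compressed CSV export might look like the following sketch (the dataset, table, and bucket names are placeholders, not part of the gem):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table("my_table")

# Export the table as a gzip-compressed CSV file.
extract_job = table.extract_job "gs://my-bucket/file-name.csv.gz" do |j|
  j.compression = "GZIP"
end
extract_job.wait_until_done!
```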
### #delimiter=

    def delimiter=(value)

Sets the field delimiter. Not applicable when extracting models.

**Parameter**

- **value** (String) — Delimiter to use between fields in the
  exported data. Default is `,`.
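For example, a tab-delimited export might look like this (a sketch; the dataset, table, and bucket names are illustrative):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table("my_table")

# Use a tab instead of the default comma between fields.
extract_job = table.extract_job "gs://my-bucket/file-name.tsv" do |j|
  j.delimiter = "\t"
end
extract_job.wait_until_done!
```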
### #format=

    def format=(new_format)

Sets the destination file format. The default value for
tables is `csv`. Tables with nested or repeated fields cannot be
exported as CSV. The default value for models is `ml_tf_saved_model`.

Supported values for tables:

- `csv` - CSV
- `json` - [Newline-delimited JSON](https://jsonlines.org/)
- `avro` - [Avro](http://avro.apache.org/)

Supported values for models:

- `ml_tf_saved_model` - TensorFlow SavedModel
- `ml_xgboost_booster` - XGBoost Booster

**Parameter**

- **new_format** (String) — The new destination file format.
### #header=

    def header=(value)

Sets whether to print a header row in the exported file. Not applicable
when extracting models.

**Parameter**

- **value** (Boolean) — Whether to print out a header row in the
  results. Default is `true`.
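For example, to produce a data-only CSV file (a sketch; the dataset, table, and bucket names are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table("my_table")

# Export data rows only, with no header row.
extract_job = table.extract_job "gs://my-bucket/file-name.csv" do |j|
  j.header = false
end
extract_job.wait_until_done!
```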
### #labels=

    def labels=(value)

Sets the labels to use for the job.

**Parameter**

- **value** (Hash) — A hash of user-provided labels associated with
  the job. You can use these to organize and group your jobs.

  The labels applied to a resource must meet the following requirements:

  - Each resource can have multiple labels, up to a maximum of 64.
  - Each label must be a key-value pair.
  - Keys have a minimum length of 1 character and a maximum length of
    63 characters, and cannot be empty. Values can be empty, and have
    a maximum length of 63 characters.
  - Keys and values can contain only lowercase letters, numeric characters,
    underscores, and dashes. All characters must use UTF-8 encoding, and
    international characters are allowed.
  - The key portion of a label must be unique. However, you can use the
    same key with multiple resources.
  - Keys must start with a lowercase letter or international character.
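The rules above can be checked locally before assigning the hash. The helper below is a sketch, not part of the gem; it uses `\p{Ll}` (Unicode lowercase letter) as an approximation of the "lowercase letters and international characters" rule:

```ruby
# Hypothetical helper: validates a labels hash against the documented
# rules before passing it to ExtractJob::Updater#labels=.
LABEL_KEY   = /\A\p{Ll}[\p{Ll}0-9_-]{0,62}\z/ # 1-63 chars, lowercase start
LABEL_VALUE = /\A[\p{Ll}0-9_-]{0,63}\z/       # may be empty, max 63 chars

def valid_labels?(labels)
  return false unless labels.is_a?(Hash) && labels.size <= 64
  labels.all? do |key, value|
    key.is_a?(String) && value.is_a?(String) &&
      key.match?(LABEL_KEY) && value.match?(LABEL_VALUE)
  end
end

valid_labels?("env" => "production", "team" => "analytics") # => true
valid_labels?("Env" => "production")                        # => false (uppercase key)
valid_labels?("env" => "")                                  # => true (empty value allowed)
```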
### #location=

    def location=(value)

Sets the geographic location where the job should run. Required
except for US and EU.

**Parameter**

- **value** (String) — A geographic location, such as "US", "EU" or
  "asia-northeast1". Required except for US and EU.

**Example**

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

destination = "gs://my-bucket/file-name.csv"
extract_job = table.extract_job destination do |j|
  j.location = "EU"
end

extract_job.wait_until_done!
extract_job.done? #=> true
```
### #use_avro_logical_types=

    def use_avro_logical_types=(value)

Sets whether to extract applicable column types (such as `TIMESTAMP`)
to their corresponding AVRO logical types (`timestamp-micros`), instead
of only using their raw types (`avro-long`).

Only used when `#format` is set to `"AVRO"` (`#avro?`).

**Parameter**

- **value** (Boolean) — Whether applicable column types will use
  their corresponding AVRO logical types.
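For example, an Avro export that preserves logical types might look like this sketch (the dataset, table, and bucket names are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table("my_table")

# Export to Avro, mapping e.g. TIMESTAMP columns to timestamp-micros
# instead of plain long values.
extract_job = table.extract_job "gs://my-bucket/file-name.avro" do |j|
  j.format = "avro"
  j.use_avro_logical_types = true
end
extract_job.wait_until_done!
```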
### #refresh!

    def refresh!()

**Alias Of**: [#reload!](./Google-Cloud-Bigquery-ExtractJob-Updater#Google__Cloud__Bigquery__ExtractJob__Updater_reload!_instance_)

### #reload!

    def reload!()

**Aliases**

- [#refresh!](./Google-Cloud-Bigquery-ExtractJob-Updater#Google__Cloud__Bigquery__ExtractJob__Updater_refresh!_instance_)

### #rerun!

    def rerun!()

### #wait_until_done!

    def wait_until_done!()

Last updated 2025-09-09 UTC.