#avro?
def avro?() -> Boolean
Checks if the destination format for the table data is
Avro. The default is false. Not applicable
when extracting models.
Returns
(Boolean) — true when AVRO, false if not AVRO or not a
table extraction.
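A minimal sketch of checking the destination format after requesting an Avro export; the dataset, table, and bucket names are placeholders:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")

  # Request an Avro export of the table data.
  extract_job = table.extract_job "gs://my-bucket/my_table.avro", format: "avro"

  extract_job.avro? #=> true
  extract_job.csv?  #=> false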
#compression?
def compression?() -> Boolean
Checks if the export operation compresses the data using gzip. The
default is false. Not applicable when extracting models.
Returns
(Boolean) — true when GZIP, false if not GZIP or not a
table extraction.
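A sketch of a compressed export, assuming the extract_job call accepts a compression option and the same placeholder names as above:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")

  # Export as gzip-compressed CSV.
  extract_job = table.extract_job "gs://my-bucket/my_table.csv.gz", compression: "GZIP"

  extract_job.compression? #=> true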
#csv?
def csv?() -> Boolean
Checks if the destination format for the table data is CSV. Tables with
nested or repeated fields cannot be exported as CSV. The default is
true for tables. Not applicable when extracting models.
Returns
(Boolean) — true when CSV, or false if not CSV or not a
table extraction.
#delimiter
def delimiter() -> String, nil
The character or symbol the operation uses to delimit fields in the
exported data. The default is a comma (,) for tables. Not applicable
when extracting models.
Returns
(String, nil) — A string containing the character, such as ",",
or nil if not a table extraction.
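A sketch of a tab-delimited export, assuming the extract_job call accepts a delimiter option:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")

  # Export CSV-style data using a tab instead of the default comma.
  extract_job = table.extract_job "gs://my-bucket/my_table.tsv", delimiter: "\t"

  extract_job.delimiter #=> "\t"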
#destinations
def destinations()
The URI or URIs representing the Google Cloud Storage files to which
the data is exported.
#destinations_counts
def destinations_counts() -> Hash<String, Integer>
A hash containing the URI or URI pattern specified in
#destinations mapped to the counts of files per destination.
Returns
(Hash<String, Integer>) — A Hash with the URI patterns as keys
and the counts as values.
#destinations_file_counts
def destinations_file_counts() -> Array<Integer>
The number of files per destination URI or URI pattern specified in
#destinations.
Returns
(Array<Integer>) — An array of values in the same order as the
URI patterns.
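A sketch that ties the three destination readers together, assuming a wildcard URI so the export may be sharded across multiple files; the counts shown are illustrative:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")

  # A wildcard URI lets BigQuery split large exports into many files.
  extract_job = table.extract_job "gs://my-bucket/my_table-*.csv"
  extract_job.wait_until_done!

  extract_job.destinations             #=> ["gs://my-bucket/my_table-*.csv"]
  extract_job.destinations_file_counts #=> e.g. [3]
  extract_job.destinations_counts.each do |uri_pattern, file_count|
    puts "#{uri_pattern}: #{file_count} file(s)"
  end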
#json?
def json?() -> Boolean
Checks if the destination format for the table data is newline-delimited
JSON. The default is false. Not applicable when
extracting models.
Returns
(Boolean) — true when NEWLINE_DELIMITED_JSON, false if not
NEWLINE_DELIMITED_JSON or not a table extraction.
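A short sketch requesting newline-delimited JSON output, with the same placeholder names as the earlier sketches:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")

  extract_job = table.extract_job "gs://my-bucket/my_table.json", format: "json"

  extract_job.json? #=> true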
#ml_tf_saved_model?
def ml_tf_saved_model?() -> Boolean
Checks if the destination format for the model is TensorFlow SavedModel.
The default is true for models. Not applicable when extracting tables.
Returns
(Boolean) — true when ML_TF_SAVED_MODEL, false if not
ML_TF_SAVED_MODEL or not a model extraction.
#ml_xgboost_booster?
def ml_xgboost_booster?() -> Boolean
Checks if the destination format for the model is XGBoost. The default
is false. Not applicable when extracting tables.
Returns
(Boolean) — true when ML_XGBOOST_BOOSTER, false if not
ML_XGBOOST_BOOSTER or not a model extraction.
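A sketch of a model export, assuming a model my_model exists in my_dataset; TensorFlow SavedModel is the default, so no format option is passed:

  require "google/cloud/bigquery"

  model = Google::Cloud::Bigquery.new.dataset("my_dataset").model("my_model")

  # Export the model to Cloud Storage.
  extract_job = model.extract_job "gs://my-bucket/#{model.model_id}"

  extract_job.ml_tf_saved_model?  #=> true
  extract_job.ml_xgboost_booster? #=> false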
#model?
def model?() -> Boolean
Whether the source of the export job is a model. See #source.
Returns
(Boolean) — true when the source is a model, false
otherwise.
#print_header?
def print_header?() -> Boolean
Checks if the exported data contains a header row. The default is
true for tables. Not applicable when extracting models.
Returns
(Boolean) — true when the print header configuration is
enabled or unset (nil), false if disabled or not a table extraction.
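A sketch that suppresses the header row, assuming the extract_job call accepts a header option:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")

  # Ask for CSV output without the leading header row.
  extract_job = table.extract_job "gs://my-bucket/my_table.csv", header: false

  extract_job.print_header? #=> false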
#source
def source(view: nil) -> Table, Model, nil
The table or model which is exported.
Parameter
view (String) (defaults to: nil) — Specifies the view that determines which table information is returned.
By default, basic table information and storage statistics (STORAGE_STATS) are returned.
Accepted values include :unspecified, :basic, :storage, and
:full. For more information, see BigQuery Classes.
The default value is the :unspecified view type.
Returns
(Table, Model, nil) — A table or model instance, or nil.
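A sketch of retrieving the source table with a non-default view, using the accepted values listed above and placeholder names:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")
  extract_job = table.extract_job "gs://my-bucket/my_table.csv"

  # Fetch the source table with full metadata rather than the default view.
  source_table = extract_job.source view: :full
  source_table.table_id #=> "my_table"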
#table?
def table?() -> Boolean
Whether the source of the export job is a table. See #source.
Returns
(Boolean) — true when the source is a table, false
otherwise.
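The two predicates make it easy to branch on the kind of extraction, as in this sketch (only the table branch runs here, since the job exports a table):

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")
  extract_job = table.extract_job "gs://my-bucket/my_table.csv"

  # Branch on the kind of source that was exported.
  if extract_job.table?
    puts "Exported table #{extract_job.source.table_id}"
  elsif extract_job.model?
    puts "Exported model #{extract_job.source.model_id}"
  end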
#use_avro_logical_types?
def use_avro_logical_types?() -> Boolean
If the destination format is Avro (#avro? returns true), this flag
indicates whether applicable column types (such as TIMESTAMP) are
extracted as their corresponding AVRO logical types
(timestamp-micros) instead of only their raw types (avro-long).
Not applicable when extracting models.
Returns
(Boolean) — true when applicable column types will use their
corresponding AVRO logical types, false if not enabled or not a
table extraction.
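A sketch for the Avro case; whether logical types are enabled depends on how the job was configured, so the result shown is only illustrative:

  require "google/cloud/bigquery"

  table = Google::Cloud::Bigquery.new.dataset("my_dataset").table("my_table")
  extract_job = table.extract_job "gs://my-bucket/my_table.avro", format: "avro"

  # Reports whether TIMESTAMP and similar columns use Avro logical types.
  extract_job.use_avro_logical_types? #=> true or false, depending on the job configuration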
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-09 UTC."],[],[],null,["# BigQuery API - Class Google::Cloud::Bigquery::ExtractJob (v1.55.0)\n\nVersion latestkeyboard_arrow_down\n\n- [1.55.0 (latest)](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-ExtractJob)\n- [1.54.0](/ruby/docs/reference/google-cloud-bigquery/1.54.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.53.0](/ruby/docs/reference/google-cloud-bigquery/1.53.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.52.1](/ruby/docs/reference/google-cloud-bigquery/1.52.1/Google-Cloud-Bigquery-ExtractJob)\n- [1.51.1](/ruby/docs/reference/google-cloud-bigquery/1.51.1/Google-Cloud-Bigquery-ExtractJob)\n- [1.50.0](/ruby/docs/reference/google-cloud-bigquery/1.50.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.49.1](/ruby/docs/reference/google-cloud-bigquery/1.49.1/Google-Cloud-Bigquery-ExtractJob)\n- [1.48.1](/ruby/docs/reference/google-cloud-bigquery/1.48.1/Google-Cloud-Bigquery-ExtractJob)\n- [1.47.0](/ruby/docs/reference/google-cloud-bigquery/1.47.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.46.1](/ruby/docs/reference/google-cloud-bigquery/1.46.1/Google-Cloud-Bigquery-ExtractJob)\n- [1.45.0](/ruby/docs/reference/google-cloud-bigquery/1.45.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.44.2](/ruby/docs/reference/google-cloud-bigquery/1.44.2/Google-Cloud-Bigquery-ExtractJob)\n- [1.43.1](/ruby/docs/reference/google-cloud-bigquery/1.43.1/Google-Cloud-Bigquery-ExtractJob)\n- [1.42.0](/ruby/docs/reference/google-cloud-bigquery/1.42.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.41.0](/ruby/docs/reference/google-cloud-bigquery/1.41.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.40.0](/ruby/docs/reference/google-cloud-bigquery/1.40.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.39.0](/ruby/docs/reference/google-cloud-bigquery/1.39.0/Google-Cloud-Bigquery-ExtractJob)\n- [1.38.1](/ruby/docs/reference/google-cloud-bigquery/1.38.1/Google-Cloud-Bigquery-ExtractJob) \nReference documentation and code samples for the BigQuery API class Google::Cloud::Bigquery::ExtractJob.\n\nExtractJob\n----------\n\nA [Job](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Job \"Google::Cloud::Bigquery::Job (class)\") subclass representing an export operation that may be performed\non a [Table](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table \"Google::Cloud::Bigquery::Table (class)\") or [Model](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Model \"Google::Cloud::Bigquery::Model (class)\"). 
A ExtractJob instance is returned when you call\n[Project#extract_job](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Project#Google__Cloud__Bigquery__Project_extract_job_instance_ \"Google::Cloud::Bigquery::Project#extract_job (method)\"), [Table#extract_job](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table#Google__Cloud__Bigquery__Table_extract_job_instance_ \"Google::Cloud::Bigquery::Table#extract_job (method)\") or [Model#extract_job](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Model#Google__Cloud__Bigquery__Model_extract_job_instance_ \"Google::Cloud::Bigquery::Model#extract_job (method)\"). \n\nInherits\n--------\n\n- [Google::Cloud::Bigquery::Job](./Google-Cloud-Bigquery-Job)\n\nExamples\n--------\n\nExport table data \n\n```ruby\nrequire \"google/cloud/bigquery\"\n\nbigquery = Google::Cloud::Bigquery.new\ndataset = bigquery.dataset \"my_dataset\"\ntable = dataset.table \"my_table\"\n\nextract_job = table.extract_job \"gs://my-bucket/file-name.json\",\n format: \"json\"\nextract_job.wait_until_done!\nextract_job.done? #=\u003e true\n```\n\nExport a model \n\n```ruby\nrequire \"google/cloud/bigquery\"\n\nbigquery = Google::Cloud::Bigquery.new\ndataset = bigquery.dataset \"my_dataset\"\nmodel = dataset.model \"my_model\"\n\nextract_job = model.extract_job \"gs://my-bucket/#{model.model_id}\"\n\nextract_job.wait_until_done!\nextract_job.done? #=\u003e true\n```\n\nMethods\n-------\n\n### #avro?\n\n def avro?() -\u003e Boolean\n\nChecks if the destination format for the table data is\n[Avro](http://avro.apache.org/). The default is `false`. Not applicable\nwhen extracting models. \n**Returns**\n\n- (Boolean) --- `true` when `AVRO`, `false` if not `AVRO` or not a table extraction.\n\n### #compression?\n\n def compression?() -\u003e Boolean\n\nChecks if the export operation compresses the data using gzip. The\ndefault is `false`. Not applicable when extracting models. \n**Returns**\n\n- (Boolean) --- `true` when `GZIP`, `false` if not `GZIP` or not a table extraction.\n\n### #csv?\n\n def csv?() -\u003e Boolean\n\nChecks if the destination format for the table data is CSV. Tables with\nnested or repeated fields cannot be exported as CSV. The default is\n`true` for tables. Not applicable when extracting models. \n**Returns**\n\n- (Boolean) --- `true` when `CSV`, or `false` if not `CSV` or not a table extraction.\n\n### #delimiter\n\n def delimiter() -\u003e String, nil\n\nThe character or symbol the operation uses to delimit fields in the\nexported data. The default is a comma (,) for tables. Not applicable\nwhen extracting models. \n**Returns**\n\n- (String, nil) --- A string containing the character, such as `\",\"`, `nil` if not a table extraction.\n\n### #destinations\n\n def destinations()\n\nThe URI or URIs representing the Google Cloud Storage files to which\nthe data is exported.\n\n### #destinations_counts\n\n def destinations_counts() -\u003e Hash\u003cString, Integer\u003e\n\nA hash containing the URI or URI pattern specified in\n[#destinations](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-ExtractJob#Google__Cloud__Bigquery__ExtractJob_destinations_instance_ \"Google::Cloud::Bigquery::ExtractJob#destinations (method)\") mapped to the counts of files per destination. 
\n**Returns**\n\n- (Hash\\\u003cString, Integer\\\u003e) --- A Hash with the URI patterns as keys and the counts as values.\n\n### #destinations_file_counts\n\n def destinations_file_counts() -\u003e Array\u003cInteger\u003e\n\nThe number of files per destination URI or URI pattern specified in\n[#destinations](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-ExtractJob#Google__Cloud__Bigquery__ExtractJob_destinations_instance_ \"Google::Cloud::Bigquery::ExtractJob#destinations (method)\"). \n**Returns**\n\n- (Array\\\u003cInteger\\\u003e) --- An array of values in the same order as the URI patterns.\n\n### #json?\n\n def json?() -\u003e Boolean\n\nChecks if the destination format for the table data is [newline-delimited\nJSON](https://jsonlines.org/). The default is `false`. Not applicable when\nextracting models. \n**Returns**\n\n- (Boolean) --- `true` when `NEWLINE_DELIMITED_JSON`, `false` if not `NEWLINE_DELIMITED_JSON` or not a table extraction.\n\n### #ml_tf_saved_model?\n\n def ml_tf_saved_model?() -\u003e Boolean\n\nChecks if the destination format for the model is TensorFlow SavedModel.\nThe default is `true` for models. Not applicable when extracting tables. \n**Returns**\n\n- (Boolean) --- `true` when `ML_TF_SAVED_MODEL`, `false` if not `ML_TF_SAVED_MODEL` or not a model extraction.\n\n### #ml_xgboost_booster?\n\n def ml_xgboost_booster?() -\u003e Boolean\n\nChecks if the destination format for the model is XGBoost. The default\nis `false`. Not applicable when extracting tables. \n**Returns**\n\n- (Boolean) --- `true` when `ML_XGBOOST_BOOSTER`, `false` if not `ML_XGBOOST_BOOSTER` or not a model extraction.\n\n### #model?\n\n def model?() -\u003e Boolean\n\nWhether the source of the export job is a model. See [#source](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-ExtractJob#Google__Cloud__Bigquery__ExtractJob_source_instance_ \"Google::Cloud::Bigquery::ExtractJob#source (method)\"). \n**Returns**\n\n- (Boolean) --- `true` when the source is a model, `false` otherwise.\n\n### #print_header?\n\n def print_header?() -\u003e Boolean\n\nChecks if the exported data contains a header row. The default is\n`true` for tables. Not applicable when extracting models. \n**Returns**\n\n- (Boolean) --- `true` when the print header configuration is present or `nil`, `false` if disabled or not a table extraction.\n\n### #source\n\n def source(view: nil) -\u003e Table, Model, nil\n\nThe table or model which is exported. \n**Parameter**\n\n- **view** (String) *(defaults to: nil)* --- Specifies the view that determines which table information is returned. By default, basic table information and storage statistics (STORAGE_STATS) are returned. Accepted values include `:unspecified`, `:basic`, `:storage`, and `:full`. For more information, see [BigQuery Classes](about:invalid#zCSafez). The default value is the `:unspecified` view type. \n**Returns**\n\n- ([Table](./Google-Cloud-Bigquery-Table), [Model](./Google-Cloud-Bigquery-Model), nil) --- A table or model instance, or `nil`.\n\n### #table?\n\n def table?() -\u003e Boolean\n\nWhether the source of the export job is a table. See [#source](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-ExtractJob#Google__Cloud__Bigquery__ExtractJob_source_instance_ \"Google::Cloud::Bigquery::ExtractJob#source (method)\"). 
\n**Returns**\n\n- (Boolean) --- `true` when the source is a table, `false` otherwise.\n\n### #use_avro_logical_types?\n\n def use_avro_logical_types?() -\u003e Boolean\n\nIf `#avro?` (`#format` is set to `\"AVRO\"`), this flag indicates\nwhether to enable extracting applicable column types (such as\n`TIMESTAMP`) to their corresponding AVRO logical types\n(`timestamp-micros`), instead of only using their raw types\n(`avro-long`). Not applicable when extracting models. \n**Returns**\n\n- (Boolean) --- `true` when applicable column types will use their corresponding AVRO logical types, `false` if not enabled or not a table extraction."]]