Defines the list of possible SQL data types to which the source decimal values are converted.
This list, together with the precision and scale of the decimal field, determines the target
type. Types are considered in the order NUMERIC, BIGNUMERIC, STRING: the first type that
appears in the specified list and supports the field's precision and scale is picked. STRING
supports all precision and scale values.
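For illustration, a minimal sketch of this option with the Java client's load-job builder; the dataset, table, and URI names are placeholders:

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;
    import java.util.Arrays;

    public class DecimalTargetTypesExample {
      public static void main(String[] args) {
        // Placeholder destination table and source URI.
        TableId tableId = TableId.of("my_dataset", "my_table");
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.parquet")
                .setFormatOptions(FormatOptions.parquet())
                // NUMERIC is tried first; a decimal whose precision or scale
                // exceeds NUMERIC falls through to BIGNUMERIC, and anything
                // that fits neither lands in STRING.
                .setDecimalTargetTypes(Arrays.asList("NUMERIC", "BIGNUMERIC", "STRING"))
                .build();
      }
    }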
Defines how to interpret files denoted by URIs. By default, the files are assumed to be data
files (this can be specified explicitly via FILE_SET_SPEC_TYPE_FILE_SYSTEM_MATCH). The second
option, FILE_SET_SPEC_TYPE_NEW_LINE_DELIMITED_MANIFEST, interprets each file as a manifest
file in which each line is a reference to a data file.
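A sketch of the manifest option on an external table, assuming the string-valued setFileSetSpecType setter available in recent versions of the Java client; all paths are placeholders:

    import com.google.cloud.bigquery.ExternalTableDefinition;
    import com.google.cloud.bigquery.Field;
    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.Schema;
    import com.google.cloud.bigquery.StandardSQLTypeName;

    public class FileSetSpecTypeExample {
      public static void main(String[] args) {
        Schema schema = Schema.of(Field.of("name", StandardSQLTypeName.STRING));
        // The URI points at a manifest whose lines are gs:// references
        // to the actual data files, not at the data files themselves.
        ExternalTableDefinition definition =
            ExternalTableDefinition.newBuilder(
                    "gs://my-bucket/manifest.txt", schema, FormatOptions.csv())
                .setFileSetSpecType("FILE_SET_SPEC_TYPE_NEW_LINE_DELIMITED_MANIFEST")
                .build();
      }
    }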
Sets whether BigQuery should allow extra values that are not represented in the table schema.
If true, the extra values are ignored. If false, records with extra columns are treated as
bad records, and if there are too many bad records, an invalid error is returned in the job
result. The default value is false. The format set via #setFormatOptions(FormatOptions)
determines what BigQuery treats as an extra value: trailing columns for CSV, and named values
that do not match any column name for JSON.
See Also: Ignore Unknown Values
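As an example, a JSON load job that drops unknown keys instead of failing the record; names are placeholders:

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;

    public class IgnoreUnknownValuesExample {
      public static void main(String[] args) {
        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.json")
                .setFormatOptions(FormatOptions.json())
                // JSON keys that do not match a schema column are ignored
                // rather than turning the record into a bad record.
                .setIgnoreUnknownValues(true)
                .build();
      }
    }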
Sets the maximum number of bad records that BigQuery can ignore when reading data. If the
number of bad records exceeds this value, an invalid error is returned in the job result. The
default value is 0, which requires that all records are valid.
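For instance, the following sketch tolerates up to 10 bad rows before the job fails (placeholder names):

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;

    public class MaxBadRecordsExample {
      public static void main(String[] args) {
        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.csv")
                .setFormatOptions(FormatOptions.csv())
                // The 11th unparsable row causes the job to return an
                // invalid error in the job result.
                .setMaxBadRecords(10)
                .build();
      }
    }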
A list of strings that are interpreted as SQL NULL values in a CSV file. null_marker and
null_markers are mutually exclusive: at most one of them may be set, and setting both at the
same time results in a user error. Any string listed in null_markers, including the empty
string, is interpreted as SQL NULL. This applies to all column types.
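A sketch assuming the list-valued setNullMarkers setter added in recent versions of the Java client (older versions expose only the single-string setNullMarker); names are placeholders:

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;
    import java.util.Arrays;

    public class NullMarkersExample {
      public static void main(String[] args) {
        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.csv")
                .setFormatOptions(FormatOptions.csv())
                // Cells containing "N/A", "\\N", or nothing at all load as
                // NULL. setNullMarkers (plural) is an assumption here; do not
                // combine it with setNullMarker.
                .setNullMarkers(Arrays.asList("N/A", "\\N", ""))
                .build();
      }
    }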
When creating an external table, the user can provide a reference file with the table schema.
This is enabled for the following formats: AVRO, PARQUET, ORC.
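A minimal external-table sketch using a reference file, assuming the schema argument may be left null because the schema is read from the reference file; all paths are placeholders:

    import com.google.cloud.bigquery.ExternalTableDefinition;
    import com.google.cloud.bigquery.FormatOptions;

    public class ReferenceFileSchemaExample {
      public static void main(String[] args) {
        // The wildcard URI selects the data files; one Parquet file
        // (a placeholder path) supplies the schema for the whole table.
        ExternalTableDefinition definition =
            ExternalTableDefinition.newBuilder(
                    "gs://my-bucket/data/*.parquet", null, FormatOptions.parquet())
                .setReferenceFileSchemaUri("gs://my-bucket/data/part-00000.parquet")
                .build();
      }
    }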
Controls the strategy used to match loaded columns to the schema. If not set, a sensible
default is chosen based on how the schema is provided: if autodetect is used, columns are
matched by name; otherwise, columns are matched by position. This keeps the behavior
backward-compatible. Acceptable values are:

POSITION - matches by position. This assumes that the columns are ordered the same way as
the schema.

NAME - matches by name. This reads the header row as column names and reorders columns to
match the field names in the schema. See the sketch after this list.
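The sketch below assumes a string-valued setSourceColumnMatch setter mirroring the REST field sourceColumnMatch; the setter name and parameter type are assumptions and may differ by client version:

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;

    public class SourceColumnMatchExample {
      public static void main(String[] args) {
        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.csv")
                .setFormatOptions(FormatOptions.csv())
                // "NAME": the CSV header row is read as column names and the
                // columns are reordered to match the schema. The setter is an
                // assumption; check your client version for the exact API.
                .setSourceColumnMatch("NAME")
                .build();
      }
    }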
Sets the fully-qualified URIs that point to your data in Google Cloud Storage (e.g.
gs://bucket/path). Each URI can contain one '*' wildcard character that must come after the
bucket's name. Size limits related to load jobs apply to external data sources, plus an
additional limit of 10 GB maximum size across all URIs.
For Google Cloud Bigtable URIs: Exactly one URI can be specified, and it has to be a fully
specified and valid HTTPS URL for a Google Cloud Bigtable table.
For Google Cloud Datastore backup URIs: Exactly one URI can be specified. Also, the '*'
wildcard character is not allowed.
See Also: Quota
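For illustration, a load job reading from multiple wildcard URIs; bucket and path names are placeholders:

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;
    import java.util.Arrays;

    public class SourceUrisExample {
      public static void main(String[] args) {
        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(
                    tableId,
                    // Each '*' wildcard comes after the bucket name and may
                    // appear at most once per URI.
                    Arrays.asList(
                        "gs://my-bucket/2024/*.csv", "gs://my-bucket/2025/*.csv"),
                    FormatOptions.csv())
                .build();
      }
    }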
Time zone used when parsing timestamp values that do not have specific time zone information
(e.g. 2024-04-20 12:34:56). The expected format is an IANA time zone string (e.g.
America/Los_Angeles).
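A sketch assuming the setTimeZone setter available in recent versions of the Java client; names are placeholders:

    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;

    public class TimeZoneExample {
      public static void main(String[] args) {
        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder
        LoadJobConfiguration config =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.csv")
                .setFormatOptions(FormatOptions.csv())
                // A bare "2024-04-20 12:34:56" is parsed as Pacific time.
                // setTimeZone is assumed to exist in recent client versions.
                .setTimeZone("America/Los_Angeles")
                .build();
      }
    }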
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-07-31 UTC."],[],[]]