#flush
def flush() -> AsyncInserter
Forces all rows in the current batch to be inserted immediately.
Returns
(AsyncInserter) — returns self so calls can be chained.
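A batch can be flushed ahead of the interval timer when latency matters. A minimal sketch, assuming an inserter created with Table#insert_async (the dataset and table names are hypothetical):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"
table    = dataset.table "my_table"
inserter = table.insert_async

inserter.insert({ "first_name" => "Alice", "age" => 21 })
# Send the pending batch now instead of waiting for the interval to elapse.
inserter.flush
```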
#insert
def insert(rows, insert_ids: nil)
Adds rows to the async inserter to be inserted. Rows will be
collected in batches and inserted together.
See #insert_async.
Simple Ruby types are generally accepted per JSON rules, along with the following support for BigQuery's
more complex types:
| BigQuery | Ruby | Notes |
|--------------|--------------------------------------|----------------------------------------------------|
| NUMERIC | BigDecimal | BigDecimal values will be rounded to scale 9. |
| BIGNUMERIC | String | Pass as String to avoid rounding to scale 9. |
| DATETIME | DateTime | DATETIME does not support time zone. |
| DATE | Date | |
| GEOGRAPHY | String | |
| JSON | String (Stringified JSON) | String, as JSON does not have a schema to verify. |
| TIMESTAMP | Time | |
| TIME | Google::Cloud::BigQuery::Time | |
| BYTES | File, IO, StringIO, or similar | |
| ARRAY | Array | Nested arrays, nil values are not supported. |
| STRUCT | Hash | Hash keys may be strings or symbols. |
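To illustrate these mappings, a single row hash might combine several of the types above. This sketch reuses the inserter from the #flush example and assumes a table whose schema matches the hypothetical field names:

```ruby
require "bigdecimal"
require "date"
require "stringio"

row = {
  "balance"    => BigDecimal("123.123456789"),  # NUMERIC: rounded to scale 9
  "big_value"  => "1.23456789012345678901",     # BIGNUMERIC: String avoids rounding
  "updated_at" => DateTime.now,                 # DATETIME: no time zone
  "birthday"   => Date.new(1990, 1, 1),         # DATE
  "metadata"   => '{"plan":"pro"}',             # JSON: pass stringified JSON
  "avatar"     => StringIO.new("raw bytes"),    # BYTES: File, IO, StringIO, or similar
  "tags"       => ["a", "b"],                   # ARRAY: no nested arrays or nil values
  "address"    => { street: "Main St" }         # STRUCT: string or symbol keys
}

inserter.insert row
```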
Because BigQuery's streaming API is designed for high insertion
rates, modifications to the underlying table metadata are eventually
consistent when interacting with the streaming system. In most cases
metadata changes are propagated within minutes, but during this
period API responses may reflect the inconsistent state of the
table.
The value :skip can be provided as the insert_ids argument to skip the generation of IDs for all rows, or
included in the insert_ids array to skip the generation of an ID for a specific row.
Parameters
rows (Hash, Array<Hash>) — A hash object or array of hash objects
containing the data. Required. BigDecimal values will be rounded to
scale 9 to conform with the BigQuery NUMERIC data type. To avoid
rounding BIGNUMERIC type values with scale greater than 9, use String
instead of BigDecimal.
insert_ids (Array<String|Symbol>, Symbol) (defaults to: nil) — A unique ID for each row. BigQuery uses this property to
detect duplicate insertion requests on a best-effort basis. For more information, see
[data consistency](https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency).
Optional. If not provided, the client library will assign a UUID to each row before the request is sent.
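A minimal sketch of supplying insert IDs, including :skip for a single row (the row data and IDs are hypothetical):

```ruby
rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob",   "age" => 22 }
]

# One ID per row; :skip in the array skips ID generation for that row only.
inserter.insert rows, insert_ids: ["row-alice", :skip]

# Pass :skip by itself to skip ID generation for every row in the call.
inserter.insert rows, insert_ids: :skip
```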
#interval
def interval() -> Numeric
The number of seconds to collect rows
before the batch is inserted. Default is 10.
Returns
(Numeric) — the current value of interval
#max_bytes
def max_bytes() -> Integer
The maximum size of rows to be
collected before the batch is inserted. Default is 10,000,000
(10MB).
Returns
(Integer) — the current value of max_bytes
#max_rows
def max_rows() -> Integer
The maximum number of rows to be
collected before the batch is inserted. Default is 500.
Returns
(Integer) — the current value of max_rows
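The interval, max_bytes, and max_rows attributes are read-only on the inserter. Assuming they are accepted as keyword arguments when the inserter is created with Table#insert_async, a configuration sketch might look like:

```ruby
inserter = table.insert_async interval: 5, max_bytes: 1_000_000, max_rows: 100

inserter.interval  #=> 5
inserter.max_bytes #=> 1_000_000
inserter.max_rows  #=> 100
```

Consistent with the attribute descriptions above, whichever limit is reached first triggers the batch insert.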
#started?
def started?() -> boolean
Whether the inserter has been started.
Returns
(boolean) — true when started, false otherwise.
#stop
def stop() -> AsyncInserter
Begins the process of stopping the inserter. Rows already in the
queue will be inserted, but no new rows can be added. Use #wait!
to block until the inserter is fully stopped and all pending rows
have been inserted.
Returns
(AsyncInserter) — returns self so calls can be chained.
#stopped?
def stopped?() -> boolean
Whether the inserter has been stopped.
Returns
(boolean) — true when stopped, false otherwise.
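The two predicates mirror each other over the inserter's lifecycle; a sketch of the expected values:

```ruby
inserter = table.insert_async

inserter.started? #=> true
inserter.stopped? #=> false

inserter.stop

inserter.started? #=> false
inserter.stopped? #=> true (stopping has begun; pending rows may still be in flight)
```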
#threads
def threads() -> Integer
The number of threads used to insert
rows. Default is 4.
Returns
(Integer) — the current value of threads
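Like the batching attributes above, the thread count is read-only; in this sketch it is assumed to be set as a keyword argument at creation time:

```ruby
inserter = table.insert_async threads: 8
inserter.threads #=> 8
```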
#wait!
def wait!(timeout = nil) -> AsyncInserter
Blocks until the inserter is fully stopped, all pending rows
have been inserted, and all callbacks have completed. Does not stop
the inserter. To stop the inserter, first call #stop and then
call #wait! to block until the inserter is stopped.
Returns
(AsyncInserter) — returns self so calls can be chained.
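A typical graceful shutdown chains the two calls. The optional timeout argument (assumed here to be in seconds) bounds how long the call blocks:

```ruby
inserter.insert [{ "first_name" => "Alice", "age" => 21 }]

# Stop accepting new rows, then block until pending rows and callbacks finish.
inserter.stop.wait!

# Alternatively, bound the wait to 60 seconds:
# inserter.stop.wait! 60
```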
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-09 UTC."],[],[],null,["# BigQuery API - Class Google::Cloud::Bigquery::Table::AsyncInserter (v1.56.0)\n\nVersion latestkeyboard_arrow_down\n\n- [1.56.0 (latest)](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.55.0](/ruby/docs/reference/google-cloud-bigquery/1.55.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.54.0](/ruby/docs/reference/google-cloud-bigquery/1.54.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.53.0](/ruby/docs/reference/google-cloud-bigquery/1.53.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.52.1](/ruby/docs/reference/google-cloud-bigquery/1.52.1/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.51.1](/ruby/docs/reference/google-cloud-bigquery/1.51.1/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.50.0](/ruby/docs/reference/google-cloud-bigquery/1.50.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.49.1](/ruby/docs/reference/google-cloud-bigquery/1.49.1/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.48.1](/ruby/docs/reference/google-cloud-bigquery/1.48.1/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.47.0](/ruby/docs/reference/google-cloud-bigquery/1.47.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.46.1](/ruby/docs/reference/google-cloud-bigquery/1.46.1/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.45.0](/ruby/docs/reference/google-cloud-bigquery/1.45.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.44.2](/ruby/docs/reference/google-cloud-bigquery/1.44.2/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.43.1](/ruby/docs/reference/google-cloud-bigquery/1.43.1/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.42.0](/ruby/docs/reference/google-cloud-bigquery/1.42.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.41.0](/ruby/docs/reference/google-cloud-bigquery/1.41.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.40.0](/ruby/docs/reference/google-cloud-bigquery/1.40.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.39.0](/ruby/docs/reference/google-cloud-bigquery/1.39.0/Google-Cloud-Bigquery-Table-AsyncInserter)\n- [1.38.1](/ruby/docs/reference/google-cloud-bigquery/1.38.1/Google-Cloud-Bigquery-Table-AsyncInserter) \nReference documentation and code samples for the BigQuery API class Google::Cloud::Bigquery::Table::AsyncInserter.\n\nAsyncInserter\n-------------\n\nUsed to insert multiple rows in batches to a topic. See\n[#insert_async](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table#Google__Cloud__Bigquery__Table_insert_async_instance_ \"Google::Cloud::Bigquery::Table#insert_async (method)\"). 
\n\nInherits\n--------\n\n- Object \n\nIncludes\n--------\n\n- MonitorMixin\n\nExample\n-------\n\n```ruby\nrequire \"google/cloud/bigquery\"\n\nbigquery = Google::Cloud::Bigquery.new\ndataset = bigquery.dataset \"my_dataset\"\ntable = dataset.table \"my_table\"\ninserter = table.insert_async do |result|\n if result.error?\n log_error result.error\n else\n log_insert \"inserted #{result.insert_count} rows \" \\\n \"with #{result.error_count} errors\"\n end\nend\n\nrows = [\n { \"first_name\" =\u003e \"Alice\", \"age\" =\u003e 21 },\n { \"first_name\" =\u003e \"Bob\", \"age\" =\u003e 22 }\n]\ninserter.insert rows\n\ninserter.stop.wait!\n```\n\nMethods\n-------\n\n### #flush\n\n def flush() -\u003e AsyncInserter\n\nForces all rows in the current batch to be inserted immediately. \n**Returns**\n\n- ([AsyncInserter](./Google-Cloud-Bigquery-Table-AsyncInserter)) --- returns self so calls can be chained.\n\n### #insert\n\n def insert(rows, insert_ids: nil)\n\nAdds rows to the async inserter to be inserted. Rows will be\ncollected in batches and inserted together.\nSee [#insert_async](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table#Google__Cloud__Bigquery__Table_insert_async_instance_ \"Google::Cloud::Bigquery::Table#insert_async (method)\").\n\n\nSimple Ruby types are generally accepted per JSON rules, along with the following support for BigQuery's\nmore complex types:\n\n\\| BigQuery \\| Ruby \\| Notes \\|\n\\|--------------\\|--------------------------------------\\|----------------------------------------------------\\|\n\\| `NUMERIC` \\| `BigDecimal` \\| `BigDecimal` values will be rounded to scale 9. \\|\n\\| `BIGNUMERIC` \\| `String` \\| Pass as `String` to avoid rounding to scale 9. \\|\n\\| `DATETIME` \\| `DateTime` \\| `DATETIME` does not support time zone. \\|\n\\| `DATE` \\| `Date` \\| \\|\n\\| `GEOGRAPHY` \\| `String` \\| \\|\n\\| `JSON` \\| `String` (Stringified JSON) \\| String, as JSON does not have a schema to verify. \\|\n\\| `TIMESTAMP` \\| `Time` \\| \\|\n\\| `TIME` \\| `Google::Cloud::BigQuery::Time` \\| \\|\n\\| `BYTES` \\| `File`, `IO`, `StringIO`, or similar \\| \\|\n\\| `ARRAY` \\| `Array` \\| Nested arrays, `nil` values are not supported. \\|\n\\| `STRUCT` \\| `Hash` \\| Hash keys may be strings or symbols. \\|\n\nBecause BigQuery's streaming API is designed for high insertion\nrates, modifications to the underlying table metadata are eventually\nconsistent when interacting with the streaming system. In most cases\nmetadata changes are propagated within minutes, but during this\nperiod API responses may reflect the inconsistent state of the\ntable.\n\n\u003cbr /\u003e\n\nThe value `:skip` can be provided to skip the generation of IDs for all rows, or to skip the generation of\nan ID for a specific row in the array. \n**Parameters**\n\n- **rows** (Hash, Array\\\u003cHash\\\u003e) --- A hash object or array of hash objects containing the data. Required. `BigDecimal` values will be rounded to scale 9 to conform with the BigQuery `NUMERIC` data type. To avoid rounding `BIGNUMERIC` type values with scale greater than 9, use `String` instead of `BigDecimal`.\n- **insert_ids** (Array\\\u003cString\\|Symbol\\\u003e, Symbol) *(defaults to: nil)* --- A unique ID for each row. BigQuery uses this property to detect duplicate insertion requests on a best-effort basis. For more information, see [data\n consistency](https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency). Optional. 
If not provided, the client library will assign a UUID to each row before the request is sent.\n\n### #interval\n\n def interval() -\u003e Numeric\n\nThe number of seconds to collect rows\nbefore the batch is inserted. Default is 10. \n**Returns**\n\n- (Numeric) --- the current value of interval\n\n### #max_bytes\n\n def max_bytes() -\u003e Integer\n\nThe maximum size of rows to be\ncollected before the batch is inserted. Default is 10,000,000\n(10MB). \n**Returns**\n\n- (Integer) --- the current value of max_bytes\n\n### #max_rows\n\n def max_rows() -\u003e Integer\n\nThe maximum number of rows to be\ncollected before the batch is inserted. Default is 500. \n**Returns**\n\n- (Integer) --- the current value of max_rows\n\n### #started?\n\n def started?() -\u003e boolean\n\nWhether the inserter has been started. \n**Returns**\n\n- (boolean) --- `true` when started, `false` otherwise.\n\n### #stop\n\n def stop() -\u003e AsyncInserter\n\nBegins the process of stopping the inserter. Rows already in the\nqueue will be inserted, but no new rows can be added. Use [#wait!](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table-AsyncInserter#Google__Cloud__Bigquery__Table__AsyncInserter_wait__instance_ \"Google::Cloud::Bigquery::Table::AsyncInserter#wait! (method)\")\nto block until the inserter is fully stopped and all pending rows\nhave been inserted. \n**Returns**\n\n- ([AsyncInserter](./Google-Cloud-Bigquery-Table-AsyncInserter)) --- returns self so calls can be chained.\n\n### #stopped?\n\n def stopped?() -\u003e boolean\n\nWhether the inserter has been stopped. \n**Returns**\n\n- (boolean) --- `true` when stopped, `false` otherwise.\n\n### #threads\n\n def threads() -\u003e Integer\n\nThe number of threads used to insert\nrows. Default is 4. \n**Returns**\n\n- (Integer) --- the current value of threads\n\n### #wait!\n\n def wait!(timeout = nil) -\u003e AsyncInserter\n\nBlocks until the inserter is fully stopped, all pending rows\nhave been inserted, and all callbacks have completed. Does not stop\nthe inserter. To stop the inserter, first call [#stop](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table-AsyncInserter#Google__Cloud__Bigquery__Table__AsyncInserter_stop_instance_ \"Google::Cloud::Bigquery::Table::AsyncInserter#stop (method)\") and then\ncall [#wait!](/ruby/docs/reference/google-cloud-bigquery/latest/Google-Cloud-Bigquery-Table-AsyncInserter#Google__Cloud__Bigquery__Table__AsyncInserter_wait__instance_ \"Google::Cloud::Bigquery::Table::AsyncInserter#wait! (method)\") to block until the inserter is stopped. \n**Returns**\n\n- ([AsyncInserter](./Google-Cloud-Bigquery-Table-AsyncInserter)) --- returns self so calls can be chained."]]