This section provides troubleshooting guidance for common issues encountered when using BigQuery sink connectors.
Troubleshooting table-driven configuration
In table-driven configuration mode, Kafka messages must match the schema of the target BigQuery table. The following are common issues that can occur with this configuration method.
Missing non-nullable column
If a Kafka message omits a field that exists as a non-nullable column in the BigQuery table, the write operation fails with an error message similar to the following:
Failed to write rows after BQ table creation or schema update within 30 attempts for: GenericData{classInfo=[datasetId, projectId, tableId], {datasetId=<datasetID>, tableId=<tableID>}}
Replace <datasetID> and <tableID> with the actual dataset and table IDs.
To resolve the issue, change the mode of the BigQuery column to NULLABLE and restart the connector.
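The column mode can be relaxed through the BigQuery API. The following is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, table, and column names are placeholders for illustration:

from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my_project.my_dataset.my_table")  # placeholder table ID

# Rebuild the schema, switching the affected column's mode from REQUIRED to
# NULLABLE. Relaxing REQUIRED to NULLABLE is an allowed schema modification.
new_schema = []
for field in table.schema:
    if field.name == "my_required_column":  # placeholder column name
        field = bigquery.SchemaField(field.name, field.field_type, mode="NULLABLE")
    new_schema.append(field)

table.schema = new_schema
client.update_table(table, ["schema"])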
Undefined fields
If any fields in the Kafka message are not defined in the BigQuery table schema, the write operation fails with an error message similar to the following:
Insertion failed at table repairScenario for following rows:
[row index 0] (Failure reason : The source object has fields unknown to BigQuery: root.<fieldName>.)
Replace <fieldName> with the name of the undefined field.
To resolve the issue, add the missing fields to the BigQuery table and restart the connector.
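As a sketch, the missing field can be appended to the table schema with the google-cloud-bigquery Python client; the table ID, field name, and field type below are placeholders:

from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my_project.my_dataset.my_table")  # placeholder table ID

# Append the field that exists in the Kafka message but not in the table.
# Adding a NULLABLE column is an allowed schema modification in BigQuery.
new_schema = list(table.schema)
new_schema.append(bigquery.SchemaField("new_field", "STRING", mode="NULLABLE"))  # placeholder field

table.schema = new_schema
client.update_table(table, ["schema"])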
Type mismatch
If a type mismatch occurs between a Kafka message field and the corresponding BigQuery table column (for example, a string in Kafka and an integer in BigQuery), the write operation fails with an error message similar to the following:
[row index 0] (location <field>, reason: invalid): Cannot convert value to <type> (bad value): <val>
Replace <field>, <type>, and <val> with the relevant field name, data type, and value, respectively.
This is a known issue. To avoid the error, make sure that the data type of each field in the Kafka message matches the data type of the corresponding BigQuery column.
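One way to catch such mismatches early is to compare a decoded message against the table schema before producing it. The following is a hypothetical helper using the google-cloud-bigquery Python client; the simplified Python-to-BigQuery type mapping is an assumption and not exhaustive:

from google.cloud import bigquery

# Simplified mapping from Python types to compatible BigQuery column types.
PYTHON_TO_BQ = {
    str: {"STRING"},
    int: {"INTEGER", "INT64"},
    float: {"FLOAT", "FLOAT64"},
    bool: {"BOOLEAN", "BOOL"},
}

def find_type_mismatches(message: dict, table_id: str) -> list[str]:
    """Return a list of fields in the message that conflict with the table schema."""
    client = bigquery.Client()
    table = client.get_table(table_id)
    schema = {field.name: field.field_type for field in table.schema}
    problems = []
    for name, value in message.items():
        expected = schema.get(name)
        if expected is None:
            problems.append(f"{name}: not defined in the table schema")
        elif value is not None and expected not in PYTHON_TO_BQ.get(type(value), {expected}):
            problems.append(f"{name}: value {value!r} does not match column type {expected}")
    return problems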
AppendRows request too large
When using the StorageWrite mode of the connector, you might see an error similar to the following:
INVALID_ARGUMENT: AppendRows request too large: 11053472 limit 10485760
The connector attempts to write all messages returned by a single Kafka poll to BigQuery in a single batch. If the size of the batch exceeds the BigQuery limit of 10485760 bytes, the write operation fails.
To resolve this error, set the consumer.override.max.poll.records
configuration property on the connector to a smaller number. The default value
for this parameter is 500.
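For example, the override might appear in the connector configuration as follows; the value 100 is illustrative and should be tuned to your message sizes:

consumer.override.max.poll.records=100

Lowering max.poll.records reduces the number of records fetched per poll, which in turn bounds the size of each batch sent to BigQuery.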