This section provides troubleshooting guidance for common issues encountered when using BigQuery sink connectors.
Troubleshooting table-driven configuration
In table-driven configuration mode, Kafka messages must closely align with the table schema. The following are common issues that might occur when you use this configuration mode.
Missing non-nullable column
If a Kafka message omits a field that exists as a non-nullable column in the BigQuery table, the write operation fails with an error message similar to the following:
Failed to write rows after BQ table creation or schema update within 30 attempts for: GenericData{classInfo=[datasetId, projectId, tableId], {datasetId=<datasetID>, tableId=<tableID>}}
Replace <datasetID> and <tableID> with the actual dataset and table IDs.
To resolve the issue, change the mode of the affected BigQuery column to NULLABLE and restart the connector.
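If the field is genuinely optional in your Kafka messages, one way to relax the column mode is through the BigQuery client library. The following is a minimal sketch that uses the google-cloud-bigquery Python client; the table ID and the order_notes column name are placeholders for your own values.

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID; replace with your project, dataset, and table.
table_id = "my-project.my_dataset.my_table"
table = client.get_table(table_id)

# Rebuild the schema, relaxing the REQUIRED column that the Kafka
# messages sometimes omit (here, a hypothetical "order_notes" column).
new_schema = []
for field in table.schema:
    if field.name == "order_notes" and field.mode == "REQUIRED":
        new_schema.append(
            bigquery.SchemaField(field.name, field.field_type, mode="NULLABLE")
        )
    else:
        new_schema.append(field)

table.schema = new_schema
client.update_table(table, ["schema"])  # Apply the relaxed column mode.

The same change can also be made with a BigQuery DDL statement of the form ALTER TABLE ... ALTER COLUMN ... DROP NOT NULL.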
Undefined fields
If any fields in the Kafka message are not defined in the BigQuery table schema, the write operation fails with an error message similar to the following:
Insertion failed at table repairScenario for following rows:
[row index 0] (Failure reason : The source object has fields unknown to BigQuery: root.<fieldName>.)
Replace <fieldName> with the name of the undefined field.
To resolve the issue, add the missing fields to the BigQuery table and restart the connector.
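As a sketch of that fix, the google-cloud-bigquery Python client can append the missing column to the existing schema; the table ID and the customer_tier field below are placeholders for the values reported in your error.

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID; replace with your project, dataset, and table.
table_id = "my-project.my_dataset.my_table"
table = client.get_table(table_id)

# Append the field named in the error message (hypothetical name and type).
# Columns added to an existing table must be NULLABLE or REPEATED.
new_schema = list(table.schema)
new_schema.append(bigquery.SchemaField("customer_tier", "STRING", mode="NULLABLE"))

table.schema = new_schema
client.update_table(table, ["schema"])  # Add the missing column.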
Type mismatch
If a type mismatch occurs between a Kafka message field and the corresponding BigQuery table column (for example, a string in Kafka and an integer in BigQuery), the write operation fails with an error message similar to the following:
[row index 0] (location <field>, reason: invalid): Cannot convert value to <type> (bad value): <val>
Replace <field>, <type>, and <val> with the relevant field name, data type, and value, respectively.
This is a known issue.
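To confirm which type BigQuery expects for each column, you can inspect the table schema and compare it with the field types that your Kafka messages produce. The following sketch uses the google-cloud-bigquery Python client with a placeholder table ID.

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID; replace with your project, dataset, and table.
table_id = "my-project.my_dataset.my_table"
table = client.get_table(table_id)

# Print each column's declared type and mode so they can be compared
# against the values produced by the Kafka message fields.
for field in table.schema:
    print(f"{field.name}: {field.field_type} ({field.mode})")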