This guide shows you how to configure the BigQuery Connector for SAP.
Before you begin
Make sure that you or your administrators have completed the following prerequisites:
- Installed the BigQuery Connector for SAP in your SAP environment.
- Set up authentication to access the BigQuery API.
- Enabled the following Google Cloud APIs:
  - The BigQuery API
  - The IAM Service Account Credentials API
For information about how to enable Google Cloud APIs, see Enabling APIs.
Create a BigQuery dataset
To create a BigQuery dataset, your user account must have the proper IAM permissions for BigQuery. For more information, see Required permissions.
In the Google Cloud console, go to the BigQuery page:
Next to your project ID, click the View actions icon, and then click Create dataset.
In the Dataset ID field, enter a unique name. For more information, see Name datasets.
After you set up Google Cloud authentication and authorization, you test access to Google Cloud by retrieving information about this dataset.
For more information about creating BigQuery datasets, see Creating datasets.
Create SAP roles and authorizations for BigQuery Connector for SAP
To work with BigQuery Connector for SAP, in addition to the standard SAP LT Replication Server authorizations, users need access to the custom transactions that are provided with BigQuery Connector for SAP: /GOOG/SLT_SETTINGS and /GOOG/REPLIC_VALID.
To use the Load Simulation tool, users need access to the custom transaction /GOOG/LOAD_SIMULATE that is provided with BigQuery Connector for SAP.
By default, users that have access to the custom transactions /GOOG/SLT_SETTINGS and /GOOG/REPLIC_VALID can modify the settings of any configuration, so if you need to, you can restrict access to specific configurations. For users who only need to view the BigQuery Connector for SAP settings, you can grant them read-only access to the custom transaction /GOOG/SLT_SETT_DISP.
The BigQuery Connector for SAP transport files include the Google BigQuery Settings Authorization object, ZGOOG_MTID, for authorizations that are specific to BigQuery Connector for SAP.
To grant access to the custom transactions and restrict access to specific configurations, perform the following steps:
Using SAP transaction code PFCG, define a role for the BigQuery Connector for SAP.
Grant the role access to the custom transactions /GOOG/SLT_SETTINGS, /GOOG/REPLIC_VALID, and /GOOG/LOAD_SIMULATE.
To limit the access of a role, specify the authorization group of each configuration that the role can access by using the ZGOOG_MTID authorization object. For example:
- Authorization object for BigQuery Connector for SAP (ZGOOG_MTID):
  - Activity: 01
  - Authorization Group: AUTH_GROUP_1, AUTH_GROUP_N
AUTH_GROUP_1 and AUTH_GROUP_N are values that are defined in the SAP LT Replication Server configuration. The authorization groups specified for ZGOOG_MTID must match the authorization groups that are specified for the role in the SAP S_DMIS_SLT authorization object.
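The restriction that an authorization-group check like ZGOOG_MTID enforces can be pictured with a minimal Python sketch. This is an illustrative model only: activity code 01 (change) and the group names come from the example above, the wildcard group `*` comes from the read-only role described in the next section, and the check logic itself is an assumption for illustration, not connector code.

```python
# Illustrative model of an authorization-group check such as ZGOOG_MTID.
# Role structures and group names below are hypothetical examples.

def is_authorized(role_auth_groups, activity, config_auth_group):
    """Return True if the role may perform the activity on a configuration
    that belongs to config_auth_group.

    role_auth_groups: dict mapping activity code -> set of allowed groups.
    Activity "01" means change; "03" means display.
    A group value of "*" grants access to every configuration.
    """
    allowed = role_auth_groups.get(activity, set())
    return "*" in allowed or config_auth_group in allowed

# A change role limited to two authorization groups:
change_role = {"01": {"AUTH_GROUP_1", "AUTH_GROUP_N"}}
# A display-only role with Authorization Group = *:
display_role = {"03": {"*"}}

print(is_authorized(change_role, "01", "AUTH_GROUP_1"))   # True
print(is_authorized(change_role, "01", "AUTH_GROUP_X"))   # False
print(is_authorized(display_role, "03", "AUTH_GROUP_X"))  # True
print(is_authorized(display_role, "01", "AUTH_GROUP_X"))  # False
```

The point of the model: a role with explicit groups can touch only matching configurations, while the wildcard grants display access across all of them.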
Create SAP roles and authorizations for viewing BigQuery Connector for SAP settings
To grant read-only access for the custom transaction /GOOG/SLT_SETT_DISP, perform the following steps:
Using SAP transaction code PFCG, define a role for viewing the BigQuery Connector for SAP settings.
Grant the role access to the custom transaction /GOOG/SLT_SETT_DISP.
Add the authorization object for BigQuery Connector for SAP (ZGOOG_MTID) with the following attributes:
- Activity: 03
- Authorization Group: *
Generate the role profile and assign relevant users to the role.
Configure replication
To configure replication, you specify both BigQuery Connector for SAP and SAP LT Replication Server settings.
Specify access settings in /GOOG/CLIENT_KEY
Use transaction SM30 to specify settings for access to BigQuery. BigQuery Connector for SAP stores the settings as a record in the /GOOG/CLIENT_KEY custom configuration table.
To specify the access settings:
In the SAP GUI, enter transaction code SM30.
Select the /GOOG/CLIENT_KEY configuration table.
Enter values for the following table fields:
- Name (String): Specify a descriptive name for the CLIENT_KEY configuration, such as ABAP_SDK_CKEY. A client key name is a unique identifier that is used by BigQuery Connector for SAP to identify the configurations for accessing BigQuery.
- Service Account Name (String): The name of the service account, in email address format, that was created for BigQuery Connector for SAP in the step Create a service account. For example: sap-example-svc-acct@example-project-123456.iam.gserviceaccount.com. If you're using JSON Web Tokens (JWT) for authentication, then leave this field blank.
- Scope (String): Specify the https://www.googleapis.com/auth/cloud-platform API access scope, as recommended by Compute Engine. This access scope corresponds to the setting Allow full access to all Cloud APIs on the host VM. For more information, see Set access scopes on the host VM. If you're using JSON Web Tokens (JWT) for authentication, then leave this field blank.
- Project ID (String): The ID of the project that contains your target BigQuery dataset.
- Command name (String): Leave this field blank.
- Authorization Class (String): The authorization class to use for replication.
- Authorization Field (Not applicable): Leave this field blank.
- Token Refresh Seconds (Integer): The amount of time, in seconds, before an access token expires and must be refreshed. The default value is 3500. You can override this default by setting a value for the CMD_SECS_DEFLT parameter in the advanced settings. Specifying a value from 1 to 3599 overrides the default expiration time. If you specify 0, then BigQuery Connector for SAP uses the default value.
- Token Caching (Boolean): The flag that determines whether or not the access tokens retrieved from Google Cloud are cached. We recommend that you enable token caching after you are done configuring BigQuery Connector for SAP and testing your connection to Google Cloud. For more information about token caching, see Enable token caching.
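How the Token Refresh Seconds value resolves to an effective token lifetime can be sketched as follows. This is a hedged illustration of the rules stated in the field description (default 3500, override range 1–3599, fallback to CMD_SECS_DEFLT on 0), not connector code:

```python
# Illustrative resolution of the Token Refresh Seconds setting.
DEFAULT_TOKEN_REFRESH_SECONDS = 3500  # documented default

def effective_token_lifetime(token_refresh_seconds, cmd_secs_deflt=None):
    """Sketch of how the effective token lifetime might be resolved.

    A value from 1 to 3599 overrides the default. A value of 0 (or an
    unset field) falls back to CMD_SECS_DEFLT from the advanced
    settings if that is set, and otherwise to the 3500-second default.
    """
    if token_refresh_seconds and 1 <= token_refresh_seconds <= 3599:
        return token_refresh_seconds
    if cmd_secs_deflt:
        return cmd_secs_deflt
    return DEFAULT_TOKEN_REFRESH_SECONDS

print(effective_token_lifetime(1800))     # 1800 (explicit override)
print(effective_token_lifetime(0))        # 3500 (default)
print(effective_token_lifetime(0, 3000))  # 3000 (CMD_SECS_DEFLT)
```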
Configure RFC destinations
To connect the BigQuery Connector for SAP to Google Cloud, we recommend that you use RFC destinations.
To configure the RFC destinations for your replication:
In the SAP GUI, enter transaction code SM59.
(Recommended) Create new RFC destinations by copying the sample RFC destinations GOOG_BIGQUERY and GOOG_IAMCREDENTIALS, and then make a note of the new RFC destination names. You use them in later steps. BigQuery Connector for SAP uses these RFC destinations to connect to the BigQuery and IAM APIs, respectively.
If you want to test RFC destination based connectivity, then you can skip this step and use the sample RFC destinations.
For the RFC destinations that you created, complete the following steps:
Go to the Technical Settings tab and make sure that the Service No. field is set with the value 443. This is the port that is used by the RFC destination for secure communication.
Go to the Logon & Security tab and make sure that the SSL Certificate field is set with the option DFAULT SSL Client (Standard).
Optionally, you can configure proxy settings, enable HTTP compression, and specify Private Service Connect endpoints.
Save your changes.
To test the connection, click Connection Test.
A response containing 404 Not Found is acceptable and expected, because the endpoint specified in the RFC destination corresponds to a Google Cloud service rather than a specific resource hosted by the service. Such a response indicates that the target Google Cloud service is reachable and that no target resource was found.
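The logic behind reading the connection test result can be sketched in a few lines: any HTTP response at all, including 404 Not Found, proves that the service endpoint is reachable, whereas a transport-level failure (no response) does not. This is illustrative logic only, not part of the connector:

```python
def interpret_connection_test(http_status):
    """Classify an RFC destination connection test result.

    http_status: the HTTP status code returned by the endpoint, or None
    if no response was received (for example, a TLS or network failure).
    Any HTTP response, even 404 Not Found, means the target Google Cloud
    service is reachable; the 404 only says that no specific resource
    exists at the bare service endpoint.
    """
    if http_status is None:
        return "unreachable"
    return "reachable"

print(interpret_connection_test(404))   # reachable (expected outcome)
print(interpret_connection_test(None))  # unreachable (fix the destination)
```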
In the SAP GUI, enter transaction code SM30.
In the /GOOG/CLIENT_KEY table that you created in the preceding section, note the value for the Name field.
In the table /GOOG/SERVIC_MAP, create entries with the following field values:
- Google Cloud Key Name: CLIENT_KEY_TABLE_NAME; Google Service Name: bigquery.googleapis.com; RFC Destination: the name of your RFC destination that targets BigQuery. If you're using the sample RFC destination for testing purposes, then specify GOOG_BIGQUERY.
- Google Cloud Key Name: CLIENT_KEY_TABLE_NAME; Google Service Name: iamcredentials.googleapis.com; RFC Destination: the name of your RFC destination that targets IAM. If you're using the sample RFC destination for testing purposes, then specify GOOG_IAMCREDENTIALS.
Replace CLIENT_KEY_TABLE_NAME with the client key name that you noted in the preceding step.
Configure proxy settings
When you use RFC destinations to connect to Google Cloud, you can route communication from BigQuery Connector for SAP through the proxy server that you're using in your SAP landscape.
If you do not want to use a proxy server or don't have one in your SAP landscape, then you can skip this step.
To configure proxy server settings for BigQuery Connector for SAP, complete the following steps:
In the SAP GUI, enter transaction code SM59.
Select your RFC destination that targets IAM.
Go to the Technical Settings tab, and then enter values for the fields in the HTTP Proxy Options section.
Repeat the previous step for your RFC destination that targets BigQuery.
Enable HTTP compression
When you use RFC destinations to connect to Google Cloud, you can enable HTTP compression.
If you do not want to enable this feature, then you can skip this step.
To enable HTTP compression, complete the following steps:
In the SAP GUI, enter transaction code SM59.
Select your RFC destination that targets BigQuery.
Go to the Special Options tab.
For the HTTP Version field, select HTTP 1.1.
For the Compression field, select an appropriate value.
For information about the compression options, see SAP Note 1037677 - HTTP compression compresses certain documents only.
Specify Private Service Connect endpoints
If you want BigQuery Connector for SAP to use Private Service Connect endpoints to allow private consumption of BigQuery and IAM, then you need to create those endpoints in your Google Cloud project and specify them in the respective RFC destinations.
If you want BigQuery Connector for SAP to continue using the default, public API endpoints to connect to BigQuery and IAM, then skip this step.
To configure BigQuery Connector for SAP to use your Private Service Connect endpoints, complete the following steps:
In the SAP GUI, enter transaction code SM59.
Validate that you have created new RFC destinations for BigQuery and IAM. For instructions to create these RFC destinations, see Configure RFC destinations.
Select the RFC destination that targets BigQuery and then complete the following steps:
Go to the Technical Settings tab.
For the Target Host field, enter the name of the Private Service Connect endpoint that you created to access BigQuery.
Go to the Logon and Security tab.
For the Service No. field, make sure that the value 443 is specified.
For the SSL Certificate field, make sure that the option DFAULT SSL Client (Standard) is selected.
Select the RFC destination that targets IAM and then complete the following steps:
Go to the Technical Settings tab.
For the Target Host field, enter the name of the Private Service Connect endpoint that you created to access IAM.
Go to the Logon and Security tab.
For the Service No. field, make sure that the value 443 is specified.
For the SSL Certificate field, make sure that the option DFAULT SSL Client (Standard) is selected.
Enable token caching
To improve replication performance, we recommend that you enable caching for the access token that you retrieve from Google Cloud to access BigQuery.
Enabling token caching makes sure that an access token is reused until the access token expires or is revoked, which in turn reduces the number of HTTP calls made to retrieve new access tokens.
To enable token caching, select the Token Caching flag in the client key table /GOOG/CLIENT_KEY.
When you enable token caching, the access token is cached in the Shared Memory of your SAP LT Replication Server application server for the duration that is set for the Token Refresh Seconds field in the /GOOG/CLIENT_KEY table. If Token Refresh Seconds is not specified or is set to 0, then the access token is cached for the value specified in the CMD_SECS_DEFLT parameter in advanced settings.
For SAP workloads that are not running on Google Cloud, the cached access tokens also prevent technical issues that might arise while replicating large data loads, where several SAP LT Replication Server processes can simultaneously request an access token at any given time.
For SAP workloads that are running on Google Cloud and use a user-managed service account to access BigQuery, token caching can bring a significant improvement as retrieving an access token in this scenario involves making two HTTP calls. For information about the token retrieval, see Test Google Cloud authentication and authorization.
Clear the cached access token
When token caching is enabled and you update the roles assigned to the service account that BigQuery Connector for SAP uses to access BigQuery, the new access token that corresponds to the updated roles is retrieved only after the existing cached token expires. In such situations, you can clear the access token manually.
To clear the cached access token, enter transaction SE38, and then run the program /GOOG/R_CLEAR_TOKEN_CACHE.
Create an SAP LT Replication Server replication configuration
Use SAP transaction LTRC to create an SAP LT Replication Server replication configuration.
If SAP LT Replication Server is running on a different server than the source SAP system, before you create a replication configuration, confirm that you have an RFC connection between the two systems.
Some of the settings in the replication configuration affect performance. To determine appropriate setting values for your installation, see the Performance Optimization Guide for your version of SAP LT Replication Server in the SAP Help Portal.
The interface and configuration options for SAP LT Replication Server might be slightly different depending on which version you are using.
To configure replication, use the procedure for your version of SAP LT Replication Server:
- Configure replication in DMIS 2011 SP17, DMIS 2018 SP02, or later
- Configure replication in DMIS 2011 SP16, DMIS 2018 SP01, or earlier
Configure replication in DMIS 2011 SP17, DMIS 2018 SP02, or later
The following steps configure replication in later versions of SAP LT Replication Server. If you are using an earlier version, see Configure replication in DMIS 2011 SP16, DMIS 2018 SP01, or earlier.
In the SAP GUI, enter transaction code LTRC.
Click the Create configuration icon. The Create Configuration wizard opens.
In the Configuration Name and Description fields, enter a name and a description for the configuration, and then click Next.
You can specify the Authorization Group, which restricts access to the configuration, either now or later.
In the Source System Connection Details panel:
- Select the RFC Connection radio button.
- In the RFC Destination field, specify the name of the RFC connection to the source system.
- Select the checkboxes for Allow Multiple Usage and Read from Single Client as appropriate. For more information, see the SAP LT Replication Server documentation.
- Click Next.
These steps are for an RFC connection. If your source is a database, you can instead select DB Connection, provided that you have already defined a connection by using transaction DBACOCKPIT.
In the Target System Connection Details panel:
- Select the radio button for Other.
- In the Scenario field, select SLT SDK from the drop-down menu.
- Click Next.
On the Specify Transfer Settings panel:
In the Application field of the Data Transfer Settings section, enter /GOOG/SLT_BQ or ZGOOG_SLT_BQ.
In the Job options section, enter starting values in each of the following fields:
- Number of Data Transfer Jobs
- Number of Initial Load Jobs
- Number of Calculation Jobs
In the Replication Options section, select the Real Time radio button.
Click Next.
After reviewing the configuration, click Save.
Make a note of the three-digit ID in the Mass Transfer column. You use it in a later step.
For more information, see the PDF attached to SAP Note 2652704: Replicating Data Using SLT SDK - DMIS 2011 SP17, DMIS 2018 SP02.pdf.
Configure replication in DMIS 2011 SP16, DMIS 2018 SP01, or earlier
The following steps configure replication in earlier versions of SAP LT Replication Server. If you are using a later version, see Configure replication in DMIS 2011 SP17, DMIS 2018 SP02, or later.
- In the SAP GUI, enter transaction code LTRC.
- Click New. A dialog opens for specifying a new configuration.
- In the step Specify Source System:
- Choose RFC Connection as the connection type.
- Enter the RFC connection name.
- Ensure that the field Allow Multiple Usage is selected.
- In the step Specify Target System:
- Enter the connection data to the target system.
- Choose RFC Connection as the connection type.
- In the field Scenario for RFC Communication, select the value Write Data to Target Using BAdI from the drop-down list. The RFC connection is automatically set to NONE.
- In the step Specify Transfer Settings, press F4 Help. The application that you defined previously is displayed in the Application field.
- Make a note of the three-digit ID in the Mass Transfer column. You use it in a later step.
For more information, see the PDF attached to SAP Note 2652704: Replicating Data Using SLT SDK - DMIS 2011 SP15, SP16, DMIS 2018 SP00, SP01.pdf.
Create a mass transfer configuration for BigQuery
Use the custom /GOOG/SLT_SETTINGS transaction to configure a mass transfer for BigQuery and specify the table and field mappings.
Select the initial mass transfer options
When you first enter the /GOOG/SLT_SETTINGS transaction, you select which part of the BigQuery mass transfer configuration you need to edit.
To select the part of the mass transfer configuration:
In the SAP GUI, enter the /GOOG/SLT_SETTINGS transaction preceded by /n: /n/GOOG/SLT_SETTINGS
From the Settings Table drop-down menu in the launch screen for the /GOOG/SLT_SETTINGS transaction, select Mass Transfers.
For a new mass transfer configuration, leave the Mass Transfer Key field blank.
Click the Execute icon. The BigQuery Settings Maintenance - Mass Transfers screen displays.
Specify table creation and other general attributes
In the initial section of a BigQuery mass transfer configuration, you identify the mass transfer configuration and specify the associated client key, as well as certain properties related to the creation of the target BigQuery table.
SAP LT Replication Server saves the mass transfer configuration as a record in the /GOOG/BQ_MASTR custom configuration table.
The fields that you specify in the following steps are required.
In the BigQuery Settings Maintenance - Mass Transfers screen, click the Append Row icon.
In the displayed row, specify the following settings:
- In the Mass Transfer Key field, define a name for this transfer. This name becomes the primary key of the mass transfer.
- In the Mass Transfer ID field, enter the three-digit ID that was generated when you created the corresponding SAP LT Replication Server replication configuration.
- To use the labels or short descriptions of the source fields as the names for the target fields in BigQuery, click the Use Custom Names Flag checkbox. For more information about field names, see Default naming options for fields.
To store the type of change that triggered an insert and to enable the validation of record counts between the source table, SAP LT Replication Server statistics, and the BigQuery table, select the Extra Fields Flag checkbox.
When this flag is set, BigQuery Connector for SAP adds columns to the BigQuery table schema. For more information, see Extra fields for record changes and count queries.
The Break at First Error Flag checkbox, which stops sending data when a record with a data error is encountered, is checked by default. We recommend leaving this checkbox checked. For more information, see The BREAK flag.
Optionally, to automatically reduce the chunk size when the byte size of a chunk exceeds the maximum byte size for HTTP requests that BigQuery accepts, click the Dynamic Chunk Size Flag checkbox. For more information about dynamic chunk size, see Dynamic chunk size.
When a record with a data error is encountered, to skip the record and continue inserting records into the BigQuery table, click the Skip Invalid Records Flag checkbox. We recommend leaving this unchecked. For more information, see The SKIP flag.
In the Google Cloud Key Name field, enter the name of the corresponding /GOOG/CLIENT_KEY configuration. BigQuery Connector for SAP retrieves the Google Cloud Project Identifier automatically from the /GOOG/CLIENT_KEY configuration.
In the BigQuery Dataset field, enter the name of the target BigQuery dataset that you created earlier in this procedure.
In the Is Setting Active Flag field, enable the mass transfer configuration by clicking the checkbox.
Click Save.
A mass transfer record is appended in the /GOOG/BQ_MASTR table, and the Changed By, Changed On, and Changed At fields are automatically populated.
Click Display Table.
The new mass transfer record is displayed followed by the table attribute entry panel.
Specify table attributes
In the second section of the /GOOG/SLT_SETTINGS transaction, you can specify table attributes, such as the table name and table partitioning, as well as the number of records to include in each transmission, or chunk, that is sent to BigQuery.
The settings that you specify are stored as a record in the /GOOG/BQ_TABLE configuration table.
These settings are optional.
To specify table attributes:
Click the Append row icon.
In the SAP Table Name field, enter the name of the source SAP table.
In the External Table Name field, enter the name of the target BigQuery table. If the target table doesn't already exist, BigQuery Connector for SAP creates the table with this name. For the BigQuery naming conventions for tables, see Table naming.
To send uncompressed data for all fields in a table, select Send Uncompressed Flag. With this setting enabled, BigQuery Connector for SAP replicates any empty fields in the source records with the values that the fields are initialized with in the source table. For better performance, don't select this flag.
If you need to send uncompressed data for only specific fields, then don't select Send Uncompressed Flag at the table level. Instead, select Send Uncompressed Flag for those specific fields at field level. This option lets you retain the initial values of specific fields when replicating data to BigQuery, even if you're compressing the rest of the table data. For information about how to modify record compression at field level, see Change record compression at field level.
For more information about the record compression behavior, see Record compression.
Optionally, in the Chunk Size field, specify the maximum number of records to include in each chunk that is sent to BigQuery. We recommend that you use the default chunk size with BigQuery Connector for SAP, which is 10,000 records. If you need to, you can increase the chunk size up to 50,000 records, which is the maximum chunk size that BigQuery Connector for SAP allows.
If the source records have a large number of fields, the number of fields can increase the overall byte size of the chunks, which can cause chunk errors. If this occurs, try reducing the chunk size to reduce the byte size. For more information, see Chunk size in the BigQuery Connector for SAP. Alternatively, to automatically adjust the chunk size, enable dynamic chunk size. For more information, see Dynamic chunk size.
Optionally, in the Partition Type field, specify an increment of time to use for partitioning. Valid values are HOUR, DAY, MONTH, or YEAR. For more information, see Table partitioning.
Optionally, in the Partition Field field, specify the name of a field in the target BigQuery table that contains a timestamp to use for partitioning. When you specify Partition Field, you must also specify Partition Type. For more information, see Table partitioning.
In the Is Setting Active Flag field, enable the table attributes by clicking the checkbox. If the Is Setting Active Flag box is not checked, BigQuery Connector for SAP creates the BigQuery table with the name of the SAP source table, the default chunk size, and no partitioning.
Click Save.
Your attributes are stored as a record in the /GOOG/BQ_TABLE configuration table, and the Changed By, Changed On, and Changed At fields are automatically populated.
Click Display Fields.
The new table attribute record is displayed, followed by the field mapping entry panel.
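The chunk-size rules described above can be sketched as follows. The limits (default 10,000 records, maximum 50,000) come from the text; the halving strategy used to model dynamic chunk-size reduction is an assumption for illustration, not the connector's actual algorithm:

```python
# Illustrative chunk sizing for sends to BigQuery.
DEFAULT_CHUNK_SIZE = 10_000  # documented default
MAX_CHUNK_SIZE = 50_000      # documented maximum

def clamp_chunk_size(requested):
    """Keep a configured chunk size within the documented limits."""
    if requested is None or requested <= 0:
        return DEFAULT_CHUNK_SIZE
    return min(requested, MAX_CHUNK_SIZE)

def shrink_chunk(records, max_bytes, record_size):
    """Model of dynamic chunk-size reduction: halve the chunk until its
    byte size fits under the HTTP request limit. The halving step is a
    hypothetical strategy, not the connector's exact behavior."""
    chunk = list(records)
    while len(chunk) > 1 and len(chunk) * record_size > max_bytes:
        chunk = chunk[: len(chunk) // 2]
    return chunk

print(clamp_chunk_size(None))    # 10000
print(clamp_chunk_size(80_000))  # 50000
# 10,000 records at 500 bytes each exceed a 1 MB limit, so the chunk
# is halved repeatedly (10000 -> 5000 -> 2500 -> 1250) until it fits:
print(len(shrink_chunk(range(10_000), max_bytes=1_000_000, record_size=500)))  # 1250
```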
Customize the default field mapping
If the source SAP table contains timestamp fields or booleans, change the default data type mapping to accurately reflect the data type in the target BigQuery table.
You can also change other data types, as well as the names that are used for target fields.
You can edit the default mapping directly in the SAP GUI or you can export the default mapping to a spreadsheet or a text file so that others can edit the values without requiring access to SAP LT Replication Server.
For more information about the default field mapping and the changes you can make, see Field mapping.
To customize the default mapping for the target BigQuery fields:
In the BigQuery Settings Maintenance - Fields page of the transaction /GOOG/SLT_SETTINGS, display the default field mappings for the mass transfer that you are configuring.
Edit the default target data types in the External Data Element column as needed. In particular, change the target data type for the following data types:
- Timestamps. Change the default target data type from NUMERIC to TIMESTAMP or TIMESTAMP (LONG).
- Booleans. Change the default target data type from STRING to BOOLEAN.
- Hexadecimals. Change the default target data type from STRING to BYTES.
To edit the default data type mapping:
- On the row of the field that you need to edit, click the External Data Element field.
- In the dialog for data types, select the BigQuery data type that you need.
- Confirm your changes, and then click Save.
If you specified the Custom Names flag in the BigQuery Settings Maintenance page, edit the default target field names in the Temporary Field Name column as needed.
The values that you specify override the default names that are shown in the External Field Name column.
Edit the default target field descriptions in the Field Description column as needed.
Optionally, export the field map for external editing. For instructions, see Edit the BigQuery field map in a CSV file.
After all changes are complete and any externally edited values have been uploaded, confirm that the Is Setting Active Flag checkbox is selected. If Is Setting Active Flag is not selected, BigQuery Connector for SAP creates target tables with the default values.
Click Save.
The changes are stored in the /GOOG/BQ_FIELD configuration table, and the Changed By, Changed On, and Changed At fields are automatically populated.
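The data type changes called out in this section amount to a small mapping, which might be summarized like this. The descriptive keys are illustrative labels, not ABAP dictionary type names:

```python
# Default vs. recommended BigQuery target types for the cases called out
# in this section. Keys are descriptive labels, not ABAP dictionary types.
RECOMMENDED_TARGET_TYPE = {
    "timestamp":   ("NUMERIC", "TIMESTAMP"),  # or TIMESTAMP (LONG)
    "boolean":     ("STRING", "BOOLEAN"),
    "hexadecimal": ("STRING", "BYTES"),
}

for kind, (default, recommended) in RECOMMENDED_TARGET_TYPE.items():
    print(f"{kind}: change {default} -> {recommended}")
```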
Change record compression at field level
To improve replication performance, BigQuery Connector for SAP compresses records by omitting all empty fields in the source record; those fields are then initialized with null in the target table in BigQuery.
However, if you need to replicate some empty fields with their initial values
to BigQuery while still using record compression, then you can
select Send Uncompressed Flag for those specific fields.
For more information about the record compression behavior, see Record compression.
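The compression behavior can be pictured with a small sketch: fields still holding their initial value are dropped from the payload unless flagged Send Uncompressed, and dropped fields surface as null in BigQuery. This is an illustration of the behavior only, not connector code, and the SAP field names used are hypothetical examples:

```python
def compress_record(record, initial_values, send_uncompressed=()):
    """Drop fields whose value equals the source table's initial value,
    unless the field is flagged Send Uncompressed.

    Fields omitted here would appear as null in the BigQuery table."""
    return {
        field: value
        for field, value in record.items()
        if field in send_uncompressed or value != initial_values.get(field)
    }

# Hypothetical source record and initial values:
record = {"MATNR": "100", "MAKTX": "", "MEINS": ""}
initial = {"MATNR": "", "MAKTX": "", "MEINS": ""}

print(compress_record(record, initial))
# {'MATNR': '100'}  -- empty fields omitted, stored as null in BigQuery
print(compress_record(record, initial, send_uncompressed={"MEINS"}))
# {'MATNR': '100', 'MEINS': ''}  -- MEINS keeps its initial value
```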
To change record compression at field level, complete the following steps:
In the BigQuery Settings Maintenance - Fields page of the transaction /GOOG/SLT_SETTINGS, display the list of fields for the table whose mass transfer you are configuring.
To send uncompressed data for a field, select Send Uncompressed Flag for that field.
Click Save.
Test replication
Test the replication configuration by starting data provisioning:
Open the SAP LT Replication Server Cockpit (transaction LTRC) in the SAP GUI.
Click the mass transfer configuration for the table replication that you are testing.
Click Data Provisioning.
In the Data Provisioning panel, start data provisioning:
- Enter the name of the source table.
- Click the radio button for the type of data provisioning that you want to test. For example, Start Load.
Click the Execute icon. The data transfer begins and the progress is displayed on the Participating objects screen.
If the table doesn't exist in BigQuery, BigQuery Connector for SAP creates the table from a schema that it builds from the table and field attributes that you previously defined with the /GOOG/SLT_SETTINGS transaction.
The length of time that an initial load of a table takes depends on the size of the table and its records.
Messages are written to the SAP LT Replication Server Application Logs section in transaction LTRC.
Alternatively, you can test the replication to BigQuery by using the Load Simulation tool. For more information, see Load Simulation tool.
Validate replication
You can validate replication using the following methods:
- In SAP LT Replication Server:
- Monitor the replication on the Data Provisioning screen.
- Check for error messages in the Application Logs screen.
- On the table information tab in BigQuery:
- Check the Schema tab to ensure that the schema is correct.
- Check the Preview tab to see a preview of the inserted rows.
- Check the Details tab for the number of rows inserted, the size of the table, and other information.
- If the Extra Fields Flag checkbox was selected when the BigQuery table was configured, run the Replication Validation tool by entering the /GOOG/REPLIC_VALID custom transaction.
Check replication in SAP LT Replication Server
Use transaction LTRC to see the progress of initial load or replication jobs after you start them and to check for error messages.
You can see the status of the load under the Load Statistics tab and the progress of the job under the Data Transfer Monitor tab in SAP LT Replication Server.
On the Application Logs screen of transaction LTRC, you can see all of the messages that are returned by BigQuery, BigQuery Connector for SAP, and SAP LT Replication Server.
Messages that are issued by BigQuery Connector for SAP code in SAP LT Replication Server start with the prefix /GOOG/SLT. Messages that are returned from the BigQuery API start with the prefix /GOOG/MSG.
Messages that are returned by SAP LT Replication Server do not start with a /GOOG/ prefix.
Check replication in BigQuery
In the Google Cloud console, confirm that the table was created and that BigQuery is inserting data into it.
In the Google Cloud console, go to the BigQuery page.
In the search field of the Explorer section, type the name of the target BigQuery table, and then press Enter.
The table information is displayed under a tab in the content pane on the right side of the page.
In the table information section, click the following headings to check the table and row insertion:
- Preview, which shows the rows and fields that are inserted into the BigQuery table.
- Schema, which shows the field names and data types.
- Details, which shows the table size, the total number of rows, and other details.
Run the Replication Validation tool
If the Extra Fields Flag was selected when the BigQuery table was configured, you can use the Replication Validation tool to generate reports that compare the number of records in the BigQuery table with record counts in the SAP LT Replication Server statistics or the source table.
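Conceptually, the tool compares record counts from three places: the source table, the SAP LT Replication Server statistics, and the BigQuery table. A minimal sketch of such a comparison follows; the report shape is illustrative, not the tool's actual output format:

```python
def validate_counts(source_count, slt_stats_count, bigquery_count):
    """Compare record counts across the source table, SAP LT Replication
    Server statistics, and the BigQuery table.

    Returns a small report dict; a mismatch in any of the three counts
    flags the replication for investigation."""
    return {
        "source": source_count,
        "slt_statistics": slt_stats_count,
        "bigquery": bigquery_count,
        "match": source_count == slt_stats_count == bigquery_count,
    }

print(validate_counts(1000, 1000, 1000)["match"])  # True
print(validate_counts(1000, 1000, 997)["match"])   # False
```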
To run the Replication Validation tool:
In the SAP GUI, enter the /GOOG/REPLIC_VALID transaction preceded by /n: /n/GOOG/REPLIC_VALID
In the Processing Options section, click the Execute Validation radio button.
In the Selection Options section, enter the following specifications:
- From the drop-down menu in the GCP Partner Identifier field, select BigQuery.
- From the drop-down menu in the Check Type field,
select the type of report to generate:
- Initial Load Counts
- Replication Counts
- Current Counts
- If the Check Date field is displayed, specify the date that you need the counts for.
- In the Mass Transfer Key field, enter the mass transfer configuration name.
- Optionally, in the Tables Names field, specify the table names in the mass transfer configuration for which you need to generate the report.
Run the Replication Validation tool by clicking the Execute icon.
After the validation check is complete, in the Processing Options section, display the report by clicking the Display Report radio button and then clicking the Execute icon.
For more information, see Replication Validation tool.