This page explains how to set up and view prebuilt reports in BigQuery. The reports are built from Cloud Logging logs. To set up the reports, you perform a one-time activity: create a log sink that streams log data to BigQuery, and then run the prebuilt scripts.
Required IAM roles
You must have the following IAM roles to view prebuilt reports in BigQuery. Learn how to grant IAM roles.
Role | When to grant the role
---|---
Logs Configuration Writer (`roles/logging.configWriter`) or Logging Admin (`roles/logging.admin`), and BigQuery Data Editor (`roles/bigquery.dataEditor`) | To create the sink and the BigQuery dataset through the Google Cloud console.
Owner (`roles/owner`) | To create the sink and the BigQuery dataset through the Google Cloud CLI.
BigQuery Admin (`roles/bigquery.admin`) | To write custom queries or download queries.
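If you want to grant one of these roles from the command line, the following is a minimal sketch; `PROJECT_ID` and `USER_EMAIL` are placeholders for your own values:

```bash
# Grant the Logging Admin role on the project to a user.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/logging.admin"
```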
Create a sink and route logs to BigQuery
BigQuery stores only the logs that are generated after you create the log sink. Logs generated before the log sink was created aren't available in BigQuery. You can create the log sink through the Google Cloud console or the Google Cloud CLI.

To create a sink and route logs to BigQuery, follow these steps:
Console
- In the Google Cloud console, go to the **Log Router** page.
- Select an existing Google Cloud project.
- Click **Create sink**.
- In the **Sink details** panel, enter the following details:
  - **Sink name**: enter the sink name `BackupandDR_reports_sink`. You must use the sink name `BackupandDR_reports_sink` so that you can identify the Backup and DR reports sink among your other sinks.
  - **Sink description**: describe the purpose or use case for the sink.
- In the **Sink destination** panel, do the following:
  - In the **Select sink service** menu, select the **BigQuery dataset** sink service.
  - For **Select BigQuery dataset**, select **Create new BigQuery dataset**.
  - On the **Create dataset** page, do the following:
    - For **Dataset ID**, enter the dataset name `BackupandDR_reports` so that you can distinguish it from other datasets. Don't change the `BackupandDR_reports` dataset name.
    - For **Location type**, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.
    - Optional: If you want tables in this dataset to expire, select **Enable table expiration**, and then specify the **Default maximum table age** in days.
    - Click **Create dataset**.
- In the **Choose logs to include in sink** panel, do the following:
  - In the **Build inclusion filter** field, enter a filter expression that matches the log entries that you want to include:

    ```
    logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
    ```

  - To verify that you entered the correct filter, select **Preview logs**. The Logs Explorer opens in a new tab with the filter pre-populated. You can also spot-check the filter from the command line, as shown in the sketch after these steps.
- Optional: In the **Choose logs to filter out of sink** panel, add exclusion filters for any log entries that you don't want to route to BigQuery.
- Select **Create sink**.
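To confirm from the CLI that the filter matches Backup and DR log entries, the following is a minimal sketch using `gcloud logging read`; replace `PROJECT_ID` with your project ID:

```bash
# List up to five recent log entries that match the sink's inclusion filter.
gcloud logging read \
  'logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"' \
  --project=PROJECT_ID \
  --limit=5
```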
You can view the dataset in BigQuery Studio.
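If you prefer the command line, you can also confirm that the dataset exists with the `bq` tool; this is a minimal sketch, assuming the default dataset name:

```bash
# Print the dataset's metadata; the command fails if the dataset doesn't exist.
bq show --format=prettyjson BackupandDR_reports
```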
gcloud
- Go to **Activate Cloud Shell**, and then click **Open Editor**.
- In the editor, select **File**, and then select **New Text File**.
- Copy and paste the following script:
```bash
#!/bin/bash
echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"

# Get the default project ID
DEFAULT_PROJECT_ID=$(gcloud config get-value project)

read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID

# Use default if no input is provided
if [ -z "$PROJECT_ID" ]; then
  PROJECT_ID=$DEFAULT_PROJECT_ID
fi

# Set the project ID
result=$(gcloud config set project $PROJECT_ID)
if [ $? -ne 0 ]; then
  echo "Error setting the project to $PROJECT_ID"
  exit 1
fi

# --- Check if BigQuery API is already enabled, enable if not ---
echo "Checking if BigQuery API is enabled..."
if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
  echo "BigQuery API is already enabled for $PROJECT_ID"
else
  echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the pricing at https://cloud.google.com/backup-disaster-recovery/docs/monitor-reports/reports-overview#pricing before proceeding."
  read -p "Do you want to continue(Y/N)?" continue
  if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
    echo "Enabling BigQuery API..."
    result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
    if [ $? -eq 0 ]; then
      echo "Successfully enabled BigQuery api for $PROJECT_ID"
    else
      echo "Error in setting up the BigQuery api for the project. $result"
      exit 1
    fi
  else
    exit 0
  fi
fi

# --- Check if BigQuery data set already exists, create if not ---
echo "Checking if BigQuery data set exists..."
if bq ls | grep "BackupandDR_reports" >/dev/null; then
  echo "Dataset BackupandDR_reports already exists for $PROJECT_ID"
else
  echo "Creating bigQuery dataset BackupandDR_reports..."
  # --- Get dataset location from user (default: US) ---
  read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
  if [ -z "$DATASET_LOCATION" ]; then
    DATASET_LOCATION="US"
  fi
  # --- Get table expiration in days from user (default: no expiration) ---
  read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
  # Calculate table expiration in seconds if provided
  if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
    TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
    EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
  else
    EXPIRATION_FLAG=""
  fi
  result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG BackupandDR_reports)
  if [ $? -eq 0 ]; then
    echo "Created a BigQuery dataset BackupandDR_reports successfully."
  else
    echo ""
    echo "ERROR : Failed to create the BigQuery dataset."
    echo $result
    exit 1
  fi
fi

# --- Check if Log Sink already exists, create if not ---
echo "Checking if Log Sink exists..."
if gcloud logging sinks list | grep "BackupandDR_reports_sink" >/dev/null; then
  echo "Log Sink BackupandDR_reports_sink already exists for $PROJECT_ID"
else
  log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
  echo "Creating log sink BackupandDR_reports_sink..."
  result=$(gcloud logging sinks create BackupandDR_reports_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/BackupandDR_reports --log-filter="logName=~\"$log_filter\"")
  if [ $? -eq 0 ]; then
    echo "Created a logsink BackupandDR_reports_sink successfully."
  else
    echo ""
    echo "ERROR : Failed to create logsink."
    exit 1
  fi
fi

# --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
if [ $? -eq 0 ]; then
  echo "Added permission for cloud logging to write to BigQuery datasets"
else
  echo ""
  echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have correct access rights in order to be able to proceed."
  exit 1
fi

echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to bigquery."
exit 0
```
- Save the file with a name that uses the Bash file extension, for example `script.sh`.
- Run the `bash` command with the file that you created, for example `bash script.sh`.

You can view the created dataset in BigQuery Studio.
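To confirm that the script created the sink and the dataset, you can inspect both from the CLI; this is a minimal sketch, assuming the default names used on this page:

```bash
# Show the sink configuration, including its BigQuery destination and log filter.
gcloud logging sinks describe BackupandDR_reports_sink

# Confirm that the dataset exists in the current project.
bq show BackupandDR_reports
```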
Set up prebuilt reports
You can set up prebuilt reports by running the following script against the dataset that the log sink routes log data to.

The script adds the following prebuilt reports: Backup Jobs Summary, Recovery Jobs Summary, Failed Jobs, Job Details, Unresolved Failures, Daily Schedule Compliance, Protected Data Resource, Storage Resource Utilization, Unprotected Resource, Mounted Image Details, and Connector Version Details.
To set up prebuilt reports in BigQuery, follow these steps:
gcloud
- Go to **Activate Cloud Shell**, and then click **Open Editor**.
- Create a new text file.
- Copy and paste the following prebuilt reports script into the file:
```bash
backup_job_summary_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent, COUNT(*) as Total_Jobs, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded, COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed, COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled, COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run, FROM filtered_data WHERE jsonPayload.job_category = "Backup Job" AND jsonPayload.job_status != "retry" GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type ORDER BY Date, Resource_Name;'

recovery_job_summary_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent, COUNT(*) as Total_Jobs, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded, COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed, COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled, COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run, FROM filtered_data WHERE jsonPayload.job_category = "Recovery Job" AND jsonPayload.job_status != "retry" GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type ORDER BY Date, Resource_Name;'

failed_job_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` GROUP BY insertId ) SELECT jsonPayload.job_name as Job_Name, jsonPayload.job_category as Job_Category, jsonPayload.job_type as Job_Type, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message, jsonPayload.hostname as Host_Name, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.appliance_name as Appliance_Name, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(jsonPayload.job_start_time)) as Job_Start_Time, DATE(jsonPayload.job_start_time) as Job_Date, jsonPayload.backup_type as Backup_Type FROM filtered_data WHERE jsonPayload.job_status = "failed" ORDER BY Job_Start_Time DESC, Resource_Name'

job_details_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` GROUP BY insertId ) SELECT jsonPayload.job_name as Job_Name, jsonPayload.job_category as Job_Category, jsonPayload.job_type as Job_Type, jsonPayload.log_backup as Log_Backup, jsonPayload.job_status as Job_Status, DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Job_Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message, jsonPayload.job_initiation_failure_reason as Job_Initiation_Failure_Reason, jsonPayload.appliance_name as Appliance_Name, jsonPayload.hostname as Host_Name, jsonPayload.target_appliance_name as Target_Appliance_Name, jsonPayload.target_pool_name as Target_Pool_Name, jsonPayload.target_host_name as Target_Host_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_queued_time !="" THEN jsonPayload.job_queued_time ELSE NULL END)) as Job_Queue_Time, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_start_time!="" THEN jsonPayload.job_start_time ELSE NULL END)) as Job_Start_Time, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_end_time!="" THEN jsonPayload.job_end_time ELSE NULL END)) as Job_End_Time, jsonPayload.job_duration_in_hours as Job_Duration_In_Hours, jsonPayload.backup_consistency as Backup_Consistency, jsonPayload.resource_data_size_in_gib as Resource_Data_Size_In_GiB, jsonPayload.snapshot_disk_size_in_gib as Snapshot_Disk_Size_in_GiB, jsonPayload.pre_compress_in_gib as Pre_Compress_In_GiB, jsonPayload.compression_ratio as Compression_Ratio, jsonPayload.data_sent_in_gib as Streamsnap_Data_Sent_In_GiB, jsonPayload.data_written_in_gib as Streamsnap_Data_Written_In_GiB, jsonPayload.data_copied_in_gib as Data_Copied_In_GiB, jsonPayload.data_change_rate as Data_Change_Rate_Percent, jsonPayload.backup_type as Backup_Type, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.recovery_point!="" THEN jsonPayload.recovery_point ELSE NULL END)) as Recovery_Point, jsonPayload.onvault_pool_storage_consumed_in_gib as OnVault_Pool_Storage_Consumed_In_GiB, FROM filtered_data ORDER BY Job_Start_Time DESC, Resource_Name'

unresolved_failures_query='WITH job_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE jsonPayload.job_status in ("succeeded", "failed") GROUP BY insertId ),
-- Select resources which have their latest status as failed
unresolved_failed_resources AS ( SELECT jsonPayload.appliance_name as Appliance_Name, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, jsonPayload.job_status as Job_Status, jsonPayload.job_start_time as Job_Start_Time, jsonPayload.hostname as Hostname, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message FROM job_data j1 WHERE jsonPayload.job_start_time = (SELECT MAX(jsonPayload.job_start_time) FROM job_data j2 WHERE j1.jsonPayload.appliance_name = j2.jsonPayload.appliance_name AND j1.jsonPayload.resource_name = j2.jsonPayload.resource_name AND j1.jsonPayload.resource_type = j2.jsonPayload.resource_type AND j1.jsonPayload.job_type = j2.jsonPayload.job_type) AND jsonPayload.job_status = "failed" ),
-- Select all jobs for all the resources that have unresolved failures
all_jobs_of_unresolved_failure_resources AS ( SELECT jsonPayload.appliance_name as Appliance_Name, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, jsonPayload.job_category as Job_Category, jsonPayload.job_name as Job_Name, jsonPayload.recovery_point as Recovery_Point, jsonPayload.job_status as Job_Status, jsonPayload.job_start_time as Job_Start_Time, jsonPayload.hostname as Hostname, jsonPayload.error_code as Error_Code FROM job_data j1 JOIN unresolved_failed_resources fr ON j1.jsonPayload.appliance_name = fr.Appliance_Name AND j1.jsonPayload.resource_name = fr.Resource_Name AND j1.jsonPayload.resource_type = fr.Resource_Type AND j1.jsonPayload.job_type = fr.Job_Type ),
-- Select the latest successful jobs for the resources with unresolved failures
latest_success AS ( SELECT * FROM all_jobs_of_unresolved_failure_resources all_jobs WHERE all_jobs.Job_Status = "succeeded" AND all_jobs.Job_Start_Time = ( SELECT MAX(Job_Start_Time) FROM all_jobs_of_unresolved_failure_resources all_jobs_2 WHERE all_jobs_2.Appliance_Name = all_jobs.Appliance_Name AND all_jobs_2.Resource_Name = all_jobs.Resource_Name AND all_jobs_2.Resource_Type = all_jobs.Resource_Type AND all_jobs_2.Job_Type = all_jobs.Job_Type AND all_jobs_2.job_status = "succeeded" ) ),
-- Select all failed jobs after the last success for the resources with unresolved failures
failed_jobs_to_report AS ( SELECT all_jobs.Appliance_Name as Appliance_Name, all_jobs.Resource_Name as Resource_Name, all_jobs.Resource_Type as Resource_Type, all_jobs.Job_Type as Job_Type, all_jobs.Job_Name as Job_Name, all_jobs.Recovery_Point as Recovery_Point, all_jobs.Job_Status as Job_Status, all_jobs.Job_Start_Time as Job_Start_Time, all_jobs.Hostname as Hostname, all_jobs.Error_Code as Error_Code, all_jobs.Job_Category as Job_Category, FROM all_jobs_of_unresolved_failure_resources all_jobs LEFT JOIN latest_success success ON success.Appliance_Name = all_jobs.Appliance_Name AND success.Resource_Name = all_jobs.Resource_Name AND success.Resource_Type = all_jobs.Resource_Type AND success.Job_Type = all_jobs.Job_Type WHERE all_jobs.Job_Status = "failed" AND TIMESTAMP(all_jobs.Job_Start_Time) > COALESCE(TIMESTAMP(success.Job_Start_Time), TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY))) )
SELECT failed_jobs.Resource_Name, failed_jobs.Resource_Type, failed_jobs.Job_Type, failed_jobs.Job_Category, COUNT(*) as Error_Count, failed_jobs.Error_Code, ANY_VALUE(ufr.Error_Message) as Error_Message, FORMAT_TIMESTAMP("%F %R %Z", MIN(TIMESTAMP(failed_jobs.Job_Start_Time))) as First_Failure, FORMAT_TIMESTAMP("%F %R %Z", MAX(TIMESTAMP(failed_jobs.Job_Start_Time))) as Last_Failure, TIMESTAMP_DIFF(CURRENT_TIMESTAMP,TIMESTAMP(MAX(failed_jobs.Job_Start_Time)), HOUR) as Hours_Since_Last_Failure, failed_jobs.Appliance_Name FROM failed_jobs_to_report failed_jobs LEFT JOIN unresolved_failed_resources ufr ON failed_jobs.Appliance_Name = ufr.Appliance_Name AND failed_jobs.Resource_Name = ufr.Resource_Name AND failed_jobs.Resource_Type = ufr.Resource_Type AND failed_jobs.Job_Type = ufr.Job_Type AND failed_jobs.Error_Code = ufr.Error_Code GROUP BY Appliance_Name, Resource_Name, Resource_Type, Job_Type, Job_Category, Error_Code ORDER BY Last_Failure DESC '

daily_schedule_compliance_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_daily_schedule_compliance_*` WHERE DATE(jsonPayload.date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 8 DAY) AND DATE(jsonPayload.date) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, DATE(jsonPayload.date) as Date, jsonPayload.backup_window_start_time as Backup_Window_Start_Time, jsonPayload.job_type as Job_Type, jsonPayload.status as Status, jsonPayload.comment as Comment, jsonPayload.appliance_name as Appliance_Name, jsonPayload.host_name as Host_Name FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name, Date; '

protected_data_resource_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_protected_resource_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.host_name as Host_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.protected_on as Protected_On, jsonPayload.recovery_point as Recovery_Point, jsonPayload.protected_data_in_gib as Protected_Data_in_Gib, jsonPayload.onvault_in_gib as OnVault_Data_in_Gib, jsonPayload.backup_plan_restrictions as Backup_Plan_Restrictions, jsonPayload.backup_inclusion_or_exclusion as Backup_Inclusion_or_Exclusion, jsonPayload.policy_overrides as Policy_Overrides, jsonPayload.appliance_name as Appliance_Name, jsonPayload.remote_appliance as Remote_Appliance, jsonPayload.source_appliance as Source_Appliance, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name;'

storage_utilization_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.appliance_name as Appliance_Name, jsonPayload.pool_name as Pool_Name, jsonPayload.storage_type as Storage_Type, jsonPayload.total_capacity_in_gib as Total_Capacity_In_GiB, jsonPayload.used_capacity_in_gib as Used_Capacity_In_GiB, jsonPayload.utilization_percentage as Utilization_percentage, jsonPayload.appliance_id as Appliance_id, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Appliance_Name;'

mounted_image_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_mounted_images_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.source_resource_name as Source_Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.appliance_name as Appliance_Name, jsonPayload.mounted_image_name as Mounted_Image_Name, jsonPayload.source_image_name as Source_Image_Name, jsonPayload.source_image_type as Source_Image_Type, jsonPayload.recovery_point_date as Recovery_Point_Date, jsonPayload.last_mount_date as Last_Mount_Date, jsonPayload.source_host_name as Source_Host_Name, jsonPayload.mounted_host_name as Mounted_Host_Name, jsonPayload.mounted_resource_name as Mounted_Resource_Name, jsonPayload.resource_virtual_size_in_gib as Resource_Virtual_Size_In_Gib, jsonPayload.storage_consumed_in_gib as Storage_Consumed_In_Gib, jsonPayload.mounted_resource_label as Mounted_Resource_Label, jsonPayload.restorable_object as Restorable_Object, jsonPayload.mounted_image_age_in_days as Mounted_Image_Age_In_Days, jsonPayload.user_name as User_Name, jsonPayload.read_mode as Read_Mode, jsonPayload.resource_size_in_gib as Resource_Size_In_Gib, jsonPayload.source_image_expiration_date as Source_Image_Expiration_Date, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Mounted_Resource_Name;'

unprotected_resource_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_unprotected_resource_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.host_name as Host_Name, jsonPayload.instance_name as Instance_Name, jsonPayload.discovered_on as Discovered_On, jsonPayload.discovered_by as Discovery_Appliance, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name;'

connector_version_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_connector_version_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.appliance_name as Appliance_Name, jsonPayload.host_name as Host_Name, jsonPayload.host_os_type as Host_OS_Type, jsonPayload.host_ip_address as Host_IP_Address, jsonPayload.db_authentication as DB_Authentication, jsonPayload.installed_version as Installed_Version, jsonPayload.available_version as Available_Version, jsonPayload.version_check as Version_Check, jsonPayload.disk_preference as Disk_Preference, jsonPayload.transport as Transport, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Host_Name;'

backup_policy_details_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_rule_policy_details_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.backup_plan_policy_template as Backup_Template, jsonPayload.backup_rule_policy_name as Backup_Policy_Name, jsonPayload.backup_rule_policy_type as Backup_Policy_Type, jsonPayload.backup_schedule_type as Backup_Schedule_Type, jsonPayload.backup_window_start_time as Backup_Window_Start_Time, jsonPayload.backup_window_end_time as Backup_Window_End_Time, jsonPayload.backup_frequency_in_days as Backup_Frequency_In_Days, CASE WHEN jsonPayload.backup_consistency = "Crash Consistency Backup" THEN "Crash Consistent Backup" WHEN jsonPayload.backup_consistency = "Application Consistency Backup" THEN "Application Consistent Backup" ELSE jsonPayload.backup_consistency END as Backup_Consistency, jsonPayload.log_backup_frequency_in_minutes as Log_Backup_Frequency_In_Minutes, jsonPayload.backup_retention_period_in_days as Backup_Retention_Period_In_Days, jsonPayload.protected_resources_count as Protected_Resources_Count, jsonPayload.schedule_disabled_resources_count as Schedule_Disabled_Resources_Count, jsonPayload.appliance_name as Appliance_Name, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id);'

table_not_exists_error_flag=false

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_recovery_jobs%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_backup_recovery_jobs does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_daily_schedule_compliance%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_daily_schedule_compliance does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_protected_resource%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_protected_resource does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_storage_utilization%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_backup_storage_utilization does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_unprotected_resource%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_unprotected_resource does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_mounted_images%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_mounted_images does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_connector_version%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_connector_version does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

if [ $table_not_exists_error_flag == true ]; then
  echo -e "\e[1m\e[33mAll the prebuilt reports could not be created successfully in BigQuery as one or more report logs are missing in the dataset DATASET_NAME."
  echo -e "Please ensure that you have waited for at least 8 hours after creating the sink, before running the script to create pre built reports. Try re-running the script again after some time to fix the issue."
  echo -e "Reach out to Google Cloud Support in case you are still facing this issue.\e[0m"
fi

# Check if view exists
check_view=$(bq --project_id PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.VIEWS WHERE table_name='Storage Resource Utilization'" | tail -n +2 | tr -d ' ')
if [ "$check_view" == 'true' ]; then
  # Check if view is working correctly
  fetch_result=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*) FROM \`DATASET_NAME.Storage Resource Utilization\`" 2>&1)
  if [[ $? -eq 0 ]]; then
    echo -e "\e[1m\e[33mTo fix a schema related error in the Backup Storage Utilization view, we updated the software on June 3rd, 2024. As part of the fix, storage utilization data before June 3rd 2024, will be no longer available in the report.\e[0m"
    echo -e "Deleting the data before June 3rd 2024..."
    start_date="20240301"
    end_date="20240603"
    current_date="$start_date"
    while [ "$current_date" -le "$end_date" ]; do
      table_name="DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_${current_date}"
      bq rm -f -t "$table_name"
      current_date=$(date -d "$(date -d "$current_date" +%Y-%m-%d) + 1 day" +%Y%m%d)
    done
    echo -e "Completed successfully."
  fi
fi
```
Replace the following:

- `PROJECT_ID`: the project ID.
- `DATASET_NAME`: the BigQuery dataset name. We recommend that you use the default dataset name, `BackupandDR_reports`, for the Backup and DR Service reports dataset. If you set a name other than the default dataset name when you created the sink, change `DATASET_NAME` to match your BigQuery dataset. You can apply both replacements in one pass, as shown in the sketch after this list.
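The following is a minimal sketch using `sed`, assuming you saved the script as `backupdrreports.sh` (as described in the next step); `my-project` is a hypothetical project ID:

```bash
# Substitute the PROJECT_ID and DATASET_NAME placeholders in place;
# adjust the values to your environment.
sed -i 's/PROJECT_ID/my-project/g; s/DATASET_NAME/BackupandDR_reports/g' backupdrreports.sh
```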
- Save the file with a name that uses the Bash file extension, for example `backupdrreports.sh`.
- Run the `bash` command with the file that you created, for example `bash backupdrreports.sh`.

You can view the prebuilt reports under the dataset in BigQuery Studio.
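Each prebuilt report is a BigQuery view, so you can also query it directly; this is a minimal sketch, assuming the default dataset name (the backticks are required because the view names contain spaces):

```bash
# Preview the first few rows of the Job Details report.
bq query --use_legacy_sql=false \
  'SELECT * FROM `BackupandDR_reports.Job Details` LIMIT 10'
```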
After you set up the prebuilt reports, if you have access to Google Workspace, you can view the results in Connected Sheets. If you don't have access to Google Workspace, you can view the reports in Looker Studio, or download the prebuilt reports and save them in CSV format.
Pricing
You might incur charges for viewing prebuilt reports in BigQuery. The charges depend on the volume of report data that is streamed, stored, and queried in BigQuery. For more information, see Pricing.
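To gauge the cost of a report query before you run it, you can use BigQuery's dry-run mode, which reports the number of bytes the query would process without executing it; this is a minimal sketch, assuming the default dataset name:

```bash
# Estimate the bytes processed (and therefore the on-demand query cost)
# without actually running the query.
bq query --dry_run --use_legacy_sql=false \
  'SELECT * FROM `BackupandDR_reports.Job Details`'
```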