# Step 6: Execute deployment

Last updated (UTC): 2025-08-18.

Key points:

- This step covers executing the deployment of Cortex Framework Data Foundation, after the `config.json` file has been configured.
- The build runs from the cloned repository: `gcloud builds submit` starts the build, and progress can be monitored from the terminal or the Cloud Build console.
- If you have a Cloud Composer (Airflow) instance, generated integration or CDC files can be moved to the Cloud Composer DAG bucket with `gcloud storage` commands.
- Enterprise-specific customizations are supported and marked with `## CORTEX-CUSTOMER` tags in the code; tag all of your own changes clearly in forked or cloned repositories.
- Looker Blocks and Dashboards provide reusable data models for common analytical patterns and data sources.

This page describes the sixth step to deploy Cortex Framework Data Foundation, the core of Cortex Framework.
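Because the build consumes the `config.json` prepared in Step 5, a quick syntax check before submitting can save a failed build. A minimal, self-contained sketch (the file path and sample keys below are illustrative stand-ins, not the repository's full schema):

```shell
# Illustrative sketch: validate config.json syntax before submitting the build.
# A stand-in file is created here; in a real checkout, point the check at
# config/config.json instead. The keys shown are illustrative only.
cat > /tmp/config.json <<'EOF'
{"projectIdSource": "my-source-project", "projectIdTarget": "my-target-project"}
EOF

# json.tool exits nonzero on malformed JSON, so a broken file is caught
# before any build is submitted.
python3 -m json.tool /tmp/config.json > /dev/null && echo "config.json OK"
```

If the file is malformed, the check prints a parse error and the final `echo` never runs.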
| **Note:** The steps on this page are designed specifically for deploying Cortex Framework Data Foundation from the [official GitHub repository](https://github.com/GoogleCloudPlatform/cortex-data-foundation).

Build process
-------------

After configuring the [`config.json`](https://github.com/GoogleCloudPlatform/cortex-data-foundation/blob/main/config/config.json)
file as described in [Step 5: Configure deployment](/cortex/docs/deployment-step-five),
follow these steps to run the build:

1. Change into the cloned repository:

       cd cortex-data-foundation

2. Run the build command with the target log bucket:

       gcloud builds submit \
           --substitutions=_GCS_BUCKET=LOGS_BUCKET,_BUILD_ACCOUNT='projects/SOURCE_PROJECT/serviceAccounts/SERVICE_ACCOUNT@SOURCE_PROJECT.iam.gserviceaccount.com'

   Replace the following:
   - LOGS_BUCKET: the bucket name for log storage. The Cloud Build service account needs write access to it.
   - SOURCE_PROJECT: the source project ID.
   - SERVICE_ACCOUNT: the service account ID.
3. Follow the main build process by watching the logs in the terminal
   or in the [Cloud Build console](https://console.cloud.google.com/cloud-build/),
   if you have enough permissions. See the following images for reference.

   **Figure 1**. Example of viewing log progress in the terminal.

   **Figure 2**. Example of viewing log progress in the console.
4. 
   Track the child build steps triggered from the Cloud Build
   console or within the logs created by those steps. See the following images
   for reference.

   **Figure 3**. Example of tracking child build steps in the console.

   **Figure 4**. Example of tracking child build steps within the logs.
5. Identify any issues with individual builds and correct any errors. It's
   recommended to paste the generated SQL into BigQuery to identify and
   correct them. Most errors relate to fields that are selected
   but not present in the replicated source; the BigQuery UI helps to
   identify and comment those out.

   **Figure 5**. Example of identifying issues through Cloud Build logs.

Move files to the Cloud Composer (Airflow) DAG bucket
-----------------------------------------------------

If you opted to generate integration or CDC files and have an instance of
Cloud Composer (Airflow), move them into their final bucket
with the following commands:

    gcloud storage cp -r gs://OUTPUT_BUCKET/dags/ gs://COMPOSER_DAG_BUCKET/
    gcloud storage cp -r gs://OUTPUT_BUCKET/data/ gs://COMPOSER_DAG_BUCKET/

Replace the following:

- OUTPUT_BUCKET: the output bucket.
- COMPOSER_DAG_BUCKET: the Cloud Composer (Airflow) DAG bucket.

| **Note:** This step applies only if you have an Airflow instance running. Airflow is commonly used to manage and orchestrate data pipelines.

Customize and prepare for upgrade
---------------------------------

Many enterprise customers have specific customizations of their systems, such as
additional documents in a flow or specific types of a record.
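These customization points are marked directly in the code. As a self-contained sketch of how such markers can be located (the directory layout and file contents below are fabricated for illustration; in a real clone you would run the `grep` against the repository root):

```shell
# Simulate a checkout that contains one tagged file.
workdir=$(mktemp -d)
mkdir -p "$workdir/src/sql"
cat > "$workdir/src/sql/orders.sql" <<'EOF'
-- ## CORTEX-CUSTOMER: adjust the selected fields to match your source system.
SELECT order_id FROM demo.orders
EOF

# -R recurses and -n prints line numbers, so each customization point
# can be jumped to directly in an editor.
grep -Rn "CORTEX-CUSTOMER" "$workdir"
```

Run against a real clone, the same command lists every `## CORTEX-CUSTOMER` comment with its file and line number.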
Such customizations are specific to each customer and are configured by
functional analysts as business needs arise.

Cortex uses `## CORTEX-CUSTOMER` tags in the code to mark places where such
customizations are likely required. Run `grep -R CORTEX-CUSTOMER` to
find all `## CORTEX-CUSTOMER` comments that you should customize.

In addition to the `CORTEX-CUSTOMER` tags, you might need to further
customize the following, committing all of these changes with a clear
tag in the code to your own forked or cloned repository:

- Adding business rules.
- Adding other datasets and joining them with existing views or tables.
- Reusing the provided templates to call additional APIs.
- Modifying deployment scripts.
- Applying further data mesh concepts.
- Adapting some tables or landed APIs to include additional fields not included in the standard.

Adopt a CI/CD pipeline that works for your organization to keep
these enhancements tested and your overall solution in a reliable
and robust state. A pipeline can reuse the `cloudbuild.yaml`
scripts to trigger end-to-end deployment periodically, or based on
Git operations, depending on your repository of choice, by
[automating builds](/build/docs/automating-builds/create-manage-triggers).

Use the `config.json` file to define different sets of projects and
datasets for development, staging, and production environments. Use
automated testing with your own sample data to ensure the models
always produce what you expect.

Tagging your own changes visibly in your fork or clone of a
repository, together with some deployment and testing automation, helps you
[perform upgrades](/cortex/docs/upgrade-recommendations).

Support
-------

If you encounter any issues or have feature requests related to these models
or deployers, create an issue in the [Cortex Framework Data Foundation](https://github.com/GoogleCloudPlatform/cortex-data-foundation) repository.
To help gather the necessary information, run `support.sh` from the cloned
directory. This script walks you through a series of troubleshooting steps.

For any Cortex Framework requests or issues, go to the
[support](/cortex/docs/overview#support) section on the overview page.

Looker Blocks and Dashboards
----------------------------

Take advantage of the available Looker Blocks and Dashboards. These
are essentially reusable data models for common analytical patterns and data
sources for Cortex Framework. For more information, see
[Looker Blocks and Dashboards overview](/cortex/docs/looker-block-overview).