Use the Colab Enterprise Data Science Agent with BigQuery

The Data Science Agent (DSA) for Colab Enterprise and BigQuery lets you automate exploratory data analysis, perform machine learning tasks, and deliver insights, all within a Colab Enterprise notebook.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Vertex AI, Dataform, and Compute Engine APIs.

    Enable the APIs

If you're new to Colab Enterprise in BigQuery, see the setup steps on the Create notebooks page.

Limitations

  • The Data Science Agent supports the following data sources:
    • CSV files
    • BigQuery tables
  • The code produced by the Data Science Agent runs only in your notebook's runtime.
  • The Data Science Agent isn't supported in projects that have enabled VPC Service Controls.
  • The first time you run the Data Science Agent in a project, initial setup can add approximately five to ten minutes of latency. This delay occurs only once per project.

When to use the Data Science Agent

The Data Science Agent helps you with tasks ranging from exploratory data analysis to generating machine learning predictions and forecasts. You can use the DSA to:

  • Plan generation: Generate and modify a plan to complete a particular task.
  • Data exploration: Explore a dataset to understand its structure, identify potential issues like missing values and outliers, and examine the distribution of key variables.
  • Data cleaning: Clean your data. For example, remove data points that are outliers.
  • Data wrangling: Convert categorical features into numerical representations using techniques like one-hot encoding or label encoding. Create new features for analysis.
  • Data analysis: Analyze the relationships between different variables. Calculate correlations between numerical features and explore distributions of categorical features. Look for patterns and trends in the data.
  • Data visualization: Create visualizations such as histograms, box plots, scatter plots, and bar charts that represent the distributions of individual variables and the relationships between them.
  • Feature engineering: Engineer new features from a cleaned dataset.
  • Data splitting: Split an engineered dataset into training, validation, and testing datasets.
  • Model training: Train a model by using the training data (X_train, y_train).
  • Model optimization: Optimize a model by using the validation set. Explore alternative models like DecisionTreeRegressor and RandomForestRegressor and compare their performance.
  • Model evaluation: Evaluate the best-performing model on the test dataset (X_test_imputed, y_test). A minimal sketch of the splitting, training, and evaluation steps appears after this list.
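
The agent plans and writes the notebook code for these steps itself. As a rough illustration only, the following is a minimal scikit-learn sketch of the splitting, training, optimization, and evaluation steps; the DataFrame df and the column name target are assumptions for the example, not names the agent uses.

```python
# Rough sketch of the splitting, training, and evaluation steps above.
# Assumes a pandas DataFrame named df with numeric features and a numeric
# target column named "target"; both names are placeholders.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

X = df.drop(columns=["target"])
y = df["target"]

# Split into training (60%), validation (20%), and test (20%) sets.
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42)

# Train candidate models and compare them on the validation set.
candidates = {
    "decision_tree": DecisionTreeRegressor(random_state=42),
    "random_forest": RandomForestRegressor(random_state=42),
}
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = mean_absolute_error(y_val, model.predict(X_val))
    print(f"{name}: validation MAE = {scores[name]:.3f}")

# Evaluate the best-performing model on the held-out test set.
best_name = min(scores, key=scores.get)
test_mae = mean_absolute_error(y_test, candidates[best_name].predict(X_test))
print(f"{best_name}: test MAE = {test_mae:.3f}")
```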

Use the Data Science Agent in BigQuery

The following general steps show you how to use the Data Science Agent in BigQuery.

  1. Create or open a Colab Enterprise notebook.
  2. Upload a CSV file or reference a BigQuery table in your prompt.
  3. Enter a prompt that describes the data analysis you want to perform or the prototype you want to build. For help, see the sample prompts.
  4. Examine the results.

Analyze a CSV file

To analyze a CSV file by using the Data Science Agent in BigQuery, follow these steps.

  1. Go to the BigQuery page.

    Go to BigQuery

  2. On the BigQuery Studio welcome page, under Create new, click Notebook.

    Alternatively, in the tab bar, click the drop-down arrow next to the + icon, and then click Notebook > Empty notebook.

  3. In the toolbar, click the spark Toggle Gemini button to open the chat dialog.

  4. Upload your CSV file.

    1. In the chat dialog, click Add files.

    2. If necessary, authorize your Google Account.

    3. In the action pane, click Upload file.

    4. Browse to the location of the CSV file, and then click Open.

    5. Beside the filename, click the More actions icon, and then choose Add to Gemini.

  5. Enter your prompt in the chat window. For example: Identify trends and anomalies in this file.

  6. Click Send.

    The results, including a data analysis plan generated by the DSA, appear in the chat window.

  7. You can ask the agent to change the plan, or you can run it by clicking Accept & run. As the plan runs, generated code and text appear in the notebook. To stop the agent, click Cancel. For a sense of the kind of code the agent generates for a CSV file, see the sketch that follows these steps.
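
The agent generates the analysis code for you as notebook cells. As a rough idea of the kind of cells a prompt like the one above tends to produce, here is a minimal pandas sketch; the file name data.csv and the plotting choices are assumptions, not agent output.

```python
# Minimal exploratory-analysis sketch for an uploaded CSV file.
# The file name data.csv is a placeholder.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")

df.info()                           # column types and non-null counts
print(df.describe(include="all"))   # summary statistics
print(df.isna().sum())              # missing values per column

# Histograms of the numeric columns help surface outliers and skew.
df.select_dtypes(include="number").hist(bins=30, figsize=(12, 8))
plt.tight_layout()
plt.show()
```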

Analyze a BigQuery table

To analyze a BigQuery table, provide a reference to the table in your prompt.

  1. Go to the BigQuery page.

    Go to BigQuery

  2. On the BigQuery Studio welcome page, under Create new, click Notebook.

    Alternatively, in the tab bar, click the drop-down arrow next to the + icon, and then click Notebook > Empty notebook.

  3. In the toolbar, click the spark Toggle Gemini button to open the chat dialog.

  4. Enter your prompt in the chat window. For example, "Help me perform exploratory data analysis and get insights about the data in this table: project_id:dataset.table."

    Replace the following:

    • project_id: your project ID
    • dataset: the name of the dataset that contains the table you're analyzing
    • table: the name of the table you're analyzing
  5. Click Send.

    The results appear in the chat window.

  6. You can ask the agent to change the plan, or you can run it by clicking Accept & run. As the plan runs, generated code and text appear in the notebook. To stop the agent, click Cancel. For an idea of the code that such a plan typically starts with, see the sketch after these steps.
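
A plan for a table prompt like this one typically begins by loading the referenced table into the notebook. The following is a minimal sketch of that first step using the BigQuery client library for Python; the table reference and row limit are placeholders, not output from the agent.

```python
# Minimal sketch: load a BigQuery table into a pandas DataFrame.
# Replace project_id, dataset, and table with your own values.
from google.cloud import bigquery

client = bigquery.Client(project="project_id")
query = "SELECT * FROM `project_id.dataset.table` LIMIT 1000"
df = client.query(query).to_dataframe()

print(df.head())
```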

Sample prompts

Regardless of the complexity of the prompt that you use, the Data Science Agent generates a plan that you can refine to meet your needs.

The following examples show the types of prompts that you can use with the DSA. A sketch of the kind of code that the first two prompts tend to produce appears after the list.

  • Investigate and fill missing values by using the k-Nearest Neighbors (KNN) machine learning algorithm.
  • Create a plot of salary by experience level. Use the experience_level column to group the salaries, and create a box plot for each group showing the values from the salary_in_usd column.
  • Use the XGBoost algorithm to make a model for determining the class variable of a particular fruit. Split the data into training and testing datasets to generate a model and to determine the model's accuracy. Create a confusion matrix to show the predictions amongst each class, including all predictions that are correct and incorrect.
  • Create a pandas DataFrame for my data. Analyze the data for null values, and then graph the distribution of each column using an appropriate graph type. Use violin plots for measured values and bar plots for categories.
  • Read in the CSV file for the dataset and construct a DataFrame. Run analysis on the DataFrame to determine what needs to be done with values (replace or remove missing values, fix duplicate rows), and determine the distribution of the amount of money invested in USD per city location. Graph the results on a bar graph in descending order as Location versus Avg Amount Invested (USD), graphing only the top 20 results.
  • Forecast target_variable from filename.csv for the next six months.
  • Build and evaluate a classification model on filename.csv for target_variable.
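
As an illustration, the first two prompts in the list tend to produce code along the following lines. This is a hedged sketch, not agent output; it assumes a pandas DataFrame df that already exists and contains the salary_in_usd and experience_level columns named in the second prompt.

```python
# Sketch: KNN imputation of missing numeric values, then a box plot of
# salary_in_usd grouped by experience_level. Column names come from the
# sample prompts above; the DataFrame df is assumed to already exist.
import matplotlib.pyplot as plt
from sklearn.impute import KNNImputer

# Fill missing numeric values by using k-Nearest Neighbors imputation.
numeric_cols = df.select_dtypes(include="number").columns
imputer = KNNImputer(n_neighbors=5)
df[numeric_cols] = imputer.fit_transform(df[numeric_cols])

# Box plot of salary by experience level.
df.boxplot(column="salary_in_usd", by="experience_level")
plt.ylabel("salary_in_usd")
plt.show()
```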

Turn off Gemini in BigQuery

To turn off Gemini in BigQuery for a Google Cloud project, an administrator must turn off the Gemini for Google Cloud API. See Disabling services.

To turn off Gemini in BigQuery for a specific user, an administrator needs to revoke the Gemini for Google Cloud User (roles/cloudaicompanion.user) role for that user. See Revoke a single IAM role.

Pricing

During Preview, you are charged only for running code in the notebook's runtime. For more information, see Colab Enterprise pricing.

Supported regions

To view the supported regions for Colab Enterprise's Data Science Agent, see Locations.