Analyze multimodal data in Python with BigQuery DataFrames
This tutorial shows you how to analyze multimodal data in a Python notebook by using BigQuery DataFrames classes and methods.
This tutorial uses the product catalog from the public Cymbal pet store dataset.
To upload a notebook already populated with the tasks covered in this tutorial, see BigFrames Multimodal DataFrame.
Objectives
- Create multimodal DataFrames.
- Combine structured and unstructured data in a DataFrame.
- Transform images.
- Generate text and embeddings based on image data.
- Chunk PDFs for further analysis.
Costs
In this document, you use the following billable components of Google Cloud:
- BigQuery: you incur costs for the data that you process in BigQuery.
- BigQuery Python UDFs: you incur costs for using BigQuery DataFrames image transformation and chunk PDF methods.
- Cloud Storage: you incur costs for the objects stored in Cloud Storage.
- Vertex AI: you incur costs for calls to Vertex AI models.
To generate a cost estimate based on your projected usage, use the pricing calculator. For more information, see the pricing pages for BigQuery, Cloud Storage, and Vertex AI.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the BigQuery, BigQuery Connection, Cloud Storage, and Vertex AI APIs.
Required roles
To get the permissions that you need to complete this tutorial, ask your administrator to grant you the following IAM roles:
- Create a connection: BigQuery Connection Admin (roles/bigquery.connectionAdmin)
- Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin)
- Create a Cloud Storage bucket: Storage Admin (roles/storage.admin)
- Run BigQuery jobs: BigQuery User (roles/bigquery.user)
- Create and call Python UDFs: BigQuery Data Editor (roles/bigquery.dataEditor)
- Create URLs that let you read and modify Cloud Storage objects: BigQuery ObjectRef Admin (roles/bigquery.objectRefAdmin)
- Use notebooks:
  - BigQuery Read Session User (roles/bigquery.readSessionUser)
  - Notebook Runtime User (roles/aiplatform.notebookRuntimeUser)
  - Code Creator (roles/dataform.codeCreator)
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Set up
In this section, you create the Cloud Storage bucket, connection, and notebook used in this tutorial.
Create a bucket
Create a Cloud Storage bucket for storing transformed objects:
- In the Google Cloud console, go to the Buckets page.
- Click Create.
- On the Create a bucket page, in the Get started section, enter a globally unique name that meets the bucket name requirements.
- Click Create.
Create a connection
Create a Cloud resource connection and get the connection's service account. BigQuery uses the connection to access objects in Cloud Storage.
- Go to the BigQuery page.
- In the Explorer pane, click Add data. The Add data dialog opens.
- In the Filter By pane, in the Data Source Type section, select Business Applications. Alternatively, in the Search for data sources field, you can enter Vertex AI.
- In the Featured data sources section, click Vertex AI.
- Click the Vertex AI Models: BigQuery Federation solution card.
- In the Connection type list, select Vertex AI remote models, remote functions and BigLake (Cloud Resource).
- In the Connection ID field, type bigframes-default-connection.
- Click Create connection.
- Click Go to connection.
- In the Connection info pane, copy the service account ID for use in a later step.
Grant permissions to the connection's service account
Grant the connection's service account the roles that it needs to access Cloud Storage and Vertex AI. You must grant these roles in the same project that you created or selected in the Before you begin section.
To grant the roles, follow these steps:
- Go to the IAM & Admin page.
- Click Grant access.
- In the New principals field, enter the service account ID that you copied earlier.
- In the Select a role field, choose Cloud Storage, and then select Storage Object User.
- Click Add another role.
- In the Select a role field, select Vertex AI, and then select Vertex AI User.
- Click Save.
Create a notebook
Create a notebook where you can run Python code:
- Go to the BigQuery page.
- In the tab bar of the editor pane, click the drop-down arrow next to SQL query, and then click Notebook.
- In the Start with a template pane, click Close.
- Click Connect > Connect to a runtime.
- If you have an existing runtime, accept the default settings and click Connect. If you don't have an existing runtime, select Create new Runtime, and then click Connect.

It might take several minutes for the runtime to get set up.
Create a multimodal DataFrame
Create a multimodal DataFrame that integrates structured and unstructured data by using the from_glob_path method of the Session class:
- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The final call to df_image returns the images that have been added to the DataFrame. Alternatively, you could call the .display method.
Combine structured and unstructured data in the DataFrame
Combine text and image data in the multimodal DataFrame:
- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The code returns the DataFrame data.
- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The code returns images from the DataFrame where the author column value is alice.
Perform image transformations
Transform image data by using image transformation methods of the Series.BlobAccessor class. The transformed images are written to Cloud Storage.
Transform images:
- In the notebook, create a code cell and copy the following code into it:
- Update all references to {dst_bucket} to refer to the bucket that you created, in the format gs://mybucket.
- Click Run.

The code returns the original images as well as all of their transformations.
Generate text
Generate text from multimodal data by using the predict method of the GeminiTextGenerator class:
- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The code returns the first two images in df_image, along with text generated in response to the question what item is it? for both images.

- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The code returns the first two images in df_image, with text generated in response to the question what item is it? for the first image, and text generated in response to the question what color is the picture? for the second image.
Generate embeddings
Generate embeddings for multimodal data by using the predict method of the MultimodalEmbeddingGenerator class:
- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The code returns the embeddings generated by a call to an embedding model.
Chunk PDFs
Chunk PDF objects by using the pdf_chunk method of the Series.BlobAccessor class:
- In the notebook, create a code cell and copy the following code into it:
- Click Run.

The code returns the chunked PDF data.
Clean up
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.