Summarize text content using generative AI

Summarize text content using a publisher text model.

Code sample

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI C# API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Cloud.AIPlatform.V1;
using System;
using System.Collections.Generic;
using System.Linq;
using Value = Google.Protobuf.WellKnownTypes.Value;

// Text Summarization with a Large Language Model
public class PredictTextSummarizationSample
{
    public string PredictTextSummarization(
        string projectId = "your-project-id",
        string locationId = "us-central1",
        string publisher = "google",
        string model = "text-bison@001")
    {
        // Initialize client that will be used to send requests.
        // This client only needs to be created once,
        // and can be reused for multiple requests.
        var client = new PredictionServiceClientBuilder
        {
            Endpoint = $"{locationId}-aiplatform.googleapis.com"
        }.Build();

        // Configure the parent resource.
        var endpoint = EndpointName.FromProjectLocationPublisherModel(projectId, locationId, publisher, model);

        // Initialize request argument(s).
        var content = @"
Provide a summary with about two sentences for the following article:
The efficient-market hypothesis (EMH) is a hypothesis in financial
economics that states that asset prices reflect all available
information. A direct implication is that it is impossible to
""beat the market"" consistently on a risk-adjusted basis since market
prices should only react to new information. Because the EMH is
formulated in terms of risk adjustment, it only makes testable
predictions when coupled with a particular model of risk. As a
result, research in financial economics since at least the 1990s has
focused on market anomalies, that is, deviations from specific
models of risk. The idea that financial market returns are difficult
to predict goes back to Bachelier, Mandelbrot, and Samuelson, but
is closely associated with Eugene Fama, in part due to his
influential 1970 review of the theoretical and empirical research.
The EMH provides the basic logic for modern risk-based theories of
asset prices, and frameworks such as consumption-based asset pricing
and intermediary asset pricing can be thought of as the combination
of a model of risk with the EMH. Many decades of empirical research
on return predictability has found mixed evidence. Research in the
1950s and 1960s often found a lack of predictability (e.g. Ball and
Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the
1980s-2000s saw an explosion of discovered return predictors (e.g.
Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988;
Jegadeesh and Titman 1993). Since the 2010s, studies have often
found that return predictability has become more elusive, as
predictability fails to work out-of-sample (Goyal and Welch 2008),
or has been weakened by advances in trading technology and investor
learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff
2016; Martineau 2021).
Summary:";


        var instances = new List<Value>
        {
            Value.ForStruct(new()
            {
                Fields =
                {
                    ["content"] = Value.ForString(content),
                }
            })
        };

        var parameters = Value.ForStruct(new()
        {
            Fields =
            {
                { "temperature", new Value { NumberValue = 0.2 } },
                { "maxOutputTokens", new Value { NumberValue = 256 } },
                { "topP", new Value { NumberValue = 0.95 } },
                { "topK", new Value { NumberValue = 40 } }
            }
        });

        // Make the request.
        var response = client.Predict(endpoint, instances, parameters);

        // Parse and return the content
        var responseContent = response.Predictions.First().StructValue.Fields["content"].StringValue;
        Console.WriteLine($"Content: {responseContent}");
        return responseContent;
    }
}
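The `temperature`, `maxOutputTokens`, `topP`, and `topK` values passed above control how the model decodes its output. As an illustration only (the service's actual decoder is not public), a toy Python sketch of how top-k and top-p (nucleus) filtering narrow the candidate token set before sampling:

```python
# Illustrative only: a toy model of top-k / top-p (nucleus) filtering,
# showing the idea behind the topK and topP request parameters.

def filter_candidates(probs, top_k, top_p):
    """Keep the top_k most likely tokens, then trim to the smallest
    set whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# A hypothetical next-token distribution.
probs = {"market": 0.5, "price": 0.3, "risk": 0.15, "model": 0.05}
print(filter_candidates(probs, top_k=3, top_p=0.8))  # ['market', 'price']
```

Lower `topK`/`topP` values (and lower `temperature`) make output more deterministic; higher values allow more varied completions.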

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.aiplatform.v1.EndpointName;
import com.google.cloud.aiplatform.v1.PredictResponse;
import com.google.cloud.aiplatform.v1.PredictionServiceClient;
import com.google.cloud.aiplatform.v1.PredictionServiceSettings;
import com.google.protobuf.Value;
import com.google.protobuf.util.JsonFormat;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Text Summarization with a Large Language Model
public class PredictTextSummarizationSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // Designing prompts for text summarization with supported large language models:
    // https://cloud.google.com/vertex-ai/docs/generative-ai/text/summarization-prompts
    String instance =
        "{ \"content\": \"Background: There is evidence that there have been significant changes \n"
            + "in Amazon rainforest vegetation over the last 21,000 years through the Last \n"
            + "Glacial Maximum (LGM) and subsequent deglaciation. Analyses of sediment \n"
            + "deposits from Amazon basin paleo lakes and from the Amazon Fan indicate that \n"
            + "rainfall in the basin during the LGM was lower than for the present, and this \n"
            + "was almost certainly associated with reduced moist tropical vegetation cover \n"
            + "in the basin. There is debate, however, over how extensive this reduction \n"
            + "was. Some scientists argue that the rainforest was reduced to small, isolated \n"
            + "refugia separated by open forest and grassland; other scientists argue that \n"
            + "the rainforest remained largely intact but extended less far to the north, \n"
            + "south, and east than is seen today. This debate has proved difficult to \n"
            + "resolve because the practical limitations of working in the rainforest mean \n"
            + "that data sampling is biased away from the center of the Amazon basin, and \n"
            + "both explanations are reasonably well supported by the available data.\n"
            + "\n"
            + "Q: What does LGM stand for?\n"
            + "A: Last Glacial Maximum.\n"
            + "\n"
            + "Q: What did the analysis from the sediment deposits indicate?\n"
            + "A: Rainfall in the basin during the LGM was lower than for the present.\n"
            + "\n"
            + "Q: What are some of the scientists' arguments?\n"
            + "A: The rainforest was reduced to small, isolated refugia separated by open forest"
            + " and grassland.\n"
            + "\n"
            + "Q: There have been major changes in Amazon rainforest vegetation over the last how"
            + " many years?\n"
            + "A: 21,000.\n"
            + "\n"
            + "Q: What caused changes in the Amazon rainforest vegetation?\n"
            + "A: The Last Glacial Maximum (LGM) and subsequent deglaciation\n"
            + "\n"
            + "Q: What has been analyzed to compare Amazon rainfall in the past and present?\n"
            + "A: Sediment deposits.\n"
            + "\n"
            + "Q: What has the lower rainfall in the Amazon during the LGM been attributed to?\n"
            + "A:\"}";
    String parameters =
        "{\n"
            + "  \"temperature\": 0,\n"
            + "  \"maxOutputTokens\": 32,\n"
            + "  \"topP\": 0,\n"
            + "  \"topK\": 1\n"
            + "}";
    String project = "YOUR_PROJECT_ID";
    String location = "us-central1";
    String publisher = "google";
    String model = "text-bison@001";

    predictTextSummarization(instance, parameters, project, location, publisher, model);
  }

  // Get summarization from a supported text model
  public static void predictTextSummarization(
      String instance,
      String parameters,
      String project,
      String location,
      String publisher,
      String model)
      throws IOException {
    String endpoint = String.format("%s-aiplatform.googleapis.com:443", location);
    PredictionServiceSettings predictionServiceSettings =
        PredictionServiceSettings.newBuilder()
            .setEndpoint(endpoint)
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (PredictionServiceClient predictionServiceClient =
        PredictionServiceClient.create(predictionServiceSettings)) {
      final EndpointName endpointName =
          EndpointName.ofProjectLocationPublisherModelName(project, location, publisher, model);

      // Use Value.Builder to convert instance to a dynamically typed value that can be
      // processed by the service.
      Value.Builder instanceValue = Value.newBuilder();
      JsonFormat.parser().merge(instance, instanceValue);
      List<Value> instances = new ArrayList<>();
      instances.add(instanceValue.build());

      // Use Value.Builder to convert parameter to a dynamically typed value that can be
      // processed by the service.
      Value.Builder parameterValueBuilder = Value.newBuilder();
      JsonFormat.parser().merge(parameters, parameterValueBuilder);
      Value parameterValue = parameterValueBuilder.build();

      PredictResponse predictResponse =
          predictionServiceClient.predict(endpointName, instances, parameterValue);
      System.out.println("Predict Response");
      System.out.println(predictResponse);
    }
  }
}
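Whatever the client library, the request targets the same fully qualified publisher-model resource name, which the `EndpointName` helpers in the C# and Java samples build from their components. A minimal sketch of that composition (the project value here is a placeholder):

```python
# Compose the publisher-model resource name used by all the samples.
# The project ID below is a placeholder.
def publisher_model_name(project, location, publisher, model):
    return (f"projects/{project}/locations/{location}"
            f"/publishers/{publisher}/models/{model}")

print(publisher_model_name("my-project", "us-central1", "google", "text-bison@001"))
# projects/my-project/locations/us-central1/publishers/google/models/text-bison@001
```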

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

const aiplatform = require('@google-cloud/aiplatform');

// Imports the Google Cloud Prediction service client
const {PredictionServiceClient} = aiplatform.v1;

// Import the helper module for converting arbitrary protobuf.Value objects.
const {helpers} = aiplatform;

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

const publisher = 'google';
const model = 'text-bison@001';

// Instantiates a client
const predictionServiceClient = new PredictionServiceClient(clientOptions);

async function callPredict() {
  // Configure the parent resource
  const endpoint = `projects/${project}/locations/${location}/publishers/${publisher}/models/${model}`;

  const instance = {
    content: `Provide a summary with about two sentences for the following article:
  The efficient-market hypothesis (EMH) is a hypothesis in financial \
  economics that states that asset prices reflect all available \
  information. A direct implication is that it is impossible to \
  "beat the market" consistently on a risk-adjusted basis since market \
  prices should only react to new information. Because the EMH is \
  formulated in terms of risk adjustment, it only makes testable \
  predictions when coupled with a particular model of risk. As a \
  result, research in financial economics since at least the 1990s has \
  focused on market anomalies, that is, deviations from specific \
  models of risk. The idea that financial market returns are difficult \
  to predict goes back to Bachelier, Mandelbrot, and Samuelson, but \
  is closely associated with Eugene Fama, in part due to his \
  influential 1970 review of the theoretical and empirical research. \
  The EMH provides the basic logic for modern risk-based theories of \
  asset prices, and frameworks such as consumption-based asset pricing \
  and intermediary asset pricing can be thought of as the combination \
  of a model of risk with the EMH. Many decades of empirical research \
  on return predictability has found mixed evidence. Research in the \
  1950s and 1960s often found a lack of predictability (e.g. Ball and \
  Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the \
  1980s-2000s saw an explosion of discovered return predictors (e.g. \
  Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; \
  Jegadeesh and Titman 1993). Since the 2010s, studies have often \
  found that return predictability has become more elusive, as \
  predictability fails to work out-of-sample (Goyal and Welch 2008), \
  or has been weakened by advances in trading technology and investor \
  learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff \
  2016; Martineau 2021).
  Summary: 
  `,
  };
  const instanceValue = helpers.toValue(instance);
  const instances = [instanceValue];

  const parameter = {
    temperature: 0.2,
    maxOutputTokens: 256,
    topP: 0.95,
    topK: 40,
  };
  const parameters = helpers.toValue(parameter);

  const request = {
    endpoint,
    instances,
    parameters,
  };

  // Predict request
  const [response] = await predictionServiceClient.predict(request);
  console.log('Get text summarization response');
  const predictions = response.predictions;
  console.log('\tPredictions :');
  for (const prediction of predictions) {
    console.log(`\t\tPrediction : ${JSON.stringify(prediction)}`);
  }
}

callPredict();

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import vertexai
from vertexai.language_models import TextGenerationModel

# TODO(developer): Update PROJECT_ID and location before running the sample.
vertexai.init(project=PROJECT_ID, location="us-central1")

parameters = {
    "temperature": 0,
    "max_output_tokens": 256,
    "top_p": 0.95,
    "top_k": 40,
}

model = TextGenerationModel.from_pretrained("text-bison@002")
response = model.predict(
    """Provide a summary with about two sentences for the following article:
    The efficient-market hypothesis (EMH) is a hypothesis in financial \
    economics that states that asset prices reflect all available \
    information. A direct implication is that it is impossible to \
    "beat the market" consistently on a risk-adjusted basis since market \
    prices should only react to new information. Because the EMH is \
    formulated in terms of risk adjustment, it only makes testable \
    predictions when coupled with a particular model of risk. As a \
    result, research in financial economics since at least the 1990s has \
    focused on market anomalies, that is, deviations from specific \
    models of risk. The idea that financial market returns are difficult \
    to predict goes back to Bachelier, Mandelbrot, and Samuelson, but \
    is closely associated with Eugene Fama, in part due to his \
    influential 1970 review of the theoretical and empirical research. \
    The EMH provides the basic logic for modern risk-based theories of \
    asset prices, and frameworks such as consumption-based asset pricing \
    and intermediary asset pricing can be thought of as the combination \
    of a model of risk with the EMH. Many decades of empirical research \
    on return predictability has found mixed evidence. Research in the \
    1950s and 1960s often found a lack of predictability (e.g. Ball and \
    Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the \
    1980s-2000s saw an explosion of discovered return predictors (e.g. \
    Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; \
    Jegadeesh and Titman 1993). Since the 2010s, studies have often \
    found that return predictability has become more elusive, as \
    predictability fails to work out-of-sample (Goyal and Welch 2008), \
    or has been weakened by advances in trading technology and investor \
    learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff \
    2016; Martineau 2021).
    Summary:""",
    **parameters,
)
print(f"Response from Model: {response.text}")
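Each of the client libraries above ultimately serializes the same request body for the model's predict call. A sketch of that JSON payload, assuming the `instances`/`parameters` split shown in the samples (the prompt text is truncated here as a placeholder):

```python
import json

# Assemble the JSON body that the predict call sends, mirroring the
# instances/parameters split used by every sample above.
body = {
    "instances": [{"content": "Provide a summary ... Summary:"}],
    "parameters": {
        "temperature": 0.2,
        "maxOutputTokens": 256,
        "topP": 0.95,
        "topK": 40,
    },
}
print(json.dumps(body, indent=2))
```

Inspecting the body this way can help when debugging, since a malformed `instances` list or a misspelled parameter name is a common source of request errors.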

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.