Deploy GKE Inference Gateway


This page describes how to deploy GKE Inference Gateway.

This page is intended for Networking specialists responsible for managing GKE infrastructure, and platform administrators who manage AI workloads.

Before reading this page, ensure that you're familiar with GKE networking concepts and the Kubernetes Gateway API.

GKE Inference Gateway enhances Google Kubernetes Engine (GKE) Gateway to optimize the serving of generative AI workloads on GKE. It provides efficient management and scaling of AI workloads, supports workload-specific performance objectives such as latency, and improves resource utilization, observability, and AI safety.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • Enable the Compute Engine API, the Network Services API, and, if you plan to use Model Armor, the Model Armor API. You can also enable these APIs with the gcloud CLI, as shown in the example after this list.

    Go to Enable access to APIs and follow the instructions.
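
The following gcloud command is a minimal sketch of that alternative. The service names shown are the commonly used ones for these products; verify them for your project before running the command:

gcloud services enable \
    container.googleapis.com \
    compute.googleapis.com \
    networkservices.googleapis.com \
    modelarmor.googleapis.com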

GKE Gateway controller requirements

  • GKE version 1.32.3.
  • Google Cloud CLI version 407.0.0 or later.
  • The Gateway API is supported on VPC-native clusters only.
  • You must enable a proxy-only subnet.
  • Your cluster must have the HttpLoadBalancing add-on enabled.
  • If you are using Istio, you must upgrade Istio to one of the following versions:
    • 1.15.2 or later
    • 1.14.5 or later
    • 1.13.9 or later
  • If you are using Shared VPC, then in the host project, you must grant the Compute Network User role to the GKE service account of the service project, as shown in the example after this list.
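
The following gcloud command is a minimal sketch of that grant. The host project ID and service project number are placeholders, and the service account address assumes the standard GKE service agent naming; verify both for your environment:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
    --role="roles/compute.networkUser"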

Restrictions and limitations

The following restrictions and limitations apply:

  • Multi-cluster Gateways are not supported.
  • GKE Inference Gateway is only supported on the gke-l7-regional-external-managed and gke-l7-rilb GatewayClass resources.
  • Cross-regional internal Application Load Balancers are not supported.

Configure GKE Inference Gateway

To configure GKE Inference Gateway, consider this example. A team runs vLLM and Llama3 models and actively experiments with two distinct LoRA fine-tuned adapters: "food-review" and "cad-fabricator".

The high-level workflow for configuring GKE Inference Gateway is as follows:

  1. Prepare your environment: set up the necessary infrastructure and components.
  2. Create an inference pool: define a pool of model servers using the InferencePool Custom Resource.
  3. Specify model serving objectives: specify model objectives using the InferenceModel Custom Resource.
  4. Create the Gateway: expose the inference service using the Gateway API.
  5. Create the HTTPRoute: define how HTTP traffic is routed to the inference service.
  6. Send inference requests: make requests to the deployed model.

Prepare your environment

  1. Install Helm.

  2. Create a GKE cluster:
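
    For example, the following gcloud command is a minimal sketch that creates an Autopilot cluster. The cluster name, region, and project ID are placeholders, and the resulting cluster must meet the version listed in GKE Gateway controller requirements:

    gcloud container clusters create-auto CLUSTER_NAME \
        --location=REGION \
        --project=PROJECT_ID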

  3. To install the InferencePool and InferenceModel Custom Resource Definition (CRDs) in your GKE cluster, run the following command:

    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/download/VERSION/manifests.yaml
    

    Replace VERSION with the version of the CRDs you want to install (for example, v0.3.0).
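
    Optionally, verify that the CRDs are installed. The CRD names below are inferred from the inference.networking.x-k8s.io API group used later on this page; adjust them if your release differs:

    kubectl get crd inferencepools.inference.networking.x-k8s.io inferencemodels.inference.networking.x-k8s.io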

  4. If you are using GKE version earlier than v1.32.2-gke.1182001 and you want to use Model Armor with GKE Inference Gateway, you must install the traffic and routing extension CRDs:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-gateway-api/refs/heads/main/config/crd/networking.gke.io_gcptrafficextensions.yaml
    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-gateway-api/refs/heads/main/config/crd/networking.gke.io_gcproutingextensions.yaml
    
  5. To set up authorization to scrape metrics, create the inference-gateway-sa-metrics-reader-secret secret:

    kubectl apply -f - <<EOF
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: inference-gateway-metrics-reader
    rules:
    - nonResourceURLs:
      - /metrics
      verbs:
      - get
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: inference-gateway-sa-metrics-reader
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: inference-gateway-sa-metrics-reader-role-binding
      namespace: default
    subjects:
    - kind: ServiceAccount
      name: inference-gateway-sa-metrics-reader
      namespace: default
    roleRef:
      kind: ClusterRole
      name: inference-gateway-metrics-reader
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: inference-gateway-sa-metrics-reader-secret
      namespace: default
      annotations:
        kubernetes.io/service-account.name: inference-gateway-sa-metrics-reader
    type: kubernetes.io/service-account-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: inference-gateway-sa-metrics-reader-secret-read
    rules:
    - resources:
      - secrets
      apiGroups: [""]
      verbs: ["get", "list", "watch"]
      resourceNames: ["inference-gateway-sa-metrics-reader-secret"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gmp-system:collector:inference-gateway-sa-metrics-reader-secret-read
      namespace: default
    roleRef:
      name: inference-gateway-sa-metrics-reader-secret-read
      kind: ClusterRole
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - name: collector
      namespace: gmp-system
      kind: ServiceAccount
    EOF
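
    Optionally, verify that the token Secret was created and populated; the Secret name matches the manifest above:

    kubectl get secret inference-gateway-sa-metrics-reader-secret -n default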
    

Create a model server and model deployment

This section shows how to deploy a model server and model. The example uses a vLLM model server with a Llama3 model. The deployment is labeled as app:vllm-llama3-8b-instruct. This deployment also uses two LoRA adapters named food-review and cad-fabricator from Hugging Face.

You can adapt this example with your own model server container and model, serving port, and deployment name. You can also configure LoRA adapters in the deployment, or deploy the base model. The following steps describe how to create the necessary Kubernetes resources.

  1. Create a Kubernetes Secret to store your Hugging Face token. This token is used to access the LoRA adapters:

    kubectl create secret generic hf-token --from-literal=token=HF_TOKEN
    

    Replace HF_TOKEN with your Hugging Face token.

  2. To deploy on a nvidia-h100-80gb accelerator type, save the following manifest as vllm-llama3-8b-instruct.yaml. This manifest defines a Kubernetes Deployment with your model and model server:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vllm-llama3-8b-instruct
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: vllm-llama3-8b-instruct
      template:
        metadata:
          labels:
            app: vllm-llama3-8b-instruct
        spec:
          containers:
            - name: vllm
              image: "vllm/vllm-openai:latest"
              imagePullPolicy: Always
              command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
              args:
              - "--model"
              - "meta-llama/Llama-3.1-8B-Instruct"
              - "--tensor-parallel-size"
              - "1"
              - "--port"
              - "8000"
              - "--enable-lora"
              - "--max-loras"
              - "2"
              - "--max-cpu-loras"
              - "12"
              env:
                - name: PORT
                  value: "8000"
                - name: HUGGING_FACE_HUB_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: hf-token
                      key: token
                - name: VLLM_ALLOW_RUNTIME_LORA_UPDATING
                  value: "true"
              ports:
                - containerPort: 8000
                  name: http
                  protocol: TCP
              lifecycle:
                preStop:
                  # vLLM stops accepting connections when it receives SIGTERM, so we need to sleep
                  # to give upstream gateways a chance to take us out of rotation. The time we wait
                  # is dependent on the time it takes for all upstreams to completely remove us from
                  # rotation. Older or simpler load balancers might take upwards of 30s, but we expect
                  # our deployment to run behind a modern gateway like Envoy which is designed to
                  # probe for readiness aggressively.
                  sleep:
                    # Upstream gateway probers for health should be set on a low period, such as 5s,
                    # and the shorter we can tighten that bound the faster that we release
                    # accelerators during controlled shutdowns. However, we should expect variance,
                    # as load balancers may have internal delays, and we don't want to drop requests
                    # normally, so we're often aiming to set this value to a p99 propagation latency
                    # of readiness -> load balancer taking backend out of rotation, not the average.
                    #
                    # This value is generally stable and must often be experimentally determined on
                    # for a given load balancer and health check period. We set the value here to
                    # the highest value we observe on a supported load balancer, and we recommend
                    # tuning this value down and verifying no requests are dropped.
                    #
                    # If this value is updated, be sure to update terminationGracePeriodSeconds.
                    #
                    seconds: 30
                  #
                  # IMPORTANT: preStop.sleep is beta as of Kubernetes 1.30 - for older versions
                  # replace with this exec action.
                  #exec:
                  #  command:
                  #  - /usr/bin/sleep
                  #  - 30
              livenessProbe:
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                # vLLM's health check is simple, so we can more aggressively probe it.  Liveness
                # check endpoints should always be suitable for aggressive probing.
                periodSeconds: 1
                successThreshold: 1
                # vLLM has a very simple health implementation, which means that any failure is
                # likely significant. However, any liveness triggered restart requires the very
                # large core model to be reloaded, and so we should bias towards ensuring the
                # server is definitely unhealthy vs immediately restarting. Use 5 attempts as
                # evidence of a serious problem.
                failureThreshold: 5
                timeoutSeconds: 1
              readinessProbe:
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                # vLLM's health check is simple, so we can more aggressively probe it.  Readiness
                # check endpoints should always be suitable for aggressive probing, but may be
                # slightly more expensive than liveness probes.
                periodSeconds: 1
                successThreshold: 1
                # vLLM has a very simple health implementation, which means that any failure is
                # likely significant,
                failureThreshold: 1
                timeoutSeconds: 1
              # We set a startup probe so that we don't begin directing traffic or checking
              # liveness to this instance until the model is loaded.
              startupProbe:
                # Failure threshold is when we believe startup will not happen at all, and is set
                # to the maximum possible time we believe loading a model will take. In our
                # default configuration we are downloading a model from HuggingFace, which may
                # take a long time, then the model must load into the accelerator. We choose
                # 10 minutes as a reasonable maximum startup time before giving up and attempting
                # to restart the pod.
                #
                # IMPORTANT: If the core model takes more than 10 minutes to load, pods will crash
                # loop forever. Be sure to set this appropriately.
                failureThreshold: 600
                # Set delay to start low so that if the base model changes to something smaller
                # or an optimization is deployed, we don't wait unnecessarily.
                initialDelaySeconds: 2
                # As a startup probe, this stops running and so we can more aggressively probe
                # even a moderately complex startup - this is a very important workload.
                periodSeconds: 1
                httpGet:
                  # vLLM does not start the OpenAI server (and hence make /health available)
                  # until models are loaded. This may not be true for all model servers.
                  path: /health
                  port: http
                  scheme: HTTP
    
              resources:
                limits:
                  nvidia.com/gpu: 1
                requests:
                  nvidia.com/gpu: 1
              volumeMounts:
                - mountPath: /data
                  name: data
                - mountPath: /dev/shm
                  name: shm
                - name: adapters
                  mountPath: "/adapters"
          initContainers:
            - name: lora-adapter-syncer
              tty: true
              stdin: true
              image: us-central1-docker.pkg.dev/k8s-staging-images/gateway-api-inference-extension/lora-syncer:main
              restartPolicy: Always
              imagePullPolicy: Always
              env:
                - name: DYNAMIC_LORA_ROLLOUT_CONFIG
                  value: "/config/configmap.yaml"
              volumeMounts: # DO NOT USE subPath, dynamic configmap updates don't work on subPaths
              - name: config-volume
                mountPath:  /config
          restartPolicy: Always
    
          # vLLM allows VLLM_PORT to be specified as an environment variable, but a user might
          # create a 'vllm' service in their namespace. That auto-injects VLLM_PORT in docker
          # compatible form as `tcp://<IP>:<PORT>` instead of the numeric value vLLM accepts
          # causing CrashLoopBackoff. Set service environment injection off by default.
          enableServiceLinks: false
    
          # Generally, the termination grace period needs to last longer than the slowest request
          # we expect to serve plus any extra time spent waiting for load balancers to take the
          # model server out of rotation.
          #
          # An easy starting point is the p99 or max request latency measured for your workload,
          # although LLM request latencies vary significantly if clients send longer inputs or
          # trigger longer outputs. Since steady state p99 will be higher than the latency
          # to drain a server, you may wish to slightly lower this value, either experimentally or
          # via the calculation below.
          #
          # For most models you can derive an upper bound for the maximum drain latency as
          # follows:
          #
          #   1. Identify the maximum context length the model was trained on, or the maximum
          #      allowed length of output tokens configured on vLLM (llama2-7b was trained to
          #      4k context length, while llama3-8b was trained to 128k).
          #   2. Output tokens are the more compute intensive to calculate and the accelerator
          #      will have a maximum concurrency (batch size) - the time per output token at
          #      maximum batch with no prompt tokens being processed is the slowest an output
          #      token can be generated (for this model it would be about 100ms TPOT at a max
          #      batch size around 50)
          #   3. Calculate the worst case request duration if a request starts immediately
          #      before the server stops accepting new connections - generally when it receives
          #      SIGTERM (for this model that is about 4096 / 10 ~ 40s)
          #   4. Any requests still processing prompt tokens will delay when those output tokens
          #      start; prompt token generation is roughly 6x faster than compute-bound output
          #      token generation, so add 20% to the time from above (40s + 16s ~ 55s)
          #
          # Thus we think it will take us at worst about 55s to complete the longest possible
          # request the model is likely to receive at maximum concurrency (highest latency)
          # once requests stop being sent.
          #
          # NOTE: This number will be lower than steady state p99 latency since we stop receiving
          #       new requests which require continuous prompt token computation.
          # NOTE: The max timeout for backend connections from gateway to model servers should
          #       be configured based on steady state p99 latency, not drain p99 latency
          #
          #   5. Add the time the pod takes in its preStop hook to allow the load balancers to
          #      stop sending us new requests (55s + 30s ~ 85s)
          #
          # Because termination grace period controls when the Kubelet forcibly terminates a
          # stuck or hung process (a possibility due to a GPU crash), there is operational safety
          # in keeping the value roughly proportional to the time to finish serving. There is also
          # value in adding a bit of extra time to deal with unexpectedly long workloads.
          #
          #   6. Add a 50% safety buffer to this time since the operational impact should be low
          #      (85s * 1.5 ~ 130s)
          #
          # One additional source of drain latency is that some workloads may run close to
          # saturation and have queued requests on each server. Since traffic in excess of the
          # max sustainable QPS will result in timeouts as the queues grow, we assume that failure
          # to drain in time due to excess queues at the time of shutdown is an expected failure
          # mode of server overload. If your workload occasionally experiences high queue depths
          # due to periodic traffic, consider increasing the safety margin above to account for
          # time to drain queued requests.
          terminationGracePeriodSeconds: 130
          nodeSelector:
            cloud.google.com/gke-accelerator: "nvidia-h100-80gb"
          volumes:
            - name: data
              emptyDir: {}
            - name: shm
              emptyDir:
                medium: Memory
            - name: adapters
              emptyDir: {}
            - name: config-volume
              configMap:
                name: vllm-llama3-8b-adapters
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vllm-llama3-8b-adapters
    data:
      configmap.yaml: |
          vLLMLoRAConfig:
            name: vllm-llama3.1-8b-instruct
            port: 8000
            defaultBaseModel: meta-llama/Llama-3.1-8B-Instruct
            ensureExist:
              models:
              - id: food-review
                source: Kawon/llama3.1-food-finetune_v14_r8
              - id: cad-fabricator
                source: redcathode/fabricator
    ---
    kind: HealthCheckPolicy
    apiVersion: networking.gke.io/v1
    metadata:
      name: health-check-policy
      namespace: default
    spec:
      targetRef:
        group: "inference.networking.x-k8s.io"
        kind: InferencePool
        name: vllm-llama3-8b-instruct
      default:
        config:
          type: HTTP
          httpHealthCheck:
              requestPath: /health
              port: 8000
    
  3. Apply the sample manifest to your cluster:

    kubectl apply -f vllm-llama3-8b-instruct.yaml
    

After you apply the manifest, consider the following key fields and parameters:

  • replicas: specifies the number of Pods for the Deployment.
  • image: specifies the Docker image for the model server.
  • command: specifies the command to run when the container starts.
  • args: specifies the arguments to pass to the command.
  • env: specifies environment variables for the container.
  • ports: specifies the ports exposed by the container.
  • resources: specifies the resource requests and limits for the container, such as GPU.
  • volumeMounts: specifies how volumes are mounted into the container.
  • initContainers: specifies containers that run prior to the application container.
  • restartPolicy: specifies the restart policy for the Pods.
  • terminationGracePeriodSeconds: specifies the grace period for Pod termination.
  • volumes: specifies the volumes used by the Pods.

You can modify these fields to match your specific requirements.

Create an inference pool

The InferencePool Kubernetes custom resource defines a group of Pods with a common base large language model (LLM) and compute configuration. The selector field specifies which Pods belong to this pool. The labels in this selector must exactly match the labels applied to your model server Pods. The targetPort field defines the port that the model server uses within the Pods. The extensionRef field references an extension service that provides additional capabilities for the inference pool. The InferencePool enables GKE Inference Gateway to route traffic to your model server Pods.

Before you create the InferencePool, ensure that the Pods that the InferencePool selects are already running.
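
For example, you can confirm that the model server Pods from the earlier Deployment are ready; the label matches the Deployment created previously:

kubectl get pods -l app=vllm-llama3-8b-instruct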

To create an InferencePool using Helm, perform the following steps:

helm install vllm-llama3-8b-instruct \
  --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
  --set provider.name=gke \
  --version v0.3.0 \
  oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool

Change the following field to match your Deployment:

  • inferencePool.modelServers.matchLabels.app: the value of the app label used to select your model server Pods. Set it to match the app label on your Deployment.

The Helm installation automatically installs the necessary timeout policy, the endpoint picker, and the Pods needed for observability.

This command creates an InferencePool object named vllm-llama3-8b-instruct that references the model endpoint services within the Pods. It also creates a Deployment for the endpoint picker, labeled app:vllm-llama3-8b-instruct-epp, for the created InferencePool.
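
You can optionally verify the objects that the chart created. The resource names below follow from the Helm release name used earlier; adjust them if yours differ:

kubectl get inferencepools.inference.networking.x-k8s.io vllm-llama3-8b-instruct
kubectl get deployment vllm-llama3-8b-instruct-epp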

Specify model serving objectives

The InferenceModel custom resource defines a specific model to serve, including support for LoRA-tuned models, and its serving criticality. You define which models are served on an InferencePool by creating InferenceModel resources. These InferenceModel resources can reference base models or LoRA adapters supported by the model servers in the InferencePool.

The modelName field specifies the name of the base model or LoRA adapter. The criticality field specifies the serving criticality of the model. The poolRef field specifies the InferencePool on which the model is served.

To create an InferenceModel, perform the following steps:

  1. Save the following sample manifest as inferencemodel.yaml:

    apiVersion: inference.networking.x-k8s.io/v1alpha2
    kind: InferenceModel
    metadata:
      name: inferencemodel-sample
    spec:
      modelName: MODEL_NAME
      criticality: VALUE
      poolRef:
        name: INFERENCE_POOL_NAME
    

    Replace the following:

    • MODEL_NAME: the name of your base model or LoRA adapter. For example, food-review.
    • VALUE: the chosen serving criticality. Choose from Critical, Standard, or Sheddable. For example, Standard.
    • INFERENCE_POOL_NAME: the name of the InferencePool you created in the previous step. For example, vllm-llama3-8b-instruct.
  2. Apply the sample manifest to your cluster:

    kubectl apply -f inferencemodel.yaml
    

The following example creates two InferenceModel objects. The first configures the food-review LoRA adapter on the vllm-llama3-8b-instruct InferencePool with a Standard serving criticality. The second configures the base model to be served with a Critical serving criticality.

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: food-review
spec:
  modelName: food-review
  criticality: Standard
  poolRef:
    name: vllm-llama3-8b-instruct
  targetModels:
  - name: food-review
    weight: 100

---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: llama3-base-model
spec:
  modelName: meta-llama/Llama-3.1-8B-Instruct
  criticality: Critical
  poolRef:
    name: vllm-llama3-8b-instruct
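
After you apply these manifests, you can optionally list the InferenceModel objects in the cluster. The plural resource name below is assumed from the CRD; adjust it if your installation differs:

kubectl get inferencemodels.inference.networking.x-k8s.io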

Create the Gateway

The Gateway resource is the entry point for external traffic into your Kubernetes cluster. It defines the listeners that accept incoming connections.

The GKE Inference Gateway works with the following Gateway Classes:

  • gke-l7-rilb: for regional internal Application Load Balancers.
  • gke-l7-regional-external-managed: for regional external Application Load Balancers.

For more information, see Gateway Classes documentation.

To create a Gateway, perform the following steps:

  1. Save the following sample manifest as gateway.yaml:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: GATEWAY_NAME
    spec:
      gatewayClassName: GATEWAY_CLASS
      listeners:
        - protocol: HTTP
          port: 80
          name: http
    

    Replace GATEWAY_NAME with a unique name for your Gateway resource (for example, inference-gateway), and GATEWAY_CLASS with the Gateway Class that you want to use (for example, gke-l7-regional-external-managed).

  2. Apply the manifest to your cluster:

    kubectl apply -f gateway.yaml
    

Note: For more information about configuring TLS to secure your Gateway with HTTPS, see the GKE documentation on TLS configuration.
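
Optionally, wait for the Gateway to be programmed and then read its assigned address. This is a minimal sketch; substitute your Gateway name:

kubectl wait --for=condition=Programmed gateway/GATEWAY_NAME --timeout=10m
kubectl get gateway/GATEWAY_NAME -o jsonpath='{.status.addresses[0].value}'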

Create the HTTPRoute

The HTTPRoute resource defines how the GKE Gateway routes incoming HTTP requests to backend services, which in this context would be your InferencePool. The HTTPRoute resource specifies matching rules (for example, headers or paths) and the backend to which traffic should be forwarded.

  1. To create an HTTPRoute, save the following sample manifest as httproute.yaml:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: HTTPROUTE_NAME
    spec:
      parentRefs:
      - name: GATEWAY_NAME
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: PATH_PREFIX
        backendRefs:
        - name: INFERENCE_POOL_NAME
          kind: InferencePool
    

    Replace the following:

    • HTTPROUTE_NAME: a unique name for your HTTPRoute resource. For example, my-route.
    • GATEWAY_NAME: the name of the Gateway resource that you created. For example, inference-gateway.
    • PATH_PREFIX: the path prefix that you use to match incoming requests. For example, / to match all.
    • INFERENCE_POOL_NAME: the name of the InferencePool resource that you want to route traffic to. For example, vllm-llama3-8b-instruct.
  2. Apply the manifest to your cluster:

    kubectl apply -f httproute.yaml
    

Send inference requests

After you have configured GKE Inference Gateway, you can send inference requests to your deployed model. This lets you generate text based on your input prompt and specified parameters.

To send inference requests, perform the following steps:

  1. To get the Gateway endpoint, run the following command:

    IP=$(kubectl get gateway/GATEWAY_NAME -o jsonpath='{.status.addresses[0].value}')
    PORT=PORT_NUMBER # Use 80 for HTTP
    

    Replace the following:

    • GATEWAY_NAME: the name of your Gateway resource.
    • PORT_NUMBER: the port number you configured in the Gateway.
  2. To send a request to the /v1/completions endpoint using curl, run the following command:

    curl -i -X POST ${IP}:${PORT}/v1/completions \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d '{
        "model": "MODEL_NAME",
        "prompt": "PROMPT_TEXT",
        "max_tokens": MAX_TOKENS,
        "temperature": TEMPERATURE
    }'
    

    Replace the following:

    • MODEL_NAME: the name of the model or LoRA adapter to use.
    • PROMPT_TEXT: the input prompt for the model.
    • MAX_TOKENS: the maximum number of tokens to generate in the response.
    • TEMPERATURE: controls the randomness of the output. Use the value 0 for deterministic output, or a higher number for more creative output.

The following example shows you how to send a sample request to GKE Inference Gateway:

curl -i -X POST ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -H "Authorization: Bearer $(gcloud auth print-access-token)" -d '{
    "model": "food-review",
    "prompt": "What is the best pizza in the world?",
    "max_tokens": 2048,
    "temperature": 0
}'
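
A successful request returns an OpenAI-compatible completions response. The following shape is illustrative only, with abbreviated and made-up values; the exact fields depend on the model server:

{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "food-review",
  "choices": [
    {
      "index": 0,
      "text": "...generated text...",
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 128,
    "total_tokens": 137
  }
}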

Be aware of the following behaviors:

  • Request body: the request body can include additional parameters like stop and top_p. Refer to the OpenAI API specification for a complete list of options.
  • Error handling: implement proper error handling in your client code to handle potential errors in the response. For example, check the HTTP status code in the curl response. A non-200 status code generally indicates an error.
  • Authentication and authorization: for production deployments, secure your API endpoint with authentication and authorization mechanisms. Include the appropriate headers (for example, Authorization) in your requests.

What's next