Detect faces

Face Detection detects multiple faces within an image, along with the associated key facial attributes such as emotional state or wearing headwear.

Facial recognition of specific individuals is not supported.

Try it for yourself

If you're new to Google Cloud, create an account to evaluate how Cloud Vision API performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.

Try Cloud Vision API free

Face detection requests

Set up your Google Cloud project and authentication

Detect faces in a local image

You can use the Vision API to perform feature detection on a local image file.

For REST requests, send the contents of the image file as a base64-encoded string in the body of your request.

For gcloud and client library requests, specify the path to a local image in your request.
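If you need to produce the base64 string yourself, a short script is enough. The following Python sketch is illustrative (the file path is a placeholder, not part of the API):

import base64

# Read a local image in binary mode and print its base64 encoding.
# "path/to/image.jpg" is a placeholder; substitute your own file.
with open("path/to/image.jpg", "rb") as image_file:
    print(base64.b64encode(image_file.read()).decode("utf-8"))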

REST

Before using any of the request data, make the following replacements:

  • BASE64_ENCODED_IMAGE: The base64 representation (ASCII string) of your binary image data. This string should look similar to the following string:
    • /9j/4QAYRXhpZgAA...9tAVx/zDQDlGxn//2Q==
    For more information, see the base64 encode topic.
  • RESULTS_INT: (Optional) An integer value of results to return. If you omit the "maxResults" field and its value, the API returns the default value of 10 results. This field does not apply to the following feature types: TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "content": "BASE64_ENCODED_IMAGE"
      },
      "features": [
        {
          "maxResults": RESULTS_INT,
          "type": "FACE_DETECTION"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.

A FACE_DETECTION response includes bounding boxes for all detected faces, landmarks detected on the faces (eyes, nose, mouth, and so on), and confidence ratings for face and image properties (joy, sorrow, anger, surprise, and so on).
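For illustration only, an abbreviated response might look like the following sketch (faces and most fields omitted for brevity; see the FaceAnnotation reference for the complete set of fields):

{
  "responses": [
    {
      "faceAnnotations": [
        {
          "boundingPoly": {
            "vertices": [ { "x": 105, "y": 460 }, ... ]
          },
          "landmarks": [
            { "type": "LEFT_EYE", "position": { "x": 150.2, "y": 514.8, "z": 0.0 } },
            ...
          ],
          "detectionConfidence": 0.98,
          "joyLikelihood": "VERY_LIKELY",
          "sorrowLikelihood": "VERY_UNLIKELY",
          "angerLikelihood": "VERY_UNLIKELY",
          "surpriseLikelihood": "VERY_UNLIKELY"
        }
      ]
    }
  ]
}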

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
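For local development, one common way to set up Application Default Credentials is with the gcloud CLI (this assumes the gcloud CLI is installed):

gcloud auth application-default login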


import (
	"context"
	"fmt"
	"io"
	"os"

	vision "cloud.google.com/go/vision/apiv1"
)

// detectFaces gets faces from the Vision API for an image at the given file path.
func detectFaces(w io.Writer, file string) error {
	ctx := context.Background()

	client, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	f, err := os.Open(file)
	if err != nil {
		return err
	}
	defer f.Close()

	image, err := vision.NewImageFromReader(f)
	if err != nil {
		return err
	}
	// Request up to 10 face annotations; a nil ImageContext uses the default settings.
	annotations, err := client.DetectFaces(ctx, image, nil, 10)
	if err != nil {
		return err
	}
	if len(annotations) == 0 {
		fmt.Fprintln(w, "No faces found.")
	} else {
		fmt.Fprintln(w, "Faces:")
		for i, annotation := range annotations {
			fmt.Fprintln(w, "  Face", i)
			fmt.Fprintln(w, "    Anger:", annotation.AngerLikelihood)
			fmt.Fprintln(w, "    Joy:", annotation.JoyLikelihood)
			fmt.Fprintln(w, "    Surprise:", annotation.SurpriseLikelihood)
		}
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java reference documentation.


import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.FaceAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectFaces {

  public static void detectFaces() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "path/to/your/image/file.jpg";
    detectFaces(filePath);
  }

  // Detects faces in the specified local image.
  public static void detectFaces(String filePath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));

    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Feature.Type.FACE_DETECTION).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        for (FaceAnnotation annotation : res.getFaceAnnotationsList()) {
          System.out.format(
              "anger: %s%njoy: %s%nsurprise: %s%nposition: %s",
              annotation.getAngerLikelihood(),
              annotation.getJoyLikelihood(),
              annotation.getSurpriseLikelihood(),
              annotation.getBoundingPoly());
        }
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// Imports the Google Cloud client library
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

async function detectFaces() {
  /**
   * TODO(developer): Uncomment the following line before running the sample.
   */
  // const fileName = 'Local image file, e.g. /path/to/image.png';

  const [result] = await client.faceDetection(fileName);
  const faces = result.faceAnnotations;
  console.log('Faces:');
  faces.forEach((face, i) => {
    console.log(`  Face #${i + 1}:`);
    console.log(`    Joy: ${face.joyLikelihood}`);
    console.log(`    Anger: ${face.angerLikelihood}`);
    console.log(`    Sorrow: ${face.sorrowLikelihood}`);
    console.log(`    Surprise: ${face.surpriseLikelihood}`);
  });
}
detectFaces();

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def detect_faces(path):
    """Detects faces in an image."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as image_file:
        content = image_file.read()

    image = vision.Image(content=content)

    response = client.face_detection(image=image)
    faces = response.face_annotations

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = (
        "UNKNOWN",
        "VERY_UNLIKELY",
        "UNLIKELY",
        "POSSIBLE",
        "LIKELY",
        "VERY_LIKELY",
    )
    print("Faces:")

    for face in faces:
        print(f"anger: {likelihood_name[face.anger_likelihood]}")
        print(f"joy: {likelihood_name[face.joy_likelihood]}")
        print(f"surprise: {likelihood_name[face.surprise_likelihood]}")

        vertices = [
            f"({vertex.x},{vertex.y})" for vertex in face.bounding_poly.vertices
        ]

        print("face bounds: {}".format(",".join(vertices)))

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the Vision reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the Vision reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the Vision reference documentation for Ruby.

Detect faces in a remote image

You can use the Vision API to perform feature detection on a remote image file that is located in Cloud Storage or on the Web. To send a remote file request, specify the file's Web URL or Cloud Storage URI in the body of your request.
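The samples below use a Cloud Storage URI, but a publicly accessible web URL works the same way; only the imageUri value changes, as in this illustrative fragment (the URL is hypothetical):

"image": {
  "source": {
    "imageUri": "https://example.com/photo.jpg"
  }
}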

REST

Before using any of the request data, make the following replacements:

  • CLOUD_STORAGE_IMAGE_URI: the path to a valid image file in a Cloud Storage bucket. You must at least have read privileges to the file. Example:
    • gs://cloud-samples-data/vision/face/faces.jpeg
  • RESULTS_INT: (Optional) An integer value of results to return. If you omit the "maxResults" field and its value, the API returns the default value of 10 results. This field does not apply to the following feature types: TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "CLOUD_STORAGE_IMAGE_URI"
        }
      },
      "features": [
        {
          "maxResults": RESULTS_INT,
          "type": "FACE_DETECTION"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.

A FACE_DETECTION response includes bounding boxes for all detected faces, landmarks detected on the faces (eyes, nose, mouth, and so on), and confidence ratings for face and image properties (joy, sorrow, anger, surprise, and so on).

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import (
	"context"
	"fmt"
	"io"

	vision "cloud.google.com/go/vision/apiv1"
)

// detectFacesURI gets faces from the Vision API for an image at the given file URI.
func detectFacesURI(w io.Writer, file string) error {
	ctx := context.Background()

	client, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	image := vision.NewImageFromURI(file)
	// Request up to 10 face annotations; a nil ImageContext uses the default settings.
	annotations, err := client.DetectFaces(ctx, image, nil, 10)
	if err != nil {
		return err
	}
	if len(annotations) == 0 {
		fmt.Fprintln(w, "No faces found.")
	} else {
		fmt.Fprintln(w, "Faces:")
		for i, annotation := range annotations {
			fmt.Fprintln(w, "  Face", i)
			fmt.Fprintln(w, "    Anger:", annotation.AngerLikelihood)
			fmt.Fprintln(w, "    Joy:", annotation.JoyLikelihood)
			fmt.Fprintln(w, "    Surprise:", annotation.SurpriseLikelihood)
		}
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java reference documentation.


import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.FaceAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageSource;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectFacesGcs {

  public static void detectFacesGcs() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "gs://your-gcs-bucket/path/to/image/file.jpg";
    detectFacesGcs(filePath);
  }

  // Detects faces in the specified remote image on Google Cloud Storage.
  public static void detectFacesGcs(String gcsPath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feat = Feature.newBuilder().setType(Feature.Type.FACE_DETECTION).build();

    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        for (FaceAnnotation annotation : res.getFaceAnnotationsList()) {
          System.out.format(
              "anger: %s%njoy: %s%nsurprise: %s%nposition: %s",
              annotation.getAngerLikelihood(),
              annotation.getJoyLikelihood(),
              annotation.getSurpriseLikelihood(),
              annotation.getBoundingPoly());
        }
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

async function detectFacesGCS() {
  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const bucketName = 'Bucket where the file resides, e.g. my-bucket';
  // const fileName = 'Path to file within bucket, e.g. path/to/image.png';

  // Performs face detection on the gcs file
  const [result] = await client.faceDetection(`gs://${bucketName}/${fileName}`);
  const faces = result.faceAnnotations;
  console.log('Faces:');
  faces.forEach((face, i) => {
    console.log(`  Face #${i + 1}:`);
    console.log(`    Joy: ${face.joyLikelihood}`);
    console.log(`    Anger: ${face.angerLikelihood}`);
    console.log(`    Sorrow: ${face.sorrowLikelihood}`);
    console.log(`    Surprise: ${face.surpriseLikelihood}`);
  });
}
detectFacesGCS();

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def detect_faces_uri(uri):
    """Detects faces in the file located in Google Cloud Storage or the web."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = uri

    response = client.face_detection(image=image)
    faces = response.face_annotations

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = (
        "UNKNOWN",
        "VERY_UNLIKELY",
        "UNLIKELY",
        "POSSIBLE",
        "LIKELY",
        "VERY_LIKELY",
    )
    print("Faces:")

    for face in faces:
        print(f"anger: {likelihood_name[face.anger_likelihood]}")
        print(f"joy: {likelihood_name[face.joy_likelihood]}")
        print(f"surprise: {likelihood_name[face.surprise_likelihood]}")

        vertices = [
            f"({vertex.x},{vertex.y})" for vertex in face.bounding_poly.vertices
        ]

        print("face bounds: {}".format(",".join(vertices)))

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )

gcloud

To perform face detection, use the gcloud ml vision detect-faces command as shown in the following example:

gcloud ml vision detect-faces gs://cloud-samples-data/vision/face/faces.jpeg

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the Vision reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the Vision reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the Vision reference documentation for Ruby.

Try it

Try face detection below. You can use the image specified already (gs://cloud-samples-data/vision/face/faces.jpeg) or specify your own image in its place. Select Execute to send the request.

Request body:

{
  "requests": [
    {
      "features": [
        {
          "maxResults": 10,
          "type": "FACE_DETECTION"
        }
      ],
      "image": {
        "source": {
          "imageUri": "gs://cloud-samples-data/vision/face/faces.jpeg"
        }
      }
    }
  ]
}