The product described in this documentation, Anthos clusters on AWS (previous generation), is now in maintenance mode. All new installations must use the current generation product, Anthos clusters on AWS.
To connect to your GKE on AWS resources, perform the following
steps. Select whether you have an existing AWS VPC (or a direct connection to your VPC) or created a dedicated VPC when you created your management service.
Existing VPC
If you have a direct or VPN connection to an existing VPC, omit the line env HTTPS_PROXY=http://localhost:8118 from the commands in this topic.
Dedicated VPC
When you create a management service in a dedicated VPC, GKE on AWS includes a bastion host in a public subnet.
To connect to your management service, follow these steps:
Change to the directory that contains your GKE on AWS configuration. You created this directory when installing the management service.
cd anthos-aws
To open the tunnel, run the bastion-tunnel.sh script. The tunnel forwards
to localhost:8118.
To open a tunnel to the bastion host, run the following command:
./bastion-tunnel.sh -N
Messages from the SSH tunnel appear in this window. When you're ready to close the connection, stop the process with Control+C or by closing the window. If you restart your terminal session or the SSH connection is lost, run the bastion-tunnel.sh script again.
Open a new terminal and change into your anthos-aws directory.
cd anthos-aws
Check that you can connect to the cluster with kubectl:
env HTTPS_PROXY=http://localhost:8118 \
kubectl cluster-info
The output includes the URL of the management service API server.
Launch an NGINX Deployment
In this section, you create a Deployment of the NGINX web server named nginx-1.
kubectl
Use kubectl create to create the Deployment:
env HTTPS_PROXY=http://localhost:8118 \
kubectl create deployment --image nginx nginx-1
Use kubectl to get the status of the Deployment. Note the Pod's NAME:
env HTTPS_PROXY=http://localhost:8118 \
kubectl get deployment
Console
To launch an NGINX Deployment with the Google Cloud console, perform the following steps:
Visit the GKE Workloads menu in the Google Cloud console and click Deploy.
Under Edit container, select Existing container image to choose
a container image available from Container Registry. Fill in Image path with the container image that you want to use and its version. For this quickstart, use nginx:latest.
Click Done, and then click Continue. The Configuration
screen appears.
You can change the Deployment's Application name and
Kubernetes Namespace. For this quickstart, use
the application name nginx-1 and the namespace default.
From the Cluster drop-down menu, select your user cluster. By default,
your first user cluster is named cluster-0.
Click Deploy. GKE on AWS launches your NGINX Deployment.
The Deployment details screen appears.
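The console steps above correspond to a standard Kubernetes Deployment. As a rough sketch (the replica count and labels are assumptions; the console applies its own defaults), an equivalent manifest might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1            # the Application name chosen in the console
  namespace: default
spec:
  replicas: 1              # assumption: the console default may differ
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx-1
        image: nginx:latest   # the Image path and version from Edit container
```

You could apply a manifest like this with env HTTPS_PROXY=http://localhost:8118 kubectl apply -f deployment.yaml instead of using the console.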
Exposing your pods
This section shows how to do one of the following:
Expose the Deployment internally in your cluster and confirm that it's available
with kubectl port-forward.
Expose the Deployment from the Google Cloud console to the addresses allowed
by your node pool's security group.
kubectl
Expose port 80 of the Deployment to the cluster with kubectl expose:
env HTTPS_PROXY=http://localhost:8118 \
kubectl expose deployment nginx-1 --port=80
The Deployment is now accessible from within the cluster.
Forward port 80 on the Deployment to port 8080 on your local machine with kubectl port-forward:
env HTTPS_PROXY=http://localhost:8118 \
kubectl port-forward deployment/nginx-1 8080:80
Connect to http://localhost:8080 with curl or your web browser. The default NGINX web page appears:
curl http://localhost:8080
Console
From the Deployment details page, click Expose. The
Expose a deployment screen appears.
In the Port mapping section, leave the default port (80) and
click Done.
For Service type, select Load balancer. For more information
about other options, see
Publishing Services (ServiceTypes)
in the Kubernetes documentation.
Click Expose. The Service details screen appears, and GKE on AWS creates a Classic Elastic Load Balancer for the Service. Creating the load balancer takes several minutes; the hostname appears before you can access the Deployment.
Click the link for External Endpoints. If the load balancer is
ready, the default NGINX web page appears.
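The console expose flow creates a Service of type LoadBalancer. As a sketch (the selector is an assumption and must match the labels on the Deployment's Pods), an equivalent manifest might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-1-service   # the default name the console assigns
  namespace: default
spec:
  type: LoadBalancer      # GKE on AWS provisions a Classic Elastic Load Balancer
  selector:
    app: nginx-1          # assumption: must match the Deployment's Pod labels
  ports:
  - port: 80              # the default port kept in Port mapping
    targetPort: 80
```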
View your Deployment in the Google Cloud console
If your cluster is
connected to the Google Cloud console,
you can view your Deployment in the GKE Workloads page. To view
your workload, follow these steps:
In your browser, visit the Google Kubernetes Engine Workloads page. The list of workloads appears.
Click the name of your workload, nginx-1. The Deployment details screen appears.
From this screen, you can get details about your Deployment, view and edit its YAML configuration, and take other Kubernetes actions.
You can use other types of Kubernetes workloads with GKE on AWS.
See the GKE documentation for more information about
deploying workloads.
Last updated 2024-07-01 UTC.