# Tutorials

Large Language Models
---------------------

- [### JetStream MaxText inference on v6e
  A guide to setting up and using JetStream with MaxText for inference on v6e.](/tpu/docs/tutorials/LLM/jetstream-maxtext-inference-v6e)
- [### JetStream PyTorch inference on v6e
  A guide to setting up and using JetStream with PyTorch for inference on v6e.](/tpu/docs/tutorials/LLM/jetstream-pytorch-inference-v6e)
- [### vLLM inference on v6e
  A guide to setting up and using vLLM for inference on v6e.](/tpu/docs/tutorials/LLM/vllm-inference-v6e)
- [### Serve an LLM using TPUs on GKE with vLLM
  A guide to using vLLM to serve large language models (LLMs) using Tensor Processing Units (TPUs) on Google Kubernetes Engine (GKE).](https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-vllm-tpu)

Diffusion Models
----------------

- [### MaxDiffusion inference on v6e
  A guide to setting up and using MaxDiffusion for inference on v6e.](/tpu/docs/tutorials/LLM/maxdiffusion-inference-v6e)

Image Classification
--------------------

- [### Training ResNet on Cloud TPU (PyTorch)
  A ResNet image classification model using PyTorch, optimized to run on Cloud TPU.](/tpu/docs/tutorials/resnet-pytorch)