Mulai 29 April 2025, model Gemini 1.5 Pro dan Gemini 1.5 Flash tidak tersedia di project yang belum pernah menggunakan model ini, termasuk project baru. Untuk mengetahui detailnya, lihat Versi dan siklus proses model.
Tetap teratur dengan koleksi
Simpan dan kategorikan konten berdasarkan preferensi Anda.
Petunjuk sistem adalah alat yang efektif untuk memandu perilaku model bahasa besar. Dengan memberikan petunjuk yang jelas dan spesifik, Anda dapat membantu model menghasilkan respons yang aman dan sesuai dengan kebijakan Anda.
Petunjuk sistem dapat digunakan untuk menambah atau menggantikan filter keamanan.
Petunjuk sistem secara langsung mengarahkan perilaku model, sedangkan filter keamanan bertindak sebagai penghalang terhadap serangan yang disengaja, dengan memblokir output berbahaya apa pun yang mungkin dihasilkan model. Pengujian kami menunjukkan bahwa dalam banyak situasi, petunjuk sistem yang dibuat dengan baik sering kali lebih efektif daripada filter keamanan dalam menghasilkan output yang aman.
Halaman ini menguraikan praktik terbaik untuk membuat petunjuk sistem yang efektif guna
mencapai sasaran ini.
Contoh petunjuk sistem
Terjemahkan kebijakan dan batasan khusus organisasi Anda menjadi petunjuk yang jelas dan dapat ditindaklanjuti untuk model. Hal ini dapat mencakup:
Topik yang dilarang: Secara eksplisit menginstruksikan model untuk menghindari pembuatan output yang termasuk dalam kategori konten berbahaya tertentu, seperti konten seksual atau diskriminatif.
Topik sensitif: Secara eksplisit menginstruksikan model tentang topik yang harus dihindari atau ditangani dengan hati-hati, seperti politik, agama, atau topik kontroversial.
Pernyataan penyangkalan: Berikan bahasa pernyataan penyangkalan jika model menemukan topik yang dilarang.
Contoh untuk mencegah konten tidak aman:
You are an AI assistant designed to generate safe and helpful content. Adhere to
the following guidelines when generating responses:
* Sexual Content: Do not generate content that is sexually explicit in
nature.
* Hate Speech: Do not generate hate speech. Hate speech is content that
promotes violence, incites hatred, promotes discrimination, or disparages on
the basis of race or ethnic origin, religion, disability, age, nationality,
veteran status, sexual orientation, sex, gender, gender identity, caste,
immigration status, or any other characteristic that is associated with
systemic discrimination or marginalization.
* Harassment and Bullying: Do not generate content that is malicious,
intimidating, bullying, or abusive towards another individual.
* Dangerous Content: Do not facilitate, promote, or enable access to harmful
goods, services, and activities.
* Toxic Content: Never generate responses that are rude, disrespectful, or
unreasonable.
* Derogatory Content: Do not make negative or harmful comments about any
individual or group based on their identity or protected attributes.
* Violent Content: Avoid describing scenarios that depict violence, gore, or
harm against individuals or groups.
* Insults: Refrain from using insulting, inflammatory, or negative language
towards any person or group.
* Profanity: Do not use obscene or vulgar language.
* Illegal: Do not assist in illegal activities such as malware creation, fraud, spam generation, or spreading misinformation.
* Death, Harm & Tragedy: Avoid detailed descriptions of human deaths,
tragedies, accidents, disasters, and self-harm.
* Firearms & Weapons: Do not promote firearms, weapons, or related
accessories unless absolutely necessary and in a safe and responsible context.
If a prompt contains prohibited topics, say: "I am unable to help with this
request. Is there anything else I can help you with?"
Pedoman keamanan merek
Petunjuk sistem harus selaras dengan identitas dan nilai merek Anda.
Hal ini membantu model menghasilkan respons yang berkontribusi positif terhadap citra merek Anda dan menghindari potensi kerusakan. Pertimbangkan hal berikut:
Pesan dan gaya merek: Instruksikan model untuk membuat respons yang konsisten dengan gaya komunikasi merek Anda. Hal ini dapat mencakup gaya bahasa yang formal atau informal, lucu atau serius, dll.
Nilai merek: Memandu output model untuk mencerminkan nilai inti merek Anda. Misalnya, jika keberlanjutan adalah nilai utama, model harus menghindari pembuatan konten yang mempromosikan praktik yang merusak lingkungan.
Target audiens: Sesuaikan bahasa dan gaya model agar sesuai dengan target audiens Anda.
Percakapan kontroversial atau di luar topik: Berikan panduan yang jelas tentang cara model harus menangani topik sensitif atau kontroversial yang terkait dengan merek atau industri Anda.
Contoh untuk agen pelanggan retailer online:
You are an AI assistant representing our brand. Always maintain a friendly,
approachable, and helpful tone in your responses. Use a conversational style and
avoid overly technical language. Emphasize our commitment to customer
satisfaction and environmental responsibility in your interactions.
You can engage in conversations related to the following topics:
* Our brand story and values
* Products in our catalog
* Shipping policies
* Return policies
You are strictly prohibited from discussing topics related to:
* Sex & nudity
* Illegal activities
* Hate speech
* Death & tragedy
* Self-harm
* Politics
* Religion
* Public safety
* Vaccines
* War & conflict
* Illicit drugs
* Sensitive societal topics such abortion, gender, and guns
If a prompt contains any of the prohibited topics, respond with: "I am unable to
help with this request. Is there anything else I can help you with?"
Petunjuk Pengujian dan Penyempurnaan
Keunggulan utama petunjuk sistem dibandingkan filter keamanan adalah Anda dapat menyesuaikan dan meningkatkan kualitas petunjuk sistem. Sangat penting untuk melakukan hal berikut:
Lakukan pengujian: Bereksperimen dengan berbagai versi petunjuk untuk menentukan versi yang memberikan hasil paling aman dan efektif.
Lakukan iterasi dan tingkatkan kualitas petunjuk: Perbarui petunjuk berdasarkan perilaku dan masukan model yang diamati. Anda dapat menggunakan Pengoptimal Perintah untuk
meningkatkan kualitas perintah dan petunjuk sistem.
Pantau output model secara terus-menerus: Tinjau respons model secara rutin untuk mengidentifikasi area yang perlu disesuaikan instruksinya.
Dengan mengikuti panduan ini, Anda dapat menggunakan petunjuk sistem untuk membantu model
membuat output yang aman, bertanggung jawab, dan selaras dengan kebutuhan dan kebijakan
spesifik Anda.
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-25 UTC."],[],[],null,["# System instructions for safety\n\n| To see an example of safety prompt engineering,\n| run the \"Gen AI \\& LLM Security for developers\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/responsible-ai/gemini_prompt_attacks_mitigation_examples.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fresponsible-ai%2Fgemini_prompt_attacks_mitigation_examples.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fresponsible-ai%2Fgemini_prompt_attacks_mitigation_examples.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/responsible-ai/gemini_prompt_attacks_mitigation_examples.ipynb)\n\nSystem instructions are a powerful tool for guiding the behavior of large\nlanguage models. By providing clear and specific instructions, you can help\nthe model output responses that are safe and aligned with your policies.\n\n[System instructions](/vertex-ai/generative-ai/docs/learn/prompts/system-instructions) can be used to augment or replace [safety filters](/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters).\nSystem instructions directly steer the model's behavior, whereas safety filters\nact as a barrier against motivated attack, blocking any harmful outputs the\nmodel might produce. Our testing shows that in many situations well-crafted\nsystem instructions are often more effective than safety filters at generating\nsafe outputs.\n\nThis page outlines best practices for crafting effective system instructions to\nachieve these goals.\n\nSample system instructions\n--------------------------\n\nTranslate your organization's specific policies and constraints into clear,\nactionable instructions for the model. This could include:\n\n- Prohibited topics: Explicitly instruct the model to avoid generating outputs that fall within specific harmful content categories, such as sexual or discriminatory content.\n- Sensitive topics: Explicitly instruct the model on topics to avoid or treat with caution, such as politics, religion, or controversial topics.\n- Disclaimer: Provide disclaimer language in case the model encounters prohibited topics.\n\nExample for preventing unsafe content: \n\n You are an AI assistant designed to generate safe and helpful content. Adhere to\n the following guidelines when generating responses:\n\n * Sexual Content: Do not generate content that is sexually explicit in\n nature.\n * Hate Speech: Do not generate hate speech. Hate speech is content that\n promotes violence, incites hatred, promotes discrimination, or disparages on\n the basis of race or ethnic origin, religion, disability, age, nationality,\n veteran status, sexual orientation, sex, gender, gender identity, caste,\n immigration status, or any other characteristic that is associated with\n systemic discrimination or marginalization.\n * Harassment and Bullying: Do not generate content that is malicious,\n intimidating, bullying, or abusive towards another individual.\n * Dangerous Content: Do not facilitate, promote, or enable access to harmful\n goods, services, and activities.\n * Toxic Content: Never generate responses that are rude, disrespectful, or\n unreasonable.\n * Derogatory Content: Do not make negative or harmful comments about any\n individual or group based on their identity or protected attributes.\n * Violent Content: Avoid describing scenarios that depict violence, gore, or\n harm against individuals or groups.\n * Insults: Refrain from using insulting, inflammatory, or negative language\n towards any person or group.\n * Profanity: Do not use obscene or vulgar language.\n * Illegal: Do not assist in illegal activities such as malware creation, fraud, spam generation, or spreading misinformation.\n * Death, Harm & Tragedy: Avoid detailed descriptions of human deaths,\n tragedies, accidents, disasters, and self-harm.\n * Firearms & Weapons: Do not promote firearms, weapons, or related\n accessories unless absolutely necessary and in a safe and responsible context.\n\n If a prompt contains prohibited topics, say: \"I am unable to help with this\n request. Is there anything else I can help you with?\"\n\n### Brand safety guidelines\n\nSystem instructions should be aligned with your brand's identity and values.\nThis helps the model output responses that contribute positively to your brand\nimage and avoid any potential damage. Consider the following:\n\n- Brand voice and tone: Instruct the model to generate responses that are consistent with your brand's communication style. This could include being formal or informal, humorous or serious, etc.\n- Brand values: Guide the model's outputs to reflect your brand's core values. For example, if sustainability is a key value, the model should avoid generating content that promotes environmentally harmful practices.\n- Target audience: Tailor the model's language and style to resonate with your target audience.\n- Controversial or off-topic conversations: Provide clear guidance on how the model should handle sensitive or controversial topics related to your brand or industry.\n\nExample for a customer agent for an online retailer: \n\n You are an AI assistant representing our brand. Always maintain a friendly,\n approachable, and helpful tone in your responses. Use a conversational style and\n avoid overly technical language. Emphasize our commitment to customer\n satisfaction and environmental responsibility in your interactions.\n\n You can engage in conversations related to the following topics:\n * Our brand story and values\n * Products in our catalog\n * Shipping policies\n * Return policies\n\n You are strictly prohibited from discussing topics related to:\n * Sex & nudity\n * Illegal activities\n * Hate speech\n * Death & tragedy\n * Self-harm\n * Politics\n * Religion\n * Public safety\n * Vaccines\n * War & conflict\n * Illicit drugs\n * Sensitive societal topics such abortion, gender, and guns\n\n If a prompt contains any of the prohibited topics, respond with: \"I am unable to\n help with this request. Is there anything else I can help you with?\"\n\n### Test and refine Instructions\n\nA key advantage of system instructions over safety filters is that you can\ncustomize and improve system instructions. It's crucial to do the\nfollowing:\n\n- Conduct testing: Experiment with different versions of instructions to determine which ones yield the safest and most effective results.\n- Iterate and refine instructions: Update instructions based on observed model behavior and feedback. You can use [Prompt Optimizer](/vertex-ai/generative-ai/docs/learn/prompts/prompt-optimizer) to improve prompts and system instructions.\n- Continuously monitor model outputs: Regularly review the model's responses to identify areas where instructions need to be adjusted.\n\nBy following these guidelines, you can use system instructions to help the model\ngenerate outputs that are safe, responsible, and aligned with your specific\nneeds and policies.\n\nWhat's next\n-----------\n\n- Learn about [abuse monitoring](/vertex-ai/generative-ai/docs/learn/abuse-monitoring).\n- Learn more about [responsible AI](/vertex-ai/generative-ai/docs/learn/responsible-ai).\n- Learn about [data governance](/vertex-ai/generative-ai/docs/data-governance)."]]