Responsible Generative AI Toolkit | Google AI for Developers

Tools and guidance to design, build and evaluate open AI models responsibly.


Define system-level policies

Determine what type of content your application should and should not generate.

Design for safety

Define your overall approach for implementing risk mitigation techniques, weighing technical and business tradeoffs.

Be transparent

Communicate your approach with artifacts like model cards.


Craft safer, more robust prompts

Use the power of LLMs to help craft safer prompt templates with the Model Alignment library.
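
Under the hood, the idea is a critique-and-rewrite loop: a model critiques your prompt template against a stated principle, then proposes a revision. Below is a minimal sketch of that loop using the google-generativeai SDK; the Model Alignment library's own API differs, and the model name, principle, and helper function here are illustrative.

```python
# Sketch of the critique-and-rewrite loop behind prompt refinement.
# Uses the google-generativeai SDK directly; the Model Alignment
# library's own API differs from this.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

def refine_prompt(prompt_template: str, principle: str) -> str:
    """Critique a prompt template against one safety principle,
    then rewrite it to satisfy that principle."""
    critique = model.generate_content(
        f"Critique this prompt template against the principle "
        f"'{principle}':\n\n{prompt_template}"
    ).text
    rewrite = model.generate_content(
        f"Rewrite the prompt template below so it satisfies the principle "
        f"'{principle}', guided by this critique:\n"
        f"Critique: {critique}\n\nTemplate:\n{prompt_template}"
    ).text
    return rewrite

template = "Summarize the user's message: {message}"
print(refine_prompt(template, "Never reveal personal data in summaries."))
```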

Investigate model prompts

Build safe and helpful prompts through iterative improvement with the Learning Interpretability Tool (LIT).
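
A minimal notebook setup is sketched below, assuming LIT's classic dataset/model wrapper API (details vary across LIT versions); PromptDataset, MyLLM, and the stub generator are illustrative stand-ins for your own data and model.

```python
# Minimal LIT setup for inspecting prompts in a notebook.
# Wrapper shapes follow LIT's classic dataset/model API; the generate
# function is a placeholder you would replace with your own model.
from lit_nlp import notebook
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

class PromptDataset(lit_dataset.Dataset):
    """A handful of prompts to probe for unsafe completions."""
    def __init__(self, prompts):
        self._examples = [{"prompt": p} for p in prompts]
    def spec(self):
        return {"prompt": lit_types.TextSegment()}

class MyLLM(lit_model.Model):
    """Wraps an arbitrary text-generation function (placeholder)."""
    def __init__(self, generate_fn):
        self._generate = generate_fn
    def input_spec(self):
        return {"prompt": lit_types.TextSegment()}
    def output_spec(self):
        return {"response": lit_types.GeneratedText(parent="prompt")}
    def predict_minibatch(self, inputs):
        return [{"response": self._generate(ex["prompt"])} for ex in inputs]

prompts = ["Tell me about my neighbor.", "How do I dispute a bill?"]
widget = notebook.LitWidget(
    models={"my_llm": MyLLM(lambda p: "...")},  # stub generator
    datasets={"prompts": PromptDataset(prompts)},
    height=600,
)
widget.render()
```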



LLM Comparator

Conduct side-by-side evaluations with LLM Comparator to qualitatively assess differences between the responses of two models, of different prompts for the same model, or even of different tunings of a model.
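
One way to feed it: collect responses from two models into a JSON file for review. The field names below only approximate LLM Comparator's input schema; consult the project documentation for the exact format.

```python
# Assemble side-by-side responses from two models into a JSON file for
# qualitative review. Field names approximate LLM Comparator's input
# schema; check the project docs for the exact format.
import json

def build_comparison(prompts, model_a, model_b,
                     name_a="model-a", name_b="model-b"):
    examples = [
        {
            "input_text": p,
            "output_text_a": model_a(p),  # placeholder generation fns
            "output_text_b": model_b(p),
        }
        for p in prompts
    ]
    return {"models": [{"name": name_a}, {"name": name_b}],
            "examples": examples}

prompts = ["Explain photosynthesis to a child.",
           "Draft a polite refund email."]
data = build_comparison(prompts, lambda p: "...", lambda p: "...")
with open("comparison.json", "w") as f:
    json.dump(data, f, indent=2)
```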



Agile classifiers

Create safety classifiers for your specific policies using parameter-efficient tuning (PET) with relatively little training data, as in the sketch below.
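
As a sketch of one PET technique (not necessarily the one this recipe uses), the following trains LoRA adapters on a small encoder as a binary policy classifier with Hugging Face PEFT; the base model, labels, and hyperparameters are illustrative.

```python
# Parameter-efficient tuning via LoRA: only small low-rank adapter
# weights are trained, so a modest labeled set can suffice.
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased"  # illustrative small encoder
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Wrap the model so only the LoRA adapter weights are trainable.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
    r=8, lora_alpha=16, lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)

# A tiny labeled set: 1 = violates your content policy, 0 = allowed.
examples = Dataset.from_dict({
    "text": ["buy cheap meds now!!!", "what time does the store open?"],
    "label": [1, 0],
}).map(lambda ex: tokenizer(ex["text"], truncation=True,
                            padding="max_length", max_length=64))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="policy-classifier", num_train_epochs=3),
    train_dataset=examples,
).train()
```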

Checks AI Safety

Ensure AI safety compliance with your content policies using APIs and monitoring dashboards.

Text moderation service

Detect a range of safety attributes, including potentially harmful categories and topics that may be considered sensitive, with this Google Cloud Natural Language API, which is free below a certain usage limit.
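
A minimal sketch of a moderation call with the google-cloud-language client (assumes Application Default Credentials are configured):

```python
# Call Cloud Natural Language's text moderation endpoint and print each
# safety attribute with its confidence score.
from google.cloud import language_v2

client = language_v2.LanguageServiceClient()
document = language_v2.Document(
    content="You are a terrible person and everyone hates you.",
    type_=language_v2.Document.Type.PLAIN_TEXT,
)
response = client.moderate_text(document=document)
for category in response.moderation_categories:
    print(f"{category.name}: {category.confidence:.2f}")
```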

Perspective API

Identify "toxic" comments with this free Google Jigsaw API to mitigate online toxicity and ensure healthy dialogue.