Post 355
🛡️ AI Guardrails with Open Language Models - Tutorial
https://haystack.deepset.ai/cookbook/safety_moderation_open_lms
How do you ensure your AI application is safe from harmful or inappropriate user inputs?
This is a core requirement for real-world AI deployments. Luckily, several open Language Models are built specifically for safety moderation.
I've been exploring them and put together a hands-on tutorial using the Haystack framework to build your own AI guardrails.
In the notebook, you'll learn how to use and customize:
🔹 Meta Llama Guard (via Hugging Face API; sketched after this list)
🔹 IBM Granite Guardian (via Ollama), which can also evaluate RAG-specific risk dimensions
🔹 Google ShieldGemma (via Ollama)
🔹 Nvidia NemoGuard model family, including a model for topic control
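Here's a rough idea of what the first item looks like with Haystack (a minimal sketch, not the notebook's exact code; the model ID, the HF_API_TOKEN environment variable, and the safe/unsafe parsing are my assumptions):

```python
# Minimal sketch: screen a user message with Llama Guard via the
# Hugging Face serverless Inference API, using Haystack components.
# Assumes access to the gated meta-llama/Llama-Guard-3-8B checkpoint
# and a token in the HF_API_TOKEN environment variable.
from haystack.components.generators.chat import HuggingFaceAPIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

guard = HuggingFaceAPIChatGenerator(
    api_type="serverless_inference_api",
    api_params={"model": "meta-llama/Llama-Guard-3-8B"},  # assumed model ID
    token=Secret.from_env_var("HF_API_TOKEN"),
)

user_input = "How can I hot-wire a car?"
result = guard.run(messages=[ChatMessage.from_user(user_input)])
verdict = result["replies"][0].text.strip()

# Llama Guard answers "safe", or "unsafe" followed by the violated category (e.g. "S2").
is_safe = verdict.splitlines()[0].lower() == "safe"
print(verdict, "->", "allow" if is_safe else "block")
```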
You'll also see how to integrate content moderation into a RAG pipeline.
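As a rough sketch of that idea (again, not the notebook's exact wiring), the guard from the snippet above can simply gate an existing RAG pipeline; `rag_pipeline`, its "retriever" / "prompt_builder" / "generator" component names, and the refusal message are placeholders for your own setup:

```python
# Sketch: only forward queries to the RAG pipeline if the guard says they're safe.
from haystack.dataclasses import ChatMessage

def moderated_answer(query: str, rag_pipeline):
    # Reuse the `guard` component defined in the previous snippet.
    verdict = guard.run(messages=[ChatMessage.from_user(query)])["replies"][0].text
    if not verdict.strip().lower().startswith("safe"):
        # Block unsafe requests before they ever reach the retriever or LLM.
        return "Sorry, I can't help with that request."
    result = rag_pipeline.run({
        "retriever": {"query": query},          # placeholder component names
        "prompt_builder": {"query": query},
    })
    return result["generator"]["replies"][0]
```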