🛡️ AI Guardrails with Open Language Models - Tutorial
🔗 https://haystack.deepset.ai/cookbook/safety_moderation_open_lms
How do you ensure your AI application is safe from harmful or inappropriate user inputs?
This is a core requirement for real-world AI deployments. Luckily, several open language models are built specifically for safety moderation.
I've been exploring them and put together a hands-on tutorial using the Haystack framework to build your own AI guardrails.
In the notebook, you'll learn how to use and customize:
🔹 Meta Llama Guard (via Hugging Face API; a minimal sketch follows this list)
🔹 IBM Granite Guardian (via Ollama), which can also evaluate RAG-specific risk dimensions
🔹 Google ShieldGemma (via Ollama)
🔹 the Nvidia NemoGuard model family, including a model for topic control
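To give a flavor of the Llama Guard bullet, here is a minimal sketch of calling a guard checkpoint through Haystack's HuggingFaceAPIChatGenerator. The model id (meta-llama/Llama-Guard-3-8B) and the HF_TOKEN environment variable are my assumptions here, not necessarily what the notebook uses:

```python
# Minimal sketch: classify a user message with Llama Guard via the
# Hugging Face Serverless Inference API. The model id and env var
# are assumptions, not necessarily the cookbook's exact choices.
from haystack.components.generators.chat import HuggingFaceAPIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

llama_guard = HuggingFaceAPIChatGenerator(
    api_type="serverless_inference_api",
    api_params={"model": "meta-llama/Llama-Guard-3-8B"},
    token=Secret.from_env_var("HF_TOKEN"),
)

# Llama Guard answers "safe", or "unsafe" followed by the violated
# category code (e.g. "S9").
result = llama_guard.run([ChatMessage.from_user("How do I hotwire a car?")])
print(result["replies"][0].text)
```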
You'll also see how to integrate content moderation into a RAG pipeline.
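The gating idea itself is simple: screen the raw query first, and only run retrieval and generation when the guard says "safe". A rough sketch, where rag_pipeline and its component names (prompt_builder, llm) are placeholders for your own Haystack pipeline and the "safe"-prefix check mirrors Llama Guard's output format:

```python
# Hedged sketch: moderation gate in front of a RAG pipeline, reusing
# `llama_guard` from the snippet above. `rag_pipeline` and the
# "prompt_builder"/"llm" component names are placeholders.
from haystack.dataclasses import ChatMessage

def moderated_rag(query: str) -> str:
    # 1. Screen the raw user input with the guard model.
    verdict = llama_guard.run([ChatMessage.from_user(query)])["replies"][0].text

    # 2. Refuse anything the guard does not label "safe".
    if not verdict.strip().lower().startswith("safe"):
        return "Sorry, I can't help with that request."

    # 3. Safe inputs proceed to retrieval + generation as usual.
    result = rag_pipeline.run({"prompt_builder": {"query": query}})
    return result["llm"]["replies"][0].text
```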