
Guardrails: Guarding LLM Apps

A guardrail is a guide that helps keep the conversation between a human and an AI on track. At present, there are two major open-source guardrail toolkits.

Guardrails acts as a kind of censor for applications built on large language models (LLMs).
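To make the censor metaphor concrete, here is a minimal sketch of how such a layer sits between an application and the model. The call_llm function and both checks are hypothetical placeholders for illustration, not the Guardrails API itself.

```python
from typing import Callable

BLOCKED_REPLY = "Sorry, I can't help with that request."

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (e.g. an HTTP API request).
    return f"Model answer to: {prompt}"

def guarded_call(prompt: str,
                 input_ok: Callable[[str], bool],
                 output_ok: Callable[[str], bool]) -> str:
    """Censor-style wrapper: screen the request, then screen the reply."""
    if not input_ok(prompt):
        return BLOCKED_REPLY  # the request never reaches the model
    answer = call_llm(prompt)
    if not output_ok(answer):
        return BLOCKED_REPLY  # the reply never reaches the user
    return answer
```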

Guardrails enables developers to set up three kinds of boundaries:

  • Topical guardrails to “prevent apps from veering off into undesired areas.” The example Cohen used in the briefing was an employee asking an HR chatbot which employees had adopted children; the guardrail prevented the chatbot from attempting to answer that question (a minimal sketch of such a check follows this list).
  • Safety guardrails, a broad category that includes fact-checking (preventing hallucinations), filtering out unwanted language, and blocking hateful content (see the second sketch below).
  • Security guardrails that “restrict apps to making connections only to external third-party applications known to be safe” (see the final sketch below).
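A topical guardrail can be as simple as a screen over the incoming question. The sketch below is a minimal, hypothetical version of the check behind Cohen's HR example; the keyword list is assumed for illustration, and a production system would more likely use a classifier or a second LLM call.

```python
OFF_LIMITS_TOPICS = ("adopted", "adoption", "medical history", "religion")

def on_topic(prompt: str) -> bool:
    """Topical guardrail: reject questions about protected personal matters."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in OFF_LIMITS_TOPICS)

print(on_topic("Which employees have adopted children?"))  # False -> blocked
print(on_topic("How many vacation days do I get?"))         # True  -> allowed
```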
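Safety guardrails inspect the model's reply rather than the user's question. A minimal sketch of the language-filtering piece, with an assumed placeholder blocklist (fact-checking against a knowledge source is considerably more involved):

```python
BANNED_WORDS = {"slur1", "slur2"}  # placeholder list of disallowed terms

def output_safe(answer: str) -> bool:
    """Safety guardrail: block replies that contain disallowed language."""
    words = {w.strip(".,!?").lower() for w in answer.split()}
    return words.isdisjoint(BANNED_WORDS)
```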
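Security guardrails reduce to an allowlist for outbound connections. A sketch, assuming every third-party request in the app is routed through one helper (the hostnames are invented examples):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.trusted-weather.example", "api.trusted-search.example"}

def connection_allowed(url: str) -> bool:
    """Security guardrail: only permit known-safe third-party hosts."""
    return urlparse(url).hostname in ALLOWED_HOSTS

print(connection_allowed("https://api.trusted-weather.example/v1"))  # True
print(connection_allowed("https://evil.example/steal"))              # False
```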