Red-teaming a RAG app with the Azure AI Evaluation SDK

When we develop user-facing applications powered by LLMs, we take on a real risk that the LLM may produce unsafe output, like responses that encourage violence, hate speech, or self-harm. How can we be confident that a troll won't get our app…
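The rest of the post (truncated here) answers that question with automated red-teaming. As a rough illustration of the approach, here is a minimal sketch using the azure-ai-evaluation package's red-teaming support. The project endpoint, the target callback, and the specific risk categories and attack strategies shown are assumptions chosen for illustration, not code taken from this post, and the exact API surface may vary by SDK version.

```python
import asyncio

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import AttackStrategy, RedTeam, RiskCategory


def rag_app_callback(query: str) -> str:
    """Stand-in target: forward the adversarial query to your RAG app
    and return its text response. (Hypothetical; replace with a call
    into your own application.)"""
    return "I'm sorry, I can't help with that."


async def main():
    # RedTeam generates adversarial prompts for each risk category,
    # sends them to the target, and records which attacks succeed.
    red_team = RedTeam(
        # Placeholder project endpoint; older SDK versions accepted a
        # dict of subscription/resource-group/project names instead.
        azure_ai_project="https://example.services.ai.azure.com/api/projects/my-project",
        credential=DefaultAzureCredential(),
        risk_categories=[
            RiskCategory.Violence,
            RiskCategory.HateUnfairness,
            RiskCategory.SelfHarm,
        ],
        num_objectives=5,  # attack prompts to attempt per category
    )
    result = await red_team.scan(
        target=rag_app_callback,
        scan_name="rag-safety-scan",
        # Strategies that obfuscate the attack prompt before sending it.
        attack_strategies=[AttackStrategy.Flip, AttackStrategy.Base64],
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

The scan report summarizes attack success rates per risk category and strategy, giving a concrete measure of how the app holds up under adversarial prompts; treat the exact shape of the result object as version-dependent.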
