DASH NYC, June 9-10 | AI + Observability


When WAFs Aren't Enough: How We Secured Our LLM-Powered Agent with Datadog AI Guard

About this Session

When NuLab started building AI-powered features for Backlog, their project management tool used by teams worldwide, they assumed their existing WAF and input validation would be enough. They were wrong.

Prompt injection attacks, unintended tool calls by their AI agent, and the risk of sensitive data exfiltration revealed that traditional defenses couldn't control the non-deterministic behavior of LLMs. The attack surface had fundamentally changed: it wasn't just about malicious user input, but also about what the AI itself might generate and execute.

In this session, Yuichi Watanabe (Principal Engineer) will share how he built an open-source security middleware for the Vercel AI SDK that integrates with Datadog AI Guard. The middleware evaluates prompts and tool calls in real time, before they reach the LLM or execute, and blocks threats based on centralized security policies, all while maintaining full observability across the AI pipeline.
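The core idea of evaluating a prompt before it ever reaches the model can be sketched as a thin wrapper around the model call. This is a minimal illustration, not the session's actual middleware: the `GuardVerdict` type, the `evaluatePrompt` stub, and its regex heuristic are all hypothetical stand-ins for a real call to the Datadog AI Guard service.

```typescript
// Hypothetical verdict shape; the real AI Guard API may differ.
type GuardVerdict = { action: "allow" | "block"; reason?: string };

// Stand-in evaluator. A production version would send the prompt to the
// guard service and apply centrally managed security policies.
function evaluatePrompt(prompt: string): GuardVerdict {
  const suspicious = /ignore (all )?previous instructions/i;
  return suspicious.test(prompt)
    ? { action: "block", reason: "possible prompt injection" }
    : { action: "allow" };
}

// Middleware-style wrapper: the guard runs first, so a blocked prompt
// never reaches the underlying model call.
async function guardedGenerate(
  prompt: string,
  callModel: (p: string) => Promise<string>
): Promise<string> {
  const verdict = evaluatePrompt(prompt);
  if (verdict.action === "block") {
    throw new Error(`Blocked by guard: ${verdict.reason}`);
  }
  return callModel(prompt);
}
```

The same interception point works for tool calls: instead of a prompt string, the wrapper would receive each proposed tool invocation and evaluate it before execution.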

You'll learn why input validation falls short for LLM applications and how to design middleware that intercepts both non-streaming and streaming LLM calls with per-tool-call evaluation. You'll also see how monitoring-only and blocking modes enable gradual rollout in production, along with practical lessons on fail-open vs. fail-closed trade-offs, handling malformed LLM output, and SDK version compatibility.
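The interplay between monitoring-only vs. blocking modes and fail-open vs. fail-closed behavior can be captured in a small decision function. This is a hedged sketch of the general pattern, not the session's code: `applyGuard`, `GuardMode`, and `FailPolicy` are names invented here for illustration.

```typescript
type GuardMode = "monitor" | "block";
type FailPolicy = "open" | "closed";
type GuardResult = { flagged: boolean; reason?: string };

// Decide whether a call may proceed, given the guard's verdict, the
// rollout mode, and the policy for guard-service failures.
async function applyGuard(
  input: string,
  evaluate: (s: string) => Promise<GuardResult>,
  mode: GuardMode,
  failPolicy: FailPolicy,
  log: (msg: string) => void
): Promise<{ allowed: boolean }> {
  let result: GuardResult;
  try {
    result = await evaluate(input);
  } catch (err) {
    // The guard service itself is unavailable: fail-open lets the call
    // through (availability first), fail-closed blocks it (safety first).
    log(`guard error: ${err}`);
    return { allowed: failPolicy === "open" };
  }
  if (result.flagged) {
    // Monitoring-only mode records the finding but lets the call proceed,
    // so teams can observe false-positive rates before enabling blocking.
    log(`guard flagged input: ${result.reason}`);
    return { allowed: mode === "monitor" };
  }
  return { allowed: true };
}
```

Starting in monitoring-only mode and reviewing the logged findings before flipping to blocking mode is what makes the gradual production rollout described above safe.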
