Session description
When we give AI agents access to our GraphQL APIs, we introduce a new class of distributed-system challenges: non-deterministic queries, potential N+1 floods, and authorization bypasses. How do we ensure that AI-generated queries are safe and efficient?
This talk bridges the gap between AI Quality Engineering and GraphQL governance. Building on my work designing evaluation frameworks for multi-agent systems, I will present strategies for monitoring and governing agents that interact with GraphQL endpoints. We will discuss how to implement "Semantic Rate Limiting" (analyzing query complexity vs. user intent) and how to evaluate the accuracy of agent-generated GraphQL syntax using "LLM-as-a-Judge" frameworks.
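To make the "Semantic Rate Limiting" idea concrete, here is a minimal sketch of a complexity-vs-intent check. The cost heuristic, intent names, and budget values are all hypothetical illustrations, not the framework presented in the talk:

```python
# Hypothetical sketch of "semantic rate limiting" for agent-issued GraphQL:
# score a query's structural complexity, then check it against a budget tied
# to the agent's declared intent.

def complexity_score(query: str) -> int:
    """Naive cost model: each selected field costs its nesting depth."""
    depth, score = 0, 0
    for raw in query.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.endswith("{"):       # entering a selection set
            depth += 1
            score += depth
        elif line == "}":            # leaving a selection set
            depth -= 1
        else:                        # leaf field at the current depth
            score += depth
    return score

# Per-intent complexity budgets (illustrative values).
INTENT_BUDGETS = {"lookup_single_user": 20, "bulk_report": 200}

def allow(query: str, intent: str) -> bool:
    """Reject queries whose complexity exceeds the declared intent's budget."""
    return complexity_score(query) <= INTENT_BUDGETS.get(intent, 0)

SAMPLE = """query {
  user(id: 1) {
    name
    orders {
      total
    }
  }
}"""
```

A production implementation would parse the query into an AST and weight fields by actual resolver cost, but the shape of the check is the same: the budget comes from the agent's stated intent, not just from who is calling.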
We will also cover the human-in-the-loop aspect: using GraphQL subscriptions to stream agent reasoning to human supervisors for real-time validation before a mutation is executed. Attendees will learn how to open their graphs to AI without compromising security, performance, or reliability.
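The approval gate above can be sketched in a few lines. In this hedged illustration an asyncio queue stands in for the GraphQL subscription transport that would stream the agent's reasoning to a supervisor; all function and field names are invented for the example:

```python
# Illustrative human-in-the-loop gate: the agent publishes a proposed mutation
# plus its reasoning trace; the mutation runs only if a supervisor approves.
import asyncio

async def agent_propose(channel: asyncio.Queue, mutation: str, reasoning: str) -> None:
    # The agent streams its intended mutation and reasoning to the channel.
    await channel.put({"mutation": mutation, "reasoning": reasoning})

async def supervisor_review(channel: asyncio.Queue) -> bool:
    # A real supervisor would inspect the reasoning in a UI; here we simply
    # auto-approve any proposal that carries a non-empty reasoning trace.
    proposal = await channel.get()
    return bool(proposal["reasoning"].strip())

async def gated_execution() -> str:
    channel: asyncio.Queue = asyncio.Queue()
    await agent_propose(channel, "mutation { deactivateUser(id: 42) }",
                        "Account flagged inactive for 90 days.")
    approved = await supervisor_review(channel)
    return "executed" if approved else "blocked"

result = asyncio.run(gated_execution())
```

In a real deployment the reasoning stream would be a GraphQL subscription and the approval would come from a human decision, not an automatic check; the point is that the mutation path is blocked until that decision arrives.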