
Dr. Priyanka Tembey

with Vrajesh Bhavsar

Runtime defense for AI applications, APIs, and cloud workloads against modern threats.

AI Security · Runtime Security · API Security · Cloud Security · LLM Security

Overview

Operant AI was founded on the observation that the AI application stack introduces a class of runtime attacks that traditional WAFs and API gateways were never designed to handle. Prompt injection, model extraction, data exfiltration through LLM outputs — these require new detection logic at the inference layer, not just the network perimeter.

What They’re Building

The platform instruments AI applications, APIs, and cloud workloads at runtime — intercepting requests and responses to detect and block:

  • Prompt injection and jailbreaking — adversarial inputs designed to subvert LLM behavior
  • Rogue agent activity — autonomous AI taking unauthorized actions
  • Data poisoning — malicious inputs intended to corrupt model outputs
  • Sensitive data exfiltration — PII, credentials, or proprietary data leaking through LLM responses

Protection covers the full stack: the API gateway, the application layer, and the AI inference endpoints.
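Operant's detection logic is proprietary, but the general runtime-interception pattern the section describes can be sketched in a few lines: inspect each prompt before it reaches the model, and scan each response before it leaves. The patterns below are naive illustrative heuristics, not Operant's rules; real systems use far richer detection at the inference layer.

```python
import re

# Illustrative only: naive regex heuristics standing in for real,
# proprietary detection logic at the inference layer.

# Inbound checks: crude prompt-injection / jailbreak signatures.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
]

# Outbound checks: crude sensitive-data shapes (PII, credentials).
EXFIL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-shaped string
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # leaked API key
]

def guard_request(prompt: str) -> str:
    """Inspect an inbound prompt before it reaches the model; block on match."""
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            raise ValueError(f"blocked: possible prompt injection ({pat.pattern})")
    return prompt

def guard_response(text: str) -> str:
    """Redact sensitive data from a model response before it leaves."""
    for pat in EXFIL_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text
```

In practice this logic sits in a proxy or sidecar in front of the inference endpoint, so every request and response passes through it without changes to application code.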

Traction

  • $10M Series A (September 2024)
  • Growing customer base across enterprises running AI in production

Why It Matters

Every major enterprise is deploying AI-powered applications, and most have no runtime visibility into what those systems are actually doing. Operant fills the gap between “we deployed an LLM” and “we know it’s behaving safely.”