See when your AI app
breaks itself.
Your chatbot loops. Your agent burns through tokens. Your RAG bot makes things up. Whoopsie catches it live and shows you what happened. Free forever.
Catches what's going wrong
Loops. Repetition. Hallucinations. Cost spikes. Tasks the agent ignored. Seven kinds of failure, all live.
No code, no terminal
Copy a prompt. Paste it into your AI builder's chat. Your AI edits the code for you. You watch the dashboard.
Private by default
Emails, phone numbers, API keys, JWT tokens — all redacted in the SDK before they leave your app. You can toggle it off any time.
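To make the idea concrete, here's a minimal sketch of what SDK-side redaction can look like. This is an illustration, not Whoopsie's actual implementation — the patterns and the `redact` function name are assumptions for the example:

```typescript
// Illustrative only: scrub common PII and secrets from a string
// before it ever leaves the app. Real patterns would be stricter.
function redact(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]")      // email addresses
    .replace(/\beyJ[\w-]+\.[\w-]+\.[\w-]+\b/g, "[JWT]")  // JWT-shaped tokens
    .replace(/\bsk-[A-Za-z0-9]{20,}\b/g, "[API_KEY]")    // API-style secret keys
    .replace(/\+?\d[\d\s().-]{8,}\d/g, "[PHONE]");       // phone-number-like runs
}
```

The point is the order of operations: redaction happens in your process, so the raw values never reach the dashboard.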
How it works
- 01
Pick where you build
Lovable, Replit, Bolt, Cursor, v0 — all supported. We give you a prompt tailored to that tool.
- 02
Paste the prompt
Open your AI builder's chat. Paste. Send. The AI installs Whoopsie and wires it up. Takes about a minute.
- 03
Watch your live dashboard
The first time someone uses your app, every chat call shows up. Failures get a red tag with what went wrong, in plain English.
What we catch
Each one runs locally on your traces. No second LLM call, no extra cost.
- loop
Your agent kept calling the same tool over and over.
- repetition
Your bot's reply repeated the same line.
- cost-spike
A single call burned a lot of tokens or dollars.
- completion-gap
Stopped early or ran on forever.
- hallucination-lite
Said something that wasn't in its sources.
- context-neglect
Ignored the user's settings or context.
- derailment
Did the wrong thing for the task it was given.
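To show what "runs locally on your traces" means, here's a toy version of a loop check. The `ToolCall` shape, `detectLoop` name, and threshold are made up for illustration — Whoopsie's real detectors are its own:

```typescript
// Illustrative only: flag a loop when the same tool is called with
// identical arguments more than `threshold` times in a row.
type ToolCall = { tool: string; args: string };

function detectLoop(calls: ToolCall[], threshold = 3): boolean {
  let run = 1;
  for (let i = 1; i < calls.length; i++) {
    const same =
      calls[i].tool === calls[i - 1].tool &&
      calls[i].args === calls[i - 1].args;
    run = same ? run + 1 : 1; // count the current streak of identical calls
    if (run > threshold) return true;
  }
  return false;
}
```

Checks like this are plain functions over the trace you already have — which is why there's no second LLM call and no extra cost.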
For developers (the actual code)
import { wrapLanguageModel, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { whoopsieMiddleware } from "@whoopsie/sdk";

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: whoopsieMiddleware(),
});

await streamText({ model, prompt: "..." });

That's the entire wrap. Works with the Vercel AI SDK v6 in any Next.js app (or anywhere you call streamText). If you're comfortable in a terminal, npx @whoopsie/cli init does the wrap for you.