Promptive

Debug LLMs. Ship the right model.

Observability tells you what you're spending. Promptive tells you if you're spending it on the right model — with logs, evals, and real cost per provider.

10,000 free calls · No overages, ever · Your keys, never ours · Model comparison built in

Overview

7-day calls

2,847

↑12%

Avg latency

1.4s

P95: 4.1s

Avg tokens/call

1,240

820 in · 420 out

[Chart: 7-day Calls / Latency / Tokens / Cost, Mon–Sun]

Top Issues

  • Slow calls (>5s): 2
  • No system prompt: 1

Slowest Prompts

  1. doc-extractor: 8.1s
  2. summarizer: 5.8s
  3. classifier: 1.2s

Most Expensive

  1. doc-extractor: $1.62
  2. summarizer: $1.14
  3. classifier: $0.94

Recent Logs

Last 5 calls — click a row to inspect

View all logs →
Prompt | Model | Latency | Tokens | Cost
summarizer | haiku-4.5 | 423ms | 1,132 tok | $0.0008
classifier | sonnet-4.6 | 1.2s | 1,328 tok | $0.0041
summarizer | haiku-4.5 | 5.8s | 1,138 tok | $0.0009
doc-extractor | opus-4.6 | 8.1s | 3,820 tok | $0.082
classifier | sonnet-4.6 | 980ms | 1,272 tok | $0.0039
Logging

Log Everything, Automatically

Every LLM call is captured — latency, token count, cost, model, and the full input/output. Zero extra code after the one-time setup.

  • Latency & p95 tracking
  • Full prompt & completion capture
  • Token counts per call
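A captured call like the rows above can be modeled and costed in a few lines. This is an illustrative sketch only: the field names are assumptions, not Promptive's actual schema, and the per-million-token prices are placeholders, not any provider's real rates.

```typescript
// Shape of one captured call (illustrative field names, not Promptive's schema).
interface CallRecord {
  promptId: string;
  version: string;
  model: string;
  latencyMs: number;
  inputTokens: number;
  outputTokens: number;
}

// Per-million-token prices. Placeholder numbers, not real provider rates.
interface Pricing {
  inputPerMTok: number;
  outputPerMTok: number;
}

// Estimate the dollar cost of one call from its token counts.
function estimateCost(rec: CallRecord, price: Pricing): number {
  return (rec.inputTokens * price.inputPerMTok + rec.outputTokens * price.outputPerMTok) / 1e6;
}

const rec: CallRecord = {
  promptId: "summarizer", version: "v1.0.1", model: "haiku-4.5",
  latencyMs: 423, inputTokens: 820, outputTokens: 312,
};
console.log(estimateCost(rec, { inputPerMTok: 0.5, outputPerMTok: 1.25 }).toFixed(4)); // → 0.0008
```

With those placeholder rates, the 820-in / 312-out summarizer call works out to the $0.0008 shown in the logs.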

Logs

2,847 calls in view · filter, search, and inspect

Prompt | Version | Model | Latency | In | Out | Cost | Time
summarizer | v1.0.1 | haiku-4.5 | 423ms | 820 | 312 | $0.0008 | 2m ago
classifier | v2.0.0 | sonnet-4.6 | 1.2s | 1,240 | 88 | $0.0041 | 5m ago
summarizer | v1.0.1 | haiku-4.5 | 5.8s | 840 | 298 | $0.0009 | 18m ago
doc-extractor | v1.0.0 | opus-4.6 | 8.1s | 3,100 | 720 | $0.082 | 1h ago
classifier | v2.0.0 | sonnet-4.6 | 980ms | 1,180 | 92 | $0.0039 | 1h ago
Versioning

Compare Prompt Versions

Tag each call with a prompt ID and version number. Metrics update in real-time so you can see exactly what changed between iterations.

  • Side-by-side metric comparison
  • Real-time version diff
  • Regression detection
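Side-by-side comparison boils down to grouping tagged calls by version and aggregating their metrics. A minimal sketch of that idea, with field names assumed for illustration:

```typescript
// One tagged call (illustrative fields, not Promptive's schema).
interface VersionedCall {
  version: string;
  latencyMs: number;
  costUsd: number;
}

interface VersionStats {
  calls: number;
  avgLatencyMs: number;
  totalCostUsd: number;
}

// Group calls by version and compute per-version averages and totals.
function compareVersions(calls: VersionedCall[]): Map<string, VersionStats> {
  const stats = new Map<string, VersionStats>();
  for (const c of calls) {
    const s = stats.get(c.version) ?? { calls: 0, avgLatencyMs: 0, totalCostUsd: 0 };
    // Running mean: fold each new latency into the average in one pass.
    s.avgLatencyMs = (s.avgLatencyMs * s.calls + c.latencyMs) / (s.calls + 1);
    s.calls += 1;
    s.totalCostUsd += c.costUsd;
    stats.set(c.version, s);
  }
  return stats;
}

const result = compareVersions([
  { version: "v1.0.0", latencyMs: 900, costUsd: 0.001 },
  { version: "v1.0.1", latencyMs: 423, costUsd: 0.0008 },
  { version: "v1.0.1", latencyMs: 577, costUsd: 0.0009 },
]);
console.log(result.get("v1.0.1")); // avgLatencyMs: 500 across 2 calls
```

Once calls are bucketed this way, a regression shows up as a worse average for the newer version.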

Prompts

4 prompts tracked

Prompt | Source App | Versions | Calls | Avg Latency | Total Cost | Last Called
summarizer | prod-api | v1.0.1, v1.0.0 | 1,420 | 4.2s | $1.14 | 2m ago
classifier | prod-api | v2.0.0 | 890 | 1.1s | $0.94 | 14m ago
doc-extractor | | v1.0.0 | 312 | 8.1s | $1.62 | 1h ago
chat-agent | staging | v3.0.0 | 225 | 2.3s | $0.22 | 3h ago

Expanded row (summarizer): v1.0.1 active (latest), v1.0.0 · avg cost/call $0.0008
Cost Control

Hard Caps, Zero Surprises

Track spend per prompt and set hard monthly caps. When the limit is hit, calls stop gracefully — no overages, no bill shock.

  • Cost per prompt breakdown
  • Monthly hard caps
  • Projected spend forecast

Cost Analytics

30-day spend: $3.92 (2,847 calls)
Avg per call: $0.0014 (3 models)
Daily burn rate: $0.13/day (this month avg)
Proj. monthly: $3.92 (18 days left)

[Chart: Daily spend, last 30 days]

Cost by prompt version

Prompt | Version | Calls | Total cost | Avg / call | % of spend
doc-extractor | v1.0.0 | 312 | $1.62 | $0.0052 | 42%
summarizer | v1.0.1 | 1,420 | $1.14 | $0.0008 | 29%
classifier | v2.0.0 | 890 | $0.94 | $0.0011 | 24%
chat-agent | v3.0.0 | 225 | $0.22 | $0.0010 | 6%
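The "% of spend" column is just each prompt's share of the 30-day total. A quick sketch using the table's own numbers:

```typescript
// 30-day spend per prompt, taken from the table above.
const spendByPrompt: Record<string, number> = {
  "doc-extractor": 1.62,
  summarizer: 1.14,
  classifier: 0.94,
  "chat-agent": 0.22,
};

// Total spend and one prompt's share of it, rounded to whole percent.
const total = Object.values(spendByPrompt).reduce((a, b) => a + b, 0);
const share = Math.round((spendByPrompt["summarizer"] / total) * 100);
console.log(`total $${total.toFixed(2)}, summarizer ${share}% of spend`); // → total $3.92, summarizer 29% of spend
```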

Three Lines of Code

Add Promptive to any existing project in under a minute.

TypeScript

// 1. Install

npm install @promptive/sdk

// 2. Wrap any LLM call

import { Promptive } from "@promptive/sdk";
import OpenAI from "openai";

const promptive = new Promptive({
  apiKey: "YOUR_API_KEY",
});
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// 3. Tagged with a prompt ID and version, every call is logged automatically
const response = await promptive.wrap({
  promptId: "my-prompt",
  version: "v1",
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this document." }],
  }),
});

Common Questions

Everything you need to know before getting started.

What is Promptive?

Promptive is an LLM observability tool. Add three lines to your app and every prompt call gets logged — latency, token counts, cost, input, output — all in a searchable dashboard.

How do I set it up?

Install the SDK with npm install @promptive/sdk, create a project to get an API key, then wrap your LLM calls with promptive.wrap(). That's it — no proxy required.

Does it work with my provider?

Yes. Promptive works with any provider — OpenAI, Anthropic, Google, Mistral, local models. You pass the call result directly, so there's no SDK lock-in.

How is this different from logging to the console?

Console logs vanish. Promptive persists every call with structured metadata, lets you diff prompt versions, track cost trends, and replay requests with copy-as-curl, and alerts you when costs spike.

What happens when I hit the free-tier limit?

Your app keeps running — only logging stops for the rest of the month. No surprise shutdowns, no broken production calls. Upgrade to Builder for 25,000 calls/month.