Stemma Docs

Stemma is a lightweight LLM observability SDK. Log every call, track prompt versions, and monitor cost and latency — in three lines of code.

Quick Start

Install the SDK:

```bash
npm install @stemma/sdk
```

Initialize with your API key (found in your dashboard under Projects):

```typescript
import { Stemma } from "@stemma/sdk";

const stemma = new Stemma({
  apiKey: process.env.STEMMA_API_KEY,  // from your project page
});
```

Wrap any LLM call with stemma.wrap() — model, tokens, and latency are captured automatically:

```typescript
const response = await stemma.wrap({
  promptId: "my-prompt",
  version:  "v1",
  input:    messages,
  call:     () => openai.chat.completions.create({ model: "gpt-4o", messages }),
});
```

That's it. Your call will appear in the dashboard within seconds.

SDK Reference

new Stemma(config) accepts:

apiKey (string, required)

Your project API key from the dashboard.

silent (boolean)

If true, suppresses console warnings when logging fails. Default: false.
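For example, to keep logging failures out of your console output (a sketch using only the fields documented above):

```typescript
import { Stemma } from "@stemma/sdk";

// Suppress console warnings when log delivery fails, e.g. in CI or tests.
const stemma = new Stemma({
  apiKey: process.env.STEMMA_API_KEY,
  silent: true,
});
```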

stemma.wrap(params) accepts:

promptId (string, required)

Identifier for this prompt — e.g. 'summarizer'. Groups all calls together in the dashboard.

version (string, required)

Version label — e.g. 'v1', 'v2-concise'. Used for side-by-side comparisons.

call (() => Promise<T>, required)

The LLM call to execute. Model, tokens, and latency are extracted automatically.

input (unknown)

The messages array sent to the LLM. Stored as JSON.

metadata (Record<string, string | number | boolean>)

Optional key-value pairs — e.g. { userId, environment }.

Return value: the raw LLM response, unmodified. Logging is fire-and-forget and never throws.
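The fire-and-forget guarantee can be illustrated with a small self-contained sketch. This is not the SDK's internals; wrapSketch and the injected log function are hypothetical stand-ins that show the pattern: latency is measured around the call, the log write is detached, and a logging failure never affects the returned response.

```typescript
// Hypothetical sketch of a wrap()-style helper (not the actual SDK code).
type LogEntry = { promptId: string; version: string; latencyMs: number };

async function wrapSketch<T>(
  params: { promptId: string; version: string; call: () => Promise<T> },
  log: (entry: LogEntry) => Promise<void>,
): Promise<T> {
  const start = Date.now();
  const result = await params.call(); // errors from the LLM call itself still propagate
  const entry: LogEntry = {
    promptId: params.promptId,
    version: params.version,
    latencyMs: Date.now() - start,
  };
  log(entry).catch(() => {}); // fire-and-forget: a failed log write never rejects
  return result; // raw response, unmodified
}

// Usage with a logger that always fails: the wrapped call still succeeds.
const out = await wrapSketch(
  { promptId: "demo", version: "v1", call: async () => "ok" },
  async () => { throw new Error("network down"); },
);
console.log(out); // "ok"
```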

OpenAI Integration

Wrap openai.chat.completions.create:

```typescript
import OpenAI from "openai";
import { Stemma } from "@stemma/sdk";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const stemma = new Stemma({ apiKey: process.env.STEMMA_API_KEY });

const messages = [{ role: "user", content: "Summarize this: ..." }];

const response = await stemma.wrap({
  promptId: "summarizer",
  version:  "v1",
  input:    messages,
  call:     () => openai.chat.completions.create({ model: "gpt-4o", messages }),
});

const summary = response.choices[0].message.content;
```

Anthropic Integration

Wrap anthropic.messages.create:

```typescript
import Anthropic from "@anthropic-ai/sdk";
import { Stemma } from "@stemma/sdk";

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const stemma    = new Stemma({ apiKey: process.env.STEMMA_API_KEY });

const messages = [{ role: "user", content: text }];

const response = await stemma.wrap({
  promptId: "classifier",
  version:  "v1",
  input:    messages,
  call:     () => anthropic.messages.create({
    model:      "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages,
  }),
});

const label = response.content[0].text;
```

Metadata & Custom Props

Attach any key-value pairs via the metadata field. Values must be string | number | boolean.

```typescript
await stemma.wrap({
  promptId: "summarizer",
  version:  "v1",
  input:    messages,
  call:     () => openai.chat.completions.create({ model: "gpt-4o", messages }),
  metadata: { userId: "user_abc123", environment: "production" },
});
```

Metadata is stored with every log entry and shown in the log detail panel in the dashboard. You can also filter logs by metadata values on the Logs page.