The API exposes three nested concepts. Knowing how they relate makes the endpoints click.
universe (you define it once)

   │ has many

history rows (one per day per engine — daily aggregate)

   │ derived from

prompts (one per prompt-id per day per engine — raw scrape data)
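
If it helps to see the hierarchy as data, here is a minimal sketch of the three concepts as client-side Python types. Only the fields documented on this page (name, website, competitors, prompts, day, engine, response_text, the citation URLs) are real; everything else, including the id fields, is an illustrative assumption, not the API schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Universe:
    # Defined once in the dashboard; the API is read-only for it.
    id: str                 # assumption: identifier shape not documented here
    name: str
    website: str
    competitors: list[str]  # optional competitor domains
    prompts: list[str]      # natural-language queries to track

@dataclass
class HistoryRow:
    # One per (universe, day, engine): the daily aggregate the dashboard charts.
    universe_id: str
    day: date               # a UTC calendar day, not an instant
    engine: str             # e.g. "chatgpt"

@dataclass
class PromptRow:
    # One per prompt per day per engine: the raw scrape.
    universe_id: str
    day: date
    engine: str
    response_text: str      # full AI answer
    citations: list[str]    # cited URLs
```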

Universe

A universe is a configuration: a website you want to track + a list of prompts to send to AI assistants + (optionally) a list of competitor domains. You create universes from the dashboard; the API is read-only for them. Endpoints: GET /universes.
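
A minimal sketch of listing universes, assuming a placeholder base URL and bearer-token auth (both illustrative; substitute whatever your account actually uses):

```python
import requests

BASE_URL = "https://api.example.com"  # assumption: placeholder host
API_TOKEN = "YOUR_TOKEN"              # assumption: auth scheme may differ

def list_universes() -> list[dict]:
    """Fetch every universe you have defined (read-only via the API)."""
    resp = requests.get(
        f"{BASE_URL}/universes",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()
```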

History row

Every day a universe is active, the daily prompts pipeline:
  1. Sends every configured prompt to every selected AI engine
  2. Captures the answer + cited URLs in the prompts table (one row per prompt)
  3. Aggregates everything into a single history row per (universe, day, engine), which is what the dashboard charts are built from
History rows for past days never change, but re-running the same prompts on the same day overwrites that day's row (an UPSERT keyed on (user_id, universe_id, day, engine)). Endpoints:
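
To make the overwrite semantics concrete: if you mirror history rows into your own store, key them the same way and let the last write win. A sketch (the mentions field is made up for illustration):

```python
from datetime import date

# Local mirror keyed exactly like the server-side UPSERT.
history: dict[tuple[str, str, date, str], dict] = {}

def upsert(row: dict) -> None:
    """Re-running a day's prompts replaces that day's row."""
    key = (row["user_id"], row["universe_id"], row["day"], row["engine"])
    history[key] = row  # last write wins

upsert({"user_id": "u1", "universe_id": "uv1",
        "day": date(2026, 4, 27), "engine": "chatgpt", "mentions": 3})
upsert({"user_id": "u1", "universe_id": "uv1",
        "day": date(2026, 4, 27), "engine": "chatgpt", "mentions": 5})
assert len(history) == 1  # the second run overwrote the first
```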

Prompt

A prompt is a single natural-language query you want tracked. The prompts table stores one row per prompt per day per engine with the full AI answer (response_text) and the citation URL arrays. Use this table when you want raw data — the actual text the AI returned, which exact URLs it cited, how long it took, what model handled it. Endpoints: GET /universes/{id}/prompts.
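
A sketch of pulling the raw rows. response_text is documented; "citations" as the key for the URL arrays, plus the base URL and auth, are assumptions:

```python
import requests

BASE_URL = "https://api.example.com"  # assumption: placeholder host
API_TOKEN = "YOUR_TOKEN"              # assumption

def get_prompts(universe_id: str) -> list[dict]:
    """Raw per-prompt scrape data for one universe."""
    resp = requests.get(
        f"{BASE_URL}/universes/{universe_id}/prompts",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

for row in get_prompts("uv1"):
    print(row["response_text"][:80], row.get("citations"))
```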

Day boundaries and timezones

Days are stored as DATE in Postgres, not as instants — they refer to UTC calendar days. A prompt scraped at 2026-04-27T23:30:00 UTC lands in the 2026-04-27 history row; the same scrape at 2026-04-28T00:30:00 UTC lands in 2026-04-28. If you’re charting trends in a different timezone, do the conversion client-side using the day field — don’t try to time-shift via API parameters.
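
The rule is simply "a scrape belongs to its UTC date". The two timestamps above, in code:

```python
from datetime import datetime, timezone

def utc_day(ts: datetime) -> str:
    """The history day a scrape lands in: its UTC calendar date."""
    return ts.astimezone(timezone.utc).date().isoformat()

print(utc_day(datetime(2026, 4, 27, 23, 30, tzinfo=timezone.utc)))  # 2026-04-27
print(utc_day(datetime(2026, 4, 28, 0, 30, tzinfo=timezone.utc)))   # 2026-04-28
```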

Engines

Today the active engines are:
Engine    Status
chatgpt   Production (Bright Data scraper, default)
gemini    Available but not actively scraping
grok      Available but not actively scraping
claude    Available but not actively scraping
Universes that only have chatgpt data will return empty results when filtered by ?engine=gemini etc. — that’s not a bug, just no data.
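
A sketch of handling that case. Applying ?engine= to the prompts endpoint, plus the base URL and auth, are assumptions here:

```python
import requests

BASE_URL = "https://api.example.com"  # assumption: placeholder host
API_TOKEN = "YOUR_TOKEN"              # assumption

def prompts_for_engine(universe_id: str, engine: str) -> list[dict]:
    resp = requests.get(
        f"{BASE_URL}/universes/{universe_id}/prompts",
        params={"engine": engine},  # ?engine=gemini etc.
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

rows = prompts_for_engine("uv1", "gemini")
if not rows:
    print("no gemini data yet (expected, not an error)")
```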

Where data comes from (one step deeper)

If you’re integrating tightly:
Where                  What
prompts table          Raw per-prompt scrape: response_text, citations, timestamps. Written by the BD ChatGPT consumer (and Cloro, the legacy in-process consumer).
history table          Daily aggregate built from completed prompts. Written by history_insert.sql running inside the Go backend (history_poller.go ticks every 10s).
query_universe table   The universe definition (name, website, competitors, prompts, schedule). Edited via the dashboard.
The API exposes the first two as documented above; the third surfaces only as metadata via GET /universes.