Universe
A universe is a configuration: a website you want to track, a list of prompts to send to AI assistants, and (optionally) a list of competitor domains. You create universes from the dashboard; the API is read-only for them. Endpoints: GET /universes.
History row
Every day a universe is active, the daily prompts pipeline:
- Sends every configured prompt to every selected AI engine
- Captures the answer + cited URLs in the prompts table (one row per prompt)
- Aggregates everything into a single history row per (universe, day, engine) — that’s what the dashboard charts off (the full unique key is (user_id, universe_id, day, engine)).
Endpoints:
- GET /universes/{id}/history — full row, every column
- GET /universes/{id}/metrics/{metric} — one column at a time
- GET /universes/{id}/tabs/{tab} — bundle of scalar columns by dashboard tab
Prompt
A prompt is a single natural-language query you want tracked. The prompts table stores one row per prompt per day per engine with the full AI answer (response_text) and the citation URL arrays.
Use this table when you want raw data — the actual text the AI returned,
which exact URLs it cited, how long it took, what model handled it.
Endpoints: GET /universes/{id}/prompts.
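When you do want the raw rows, a small helper can tally which domains the AI cited. The row shape here (a `citations` list of URLs alongside `response_text`) is an assumption based on the description above; adjust the field names to the real payload:

```python
from collections import Counter
from urllib.parse import urlparse

def citation_domains(prompt_rows: list[dict]) -> Counter:
    """Count how often each domain appears in the citation URL arrays."""
    tally: Counter = Counter()
    for row in prompt_rows:
        for url in row.get("citations", []):  # assumed field name
            tally[urlparse(url).netloc] += 1
    return tally

# Hypothetical rows in the shape GET /universes/{id}/prompts might return:
rows = [
    {"response_text": "…", "citations": ["https://example.com/a",
                                         "https://docs.example.com/b"]},
    {"response_text": "…", "citations": ["https://example.com/c"]},
]
print(citation_domains(rows).most_common(1))  # [('example.com', 2)]
```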
Day boundaries and timezones
Days are stored as DATE in Postgres, not as instants — they refer to
UTC calendar days. A prompt scraped at 2026-04-27T23:30:00 UTC
lands in the 2026-04-27 history row; the same scrape at
2026-04-28T00:30:00 UTC lands in 2026-04-28.
If you’re charting trends in a different timezone, do the conversion
client-side using the day field — don’t try to time-shift via API
parameters.
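The bucketing rule can be sketched in Python: `history_day` reproduces the two examples from the text, and `local_label` is one possible client-side relabeling convention (a sketch, not part of the API):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def history_day(scraped_at: datetime) -> str:
    """The history-row `day` a scrape lands in: its UTC calendar date."""
    return scraped_at.astimezone(timezone.utc).date().isoformat()

# The two examples from the text:
a = datetime(2026, 4, 27, 23, 30, tzinfo=timezone.utc)
b = datetime(2026, 4, 28, 0, 30, tzinfo=timezone.utc)
print(history_day(a))  # 2026-04-27
print(history_day(b))  # 2026-04-28

def local_label(day: str, tz: str) -> str:
    """One client-side convention: label a UTC day by the local date at
    which that UTC day began. Done in the client, never via API params."""
    midnight_utc = datetime.fromisoformat(day).replace(tzinfo=timezone.utc)
    return midnight_utc.astimezone(ZoneInfo(tz)).date().isoformat()
```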
Engines
Today the active engines are:

| Engine | Status |
|---|---|
| chatgpt | Production (Bright Data scraper, default) |
| gemini | Available but not actively scraping |
| grok | Available but not actively scraping |
| claude | Available but not actively scraping |
A universe with only chatgpt data will return empty results when
filtered by ?engine=gemini etc. — that’s not a bug, just no data.
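When filtering client-side, it helps to distinguish "engine exists but has no data yet" from a typo in the engine name. A defensive sketch, with the row shape assumed:

```python
# Per the table above: only chatgpt is scraping today.
ACTIVE_ENGINES = {"chatgpt"}
KNOWN_ENGINES = {"chatgpt", "gemini", "grok", "claude"}

def rows_for_engine(history_rows: list[dict], engine: str = "chatgpt") -> list[dict]:
    """Filter history rows by engine; reject names outside the known set."""
    if engine not in KNOWN_ENGINES:
        raise ValueError(f"unknown engine: {engine!r}")
    # An empty result for a known-but-inactive engine is expected, not an error.
    return [r for r in history_rows if r.get("engine") == engine]

rows = [{"day": "2026-04-27", "engine": "chatgpt"}]  # hypothetical row shape
print(rows_for_engine(rows, "gemini"))  # [] -- not a bug, just no data
```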
Where data comes from (one step deeper)
If you’re integrating tightly:

| Where | What |
|---|---|
| prompts table | Raw per-prompt scrape — response_text, citations, timestamps. Written by the BD ChatGPT consumer (and Cloro, the legacy in-process consumer). |
| history table | Daily aggregate built from completed prompts. Written by history_insert.sql running inside the Go backend (history_poller.go ticks every 10s). |
| query_universe table | The universe definition (name, website, competitors, prompts, schedule). Edited via the dashboard. |
Endpoints: GET /universes.
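For intuition, here is an illustrative Python analogue of the roll-up that history_insert.sql performs. The real aggregate has far more columns, and the field names below are assumptions, not the actual schema:

```python
from collections import defaultdict

def aggregate_history(prompt_rows: list[dict]) -> list[dict]:
    """Roll completed prompt rows up into one history row per
    (universe_id, day, engine) bucket -- the grouping the Go backend's
    poller applies every tick, sketched with invented column names."""
    buckets: dict[tuple, list[dict]] = defaultdict(list)
    for row in prompt_rows:
        buckets[(row["universe_id"], row["day"], row["engine"])].append(row)
    history = []
    for (universe_id, day, engine), group in buckets.items():
        history.append({
            "universe_id": universe_id,
            "day": day,
            "engine": engine,
            "prompt_count": len(group),
            "cited_urls": sorted({u for r in group
                                  for u in r.get("citations", [])}),
        })
    return history
```

Two prompt rows sharing a (universe_id, day, engine) key collapse into a single history row, which is why the dashboard can chart one point per day per engine.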