The prompts table stores one row per (universe, day, engine, prompt_id). Every endpoint that returns prompt data (/universes/{id}/prompts, /universes/{id}/prompts/{prompt_id}, and /prompts/{id}) returns the same column set documented below.
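If you are scripting against these endpoints, a minimal fetch might look like the Python sketch below. The base URL, bearer-token auth, and universe id are placeholders, not values from this documentation:

```python
import requests

BASE = "https://api.example.com"  # placeholder: substitute your actual API host
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder: your auth scheme may differ
universe_id = "your-universe-id"  # placeholder

# All three prompt endpoints return the same column set, so one
# parser covers every response shape.
resp = requests.get(f"{BASE}/universes/{universe_id}/prompts", headers=HEADERS)
resp.raise_for_status()
rows = resp.json()
```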
Identity
Primary key. Stable across re-runs only when the underlying (universe, day, engine, prompt_id) row is upserted in place; if a row is deleted and re-created, the UUID changes. Example: "a7be38b9-6582-4f33-a239-7045b27f7002"

UTC calendar day this row aggregates. Format YYYY-MM-DD. Example: "2026-04-27"

Owner of the universe — same as query_universe.user_id.

The universe this prompt belongs to. FK to query_universe.id.

Background job that produced this row. Multiple prompts share one job_id per run.

1-based position of this prompt within the universe. Stable across days — prompt_id=5 on 2026-04-27 is the same prompt as prompt_id=5 on 2026-04-26. Example: 1
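Because prompt_id is stable across days, grouping rows on it lets you follow a single prompt over time. A sketch, assuming the calendar-day column is exposed under the key date (check your payload for the exact name):

```python
from collections import defaultdict

# Group rows by prompt_id to follow one prompt across days.
# Assumption: the day column is exposed under the key "date".
by_prompt = defaultdict(dict)
for row in rows:
    by_prompt[row["prompt_id"]][row["date"]] = row

history = by_prompt[5]  # prompt_id=5 refers to the same prompt every day
```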
Request inputs
The natural-language query that was sent to the AI assistant. Example: "best AI visibility tool for brands"

AI assistant the prompt was run against — one of chatgpt, gemini, grok, claude. Lowercase, case-sensitive.

Scraper pipeline that produced the row. Bright Data values: BD_chatgpt, BD_gemini, BD_grok, BD_perplexity. Empty/null for legacy in-process Cloro rows. See Engines.

Universe's website object captured at run time — typically {url, location, brand_tokens?}. Example: {"url": "https://verseodin.com/", "location": "us"}

Competitor URL list captured at run time — JSON array. Example: [{"url": "https://semrush.com/"}, {"url": "https://tryprofound.com/"}]

Internal scheduling priority. 0 = normal; higher values run sooner. Set to 1000 by the dashboard's "AI agent" generator for agent-created universes.
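Bright Data pipeline values all share the BD_ prefix, so splitting rows by provenance is a one-liner. A sketch, assuming the pipeline column is exposed under the key engine, matching the engine component of the table key:

```python
# Split rows by scrape pipeline. Assumption: the pipeline column is
# exposed as "engine"; legacy Cloro rows leave it empty/null.
bright_data_rows = [r for r in rows if (r.get("engine") or "").startswith("BD_")]
legacy_cloro_rows = [r for r in rows if not r.get("engine")]
```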
Scrape provenance
Underlying scraping provider, e.g. brightdata. Useful when cross-referencing with the Bright Data console. Example: "brightdata"

How many scrape attempts the row consumed before reaching its terminal state. Higher numbers indicate retries — useful for spotting flaky prompts.

Bright Data snapshot identifier — the same string you'd see in their dashboard. Example: "sd_mogvqssus24jfei5h"
Lifecycle
pending | processing | completed | failed. The terminal states are completed and failed. New rows start as pending; the consumer flips them to processing while running and to completed/failed when done.

Failure reason — only set when status='failed'. Common values include "empty_answer_after_retries", "snapshot_canceled", "download_parse_error".

When the row was first inserted by the daily prompts creator.

When the consumer claimed the row for processing. Null while status is still pending.

When the row reached its terminal state (completed or failed). Null while still processing.
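This status machine suggests a simple client pattern: poll the per-prompt endpoint until the row leaves pending/processing. A sketch reusing the BASE and HEADERS placeholders from the first example:

```python
import time

import requests

TERMINAL = {"completed", "failed"}

def wait_for_prompt(universe_id: str, prompt_id: int, interval: float = 30.0) -> dict:
    """Poll one prompt row until it reaches a terminal state."""
    while True:
        resp = requests.get(
            f"{BASE}/universes/{universe_id}/prompts/{prompt_id}",
            headers=HEADERS,
        )
        resp.raise_for_status()
        row = resp.json()
        if row["status"] in TERMINAL:
            return row  # terminal: completed or failed
        time.sleep(interval)  # still pending/processing
```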
Response data — the AI answer + citations
Full AI answer body as returned by the assistant. Plain text with markdown formatting preserved (the AI's own bullet lists, headings, etc.). Empty/null while status is pending or processing; populated when status='completed'. Example (truncated):

Every URL the AI mentioned in its answer, in order of appearance. May contain duplicates (the AI may cite the same URL multiple times). Each item is a fully-qualified URL string. Example: ["https://verseodin.com/", "https://semrush.com/blog/...", "https://semrush.com/blog/..."]

De-duplicated appeared_links, preserving first-appearance order.

Links from scrape "run 1". Bright Data fires one run per snapshot, so for BD-produced rows this equals appeared_links. Cloro's legacy double-run architecture used both _run1 and _run2.

Links from scrape run 2 (legacy Cloro). Always empty [] for Bright Data rows.

URLs from the AI's answer that point at your domain — matched against the universe's website.url. Subset of appeared_links_unique filtered to only your own domain. Example: ["https://verseodin.com/blog/ai-visibility"]

URLs from the AI's answer that point at any competitor domain — matched against the universe's competitor list. Example: ["https://semrush.com/blog/ai-overviews"]
Stats — per-prompt counts
Total cited URLs in this answer (yours + competitor + uncategorised). Equivalent to len(appeared_links_unique).

How many citations point to your domain. Equivalent to len(my_citations).

Times your brand name was mentioned in the answer text — independent of citations. Counts substring matches against the universe's brand_tokens list.
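The two citation counts follow directly from the link fields, and the brand-mention count can be approximated the same way. A sketch, assuming the answer body is exposed under the key answer and that matching is plain case-sensitive substring counting:

```python
# Citation counts are pure lengths of the documented link fields.
total_citations = len(row["appeared_links_unique"])
my_citation_count = len(row["my_citations"])

# Brand mentions: substring matches of each brand token in the answer.
# Assumptions: answer body under key "answer", tokens under
# website.brand_tokens, case-sensitive matching.
answer = row.get("answer") or ""
tokens = row["website"].get("brand_tokens") or []
brand_mentions = sum(answer.count(token) for token in tokens)
```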
Model metadata
Model identifier the AI returned — e.g. gpt-4o, gpt-5-3, gemini-2.5-pro. Useful when comparing answers across model versions.

Whether the AI invoked web search while answering. Search-backed answers tend to cite more sources; useful for filtering.

Cost of producing this row in thousandths of a cent, so 1234 means $0.01234. Internal accounting field — exposed so customers running their own cost reconciliation can compare it against their billing.
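Converting the cost unit to dollars is a fixed division, since one unit is a thousandth of a cent ($0.00001):

```python
def cost_to_dollars(cost_units: int) -> float:
    """Convert thousandths of a cent to dollars: 1234 -> 0.01234."""
    return cost_units / 100_000
```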