The prompts table stores one row per (universe, day, engine, prompt_id). Every endpoint that returns prompt data (/universes/{id}/prompts, /universes/{id}/prompts/{prompt_id}, /prompts/{id}) returns the same column set documented below.
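
For example, a single row can be fetched by UUID using the same bearer-token auth shown in the recipes at the bottom of this page:

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/prompts/a7be38b9-6582-4f33-a239-7045b27f7002"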

Identity

id
string (uuid)
required
Primary key. Stable across re-runs only when the underlying (universe, day, engine, prompt_id) row is upserted in place; if a row is deleted and re-created, the UUID changes.
Example: "a7be38b9-6582-4f33-a239-7045b27f7002"
day
string (date)
required
UTC calendar day this row aggregates. Format YYYY-MM-DD.
Example: "2026-04-27"
user_id
string (uuid)
required
Owner of the universe — same as query_universe.user_id.
universe_id
string (uuid)
required
The universe this prompt belongs to. FK to query_universe.id.
job_id
string (uuid)
Background job that produced this row. Multiple prompts share one job_id per run.
prompt_id
integer
required
1-based position of this prompt within the universe. Stable across days — prompt_id=5 on 2026-04-27 is the same prompt as prompt_id=5 on 2026-04-26.
Example: 1
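
Because the integer position is stable, day-over-day comparison is just two fetches. A sketch (assuming the day query parameter used in the recipes below also applies to the single-prompt endpoint):

# The same logical prompt on two consecutive days
curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts/5?day=2026-04-27"
curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts/5?day=2026-04-26"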

Request inputs

prompt_text
string
required
The natural-language query that was sent to the AI assistant.
Example: "best AI visibility tool for brands"
engine
string
required
AI assistant the prompt was run against — one of chatgpt, gemini, grok, claude. Lowercase, case-sensitive.
engine_account
string
Scraper pipeline that produced the row. Bright Data values: BD_chatgpt, BD_gemini, BD_grok, BD_perplexity. Empty/null for legacy in-process Cloro rows. See Engines.
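
The recipes at the bottom of this page only show status, day, and limit query parameters, so splitting a pulled page by engine is easiest client-side. A sketch (assuming the list response wraps rows in a .data array, as the /days recipe does):

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?status=completed&limit=1000" \
  | jq '.data | group_by(.engine) | map({engine: .[0].engine, rows: length})'
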
website
object
Universe’s website object captured at run time — typically {url, location, brand_tokens?}.
Example: {"url": "https://verseodin.com/", "location": "us"}
competitor_websites
array
Competitor URL list captured at run time — JSON array.
Example: [{"url": "https://semrush.com/"}, {"url": "https://tryprofound.com/"}]
priority_level
integer
Internal scheduling priority. 0 = normal, higher = run sooner. Set by the dashboard’s “AI agent” generator at 1000 for agent-created universes.

Scrape provenance

provider
string
Underlying scraping provider, e.g. brightdata. Useful when cross-referencing with the Bright Data console.
Example: "brightdata"
attempts
integer
How many scrape attempts the row consumed before reaching its terminal state. Higher numbers indicate retries — useful for spotting flaky prompts.
snapshot_id
string
Bright Data snapshot identifier — the same string you’d see in their dashboard.
Example: "sd_mogvqssus24jfei5h"

Lifecycle

status
string
required
pending | processing | completed | failed. The terminal states are completed and failed. New rows start as pending; the consumer flips them to processing while running and to completed/failed when done.
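
That lifecycle makes polling straightforward: re-fetch until the row leaves pending/processing. A minimal sketch (assuming the by-UUID endpoint returns status at the top level of its response):

# Poll one row until it reaches a terminal state
while :; do
  s=$(curl -sH "Authorization: Bearer $KEY" \
    "https://verseodin.com/api/v1/prompts/$PROMPT_UUID" | jq -r '.status')
  case "$s" in completed|failed) break ;; esac
  sleep 30
done
echo "terminal status: $s"
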
error_text
string
Failure reason — only set when status='failed'. Common values include "empty_answer_after_retries", "snapshot_canceled", "download_parse_error".
created_at
string (datetime)
required
When the row was first inserted by the daily prompts creator.
started_at
string (datetime)
When the consumer claimed the row for processing. Null while status is still pending.
finished_at
string (datetime)
When the row reached its terminal state (completed or failed). Null while still processing.
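
started_at and finished_at together give per-row processing time. A sketch for completed rows (assuming a .data wrapper and timestamps that jq's fromdate can parse):

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?status=completed" \
  | jq '.data[] | {prompt_id, seconds: ((.finished_at | fromdate) - (.started_at | fromdate))}'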

Response data — the AI answer + citations

response_text
string
Full AI answer body as returned by the assistant. Plain text with markdown formatting preserved (the AI’s own bullet lists, headings, etc.). Empty/null while status is pending or processing; populated when status='completed'.
Example (truncated):
There isn't a single "best" AI visibility tool for brands —
because the right choice depends on your team size, budget, and
whether you want insights vs. execution. Here's a grounded
breakdown 👇 ...
appeared_links
array (string)
Every URL the AI mentioned in its answer, in order of appearance. May contain duplicates (the AI may cite the same URL multiple times). Each item is a fully-qualified URL string.
Example: ["https://verseodin.com/", "https://semrush.com/blog/...", "https://semrush.com/blog/..."]
appeared_links_unique
array (string)
De-duplicated appeared_links, preserving first-appearance order.
appeared_links_run1
array (string)
Links from scrape “run 1”. Bright Data fires one run per snapshot, so for BD-produced rows this equals appeared_links. Cloro’s legacy double-run architecture used both _run1 and _run2.
appeared_links_run2
array (string)
Links from scrape run 2 (legacy Cloro). Always empty [] for Bright Data rows.
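
If you ever need to rebuild appeared_links_unique from appeared_links yourself, note that jq's unique sorts its output; preserving first-appearance order takes a reduce instead. A sketch:

jq '.appeared_links | reduce .[] as $u ([]; if index($u) then . else . + [$u] end)'
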
my_citations
array (string)
URLs from the AI’s answer that point at your domain — matched against the universe’s website.url. Subset of appeared_links_unique filtered to only your own domain.
Example: ["https://verseodin.com/blog/ai-visibility"]
competitor_citations
array (string)
URLs from the AI’s answer that point at any competitor domain — matched against the universe’s competitor list.
Example: ["https://semrush.com/blog/ai-overviews"]
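
A common pull is "prompts where a competitor was cited but you were not". A sketch (assuming a .data wrapper):

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?status=completed" \
  | jq '[.data[] | select((.competitor_citations | length) > 0 and (.my_citations | length) == 0)
         | {prompt_id, prompt_text, competitor_citations}]'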

Stats — per-prompt counts

total_citations_count
integer
Total cited URLs in this answer (yours + competitor + uncategorised). Equivalent to len(appeared_links_unique).
my_domain_citations_count
integer
How many citations to your domain. Equivalent to len(my_citations).
my_brand_mentions_count
integer
Times your brand name was mentioned in the answer text — independent of citations. Counts substring matches against the universe’s brand_tokens list.
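
Summed across a day, these counts give a quick citation-share snapshot. A sketch (assuming a .data wrapper):

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?day=2026-04-27&status=completed" \
  | jq '.data | {
      total:    (map(.total_citations_count) | add),
      mine:     (map(.my_domain_citations_count) | add),
      mentions: (map(.my_brand_mentions_count) | add)
    }'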

Model metadata

model_used
string
Model identifier the AI returned — e.g. gpt-4o, gpt-5-3, gemini-2.5-pro. Useful when comparing answers across model versions.
web_search_triggered
boolean
Whether the AI invoked web search while answering. Search-backed answers tend to cite more sources; useful for filtering.
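
To keep only search-backed answers, filter client-side. A sketch (assuming a .data wrapper):

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?status=completed" \
  | jq '[.data[] | select(.web_search_triggered)]'
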
cost_milli_cents
integer
Cost of producing this row in 1/1000 of a cent (so 1234 means $0.01234). Internal accounting field — exposed so customers running their own cost reconciliation can compare against their billing.
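
To convert to dollars, divide by 100,000 (1,000 milli-cents per cent, 100 cents per dollar). A sketch that totals a day's spend (assuming a .data wrapper):

curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?day=2026-04-27" \
  | jq '.data | map(.cost_milli_cents) | add / 100000'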

Filtering recipes

# Only rows that successfully scraped
GET /api/v1/universes/<id>/prompts?status=completed

# Only rows where your brand was mentioned at least once
# (filter client-side after pulling — there's no my_brand_mentions_count
# query parameter today)
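# One way to do that client-side (a sketch; assumes the list response
# wraps rows in a .data array like the /days endpoint below):
curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/prompts?status=completed&limit=1000" \
  | jq '[.data[] | select(.my_brand_mentions_count > 0)]'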

# Just one prompt's full record by integer position
GET /api/v1/universes/<id>/prompts/1

# Or by UUID if you have it from another endpoint
GET /api/v1/prompts/a7be38b9-6582-4f33-a239-7045b27f7002

# Iterate every prompt over the last 7 days
for day in $(curl -sH "Authorization: Bearer $KEY" \
  "https://verseodin.com/api/v1/universes/<id>/days" \
  | jq -r '.data[:7] | .[]'); do
  curl -sH "Authorization: Bearer $KEY" \
    "https://verseodin.com/api/v1/universes/<id>/prompts?day=$day&status=completed&limit=1000"
done