The /universes/{id}/brands and /universes/{id}/brands/{brand}
endpoints expose a brand-centric view of your dashboard data.
Instead of querying ai_visibility_ranking, then
ai_competitive_analysis, then geo_competitor_analysis and joining
them by brand_name yourself, the API does it for you.
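As a quick sketch of the shape involved, here is a hypothetical brand row. The field names come from the mapping table at the end of this section; the values are illustrative, not real API output.

```python
# Illustrative BrandSummary row. Values are made up; field names follow
# the "Field on BrandSummary" column in the mapping table below.
sample_row = {
    "brand_name": "Semrush",    # display name, casing preserved
    "trust_mentions": 14,       # from ai_visibility_ranking
    "share_of_voice": 18.7,     # % of all brand mentions in the universe
    "visibility_rank": 2,       # 1 = most-mentioned brand
    "ai_coverage_pct": 46.7,    # from ai_competitive_analysis
    "geo_coverage_pct": 33.3,   # from geo_competitor_analysis
    "geo_prompt_count": 10,     # prompts that cited this brand's domain
}

# Your own brand and every competitor share this shape, so one loop
# handles the whole array.
print(sample_row["brand_name"], sample_row["visibility_rank"])
```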
What’s on a brand row
Every entry in the response carries seven fields: the display name, three AI-mention metrics (trust_mentions, share_of_voice, ai_coverage_pct), two GEO-citation metrics (geo_coverage_pct, geo_prompt_count), and a rank (visibility_rank). Your own brand sits in the same array as every competitor.

**brand_name**
Display name as it appears in the underlying history arrays. Preserves casing.
Example: "verseodin.com", "Semrush", "tryprofound.com"

**trust_mentions**
Count of prompts where the AI mentioned this brand by name in its answer. Drawn from ai_visibility_ranking.
Range: 0 to total_prompts for the universe.

**share_of_voice**
This brand's mentions as a percentage of total brand mentions in the universe. The share_of_voice values across all brands sum to ~100% (rounding aside).
Example: 18.7 means this brand accounts for ~19% of brand mentions across the universe.

**visibility_rank**
1-indexed rank among all tracked brands by trust_mentions. 1 is the most-mentioned brand. Ties are broken by alphabetical order of brand_name.
Tip: if your brand's rank goes from 3 → 1 over a week, that's the win you want to track.

**ai_coverage_pct**
Percentage of prompts where the AI mentioned this brand. Drawn from ai_competitive_analysis.coverage_pct.
Difference vs trust_mentions: ai_coverage_pct is the rate (mentions / total_prompts × 100), while trust_mentions is the count.

**geo_coverage_pct**
Percentage of prompts where the AI cited a URL belonging to this brand's domain. Independent of whether the brand was named in the prose. Drawn from geo_competitor_analysis.coverage_pct.

**geo_prompt_count**
Count of prompts where this brand's domain was cited at least once. Drawn from geo_competitor_analysis.prompt_count.

Three things this view answers in one call
"How is my own brand doing?"
"How is competitor X doing?"
Same call, just swap the brand name in the path.
"Show me the leaderboard"
The list endpoint sorts by visibility_rank ASC, so the top of the array is the most-mentioned brand.
Filtering
Both endpoints accept the standard ?day= and ?engine= query params.
If the filters don't pin down a single history row (for example, you omit
engine= and the universe has multiple engines), the most recent
one is used. The response's day + engine fields tell you which
row was actually consulted, so you can detect this.
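A minimal sketch of how the filters compose into request URLs. The base URL is a placeholder and the helper is hypothetical; only the paths and the ?day=/?engine= params come from this doc.

```python
from urllib.parse import urlencode

BASE = "https://api.example.com"  # placeholder, not a documented base URL

def brands_url(universe_id, brand=None, day=None, engine=None):
    """Build a /universes/{id}/brands[/{brand}] URL with optional filters."""
    path = f"{BASE}/universes/{universe_id}/brands"
    if brand is not None:
        path += f"/{brand}"
    params = {}
    if day is not None:
        params["day"] = day        # e.g. "2024-05-01"
    if engine is not None:
        params["engine"] = engine  # hypothetical engine name goes here
    return path + ("?" + urlencode(params) if params else "")

print(brands_url(7))                               # list endpoint, latest row
print(brands_url(7, brand="Semrush", day="2024-05-01"))
```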
When a brand isn’t found
The single-brand endpoint returns 404 not_found when:
- The brand name doesn't appear in any of the three source arrays for the matched history row, OR
- No history row matches the day/engine filters

The list endpoint, by contrast, returns 200 with data: [] and day: null,
engine: null if the filters match no history row at all.
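The two not-found behaviors can be told apart in client code. A sketch, where the dicts stand in for the parsed JSON payload rather than a real client:

```python
def classify_brand_response(status, body):
    """Map a brands-endpoint response to what actually happened."""
    if status == 404:
        # Single-brand endpoint: the brand is absent from all three
        # source arrays, or no history row matched the day/engine filters.
        return "brand_not_found"
    if status == 200 and body.get("data") == [] and body.get("day") is None:
        # List endpoint: the filters matched no history row at all.
        return "no_matching_history_row"
    return "ok"
```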
How this maps onto the raw history columns
If you want to do this synthesis yourself instead of using these endpoints, here are the three source columns:

| Field on BrandSummary | Source column on history |
|---|---|
| trust_mentions, share_of_voice, visibility_rank | ai_visibility_ranking[] |
| ai_coverage_pct | ai_competitive_analysis[].coverage_pct |
| geo_coverage_pct, geo_prompt_count | geo_competitor_analysis[].coverage_pct and .prompt_count |
The endpoints match brand_name case-insensitively across the three arrays.
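Under the mapping above, the DIY synthesis can be sketched roughly as follows. The exact element shapes inside the three history arrays are assumptions; only the array names, column names, and the case-insensitive brand_name match come from this doc.

```python
def synthesize_brand_rows(history):
    """Join the three history arrays into brand rows, matching
    brand_name case-insensitively (first-seen casing is kept)."""
    rows = {}

    def row_for(name):
        return rows.setdefault(name.lower(), {
            "brand_name": name,
            "trust_mentions": 0, "share_of_voice": 0.0, "visibility_rank": None,
            "ai_coverage_pct": 0.0, "geo_coverage_pct": 0.0, "geo_prompt_count": 0,
        })

    for e in history.get("ai_visibility_ranking", []):
        r = row_for(e["brand_name"])
        r["trust_mentions"] = e["trust_mentions"]
        r["share_of_voice"] = e["share_of_voice"]
        r["visibility_rank"] = e["visibility_rank"]
    for e in history.get("ai_competitive_analysis", []):
        row_for(e["brand_name"])["ai_coverage_pct"] = e["coverage_pct"]
    for e in history.get("geo_competitor_analysis", []):
        r = row_for(e["brand_name"])
        r["geo_coverage_pct"] = e["coverage_pct"]
        r["geo_prompt_count"] = e["prompt_count"]

    # List-endpoint order: visibility_rank ascending, rank 1 first;
    # unranked brands (GEO-only) sort last.
    return sorted(rows.values(),
                  key=lambda r: (r["visibility_rank"] is None,
                                 r["visibility_rank"] or 0))
```

Note the design choice: each array contributes only its own columns, so a brand that appears in geo_competitor_analysis but not in ai_visibility_ranking still gets a row, with the AI-mention metrics left at their zero defaults.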