Transparent & Reproducible

How we measure AI visibility

Every score we show is backed by real AI model responses, not estimates. Here's exactly how our scoring works - no black boxes.

System Architecture

50K+ Prompts (200+ categories) → 4 AI Models (parallel queries) → NLP Analysis (entity extraction) → AI Score (0–100 scale)
Data Sources

4 models. Real-time coverage.

ChatGPT (GPT-4o): 42% query share
Claude (Claude 3.5): 24% query share
Gemini (Gemini 2.0): 22% query share
Perplexity (Sonar Pro): 12% query share

Combined coverage: 100%
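
For illustration, the query shares above can be kept in a small configuration map; the variable name and structure below are assumptions made for this sketch, not our production schema.

# Illustrative only: per-model share of the queries we run.
MODEL_QUERY_SHARE = {
    "ChatGPT (GPT-4o)": 0.42,
    "Claude (Claude 3.5)": 0.24,
    "Gemini (Gemini 2.0)": 0.22,
    "Perplexity (Sonar Pro)": 0.12,
}

# The four shares together cover every query we send.
assert abs(sum(MODEL_QUERY_SHARE.values()) - 1.0) < 1e-9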
Pipeline

From prompt to score in 4 steps

01

Prompt Generation

We maintain a library of 50,000+ prompts across 200+ categories, updated weekly to reflect real user search patterns.

~8,000 new prompts/week
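
As a rough sketch of how a prompt library like this can be sampled by category, the snippet below assumes a JSON-lines file with id, category, and text fields; the file layout and field names are illustrative, not our actual storage format.

import json
import random

def sample_prompts(path: str, category: str, k: int = 100) -> list[dict]:
    # Assumed format: one JSON object per line, e.g.
    # {"id": "p-001", "category": "crm-software", "text": "What is the best CRM for startups?"}
    with open(path, encoding="utf-8") as f:
        library = [json.loads(line) for line in f]
    pool = [p for p in library if p["category"] == category]
    return random.sample(pool, min(k, len(pool)))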
02

Multi-Model Querying

Each prompt is sent to all 4 AI models simultaneously using their latest APIs. Temperature is set to 0 for reproducibility.

Temperature: 0.0
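
The sketch below shows the general shape of a temperature-0 query fanned out in parallel. The GPT-4o call uses the public OpenAI Python SDK; the other three providers are left as a placeholder comment rather than invented client calls.

from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def query_gpt4o(prompt: str) -> str:
    # temperature=0 pins the sampling step down for reproducibility
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def query_all_models(prompt: str) -> dict[str, str]:
    # Add the Anthropic, Google, and Perplexity clients here with the same settings.
    callers = {"gpt-4o": query_gpt4o}
    with ThreadPoolExecutor(max_workers=len(callers)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in callers.items()}
        return {name: fut.result() for name, fut in futures.items()}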
03

Response Parsing

Proprietary NLP pipeline extracts brand mentions, context, position, sentiment, and recommendation signals from each response.

< 200ms per response
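
The production parser is proprietary, but a stripped-down sketch of mention extraction looks roughly like this; the cue words and the 120-character context window are illustrative assumptions, not our real signal lists.

import re

RECOMMEND_CUES = ("recommend", "best choice", "top pick")   # illustrative cue words
NEGATIVE_CUES = ("avoid", "downside", "not recommended")    # illustrative cue words

def extract_mentions(response: str, brand: str) -> list[dict]:
    # Record each brand mention with its position and a crude context signal.
    mentions = []
    for match in re.finditer(re.escape(brand), response, flags=re.IGNORECASE):
        window = response[max(0, match.start() - 120): match.end() + 120].lower()
        mentions.append({
            "position": match.start(),
            "recommended": any(cue in window for cue in RECOMMEND_CUES),
            "negative": any(cue in window for cue in NEGATIVE_CUES),
        })
    return mentions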
04

Score Calculation

A weighted scoring algorithm combines all five components into a single 0–100 AI Visibility Score, normalized within the brand's category.

Updated every 24h
Scoring Framework

5 components. 1 score.

The AI Visibility Score (0–100) is a weighted composite of five independent metrics, each measuring a distinct dimension of brand presence in AI-generated answers.

Weight Distribution

Mention Frequency: 30%
Recommendation Strength: 25%
Sentiment Score: 20%
Share of Voice: 15%
Prompt Coverage: 10%
30%
Mention Frequency

How often the brand appears in AI-generated answers for relevant queries

MF = (brand_mentions / total_responses) × 100
25%
Recommendation Strength

Whether the AI positions the brand as a top choice, alternative, or passing mention

RS = Σ(position_weight × mention_context) / n
20%
Sentiment Score

NLP analysis of how positively the AI describes the brand

SS = (positive_signals − negative_signals) / total_signals
15%
Share of Voice

Brand mentions compared to competitors in the same category

SoV = brand_mentions / Σ(all_competitor_mentions) × 100
10%
Prompt Coverage

Breadth of query types that trigger brand mentions across categories

PC = unique_triggering_prompts / total_tracked_prompts × 100
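
Written out as code, the five formulas reduce to simple ratios; the arguments are placeholders for counts produced by the parsing step.

def mention_frequency(brand_mentions: int, total_responses: int) -> float:
    return brand_mentions / total_responses * 100

def recommendation_strength(mentions: list[tuple[float, float]]) -> float:
    # Each tuple is (position_weight, mention_context) for one mention.
    return sum(w * c for w, c in mentions) / len(mentions)

def sentiment_score(positive_signals: int, negative_signals: int, total_signals: int) -> float:
    return (positive_signals - negative_signals) / total_signals

def share_of_voice(brand_mentions: int, all_competitor_mentions: int) -> float:
    return brand_mentions / all_competitor_mentions * 100

def prompt_coverage(unique_triggering_prompts: int, total_tracked_prompts: int) -> float:
    return unique_triggering_prompts / total_tracked_prompts * 100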
Final Score Calculation
AI Score = 0.30×MF + 0.25×RS + 0.20×SS + 0.15×SoV + 0.10×PC

Scores are normalized to 0–100 within each industry category for fair cross-brand comparison.
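
Putting it together, the published score is a single weighted sum of the five components, followed by per-category normalization. The min-max rescaling shown here is an assumption about one common way to normalize within a category, not a statement of our exact procedure.

WEIGHTS = {"MF": 0.30, "RS": 0.25, "SS": 0.20, "SoV": 0.15, "PC": 0.10}

def ai_visibility_score(components: dict[str, float]) -> float:
    # components maps each abbreviation above to its value for one brand
    return sum(WEIGHTS[key] * components[key] for key in WEIGHTS)

def normalize_within_category(raw_scores: dict[str, float]) -> dict[str, float]:
    # Assumed min-max rescaling of raw composite scores to 0-100 within one category.
    lo, hi = min(raw_scores.values()), max(raw_scores.values())
    span = (hi - lo) or 1.0
    return {brand: (score - lo) / span * 100 for brand, score in raw_scores.items()}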

Validation

Statistically rigorous

We validate our methodology against human expert evaluation and statistical benchmarks to ensure accuracy and reliability.

0.94
Inter-rater reliability

Agreement between our automated scoring and human expert evaluation (Cohen's κ)

±1.2
Test-retest stability

Average score deviation when the same prompts are re-run within 24 hours

0.87
Cross-model correlation

Pearson correlation between brand visibility scores across different AI models

200+
Category coverage

Industry categories with statistically significant sample sizes (n > 500 prompts)
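
Readers who want to reproduce these checks on their own labels can use standard statistics libraries; the arrays below are placeholder data, not our validation set.

from scipy.stats import pearsonr               # pip install scipy
from sklearn.metrics import cohen_kappa_score  # pip install scikit-learn

# Paired labels for the same responses: our parser vs. a human expert.
automated = [1, 0, 1, 1, 0, 1]
human     = [1, 0, 1, 0, 0, 1]
kappa = cohen_kappa_score(automated, human)

# Visibility scores for the same brands from two different models.
r, _p = pearsonr([72.0, 55.0, 88.0, 40.0], [70.5, 58.0, 85.0, 43.0])

print(f"Cohen's kappa: {kappa:.2f}, Pearson r: {r:.2f}")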

Reproducibility Guarantee

All AI model queries use temperature 0.0 and deterministic API settings. Every data point is timestamped and stored. Enterprise customers can request raw response data for any score to independently verify our calculations. We believe in trust through transparency.
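
As one illustration of what makes a data point independently verifiable, each stored response can carry the query settings, a content hash, and a timestamp; the field names in this sketch are assumptions, not our actual record schema.

import hashlib
from datetime import datetime, timezone

def make_response_record(model: str, prompt: str, response: str) -> dict:
    # Assumed record format: enough metadata to re-run and verify one data point.
    return {
        "model": model,
        "prompt": prompt,
        "temperature": 0.0,
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }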

See the methodology in action

Enter your brand and get a real AI Visibility Score calculated with the exact methodology described above.