Personas in AI search: Why your audience changes what AI says about you
AI search returns an answer shaped by who’s asking. Here’s why that distinction matters for how you track and improve your brand’s visibility.

Anand-Arnaud Pajaniradjane
CEO & Founder
Try asking ChatGPT which skincare brand a dermatologist would recommend. Then ask which brand a 28-year-old shopping on a budget would choose. You’ll get different answers.
AI processes queries through context, which includes signals about who’s asking: their role, their location, their apparent intent, and sometimes the platform they’re using. The visibility of your products and services in AI search is not a single number; it’s a distribution that shifts with the audience.
Most GEO tools miss this entirely.
What query-level tracking gets right
Tools like Profound and Bluefish are built around query-level tracking, which is the clear starting point. You pick a set of prompts relevant to your category, check whether your brand appears, and monitor changes over time.
It works, up to a point.
What it doesn’t tell you is whether your brand shows up for the right audience. A luxury hotel might rank well when the query is neutral, yet drop out entirely when the simulated user is a business traveler comparing premium options.
The gap is invisible if you’re only tracking the query.
How AI models use persona signals
LLMs are trained on human-generated data. That data reflects how different people talk about products, ask questions, and make decisions. Over time, models develop associations between audiences and recommendations.
Some of these signals are explicit. A user who identifies as a CMO asking about enterprise software will get different recommendations than an individual asking the same question. Some are implicit. The vocabulary someone uses, the platform they’re on, and how they frame a question all shift the model’s response. This means the same query can produce materially different results depending on who appears to be asking it. If your tracking fails to account for that, you’re measuring brand visibility for nobody in particular.
Methodology gap
Here’s an example.
Let’s say you’re tracking the query “best project management tool for marketing teams.” Most tools run that query as-is and record whether your brand appears. One data point.
A persona-based approach runs the same query as several different simulated users: a solo freelancer, a marketing director at a mid-market company, an agency operations lead. Each simulation uses an account or context that reflects that person’s profile rather than a keyword in a system prompt.
The difference in results is often significant. A brand that appears consistently for the freelancer query might not show up at all for the director query. If your tracking uses a single neutral query, you’re averaging over that gap without knowing it exists.
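The contrast between the two approaches can be sketched in a few lines. Everything below is hypothetical: the brand name, the persona list, and the recorded answers are stand-ins for whatever your tooling actually captures.

```python
# Sketch: query-level tracking vs. persona-segmented tracking.
# The brand "TrackerX" and all responses are hypothetical stand-ins.

PERSONAS = ["neutral", "solo freelancer", "marketing director", "agency ops lead"]

# Hypothetical recorded AI answers to the same query, one per simulated persona.
responses = {
    "neutral":            "Try Asana, TrackerX, or Trello.",
    "solo freelancer":    "TrackerX and Notion are popular with freelancers.",
    "marketing director": "Asana and Monday.com suit mid-market teams.",
    "agency ops lead":    "Teams often pick Asana or ClickUp.",
}

def appears(brand: str, answer: str) -> bool:
    """Naive visibility check: does the brand name occur in the answer text?"""
    return brand.lower() in answer.lower()

brand = "TrackerX"

# Query-level tracking: one neutral run, one data point.
single_point = appears(brand, responses["neutral"])

# Persona-based tracking: one data point per simulated audience.
by_persona = {p: appears(brand, responses[p]) for p in PERSONAS}

print(single_point)  # visible in the neutral run...
print(by_persona)    # ...but invisible to the marketing-director segment
```

In this toy data the neutral query suggests the brand is visible, while the persona breakdown shows it missing for one audience entirely, which is exactly the gap a single-query setup averages away.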
How to evaluate your existing generative engine optimization (GEO) tool stack
If you’re using an existing GEO platform, ask these questions:
Does it track the same query across multiple simulated audience types, or only as a single neutral prompt? If it’s the latter, you’re getting one slice of a multidimensional picture.
Does it use actual platform accounts to simulate personas, or does it embed persona context in the system prompt? System prompt personas are weaker signals. AI models respond differently to actual account-level context than to instructions like “pretend you are a 45-year-old CMO.”
Does it account for geographic proxying? A query run from a U.S. IP address can return different results than the same query run from a European one, even for global brands. If your tool isn’t proxying for the audience’s actual location, your data may not match what your audience actually sees.
Does it report persona-level breakdowns, or only aggregate visibility scores? Aggregate scores can mask significant gaps across audience segments.
If the answer to most of these is no, you’re not tracking brand visibility. You’re tracking one arbitrary version of it.
Persona-based tracking enables precision
Persona segmentation allows you to prioritize which audience to target. If your brand shows up consistently for one audience but poorly for another, and that second audience is your actual ICP, you know where to focus.
When you’re considering a messaging change, you can simulate how it performs for specific audience types before publishing. A change that improves visibility for one segment sometimes hurts visibility for another.
You can also set more honest benchmarks. Share of voice measured across a single query type gives you a number. Share of voice measured across your actual target personas gives you a signal you can act on.
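As a toy illustration of how an aggregate score can hide a segment-level gap, assume some hypothetical per-persona appearance counts over an equal number of tracked runs:

```python
# Toy example: aggregate share of voice vs. persona-level breakdown.
# All counts here are hypothetical.

# Out of 50 tracked runs per persona, how often the brand appeared.
appearances = {"freelancer": 45, "marketing director": 5, "agency ops": 25}
runs_per_persona = 50

def share_of_voice(hits: int, total: int) -> float:
    return hits / total

aggregate = share_of_voice(
    sum(appearances.values()), runs_per_persona * len(appearances)
)
per_persona = {p: share_of_voice(h, runs_per_persona) for p, h in appearances.items()}

print(f"aggregate SOV: {aggregate:.0%}")  # 50% -- looks respectable
print(f"per persona:   {per_persona}")
# freelancer 90%, marketing director 10%: if the director is your ICP,
# the aggregate number is hiding the problem.
```

The aggregate lands at a comfortable-looking 50% even though the brand is nearly absent for one of the three segments.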
Scope built this into how we track queries. Every query runs across multiple persona simulations using account-level context. Our outputs are broken down by audience type, so you can pinpoint where you’re winning and where you’re not, and the reasoning behind it.
The investment worth making
Query-level tracking established that AI search visibility is measurable and that changes to content and strategy have measurable effects.
The next step is making sure the data actually reflects customer behavior. The companies that will win in AI search aren’t the ones with the highest aggregate visibility scores. They’re the ones that show up specifically for the customers who are actually looking for them.