Crypto founders we work with usually arrive with one specific frustration around measurement: they can see citations in ChatGPT search and Perplexity, they can feel the lift in inbound demos and lead quality, but they cannot get a clean number on how much pipeline AI search is actually driving.

Standard analytics undercount AI-search-driven traffic by a wide margin. Last-click attribution credits the conversion to wherever the buyer landed last — usually direct or branded search — even when AI search is what told them to come look in the first place. The lift is real; the attribution is broken by default.

Here’s the stack we actually run, and what each layer catches.

The AI-search buyer journey looks like this in our client traffic logs.

The buyer asks ChatGPT search or Perplexity, or queries Google in a way that triggers AI Overviews. The AI tool gives them a sourced answer that mentions specific brands or solutions. The buyer sees three or four named sources in the answer. They don’t click immediately because they want to evaluate which to trust.

A few hours or days later they search the brand directly — typing your company name into Google, or going to your site directly because they remember it. They land on your homepage from “direct” or “branded organic”. They convert. The conversion gets attributed to direct or branded search.

The AI step is invisible to the analytics platform, but it’s the step that produced the conversion. We measured the gap by running prompt monitoring side by side with last-click attribution on a Crypto SEO retainer client through 2025. AI-search citations led branded-search lift by 30–45 days, and branded-search lift led conversion lift by another 14–28 days. By the time the conversion happened, the AI-search trigger was 6–10 weeks in the past and not visible to standard tracking.
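
If you want to reproduce that lag measurement yourself, here is a minimal sketch in Python. It assumes you already have weekly citation-rate and branded-search series; the data below is synthetic, with a 5-week lag built in purely for illustration.

```python
import numpy as np
import pandas as pd

def lead_lag_weeks(leading: pd.Series, lagging: pd.Series, max_lag: int = 16) -> int:
    """Return the shift in weeks at which `leading` best correlates with `lagging`."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        corr = leading.corr(lagging.shift(-lag))  # compare week t with week t+lag
        if pd.notna(corr) and corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Synthetic data: a random-walk signal where branded search trails citations by 5 weeks.
rng = np.random.default_rng(0)
base = np.cumsum(rng.normal(size=45))
weeks = pd.date_range("2025-01-06", periods=40, freq="W-MON")
citation_rate = pd.Series(base[5:], index=weeks)   # leads by 5 weeks
branded_clicks = pd.Series(base[:40], index=weeks)

print(lead_lag_weeks(citation_rate, branded_clicks))  # prints 5
```

The same check, run between the branded-search series and the conversion series, gives the second leg of the lag.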

The undercounting is somewhere between 40% and 70% in our client data. The exact number depends on niche, buyer LTV, and how aggressively the brand has invested in AI-search visibility, but the floor of the range is 40%.

What can actually be detected from referrers?

Different AI tools handle referrer headers differently. The split as of late 2025:

  • Perplexity passes the referrer cleanly. Traffic from Perplexity citations shows up as perplexity.ai in your analytics, with the source URL in the path. Easy to count.
  • Phind passes the referrer. Most of Phind’s traffic is dev-tooling-adjacent, so it shows up more for B2B SaaS and DeFi infrastructure clients than for retail crypto.
  • You.com, Brave Search, and Kagi also pass referrers. Lower volume, but what’s there is easy to count.
  • ChatGPT does not pass a referrer. Traffic appears as direct in most analytics tools, indistinguishable from someone typing the URL or clicking a bookmark.
  • Claude does not pass a referrer when used through Anthropic’s web interface; same direct-traffic appearance.
  • Google AI Overviews does not pass a distinct referrer. Traffic from an AI-Overview source link appears as standard organic search, which means GSC reports it but you can’t separate AI-Overview clicks from regular SERP clicks.

What this means in practice: 30–50% of AI-search-driven traffic shows up cleanly in referrer logs as Perplexity or similar. The other 50–70% is either invisible (ChatGPT/Claude going to direct traffic) or mixed in with regular organic (AI Overviews).
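
A sketch of how sessions can be bucketed along those lines, assuming you can export raw referrer URLs and channel labels from your analytics tool. The hostname list mirrors the split above and will need updating as platforms change behavior.

```python
from urllib.parse import urlparse

# Hosts that pass referrers cleanly, per the split above; update as platforms change.
AI_REFERRER_HOSTS = {"perplexity.ai", "www.perplexity.ai", "phind.com",
                     "www.phind.com", "you.com", "search.brave.com", "kagi.com"}

def classify_session(referrer: str | None, channel: str) -> str:
    """Bucket one session: visible AI referral, possibly-AI direct, or mixed organic."""
    host = urlparse(referrer).hostname if referrer else None
    if host in AI_REFERRER_HOSTS:
        return "ai_search_visible"          # Perplexity, Phind, You.com, Brave, Kagi
    if channel == "direct":
        return "direct_possibly_ai"         # where ChatGPT and Claude traffic lands
    if channel == "organic":
        return "organic_possibly_overview"  # AI Overviews mixed with regular SERP
    return "other"

print(classify_session("https://www.perplexity.ai/search?q=crypto+licensing", "referral"))
# -> ai_search_visible
```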

How does GSC catch what referrer logs miss?

Google Search Console reports impressions and clicks for every query that surfaces your site. AI Overviews, when they reference your domain, count as impressions in GSC. The query that triggered the impression is in the report. A click is counted only when the user actually clicks through to your site.

The signal worth tracking weekly: branded-search query growth. When AI search is delivering pipeline, branded queries grow first. We track three brand-query buckets per client.

  • Pure brand: “[client name]”, “[client domain]”. Indicates net new awareness: someone heard your name.
  • Brand + product: “[client name] crypto exchange”, “[client name] reviews”. Indicates active evaluation: someone is researching.
  • Brand + jurisdiction: “[client name] EU”, “[client name] for UK customers”. Indicates high intent: the buyer is mapping fit.

Month-over-month growth of 10–20% in any of these buckets, sustained for two consecutive months, correlates with downstream conversion lift in nearly every client we’ve tracked. We use it as the primary leading indicator of AI-search-driven pipeline, ahead of citation-rate metrics, because branded search is the bridge between the AI-citation event and the eventual conversion.
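
As a sketch of that weekly check, assuming a GSC query export with `query`, `month`, and `clicks` columns. The bucket regexes here are illustrative, written for a hypothetical client called “Acme”; in practice we build them per client, and the export filename is a placeholder.

```python
import pandas as pd

# Illustrative bucket patterns for a hypothetical client called "Acme".
BUCKETS = {
    "pure_brand": r"^acme(\.io)?$",
    "brand_product": r"acme (exchange|reviews|pricing)",
    "brand_jurisdiction": r"acme (eu|uk|for .* customers)",
}

def bucket_mom_growth(gsc: pd.DataFrame) -> pd.DataFrame:
    """Month-over-month click growth per brand-query bucket."""
    monthly = {}
    for name, pattern in BUCKETS.items():
        matched = gsc[gsc["query"].str.contains(pattern, case=False, regex=True)]
        monthly[name] = matched.groupby("month")["clicks"].sum()
    return pd.DataFrame(monthly).sort_index().pct_change()

growth = bucket_mom_growth(pd.read_csv("gsc_queries.csv"))  # hypothetical export file
sustained = (growth >= 0.10).astype(int).rolling(2).sum() == 2  # two straight months of 10%+
print(sustained.tail())
```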

What does self-reported source actually catch?

The most reliable single data source for AI-driven attribution is asking the buyer directly. We add a free-text field to every lead-intake form — “How did you first hear about us?” — and parse the responses monthly.

The free-text field is more useful than a multiple-choice list because the buyer’s mental model of how they found you doesn’t fit neat categories. Sample responses we’ve coded out of 12 months of one client’s intake forms:

  • “ChatGPT recommended you when I asked about crypto licensing” (clear AI-search attribution)
  • “I asked Perplexity for the best agencies and you came up” (clear, citable platform)
  • “Saw you mentioned in some article I was reading” (likely AI search, but ambiguous; coded as “content-driven” with a confidence score)
  • “Google” (low-confidence; could be regular organic or could have been Google AI Overviews)
  • “[Person name] told me about you” (referral; not AI-driven)
  • “Don’t remember” (high frequency for B2B; the buyer evaluated weeks ago and forgot)

After 6–12 months of data, the free-text responses cluster into 6–10 source categories that we can quantify. AI-search-attributed leads land somewhere between 15% and 35% of total inbound for the crypto clients we’ve measured this on, well above what last-click attribution shows for the same period.
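
A simplified version of the monthly coding pass. The keyword rules and confidence scores below are illustrative, not our production ruleset, and low-confidence matches still get reviewed by hand.

```python
import re

# Ordered rules: first match wins. Each entry is (pattern, category, confidence).
RULES = [
    (r"\b(chatgpt|gpt)\b",                  "ai_search",      0.95),
    (r"\bperplexity\b",                     "ai_search",      0.95),
    (r"\b(claude|gemini|copilot)\b",        "ai_search",      0.90),
    (r"\b(article|blog|post)\b",            "content_driven", 0.50),
    (r"\bgoogle\b",                         "search_unknown", 0.40),  # organic or AI Overview
    (r"\b(told me|referred|colleague)\b",   "referral",       0.90),
    (r"\b(don['’]?t remember|no idea)\b",   "unknown",        0.95),
]

def code_response(text: str) -> tuple[str, float]:
    """Map one free-text 'how did you first hear about us' answer to a source category."""
    lowered = text.lower()
    for pattern, category, confidence in RULES:
        if re.search(pattern, lowered):
            return category, confidence
    return "uncoded", 0.0

print(code_response("ChatGPT recommended you when I asked about crypto licensing"))
# -> ('ai_search', 0.95)
```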

What do prompt-monitoring tools actually measure?

Searchable, Profound, AthenaHQ, and a handful of others have launched in 2024–2025 to track citation rates across AI platforms. They take a list of target prompts, run them weekly across ChatGPT, Perplexity, Gemini, Claude, Google AI Overviews, and report which brands get cited and at what positions.

These tools measure citation rate, not pipeline. They tell you whether you’re cited; they don’t tell you how many people who saw the citation eventually became leads. So the tools are necessary but not sufficient.

The role they play in our measurement stack: leading indicator. Citation-rate growth precedes branded-search growth, which precedes lead-volume growth. A 30-prompt monitor that shows citation rate climbing from 10% to 30% over a quarter tells us the work is moving in the right direction even before the GSC and self-report data catch up.

For client engagements we run a custom 22-prompt monitor — selected with the client during discovery, refreshed quarterly — across 4–5 platforms, weekly. The cost of running the monitor is low; the cost of not running it is having no leading indicator and waiting for last-click data that arrives 6–10 weeks late.
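
Conceptually, the monitor is a small loop. In this sketch, `query_platform` is a stub standing in for whatever per-platform API or headless client you use, and the prompts and brand name are placeholders.

```python
PROMPTS = [
    "best crypto seo agencies",
    "how to get a crypto exchange license in the eu",
    # ...the rest of the prompt set agreed with the client during discovery
]
PLATFORMS = ["chatgpt", "perplexity", "gemini", "claude", "ai_overviews"]

def query_platform(platform: str, prompt: str) -> str:
    """Stub: return the platform's answer text for a prompt. In practice this is
    a per-platform API call or headless-browser fetch."""
    return "Toy answer citing Acme and two competitors."

def weekly_citation_rate(brand: str) -> dict[str, float]:
    """Share of prompts on which the brand is cited, per platform, this week."""
    rates = {}
    for platform in PLATFORMS:
        cited = sum(brand.lower() in query_platform(platform, p).lower() for p in PROMPTS)
        rates[platform] = cited / len(PROMPTS)
    return rates

print(weekly_citation_rate("Acme"))  # stubbed, so 1.0 on every platform
```

Logged weekly, those per-platform rates are the trend line the rest of the stack validates against.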

How do these stack together?

The measurement stack we run for active retainers has four layers, in order of increasing latency.

Citation-rate monitor (weekly). What gets cited, on which prompts, on which platforms. Earliest signal.

GSC branded-query trend (weekly). Brand-search growth across the three brand-query buckets. Confirms that citation rate is translating to user intent.

Referrer analysis (monthly). Direct measurement of Perplexity, Phind, You.com, and other referrer-passing platforms. Confirms volume of clicks from the visible AI sources.

Self-reported source (monthly). What buyers say about how they found you. Catches the invisible AI-search step that referrers miss.

The four layers cross-validate each other. Citation rate up + branded search up + Perplexity referrals up + self-reported AI mentions up = AI-search work is producing pipeline. If any of the four is flat while others move, we investigate the disconnect (often a content-quality issue on a specific theme).
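
The cross-validation rule itself is simple enough to write down directly; the signal names here are just the four layers as booleans.

```python
def stack_verdict(citations_up: bool, branded_up: bool,
                  referrals_up: bool, self_report_up: bool) -> str:
    """Combine the four layers into one reporting-period verdict."""
    signals = [citations_up, branded_up, referrals_up, self_report_up]
    if all(signals):
        return "ai_search_producing_pipeline"
    if any(signals):
        return "investigate_disconnect"  # often a content-quality issue on one theme
    return "no_ai_search_signal"

print(stack_verdict(True, True, True, False))  # -> investigate_disconnect
```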

For clients on Crypto SEO retainers, this measurement stack is part of the standard reporting. For clients running Crypto SEO + AI-search visibility as separate scopes, the measurement is what justifies the second scope existing as its own line item.

What’s the discovery call test for measurement?

If you’re already running an SEO program with a different agency and want to evaluate whether AI search is producing pipeline you can’t see, here are three things to bring to a discovery call.

A 12-month GSC brand-query export. We’ll plot the three brand-query buckets and identify whether you’re seeing the leading-indicator pattern.

The last 6 months of lead-intake responses (with the “how did you find us” field if you have one). We’ll code them and estimate the AI-attribution percentage you’re missing.

A list of 10–20 target prompts you’d want to be cited on. We’ll run them in front of you on the call and show you where you currently sit on each.

The call is free, 30 minutes, with a named lead. The output is concrete: a number on how much AI search is currently driving for you, with a 90-day projection if you invest in changing it. If the number is small enough that it doesn’t justify our retainer, we’ll say so.