Books for Bots: What Happens When You Let Claude Interrogate Your GA4 Data

For the past several weeks I have been running a live experiment on helpnewyork.com: using Claude-in-Chrome to interrogate Google’s Analytics Advisor inside GA4, session by session, until I had a complete behavioral profile of every AI platform sending traffic to the site.

What came out of it is not what I expected. I expected traffic data. I got a content strategy.

The Setup

Claude-in-Chrome is Anthropic’s browser extension that lets Claude operate directly inside your browser — reading pages, clicking elements, filling inputs, capturing output. Analytics Advisor is Google’s Gemini-powered chat interface built into GA4, available to English-language accounts since December 2025. It answers natural language questions about your property data with charts, tables, and narrative interpretation.

The combination is unusual. You are using one AI (Claude) to systematically interrogate another AI (Gemini) about your site’s data, then synthesizing what comes back into strategy. The token budget for the heavy data reasoning stays inside Google’s infrastructure. Claude handles the query architecture, the capture protocol, and the synthesis.

I ran four structured sessions across two sittings, using a specific sequence of queries built to extract progressively deeper signal. Session 1 established baseline traffic. Session 2 closed gaps and confirmed AI referral data existed. Session 3 was the AI deep dive. Session 4 was velocity and geography.
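The four-session sequence is easy to express as a structure. Here is a minimal Python sketch of the arc; the query wordings are illustrative placeholders, not the exact prompts from the kit:

```python
# Hypothetical outline of the four-session query sequence described above.
# Query wordings are illustrative, not the actual prompts in the kit.
SESSIONS = {
    1: {"goal": "baseline traffic",
        "queries": ["What were my top traffic sources in the last 28 days?",
                    "What were my top landing pages by sessions?"]},
    2: {"goal": "close gaps, confirm AI referrals",
        "queries": ["Which referral sources include chatgpt.com, "
                    "claude.ai, or copilot.microsoft.com?"]},
    3: {"goal": "AI deep dive",
        "queries": ["For each AI referral source, what are the top landing "
                    "pages, engagement rate, and average session duration?"]},
    4: {"goal": "velocity and geography",
        "queries": ["How has AI referral traffic trended week over week?",
                    "Which cities do AI referral sessions come from?"]},
}

for n, s in sorted(SESSIONS.items()):
    print(f"Session {n}: {s['goal']} ({len(s['queries'])} queries)")
```

The point of the structure is the ordering: each session's queries assume the previous session's answers are already captured.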

What the Data Showed

Three AI platforms were sending meaningful traffic to helpnewyork.com during the 28-day window: ChatGPT, Claude, and Copilot. The behavioral profiles were so different from each other that treating them as a single “AI traffic” segment would have produced wrong conclusions.

Claude.ai traffic showed a 64% engagement rate and an average session duration of over 3 minutes. The dominant landing page was an NYC Summer Internships guide, accounting for over 60% of all Claude sessions. Geographic concentration was academic: Ithaca (Cornell), State College (Penn State), Washington DC. The users arriving from Claude were reading to act — they needed specific information, they found it, they stayed.

ChatGPT traffic showed a 21% engagement rate and an average session of 24 seconds. The top landing page was a cherry blossom guide. The users were fact-grabbing: they asked ChatGPT where to see cherry blossoms in New York, got a citation, clicked through, confirmed the location, and left. The content served its purpose in under half a minute.

Copilot traffic fell between the two: a 46% engagement rate, roughly 2-minute sessions, desktop-heavy, concentrated in New York's suburbs. The top pages were civic services — SNAP benefits, tenant rights, transit discounts. These users were in planning mode, researching before they decided or applied.

The Finding That Reframes GEO

The cross-AI page overlap query was the most important one in the entire four-session arc. I asked Analytics Advisor which pages appeared in the top landing pages for more than one AI source. Only one real content page appeared in all three: the cherry blossom guide.

The obvious interpretation is that the cherry blossom guide was “AI-optimized.” The actual interpretation, once you look at the full traffic breakdown, is the opposite. Bing drove 59 sessions to that page. Yahoo drove 16 at 75% engagement and a 3-minute 46-second average session. DuckDuckGo drove 35. The combined AI traffic to that page was 32 sessions — 17% of total. The AI platforms were citing it because traditional search engines had already validated it as the highest-quality answer in the index.
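The arithmetic behind that breakdown is worth making explicit: if 32 AI sessions are 17% of the page's total, the implied total is about 188 sessions, and the three traditional search engines alone account for 110. A quick check in Python, using the session counts reported above:

```python
# Session counts for the cherry blossom guide, as reported above.
bing, yahoo, duckduckgo = 59, 16, 35
ai_combined = 32
ai_share = 0.17  # AI sessions as a share of the page's total

search_sessions = bing + yahoo + duckduckgo  # traditional search subtotal
implied_total = ai_combined / ai_share       # total sessions implied by 17%

print(search_sessions)         # 110
print(round(implied_total))    # 188

# Traditional search outweighs AI roughly 3.4 to 1
# on the very page the AI platforms cite most.
print(round(search_sessions / ai_combined, 1))  # 3.4
```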

AI citations are downstream of search quality, not upstream. The path to getting cited by ChatGPT, Claude, and Copilot is not to optimize for AI retrieval patterns. It is to build pages that win on Bing and Yahoo with enough depth that AI models treat them as authoritative sources. The GEO play is a traditional SEO play with better content.

The Content Strategy That Follows

Once you have the per-AI behavioral profiles, you have a content variant framework. The same article can be written in three structural architectures, each tuned to how one AI model retrieves and presents information.

The Claude variant is dense and process-oriented. Headers, eligibility criteria, numbered steps, official program names. Built for the student or researcher who arrived with a specific question and needs a complete answer they can act on.

The ChatGPT variant is a scannable list. Named items, one specific detail per item, direct answer in the first two sentences. Built for the user who will spend 24 seconds on the page and needs the answer immediately or they’re gone.

The Copilot variant is comparison and planning framing. What to know before you go, Option A versus Option B, cost context, logistics. Built for the desktop user doing research before they make a decision.

The core article is the same. The architecture is different. The AI that cites you depends on which structure you used.
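The variant framework above can be sketched as a mapping from platform to structural features. The feature lists paraphrase the descriptions in this section; the helper name is illustrative:

```python
# Content variant framework: one core article, three structural architectures.
# Feature lists paraphrase the variant descriptions above.
VARIANTS = {
    "Claude":  ["dense headers", "eligibility criteria", "numbered steps",
                "official program names"],
    "ChatGPT": ["scannable list", "one specific detail per item",
                "direct answer in first two sentences"],
    "Copilot": ["comparison framing", "what to know before you go",
                "cost context", "logistics"],
}

def architecture_for(platform: str) -> list:
    """Return the structural features to use when targeting one AI platform."""
    return VARIANTS[platform]

print(architecture_for("ChatGPT")[0])  # scannable list
```

The dictionary keys matter more than the values: the unit of planning is the platform, not the article.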

The Methodology Is the Product

The query sequence I developed across these four sessions is a repeatable extraction methodology. It works on any GA4 property with Analytics Advisor enabled. The intelligence it produces — per-AI audience profiles, geographic signals, velocity trends, cross-AI content overlap — is not available through DataForSEO, SpyFu, or Google Search Console. It requires Gemini's reasoning layer operating on top of your property data, orchestrated by a structured query architecture.

I have packaged the complete methodology as a downloadable kit: the full query architecture across all four sessions, the capture protocol, the content variant framework, and the flags to escalate before your next content sprint. It is called Books for Bots: GA4 AI Referral Audit Kit.

The free version covers Session 3 alone — the AI deep dive queries that surface your ChatGPT, Claude, and Copilot traffic split. That alone will show you something most site owners have never seen: which AI is sending them traffic, to which pages, and how engaged those users actually are.

The full kit covers all four sessions and includes the content variant framework that translates the behavioral data into a writing system.

Both are available at tygartmedia.com. What you do with the data after that is yours.
