Category: Uncategorized

  • Books for Bots: GA4 New vs Returning Intelligence Kit

    Books for Bots: GA4 New vs Returning Intelligence Kit

    Returning visitor pushing through revolving door while new visitor stands outside

    BOOKS FOR BOTS — GA4 SERIES — BOOK 05

    GA4 New vs Returning Intelligence Kit

    What brings people back. Loyalty signals, returning user behavior, cohort patterns, and the content that turns a single visit into a relationship.

    4m 12s vs 18s
    Returning vs new user session duration — same site, same pages
    COMING SOON — $27

    Same Site. Different Humans.

    New user acquisition gets all the attention. Returning users are where the business actually is — engaging 3x more, staying 14x longer, going 3x deeper per session. This kit surfaces exactly what is driving them back.

    New user 22% engagement 18s vs returning user 61% engagement 4m12s

    CORE INSIGHT

    Most sites treat new and returning users identically and leave all of that value on the table. Your returning users are already telling you exactly what they value. This kit listens.

    Bookshelf with three glowing loyalty anchor pages among dozens of dim untouched ones
    23% of sessions are returning visitors — your retention baseline
    Well-worn forest path toward golden light
    Regular at corner table, new customers reading menus, barista pouring

    What’s Inside

    • 7 copy-paste queries for Analytics Advisor — one session
    • New vs returning engagement baseline comparison
    • Loyalty anchor page identification — content bringing people back
    • Return visit trigger content analysis
    • Best retention channel by acquisition source
    • Returning user session depth and navigation path mapping
    • Retention rate baseline score to track quarterly

    What You Need

    • Claude-in-Chrome — free from Anthropic
    • Editor or Analyst access to a GA4 property
    • Analytics Advisor (BETA) enabled
    • 30–60 minutes

    THE KEY INSIGHT

    If your retention rate is under 20%, you have an acquisition addiction. Every session you earn disappears because nothing is pulling people back. The loyalty anchor pages this kit surfaces are the cure.
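
    Want to sanity-check your own number before buying anything? The arithmetic is simple. A minimal Python sketch, assuming the definition used above: the retention baseline is the share of all sessions that come from returning users (GA4's newVsReturning dimension reports both counts), and the 20% line is the rule of thumb from the insight box.

      # Minimal sketch: retention baseline as the share of sessions from
      # returning users. The 20% cutoff mirrors the rule of thumb above.

      def retention_baseline(returning_sessions: int, total_sessions: int) -> float:
          """Share of all sessions that came from returning users."""
          return returning_sessions / total_sessions

      rate = retention_baseline(returning_sessions=230, total_sessions=1000)
      print(f"Retention baseline: {rate:.0%}")  # Retention baseline: 23%
      if rate < 0.20:
          print("Under 20%: acquisition addiction territory.")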

    Individual Kit — Instant PDF Download

    COMING SOON — $27

    No subscription.

    BUNDLE

    Get All 6 Kits for $97

    Every GA4 intelligence methodology. Save $65.

    $162 → $97

    COMING SOON

    FREE STARTER

    Try Session 3 Free

    Seven queries revealing your ChatGPT vs Claude vs Copilot split in 30 minutes.

    COMING SOON — FREE

    Validated on live GA4 properties. April 2026.

  • Books for Bots: GA4 Search Intent Alignment Kit

    Books for Bots: GA4 Search Intent Alignment Kit

    Search query pointing to wrong page with red X and correct guide with green arrow

    BOOKS FOR BOTS — GA4 SERIES — BOOK 06

    GA4 Search Intent Alignment Kit

    Are your keywords landing on the right pages? Diagnose intent mismatch between what users searched and what they found — and surface what your audience wanted and could not find.

    39% misaligned
    Of organic landing pages delivering the wrong content for the search intent
    COMING SOON — $27

    A Page Can Rank Well and Still Fail

    If the user searched “how to apply for X” and landed on a page about “what X is,” they bounce immediately. GA4 captures this failure even when you cannot see the original query. High organic traffic with low engagement is almost always intent mismatch in disguise.
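
    That diagnosis reduces to a simple filter. A minimal sketch of the heuristic, with illustrative thresholds and page paths (the kit runs this logic through Analytics Advisor queries rather than code):

      # Flag organic landing pages with plenty of traffic but weak
      # engagement as intent-mismatch candidates. Thresholds illustrative.

      pages = [
          # (landing_page, organic_sessions, engagement_rate)
          ("/what-x-is", 420, 0.19),
          ("/how-to-apply-for-x", 180, 0.64),
      ]

      MIN_SESSIONS = 100      # enough traffic to trust the signal
      MAX_ENGAGEMENT = 0.30   # below this, visitors likely wanted something else

      mismatch = [
          page for page, sessions, engagement in pages
          if sessions >= MIN_SESSIONS and engagement <= MAX_ENGAGEMENT
      ]
      print(mismatch)  # ['/what-x-is']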

    Two puzzle pieces QUERY and CONTENT that do not fit

    CORE INSIGHT

    Internal site search is the most underused intelligence in GA4. When a user searches your site, they are explicitly telling you what they wanted and could not find. This kit makes that signal visible and actionable.

    User search queries rising like smoke from internal site search
    Person pulling wrong book while the right answer glows out of reach
    Intent alignment gauge: 61% aligned, 39% misaligned — run quarterly
    Search intent key vs landing page lock — MISMATCH

    What’s Inside

    • 7 copy-paste queries for Analytics Advisor — one session
    • Organic traffic to engagement mismatch identification
    • Internal search term extraction — top 20 with gap analysis
    • Zero-result internal search diagnosis
    • Homepage navigation gap analysis
    • Intent alignment score — baseline metric to track quarterly
    • Content repositioning recommendation framework

    What You Need

    • Claude-in-Chrome — free from Anthropic
    • Editor or Analyst access to a GA4 property
    • Analytics Advisor (BETA) enabled
    • 30–60 minutes

    THE KEY INSIGHT

    Internal search tells you what people search on your site after they arrived. That is a different and more valuable signal than anything a keyword tool produces — and it is sitting in your GA4 right now.

    Individual Kit — Instant PDF Download

    COMING SOON — $27

    No subscription.

    BUNDLE

    Get All 6 Kits for $97

    Every GA4 intelligence methodology. Save $65.

    $162 → $97

    COMING SOON

    FREE STARTER

    Try Session 3 Free

    Seven queries revealing your ChatGPT vs Claude vs Copilot split in 30 minutes.

    COMING SOON — FREE

    Validated on live GA4 properties. April 2026.

  • Books for Bots: GA4 Exit Intelligence Kit

    Books for Bots: GA4 Exit Intelligence Kit

    Aerial maze amber exit vs cold blue dead end

    BOOKS FOR BOTS — GA4 SERIES — BOOK 03

    GA4 Exit Intelligence Kit

    Where users leave your site — and what it means. Distinguish satisfied exits from abandoned ones, find your dead-end pages, and map your internal linking gaps.

    85% exit rate
    With 3m 20s duration — a satisfied exit, not a problem to fix
    COMING SOON — $27

    Not All Exits Are Failures

    A user who reads your guide for three minutes and then leaves got exactly what they needed. A user who hits your page and bounces in four seconds got nothing. GA4 treats them identically. This kit teaches you to tell the difference.

    Satisfied exit 85% 3m20s vs abandoned exit 87% 4 seconds

    FIELD FINDING — LIVE SESSION

    The NYC Summer Internships page has an 85% exit rate AND a 3m 20s average session. That is a satisfied exit. Adding CTAs to interrupt it would reduce performance, not improve it.
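
    A minimal sketch of that classification logic, using illustrative cutoffs drawn from the numbers on this page (roughly 90 seconds of engaged time separates a satisfied exit from an abandoned one; the kit's framework adds more signals):

      # Sketch of the satisfied-vs-abandoned split. Thresholds are
      # illustrative: 85% exit at 3m 20s reads as satisfied, while
      # 87% exit at 4 seconds reads as abandoned.

      def classify_exit(exit_rate: float, avg_duration_s: float) -> str:
          if exit_rate < 0.50:
              return "exit rate unremarkable"
          return "satisfied exit" if avg_duration_s >= 90 else "abandoned exit"

      print(classify_exit(0.85, 200))  # satisfied exit: leave it alone
      print(classify_exit(0.87, 4))    # abandoned exit: a dead end to fix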

    90 seconds satisfied exit, 4 seconds abandoned exit

    Satisfied exit — man leaving library corridor through warm door

    Satisfied exit.

    Abandoned exit — man facing blank wall with no way out

    Abandoned exit.

    Website sitemap blueprint with dead-end pages circled in red

    What’s Inside

    • 7 copy-paste queries for Analytics Advisor — one session
    • Satisfied vs abandoned exit classification framework
    • Dead-end page audit — pages with zero internal link clicks
    • Homepage navigation effectiveness score
    • Internal link opportunity map — Advisor generates specific page pairings
    • Exit-to-content-gap mapping for abandoned pages

    What You Need

    • Claude-in-Chrome — free from Anthropic
    • Editor or Analyst access to a GA4 property
    • Analytics Advisor (BETA) enabled
    • 30–60 minutes

    THE KEY INSIGHT

    The internal link fix is the highest ROI action from this kit. No new content, no design changes, no developer. Add one sentence with a link on an abandoned exit page pointing to a relevant high-engagement page.

    Individual Kit — Instant PDF Download

    COMING SOON — $27

    No subscription.

    BUNDLE — ALL 6 KITS

    Get All 6 Kits for $97

    Every GA4 intelligence methodology in one purchase. Save $65.

    $162 → $97

    COMING SOON

    FREE STARTER

    Try Session 3 Free

    Seven queries revealing your ChatGPT vs Claude vs Copilot split in under 30 minutes. No purchase required.

    COMING SOON — FREE

    Validated on live GA4 properties. April 2026.

  • Books for Bots: GA4 AI Referral Audit Kit

    Books for Bots: GA4 AI Referral Audit Kit

    ChatGPT, Claude, and Copilot sending traffic beams to a website

    BOOKS FOR BOTS — GA4 SERIES — BOOK 01

    GA4 AI Referral Audit Kit

    The complete 4-session Claude-in-Chrome methodology for extracting per-AI audience intelligence from Google Analytics 4 — and turning it into content every AI model cites.

    64% vs 21%
    Claude.ai engagement rate vs ChatGPT — same site, same pages
    COMING SOON — $27

    119 ChatGPT sessions, 42 Claude sessions, 28 Copilot sessions — 28-day data

    CORE FINDING

    AI citations are downstream of search quality, not upstream. Pages that win Bing and Yahoo with long-form depth get cited by AI models as a derivative effect.

    Search earns it. AI cites it.
    Claude 64% engagement, ChatGPT 21%, Copilot 46%
    Three content variant notebooks for Claude, ChatGPT, and Copilot
    Analytics Advisor session running at night on a laptop

    What’s Inside

    • Full 4-session query architecture — 26 queries, copy-paste ready
    • Pre-flight checklist and capture protocol for each session
    • Per-AI behavioral profiles: ChatGPT, Claude, Copilot
    • Content variant framework — 3 structural templates, one per AI retrieval pattern
    • Flags to escalate before your next content sprint
    • The cross-AI page overlap query — your highest-confidence GEO signal

    What You Need

    • Claude-in-Chrome extension — free from Anthropic
    • Editor or Analyst access to a GA4 property
    • Analytics Advisor (BETA) enabled — English-language accounts
    • Approximately 30–60 minutes

    THE KEY INSIGHT

    AI citations are downstream of search quality — not upstream. The path to getting cited by ChatGPT, Claude, and Copilot is not to optimize for AI retrieval patterns. It is to build pages that win on Bing and Yahoo with enough depth that AI models treat them as authoritative sources.

    Individual Kit — Instant PDF Download

    COMING SOON — $27

    No subscription. One-time purchase.

    BETTER VALUE

    Get All 6 Kits for $97

    The complete Books for Bots library. Every GA4 intelligence methodology in one purchase.

    $162 separately → $97

    COMING SOON — SEE BUNDLE

    Developed and validated across live sessions on a real GA4 property. April 2026.

  • Books for Bots: What Happens When You Let Claude Interrogate Your GA4 Data

    For the past several weeks I have been running a live experiment on helpnewyork.com: using Claude-in-Chrome to interrogate Google’s Analytics Advisor inside GA4, session by session, until I had a complete behavioral profile of every AI platform sending traffic to the site.

    What came out of it is not what I expected. I expected traffic data. I got a content strategy.

    The Setup

    Claude-in-Chrome is Anthropic’s browser extension that lets Claude operate directly inside your browser — reading pages, clicking elements, filling inputs, capturing output. Analytics Advisor is Google’s Gemini-powered chat interface built into GA4, available to English-language accounts since December 2025. It answers natural language questions about your property data with charts, tables, and narrative interpretation.

    The combination is unusual. You are using one AI (Claude) to systematically interrogate another AI (Gemini) about your site’s data, then synthesizing what comes back into strategy. The token budget for the heavy data reasoning stays inside Google’s infrastructure. Claude handles the query architecture, the capture protocol, and the synthesis.

    I ran four structured sessions across two sittings, using a specific sequence of queries built to extract progressively deeper signal. Session 1 established baseline traffic. Session 2 closed gaps and confirmed AI referral data existed. Session 3 was the AI deep dive. Session 4 was velocity and geography.

    What the Data Showed

    Three AI platforms were sending meaningful traffic to helpnewyork.com during the 28-day window: ChatGPT, Claude, and Copilot. The behavioral profiles were so different from each other that treating them as a single “AI traffic” segment would have produced wrong conclusions.

    Claude.ai traffic showed a 64% engagement rate and an average session duration of over 3 minutes. The dominant landing page was an NYC Summer Internships guide, accounting for over 60% of all Claude sessions. Geographic concentration was academic: Ithaca (Cornell), State College (Penn State), Washington DC. The users arriving from Claude were reading to act — they needed specific information, they found it, they stayed.

    ChatGPT traffic showed a 21% engagement rate and an average session of 24 seconds. The top landing page was a cherry blossom guide. The users were fact-grabbing: they asked ChatGPT where to see cherry blossoms in New York, got a citation, clicked through, confirmed the location, and left. The content served its purpose in under half a minute.

    Copilot traffic was between the two: 46% engagement, roughly 2-minute sessions, desktop-heavy, concentrated in New York’s suburbs. The top pages were civic services — SNAP benefits, tenant rights, transit discounts. These users were in planning mode, researching before they decided or applied.

    The Finding That Reframes GEO

    The cross-AI page overlap query was the most important one in the entire four-session arc. I asked Analytics Advisor which pages appeared in the top landing pages for more than one AI source. Only one real content page appeared in all three: the cherry blossom guide.

    The obvious interpretation is that the cherry blossom guide was “AI-optimized.” The actual interpretation, once you look at the full traffic breakdown, is the opposite. Bing drove 59 sessions to that page. Yahoo drove 16 at 75% engagement and a 3-minute 46-second average session. DuckDuckGo drove 35. The combined AI traffic to that page was 32 sessions — 17% of total. The AI platforms were citing it because traditional search engines had already validated it as the highest-quality answer in the index.
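
    The overlap check itself is easy to reproduce outside Analytics Advisor once you have per-source landing pages. A minimal sketch; the page paths here are illustrative stand-ins for the real report rows:

      # Which landing pages appear in the top pages for more than one AI
      # source? Pure set logic over per-source page lists pulled from GA4
      # (sessionSource x landingPage). Paths are illustrative.

      from itertools import combinations

      top_pages = {
          "chatgpt.com": {"/cherry-blossom-guide", "/some-other-page"},
          "claude.ai": {"/cherry-blossom-guide", "/nyc-summer-internships"},
          "copilot.com": {"/cherry-blossom-guide", "/snap-benefits"},
      }

      overlap: dict[str, set[str]] = {}
      for a, b in combinations(top_pages, 2):
          for page in top_pages[a] & top_pages[b]:
              overlap.setdefault(page, set()).update({a, b})

      for page, sources in sorted(overlap.items(), key=lambda kv: -len(kv[1])):
          print(f"{page}: cited by {len(sources)} sources ({', '.join(sorted(sources))})")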

    AI citations are downstream of search quality, not upstream. The path to getting cited by ChatGPT, Claude, and Copilot is not to optimize for AI retrieval patterns. It is to build pages that win on Bing and Yahoo with enough depth that AI models treat them as authoritative sources. The GEO play is a traditional SEO play with better content.

    The Content Strategy That Follows

    Once you have the per-AI behavioral profiles, you have a content variant framework. The same article can be written in three structural architectures, each tuned to how one AI model retrieves and presents information.

    The Claude variant is dense and process-oriented. Headers, eligibility criteria, numbered steps, official program names. Built for the student or researcher who arrived with a specific question and needs a complete answer they can act on.

    The ChatGPT variant is a scannable list. Named items, one specific detail per item, direct answer in the first two sentences. Built for the user who will spend 24 seconds on the page and needs the answer immediately or they’re gone.

    The Copilot variant is comparison and planning framing. What to know before you go, Option A versus Option B, cost context, logistics. Built for the desktop user doing research before they make a decision.

    The core article is the same. The architecture is different. The AI that cites you depends on which structure you used.

    The Methodology Is the Product

    The query sequence I developed across these four sessions is a repeatable extraction methodology. It works on any GA4 property with Analytics Advisor enabled. The intelligence it produces — per-AI audience profiles, geographic signals, velocity trends, cross-AI content overlap — is not available through DataForSEO, SpyFu, or GSC. It requires Gemini’s reasoning layer operating on top of your property data, orchestrated by a structured query architecture.

    I have packaged the complete methodology as a downloadable kit: the full query architecture across all four sessions, the capture protocol, the content variant framework, and the flags to escalate before your next content sprint. It is called Books for Bots: GA4 AI Referral Audit Kit.

    The free version covers Session 3 alone — the AI deep dive queries that surface your ChatGPT, Claude, and Copilot traffic split. That alone will show you something most site owners have never seen: which AI is sending them traffic, to which pages, and how engaged those users actually are.

    The full kit covers all four sessions and includes the content variant framework that translates the behavioral data into a writing system.

    Both are available at tygartmedia.com. What you do with the data after that is yours.

  • Claude Sent Us 63 Readers Last Month: The First Measurable AI-Referral Channel for Publishers

    Short version: In the last 29 days, Claude, ChatGPT, Perplexity, Microsoft Copilot, Gemini, NotebookLM, and Kagi collectively sent at least 94 new readers to tygartmedia.com — a site whose #1 content vertical is explaining Claude. AI assistants are now our #4 traffic source, ahead of Facebook, ahead of LinkedIn, ahead of every search engine except Google and Bing. The product is citing the publication that covers the product. That’s the loop. Here is what it looks like when you can actually measure it.

    The finding that made me stop scrolling

    I built a Claude-powered browser agent to poke around our GA4 account and surface “interesting stuff” a human analyst would miss. One of the first things it flagged was our Source/Medium report. Here is the top of the list, unedited:

    Rank | Source / Medium | New Users (29 days) | Notes
    1 | (direct) / (none) | 738 | Mystery bucket
    2 | google / organic | 289 | Standard Google SEO
    3 | bing / organic | 70 | 1m 20s average session — high intent
    4 | claude.ai / referral | 63 | Claude itself
    5 | m.facebook.com | 43 | Mostly 4-second bounces
    6 | duckduckgo / organic | 41 | 1m 02s average
    13 | chatgpt.com / referral | 9 | ChatGPT
    15 | perplexity.ai / referral | 5 | Perplexity
    21 | copilot.com | 3 | Microsoft Copilot
    24 | gemini.google.com | 2 | Google Gemini
    28 | notebooklm.google.com | 1 | Google NotebookLM
    35 | kagi.com | 1 | Kagi AI results

    Add up everything with an AI-assistant referrer and the combined count is at least 94 new users in 29 days — roughly 6.7% of all new users on the site. Claude alone, at 63 referred users, is our #4 traffic source. It is ahead of Facebook. It is ahead of LinkedIn. It is ahead of every search engine except Google and Bing. And we have been cited, at least once, by every major AI surface in the English-speaking internet: Claude, ChatGPT, Perplexity, Microsoft Copilot, Gemini, NotebookLM, and Kagi.

    Why this is different from “we show up in Google”

    Generative Engine Optimization (GEO) is the practice of structuring content so that large language models cite it as a source inside their answers. It is the younger, messier cousin of SEO. Most publishers cannot yet prove it is working. The feedback loop is long, the data is hidden inside a chat window, and the traffic that does leak through often lands in a “(direct)” bucket with no attribution at all.

    We can see ours. GA4, for reasons that are probably accidental, already records claude.ai, chatgpt.com, perplexity.ai, copilot.com, gemini.google.com, notebooklm.google.com, and kagi.com as discrete referral sources when a user clicks a citation link. That means AI-assistant traffic is measurable as a first-class channel right now, today, with the free version of Google Analytics, on any site that happens to get cited.

    The poetic layer of what we are looking at: Claude is the top AI referrer to a website whose #1 content vertical is explaining Claude. The product is sending readers to the publication that covers the product. If that is not a GEO moat, I do not know what one looks like.

    These are not bounced visitors. They are readers.

    The single biggest worry with any new traffic source is that it might be garbage — bots, previews, accidental clicks. The engagement data says the opposite. Users arriving from claude.ai spend 23 seconds on average and produce 0.56 engaged sessions per user. ChatGPT referrals average 21 seconds and 0.44 engaged sessions per user. For context, the site-wide average engagement time is dragged down hard by in-app social browsers; the Facebook mobile webview, for example, sits at about 14 seconds with 4-second bounces.

    People arriving from an AI assistant are not scrolling past. They clicked the citation because the AI told them this was the primary source, and when they got here they read. That is a qualitatively different kind of traffic than Facebook or a random Google search. These are the highest-intent non-search users we have.

    The secondary finding: Seattle is reading for three minutes

    The same GA4 pass surfaced a city-level pattern we were not expecting. Seattle readers — 61 of them in 29 days — spent an average of 3 minutes and 6 seconds on site at a 61.3% engagement rate. The site-wide average session is roughly 40 seconds. Seattle readers are spending about 4–5x longer on the page than the typical visitor, at nearly twice the engagement rate.

    City | Active Users | Engagement Rate | Average Time
    Seattle | 61 | 61.3% | 3m 06s
    The Dalles, OR | 31 | 0% | 1s
    Shelton, WA | 26 | 27.6% | 15s
    Des Moines | 24 | 37.5% | 10s
    Beijing | 31 | 6.5% | 0s
    Singapore | 28 | 21.4% | 5s

    A few things jump out. The Dalles, Oregon at 31 users / 0% engagement / 1 second is almost certainly Google’s data center there returning preview requests — ignore it. Shelton, Washington is a real Mason County hyperlocal beachhead; 26 actual humans in our home county in 29 days is a legitimate foothold for the local desk. Beijing at 31 users / 0 seconds has the classic signature of cloud-hosted scrapers. And Seattle at 3 minutes is the single most valuable city in our data and it is not close.

    The browser split confirms an unusually technical audience

    Browser | Users | Engagement Rate
    Chrome | 850 (60%) | 31.3%
    Safari | 232 (16%) | 32.7%
    Edge | 99 (7%) | 62.3%
    Firefox | 33 (2.3%) | 60.5%

    Edge at 62.3% engagement and Firefox at 60.5% engagement are not normal consumer numbers. A typical general-interest site sees those two browsers hovering in the 5–15% range. Microsoft Edge is the default on corporate-managed Windows machines. Firefox is the dev-preferred privacy browser. The combination of high Edge engagement, high Firefox engagement, and a Claude-heavy referral list all point at the same audience: developers and technical professionals at real companies, reading on managed workstations.

    How to measure AI-assistant referrals in your own GA4

    If you publish anything technical and want to see your own version of this number, the fastest path is a custom GA4 exploration with one segment. Open GA4 → Explore → Free Form. Add a segment with this condition:

    Session source contains one of:
      claude.ai
      chatgpt.com
      perplexity.ai
      perplexity
      copilot.com
      gemini.google.com
      notebooklm.google.com
      kagi.com
      you.com
      phind.com

    Break it down by landing page, engagement rate, and average engagement time. That is your AI-Referral dashboard. Watch it weekly. A non-trivial number of sites will discover they already have measurable AI traffic and never bothered to look.
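
    If you would rather pull the same segment programmatically than click through Explore, here is a minimal sketch against the GA4 Data API using Google's google-analytics-data Python client. The property ID is a placeholder, you need credentials with read access to the property, and the domain list mirrors the segment above.

      # Minimal sketch: the same AI-referral segment via the GA4 Data API.
      # pip install google-analytics-data; authenticate with Application
      # Default Credentials that can read the property.

      from google.analytics.data_v1beta import BetaAnalyticsDataClient
      from google.analytics.data_v1beta.types import (
          DateRange, Dimension, Filter, FilterExpression,
          FilterExpressionList, Metric, RunReportRequest,
      )

      PROPERTY_ID = "123456789"  # placeholder: your GA4 property ID
      AI_DOMAINS = [
          "claude.ai", "chatgpt.com", "perplexity.ai", "copilot.com",
          "gemini.google.com", "notebooklm.google.com", "kagi.com",
          "you.com", "phind.com",
      ]

      client = BetaAnalyticsDataClient()
      request = RunReportRequest(
          property=f"properties/{PROPERTY_ID}",
          date_ranges=[DateRange(start_date="29daysAgo", end_date="today")],
          dimensions=[Dimension(name="sessionSource"),
                      Dimension(name="landingPage")],
          metrics=[Metric(name="newUsers"),
                   Metric(name="engagementRate"),
                   Metric(name="averageSessionDuration")],
          dimension_filter=FilterExpression(
              or_group=FilterExpressionList(expressions=[
                  FilterExpression(filter=Filter(
                      field_name="sessionSource",
                      string_filter=Filter.StringFilter(
                          value=domain,
                          match_type=Filter.StringFilter.MatchType.CONTAINS,
                      ),
                  ))
                  for domain in AI_DOMAINS
              ])
          ),
      )

      # One row per (source, landing page): your AI-referral dashboard.
      for row in client.run_report(request).rows:
          values = [v.value for v in row.dimension_values]
          values += [v.value for v in row.metric_values]
          print(values)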

    Frequently asked questions

    What is a GEO referral?

    A GEO referral, or AI-assistant referral, is a visit to your site from a user who clicked a citation link inside an answer generated by a large language model such as Claude, ChatGPT, Perplexity, Microsoft Copilot, Gemini, NotebookLM, or Kagi. In Google Analytics 4 these visits appear as referral traffic from the assistant’s domain — for example claude.ai / referral or chatgpt.com / referral.

    How many AI-referred users did tygartmedia.com receive in 29 days?

    At least 94 new users across seven distinct AI assistants: 63 from Claude, 14 from ChatGPT (9 attributed + 5 unassigned), 10 from Perplexity (5 attributed + 5 unassigned), 3 from Microsoft Copilot, 2 from Gemini, 1 from NotebookLM, and 1 from Kagi. That is roughly 6.7% of all new users on the site for the period.

    Are AI-assistant referrals real readers or bots?

    Real readers. Average engagement time from claude.ai is 23 seconds and from chatgpt.com is 21 seconds, with engagement rates of 0.56 and 0.44 engaged sessions per user respectively. Those numbers are qualitatively higher than in-app social browser traffic (Facebook mobile webview averages about 14 seconds) and indicate a deliberate click-through from an AI citation, not a scraper.

    Can any publisher measure AI-assistant referrals in GA4?

    Yes. GA4 records visits from claude.ai, chatgpt.com, perplexity.ai, copilot.com, gemini.google.com, notebooklm.google.com, and kagi.com as discrete referral sources by default. Build a Free Form exploration with a segment that filters Session source on those domains and you will see the channel immediately if it exists for your site.

    What is GEO in marketing?

    GEO stands for Generative Engine Optimization. It is the practice of structuring web content, schema markup, and publishing signals so that large language models cite the content as a source inside AI-generated answers. GEO is to AI assistants what SEO is to search engines — the discipline of being the answer the machine hands to the reader.

    The loop, and why it matters

    The most interesting thing about this data is not the traffic. It is the feedback structure. Tygart Media publishes explainers about Claude. Claude crawls and cites those explainers. Readers click through from Claude’s answer back to tygartmedia.com. We publish more. Claude cites more. The site becomes, in effect, training data and a recommended source for the next iteration of the product it covers. That is the recursive loop that makes AI-native publishing a different business than search-era publishing.

    I do not think every site can build this loop. It requires a narrow, technically-defensible topic — something an AI assistant would rather cite than paraphrase — and the patience to publish at a cadence LLMs reward. What I do think is that any publisher can check, today, whether the loop has quietly started forming underneath them. Most have not bothered. This post is partly a flex and partly an invitation: go look.

    What happens next at Tygart Media

    Three things. We are standing up a permanent AI-Referral channel in our GA4 so the number can be watched weekly instead of rediscovered quarterly. We are writing the playbook — the one this post hints at — for publishers who want to do the same. And we are building the browser agent that found this in the first place into a repeatable audit any publisher can run against their own GA4 in an afternoon. If that last one sounds useful, the newsletter is the place to follow along.

    Claude sent us 63 readers last month. It will send more next month. We will be counting.

  • Claude, ChatGPT, and Perplexity Cite Totally Different Pages: The Per-Model AI Citation Playbook

    Part 2 of 2. In the first post I showed that Claude, ChatGPT, Perplexity, Copilot, Gemini, NotebookLM, and Kagi collectively sent tygartmedia.com at least 94 new readers in 29 days — and that Claude alone is our #4 traffic source. That is the headline. What follows is the interesting part: when you filter the landing-page report one AI model at a time, the three major assistants cite completely different kinds of pages, and the pattern is actionable.

    Claude cites a small number of pages, a lot of times

    Claude.ai sent 79 sessions across 63 users to 16 distinct pages. Two pages ate more than half of it:

    # | Page | Sessions | % of Claude traffic | Avg Time
    1 | /claude-student-discount | 22 | 27.9% | 35s
    2 | /anthropic-console | 21 | 26.6% | 11s
    3 | (not set) | 13 | 16.5% | 5s
    4 | /claude-edu | 4 | 5.1% | 6s
    5 | /claude-pro-vs-chatgpt-plus | 4 | 5.1% | 7s
    6 | /claude-code-on-vertex-ai-gcp | 3 | 3.8% | 3s
    7 | /claude-desktop | 2 | 2.5% | 40s
    8 | /how-to-install-claude-code | 2 | 2.5% | 2s
    9 | /claude-4-deprecation | 1 | 1.3% | 1m 07s
    10 | /claude-managed-agents-pricing-cost-analysis | 1 | 1.3% | 1m 38s

    The two biggest pages, /claude-student-discount and /anthropic-console, are 54.5% of all Claude-referred traffic to the site. Those are extremely specific query shapes — “how do students get Claude Pro free” and “how do I access the Anthropic Console” — and Claude has apparently decided our pages are the canonical answer for both.

    The engagement twist is worth staring at. The two biggest Claude-referred pages have the worst time-on-page: 35 seconds and 11 seconds. The two pages that got a single Claude visit each — /claude-managed-agents-pricing-cost-analysis and /claude-4-deprecation — got 1 minute 38 seconds and 1 minute 7 seconds of real read time. The pattern is clean. When Claude can extract the answer directly into its chat window, users click through briefly to verify and leave. When the answer is deeper than Claude can summarize, readers stay to actually read. Both behaviors are valuable and both are measurable.

    ChatGPT cites broadly, favors “X vs Y” content, and (oddly) sends geographic traffic

    ChatGPT’s footprint is shaped differently. 16 sessions across 14 users to 13 distinct pages — almost every page received exactly one visit, which is the signature of a model citing a wide range of sources once each rather than reaching for a favorite.

    Page | Sessions | Avg Time
    /claude-student-discount | 3 | 15s
    /claude-computer-use-tutorial | 1 | 2m 07s
    /grok-vs-claude | 1 | 15s
    /opus-4-7-vs-gpt-5-4-vs-gemini-3-1-pro | 1 | 0s
    /claude-pro-vs-chatgpt-plus | (cross-model) | —
    /claude-for-nonprofits | 1 | 30s
    /everett-waterfront-visitor-guide… | 1 | 0s
    /hood-canal-shellfish-season-2026… | 1 | 0s
    /rakuten-claude-managed-agents-enterprise-deployment | 1 | 0s

    Two patterns in that list. First, ChatGPT appears to cite us disproportionately for model comparisons: grok-vs-claude, opus-4-7-vs-gpt-5-4-vs-gemini-3-1-pro, and the cross-model claude-pro-vs-chatgpt-plus page. Second, and stranger, ChatGPT sent visits to two hyperlocal Pacific Northwest pages: an Everett waterfront guide and a Hood Canal shellfish season page. That is ChatGPT using our site as a reference source for geographic queries, which is not a pattern any other model shows.

    The hidden gem: /claude-computer-use-tutorial received one ChatGPT referral and that referral stayed for 2 minutes 7 seconds. ChatGPT appears willing to cite long-form technical tutorials in a way Claude does not.

    Perplexity treats us like a research database

    Perplexity sent 12 sessions across 10 users to 9 pages — the most evenly distributed of the three and the only model that cites people, founders, and company-history content.

    Page | Sessions | Avg Time
    /anthropic-founders-2 | 2 | 17s
    /claude-code-on-vertex-ai-gcp | 2 | 54s
    /claude-student-discount | 2 | 0s
    /claude-desktop | 1 | 4s
    /claude-team-plan | 1 | 0s
    /how-to-install-claude-code | 1 | 0s
    /restoration-team-training-claude-cowork | 1 | 0s

    Perplexity is the only model that sent visits to /anthropic-founders-2, which implies Perplexity is fielding a different query shape — something closer to “who founded Anthropic” than “how do I use Claude.” Perplexity is also the only model that surfaced the very niche B2B page /restoration-team-training-claude-cowork. That is a long-tail, vertical-specific query, and Perplexity cited us as the source, which is exactly the behavior you would hope for from a research-flavored assistant.

    The three models have completely different citation personalities

    Once you lay the three patterns side by side, the strategy falls out of the page.

    • Claude.ai favors short, factual, access-related pages. Product info, pricing, how-to-access. If you want more Claude citations, write more narrow “how do I do this one specific thing” pages.
    • ChatGPT favors comparisons and long-tail references. X vs Y, alternatives, and — unexpectedly — some geographic content. If you want more ChatGPT citations, write more “X vs Y” posts with tight comparison tables.
    • Perplexity favors people, history, and niche research. Founders, company background, domain-specific tutorials. If you want more Perplexity citations, write more research-flavored background pieces.

    This is the single most practical insight in the data set. Most people talk about “AI SEO” as if it is one thing. It is three things, at minimum, and the content shape that wins one model will not automatically win the other two.

    The crown jewel: one page, 17% of all AI-referred traffic

    The clearest cross-model winner on the site is /claude-student-discount. Claude sent 22 sessions. ChatGPT sent 3. Perplexity sent 2. Combined that is 27 sessions — roughly 17% of all AI-referred traffic we received in 29 days, from a single URL. No other page on the site is cited by all three major LLMs in meaningful volume.

    There is a playbook inside that one data point. The page works because the query “how do I get Claude for free as a student” is an extremely high-frequency question across every chat surface, and the page happens to be structured the way LLMs like to cite: a short, direct answer near the top, specific eligibility rules in a scannable block, and no wall of context before the reader gets to the fact. That structural recipe — front-load the answer, make the facts liftable, keep the page narrow — is repeatable.

    The bigger finding: 90% of our Claude content is invisible to AI

    tygartmedia.com has more than 250 Claude-related articles. Exactly 25 of them show up in the AI-referral data set at all. The 90% that do not get cited are not low-quality — several of them have strong engagement from regular search traffic:

    • /claude-managed-agents-complete-pricing-guide-2026 — 17 sessions at ~1 minute from search, zero AI citations
    • /notion-knowledge-base-for-claude — 10 sessions at 1m 23s, uncited
    • /claude-rate-limits — classic FAQ shape, 6 sessions, not cited
    • /claude-md-playbook — 1 session at 2m 33s, zero AI pickup
    • The full /claude-cowork-* family of 12+ pages, almost entirely invisible to every model

    The difference between an AI-cited page and an AI-invisible page is rarely the quality of the content. It is the shape. Pages that get cited have an early summary, short headings, bulleted facts, and a quotable direct-answer sentence. Pages that do not get cited tend to open with context, build up to the answer, and bury the quotable line in paragraph 9.

    The content-cluster scorecard

    Cluster | Approx. Pages | Approx. Sessions | Engagement | AI Citations
    Claude pricing & access | ~10 | ~160 | Mixed | High
    Claude managed agents | ~12 | ~130 | Strong (25s–1m) | Low
    Claude Code | ~8 | ~60 | High (18s–3m) | Moderate
    Model comparisons (X vs Y) | ~10 | ~45 | Very high (1–7 min) | Moderate
    Anthropic people/company | ~8 | ~30 | Medium | Moderate
    Claude how-to / tutorials | ~20 | ~50 | Medium | Low
    Claude Cowork family | ~15 | ~40 | Very low (0–10s) | Almost none

    Two clusters deserve action. The Claude Cowork family is a content swamp — 15 pages, low traffic, no AI citations, and 0–10 second engagement on the traffic that does land. That cluster should be consolidated into two or three flagship posts and the rest redirected. The model comparisons cluster is the opposite: low volume but 1–7 minutes of engagement and cross-model citations. One well-researched comparison post outperforms ten mediocre explainers on every metric that matters here.

    The playbook, in one list

    • Write more narrow single-answer pages. Candidates I would ship next: /claude-web-search, /claude-api-keys, /claude-max-plan-vs-pro, /how-to-cancel-claude, /claude-mobile-app, /claude-desktop-vs-web, /claude-subscription-refund. Each is ~600 words, answer-first, scannable. That is the shape Claude cites.
    • Add a Quick Answer block to the top of every long-form piece. Two or three sentences. Quotable. That alone moves a real share of our invisible content into AI-citation range.
    • Invest in comparison posts for ChatGPT pickup. We already know ChatGPT cites our existing X-vs-Y content. Ship more of them, with tight tables.
    • Write more founder/history/background pieces for Perplexity pickup. Research-flavored. Dates, names, primary sources.
    • Consolidate the Cowork cluster. Two or three flagship pages, everything else redirected.
    • Ship a permanent AI-Referral dashboard in GA4. Segment on all seven assistant domains. Watch it weekly. This is now a first-class channel.
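
    For that last item, a minimal sketch of what the weekly view can look like: pivot (source, landing page, sessions) rows, exported from the segment in Part 1, into a landing-page by assistant table. The sample rows below are taken from the tables earlier in this post.

      # Pivot AI-referral rows into a landing-page x assistant table.
      # In practice the rows come from a GA4 export or the Data API;
      # these sample rows are from the tables earlier in this post.

      import pandas as pd

      rows = [
          ("claude.ai", "/claude-student-discount", 22),
          ("claude.ai", "/anthropic-console", 21),
          ("chatgpt.com", "/claude-student-discount", 3),
          ("perplexity.ai", "/claude-student-discount", 2),
          ("perplexity.ai", "/anthropic-founders-2", 2),
      ]

      df = pd.DataFrame(rows, columns=["source", "landing_page", "sessions"])
      dashboard = (
          df.pivot_table(index="landing_page", columns="source",
                         values="sessions", aggfunc="sum", fill_value=0)
            .assign(total=lambda t: t.sum(axis=1))
            .sort_values("total", ascending=False)
      )
      print(dashboard)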

    Frequently asked questions

    What kinds of pages does Claude.ai cite most often?

    Based on the tygartmedia.com data, Claude.ai disproportionately cites short, factual, access-related pages — product info, pricing, how-to-access, and eligibility details. On our site, two pages (/claude-student-discount and /anthropic-console) accounted for 54.5% of all Claude-referred traffic in a 29-day window.

    What kinds of pages does ChatGPT cite most often?

    ChatGPT’s citation pattern favors comparison and long-tail reference pages — “X vs Y” posts like Grok vs Claude, model-to-model comparisons, and, surprisingly, some geographic and local content. ChatGPT tends to cite many pages once each rather than concentrating on a small set.

    What kinds of pages does Perplexity cite most often?

    Perplexity cites research-flavored content — founders and company history, domain-specific tutorials, and niche B2B pages. It is the only major AI assistant that sent traffic to our Anthropic founders page and to a vertical-specific training page in our data set.

    Why does the same page get different citation volume from different AI models?

    Because each assistant is answering a slightly different distribution of queries. Claude is most often used for “how do I use this product” questions and favors narrow how-to pages. ChatGPT receives more comparison and alternative-seeking queries. Perplexity skews toward research and background questions. A page that is the best answer for one query type will not automatically be the best answer for another.

    How do I structure a page to get cited by AI assistants?

    Lead with a direct, quotable answer in the first paragraph. Use short scannable headings. Keep facts in bulleted or tabular form. Include an explicit FAQ block with question-shaped subheadings. Keep the page narrow — one topic, one canonical answer — rather than a sprawling multi-topic explainer.

    The bigger picture

    The meta-insight worth sitting with: we are currently being cited inside Claude’s internal answer graph for “Claude student discount” because a human sat down and wrote a clear, narrow page about it. That is almost the entire game for publishers for the next three years. Most of the web has not noticed yet. We noticed, and now we have a measurement stack to act on what we noticed.

    If you are a publisher, the thing to do this week is boring and powerful: segment your GA4 on the seven AI-assistant domains from Part 1, sort your landing pages by AI-referral volume, and look at the pages that are winning. They will have a shape. Copy it.

    — If you missed it, Part 1 is here.

  • They Printed March Madness on My Guinness. I Haven’t Stopped Thinking About It.

    They Printed March Madness on My Guinness. I Haven’t Stopped Thinking About It.

    I was at Doyle’s last night for my wife’s birthday when the bartender slid a Guinness in front of me. On the foam head: the NCAA March Madness logo, printed in caramel brown like it belonged there. I forgot they did this. And then I couldn’t stop thinking about what it actually meant.

    Let me be clear about what I saw. A neighborhood bar in Tacoma had executed a national brand partnership — NCAA licensing, custom logo printing technology, a real experiential moment — and delivered it to me in a pint glass for maybe twelve bucks. The NCAA didn’t have to run a TV spot to get in front of me. They got in front of me at the exact moment I was already in a good mood, already spending money, already present.

    That’s not marketing. That’s infiltration. And it was brilliant.

    The Technology Behind the Pour

    The machine doing the printing is called a Ripple Maker. It’s a countertop device that uses food-safe ink and an inkjet-style system to print images directly onto foam — coffee, cocktails, beer heads. The company behind it, Ripples, has been running since around 2016. You can print anything: a logo, a photo, a QR code, a personalized message.

    For a bar like Doyle’s, it’s a few hundred dollars a month to run. For a national brand like the NCAA, it’s a scalable ambient media buy — get into bars running March Madness watch parties across the country, put your brand on every beer ordered during the game, and make it feel organic instead of promotional.

    The NCAA didn’t buy an ad. They bought a moment. There’s a meaningful difference between those two things.

    The NCAA didn’t buy an ad. They bought a moment. There’s a meaningful difference. An ad interrupts. A moment becomes part of the memory. I’m writing about this the next day. Nobody writes about a banner ad the next day.

    What Local Businesses Can Take From This

    Bartender using Ripple Maker foam printer to create branded beer at a bar
    The Ripple Maker prints directly onto foam — coffee, beer, cocktails. A $300/month experiential media channel most brands haven’t touched.

    Here’s where I start thinking about the businesses I work with — restoration contractors, lenders, cold storage operators, B2B service companies. Most of them are buying the same tired channels: Google Ads, Yelp, direct mail. They’re paying to interrupt people.

    What Doyle’s pulled off — even if they didn’t frame it this way — was contextual experiential marketing. The right message, delivered through the right medium, at the right moment, in a way that felt native to the environment. That’s the playbook. The technology is almost incidental.

    Small venues can execute national-brand-level experiential marketing for a few hundred dollars a month. The tech is there. The question is whether you have the creativity to find the right moment for your audience — and whether you’re willing to pay for a moment instead of an impression.

    The restoration contractor who sponsors the coffee at a claims adjuster’s office every Monday morning is doing the same thing. The cold storage company that puts their logo on the temperature monitoring printout that goes to the produce buyer every week is doing the same thing. You find the moment your customer is already present and mentally open, and you show up there — without asking anything of them.

    Why This Matters for Content Strategy

    I run a content agency. We build articles, landing pages, entity clusters — things designed to get found. And I believe in that work. But what Doyle’s reminded me is that not everything distributable is digital.

    The Guinness moment became a story I’m telling today. That story will probably become a LinkedIn post. That post might become a case study in a pitch deck. The physical moment seeded a digital content chain — and the NCAA got attribution in all of it without ever asking for it.

    That’s the loop worth understanding: physical moments, done well, generate organic digital content from the people who experience them. You don’t need to manufacture virality. You need to manufacture memorability.

    I don’t know how much Doyle’s pays for the Ripple Maker. I don’t know what the NCAA paid for the partnership. What I know is that it worked on me — a guy who builds content systems for a living and should theoretically be immune to this stuff. That’s the tell. When the marketing works on the skeptic, it’s really working.


    Happy birthday to my wife, Stef. Best Guinness I’ve had in a while — even if I spent most of it thinking about marketing instead of the moment. She’s used to it.