Category: AEO & AI Search

Google is not the only search engine anymore. Your next customer might find you through a ChatGPT answer, a Perplexity citation, or a Google AI Overview that pulls your content into the answer box. AEO is how restoration companies show up in the answer layer — featured snippets, People Also Ask, voice search, and zero-click results that put your name in front of decision-makers before they ever click a link.

AEO and AI Search covers answer engine optimization, featured snippet capture, People Also Ask strategies, voice search optimization, zero-click search positioning, AI Overview placement, and direct answer formatting for restoration industry queries across Google, Bing, ChatGPT, Perplexity, and Gemini.

  • The claude_delta Standard: How We Built a Context Engineering System for a 27-Site AI Operation

    What Is the claude_delta Standard?

    The claude_delta standard is a lightweight JSON metadata block injected at the top of every page in a Notion workspace. It gives an AI agent — specifically Claude — a machine-readable summary of that page’s current state, status, key data, and the first action to take when resuming work. Instead of fetching and reading a full page to understand what it contains, Claude reads the delta and often knows everything it needs in under 100 tokens.

    Think of it as a git commit message for your knowledge base — a structured, always-current summary that lives at the top of every page and tells any AI agent exactly where things stand.

    Why We Built It: The Context Engineering Problem

    Running an AI-native content operation across 27+ WordPress sites means Claude needs to orient quickly at the start of every session. Without any memory scaffolding, the opening minutes of every session are spent on reconnaissance: fetch the project page, fetch the sub-pages, fetch the task log, cross-reference against other sites. Each Notion fetch adds 2–5 seconds and consumes a meaningful slice of the context window — the working memory that Claude has available for actual work.

    This is the core problem that context engineering exists to solve. Over 70% of errors in modern LLM applications stem not from insufficient model capability but from incomplete, irrelevant, or poorly structured context, according to a 2024 RAG survey cited by Meta Intelligence. The bottleneck in 2026 isn’t the model — it’s the quality of what you feed it.

    We were hitting this ceiling. Important project state was buried in long session logs. Status questions required 4–6 sequential fetches. Automated agents — the toggle scanner, the triage agent, the weekly synthesizer — were spending most of their token budget just finding their footing before doing any real work.

    The claude_delta standard was the solution we built to fix this from the ground up.

    How It Works

    Every Notion page in the workspace gets a JSON block injected at the very top — before any human content. The format looks like this:

    {
      "claude_delta": {
        "page_id": "uuid",
        "page_type": "task | knowledge | sop | briefing",
        "status": "not_started | in_progress | blocked | complete | evergreen",
        "summary": "One sentence describing current state",
        "entities": ["site or project names"],
        "resume_instruction": "First thing Claude should do",
        "key_data": {},
        "last_updated": "ISO timestamp"
      }
    }

    The standard pairs with a master registry — the Claude Context Index — a single Notion page that aggregates delta summaries from every page in the workspace. When Claude starts a session, fetching the Context Index (one API call) gives it orientation across the entire operation. Individual page fetches only happen when Claude needs to act on something, not just understand it.
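    To make the orientation pass concrete, here’s a minimal Python sketch of delta extraction. The helper names are hypothetical, the parsing is deliberately naive (it assumes the delta is the only braced block on the page), and plain strings stand in for real Notion API fetches:

```python
import json
import re

def extract_delta(page_text: str):
    """Pull the claude_delta block from the top of a page's raw text.

    Naive by design: grabs the first-to-last brace span, which works
    when the delta is the only JSON block on the page.
    """
    match = re.search(r"\{.*\}", page_text, re.DOTALL)
    if not match:
        return None
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    return data.get("claude_delta")

def pages_needing_action(pages: dict) -> list:
    """Given {page_id: raw_text}, list pages whose delta shows open work."""
    open_states = {"not_started", "in_progress", "blocked"}
    return [
        page_id
        for page_id, text in pages.items()
        if (delta := extract_delta(text)) and delta.get("status") in open_states
    ]
```

    The point of the pattern: the filter runs on deltas alone. Full page fetches happen only for the page IDs this returns.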

    What We Did: The Rollout

    We executed the full rollout across the Notion workspace in a single extended session on April 8, 2026. The scope:

    • 70+ pages processed in one session, starting from a base of 79 and reaching 167 out of approximately 300 total workspace pages
    • All 22 website Focus Rooms received deltas with site-specific status and resume instructions
    • All 7 entity Focus Rooms received deltas linking to relevant strategy and blocker context
    • Session logs, build logs, desk logs, and content batch pages all injected with structured state
    • The Context Index updated three times during the session to reflect the running total

    The injection process for each page follows a read-then-write pattern: fetch the page content, synthesize a delta from what’s actually there (not from memory), inject at the top via Notion’s update_content API, and move on. Pages with active state get full deltas. Completed or evergreen pages get lightweight markers. Archived operational logs (stale work detector runs, etc.) get skipped entirely.
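    The read-then-write pattern can be sketched as two small functions. This is a sketch, not the production agent: the real write goes through Notion’s update_content API, which simple string concatenation stands in for here, and the function names are ours for illustration:

```python
import json
from datetime import datetime, timezone

def build_delta(page_id: str, page_type: str, status: str, summary: str,
                entities: list, resume_instruction: str,
                key_data: dict = None) -> dict:
    """Synthesize a delta from freshly fetched page content (never from memory)."""
    return {
        "claude_delta": {
            "page_id": page_id,
            "page_type": page_type,
            "status": status,
            "summary": summary,
            "entities": entities,
            "resume_instruction": resume_instruction,
            "key_data": key_data or {},
            "last_updated": datetime.now(timezone.utc).isoformat(),
        }
    }

def inject_delta(page_content: str, delta: dict) -> str:
    """Prepend the delta above all human content, as in the rollout."""
    return json.dumps(delta, indent=2) + "\n\n" + page_content
```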

    The Validation Test

    After the rollout, we ran a structured A/B test to measure the real impact: five questions that mimic real session-opening patterns — the kinds of things you’d actually say at the start of a workday.

    The results were clear:

    • 4 out of 5 questions answered correctly from deltas alone, with zero additional Notion fetches required
    • Each correct answer saved 2–4 fetches, or roughly 10–25 seconds of tool call time
    • One failure: a client checklist showed 0/6 complete in the delta when the live page showed 6/6 — a staleness issue, not a structural one
    • Exact numerical data (word counts, post IDs, link counts) matched the live pages to the digit on all verified tests

    The failure mode is worth understanding: a delta becomes stale when a page gets updated after its delta was written. The fix is simple — check last_updated before trusting a delta on any in_progress page older than 3 days. If it’s stale, a single verification fetch is cheaper than the 4–6 fetches that would have been needed without the delta at all.
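    That age check is simple enough to express directly. A sketch, assuming last_updated is stored as a timezone-aware ISO-8601 string as in the format above:

```python
from datetime import datetime, timedelta, timezone

def needs_verification(delta: dict, max_age_days: int = 3) -> bool:
    """True when an in_progress delta is stale enough to warrant one live fetch."""
    if delta.get("status") != "in_progress":
        return False  # complete/evergreen pages are trusted as-is
    last = datetime.fromisoformat(delta["last_updated"])
    return datetime.now(timezone.utc) - last > timedelta(days=max_age_days)
```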

    Why This Matters Beyond Our Operation

    2025 was the year of “retention without understanding.” Vendors rushed to add retention features — from persistent chat threads and long context windows to AI memory spaces and company knowledge base integrations. AI systems could recall facts, but still lacked understanding. They knew what happened, but not why it mattered, for whom, or how those facts relate to each other in context.

    The claude_delta standard is a lightweight answer to this problem at the individual operator level. It’s not a vector database. It’s not a RAG pipeline. The conventional solution is long-term memory that lives outside the model, usually in vector databases for quick retrieval; because it’s external, that memory can grow, update, and persist beyond the model’s context window. But vector databases are infrastructure — they require embedding pipelines, similarity search, and significant engineering overhead.

    What we built is something a single operator can deploy in an afternoon: a structured metadata convention that lives inside the tool you’re already using (Notion), updated by the AI itself, readable by any agent with Notion API access. No new infrastructure. No embeddings. No vector index to maintain.

    Context Engineering is a systematic methodology that focuses not just on the prompt itself, but on ensuring the model has all the context needed to complete a task at the moment of LLM inference — including the right knowledge, relevant history, appropriate tool descriptions, and structured instructions. If Prompt Engineering is “writing a good letter,” then Context Engineering is “building the entire postal system.”

    The claude_delta standard is a small piece of that postal system — the address label that tells the carrier exactly what’s in the package before they open it.

    The Staleness Problem and How We’re Solving It

    The one structural weakness in any delta-based system is staleness. A delta that was accurate yesterday may be wrong today if the underlying page was updated. We identified three mitigation strategies:

    1. Age check rule: For any in_progress page with a last_updated more than 3 days old, always verify with a live fetch before acting on the delta
    2. Agent-maintained freshness: The automated agents that update pages (toggle scanner, triage agent, content guardian) should also update the delta in the same API call
    3. Context Index timestamp: The master registry shows its own last-updated time, so you know how fresh the index itself is

    None of these require external tooling. They’re behavioral rules baked into how Claude operates on this workspace.

    What’s Next

    The rollout is at 167 of approximately 300 pages. The remaining ~130 pages include older session logs from March, new client project sub-pages, the Technical Reference domain sub-pages, and a tail of Second Brain auto-entries. These will be processed in subsequent sessions using the same read-then-inject pattern.

    The longer-term evolution of this system points toward what the field is calling Agentic RAG — an architecture that upgrades the traditional “retrieve-generate” single-pass pipeline into an intelligent agent architecture with planning, reflection, and self-correction capabilities. The BigQuery operations_ledger on GCP is already designed for this: 925 knowledge chunks with embeddings via text-embedding-005, ready for semantic retrieval when the delta system alone isn’t enough to answer a complex cross-workspace query.

    For now, the delta standard is the right tool for the job — low overhead, human-readable, self-maintaining, and already demonstrably cutting session startup time by 60–80% on the questions we tested.

    Frequently Asked Questions

    What is the claude_delta standard?

    The claude_delta standard is a structured JSON metadata block injected at the top of Notion pages that gives AI agents a machine-readable summary of each page’s current status, key data, and next action — without requiring a full page fetch to understand context.

    How does claude_delta differ from RAG?

    RAG (Retrieval-Augmented Generation) uses vector embeddings and semantic search to retrieve relevant chunks from a knowledge base. The claude_delta standard is a simpler, deterministic approach: a structured summary at a known location in a known format. RAG scales to massive knowledge bases; claude_delta is designed for a single operator’s structured workspace where pages have clear ownership and status.

    How do you prevent delta summaries from going stale?

    Every delta carries a last_updated timestamp. Any delta on an in_progress page older than 3 days triggers a verification fetch before Claude acts on it. Automated agents that modify pages are also expected to update the delta in the same API call.

    Can this approach work for other AI systems besides Claude?

    Yes. The JSON format is model-agnostic. Any agent with Notion API access can read and write claude_delta blocks. The standard was designed with Claude’s context window and tool-call economics in mind, but the pattern applies to any agent that needs to orient quickly across a large structured workspace.

    What is the Claude Context Index?

    The Claude Context Index is a master registry page in Notion that aggregates delta summaries from every processed page in the workspace. It’s the first page Claude fetches at the start of any session — a single API call that provides workspace-wide orientation across all active projects, tasks, and site operations.

  • AI Citation Monitoring: How to Know If ChatGPT and Claude Are Actually Talking About You

    What is AI citation monitoring? AI citation monitoring is the practice of systematically tracking whether generative AI systems — including ChatGPT, Claude, Perplexity, Google AI Overviews, and similar tools — are citing, referencing, or recommending your content when users ask relevant questions. It’s the GEO equivalent of rank tracking: instead of asking “where do I rank on Google?”, you’re asking “does AI think I’m worth mentioning?”

    Here’s a scenario that’s playing out right now across thousands of websites: a business owner spends months creating genuinely excellent content. It ranks well. People find it. The traffic dashboards look good. And then, quietly, something changes. Fewer people are clicking through from Google. The traffic dips but the rankings haven’t moved. What happened?

    AI happened. Specifically: AI search features are now answering questions directly — and the content they choose to summarize, reference, or cite is not necessarily the content that ranks #1. It’s the content that AI systems have determined is trustworthy, factual, well-structured, and authoritative. Whether that’s you depends on whether you’ve been paying attention.

    AI citation monitoring is how you pay attention.

    Why AI Citations Are a New Category of Search Visibility

    Traditional SEO gave us a clean, rankable world. Query goes in, ten blue links come out, you live or die by position one through ten. The metrics were unambiguous. Either you’re visible or you’re not.

    AI search doesn’t work that way. When someone asks ChatGPT a question, they don’t get ten links — they get an answer. That answer might cite your content, paraphrase it without attribution, or ignore it entirely in favor of a competitor whose content happened to be better structured for machine consumption. There’s no “position 1” equivalent. There’s cited, mentioned, or absent.

    This creates a new visibility dimension that most businesses aren’t tracking at all. They’re optimizing for Google’s traditional index while AI systems quietly form opinions about whose content is worth recommending — and those opinions are influencing a growing share of how people discover information.

    According to data from Semrush and BrightEdge, AI Overviews now appear in roughly 13–15% of all Google searches in the US as of early 2026 — disproportionately for informational queries, which are exactly the queries that content marketing is designed to capture. If your content isn’t getting cited in those overviews, you’re invisible to a significant portion of your potential audience.

    What AI Citation Monitoring Actually Involves

    AI citation monitoring has three core components — and they require different approaches because each AI system works differently.

    Google AI Overviews monitoring. This is the highest-volume opportunity for most businesses. Google’s AI Overviews appear at the top of search results for qualifying queries and pull from indexed web content. You can monitor citation appearances using rank tracking tools that have added AI Overview detection — Semrush, Ahrefs, and SE Ranking all have versions of this. The manual approach: run your target queries in a fresh browser session and note whether your domain appears in any AI Overview source citations.

    Perplexity monitoring. Perplexity is citation-native — it almost always shows source links. This makes it easier to monitor: run your core queries directly in Perplexity and see what it cites. You can do this manually at scale by building a query list and running it weekly. There are also emerging tools like Profound and Otterly.ai that automate Perplexity citation tracking.

    ChatGPT and Claude monitoring. These are harder because responses vary by session, model version, and user phrasing. The practical approach is prompt-based: run 10–20 of your highest-value queries as ChatGPT and Claude prompts asking for recommendations or explanations. Note whether your brand or content gets mentioned. Do this monthly. It’s not a perfect signal, but patterns emerge — if you’re never mentioned across 20 queries where you should be, that tells you something.
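    Once the responses are collected — by hand or via API — the bookkeeping is trivial to automate. A minimal sketch (mention_rate is a hypothetical helper; gathering the response text is the part this glosses over):

```python
def mention_rate(responses: dict, brand_terms: list) -> float:
    """Fraction of {query: response_text} entries mentioning any brand term."""
    if not responses:
        return 0.0
    hits = sum(
        1
        for text in responses.values()
        if any(term.lower() in text.lower() for term in brand_terms)
    )
    return hits / len(responses)
```

    A rate near zero across 20 relevant queries is exactly the signal described above: consistently absent where you should be present.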

    How to Set Up AI Citation Monitoring Without Losing Your Mind

    The good news: you don’t need a $500/month enterprise tool to get started. Here’s a working system using mostly free or low-cost resources:

    1. Build your query list. Identify 20–30 informational queries that your ideal customers are likely asking AI systems. These should be questions your content already attempts to answer — the alignment matters. If you write about franchise marketing, your queries might include “how does SEO work for franchise locations” or “best marketing strategy for restoration franchises.”
    2. Run baseline checks. Go through each query manually in Perplexity, ChatGPT, and Google (looking for AI Overviews). Document what gets cited, mentioned, or surfaced. This is your Day 0 benchmark.
    3. Set a monitoring cadence. Monthly is realistic for most teams. Weekly if your content velocity is high or you’re actively running a GEO optimization campaign. Quarterly is the absolute minimum if you want to catch trends before they become problems.
    4. Track changes over time. A simple spreadsheet — query, platform, date, your citation (yes/no), competitor citations — is enough to start seeing patterns. You’re looking for: which queries you consistently appear in, which you never appear in, and which competitors keep showing up instead of you.
    5. Use the gaps to drive content decisions. Every query where a competitor gets cited and you don’t is a content gap — either you don’t have content on that topic, or your existing content isn’t structured in a way AI systems can easily extract and cite. Fix one or the other.
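    Step 4’s spreadsheet makes step 5 mechanical. A minimal sketch, assuming each row mirrors the tracking-sheet columns above (citation_gaps is a hypothetical helper name):

```python
def citation_gaps(rows: list) -> list:
    """Queries where a competitor is cited and you are not — the content gap list.

    rows: dicts with keys query, platform, date, cited, competitor_cited
    (the last two as "yes"/"no"), matching the tracking-sheet columns.
    """
    return sorted({
        row["query"]
        for row in rows
        if row["cited"] == "no" and row["competitor_cited"] == "yes"
    })
```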

    What Makes Content More Likely to Get Cited by AI

    AI citation isn’t random. Systems like Perplexity and Google AI Overviews have consistent preferences, and understanding them is the foundation of any effective AI content monitoring and optimization strategy.

    Factual density. AI systems prefer content that makes specific, verifiable claims over vague generalizations. “Email marketing generates $42 in return for every $1 spent, according to Litmus’s 2023 State of Email report” is more citable than “email marketing has great ROI.” Specificity signals reliability.

    Clear question-and-answer structure. Content that explicitly poses a question as a heading and answers it directly in the following paragraph is easy for AI systems to extract. This is Answer Engine Optimization (AEO) in practice — and it’s directly correlated with AI citation frequency.

    Author authority signals. Named authors with associated credentials, social profiles, and a content history perform better in AI citation environments than anonymous or brand-attributed content. The E-E-A-T framework Google uses for quality evaluation translates directly to AI citability.

    Entity saturation. Content that correctly identifies and accurately describes key entities in a topic area — named people, organizations, products, concepts — is easier for AI to contextualize and cite accurately. Vague content gets paraphrased. Entity-rich content gets cited.

    The Monitoring Stack We Use at Tygart Media

    For monitoring AI citations across our managed sites, we run a combination of automated and manual checks. The automated layer uses rank trackers with AI Overview detection — primarily Semrush’s AI Overview tracker — combined with custom scripts that run Perplexity queries via API and log citation appearances to a shared tracking sheet.

    The manual layer is a monthly prompt audit: 20 queries run through ChatGPT-4o and Claude Sonnet, logged and compared to the previous month. It takes about 45 minutes per site and surfaces patterns that automated tools miss — particularly for conversational queries where phrasing variations change AI behavior significantly.

    What we’ve learned: citation frequency is strongly correlated with content structure, not just content quality. A well-structured 800-word post with clear headers and explicit answer formatting consistently outperforms a sprawling 3,000-word post that buries the answer in paragraph five. AI systems are extracting, not reading.

    Frequently Asked Questions About AI Citation Monitoring

    What is AI citation monitoring?

    AI citation monitoring is the practice of tracking whether AI-powered search tools and chatbots — including Google AI Overviews, Perplexity, ChatGPT, and Claude — are citing, referencing, or recommending your website’s content when users ask relevant questions. It’s a form of search visibility measurement designed for the generative AI era.

    Why does AI citation monitoring matter for SEO?

    AI-generated answers in Google, Perplexity, and other platforms are now intercepting click traffic that would previously have gone to organically ranked content. If AI systems cite your competitors but not you when answering questions in your category, you’re losing visibility and traffic that traditional rank tracking won’t show you.

    How can I track if ChatGPT is citing my website?

    Run your target queries directly in ChatGPT and note whether your brand or domain appears in the response or sources. Because ChatGPT responses vary by session, run each query two to three times. For systematic tracking, build a query list and run it monthly, logging results to a spreadsheet. Emerging tools like Profound.ai offer automated ChatGPT citation monitoring.

    What is the difference between AI citation monitoring and GEO?

    AI citation monitoring is a measurement practice — it tells you whether AI systems are currently citing you. Generative Engine Optimization (GEO) is the optimization practice — it covers the content structure, entity signals, and authority markers that make your content more likely to be cited. Monitoring tells you where you are. GEO is how you improve it.

    How often should I run AI citation monitoring?

    Monthly monitoring is a practical baseline for most businesses. If you’re actively publishing and optimizing content, weekly checks let you correlate content changes with citation frequency more precisely. Quarterly is the minimum for any site that wants to stay aware of AI search trends in their category.

  • AEO, GEO, SEO Is the New Social Media

    The Feed Changed. You Just Didn’t Notice.

    Social media trained an entire generation of marketers to think in formats. Carousel or Reel. Thread or Story. 30 seconds or 60. Vertical or square. We built content calendars around what the algorithm wanted to see, not what the audience actually needed to know.

    That era is ending — not because social platforms are dying, but because the consumer sitting on the other side of the screen is changing. Increasingly, the first “person” to read your content isn’t a person at all. It’s an AI agent — a chatbot, an assistant, a search model — pulling information on behalf of someone who asked a question.

    And that changes everything about what “social” means.

    When the Consumer Is a Bot, the Format Doesn’t Matter

    The entire social media economy is built on format constraints. Instagram rewards visual-first. LinkedIn rewards text-heavy thought leadership with engagement bait hooks. TikTok rewards pace and pattern interrupts. Twitter rewards brevity and provocation. Every platform has its own grammar, its own algorithm, its own definition of “good content.”

    But when the consumer is an AI model — Claude, ChatGPT, Gemini, Perplexity, a Google AI Overview — format is irrelevant. What matters is the substance. The depth. The accuracy. The authority.

    An AI agent doesn’t care about your hook. It cares about whether your content actually answers the question its user asked. It doesn’t care about your carousel design. It cares about whether your claims are sourced, your entities are clear, and your expertise is demonstrable.

    This is what AEO, GEO, and SEO — the modern trifecta — actually represent. They aren’t just search optimization tactics. They are the new social media distribution layer.

    No-Click Impressions Are the New Likes

    In the social media world, the metric that matters is the impression. Someone saw your post. If they liked it, they tapped a heart. If they really liked it, they commented or shared. That engagement signaled to the algorithm that your content was worth showing to more people.

    The same feedback loop now exists in AI-mediated search — it just looks different.

    When your website content appears in a Google AI Overview, that’s an impression. When Perplexity cites your page in an answer, that’s engagement. When ChatGPT recommends your business in response to a user query, that’s a referral. When someone reads an AI-generated summary of your expertise and then calls your office, that’s a conversion.

    The funnel is the same. The channel changed.

    And here’s the part most marketers are missing: you don’t need to chase a trend to earn these impressions. You don’t need to dance. You don’t need a hook. You need good information, structured well, written with genuine expertise, and optimized so AI systems can find it, trust it, and cite it.

    The Passion Advantage

    Social media has an alignment problem. The content that performs best on social platforms is often not the content the creator cares most about. It’s the content that matches the algorithm’s preferences. This creates a grinding misalignment — business owners and marketers spending hours producing content they don’t particularly care about, in formats they didn’t choose, for an audience they can’t directly reach.

    AEO/GEO/SEO flips that equation.

    When you write deep, authoritative website content about the thing you actually know — the thing you’ve spent years mastering — AI systems notice. They learn your expertise. They map your authority. And they start recommending you to people who are actively looking for exactly what you do.

    The data that learns you, learns them.

    That’s not a slogan. It’s how the technology works. Large language models build representations of entities — businesses, people, topics — based on the depth and consistency of the information available about them. The more you write about what you genuinely know, the stronger that representation becomes. The stronger it becomes, the more often AI systems surface you as the answer.

    This is the exact opposite of social media’s content treadmill. Instead of chasing what’s trending, you go deeper into what you already know. Instead of adapting to a platform’s format, you write for substance. Instead of fighting for attention, you earn citation.

    Website Content Is Now the Most Social Thing You Can Do

    Here’s the reframe that matters: your website is no longer a brochure. It’s your most important social channel.

    Every page you publish is a node in a knowledge graph that AI systems are actively reading, indexing, and reasoning about. Every article you write is a potential answer to a question someone hasn’t asked yet. Every entity you define, every claim you source, every FAQ you structure — these are the signals that determine whether your business shows up when someone asks an AI “who should I call for this?”

    Social media posts disappear in 24 hours. Website content compounds. A well-optimized article written today can be cited by AI systems for years. It doesn’t need an algorithm boost. It doesn’t need paid promotion. It needs to be right, and it needs to be findable.

    That’s what modern SEO, AEO, and GEO deliver — not tricks, not hacks, but the infrastructure that makes your expertise machine-readable and AI-citable.

    What This Means for Your Business

    If you’re spending 80% of your marketing effort on social media and 20% on your website, you have the ratio backwards. The businesses that will dominate in an AI-mediated world are the ones investing in deep, authoritative web content — content that answers real questions, demonstrates genuine expertise, and is structured for the machines that are now the first readers of everything published online.

    The feed changed. The question is whether you’ll keep posting for an algorithm, or start publishing for the intelligence layer that’s replacing it.

  • The Freelancer’s AEO Gap: Your Clients’ Content Is Ranking but Nobody’s Quoting It

    Rankings Aren’t the Finish Line Anymore

    You did the work. The client’s target page ranks in the top five for their primary keyword. Traffic is up. The monthly report looks good. But something is shifting underneath those numbers that most freelance SEO consultants haven’t had time to fully reckon with.

    Search engines aren’t just ranking content anymore — they’re quoting it. Featured snippets pull a direct answer and display it above position one. People Also Ask boxes expand with quoted passages from pages across the web. Voice assistants read a single answer aloud and move on. The result that gets quoted wins a fundamentally different kind of visibility than the result that merely ranks.

    If your client ranks number three for a high-value query but another site owns the featured snippet, your client is invisible in the most prominent real estate on that search results page. They did the SEO work. They just didn’t do the answer engine optimization work. That’s the gap.

    What Answer Engine Optimization Actually Involves

    AEO isn’t a rebrand of SEO. It’s a different optimization target with different structural requirements. Where SEO focuses on signals that help a page rank — authority, relevance, technical health, backlinks — AEO focuses on signals that help a page get quoted.

    The structural pattern for capturing a paragraph featured snippet is specific: a question phrased as a heading, followed immediately by a concise direct answer, followed by expanded depth. The direct answer needs to be tight — search engines typically pull passages that function as standalone responses. Too long and it gets truncated. Too short and it lacks the specificity that earns selection.

    For list-format snippets, the content needs ordered or unordered lists with clear, parallel structure. For table snippets, the data needs to live in actual HTML tables with proper header rows. Each format has its own structural requirements, and the same page might need different sections optimized for different snippet formats depending on the queries it targets.

    Then there’s the schema layer. FAQPage schema tells search engines explicitly which questions the page answers. HowTo schema structures step-by-step processes. Speakable schema identifies which sections are suitable for voice readback. These aren’t optional enhancements anymore — they’re the markup that makes content machine-readable in the way answer engines expect.
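    Of the three, FAQPage is the most mechanical to generate: a fixed JSON-LD shape filled from question/answer pairs. A minimal sketch (faq_schema is a hypothetical helper; the keys follow schema.org’s published FAQPage structure):

```python
import json

def faq_schema(pairs: list) -> str:
    """Render (question, answer) pairs as FAQPage JSON-LD for a <script> tag."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)
```

    The output goes into a `<script type="application/ld+json">` tag in the page head or body; Google’s Rich Results Test will confirm the markup validates.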

    Why This Is a Bandwidth Problem, Not a Knowledge Problem

    You probably know most of this already. You’ve read about featured snippets. You’ve seen the schema documentation. The gap isn’t ignorance — it’s implementation. Restructuring every piece of client content for snippet capture, writing FAQ sections that target real PAA clusters, implementing and validating schema markup, monitoring which snippets you’ve won and which you’ve lost — that’s a significant amount of additional work on top of the SEO fundamentals you’re already delivering.

    For a freelance consultant managing multiple clients, adding a full AEO layer to every engagement means either raising your rates significantly, working more hours, or cutting corners somewhere else. None of those options feel great.

    The Middleware Solution

    This is where the plugin model works. Instead of becoming an AEO specialist yourself, you plug in someone who already built the infrastructure. I run AEO optimization passes on your clients’ published content — restructuring key sections for snippet capture, writing FAQ sections that target actual question clusters in your client’s space, generating and injecting the appropriate schema markup, and monitoring results.

    The work runs through your client’s existing WordPress installation via the REST API. Nothing changes about their site architecture, their theme, their plugins, or their hosting. The content that’s already ranking gets restructured to also compete for direct answer placements. New content gets AEO-optimized from the start.
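    The REST API workflow described above can be sketched roughly as follows. This is a generic illustration, not the actual tooling: the endpoint is WordPress's standard /wp-json/wp/v2/posts route, the site URL, post ID, and credentials are placeholders, and the real HTTP call is left commented out:

```python
import base64
import json

def build_post_update(site_url, post_id, new_content, user, app_password):
    """Assemble the URL, headers, and body for a WordPress REST API
    post update using Application Password (Basic) auth."""
    url = f"{site_url.rstrip('/')}/wp-json/wp/v2/posts/{post_id}"
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"content": new_content})
    return url, headers, body

url, headers, body = build_post_update(
    "https://client-site.example", 42,
    "<p>Restructured direct-answer block goes here.</p>",
    "editor", "xxxx xxxx xxxx xxxx",
)
# An actual update would send it, e.g. with the requests library:
# requests.post(url, headers=headers, data=body, timeout=30)
```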

    You report the results to your client the same way you report everything else. Featured snippet wins. PAA placements. Voice search visibility. These are tangible outcomes that clients can see when they search their own terms — which makes them some of the most powerful proof points in any reporting conversation.

    What This Looks Like in Practice

    Say you have a client in the home services space. They rank well for several high-intent queries. You’ve done strong on-page work and their content is solid. But a competitor owns the featured snippet for their most valuable keyword — the one that drives the most qualified leads.

    I look at that snippet, analyze the structure of the content that currently holds it, identify the format (paragraph, list, table), and restructure your client’s content to compete for that placement. I write a direct answer block that addresses the query more completely and more concisely. I add FAQ schema targeting the related PAA questions. I check whether speakable schema makes sense for voice search on that topic.

    The optimization runs through the API. Your client’s post is updated. Within the next crawl cycle, the restructured content starts competing for the snippet. Sometimes it wins quickly. Sometimes it takes a few iterations. But the content is now structurally built to compete for answer placements — something it wasn’t doing before, no matter how well it ranked.


    The Client Conversation

    Your clients don’t need to understand AEO methodology. They understand “your company is now the answer Google shows when someone asks this question.” They understand “when someone asks their voice assistant about this service, your business is the one that gets recommended.” Those are outcomes, not techniques. And they’re outcomes that differentiate your service from every other SEO consultant who’s still reporting rankings and traffic without addressing the answer layer.

    Frequently Asked Questions

    How long does it take to win a featured snippet after AEO optimization?

    It varies by competition and query. Some snippets flip within days of restructured content being crawled. Others take weeks of iteration. The structural optimization puts your client’s content in position to compete — the timeline depends on how strong the current snippet holder is and how frequently Google recrawls the page.

    Does AEO optimization ever hurt existing rankings?

    When done properly, no. The structural changes — adding direct answer blocks, FAQ sections, schema markup — add value to existing content without removing or diluting the elements that earned the current ranking. The optimization is additive, not substitutive.

    Can you do AEO on content I’ve already written and published?

    That’s the primary use case. Published content that’s already ranking is the best candidate for AEO optimization because it has existing authority. The restructuring work makes that authority visible to answer engines, not just traditional ranking algorithms.

    What if my client uses a page builder like Elementor or Divi?

    The optimization runs through the WordPress REST API at the content level. Page builders manage layout and design — the AEO work happens in the content blocks themselves. Schema gets injected at the post level. In most cases, page builders don’t interfere with AEO optimization, but we’d verify compatibility for any specific setup before making changes.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The Freelancer's AEO Gap: Your Clients' Content Is Ranking but Nobody's Quoting It",
      "description": "Your SEO work gets clients to page one. AEO gets them quoted directly in search results. Here's why that gap matters and how to close it without becoming ",
      "datePublished": "2026-04-03",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-freelancers-aeo-gap-your-clients-content-is-ranking-but-nobodys-quoting-it/"
      }
    }

  • Schema Isn’t Your Job. But Your Clients Need It Done.

    The Invisible Layer That Connects Everything

    If SEO is about getting found, AEO is about getting quoted, and GEO is about getting cited by AI — schema markup is the wiring that makes all three possible. It’s the structured data layer that tells machines exactly what your client’s content means, who created it, what organization stands behind it, and how it all connects.

    Without schema, search engines and AI systems have to guess. They read the content and infer meaning from context. Sometimes they get it right. Sometimes they don’t. With proper schema markup, there’s no guessing. The machines know this is a how-to guide written by a licensed contractor at a specific company that serves a specific region. They know which questions the page answers. They know which sections are suitable for voice readback. They know the entity relationships between the author, the organization, and the topic.

    That clarity is what separates content that merely ranks from content that gets selected for featured snippets, cited by AI systems, and surfaced in knowledge panels. Schema is the bridge between good content and machine understanding of that content.

    Why Most Freelance SEO Consultants Skip It

    Let’s be honest. Schema markup is technical, tedious, and time-consuming. Writing valid JSON-LD, testing it in Google’s Rich Results Test, debugging validation errors, keeping up with schema.org’s evolving vocabulary, implementing it correctly within WordPress without breaking the theme — it’s developer-adjacent work that most SEO consultants would rather not touch.

    And historically, you could get away with skipping it. Rankings were driven primarily by content quality, backlinks, and technical SEO fundamentals. Schema was a nice-to-have. A bonus. Something you’d recommend in an audit but rarely implement yourself.

    That’s changing. Featured snippet selection increasingly favors pages with FAQ schema. AI systems give weight to content with clear entity markup. Rich results in search — star ratings, FAQ dropdowns, how-to steps, event details — require schema to appear. The “nice-to-have” became a competitive advantage, and it’s trending toward a baseline expectation.

    The Schema Types That Actually Matter

    Not every schema type is worth implementing for every client. The ones that move the needle for most business websites are specific and practical.

    Organization schema establishes the business as a recognized entity — name, logo, contact information, social profiles, founding date. This is the foundation that everything else builds on. Without it, AI systems don’t have a clear entity to associate with the content.

    FAQPage schema tells search engines which questions a page answers and provides the answer text. This is the schema type most directly connected to featured snippet and PAA selection. When a page has FAQ schema that matches a user’s query, search engines have a structured signal that this page is an answer source.

    HowTo schema structures step-by-step content in a way that enables rich results — the expandable how-to cards that appear in search results with numbered steps. For service businesses, this can dramatically improve visibility for process-oriented queries.

    Article schema with author markup connects content to specific people with specific expertise. This feeds E-E-A-T signals and helps AI systems evaluate whether the content comes from a credible source.

    Speakable schema identifies which sections of a page are suitable for text-to-speech — enabling voice assistants to read your client’s content aloud as the answer to a voice query.
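    To make the HowTo and Speakable types from the list above concrete, here is a hedged sketch that assembles both for a hypothetical process page; the step text, CSS selectors, and helper names are invented for illustration:

```python
import json

def build_howto(name, steps):
    """schema.org HowTo: each step becomes a HowToStep with its own text."""
    return {
        "@context": "https://schema.org",
        "@type": "HowTo",
        "name": name,
        "step": [
            {"@type": "HowToStep", "position": i, "text": text}
            for i, text in enumerate(steps, start=1)
        ],
    }

def build_speakable(css_selectors):
    """SpeakableSpecification pointing voice assistants at readable sections."""
    return {
        "@type": "SpeakableSpecification",
        "cssSelector": css_selectors,
    }

howto = build_howto(
    "How to document water damage for an insurance claim",
    ["Photograph all affected areas before cleanup.",
     "List damaged items with approximate values.",
     "Contact your insurer within the policy's reporting window."],
)
speakable = build_speakable([".direct-answer", ".faq-section"])
print(json.dumps(howto, indent=2))
```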

    How I Handle Schema as a Plugin

    When I plug into a freelance consultant’s operation, schema implementation is one of the layers I bring. I audit the client’s existing schema (usually there’s very little — maybe a basic plugin adding minimal markup). I determine which schema types are most impactful for their business type, industry, and content. Then I generate and inject the structured data through the WordPress REST API.

    The schema is valid JSON-LD — the format Google recommends. It’s injected at the post level, so it doesn’t depend on the theme or any specific plugin. If the client switches themes, the schema stays. If they deactivate a plugin, the schema stays. It’s embedded in the content layer, not the presentation layer.
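    One way to embed schema in the content layer rather than the presentation layer, as described above, is to append a JSON-LD script block to the post body itself. The helper below is an illustration of that idea, not the production implementation:

```python
import json

SCRIPT_TEMPLATE = '<script type="application/ld+json">{payload}</script>'

def inject_schema(post_html, schema_dict):
    """Append a JSON-LD script block to the post HTML so the markup
    survives theme or plugin changes."""
    payload = json.dumps(schema_dict, ensure_ascii=False)
    block = SCRIPT_TEMPLATE.format(payload=payload)
    return post_html.rstrip() + "\n" + block

updated = inject_schema(
    "<p>Existing ranked content stays untouched.</p>",
    {"@context": "https://schema.org", "@type": "Article",
     "headline": "Example headline"},
)
```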

    For clients with multiple locations, I build location-specific schema that establishes each location as a distinct entity with its own address, service area, and contact information — all connected to the parent organization. For clients with key personnel whose expertise matters (consultants, attorneys, medical professionals), I add person schema that establishes individual authority signals.

    I also maintain the schema over time. When new content gets published, it gets appropriate schema. When schema.org updates its vocabulary with new properties or types, I update existing markup. When Google changes its rich result requirements, the schema adapts. This isn’t a one-time implementation — it’s an ongoing layer of structural optimization.

    What Schema Does for Your Client Reports

    Schema wins are some of the most visually compelling results you can show a client. Rich results stand out in search pages — FAQ dropdowns, star ratings, how-to cards, knowledge panel enhancements. When a client sees their search result taking up twice the space of a competitor’s plain blue link, they understand the value immediately without needing a technical explanation.

    Google Search Console also reports on structured data — which schema types are detected, any validation errors, and which pages generate rich results. That data feeds directly into your existing reporting workflow. You can show the client exactly which pages have enhanced search presence through schema and track the impact over time.

    The Bottom Line for Freelancers

    Schema implementation is work that needs to happen for your clients. It connects the dots between SEO, AEO, and GEO. It enables rich results, featured snippet selection, voice search readback, and AI citation clarity. But it’s technical, time-consuming, and ongoing — which makes it a perfect candidate for the plugin model. You don’t need to become a schema expert. You need someone who already is, plugged into your operation, handling the implementation while you handle the strategy and the relationship.

    Frequently Asked Questions

    Do SEO plugins like Yoast or Rank Math handle schema adequately?

    SEO plugins add basic schema — usually Article or WebPage markup and simple organization data. They don’t generate the strategic schema types that drive AEO and GEO results: FAQPage with targeted questions, HowTo with structured steps, Speakable for voice, or the entity relationship architecture that helps AI systems understand expertise signals. Plugin-generated schema is a starting point, not a solution.

    Can schema markup hurt a site if done wrong?

    Invalid schema or schema that misrepresents content can trigger manual actions from Google. That’s why implementation matters — the markup needs to be valid, accurate, and aligned with what the page actually contains. This is another reason schema is better handled by someone with specific experience rather than generated by a generic tool.

    How many pages on a typical client site need schema work?

    Organization schema goes on every page (usually site-wide). Beyond that, priority goes to the pages with the most search visibility potential — service pages, key blog posts, FAQ pages, how-to content. For a typical small business site, that might mean strategic schema on the homepage, service pages, and top-performing content — not necessarily every page.

  • What ‘Search’ Means Now: A Practical Guide for Freelance SEO Consultants Navigating the AI Shift

    Search Fragmented. Your Strategy Needs to Follow.

    When you started doing SEO, “search” meant Google. Ten blue links. Maybe Yahoo or Bing on the margins. You optimized for one algorithm, one results page, one set of ranking factors. The game was complex but the playing field was singular.

    That’s not the world your clients operate in anymore. Their potential customers search through Google’s traditional results, Google’s AI Overviews, ChatGPT’s search integration, Perplexity’s answer engine, Claude’s knowledge base, voice assistants on phones and smart speakers, and whatever new AI-powered search interface launches next quarter. Each surface has different selection criteria. Each one determines visibility through different signals.

    As a freelance SEO consultant, you’re being asked — explicitly or implicitly — to keep your clients visible across all of these surfaces. That’s a reasonable expectation from the client’s perspective. They pay you for search visibility, and search now happens in more places than it did when you started.

    The question is how you deliver on that expanding expectation without becoming a different person.

    The Three Surfaces, Simplified

    Strip away the jargon and search visibility now operates on three surfaces. They overlap but they’re not the same.

    Surface one is traditional organic search. Google, Bing, their traditional ranking algorithms. This is what SEO has always addressed. Authority signals, relevance signals, technical health, backlinks, content quality. Your bread and butter. Still important. Still driving the majority of search-driven business outcomes for most industries.

    Surface two is answer engines. Featured snippets, People Also Ask, voice search responses, direct answer boxes. These surfaces pull content from the same web as traditional search but select it based on different criteria — structural clarity, direct answer quality, schema markup, content format. A page can rank number one and still not own the featured snippet. The optimization requirements are related to but distinct from traditional SEO.

    Surface three is generative AI. ChatGPT, Perplexity, Claude, Google’s AI Overviews, Siri’s AI-enhanced responses. These systems synthesize answers from multiple sources and cite specific content as references. The selection criteria include factual density, entity authority, structural readability, and source consistency across the web. This surface is growing rapidly and the optimization discipline — GEO — is still maturing.

    Each surface requires attention. Ignoring any one of them means your client is invisible somewhere their customers are looking. But addressing all three simultaneously is work that goes beyond what traditional SEO covers.

    What Changes and What Doesn’t

    Here’s the good news for experienced SEO consultants: surface one — traditional organic — is still the foundation. Nothing about AEO or GEO works without solid SEO underneath. Rankings still matter. Technical health still matters. Content quality still matters. Backlinks still matter. Everything you’ve built your career on remains relevant.

    What changes is what you layer on top. For surface two, the content you’re already creating needs structural refinement — snippet-ready formatting, FAQ sections with schema, direct answer blocks at the top of relevant sections. For surface three, the content needs entity optimization — stronger factual density, clearer attribution, consistent entity signals, and structural elements that help AI systems extract and cite information accurately.

    Neither layer contradicts or undermines SEO. They extend it. The work you’re doing today becomes more valuable when AEO and GEO layers are added, not less. That’s the practical reality that gets lost in the marketing hype around AI search.

    The Realistic Assessment

    I’m not going to tell you that AI search is replacing Google tomorrow. I don’t know the exact trajectory, and neither does anyone else claiming certainty. What I can tell you is that the trend is directional: more search activity is happening through more interfaces, and each interface has its own optimization surface.

    Some industries are seeing significant AI search impact already. Others are barely touched. The pace varies by vertical, by query type, by user demographics. For some of your clients, AI search optimization is urgent. For others, it’s a forward-looking investment. Part of the value of the plugin model is having someone who can help you make that assessment for each client individually, based on their specific competitive landscape and search behavior patterns.

    What I won’t do is manufacture urgency with made-up statistics or scare you into action with doomsday predictions about traditional SEO. The landscape is evolving. The smart response is to evolve with it — deliberately, with clear-eyed assessment of where the opportunity actually is for each client.

    Where the Plugin Fits

    The plugin model addresses the capability gap between surface one (your expertise) and surfaces two and three (the expanding landscape). You continue to own the SEO strategy. The plugin layer adds the AEO and GEO optimization that extends your clients’ visibility into the answer engine and generative AI surfaces.

    Over time, some consultants choose to build their own AEO and GEO expertise and internalize these capabilities. The plugin model supports that transition too — I’m happy to teach the methodology and help you build the skills to do this work yourself. The goal isn’t dependency. The goal is making sure your clients are visible across every surface where their customers search, whether that capability comes from you directly or from the plugin layer.

    Frequently Asked Questions

    Should I be telling my clients about AI search even if their industry isn’t heavily impacted yet?

    Yes — but framed as awareness, not alarm. “We’re monitoring how AI-powered search is evolving in your industry and positioning your content to be visible across these new surfaces as they grow” is a proactive, responsible message that positions you as forward-thinking without manufacturing urgency.

    Is traditional SEO becoming less important?

    No. Traditional SEO is the foundation that everything else builds on. What’s happening is that SEO alone covers a shrinking percentage of total search visibility as new surfaces emerge. That doesn’t make SEO less important — it makes it necessary but no longer sufficient on its own for comprehensive search presence.

    How do I decide which clients need AEO/GEO optimization now versus later?

    Look at three factors: how information-rich their queries are (informational queries trigger AI answers more than transactional ones), how competitive their search landscape is (saturated markets see AI impact faster), and how their customers actually search (B2B research queries are heavily impacted by AI, simple local searches less so). Those factors help prioritize which clients benefit most from early AEO/GEO investment.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "What 'Search' Means Now: A Practical Guide for Freelance SEO Consultants Navigating the AI Shift",
      "description": "Search is no longer just Google's ten blue links. A practical overview of every surface where your clients need to be visible — and what it takes to show ",
      "datePublished": "2026-04-03",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/what-search-means-now-a-practical-guide-for-freelance-seo-consultants-navigating-the-ai-shift/"
      }
    }

  • The Middleware Manifesto: Why the Best Search Operations Are Built in Layers, Not Silos

    This is not a pitch. This is a thesis. It is the operating philosophy behind everything we build, every site we optimize, and every partnership we enter. If you read one thing on this site, make it this.

    The Problem Nobody Wants to Name

    Search fractured. It happened gradually, then all at once.

    For years, search meant one thing: Google’s ten blue links. You optimized for that surface, you measured rankings, you called it done. Then featured snippets appeared. Then People Also Ask boxes. Then voice assistants started reading answers aloud. Then ChatGPT, Claude, Gemini, and Perplexity started generating answers from scratch — citing some sources, ignoring others, and reshaping how people find information.

    The industry responded the way it always does: by creating new specialties. SEO became its own discipline. Answer Engine Optimization (AEO) became another. Generative Engine Optimization (GEO) became a third. Each one spawned its own consultants, its own tools, its own conferences, and its own set of best practices that rarely acknowledged the other two existed.

    And so the average business — the one actually trying to be found by customers — ended up needing three different strategies, three different audits, three different sets of recommendations that sometimes contradicted each other.

    That is the problem. Not that search changed. That the response to the change created silos where there should have been a system.

    The Middleware Thesis

    There is a better architecture. We know because we built it.

    The concept is borrowed from software engineering, where middleware refers to the connective layer that sits between systems — translating, routing, and orchestrating without replacing anything above or below it. A database doesn’t need to know how the front end works. The front end doesn’t need to know where the data lives. Middleware handles the translation.

    Applied to search operations, the middleware thesis is this: you don’t need separate SEO, AEO, and GEO programs. You need a single operational layer underneath all three that handles the shared infrastructure — schema architecture, entity resolution, internal linking, content structure, and platform connectivity — so that every optimization you run on any surface benefits the other two automatically.

    This is not theoretical. It is how we operate across every site we touch.

    What the Layer Actually Does

    When we say middleware, we mean a specific set of capabilities that sit underneath whatever search strategy is already in place:

    Schema Architecture

    Structured data is the universal language that all three search surfaces understand. Traditional search uses it for rich results. Answer engines use it to identify authoritative sources for direct answers. Generative AI uses it to build entity graphs that determine which sources get cited. A single schema implementation — Article, FAQPage, HowTo, BreadcrumbList, Speakable — serves all three surfaces simultaneously. The middleware layer handles this once, correctly, across every page.
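    As an illustration of "handled once, correctly": a common pattern is a single @graph that ties the page-level types to one Organization node via @id references, so the entity is declared once and every other node points at it. The identifiers and names below are placeholders:

```python
import json

ORG_ID = "https://example.com/#org"  # placeholder entity identifier

graph = {
    "@context": "https://schema.org",
    "@graph": [
        # Declared once; everything else references it by @id.
        {"@type": "Organization", "@id": ORG_ID, "name": "Example Co"},
        {"@type": "Article",
         "headline": "Example article",
         "publisher": {"@id": ORG_ID}},  # reference, not duplication
        {"@type": "BreadcrumbList",
         "itemListElement": [
             {"@type": "ListItem", "position": 1, "name": "Home",
              "item": "https://example.com/"},
         ]},
    ],
}
print(json.dumps(graph, indent=2))
```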

    Entity Resolution

    AI systems do not rank pages. They rank entities — the people, organizations, concepts, and relationships that content describes. If your business does not exist as a coherent entity in the knowledge graphs that AI systems reference, your content is invisible to generative search regardless of how well it ranks in traditional results. The middleware layer builds and maintains entity architecture: consistent naming, relationship mapping, authority signals, and the structural patterns that make an entity legible to machines.

    Internal Link Architecture

    Internal links are not just navigation. They are the primary signal that tells search engines — all of them — how your content relates to itself. Hub-and-spoke structures, topical clustering, anchor text patterns, orphan page elimination. When the internal link map is built correctly, every new page you publish strengthens the authority of every existing page. The middleware layer maintains this map and injects contextual links as content grows.
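    One concrete piece of that maintenance, sketched under the assumption that the internal link map is available as a simple adjacency mapping, is orphan detection: finding pages that no other page links to.

```python
def find_orphans(link_map):
    """Return pages that receive no internal links from any other page.

    link_map: {page_url: [linked_page_urls]} for every page on the site.
    """
    all_pages = set(link_map)
    linked_to = {
        target
        for source, targets in link_map.items()
        for target in targets
        if target != source  # a self-link doesn't rescue a page
    }
    return sorted(all_pages - linked_to)

site = {
    "/": ["/services", "/blog/water-damage-guide"],
    "/services": ["/"],
    "/blog/water-damage-guide": ["/services"],
    "/blog/forgotten-post": ["/"],  # links out, but nothing links in
}
orphans = find_orphans(site)
# orphans -> ["/blog/forgotten-post"]
```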

    Content Structure

    The way content is structured determines which surfaces can use it. Traditional search needs heading hierarchy and keyword relevance. Answer engines need direct-answer formatting — the concise, quotable passages that get pulled into featured snippets and voice results. Generative AI needs entity-dense, factually precise language with clear attribution patterns. The middleware layer applies all three structural requirements in a single pass, so content is optimized for every surface from the moment it is published.

    Platform Connectivity

    Most search operations break down at the execution layer. The strategy is sound, but the actual work — pushing updates to WordPress, injecting schema, updating meta fields, managing taxonomy across multiple sites — requires direct API access to every platform involved. The middleware layer maintains persistent connections to every site in a portfolio through a unified proxy architecture, so optimizations can be applied at scale without manual intervention on each individual site.
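    A minimal sketch of what that connectivity layer implies, assuming a simple registry of sites and standard WordPress REST routes; the site slugs and URLs are invented, and per-site authentication is deliberately omitted:

```python
SITES = {
    # slug: base URL for each site in the (hypothetical) portfolio
    "restoration": "https://restoration-client.example",
    "lending": "https://lending-client.example",
}

def endpoint(site_slug, resource, item_id=None):
    """Resolve a WordPress REST API endpoint for a site in the registry."""
    base = SITES[site_slug].rstrip("/")
    path = f"/wp-json/wp/v2/{resource}"
    if item_id is not None:
        path += f"/{item_id}"
    return base + path

# The proxy layer would attach per-site credentials and send the
# request; here we only resolve where it should go.
url = endpoint("restoration", "posts", 17)
```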

    Why Layers Beat Silos

    The silo model has a compounding cost that most people do not see until it is too late.

    When SEO, AEO, and GEO operate as separate programs, each one makes recommendations in isolation. The SEO audit says consolidate these three pages into one pillar page. The AEO audit says break content into shorter, more answerable chunks. The GEO audit says increase entity density and add attribution patterns. These recommendations do not just differ — they actively conflict.

    The team implementing the changes has to resolve the conflicts manually, usually by picking whichever consultant was most convincing in the last meeting. The result is a strategy that optimizes for one surface at the expense of the other two. Every quarter, priorities shift, and the cycle repeats.

    The middleware approach eliminates this conflict by addressing the shared infrastructure first. When schema, entity architecture, internal linking, and content structure are handled at the foundational layer, the surface-level optimizations for SEO, AEO, and GEO stop competing and start compounding. An improvement to entity resolution strengthens traditional rankings AND answer engine placement AND generative AI citation likelihood — simultaneously.

    This is not an incremental improvement. It is a fundamentally different operating model.

    What This Looks Like in Practice

    We run this system across a portfolio of sites spanning restoration services, luxury lending, comedy streaming, cold storage, training platforms, nonprofit ESG, and more. The verticals are wildly different. The middleware layer is the same.

    A single content brief enters the system. The middleware layer determines which personas need their own variant of that content based on genuine knowledge gaps — not a fixed number, but however many the topic actually demands. Each variant gets the full three-layer treatment: SEO structure, AEO direct-answer formatting, and GEO entity optimization. Schema is injected. Internal links are mapped and placed. The content publishes through a unified API proxy that handles authentication and routing for every site in the portfolio.

    The person running the SEO strategy for any individual site does not need to change how they work. The middleware layer operates underneath. It does not replace their expertise. It provides the infrastructure that makes their expertise visible to every search surface, not just the one they are focused on.

    The Person, Not the Platform

    Here is the part that matters most: this is not a SaaS product. There is no login. There is no dashboard you subscribe to.

    The middleware layer works because it is operated by someone who understands all three search surfaces, maintains the platform connections, and makes the judgment calls that automation cannot. Which schema types to apply. When entity architecture needs restructuring. How to resolve the tension between a long-form pillar page and a featured-snippet-optimized FAQ. These are not configuration decisions. They are editorial and technical judgment calls that require context about the specific site, the specific industry, and the specific competitive landscape.

    That is why this model works as a person, not a platform. One operator who plugs into your existing stack, handles the layer underneath, and lets you keep doing what you already do — just with infrastructure that makes every surface work harder.

    The Invitation

    If you run an SEO agency, you do not need to add AEO and GEO departments. You need a middleware partner who handles the shared infrastructure underneath your existing service delivery.

    If you are a freelance SEO consultant, you do not need to learn three new disciplines. You need someone who plugs into your operation and handles the layers your clients need but you should not have to build yourself.

    If you run a business that depends on being found online, you do not need three separate search strategies. You need one foundational layer that makes all of them work.

    That is the middleware thesis. That is what we built. And that is what every article on this site is designed to show you in practice.

    The best search operations are not built by adding more specialists. They are built by adding the layer that connects them all.

    {
    “@context”: “https://schema.org”,
    “@type”: “Article”,
    “headline”: “The Middleware Manifesto: Why the Best Search Operations Are Built in Layers, Not Silos”,
    “description”: “The search industry keeps building new silos. SEO teams, AEO specialists, GEO consultants. The answer is not more people. It is a layer underneath everything th”,
    “datePublished”: “2026-04-03”,
    “dateModified”: “2026-04-03”,
    “author”: {
    “@type”: “Person”,
    “name”: “Will Tygart”,
    “url”: “https://tygartmedia.com/about”
    },
    “publisher”: {
    “@type”: “Organization”,
    “name”: “Tygart Media”,
    “url”: “https://tygartmedia.com”,
    “logo”: {
    “@type”: “ImageObject”,
    “url”: “https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png”
    }
    },
    “mainEntityOfPage”: {
    “@type”: “WebPage”,
    “@id”: “https://tygartmedia.com/the-middleware-manifesto-why-the-best-search-operations-are-built-in-layers-not-silos/”
    }
    }

  • The Unsnippetable Strategy: How We Beat Zero-Click Search by Building Things Google Can’t Summarize

    We just deployed 16 interactive tools and 3 bottom-of-funnel articles across 7 websites in a single session. Here’s why, and how you can do the same thing.

    The Problem: 4,000 Impressions, Zero Clicks

    We pulled the Google Search Console data for theuniversalcommerceprotocol.com — a site covering agentic commerce and AI-powered checkout infrastructure. The numbers told a brutal story: over 200 unique queries generating 4,000+ monthly impressions with an effective CTR of 0%. Not low. Zero.

    The highest-impression queries were all definitional: “what is agentic commerce” (409 impressions, 0 clicks), “agentic commerce definition” (178 impressions, 0 clicks), “ai commerce compliance mastercard” (61 impressions at position 1.25, 0 clicks). Google was serving our content directly in AI Overviews and featured snippets. Users got what they needed without ever visiting the site.

    This isn’t unique to UCP. It’s the new reality. 58.5% of US Google searches now end without a click. For AI Mode searches, it’s 93%. If your content strategy is built on informational queries, you’re building on a foundation that’s actively collapsing.

    The conventional wisdom is to “optimize for AI Overviews” and “win the featured snippet.” But that’s backwards. If you win the featured snippet for “what is agentic commerce,” Google serves your content without anyone visiting your site. You’ve won the battle and lost the war.

    The Insight: Two-Layer Content Architecture

    The solution isn’t to fight zero-click search. It’s to use it. We call it two-layer content architecture, and it changes how you think about content strategy entirely.

    Layer 1: SERP Bait. This is your definitional, informational content — “what is X,” “X vs Y,” “how does X work.” This content is designed to be consumed on the SERP without a click. Its job isn’t traffic. Its job is brand impressions at massive scale. Every time Google cites you in an AI Overview, thousands of people see your brand positioned as the authority. That’s not a failure. That’s a free brand campaign.

    Layer 2: Click Magnets. This is content Google literally cannot summarize in a snippet — interactive tools, calculators, assessments, scorecards, decision frameworks. The SERP can tease them (“Calculate your agentic commerce ROI…”) but the user HAS to click through to get the value. The tool requires input. The output is personalized. There’s nothing for Google to extract.

    The connection between the layers is where the magic happens. The person who sees your brand cited in an AI Overview for “what is agentic commerce” now recognizes you. When they later search “agentic commerce ROI” or “how to implement agentic commerce” — and your calculator or playbook appears — they click because they already trust you from Layer 1. Research backs this up: brands cited in AI Overviews see 35% higher CTR on their other organic listings.

    You’re not fighting the zero-click reality. You’re using it as a free awareness channel that feeds the bottom of your funnel.

    What We Built: 16 Tools Across 7 Sites

    We didn’t just theorize about this. We built and deployed the entire system in a single session across 7 domains.

    UCP (theuniversalcommerceprotocol.com) — 6 pieces

    Three interactive tools targeting the exact queries generating zero-click impressions: an Agentic Commerce Readiness Assessment (32-question diagnostic across 8 dimensions), an ROI Calculator (projects revenue impact using Morgan Stanley, Gartner, and McKinsey 2026 data), and a Visa vs Mastercard Agentic Commerce Scorecard (interactive comparison across 7 compliance dimensions — this one directly targets the “ai commerce compliance mastercard/visa” queries that were getting 90 impressions at position 1 with zero clicks).

    Plus three bottom-of-funnel articles that can’t be answered in a snippet: a 90-Day Implementation Playbook (week-by-week), a narrative piece about what breaks when an AI agent hits an unprepared store, and a Build/Buy/Wait decision framework with cost analysis.

    Tygart Media (tygartmedia.com) — 5 tools

    Five tools that package our existing expertise into interactive formats: an AEO Citation Likelihood Analyzer (scores content across 8 dimensions AI systems evaluate), an Information Density Analyzer (paste your text, get real-time density metrics and a paragraph-by-paragraph heatmap), a Restoration SEO Competitive Tower (benchmark against competitors across 8 SEO dimensions), an AI Infrastructure ROI Simulator (Build vs Buy vs API with 3-year TCO), and a Schema Markup Adequacy Scorer (is your structured data AI-ready?).

    Knowledge Cluster (5 sites) — 5 industry-specific tools

    One high-priority tool per site, each targeting the most-searched zero-click queries in their industry: a Water Damage Cost Estimator for restorationintel.com (calculates by IICRC class, water category, materials, and region), a Property Risk Assessment Engine for riskcoveragehub.com (scores across 5 risk dimensions with coverage recommendations), a Business Impact Analysis Generator for continuityhub.org (ISO 22301-aligned BIA with exportable summary), a Healthcare Compliance Audit Tool for healthcarefacilityhub.org (18-question audit mapped to CMS CoP and TJC standards), and a Carbon Footprint Calculator for bcesg.org (Scope 1/2/3 with EPA emission factors and reduction scenarios).

    Why Interactive Tools Beat Articles in Zero-Click

    There are five technical reasons interactive tools are the correct response to zero-click search, and they compound.

    They’re non-serializable. A calculator’s output depends on user input. Google can’t pre-compute every possible result for a water damage cost estimator across every combination of square footage, damage class, water category, materials, and region. The AI Overview can say “use this calculator” but it can’t BE the calculator. The citation becomes a call to action.
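    To make the non-serializability point concrete, here is a hedged sketch of a cost estimator. The multipliers and rate are made-up placeholder values for illustration, not real IICRC or regional pricing data:

    ```javascript
    // Hypothetical cost model: every input combination produces a different
    // output, so there is no single answer for a snippet to extract.
    // Multipliers are illustrative placeholders, not real restoration pricing.
    const CLASS_MULTIPLIER = { 1: 1.0, 2: 1.4, 3: 1.9, 4: 2.6 };        // IICRC class
    const CATEGORY_MULTIPLIER = { clean: 1.0, gray: 1.25, black: 1.75 }; // water category

    function estimateCost({ squareFeet, damageClass, waterCategory, ratePerSqFt }) {
      const base = squareFeet * ratePerSqFt;
      return Math.round(base * CLASS_MULTIPLIER[damageClass] * CATEGORY_MULTIPLIER[waterCategory]);
    }

    // 800 sq ft, Class 3, gray water, at a $4/sq ft base rate:
    estimateCost({ squareFeet: 800, damageClass: 3, waterCategory: "gray", ratePerSqFt: 4 }); // → 7600
    ```

    An AI Overview can describe this function; it cannot run it for the user's building.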

    They generate engagement signals at scale. Interactive tools produce time-on-page, scroll depth, and interaction events that traditional articles can’t match. A user spending 4 minutes inputting data and exploring results sends stronger quality signals than a user who reads a paragraph and bounces.

    They’re bookmarkable. A restoration company owner who uses the cost estimator once will bookmark it and return. Insurance adjusters will save the risk assessment tool. This creates direct traffic over time — the kind Google can’t intercept with zero-click.

    They’re natural link magnets. Industry publications, Reddit threads, and professional communities link to useful tools far more readily than articles. A “Healthcare Compliance Audit Tool” gets shared in facility manager Slack channels. A “What Is Healthcare Compliance” article doesn’t.

    They’re AI Overview proof. Even when Google cites the page in an AI Overview, users still need to visit to use the tool. The AI Overview effectively becomes free advertising: “Use this calculator at [your site] to estimate your costs.” Every zero-click impression becomes a branded CTA.

    The Methodology: Replicable for Any Site

    You can run this exact playbook on any site in about 4 hours. Here’s the step-by-step:

    Step 1: Pull your GSC data. Export the Queries and Pages reports. Sort by impressions descending. Identify every query with significant impressions and near-zero CTR. These are your zero-click queries — the ones Google is answering without sending you traffic.

    Step 2: Categorize the queries. Split them into two buckets. Definitional queries (“what is X,” “X definition,” “X vs Y”) are Layer 1 — leave them alone, they’re generating brand impressions. Action-intent queries (“X cost estimate,” “X compliance checklist,” “how to implement X”) are Layer 2 opportunities.
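    Steps 1 and 2 can be sketched in a few lines, assuming the GSC Queries export has already been parsed into objects. The impression threshold, CTR cutoff, and definitional-pattern regex are starting-point assumptions to tune against your own data:

    ```javascript
    // Split zero-click queries into Layer 1 (definitional) and Layer 2
    // (action-intent). Thresholds and patterns are illustrative, not fixed rules.
    const DEFINITIONAL = /\b(what is|definition|vs\.?|versus|meaning of)\b/i;

    function bucketQueries(rows, { minImpressions = 50, maxCtr = 0.005 } = {}) {
      const zeroClick = rows.filter(r => r.impressions >= minImpressions && r.ctr <= maxCtr);
      return {
        layer1: zeroClick.filter(r => DEFINITIONAL.test(r.query)),  // leave alone: brand impressions
        layer2: zeroClick.filter(r => !DEFINITIONAL.test(r.query)), // tool/calculator candidates
      };
    }

    const { layer1, layer2 } = bucketQueries([
      { query: "what is agentic commerce", impressions: 409, ctr: 0 },
      { query: "agentic commerce roi calculator", impressions: 120, ctr: 0 },
      { query: "agentic commerce news", impressions: 12, ctr: 0.08 },
    ]);
    // layer1 holds the definitional query; layer2 holds the ROI-calculator query
    ```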

    Step 3: For each Layer 2 opportunity, ask one question. “What would someone who already knows the answer still need to click for?” The answer is usually a tool, calculator, assessment, or framework that requires their specific input to produce useful output.

    Step 4: Build the tool. Single-file HTML with inline CSS/JS. No external dependencies. Dark theme, mobile responsive, professional design. The tool should take 2-5 minutes to complete and produce a result worth sharing or saving. Include a “copy results” or “download report” function.

    Step 5: Embed in WordPress. Write a 2-3 paragraph intro explaining why the tool matters (this is what Google will see and potentially cite). Then embed the full HTML. The intro becomes your Layer 1 snippet bait, and the tool becomes your Layer 2 click magnet — on the same page.

    Step 6: Cross-link. Add CTAs from your existing Layer 1 content to the new tools. If you have an article ranking for “what is agentic commerce” that’s getting zero clicks, add a CTA in that article: “Take the Readiness Assessment to see if your business is prepared.” You’re converting brand impressions into tool engagement.

    Step 7: Monitor. Track CTR changes over 30/60/90 days. Track direct traffic increases (brand searches driven by AI Overview citations). Track tool engagement: completion rates, time on page. Track backlink acquisition from industry sites linking to your tools.
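    For the CTR-tracking part of Step 7, one approach is to diff two GSC exports taken 30 or 60 days apart. The field names follow the earlier examples and are assumptions about how you have parsed the export:

    ```javascript
    // Compare per-query CTR across a before/after pair of GSC Queries exports.
    function ctrDelta(before, after) {
      const prev = new Map(before.map(r => [r.query, r.ctr]));
      return after
        .filter(r => prev.has(r.query))
        .map(r => ({ query: r.query, change: +(r.ctr - prev.get(r.query)).toFixed(4) }))
        .sort((a, b) => b.change - a.change); // biggest gains first
    }

    const deltas = ctrDelta(
      [{ query: "agentic commerce roi", ctr: 0 }],
      [{ query: "agentic commerce roi", ctr: 0.024 }]
    );
    // → [{ query: "agentic commerce roi", change: 0.024 }]
    ```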

    What We’re Measuring

    This isn’t a “publish and pray” strategy. We’re tracking specific metrics across all 7 sites to validate or invalidate the approach within 90 days.

    First, CTR change on previously zero-click queries. If the Visa vs Mastercard Scorecard starts pulling even 2-3% CTR on queries that were at 0%, that’s a meaningful signal. Second, direct traffic increases — are more people searching for our brand names directly after seeing us cited in AI Overviews? Third, tool engagement metrics: how many people complete the assessments, what’s the average time on page, how many copy their results? Fourth, organic backlinks — do industry sites start linking to our tools? Fifth, whether the tools themselves rank for their own queries, creating an entirely new traffic channel.

    The Bigger Picture

    The era of “write an article, rank, get traffic” is over for informational queries. Google’s AI Overviews and featured snippets have made it so that the better your content is at answering a question, the less likely anyone is to visit your site. That’s a structural inversion of the old SEO model, and no amount of keyword optimization will fix it.

    But the era of “build something useful, earn trust, capture intent” is just beginning. Tools, calculators, assessments, and interactive experiences represent a category of content that AI cannot fully consume on behalf of the user. They require participation. They produce personalized output. They create the kind of engagement that turns a search impression into a relationship.

    We deployed 16 of these tools across 7 sites today. In 90 days, we’ll know exactly how much zero-click traffic they converted. But based on the early research — 35% higher CTR for AI-cited brands, 42.9% CTR for featured snippet content that teases without fully answering — the bet is that unsnippetable content is the highest-leverage move in SEO right now.

    The tools are already live. The impressions are already flowing. Now we find out if the clicks follow.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The Unsnippetable Strategy: How We Beat Zero-Click Search by Building Things Google Can't Summarize",
      "description": "We deployed 16 interactive tools across 7 websites to convert zero-click search impressions into actual traffic. Here's the two-layer content architecture",
      "datePublished": "2026-04-01",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/unsnippetable-strategy-beat-zero-click-search/"
      }
    }
  • Schema Markup Adequacy Scorer: Is Your Structured Data AI-Ready?

    Schema Markup Adequacy Scorer: Is Your Structured Data AI-Ready?

    Standard schema markup is a business card. AI systems need a full dossier. Most sites implement the bare minimum Schema.org markup and wonder why AI ignores them.

    This scorer evaluates your structured data across 6 dimensions — from basic coverage and property depth to AI-specific signals and inter-entity relationships. Each dimension is scored with specific recommendations and code snippet examples for improvement.
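    As a rough illustration of how one such dimension might be scored — property depth on an Article — here is a sketch. The property lists and weights are assumptions made for the example, not the scorer's actual rubric or an official Schema.org requirement set:

    ```javascript
    // Score an Article schema's property depth: required properties weigh more
    // than recommended ones. Lists and weights are illustrative assumptions.
    const ARTICLE_PROPS = {
      required: ["headline", "datePublished", "author"],
      recommended: ["dateModified", "publisher", "mainEntityOfPage", "description", "image"],
    };

    function scorePropertyDepth(schema) {
      const has = key => schema[key] !== undefined;
      const req = ARTICLE_PROPS.required.filter(has).length / ARTICLE_PROPS.required.length;
      const rec = ARTICLE_PROPS.recommended.filter(has).length / ARTICLE_PROPS.recommended.length;
      return Math.round((req * 0.6 + rec * 0.4) * 100); // 0-100
    }

    // A "business card" schema with only the required trio scores 60:
    scorePropertyDepth({ headline: "x", datePublished: "2026-04-01", author: {} }); // → 60
    ```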

    Take the assessment below to find out if your schema markup is a business card or a dossier.

    [Interactive tool embedded here: the Schema Markup Adequacy Scorer — a 24-item assessment producing an overall Schema Adequacy Score, a category breakdown across the six dimensions, and recommended improvements.]

    Read AgentConcentrate: Why Standard Schema Is a Business Card →
    Powered by Tygart Media | tygartmedia.com
  • Information Density Analyzer: Is Your Content Dense Enough for AI?

    Information Density Analyzer: Is Your Content Dense Enough for AI?

    AI systems select sources based on information density — the ratio of unique, verifiable claims to filler text. Most content fails this test. We found that 16 AI models unanimously agree on what makes content worth citing, and it comes down to density.

    This tool analyzes your text in real-time and produces 8 metrics including unique concepts per 100 words, claim density, filler ratio, and actionable insight score. It also generates a paragraph-by-paragraph heatmap showing exactly where your content is dense and where it’s fluff.

    Paste your article text below and see how your content measures up against AI-citable benchmarks.
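    Two of those metrics — claim density and filler ratio — can be approximated in a few lines. The filler list is abbreviated and the sentence-splitting is deliberately naive (a decimal like "58.5" will split a sentence), so treat this as a sketch of the idea rather than the tool's exact logic:

    ```javascript
    // Approximate claim density (share of sentences containing a number) and
    // filler ratio (share of sentences containing a known filler phrase).
    const FILLER = ["it's important to note", "at the end of the day", "needless to say", "basically"];

    function densityMetrics(text) {
      const sentences = text.match(/[^.!?]+[.!?]+/g) || [];
      const claims = sentences.filter(s => /\d|percent|%/.test(s)).length;
      const filler = sentences.filter(s => FILLER.some(p => s.toLowerCase().includes(p))).length;
      return {
        sentenceCount: sentences.length,
        claimDensity: +(100 * claims / sentences.length).toFixed(1),
        fillerRatio: +(100 * filler / sentences.length).toFixed(1),
      };
    }

    densityMetrics("CTR fell 40% in 2025. Basically, that matters. Tools fix it.");
    // → { sentenceCount: 3, claimDensity: 33.3, fillerRatio: 33.3 }
    ```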

    [Interactive tool embedded here: the Information Density Analyzer. Paste your text to get an overall Information Density Score, a metrics grid — total words, sentences, average sentence length, unique concepts per 100 words, claim density, filler ratio, action verbs, and jargon density — and a paragraph-by-paragraph heatmap rating each paragraph Dense (AI-citable), Moderate, or Fluffy, plus a comparison against AI-citable benchmarks.]

    Read the Information Density Manifesto →

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Information Density Analyzer: Is Your Content Dense Enough for AI?",
      "description": "Paste your article text and get real-time analysis of information density, filler ratio, claim density, and AI-citability score.",
      "datePublished": "2026-04-01",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/information-density-analyzer/"
      }
    }