Tygart Media

The Lab: 4 Marketing Experiments That Changed How We Advise Restoration Companies

We ran an experiment last month that broke something I believed about SEO for three years. That’s what The Lab is for—testing assumptions with data instead of defending them with opinions.

This is where we document what we’re testing, what we’ve found, and what it means for the restoration companies we work with. No theory. No speculation. Experiments with controls, variables, and measurable outcomes. Some of these will confirm conventional wisdom. Some will destroy it. Both are valuable.

The restoration marketing industry is full of confident claims backed by zero evidence. “You need 2,000 words per blog post.” “Schema markup doesn’t affect rankings.” “AI content ranks just as well as human content.” These statements are testable. So we test them.

Experiment 1: Zero-Click Optimization — Can You Win Without the Click?

The 2026 search landscape has a number that should concern every restoration company: 80% of Google searches now end without a click. Google’s AI Overviews appear in over 60% of informational queries. Organic click-through rates for queries featuring AI Overviews have fallen roughly 65% since mid-2024, from 1.76% to 0.61%.

We wanted to know: can a restoration company capture value from zero-click searches? Can visibility without a website visit generate phone calls?

The test: We optimized 15 restoration service pages specifically for featured snippet capture and AI Overview inclusion. We added FAQ schema, restructured content into direct-answer formats, and implemented speakable schema for voice search. Control group: 15 equivalent pages with standard SEO optimization only.
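For readers who want to replicate the test-group markup, the FAQ and speakable structured data can be generated programmatically. The sketch below is illustrative, not the exact markup from our pages; the question text and CSS selectors are placeholders:

```python
import json

def faq_schema(pairs: list[tuple[str, str]]) -> dict:
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

def speakable_schema(css_selectors: list[str]) -> dict:
    """WebPage JSON-LD flagging sections for voice-assistant readout."""
    return {
        "@context": "https://schema.org",
        "@type": "WebPage",
        "speakable": {
            "@type": "SpeakableSpecification",
            "cssSelector": css_selectors,
        },
    }

# Hypothetical page content; embed the output in the page inside a
# <script type="application/ld+json"> tag.
schema = faq_schema([
    ("How fast should water damage be dried out?",
     "Drying should begin within 24-48 hours to limit mold growth."),
])
print(json.dumps(schema, indent=2))
```

The direct-answer content restructuring happens in the page copy itself; the JSON-LD simply makes that structure machine-readable.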

What we measured: Phone calls from Google Business Profile (GBP) listings (since zero-click users often see the business in the knowledge panel and call directly), branded search volume (do AI mentions drive people to search your company name?), and total lead volume from all sources.

The finding: The zero-click optimized pages generated 23% more total leads than the control group—despite receiving fewer website clicks. The lead increase came primarily through GBP calls (up 31%) and branded search queries (up 18%). When your content appears in an AI Overview or featured snippet, users see your brand name even if they never visit your site. That brand impression converts later through a different channel.
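A lift number like that is only meaningful if it clears normal fluctuation, so every experiment like this should get a significance check. A minimal two-proportion z-test sketch in Python; the lead and impression counts below are invented placeholders, not our actual experiment data:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is the lift between groups statistically real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    lift = (p_b - p_a) / p_a
    return z, p_value, lift

# Hypothetical totals: leads per 10,000 impressions in each group
z, p, lift = two_proportion_z(conv_a=130, n_a=10_000,
                              conv_b=160, n_b=10_000)
print(f"lift={lift:.1%}  z={z:.2f}  p={p:.4f}")
```

Note that a 23% lift on small absolute counts can still fall short of significance; sample size matters as much as the headline percentage.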

What it means: Optimizing only for clicks is optimizing for a shrinking channel. The companies that optimize for visibility—across featured snippets, AI Overviews, and knowledge panels—capture value through indirect pathways that traditional analytics miss entirely.

Experiment 2: Content Length vs. Content Depth — The 2,000-Word Myth

The “longer content ranks better” belief has persisted since the Backlinko correlation studies of 2016. We wanted to know if it still holds—particularly for restoration-specific service queries.

The test: We published 20 articles targeting restoration keywords. Ten were comprehensive long-form (2,500-3,500 words). Ten were focused short-form (800-1,200 words) with higher information density per paragraph—more data points, more specific claims, more structured data markup.

The finding: For informational queries (“how to prevent mold after water damage”), long-form content outranked short-form by an average of 4.2 positions. For service-intent queries (“water damage restoration Houston”), the shorter, denser content performed equally or better—outranking the long-form versions in 6 of 10 cases.

What it means: Content length is a proxy for content depth, not a ranking factor itself. Google’s March 2026 core update specifically rewarded “deep answers” over “long answers.” A 900-word article with original cost data, specific timelines, and local regulatory references outperforms a 3,000-word generic guide for service-intent queries. Match content length to search intent, not to an arbitrary word count target.

Experiment 3: AI-Generated vs. AI-Assisted vs. Human-Only Content

Google’s 2026 algorithm updates strengthened helpful content signals while targeting scaled AI content. But “AI content” is a spectrum. We tested three production methods head-to-head.

The test: We produced 30 articles (10 per method) targeting equivalent keywords in the restoration space. Group A: entirely AI-generated with light editing. Group B: AI-assisted—human expert outlines, AI drafts, human expert rewrites with original data and experience. Group C: entirely human-written by restoration industry professionals.

Results after 90 days:

Group A (AI-generated) performed worst overall. Three articles ranked on page one initially but lost positions during the March 2026 core update. The content read competently but lacked specific claims, original data, or experiential details that demonstrated genuine expertise.

Group B (AI-assisted) performed best. Eight of ten articles achieved page-one rankings. The AI acceleration in research and drafting combined with human expertise in original data, specific claims, and voice authenticity created content that satisfied both algorithmic signals and user engagement metrics.

Group C (human-only) performed second-best. Seven of ten achieved page-one rankings. Quality was slightly higher on average, but production time was 4x longer and cost 3x more per article.

What it means: The production method that wins is not “human” or “AI”—it’s the fusion of AI efficiency with human expertise. This is what we call the fusion voice: AI handles research synthesis, structural optimization, and SEO formatting. Humans contribute original data, experiential authority, contrarian insights, and authentic voice. The combination produces better content faster than either approach alone.

Experiment 4: Schema Markup’s Actual Impact on Restoration Rankings

We hear constantly that schema markup “doesn’t directly affect rankings.” We wanted to measure its indirect effects with precision.

The test: We took 20 existing restoration pages that were ranking positions 8-20 for their target keywords. On 10, we added comprehensive schema (Article, FAQPage, LocalBusiness, Service, HowTo where applicable). The other 10 remained unchanged as controls.
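To make "comprehensive schema" concrete, here is roughly what a combined LocalBusiness and Service block looks like for a restoration company. Every business detail below is an invented placeholder, not data from the experiment:

```python
import json

# Hypothetical LocalBusiness + Service JSON-LD; embed in the page as
# <script type="application/ld+json">. All values are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Restoration Co.",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Houston",
        "addressRegion": "TX",
    },
    "areaServed": "Houston metro area",
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {
            "@type": "Service",
            "serviceType": "Water damage restoration",
        },
    },
}
print(json.dumps(local_business, indent=2))
```

Article, FAQPage, and HowTo blocks follow the same pattern: one JSON-LD object per type, each validated before deployment.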

Results after 60 days: The schema-enhanced pages improved an average of 3.1 positions. Seven of ten gained rich results (FAQ dropdowns, how-to cards) in search. The control group moved an average of 0.4 positions—within normal fluctuation range.

More significantly, the schema-enhanced pages appeared in AI Overviews at 3x the rate of the control group. Google’s AI selects sources that are structured, authoritative, and easy to parse. Schema markup makes your content all three.

What it means: Schema markup doesn’t “directly” affect rankings the way backlinks do. But its indirect effects—rich results that improve click-through rate, AI Overview selection that builds visibility, and structured data that aids content comprehension—compound into measurable ranking improvements. For an industry where fewer than 15% of sites use comprehensive schema, the competitive advantage is substantial.

What’s Next in The Lab

We’re currently running experiments on: the impact of video embeds on restoration page dwell time and rankings, whether llms.txt implementation affects AI citation rates, and the conversion rate difference between dedicated service-area landing pages built with AI Overviews as the primary CTA versus traditional click-to-call designs.

Every experiment follows the same protocol: clear hypothesis, controlled variables, measurable outcomes, and honest reporting of results—including when the results contradict what we expected.

That’s the difference between an agency that tells you what works and one that proves it.

