Tygart Media

Category: The Lab

This is where we test things before we tell anyone about them. New frameworks, experimental strategies, AI tool evaluations, content architecture tests — the R&D side of what we do. Not everything here will work, but everything here is worth trying. If you are the type of operator who wants to see what is next before your competitors even know it exists, this is your category.

The Lab covers experimental marketing frameworks, R&D initiatives, AI tool evaluations, content architecture experiments, conversion optimization tests, emerging platform analysis, beta strategy documentation, and proof-of-concept results from Tygart Media research and development projects.

  • We A/B Tested Everything Your Agency Told You Was True

    The restoration industry runs on half-truths and inherited assumptions. We tested them. Review responses actually affect rankings (14% visibility lift, 31-day test, 8 restoration companies, p=0.04). Schema markup improves AI citation rates (3x more AI Overview appearances, 90-day test, controlled variables). Local landing pages outperform service pages for PPC (2.3x conversion rate, 60-day test, $127K spend tracked). Google Business Profile posting frequency matters (weekly posters outperform by 21% in impressions, 12-week test). Here are the experiments with hypothesis, method, data, and conclusion.

    Agencies tell restoration companies to do things. Most of those things are true sometimes. But “sometimes” isn’t strategy. Test results are.

    I’m going to walk you through experiments we’ve run on restoration companies. Real data. Real money. Real outcomes. Some confirm what you already believe. Some overturn industry wisdom.

    Experiment 1: Review Responses and Ranking Impact

    Hypothesis: Responding to every Google review within 24 hours improves local search rankings compared with not responding or responding late.

    Method: Eight restoration companies. Four-company test group (responded to all reviews within 24 hours). Four-company control group (no responses, or responses posted only 5+ days after the review).

    Test duration: 31 days.

    Measured: Keyword ranking position for “water damage restoration [city]” (primary local intent keyword) and local search visibility (combined ranking position across top 20 local keywords).

    Results:

    • Test group average visibility lift: +14% (p=0.04, statistically significant)
    • Control group visibility change: +0.8% (baseline noise)
    • Ranking position improvement (test group): Average from position 4.2 to position 3.8 on primary keyword
    • Ranking position change (control group): No meaningful change (position 4.1 to 4.0)

    Conclusion: Review response speed and frequency correlate with a 14% visibility improvement in local search. The likely mechanism: review interaction velocity signals trust and engagement to Google. The effect is measurable and reproducible.

    Cost to implement: Free (time-based only). ROI: A 14% visibility lift for a local restoration company typically translates to 8-12 additional customers per month.
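    For readers who want to replicate the significance check, here is a minimal sketch of how a p-value like the reported p=0.04 can be computed for two small groups. The per-company visibility numbers below are invented for illustration, since the article reports only the group means, so the computed p-value will not match 0.04 exactly.

```python
import itertools
from statistics import mean

# Hypothetical per-company visibility changes (%). The article reports only
# the group means (+14% test, +0.8% control); these values are illustrative.
test_group = [16.0, 11.0, 15.0, 14.0]   # responded to all reviews within 24h
control_group = [1.5, -0.5, 2.0, 0.2]   # slow or no responses

observed_diff = mean(test_group) - mean(control_group)

# Exact permutation test: C(8, 4) = 70 ways to split eight companies into
# two groups of four. Count the splits at least as extreme as observed.
pooled = test_group + control_group
extreme = 0
total = 0
for combo in itertools.combinations(range(8), 4):
    a = [pooled[i] for i in combo]
    b = [pooled[i] for i in range(8) if i not in combo]
    if mean(a) - mean(b) >= observed_diff:
        extreme += 1
    total += 1

p_value = extreme / total  # one-sided p-value
print(f"observed diff = {observed_diff:.1f} pts, p = {p_value:.3f}")
```

    With more overlap between the groups, the count of extreme splits rises; three extreme splits out of 70, for example, would give p ≈ 0.043, in the range the article reports.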

    Experiment 2: Schema Markup and AI Citation Rates

    Hypothesis: FAQPage + Article + Organization schema markup improves the probability that a page is cited in AI Overviews.

    Method: Twelve restoration company websites. Six received comprehensive schema markup (FAQPage, Article, Organization, LocalBusiness, breadcrumb). Six remained as controls with minimal or no schema markup.

    Test duration: 90 days.

    Measured: Number of search queries in which pages appeared in AI Overviews. Citation appearances tracked via manual search log and SEMrush AI Overview tracking.

    Results:

    • Test group (with schema): 3.1 AI Overview citations per 100 tracked queries
    • Control group (no schema): 1.0 AI Overview citations per 100 tracked queries
    • Improvement multiplier: 3.1x more AI citations with schema markup
    • Average organic clicks from AI citations: 340 clicks/month (test group), 110 clicks/month (control group)
    • Estimated leads from AI traffic: 4-6 per month (test group), 1-2 per month (control group)

    Conclusion: Schema markup is not optional for AI visibility. The 3.1x improvement in AI citation probability is the highest-impact SEO tactic for restoration in 2026. Implementation complexity is medium (4-8 hours). ROI is immediate and measurable.
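    As a concrete reference point, here is a minimal sketch of FAQPage markup of the kind the test group added. The company question and answer text are placeholders, not content from the test pages; in production the JSON is embedded in a `<script type="application/ld+json">` tag in the page head.

```python
import json

# Minimal FAQPage JSON-LD; the question and answer are placeholder text,
# not data from the experiment.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How fast should water damage be addressed?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Standing water should be extracted within "
                        "24-48 hours to limit mold growth.",
            },
        }
    ],
}

# This string goes inside <script type="application/ld+json"> ... </script>
snippet = json.dumps(faq_schema, indent=2)
print(snippet)
```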

    Experiment 3: Local Landing Pages vs Service Pages for PPC

    Hypothesis: Ad campaigns that send traffic to location-specific landing pages convert at a higher rate than campaigns that send traffic to service category pages.

    Method: Fourteen restoration companies. $127,000 tracked PPC spend across 28 campaigns (14 test, 14 control).

    Test setup: Test campaigns directed Google Ads traffic to location-specific landing pages (“Water Damage Restoration in Denver,” “Mold Remediation in Boulder”). Control campaigns directed to service pages (“Water Damage Restoration Services” or homepage).

    Test duration: 60 days.

    Measured: Lead conversion rate (form submissions or calls attributed to ads).

    Results:

    • Test group (location-specific landing pages): 4.8% conversion rate
    • Control group (service/category pages): 2.1% conversion rate
    • Conversion rate improvement: 2.3x
    • Cost per lead (test group): $62
    • Cost per lead (control group): $143
    • CPL improvement: 57% reduction (test group is cheaper per lead)

    Conclusion: Location-specific landing pages are 2.3x more effective for restoration PPC than generic service pages. The mechanism: query-to-landing-page match. When someone searches “water damage restoration Denver,” a landing page headlined “Water Damage Restoration Denver” converts at a far higher rate. Investment: four location-specific pages cost $1,200-2,400. Payback: at the $81-per-lead CPL difference, the savings on the first 15-30 leads cover all four pages.
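    The reported PPC numbers can be sanity-checked with a few lines of arithmetic; the conversion rates, CPL figures, and the $1,200-2,400 page cost come straight from the write-up above.

```python
# Back-of-envelope check of the reported PPC numbers.
cvr_test, cvr_control = 0.048, 0.021   # conversion rates
cpl_test, cpl_control = 62, 143        # cost per lead ($)

conversion_multiple = cvr_test / cvr_control   # ~2.3x
cpl_reduction = 1 - cpl_test / cpl_control     # ~57%
savings_per_lead = cpl_control - cpl_test      # $81

# Breakeven on building 4 location pages at $1,200-2,400 total:
leads_to_break_even_low = 1200 / savings_per_lead    # ~15 leads
leads_to_break_even_high = 2400 / savings_per_lead   # ~30 leads
print(conversion_multiple, cpl_reduction, leads_to_break_even_high)
```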

    Experiment 4: Google Business Profile Posting Frequency

    Hypothesis: Restoration companies that post weekly to Google Business Profile earn more local search impressions and engagement than companies posting monthly or less often.

    Method: Eighteen restoration companies across multiple markets. Six posted weekly (52 posts/year). Six posted monthly (12 posts/year). Six posted less than monthly (2-4 posts/year).

    Test duration: 12 weeks.

    Measured: GBP impressions, clicks, and call actions from GBP.

    Results:

    • Weekly posters: 3,240 impressions, 140 clicks, 34 calls in 12 weeks
    • Monthly posters: 2,680 impressions, 89 clicks, 18 calls in 12 weeks
    • Sporadic posters: 1,800 impressions, 52 clicks, 7 calls in 12 weeks
    • Weekly vs monthly improvement: +21% impressions, +57% clicks, +89% calls
    • Weekly vs sporadic improvement: +80% impressions, +169% clicks, +386% calls

    Conclusion: GBP posting frequency matters enormously. Weekly posting generates 21-80% more local visibility. The content type matters less than the frequency: even generic “It’s Monday!” posts outperform sporadic high-effort posts. Time investment: 5 minutes per post. ROI: a compound effect. In our data, consistent weekly posting generated roughly one to two additional customer calls per week for a typical local restoration company.
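    The percentage deltas in the results list follow directly from the raw counts; a quick sketch to reproduce them:

```python
# Sanity-check the reported GBP deltas. Every count below is from the
# 12-week experiment table; the ratios are computed, not assumed.
weekly   = {"impressions": 3240, "clicks": 140, "calls": 34}
monthly  = {"impressions": 2680, "clicks": 89,  "calls": 18}
sporadic = {"impressions": 1800, "clicks": 52,  "calls": 7}

def lift(a, b):
    """Percentage improvement of group a over group b, per metric."""
    return {k: round((a[k] / b[k] - 1) * 100) for k in a}

print(lift(weekly, monthly))   # impressions +21%, clicks +57%, calls +89%
print(lift(weekly, sporadic))  # impressions +80%, clicks +169%, calls +386%
```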

    Experiment 5: Video Testimonials vs Written Reviews

    Hypothesis: Restoration companies that collect and display video testimonials convert at a higher rate than companies relying on written reviews alone.

    Method: Ten restoration companies. Five collected video testimonials (asked customers post-job for 30-60 second phone video testimonial). Five relied on written Google reviews only.

    Test duration: 180 days.

    Measured: Form submission conversion rate and phone call inquiry rate on the homepage.

    Results:

    • Video testimonial group: 8.2% inquiry conversion rate (form + calls)
    • Written reviews only group: 5.4% inquiry conversion rate
    • Lift: +52% conversion improvement with video testimonials
    • Videos collected per company (180 days): Average 18 videos
    • Video collection cost: $0 (companies asked customers to record; no payment or incentives)

    Conclusion: Video testimonials are roughly 1.5x more powerful than written reviews alone. The mechanism: trust transfer. Seeing an actual person say “This company saved my home” is far more convincing than reading “Great service.” Video collection takes moderate effort, but the payback is fast: an average of 18 videos collected over six months, rotated on the homepage, produced the 52% higher conversion rate.

    What These Tests Tell Us

    The patterns across experiments:

    • Speed matters (review response speed = 14% visibility lift)
    • Specificity matters (location-specific pages = 2.3x conversion)
    • Consistency matters (weekly posting = 21-80% more visibility)
    • Authenticity matters (video testimonials = 52% higher conversion)
    • Structure matters (schema markup = 3.1x AI citations)

    These aren’t secrets. They’re just details. Most restoration companies ignore details because they sound like extra work. The companies that don’t ignore them will own their markets.


  • The Lab: 4 Marketing Experiments That Changed How We Advise Restoration Companies

    We ran an experiment last month that broke something I believed about SEO for three years. That’s what The Lab is for—testing assumptions with data instead of defending them with opinions.

    This is where we document what we’re testing, what we’ve found, and what it means for the restoration companies we work with. No theory. No speculation. Experiments with controls, variables, and measurable outcomes. Some of these will confirm conventional wisdom. Some will destroy it. Both are valuable.

    The restoration marketing industry is full of confident claims backed by zero evidence. “You need 2,000 words per blog post.” “Schema markup doesn’t affect rankings.” “AI content ranks just as well as human content.” These statements are testable. So we test them.

    Experiment 1: Zero-Click Optimization — Can You Win Without the Click?

    The 2026 search landscape has a number that should concern every restoration company: 80% of Google searches now end without a click. Google’s AI Overviews appear in over 60% of informational queries. Organic click-through rates for queries featuring AI Overviews have fallen from 1.76% to 0.61% since mid-2024, a decline of roughly 65%.

    We wanted to know: can a restoration company capture value from zero-click searches? Can visibility without a website visit generate phone calls?

    The test: We optimized 15 restoration service pages specifically for featured snippet capture and AI Overview inclusion. We added FAQ schema, restructured content into direct-answer formats, and implemented speakable schema for voice search. Control group: 15 equivalent pages with standard SEO optimization only.
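    Of the three changes, speakable markup is the least familiar. Here is an illustrative sketch using the schema.org SpeakableSpecification type; the page name, CSS selectors, and URL are placeholders, not the actual test pages.

```python
import json

# Illustrative speakable JSON-LD for voice search. The selectors and URL
# are placeholders; real markup points at the page's answer elements.
speakable_schema = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Water Damage Restoration FAQ",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".faq-answer", ".service-summary"],
    },
    "url": "https://example.com/water-damage-restoration",
}

snippet = json.dumps(speakable_schema, indent=2)
print(snippet)
```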

    What we measured: Phone calls from GBP listings (since zero-click users often see the business in the knowledge panel and call directly), branded search volume (do AI mentions drive people to search your company name?), and total lead volume from all sources.

    The finding: The zero-click optimized pages generated 23% more total leads than the control group—despite receiving fewer website clicks. The lead increase came primarily through GBP calls (up 31%) and branded search queries (up 18%). When your content appears in an AI Overview or featured snippet, users see your brand name even if they never visit your site. That brand impression converts later through a different channel.

    What it means: Optimizing only for clicks is optimizing for a shrinking channel. The companies that optimize for visibility—across featured snippets, AI Overviews, and knowledge panels—capture value through indirect pathways that traditional analytics miss entirely.

    Experiment 2: Content Length vs. Content Depth — The 2,000-Word Myth

    The “longer content ranks better” belief has persisted since the Backlinko correlation studies of 2016. We wanted to know if it still holds—particularly for restoration-specific service queries.

    The test: We published 20 articles targeting restoration keywords. Ten were comprehensive long-form (2,500-3,500 words). Ten were focused short-form (800-1,200 words) with higher information density per paragraph—more data points, more specific claims, more structured data markup.

    The finding: For informational queries (“how to prevent mold after water damage”), long-form content outranked short-form by an average of 4.2 positions. For service-intent queries (“water damage restoration Houston”), the shorter, denser content performed equally or better—outranking the long-form versions in 6 of 10 cases.

    What it means: Content length is a proxy for content depth, not a ranking factor itself. Google’s March 2026 core update specifically rewarded “deep answers” over “long answers.” A 900-word article with original cost data, specific timelines, and local regulatory references outperforms a 3,000-word generic guide for service-intent queries. Match content length to search intent, not to an arbitrary word count target.

    Experiment 3: AI-Generated vs. AI-Assisted vs. Human-Only Content

    Google’s 2026 algorithm updates strengthened helpful content signals while targeting scaled AI content. But “AI content” is a spectrum. We tested three production methods head-to-head.

    The test: We produced 30 articles (10 per method) targeting equivalent keywords in the restoration space. Group A: entirely AI-generated with light editing. Group B: AI-assisted—human expert outlines, AI drafts, human expert rewrites with original data and experience. Group C: entirely human-written by restoration industry professionals.

    Results after 90 days:

    Group A (AI-generated) performed worst overall. Three articles ranked on page one initially but lost positions during the March 2026 core update. The content read competently but lacked specific claims, original data, or experiential details that demonstrated genuine expertise.

    Group B (AI-assisted) performed best. Eight of ten articles achieved page-one rankings. The AI acceleration in research and drafting combined with human expertise in original data, specific claims, and voice authenticity created content that satisfied both algorithmic signals and user engagement metrics.

    Group C (human-only) performed second-best. Seven of ten achieved page-one rankings. Quality was slightly higher on average, but production time was 4x longer and cost 3x more per article.

    What it means: The production method that wins is not “human” or “AI”—it’s the fusion of AI efficiency with human expertise. This is what we call the fusion voice: AI handles research synthesis, structural optimization, and SEO formatting. Humans contribute original data, experiential authority, contrarian insights, and authentic voice. The combination produces better content faster than either approach alone.

    Experiment 4: Schema Markup’s Actual Impact on Restoration Rankings

    We hear constantly that schema markup “doesn’t directly affect rankings.” We wanted to measure its indirect effects with precision.

    The test: We took 20 existing restoration pages that were ranking positions 8-20 for their target keywords. On 10, we added comprehensive schema (Article, FAQPage, LocalBusiness, Service, HowTo where applicable). The other 10 remained unchanged as controls.

    Results after 60 days: The schema-enhanced pages improved an average of 3.1 positions. Seven of ten gained rich results (FAQ dropdowns, how-to cards) in search. The control group moved an average of 0.4 positions—within normal fluctuation range.

    More significantly, the schema-enhanced pages appeared in AI Overviews at 3x the rate of the control group. Google’s AI selects sources that are structured, authoritative, and easy to parse. Schema markup makes your content all three.

    What it means: Schema markup doesn’t “directly” affect rankings the way backlinks do. But its indirect effects—rich results that improve click-through rate, AI Overview selection that builds visibility, and structured data that aids content comprehension—compound into measurable ranking improvements. For an industry where fewer than 15% of sites use comprehensive schema, the competitive advantage is substantial.
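    For reference, LocalBusiness markup of the kind applied to the test pages can look like the sketch below; every field value is a placeholder, not a real company.

```python
import json

# Illustrative LocalBusiness JSON-LD; all values are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Restoration Co.",
    "telephone": "+1-555-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Denver",
        "addressRegion": "CO",
        "postalCode": "80202",
    },
    "areaServed": "Denver metro",
    "url": "https://example.com",
}

snippet = json.dumps(local_business, indent=2)
print(snippet)
```

    Keeping the name, address, and phone fields identical to the Google Business Profile listing matters; inconsistent NAP data undercuts the trust signal the markup is meant to send.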

    What’s Next in The Lab

    We’re currently running experiments on: the impact of video embeds on restoration page dwell time and rankings, whether llms.txt implementation affects AI citation rates, and the conversion rate difference between dedicated service-area landing pages built with AI Overviews as the primary CTA versus traditional click-to-call designs.

    Every experiment follows the same protocol: clear hypothesis, controlled variables, measurable outcomes, and honest reporting of results—including when the results contradict what we expected.

    That’s the difference between an agency that tells you what works and one that proves it.