Category: Agency Playbook

How we build, scale, and run a digital marketing agency. Behind the scenes, systems, processes.

  • DataForSEO + Claude: The Keyword Research Stack That Replaced 3 Tools

    We used to pay for SEMrush, Ahrefs, and Moz. Then we discovered we could use the DataForSEO API with Claude to do better keyword research, at 1/10th the cost, with more control over the analysis.

    The Old Stack (and Why It Broke)
    We were paying $600+ monthly across three platforms. Each had different strengths—Ahrefs for backlink data, SEMrush for SERP features, Moz for authority metrics—but also massive overlap. And none of them understood our specific context: managing 19 WordPress sites with different verticals and different SEO strategies.

    The tools gave us data. Claude gives us intelligence.

    DataForSEO + Claude: The New Stack
    DataForSEO is an API that pulls real search data. We hit their endpoints for:
    – Keyword search volume and trend data
    – SERP features (snippets, People Also Ask, related searches)
    – Ranking difficulty and opportunity scores
    – Competitor keyword analysis
    – Local search data (essential for restoration verticals)

    We pay roughly $30-40/month in API credits for enough calls to cover all 19 sites’ keyword research. That’s it.
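    A minimal sketch of how we hit the search-volume endpoint, using only the Python standard library. The endpoint path follows DataForSEO’s v3 API; the credentials and helper names are illustrative:

```python
import base64
import json
import urllib.request

API_BASE = "https://api.dataforseo.com/v3"

def build_volume_task(keywords, location="United States", language="English"):
    """Build one task payload for the Google Ads search-volume endpoint."""
    return [{
        "keywords": keywords,
        "location_name": location,
        "language_name": language,
    }]

def fetch_search_volume(login, password, keywords):
    """POST the task with basic auth and return the parsed JSON response."""
    payload = json.dumps(build_volume_task(keywords)).encode()
    token = base64.b64encode(f"{login}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{API_BASE}/keywords_data/google_ads/search_volume/live",
        data=payload,
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

    Per-call billing means this only costs money when it actually runs, which is what keeps the monthly bill so low.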

    Where Claude Comes In
    DataForSEO gives us raw data. Claude synthesizes it into strategy.

    I’ll ask: “Given the keyword data for ‘water damage restoration in Houston,’ show me the 5 best opportunities to rank where we can compete immediately.”

    Claude looks at:
    – Search volume
    – Current top 10 (from DataForSEO)
    – Our existing content
    – Difficulty-to-opportunity ratio
    – PAA questions and featured snippet targets
    – Local intent signals

    It returns prioritized keyword clusters with actionable insights: “These 3 keywords have 100-500 monthly searches, lower competition in local SERPs, and People Also Ask questions you can answer in depth.”
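    As a toy illustration of the difficulty-to-opportunity ratio above (the weighting here is invented for the example, not our production formula):

```python
def opportunity_score(kw):
    """Reward search volume, penalize difficulty (0-100 scale)."""
    return kw["volume"] / (kw["difficulty"] + 1)

def top_opportunities(keywords, n=5):
    """Return the n best keyword dicts by score, highest first."""
    return sorted(keywords, key=opportunity_score, reverse=True)[:n]
```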

    Competitive Analysis Without the Black Box
    Instead of trusting a platform’s opaque “difficulty score,” we use Claude to analyze actual SERP data:

    – What’s the common word count in top results?
    – How many have video content? Backlinks?
    – What schema markup are they using?
    – Are they targeting the same user intent or different angles?
    – What questions do they answer that we don’t?

    This gives us real competitive insight, not a number from 1-100.
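    A sketch of the kind of SERP summary Claude reasons over, assuming each top-10 result has already been reduced to a small dict (the field names are hypothetical):

```python
from statistics import median

def serp_snapshot(results):
    """Summarize a top-10 SERP: typical length, video share, schema in play."""
    return {
        "median_word_count": median(r["word_count"] for r in results),
        "video_share": sum(r["has_video"] for r in results) / len(results),
        "schema_in_use": sorted({t for r in results for t in r["schema_types"]}),
    }
```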

    The Workflow
    1. Give Claude a target keyword and our target site
    2. Claude queries DataForSEO API for volume, difficulty, SERP data
    3. Claude pulls our existing content on related topics
    4. Claude analyzes the competitive landscape
    5. Claude recommends specific keywords with strategy recommendations
    6. I approve the targets, Claude drafts the content brief
    7. The brief goes to our content pipeline

    This entire workflow happens in 10 minutes. With the old tools, it took 2 hours of hopping between platforms.

    Cost and Scale
    DataForSEO bills per API call, not per “seat” or “account.” We run roughly 500 keyword research queries per month across all 19 sites. Cost: ~$30-40. Traditional tools charge the same flat fee regardless of usage.

    As we scale content, our tool cost stays flat. With SEMrush, we’d hit overages or need higher plans.

    The Limitations (and Why We Accept Them)
    DataForSEO doesn’t have the 5-year historical trend data that Ahrefs does. We don’t get detailed backlink analysis. We don’t have a competitor tracking dashboard.

    But here’s the truth: we never used those features. We needed keyword opportunity identification and competitive insight. DataForSEO + Claude does that better than expensive platforms because Claude can reason about the data instead of just displaying it.

    What This Enables
    – Continuous keyword research (no tool budget constraints)
    – Smarter targeting (Claude reasons about intent)
    – Faster decisions (10 minutes instead of 2 hours)
    – Transparent methodology (we see exactly how decisions are made)
    – Scalable to all 19 sites simultaneously

    If you’re paying for three SEO platforms, you’re probably paying for one platform and wasting the other two. Try DataForSEO + Claude for your next keyword research cycle. You’ll get more actionable intelligence and spend less than a single month of your current setup.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "DataForSEO + Claude: The Keyword Research Stack That Replaced 3 Tools",
      "description": "DataForSEO API + Claude replaces $600/month in SEO tools with $30/month API costs and better analysis. Here’s the keyword research workflow we built.",
      "datePublished": "2026-03-30",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/dataforseo-claude-the-keyword-research-stack-that-replaced-3-tools/"
      }
    }

  • I Built a Purchasing Agent That Checks My Budget Before It Buys

    We built a Claude MCP server (BuyBot) that can execute purchases across all our business accounts, but it requires approval from a centralized budget authority before spending a single dollar. It’s changed how we handle expenses, inventory replenishment, and vendor management.

    The Problem
    We manage 19 WordPress sites, each with different budgets. Some are client accounts, some are owned outright, some are experiments. When we need to buy something—cloud credits, plugins, stock images, tools—we were doing it manually, which meant:

    – Forgetting which budget to charge it to
    – Overspending on accounts with limits
    – Having no audit trail of purchases
    – Spending time on transaction logistics instead of work

    We needed an agent that understood budget rules and could route purchases intelligently.

    The BuyBot Architecture
    BuyBot is an MCP server that Claude can call. It has access to:
    Account registry: All business accounts and their assigned budgets
    Spending rules: Per-account limits, category constraints, approval thresholds
    Payment methods: Which credit card goes with which business unit
    Vendor integrations: APIs for Stripe, Shopify, AWS, Google Cloud, etc.

    When I tell Claude “we need to renew our Shopify plan for the retail client,” it:

    1. Looks up the retail client account and its monthly budget
    2. Checks remaining budget for this cycle
    3. Queries current Shopify pricing
    4. Runs the purchase cost against spending rules
    5. If under the limit, executes the transaction immediately
    6. If over the limit or above an approval threshold, requests human approval
    7. Logs everything to a central ledger
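    Steps 2-6 boil down to a small decision function. A simplified sketch with the $50 routine-expense limit hard-coded for illustration:

```python
def decide(amount, remaining_budget, category_approved, auto_limit=50):
    """Return 'execute' for small, approved, in-budget purchases;
    everything else escalates to a human approval request."""
    if amount <= auto_limit and category_approved and amount <= remaining_budget:
        return "execute"
    return "needs_approval"
```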

    The Approval Engine
    Not every purchase needs me. Small routine expenses (under $50, category-approved, within budget) execute automatically. Anything bigger hits a Slack notification with full context:

    “Purchasing Agent is requesting approval:
    – Item: AWS credits
    – Amount: $2,000
    – Account: Restoration Client A
    – Current Budget Remaining: $1,200
    – Request exceeds account budget by $800
    – Suggested: Approve from shared operations budget”

    I approve in Slack, BuyBot checks my permissions, and the purchase executes. Full audit trail.

    Multi-Business Budget Pooling
    We manage 7 different business units with different profitability levels. Some months Unit A has excess budget, Unit C is tight. BuyBot has a “borrow against future month” option and a “pool shared operations budget” option.

    If the restoration client needs $500 in cloud credits and their account is at 90% utilization, BuyBot can automatically route the charge to our shared operations account (with logging) and rebalance next month. It’s smart enough to not create budget crises.
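    The pooling logic reduces to a routing decision, roughly (function and account names are illustrative):

```python
def route_charge(amount, account_remaining, shared_remaining):
    """Pick a funding source: the account itself, else shared ops.

    A shared-ops charge is logged and rebalanced against the
    originating account the following month.
    """
    if amount <= account_remaining:
        return ("account", amount)
    if amount <= shared_remaining:
        return ("shared_ops", amount)
    return ("declined", 0)
```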

    The Vendor Integration Layer
    BuyBot doesn’t just handle internal budget logic—it understands vendor APIs. When we need stock images, it:
    – Checks which vendor is in our approved list
    – Gets current pricing from their API
    – Loads image requirements from the request
    – Queries their library
    – Purchases the right licenses
    – Downloads and stores the files
    – Updates our inventory system

    All in one agent call. No manual vendor portal logins, no copy-pasting order numbers.

    The Results
    – Spending transparency: I see all purchases in one ledger
    – Budget discipline: You can’t spend money that isn’t allocated
    – Automation: Routine expenses happen without my involvement
    – Audit trail: Every transaction has context, approval, and timestamp
    – Intelligent routing: Purchases go to the right account automatically

    What This Enables
    This is the foundation for fully autonomous expense management. In the next phase, BuyBot will:
    – Predict inventory needs and auto-replenish
    – Optimize vendor selection based on cost and delivery
    – Consolidate purchases across accounts for bulk discounts
    – Alert me to unusual spending patterns

    The key insight: AI agents don’t need unrestricted access. Give them clear budget rules, approval thresholds, and audit requirements, and they can handle purchasing autonomously while maintaining complete financial control.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "I Built a Purchasing Agent That Checks My Budget Before It Buys",
      "description": "BuyBot is an MCP server that executes purchases autonomously while enforcing budget rules, approval gates, and multi-business account logic. Here’s how it",
      "datePublished": "2026-03-30",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/i-built-a-purchasing-agent-that-checks-my-budget-before-it-buys/"
      }
    }

  • The WP Proxy Pattern: How We Route 19 WordPress Sites Through One Cloud Run Endpoint

    Managing 19 WordPress sites means managing 19 IP addresses, 19 DNS records, and 19 potential points of blocking, rate limiting, and geo-restriction. We solved it by routing all traffic through a single Google Cloud Run proxy endpoint that intelligently distributes requests across our estate.

    The Problem We Solved
    Some of our WordPress sites host sensitive content in regulated verticals. Others are hitting API rate limits from data providers. A few are in restrictive geographic regions. Managing each site’s network layer separately was chaos—different security rules, different rate limit strategies, different failure modes.

    We needed one intelligent proxy that could:
    – Route traffic to the correct backend based on request properties
    – Handle rate limiting intelligently (queue, retry, or serve cached content)
    – Manage geographic restrictions transparently
    – Pool API quotas across sites
    – Provide unified logging and monitoring

    Architecture: The Single Endpoint Pattern
    We run a Node.js Cloud Run service on a single stable IP. All 19 WordPress installations point their external API calls, webhook receivers, and cross-site requests through this endpoint.

    The proxy reads the request headers and query parameters to determine the destination site. Instead of individual sites making direct calls to APIs (which triggers rate limits), requests aggregate at the proxy level. We batch and deduplicate before sending to the actual API.

    How It Works in Practice
    Example: 5 WordPress sites need weather data for their posts. Instead of 5 separate API calls to the weather service (hitting their rate limit 5 times), the proxy receives 5 requests, deduplicates them to 1 actual API call, and distributes the result to all 5 sites. We’re using 1/5th of our quota.
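    The dedup step can be sketched as a grouping pass over pending requests, so one upstream call fans back out to every waiting site. (The real proxy is a Node.js service; this is a simplified Python sketch.)

```python
import json

def dedupe(pending):
    """Collapse identical upstream API requests from many sites into one call.

    Each request is a (site_id, api, params) tuple. Returns a mapping from
    a canonical call key to the list of sites waiting on that response.
    """
    grouped = {}
    for site_id, api, params in pending:
        # Sort keys so logically identical params hash to the same call key.
        key = (api, json.dumps(params, sort_keys=True))
        grouped.setdefault(key, []).append(site_id)
    return grouped
```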

    For blocked IPs or geographic restrictions, the proxy handles the retry logic. If a destination API rejects our request due to IP reputation, the proxy can queue it, try again from a different outbound IP (using Cloud NAT), or serve cached results until the block lifts.

    Rate Limiting Strategy
    The proxy implements a weighted token bucket algorithm. High-priority sites (revenue-generating clients) get higher quotas. Background batch processes (like SEO crawls) use overflow capacity during off-peak hours. API quota is a shared resource, allocated intelligently instead of wasted on request spikes.
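    A minimal weighted token bucket, where a site’s weight scales its refill rate (illustrative tuning, not our production parameters):

```python
import time

class WeightedTokenBucket:
    """Per-site rate limiter; higher-weight sites refill tokens faster."""

    def __init__(self, rate_per_sec, capacity, weight=1.0):
        self.rate = rate_per_sec * weight
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        """Refill based on elapsed time, then try to spend `cost` tokens."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```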

    Logging and Observability
    Every request hits Cloud Logging. We track:
    – Which site made the request
    – Which API received it
    – Response time and status
    – Cache hits vs. misses
    – Rate limit decisions

    This single source of truth lets us see patterns across all 19 sites instantly. We can spot which integrations are broken, which are inefficient, and which are being overused.

    The Implementation Cost
    Cloud Run bills per request. Our proxy costs about $50/month because it processes relatively lightweight metadata: headers, routing decisions, and the occasional payload transformation. At that volume, the infrastructure barely registers on the bill.

    Setup time was about 2 weeks to write the routing logic, test failover scenarios, and migrate all 19 sites. The ongoing maintenance is minimal—mostly adding new API routes and tuning rate limit parameters.

    Why This Matters
    If you’re running more than a handful of WordPress sites that make external API calls, a unified proxy isn’t optional—it’s the difference between efficient resource usage and chaos. It collapses your operational blast radius from 19 separate failure modes down to one well-understood system.

    Plus, it’s the foundation for every other optimization we’ve built: cross-site caching, intelligent quota pooling, and unified security policies. One endpoint, one place to think about performance and reliability.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The WP Proxy Pattern: How We Route 19 WordPress Sites Through One Cloud Run Endpoint",
      "description": "How we route all API traffic from 19 WordPress sites through a single Cloud Run proxy—collapsing complexity and eliminating rate limit chaos.",
      "datePublished": "2026-03-30",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-wp-proxy-pattern-how-we-route-19-wordpress-sites-through-one-cloud-run-endpoint/"
      }
    }

  • The Context Bleed Problem: What Happens When AI Agents Inherit Each Other’s Memory

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The Context Bleed Problem: What Happens When AI Agents Inherit Each Others Memory",
      "description": "When multi-agent pipelines pass full conversation history across handoffs, downstream agents inherit context they were never meant to have. Here is why that is ",
      "datePublished": "2026-03-23",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-context-bleed-problem-what-happens-when-ai-agents-inherit-each-others-memory/"
      }
    }

  • Why We Stopped Calling Ourselves a Restoration Marketing Agency

    We built our name in restoration marketing. We were the agency that understood adjusters, knew the difference between mitigation and remediation, and could turn a 12-keyword site into a 340-keyword authority in six months.

    Then something happened. A cold storage company in California’s Central Valley asked if we could do the same thing for them. Then a luxury lending firm in Beverly Hills. Then a comedy club in Manhattan. Then an automotive sales training company in Ohio.

    Every time, we brought the same playbook: deep vertical research, persona-driven content architecture, SEO/AEO/GEO optimization, and relentless measurement. Every time, it worked. Not because we understood cold storage logistics or luxury asset lending – we didn’t, at first – but because the underlying system was industry-agnostic.

    The Framework Is the Product

    Here’s what most agencies won’t tell you: the tactics that work in restoration marketing aren’t restoration-specific. Schema markup doesn’t care about your industry. Entity authority doesn’t care whether you’re optimizing for “water damage restoration” or “temperature-controlled warehousing.” The Google algorithm doesn’t have a vertical preference.

    What matters is the system. Our content intelligence pipeline – the one that identifies gaps, generates persona variants, injects schema, builds internal link architecture, and optimizes for AI citation – works the same way whether we’re deploying it on a roofing contractor’s site or a FinTech lender’s blog.

    The 23-Site Laboratory

    Right now, we manage 23 WordPress sites across restoration, insurance, lending, entertainment, food logistics, healthcare facilities, ESG compliance, and more. Each site is a live experiment. What we learn on one site feeds every other site in the network.

    When Google’s March 2026 core update shifted E-E-A-T signals, we saw it across 23 different verticals simultaneously. We didn’t need to wait for an industry case study – we were the case study, in real time, across every vertical.

    That cross-pollination effect is something a single-vertical agency can never replicate. Our cold storage SEO strategy borrows from our restoration content architecture. Our comedy club’s AEO optimization uses the same FAQ schema pattern that wins featured snippets for Beverly Hills luxury loans.

    Restoration Is Still Home Base

    We haven’t abandoned restoration. It’s still our deepest vertical, the one where we’ve generated the most data, run the most experiments, and delivered the most measurable results. But it’s no longer the ceiling. It’s the foundation.

    If your industry has a search bar and your competitors have websites, we already know how to outrank them. The vertical doesn’t matter. The system does.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Why We Stopped Calling Ourselves a Restoration Marketing Agency",
      "description": "We built our reputation in restoration. Then we realized the frameworks that tripled restoration revenue work in every industry. Here’s why we stopped nic",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/why-we-stopped-calling-ourselves-restoration-marketing-agency/"
      }
    }

  • How to Run 7 Businesses From One Notion Dashboard

    The Problem With Running Multiple Businesses

    When you operate seven companies across different industries – restoration, luxury lending, comedy streaming, cold storage, automotive training, and digital marketing – the natural instinct is to build seven separate operating systems. That instinct will destroy you.

    Separate project management tools, separate CRMs, separate content calendars. Before you know it, you’re spending more time switching contexts than actually building. We learned this the hard way across a restoration company, a luxury lending firm, a live comedy platform, a cold storage facility, an automotive training firm, and Tygart Media.

    The fix wasn’t hiring more people. It was architecture. One Notion workspace, six databases, and a triage system that routes every task, every client communication, and every content piece to the right place without human sorting.

    The 6-Database Architecture That Powers Everything

    Our Notion Command Center runs on exactly six databases that talk to each other. Not sixty. Not six per company. Six total.

    The Master Task Database handles every action item across all seven businesses. Each task gets a Company property, a Priority score, and an Owner. When a new task comes in – whether it’s a client request from a luxury asset lender or a content deadline for a storm protection company – it enters the same pipeline.

    The Client Portal Database creates air-gapped views so each client sees only their work. A restoration company in Houston never sees data from a luxury lender in Beverly Hills. Same database, completely isolated views.

    The Content Calendar Database manages editorial across 23 WordPress sites. Every article brief, every publish date, every SEO target lives here. When we run our AI content pipeline, it checks this database to avoid duplicate topics.

    The Agent Registry, Revenue Tracker, and Meeting Notes databases round out the system. Together, they give us a single pane of glass across a portfolio that would otherwise require a dozen tools and a full-time operations manager.
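    For a concrete flavor, here is the query body an agent might send to Notion’s database-query endpoint (POST /v1/databases/{id}/query) to pull one company’s tasks. The “Company” and “Priority” property names match the setup above; the helper itself is a hypothetical sketch:

```python
def company_task_query(company, priority_prop="Priority"):
    """Filter-and-sort body for Notion's database-query endpoint.

    Assumes the Master Task Database has a "Company" select property
    and a numeric "Priority" property, as described above.
    """
    return {
        "filter": {"property": "Company", "select": {"equals": company}},
        "sorts": [{"property": priority_prop, "direction": "descending"}],
    }
```

    Because every business routes through the same database, one helper like this serves all seven companies; only the filter value changes.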

    Why Single-Workspace Architecture Beats Multi-Tool Stacks

    The average small business uses 17 different SaaS tools. When you run seven businesses, that number can balloon to 50+ subscriptions. Beyond the cost, the real killer is context fragmentation – critical information lives in five different places, and no one knows which version is current.

    A single Notion workspace eliminates this entirely. Every team member, contractor, and AI agent pulls from the same source of truth. When our Claude agents generate content briefs, they query the same database that tracks client deliverables. When we review monthly revenue, it’s the same workspace where we plan next month’s campaigns.

    This isn’t about Notion specifically – it’s about the principle that operational architecture should consolidate, not fragment. We chose Notion because its database-relation model maps naturally to multi-entity operations.

    The Custom Agent Layer

    The real leverage comes from building AI agents that operate inside this architecture. We run Claude-powered agents that can read our Notion databases, check WordPress site status, generate content briefs, and triage incoming tasks – all without human intervention for routine operations.

    Each agent has a specific scope: one handles content pipeline operations, another monitors SEO performance across all 23 sites, and a third manages social media scheduling through Metricool. They don’t replace human judgment for strategic decisions, but they eliminate 80% of the repetitive coordination work that used to eat 15+ hours per week.

    The key insight: agents are only as good as the data architecture they sit on top of. Build the databases right, and the automation layer practically writes itself.

    Frequently Asked Questions

    Can Notion really handle enterprise-level multi-business operations?

    Yes, with proper architecture. The limiting factor isn’t Notion’s capability – it’s how you structure your databases. Flat databases with 50 properties break down fast. Relational databases with clean property schemas scale to thousands of entries across multiple companies without performance issues.

    How do you keep client data separate across businesses?

    We use Notion’s filtered views and relation properties to create air-gapped client portals. Each client view is filtered by Company and Client properties, so a restoration client never sees lending data. It’s the same database, but the views are completely isolated.

    What happens when one business needs a different workflow?

    Every business has unique needs, but the underlying data model stays consistent. We handle workflow variations through database views and templates, not separate databases. A restoration project and a luxury lending deal both flow through the same task pipeline with different templates and automations attached.

    How many people can use this system before it breaks?

    We currently have 12+ users across all businesses plus AI agents accessing the workspace simultaneously. Notion handles this well. The bottleneck isn’t users – it’s database design. Keep your relations clean and your property counts reasonable, and the system scales.

    The Bottom Line

    Running multiple businesses doesn’t require multiple operating systems. It requires one well-architected system that treats each business as a filtered view of a unified dataset. Build the architecture once, and every new business you add becomes a configuration change – not a rebuild. If you’re drowning in tools and context-switching, the fix isn’t better tools. It’s better architecture.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "How to Run 7 Businesses From One Notion Dashboard",
      "description": "How one Notion workspace with six databases runs seven businesses across restoration, lending, comedy, and marketing.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/how-to-run-7-businesses-from-one-notion-dashboard/"
      }
    }

  • The AI Stack That Replaced Our $12K/Month Tool Budget

    What We Were Paying For (And Why We Stopped)

    At our peak tool sprawl, Tygart Media was spending over twelve thousand dollars per month on SaaS subscriptions. SEO platforms, content generation tools, social media schedulers, analytics dashboards, CRM integrations, and monitoring services. Every tool solved one problem and created two more – data silos, redundant features, and the constant overhead of managing logins, billing, and updates.

    The turning point came when we realized that 80% of what these tools did could be replicated by a combination of local AI models, open-source software, and well-written automation scripts. Not a theoretical possibility – we actually built it and measured the results over 90 days.

    The Local AI Models That Do the Heavy Lifting

    We run Ollama on a standard laptop – no GPU cluster, no cloud compute bills. The models handle content drafting, keyword analysis, meta description generation, and internal link suggestions. For tasks requiring deeper reasoning, we route to Claude via the Anthropic API, which costs pennies per article compared to enterprise content platforms.

    The cost comparison is stark: a single enterprise SEO tool charges $300-500/month per site. We manage 23 sites. Our AI stack – running locally – handles the same keyword tracking, content gap analysis, and optimization recommendations for the cost of electricity.

    The models we rely on most: Llama 3.1 for fast content drafts, Mistral for technical analysis, and Claude for complex reasoning tasks like content strategy and schema generation. Each model has a specific role, and none of them send a monthly invoice.
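    Calling a local model through Ollama’s REST API is a one-request affair. A standard-library sketch against the default local endpoint (the model name and prompt are examples):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def build_request(model, prompt):
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def draft(model, prompt):
    """Send the prompt to the local model and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

    Swap the model name to route a task to a different local model; the calling code never changes.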

    The Automation Layer: PowerShell, Python, and Cloud Run

    AI models alone don’t replace tools – you need the orchestration layer that connects them to your actual workflows. We built ours on three technologies:

    PowerShell scripts handle Windows-side automation: file management, API calls to WordPress sites, batch processing of images, and scheduling tasks. Python scripts handle the heavier data work: SEO signal extraction, content analysis, and reporting. Google Cloud Run hosts the few services that need to be always-on, like our WordPress API proxy and our content publishing pipeline.

    Total cloud cost: under $50/month on Google Cloud’s free tier and minimal compute. Compare that to the $12K we were spending on tools that did less.

    What We Still Pay For (And Why)

    We didn’t eliminate every subscription. Some tools earn their keep:

    Metricool ($50/month) handles social media scheduling across multiple brands – the API integration alone saves hours. DataForSEO (pay-per-use) provides raw SERP data that would be impractical to scrape ourselves. Call Tracking Metrics handles call attribution for restoration clients where phone leads are the primary conversion.

    The principle: pay for data you can’t generate and distribution you can’t replicate. Everything else – content creation, SEO analysis, reporting, optimization – runs on our own stack.

    The 90-Day Results

    After 90 days of running the replacement stack across all client sites and our own properties, the numbers told a clear story. Content output increased by 340%. SEO performance held steady or improved across 21 of 23 sites. Total monthly tool spend dropped from $12,200 to under $800.

    The hidden benefit: ownership. When your tools are your own scripts and models, no vendor can raise prices, change APIs, or sunset features. You own the entire stack.

    Frequently Asked Questions

    Do you need technical skills to build a local AI stack?

    You need basic comfort with command-line tools and scripting. If you can install software and edit a configuration file, you can run Ollama. The automation layer requires Python or PowerShell knowledge, but most scripts are straightforward once the architecture is in place.

    Can local AI models really match enterprise SEO tools?

    For content generation, optimization recommendations, and gap analysis – yes. For real-time SERP tracking and backlink monitoring, you still need external data sources like DataForSEO. The key is understanding which tasks need live data and which can run on local intelligence.

    What about reliability compared to SaaS tools?

    SaaS tools go down too. Local tools run when your machine runs. For cloud-hosted components, Google Cloud Run has a 99.95% uptime SLA. Our stack has been more reliable than the vendor tools it replaced.

    How long did the migration take?

    About six weeks of active development to replace the core tools, plus another month of refinement. The investment pays for itself in the first billing cycle.

    Build or Buy? Build.

    The era of needing expensive SaaS tools for every marketing function is ending. Local AI, open-source automation, and minimal cloud infrastructure can replace the majority of your tool budget while giving you more control, better customization, and zero vendor lock-in.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The AI Stack That Replaced Our $12K/Month Tool Budget",
      "description": "How we replaced $12K/month in SaaS tools with local AI models, PowerShell automation, and minimal cloud infrastructure.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-ai-stack-that-replaced-our-12k-month-tool-budget/"
      }
    }

  • What Happens When Claude Runs Your WordPress for 90 Days

    What Happens When Claude Runs Your WordPress for 90 Days

    The Experiment: Full AI Site Management

    In January 2026, we gave Claude – Anthropic’s AI assistant – the keys to our WordPress operation. Not just content generation, but the full stack: SEO audits, content gap analysis, taxonomy management, schema injection, internal linking, meta optimization, and publishing. Across 23 sites. For 90 days.

    This wasn’t a theoretical exercise. We built Claude into our operational pipeline through custom skills, WordPress REST API connections, and a GCP proxy layer that routes all site management through Google Cloud. Every optimization, every published article, every schema update was executed by Claude with human oversight on strategy and final approval.

    What Claude Actually Did

    During the 90-day period, Claude executed over 2,400 individual WordPress operations across all sites. The breakdown: 847 SEO meta refreshes, 312 new articles published, 156 schema markup injections, 94 taxonomy reorganizations, and 1,000+ internal link additions.

    Each operation followed a skill-based protocol. Our wp-seo-refresh skill handles on-page SEO. The wp-schema-inject skill adds structured data. The wp-interlink skill builds the internal link graph. Claude doesn’t freestyle – it follows proven playbooks that encode our SEO, AEO, and GEO best practices.

    The Results That Surprised Us

    Organic traffic across all 23 sites increased 47% over the 90-day period. The more interesting metric was consistency. Before Claude, our sites had wildly uneven optimization – some posts had full schema markup and internal links, others had nothing. After 90 days, every post on every site met the same baseline quality standard.

    The sites that improved most were the ones neglected longest. A luxury lending firm saw a 120% increase in organic sessions after Claude refreshed every post's metadata, added FAQ schema, and built the internal link structure. A restoration company went from 12 ranking keywords to over 340.

    Well-optimized sites saw smaller but meaningful gains – typically 15-25% improvements in click-through rates from better meta descriptions and featured snippet capture.

    What Claude Can’t Do (Yet)

    AI site management has clear limitations. Claude can’t make strategic decisions about which markets to enter. It can’t conduct original customer research. It can’t judge whether content truly resonates with a human audience – it can only optimize for signals that correlate with resonance.

    We also found that AI-generated internal links sometimes prioritize SEO logic over user experience. A link that makes sense for PageRank distribution might confuse a reader. Human review improved link quality significantly.

    The right model is AI as operator, human as strategist. Claude handles the repetitive, systematic work that scales linearly with site count. Humans handle the judgment calls.

    Frequently Asked Questions

    Is it safe to give an AI access to your WordPress sites?

    We use WordPress Application Passwords with editor-level permissions – Claude can create and edit content but can’t modify site settings or access user data. All operations route through our GCP proxy with full audit logs.
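As a minimal sketch of what that editor-scoped access looks like in practice (the site URL, username, and helper names here are hypothetical, not the production setup), a draft post can be created over the REST API with nothing more than HTTP Basic auth and an Application Password:

```python
import base64
import json
import urllib.request

def wp_auth_header(user: str, app_password: str) -> dict:
    # Application Passwords use plain HTTP Basic auth; WordPress ignores the
    # spaces in the generated password, so strip them before encoding.
    token = base64.b64encode(
        f"{user}:{app_password.replace(' ', '')}".encode()
    ).decode()
    return {"Authorization": f"Basic {token}"}

def create_draft(site: str, user: str, app_password: str,
                 title: str, html: str) -> dict:
    # POST to the core posts endpoint; "draft" status keeps a human
    # approval step before anything goes live.
    req = urllib.request.Request(
        f"{site}/wp-json/wp/v2/posts",
        data=json.dumps({"title": title, "content": html,
                         "status": "draft"}).encode(),
        headers={**wp_auth_header(user, app_password),
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (hypothetical site and credentials):
# create_draft("https://example.com", "editor-bot",
#              "abcd efgh ijkl mnop", "Test post", "<p>Hello</p>")
```

Because the user carries only the editor role, even a compromised password can't touch plugins, themes, or site settings.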

    How do you prevent AI from making SEO mistakes?

    Every operation follows a validated protocol. Claude doesn’t improvise – it executes predefined skills with guardrails. Critical operations go through a review queue. We run weekly audits comparing pre- and post-optimization metrics.

    Can any business replicate this setup?

    The individual skills work on any WordPress site with REST API access. The scale advantage comes from the orchestration layer. A single-site business can start with basic Claude plus WordPress automation and expand from there.

    What’s the cost of running Claude as a site manager?

    API costs run approximately $50-100/month for our 23-site operation. The GCP proxy adds under $10/month. Compare that to a junior SEO specialist at $4,000-5,000/month handling maybe 3-5 sites.

    The Verdict After 90 Days

    We’re not going back. AI-managed WordPress isn’t a gimmick – it’s a fundamental shift in how digital operations scale. The 90-day experiment became our permanent operating model.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "What Happens When Claude Runs Your WordPress for 90 Days",
      "description": "We gave Claude full WordPress management across 23 sites for 90 days. Organic traffic rose 47%.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/what-happens-when-claude-runs-your-wordpress-for-90-days/"
      }
    }

  • 387 Cowork Sessions and Counting: What Happens When AI Becomes Your Daily Operating Partner

    387 Cowork Sessions and Counting: What Happens When AI Becomes Your Daily Operating Partner

    This Is Not a Chatbot Story

    When people hear I use AI every day, they picture someone typing questions into ChatGPT and getting answers. That’s not what this is. I’ve run 387 working sessions with Claude in Cowork mode since December 2025. Each session is a full operating environment – a Linux VM with file access, tool execution, API connections, and persistent memory across sessions.

    These aren’t conversations. They’re deployments. Content publishes. Infrastructure builds. SEO audits across 18 WordPress sites. Notion database updates. Email monitors. Scheduled tasks. Real operational work that used to require a team of specialists.

    The number 387 isn’t bragging. It’s data. And what that data reveals about how AI actually integrates into daily business operations is more interesting than any demo or product launch.

    What a Typical Session Actually Looks Like

    A session starts when I open Cowork mode and describe what I need done. Not a vague prompt – a specific operational task. “Run the content intelligence audit on the storm protection company’s site and generate 15 draft articles.” “Check all 18 WordPress sites for posts missing featured images and generate them using Vertex AI.” “Read my Gmail for VIP messages from the last 6 hours and summarize what needs attention.”

    Claude loads into a sandboxed Linux environment with access to my workspace folder, my installed skills (I have 60+), my MCP server connections (Notion, Gmail, Google Calendar, Metricool, Figma, and more), and a full bash/Python execution layer. It reads my CLAUDE.md file – a persistent memory document that carries context across sessions – and gets to work.

    A single session might involve 50-200 tool calls. Reading files, executing scripts, making API calls, writing content, publishing to WordPress, logging results to Notion. The average session runs 15-45 minutes of active work. Some complex ones – like a full site optimization pass – run over two hours.

    The Skill Layer Changed Everything

    Early sessions were inefficient. I’d explain the same process every time – how to connect to WordPress via the proxy, what format to use for articles, which Notion database to log results in. Repetitive context-setting that ate 30% of every session.

    Then I started building skills. A skill is a structured instruction file (SKILL.md) that Claude reads at the start of a session when the task matches its trigger conditions. I now have skills for WordPress publishing, SEO optimization, content generation, Notion logging, YouTube watch page creation, social media scheduling, site auditing, and dozens more.
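The exact schema of a skill file is whatever you standardize on; as a hedged illustration (the section names and trigger phrasing below are invented for this example, not the actual files), a publishing skill might look like:

```markdown
# wp-publish

## Trigger
Use when the task mentions publishing or drafting an article on a managed WordPress site.

## Prerequisites
- Look up the target site's credentials in wp-site-registry.
- Route all API calls through the GCP proxy, never direct to the host.

## Procedure
1. Format the article as HTML with H2 sections and an FAQ block.
2. POST to /wp-json/wp/v2/posts with status "draft".
3. Assign categories and tags from the site's taxonomy map.
4. Log the post ID and URL to the Notion content database.

## Guardrails
- Never set status "publish" without explicit approval.
- One article per call (the WinError 206 workaround).
```

The value is that every constraint lives in the file, not in the prompt, so session 300 behaves exactly like session 30.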

    The impact was immediate. A task that took 20 minutes of back-and-forth setup now triggers in one sentence. “Run the wp-intelligence-audit on the luxury asset lender’s site” – Claude reads the skill, loads the credentials from the site registry, connects via the proxy, pulls all posts, analyzes gaps, and generates a full report. No explanation needed. The skill contains everything.

    Building skills is the highest-leverage activity I’ve found in AI-assisted work. Every hour spent writing a skill saves 10+ hours across future sessions. At 387 sessions, the compound return is staggering.

    What 387 Sessions Taught Me About AI Workflow

    Specificity beats intelligence. The most productive sessions aren’t the ones where Claude is “smartest.” They’re the ones where I give the most specific instructions. “Optimize this post for SEO” produces mediocre results. “Run wp-seo-refresh on post 247 at a luxury asset lender.com, ensure the focus keyword is ‘luxury asset lending,’ update the meta description to 140-160 characters, and add internal links to posts 312 and 418” produces excellent results. AI amplifies clarity.

    Persistent memory is the unlock. CLAUDE.md – a markdown file that persists across sessions – is the most important file in my entire system. It contains my preferences, operational rules, business context, and standing instructions. Without it, every session starts from zero. With it, session 387 has the accumulated context of all 386 before it. This is the difference between using AI as a tool and using AI as a partner.
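The structure of such a memory file is up to you; a minimal sketch, with section headings invented for this example but contents drawn from the rules described in this article, might be:

```markdown
# CLAUDE.md

## Preferences
- Tone: direct, no filler.
- Always confirm before any destructive operation.

## Operational rules
- WordPress calls go through the GCP proxy, never direct.
- Publish one article per PowerShell call (WinError 206 fix).
- Log every published post to the Notion content database.

## Business context
- 18 WordPress sites across restoration, lending, cold storage,
  interior design, comedy, training, and technology.

## Standing instructions
- Start each session by checking the task board for open items.
```

Every hard-won fix gets written back into this file, which is how session 387 inherits the lessons of the 386 before it.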

    Batch operations reveal true ROI. Publishing one article? AI saves maybe 30 minutes. Publishing 15 articles across 3 sites with full SEO/AEO/GEO optimization, taxonomy assignment, internal linking, and Notion logging? AI saves 15+ hours. The value curve is exponential with batch size. I now default to batch operations for everything – content, audits, meta updates, image generation.

    Failures are cheap and informative. At least 40 of my 387 sessions hit significant errors – API timeouts, disk space issues, credential failures, rate limiting. Each failure taught me something that made the system more resilient. The SSH workaround. The WP proxy to avoid IP blocking. The WinError 206 fix for long PowerShell commands. Failure at high volume is the fastest path to robust systems.

    The Numbers Behind 387 Sessions

    I tracked the data because the data tells the real story:

    Content produced: Approximately 400+ articles published across 18 WordPress sites. Each article is 1,200-1,800 words, SEO-optimized, AEO-formatted with FAQ sections, and GEO-ready with entity optimization. At market rates for this quality of content, that’s roughly ,000-,000 worth of content production.

    Sites managed: 18 WordPress properties across multiple industries – restoration, luxury lending, cold storage, interior design, comedy, training, technology. Each site gets regular content, SEO audits, taxonomy fixes, schema injection, and internal linking.

    Automations built: 7 autonomous AI agents (the droid fleet), 60+ skills, 3 scheduled tasks, a GCP Compute Engine cluster running 5 WordPress sites, a Cloud Run proxy for WordPress API routing, and a Vertex AI chatbot deployment.

    Time investment: Approximately 200 hours of active session time over three months. For context, a single full-time employee working those same 200 hours could not have produced a fraction of this output, because the bottleneck isn’t thinking time – it’s execution speed. Claude executes API calls, writes code, publishes content, and processes data at machine speed. I provide direction at human speed. The combination is multiplicative.

    Why Most People Won’t Do This

    The honest answer: it requires upfront investment that most people aren’t willing to make. Building the skill library took weeks. Configuring the MCP connections, setting up the proxy, provisioning the GCP infrastructure, writing the CLAUDE.md context file – that’s real work before you see any return.

    Most people want AI to be plug-and-play. Type a question, get an answer. And for simple tasks, it is. But for operational AI – AI that runs your business processes daily – the setup cost is significant and the learning curve is real.

    The payoff, though, is not incremental. It’s categorical. I’m not 10% more productive than I was before Cowork mode. I’m operating at a fundamentally different scale. Tasks that would require hiring 3-4 specialists – content writer, SEO analyst, site admin, automation engineer – are handled in daily sessions by one person with a well-configured AI partner.

    That’s not a productivity hack. That’s a structural advantage.

    Frequently Asked Questions

    What is Cowork mode and how is it different from regular Claude?

    Cowork mode is a feature of Claude’s desktop app that gives Claude access to a sandboxed Linux VM, file system, bash execution, and MCP server connections. Regular Claude is a chat interface. Cowork mode is an operating environment where Claude can read files, run code, make API calls, and produce deliverables – not just text responses.

    How much does running 387 sessions cost?

    Cowork mode is included in the Claude Pro subscription at /month. The MCP connections (Notion, Gmail, etc.) use free API tiers. The GCP infrastructure runs about /month. Total cost for three months of operations: approximately . The value produced is orders of magnitude higher.

    Can someone replicate this without technical skills?

    Partially. The basic Cowork mode works out of the box for content creation, research, and file management. The advanced setup – custom skills, GCP infrastructure, API integrations – requires comfort with command-line tools, APIs, and basic scripting. The barrier is falling fast as skills become shareable and MCP servers become plug-and-play.

    What’s the most impactful single skill you’ve built?

    The wp-site-registry skill – a single file containing credentials and connection methods for all 18 WordPress sites. Before this skill existed, every session required manually providing credentials. After it, any wp- skill can connect to any site automatically. It turned 18 separate workflows into one unified system.
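One sensible shape for such a registry (this JSON is a hypothetical sketch, with example domains, and with secrets referenced via environment variables rather than stored inline) is a map from site slug to connection details:

```json
{
  "a-restoration-company": {
    "base_url": "https://example-restoration.com",
    "route": "gcp_proxy",
    "auth": { "user": "editor-bot", "app_password_env": "WP_RESTORATION_PW" },
    "notes": "SiteGround WAF blocks direct calls; always use the proxy."
  },
  "an-events-platform": {
    "base_url": "https://example-events.com",
    "route": "direct",
    "auth": { "user": "will@example.com", "app_password_env": "WP_EVENTS_PW" },
    "notes": "Username is an email address, not a display name."
  }
}
```

Any wp- skill can then resolve a slug to a URL, a routing method, and a credential lookup in one step.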

    What Comes Next

    Session 387 is not a milestone. It’s a Tuesday. The system compounds. Every skill I build makes future sessions faster. Every failure I fix makes the system more resilient. Every batch I run produces data that informs the next batch.

    The question I get most often is “where do you start?” The answer is boring: start with one task you do repeatedly. Build one skill for it. Run it 10 times. Then build another. By session 50, you’ll have a system. By session 200, you’ll have an operating partner. By session 387, you’ll wonder how you ever worked without one.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "387 Cowork Sessions and Counting: What Happens When AI Becomes Your Daily Operating Partner",
      "description": "I’ve run 387 Cowork sessions with Claude in three months. Not chatbot conversations – full working sessions that build skills, publish content, mana",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/387-cowork-sessions-and-counting-what-happens-when-ai-becomes-your-daily-operating-partner/"
      }
    }

  • I Indexed 468 Files Into a Local Vector Database. Now My Laptop Answers Questions About My Business.

    I Indexed 468 Files Into a Local Vector Database. Now My Laptop Answers Questions About My Business.

    The Problem With Having Too Many Files

    I have 468 files that define how my businesses operate. Skill files that tell AI how to connect to WordPress sites. Session transcripts from hundreds of Cowork conversations. Notion exports. API documentation. Configuration files. Project briefs. Meeting notes. Operational playbooks.

    These files contain everything – credentials, workflows, decisions, architecture diagrams, troubleshooting histories. The knowledge is comprehensive. The problem is retrieval. When I need to remember how I configured the WP proxy, or what the resolution was for that SiteGround blocking issue three months ago, or which Notion database stores client portal data – I’m grep-searching through hundreds of files, hoping I remember the right keyword.

    Grep works when you know exactly what you’re looking for. It fails completely when you need to ask a question like “what was the workaround we used when SSH broke on the knowledge cluster VM?” That’s a semantic query. It requires understanding, not string matching.

    So I built a local vector search system. Every file gets chunked, embedded into vectors using a local model, stored in a local database, and queried with natural language. My laptop now answers questions about my own business operations – instantly, accurately, and without sending any data to the cloud.

    The Architecture: Ollama + ChromaDB + Python

    The stack is deliberately minimal. Three components, all running locally, zero cloud dependencies.

    Ollama with nomic-embed-text handles the embedding. This is a 137M parameter model specifically designed for text embeddings – turning chunks of text into 768-dimensional vectors that capture semantic meaning. It runs locally on my laptop, processes about 50 chunks per second, and produces embeddings that rival OpenAI’s ada-002 for retrieval tasks. The entire model is 274MB on disk.

    ChromaDB is the vector database. It’s an open-source, embedded vector store that runs as a Python library – no server process, no Docker container, no infrastructure. Data is persisted to a local directory. The entire 468-file index, with all embeddings and metadata, takes up 180MB on disk. Queries return results in under 100 milliseconds.

    A Python script ties it together. The indexer walks through designated directories, reads each file, splits it into chunks of ~500 tokens with 50-token overlap, generates embeddings via Ollama, and stores them in ChromaDB with metadata (file path, chunk number, file type, last modified date). The query interface takes a natural language question, embeds it, searches for the 5 most similar chunks, and returns the relevant passages with source attribution.
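A stripped-down version of that indexer might look like the following. The endpoint URL, collection name, and word-based chunker are simplifications for this sketch; the real pipeline counts tokens and stores richer metadata:

```python
import json
import urllib.request
from pathlib import Path

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local endpoint

def embed(text: str, model: str = "nomic-embed-text") -> list:
    """Embed one chunk of text via the local Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def chunk(text: str, size: int = 500, overlap: int = 50) -> list:
    """Word-based chunking with overlap (a real pipeline counts tokens)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words), 1), step)]

def index_directory(root: str, db_path: str = "./chroma_index") -> None:
    import chromadb  # imported lazily so the helpers above work without it
    client = chromadb.PersistentClient(path=db_path)
    col = client.get_or_create_collection("ops_knowledge")
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for i, piece in enumerate(chunk(text)):
            col.add(
                ids=[f"{path}:{i}"],          # stable ID: file path + chunk index
                embeddings=[embed(piece)],
                documents=[piece],
                metadatas=[{"path": str(path), "chunk": i,
                            "mtime": path.stat().st_mtime}],
            )
```

ChromaDB's `PersistentClient` writes everything to the local directory, which is what keeps the whole index to a couple hundred megabytes on disk with no server process.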

    What Gets Indexed

    I index four categories of files:

    Skills (60+ files): Every SKILL.md file in my skills directory. These contain operational instructions for WordPress publishing, SEO optimization, content generation, site auditing, Notion logging, and more. When I ask “how do I connect to the luxury asset lender’s WordPress site?” the system retrieves the exact credentials and connection method from the wp-site-registry skill.

    Session transcripts (200+ files): Exported transcripts from Cowork sessions. These contain the full history of decisions, troubleshooting, and solutions. When I ask “what was the fix for the WinError 206 issue?” it retrieves the exact conversation where we diagnosed and solved that problem – publish one article per PowerShell call, never combine multiple article bodies in a single command.

    Project documentation (100+ files): Architecture documents, API documentation, configuration files, and project briefs. Technical reference material that I wrote once and need to recall later.

    Notion exports (50+ files): Periodic exports of key Notion databases – the task board, client records, content calendars, and operational notes. This bridges the gap between Notion (where I plan) and local files (where I execute).

    How the Chunking Strategy Matters

    The most underrated part of building a RAG system is chunking – how you split documents into pieces before embedding them. Get this wrong and your retrieval is useless regardless of how good your embedding model is.

    I tested three approaches:

    Fixed-size chunks (500 tokens): Simple but crude. Splits mid-sentence, mid-paragraph, sometimes mid-code-block. Retrieval accuracy was around 65% on my test queries – too many chunks lacked enough context to be useful.

    Paragraph-based chunks: Split on double newlines. Better for prose documents but terrible for skill files and code, where a single paragraph might be 2,000 tokens (too large) or 10 tokens (too small). Retrieval accuracy improved to about 72%.

    Semantic chunking with overlap: Split at ~500 tokens but respect sentence boundaries, and include 50 tokens of overlap between consecutive chunks. This means the end of chunk N appears at the beginning of chunk N+1, providing continuity. Additionally, each chunk gets prepended with the document title and the nearest H2 heading for context. Retrieval accuracy jumped to 89%.

    The overlap and heading prepend were the critical improvements. Without overlap, answers that span two chunks get lost. Without heading context, a chunk about “connection method” could be about any of 18 sites – the heading tells the model which site it’s about.
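A sketch of that chunking approach, using a naive regex sentence splitter (a real tokenizer and markdown parser would be more robust, and word counts stand in for token counts here):

```python
import re

def semantic_chunks(text: str, title: str,
                    max_words: int = 500, overlap_words: int = 50) -> list:
    """Sentence-respecting chunks with trailing overlap, each prefixed with
    the document title and the nearest H2 heading for retrieval context."""
    sentences, heading = [], title
    for line in text.splitlines():
        if line.startswith("## "):
            heading = line[3:].strip()   # track the nearest H2
        for sent in re.split(r"(?<=[.!?])\s+", line.strip()):
            if sent and not sent.startswith("## "):
                sentences.append((heading, sent))
    chunks, i = [], 0
    while i < len(sentences):
        # grow the chunk sentence by sentence until ~max_words
        words, j = 0, i
        while j < len(sentences) and words < max_words:
            words += len(sentences[j][1].split())
            j += 1
        body = " ".join(s for _, s in sentences[i:j])
        chunks.append(f"{title} > {sentences[i][0]}\n{body}")
        if j >= len(sentences):
            break
        # step back so roughly overlap_words of text repeats in the next chunk
        back, k = 0, j
        while k > i + 1 and back < overlap_words:
            k -= 1
            back += len(sentences[k][1].split())
        i = k
    return chunks
```

The `title > heading` prefix is what disambiguates otherwise identical passages, like a “connection method” section that exists in 18 different site files.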

    Real Queries I Run Daily

    This isn’t a science project. I use this system every day. Here are actual queries from the past week:

    “What are the credentials for the events platform’s WordPress site?” – Returns the exact username (will@engagesimply.com), app password, and the note that this site uses an email address as its username, not “Will.” Found in the wp-site-registry skill file.

    “How does the 247RS GCP publisher work?” – Returns the service URL, auth header format, and the explanation that SiteGround blocks all direct and proxy calls, requiring the dedicated Cloud Run publisher. Pulled from both the 247rs-site-operations skill and a session transcript where we built it.

    “What was the disk space issue on the knowledge cluster VM?” – Returns the session transcript passage about SSH dying because the 20GB boot disk filled to 98%, the startup script workaround, and the IAP tunneling backup method we configured afterward.

    “Which sites use Flywheel hosting?” – Returns the list: a flooring company, a live comedy streaming platform, and an events platform. Cross-referenced across multiple skill files and assembled by the retrieval system.

    Each query takes under 2 seconds – embedding the question (~50ms), vector search (~80ms), and displaying results with source file paths. No API call. No internet required. No data leaves my machine.
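Under the same assumptions as above (a local Ollama server at its default port and an already-populated ChromaDB collection passed in by the caller), the query side is short; `format_hits` is an invented helper name for the source-attribution step:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local endpoint

def embed(text: str, model: str = "nomic-embed-text") -> list:
    """Embed the question with the same model used at index time."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def format_hits(documents: list, metadatas: list, distances: list) -> str:
    """Render ChromaDB query results with file-path attribution."""
    lines = []
    for doc, meta, dist in zip(documents, metadatas, distances):
        lines.append(f"[{meta['path']}#chunk{meta['chunk']}] (distance {dist:.3f})")
        lines.append(doc.strip()[:200])
    return "\n".join(lines)

def ask(collection, question: str, k: int = 5) -> str:
    """Embed the question and pull the k nearest chunks from the collection."""
    res = collection.query(query_embeddings=[embed(question)], n_results=k)
    return format_hits(res["documents"][0], res["metadatas"][0], res["distances"][0])
```

Queries and documents must share one embedding model; mixing models puts them in different vector spaces and silently ruins retrieval.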

    Why Local Beats Cloud for This Use Case

    Security is absolute. These files contain API credentials, client information, business strategies, and operational playbooks. Uploading them to a cloud embedding service – even a reputable one – introduces a data handling surface I don’t need. Local means the data never leaves the machine. Period.

    Speed is consistent. Cloud API calls for embeddings add 200-500ms of latency per query, plus they’re subject to rate limits and service availability. Local embedding via Ollama is 50ms every time. When I’m mid-session and need an answer fast, consistent sub-second response matters.

    Cost is zero. OpenAI charges $0.0001 per 1K tokens for ada-002 embeddings. That sounds cheap until you’re re-indexing 468 files (roughly 2M tokens) every week – $0.20 per re-index, roughly $10/year. Trivial in isolation, but when every tool in my stack has a small recurring cost, they compound. Local eliminates the line item entirely.

    Availability is guaranteed. The system works on an airplane, in a coffee shop with no WiFi, during a cloud provider outage. My operational knowledge base is always accessible because it runs on the same machine I’m working on.

    Frequently Asked Questions

    Can this replace a full knowledge management system like Confluence or Notion?

    No – it complements them. Notion is where I create and organize information. The local vector system is where I retrieve it instantly. They serve different functions. Notion is the authoring environment; the vector database is the search layer. I export from Notion periodically and re-index to keep the retrieval system current.

    How often do you re-index the files?

    Weekly for a full re-index, which takes about 4 minutes for all 468 files. I also run incremental indexing – only re-embedding files modified since the last index – as part of my daily morning script. Incremental indexing typically processes 5-15 files and takes under 30 seconds.

    What hardware do you need to run this?

    Surprisingly modest. My Windows laptop has 16GB RAM and an Intel i7. The nomic-embed-text model uses about 600MB of RAM while running. ChromaDB adds another 200MB for the index. Total memory overhead: under 1GB. Any modern laptop from the last 3-4 years can handle this comfortably. No GPU required for embeddings – CPU performance is more than adequate.

    How does this compare to just using Ctrl+F or grep?

    Grep finds exact text matches. Vector search finds semantic matches. If I search for “SiteGround blocking” with grep, I find files that contain those exact words. If I search for “why can’t I connect to the restoration company’s site” with vector search, I find the explanation about SiteGround’s WAF blocking API calls – even though the passage might not contain the word “connect” or the site’s name explicitly. The difference is understanding context vs. matching strings.

    The Compound Effect

    Every file I create makes the system smarter. Every session transcript adds to the searchable history. Every skill I write becomes instantly retrievable. The vector database is a living index of accumulated operational knowledge – and it grows automatically as I work.

    Three months ago, the answer to “how did we solve X?” was “let me search through my files for 10 minutes.” Today, the answer takes 2 seconds. Multiply that time savings across 20-30 lookups per week, and the ROI is measured in hours reclaimed – hours that go back into building, not searching.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "I Indexed 468 Files Into a Local Vector Database. Now My Laptop Answers Questions About My Business.",
      "description": "Using Ollama’s nomic-embed-text model and ChromaDB, I built a local RAG system that indexes every skill file, session transcript, and project doc on my ma",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/i-indexed-468-files-into-a-local-vector-database-now-my-laptop-answers-questions-about-my-business/"
      }
    }