Category: Local AI & Automation

Building autonomous AI systems that run locally. Zero cloud cost, full data control, infinite scale.

  • AI Agents Explained: What They Are, Who’s Using Them, and Why Your Business Will Need One


    What Is an AI Agent? An AI agent is a software program powered by a large language model that can take actions — not just answer questions. It reads files, sends messages, runs code, browses the web, and completes multi-step tasks on its own, without a human directing every move.

    Most people’s mental model of AI is a chat interface. You type a question, you get an answer. That’s useful, but it’s also the least powerful version of what AI can do in a business context.

    The version that’s reshaping how companies operate isn’t a chatbot. It’s an agent — a system that can actually do things. And with Anthropic’s April 2026 launch of Claude Managed Agents, the barrier to deploying those systems for real business work dropped significantly.

    What Makes an Agent Different From a Chatbot

    A chatbot responds. An agent acts.

    When you ask a chatbot to summarize last quarter’s sales report, it tells you how to do it, or summarizes text you paste in. When you give the same task to an agent, it goes and gets the report, reads it, identifies the key numbers, formats a summary, and sends it to whoever asked — all without you supervising each step.

    The difference sounds subtle but has large practical implications. An agent can be assigned work the same way you’d assign work to a person. It can work on tasks in the background while you do other things. It can handle repetitive processes that would otherwise require sustained human attention.

    The examples from the Claude Managed Agents launch make this concrete:

    Asana built AI Teammates — agents that participate in project management workflows the same way a human team member would. They pick up tasks. They draft deliverables. They work within the project structure that already exists.

    Rakuten deployed agents across sales, marketing, HR, and finance that accept assignments through Slack and return completed work — spreadsheets, slide decks, reports — directly to the person who asked.

    Notion’s implementation lets knowledge workers generate presentations and build internal websites while engineers ship code, all with agents handling parallel tasks in the background.

    None of those are hypothetical. They’re production deployments that went live within a week of the platform becoming available.

    What Business Processes Are Actually Good Candidates for Agents

    Not every business task is suited for an AI agent. The best candidates share a few characteristics: they’re repetitive, they involve working with information across multiple sources, and they don’t require judgment calls that need human accountability.

    Strong candidates include research and summarization tasks that currently require someone to pull data from multiple places and compile it. Drafting and formatting work — proposals, reports, presentations — that follows a consistent structure. Monitoring tasks that require checking systems or data sources on a schedule and flagging anomalies. Customer-facing support workflows for common, well-defined questions. Data processing pipelines that transform information from one format to another on a recurring basis.

    Weak candidates include tasks that require relationship context, ethical judgment, or creative direction that isn’t already well-defined. Agents execute well-specified work; they don’t substitute for strategic thinking.

    Why the Timing of This Launch Matters for Small and Mid-Size Businesses

    Until recently, deploying a production AI agent required either a technical team capable of building significant custom infrastructure, or an enterprise software contract with a vendor that had built it for you. That meant AI agents were effectively inaccessible to businesses without large technology budgets or dedicated engineering resources.

    Anthropic’s managed platform changes that equation. The infrastructure layer — the part that required months of engineering work — is now provided. A small business or a non-technical operations team can define what they need an agent to do and deploy it without building a custom backend.

    The pricing reflects this broader accessibility: $0.08 per session-hour of active runtime, plus standard token costs. For agents handling moderate workloads — a few hours of active operation per day — the runtime cost is a small fraction of what equivalent human time would cost for the same work.
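    As a back-of-the-envelope check (a sketch of the arithmetic, not official pricing; token costs excluded and the three-hour workload is an assumption):

```python
# Hypothetical agent active ~3 hours/day at the $0.08/session-hour runtime rate above.
hours_per_day = 3
days_per_month = 30
rate_per_session_hour = 0.08  # USD, active runtime only; token costs billed separately

monthly_runtime_cost = hours_per_day * days_per_month * rate_per_session_hour
print(f"${monthly_runtime_cost:.2f}/month")  # → $7.20/month
```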

    What to Actually Do With This Information

    The most useful framing for any business owner or operations leader isn’t “what is an AI agent?” It’s “what work am I currently paying humans to do that is well-specified enough for an agent to handle?”

    Start with processes that meet these criteria: they happen on a regular schedule, they involve pulling information from defined sources, they produce a consistent output format, and they don’t require judgment calls that have significant consequences if wrong. Those are your first agent candidates.

    The companies that will have a structural advantage in two to three years aren’t the ones that understood AI earliest. They’re the ones that systematically identified which parts of their operations could be handled by agents — and deployed them while competitors were still treating AI as a productivity experiment.

    Frequently Asked Questions

    What is an AI agent in simple terms?

    An AI agent is a program that can take actions — not just answer questions. It can read files, send messages, browse the web, and complete multi-step tasks on its own, working in the background the same way you’d assign work to an employee.

    What’s the difference between an AI chatbot and an AI agent?

    A chatbot responds to questions. An agent executes tasks. A chatbot tells you how to summarize a report; an agent retrieves the report, summarizes it, and sends it to whoever needs it — without you directing each step.

    What kinds of business tasks are best suited for AI agents?

    Repetitive, well-defined tasks that involve pulling information from multiple sources and producing consistent outputs: research summaries, report drafting, data processing, support workflows, and monitoring tasks are strong candidates. Tasks requiring significant judgment, relationship context, or creative direction are weaker candidates.

    How much does it cost to deploy an AI agent for a small business?

    Using Claude Managed Agents, costs are standard Anthropic API token rates plus $0.08 per session-hour of active runtime. An agent running a few hours per day for routine tasks might cost a few dollars per month in runtime — a fraction of the equivalent human labor cost.



  • 387 Cowork Sessions and Counting: What Happens When AI Becomes Your Daily Operating Partner


    This Is Not a Chatbot Story

    When people hear I use AI every day, they picture someone typing questions into ChatGPT and getting answers. That’s not what this is. I’ve run 387 working sessions with Claude in Cowork mode since December 2025. Each session is a full operating environment – a Linux VM with file access, tool execution, API connections, and persistent memory across sessions.

    These aren’t conversations. They’re deployments. Content publishes. Infrastructure builds. SEO audits across 18 WordPress sites. Notion database updates. Email monitors. Scheduled tasks. Real operational work that used to require a team of specialists.

    The number 387 isn’t bragging. It’s data. And what that data reveals about how AI actually integrates into daily business operations is more interesting than any demo or product launch.

    What a Typical Session Actually Looks Like

    A session starts when I open Cowork mode and describe what I need done. Not a vague prompt – a specific operational task. “Run the content intelligence audit on a storm protection company.com and generate 15 draft articles.” “Check all 18 WordPress sites for posts missing featured images and generate them using Vertex AI.” “Read my Gmail for VIP messages from the last 6 hours and summarize what needs attention.”

    Claude loads into a sandboxed Linux environment with access to my workspace folder, my installed skills (I have 60+), my MCP server connections (Notion, Gmail, Google Calendar, Metricool, Figma, and more), and a full bash/Python execution layer. It reads my CLAUDE.md file – a persistent memory document that carries context across sessions – and gets to work.

    A single session might involve 50-200 tool calls. Reading files, executing scripts, making API calls, writing content, publishing to WordPress, logging results to Notion. The average session runs 15-45 minutes of active work. Some complex ones – like a full site optimization pass – run over two hours.

    The Skill Layer Changed Everything

    Early sessions were inefficient. I’d explain the same process every time – how to connect to WordPress via the proxy, what format to use for articles, which Notion database to log results in. Repetitive context-setting that ate 30% of every session.

    Then I started building skills. A skill is a structured instruction file (SKILL.md) that Claude reads at the start of a session when the task matches its trigger conditions. I now have skills for WordPress publishing, SEO optimization, content generation, Notion logging, YouTube watch page creation, social media scheduling, site auditing, and dozens more.

    The impact was immediate. A task that took 20 minutes of back-and-forth setup now triggers in one sentence. “Run the wp-intelligence-audit on a luxury asset lender.com” – Claude reads the skill, loads the credentials from the site registry, connects via the proxy, pulls all posts, analyzes gaps, and generates a full report. No explanation needed. The skill contains everything.

    Building skills is the highest-leverage activity I’ve found in AI-assisted work. Every hour spent writing a skill saves 10+ hours across future sessions. At 387 sessions, the compound return is staggering.

    What 387 Sessions Taught Me About AI Workflow

    Specificity beats intelligence. The most productive sessions aren’t the ones where Claude is “smartest.” They’re the ones where I give the most specific instructions. “Optimize this post for SEO” produces mediocre results. “Run wp-seo-refresh on post 247 at a luxury asset lender.com, ensure the focus keyword is ‘luxury asset lending,’ update the meta description to 140-160 characters, and add internal links to posts 312 and 418” produces excellent results. AI amplifies clarity.

    Persistent memory is the unlock. CLAUDE.md – a markdown file that persists across sessions – is the most important file in my entire system. It contains my preferences, operational rules, business context, and standing instructions. Without it, every session starts from zero. With it, session 387 has the accumulated context of all 386 before it. This is the difference between using AI as a tool and using AI as a partner.

    Batch operations reveal true ROI. Publishing one article? AI saves maybe 30 minutes. Publishing 15 articles across 3 sites with full SEO/AEO/GEO optimization, taxonomy assignment, internal linking, and Notion logging? AI saves 15+ hours. The value curve is exponential with batch size. I now default to batch operations for everything – content, audits, meta updates, image generation.

    Failures are cheap and informative. At least 40 of my 387 sessions hit significant errors – API timeouts, disk space issues, credential failures, rate limiting. Each failure taught me something that made the system more resilient. The SSH workaround. The WP proxy to avoid IP blocking. The WinError 206 fix for long PowerShell commands. Failure at high volume is the fastest path to robust systems.

    The Numbers Behind 387 Sessions

    I tracked the data because the data tells the real story:

    Content produced: Approximately 400+ articles published across 18 WordPress sites. Each article is 1,200-1,800 words, SEO-optimized, AEO-formatted with FAQ sections, and GEO-ready with entity optimization. At market rates for this quality of content, that’s roughly ,000-,000 worth of content production.

    Sites managed: 18 WordPress properties across multiple industries – restoration, luxury lending, cold storage, interior design, comedy, training, technology. Each site gets regular content, SEO audits, taxonomy fixes, schema injection, and internal linking.

    Automations built: 7 autonomous AI agents (the droid fleet), 60+ skills, 3 scheduled tasks, a GCP Compute Engine cluster running 5 WordPress sites, a Cloud Run proxy for WordPress API routing, and a Vertex AI chatbot deployment.

    Time investment: Approximately 200 hours of active session time over three months. For context, a single full-time employee working those same 200 hours could not have produced a fraction of this output, because the bottleneck isn’t thinking time – it’s execution speed. Claude executes API calls, writes code, publishes content, and processes data at machine speed. I provide direction at human speed. The combination is multiplicative.

    Why Most People Won’t Do This

    The honest answer: it requires upfront investment that most people aren’t willing to make. Building the skill library took weeks. Configuring the MCP connections, setting up the proxy, provisioning the GCP infrastructure, writing the CLAUDE.md context file – that’s real work before you see any return.

    Most people want AI to be plug-and-play. Type a question, get an answer. And for simple tasks, it is. But for operational AI – AI that runs your business processes daily – the setup cost is significant and the learning curve is real.

    The payoff, though, is not incremental. It’s categorical. I’m not 10% more productive than I was before Cowork mode. I’m operating at a fundamentally different scale. Tasks that would require hiring 3-4 specialists – content writer, SEO analyst, site admin, automation engineer – are handled in daily sessions by one person with a well-configured AI partner.

    That’s not a productivity hack. That’s a structural advantage.

    Frequently Asked Questions

    What is Cowork mode and how is it different from regular Claude?

    Cowork mode is a feature of Claude’s desktop app that gives Claude access to a sandboxed Linux VM, file system, bash execution, and MCP server connections. Regular Claude is a chat interface. Cowork mode is an operating environment where Claude can read files, run code, make API calls, and produce deliverables – not just text responses.

    How much does running 387 sessions cost?

    Cowork mode is included in the Claude Pro subscription at /month. The MCP connections (Notion, Gmail, etc.) use free API tiers. The GCP infrastructure runs about /month. Total cost for three months of operations: approximately . The value produced is orders of magnitude higher.

    Can someone replicate this without technical skills?

    Partially. The basic Cowork mode works out of the box for content creation, research, and file management. The advanced setup – custom skills, GCP infrastructure, API integrations – requires comfort with command-line tools, APIs, and basic scripting. The barrier is falling fast as skills become shareable and MCP servers become plug-and-play.

    What’s the most impactful single skill you’ve built?

    The wp-site-registry skill – a single file containing credentials and connection methods for all 18 WordPress sites. Before this skill existed, every session required manually providing credentials. After it, any wp- skill can connect to any site automatically. It turned 18 separate workflows into one unified system.

    What Comes Next

    Session 387 is not a milestone. It’s a Tuesday. The system compounds. Every skill I build makes future sessions faster. Every failure I fix makes the system more resilient. Every batch I run produces data that informs the next batch.

    The question I get most often is “where do you start?” The answer is boring: start with one task you do repeatedly. Build one skill for it. Run it 10 times. Then build another. By session 50, you’ll have a system. By session 200, you’ll have an operating partner. By session 387, you’ll wonder how you ever worked without one.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "387 Cowork Sessions and Counting: What Happens When AI Becomes Your Daily Operating Partner",
      "description": "I’ve run 387 Cowork sessions with Claude in three months. Not chatbot conversations – full working sessions that build skills, publish content, mana",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/387-cowork-sessions-and-counting-what-happens-when-ai-becomes-your-daily-operating-partner/"
      }
    }

  • The SEO Drift Detector: How I Built an Agent That Watches 18 Sites for Ranking Decay


    Rankings Don’t Crash – They Drift

    Nobody wakes up to a sudden SEO catastrophe. What actually happens is slower and more insidious. A page that ranked #4 for its target keyword three months ago is now #9. Another page that owned a featured snippet quietly lost it. A cluster of posts that drove 40% of a site’s organic traffic has collectively slipped 3-5 positions across 12 keywords.

    By the time you notice, the damage is done. Traffic is down 25%. Leads have thinned. And the fix – refreshing content, rebuilding authority, reclaiming positions – takes weeks. The problem with SEO drift isn’t that it’s hard to fix. It’s that it’s hard to see.

    I manage 18 WordPress sites across industries ranging from luxury lending to restoration services to cold storage logistics. Manually checking keyword rankings across all of them? Impossible. Waiting for Google Search Console to show a decline? Too late. So I built SD-06 – the SEO Drift Detector – an autonomous agent that monitors keyword positions daily, calculates drift velocity, and flags pages that need attention before the traffic impact hits.

    How SD-06 Works Under the Hood

    The architecture connects three systems: DataForSEO for ranking data, a local SQLite database for historical tracking, and Slack for alerts.

    Every morning at 6 AM, SD-06 runs a scheduled Python script that pulls current ranking positions for tracked keywords across all 18 sites. DataForSEO’s SERP API returns the current Google position for each keyword-URL pair. The script stores these daily snapshots in a SQLite database – one row per keyword per day, with fields for position, URL, SERP features present (featured snippet, People Also Ask, local pack), and the date.
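    A minimal sketch of that snapshot store (the table and column names are my assumptions, not SD-06's actual schema, and an in-memory database stands in for the on-disk file):

```python
import sqlite3
from datetime import date

# SD-06 uses a file-based database; ":memory:" keeps this sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS snapshots (
        keyword       TEXT NOT NULL,
        url           TEXT NOT NULL,
        position      INTEGER,            -- current Google position for the keyword-URL pair
        serp_features TEXT,               -- e.g. 'featured_snippet,paa,local_pack'
        check_date    TEXT NOT NULL,      -- ISO date of the 6 AM run
        PRIMARY KEY (keyword, check_date) -- one row per keyword per day
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO snapshots VALUES (?, ?, ?, ?, ?)",
    ("luxury asset lending", "https://example.com/guide", 5,
     "featured_snippet", date.today().isoformat()),
)
conn.commit()
```

    The composite primary key makes the daily run idempotent: re-running the script on the same day overwrites that day's row instead of duplicating it.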

    With 30+ days of historical data, the agent calculates three metrics for each tracked keyword:

    Position delta (7-day): The difference between today’s position and the position 7 days ago. A keyword that moved from #5 to #8 has a delta of -3. Simple, fast, catches sudden drops.

    Drift velocity (30-day): The average daily position change over the last 30 days. This is the metric that catches slow decay. A keyword losing 0.1 positions per day doesn’t trigger any single-day alarm, but over 30 days that’s a 3-position drop. SD-06 calculates this as a rolling regression slope and flags any keyword whose drift velocity is more negative than -0.05 positions per day.

    Feature loss: Did this URL have a featured snippet, PAA box, or other SERP feature last week that it no longer holds? Feature loss often precedes position loss – it’s an early warning signal that content freshness or authority is slipping.
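    The velocity metric is just the slope of an ordinary least-squares fit of position against day. A stdlib-only sketch (the article's script uses scipy for the regression; the closed-form slope below is equivalent), with the sign flipped so that, as above, negative means losing ground:

```python
def drift_velocity(positions):
    """positions: daily Google positions for one keyword, oldest first.
    Returns average daily position change; negative = drifting worse,
    matching the article's convention (#5 -> #8 over a week is -3)."""
    n = len(positions)
    mean_x = (n - 1) / 2
    mean_y = sum(positions) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(positions))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var  # positive slope = position number rising = ranking worse
    return -slope

DRIFT_THRESHOLD = -0.05  # positions per day, per the rule above

# Slow decay: ~0.1 positions/day never trips a single-day alarm,
# but the 30-day slope flags it immediately.
history = [5 + 0.1 * d for d in range(30)]
assert drift_velocity(history) < DRIFT_THRESHOLD
```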

    The Alert System That Changed My Workflow

    SD-06 sends three types of Slack alerts:

    Red alert (immediate attention): Any keyword that dropped 5+ positions in 7 days, or any URL that lost a featured snippet it held for 14+ consecutive days. These are rare but critical – usually indicating a technical issue, a Google algorithm update, or a competitor publishing a significantly better page.

    Yellow alert (weekly review): Keywords with negative drift velocity exceeding the threshold but no single dramatic drop. These are bundled into a weekly digest every Monday morning. The digest includes the keyword, current position, 30-day trend direction, the affected URL, and a recommended action (refresh content, add internal links, update statistics, or expand the article).

    Green report (monthly summary): A full portfolio health report showing total tracked keywords, percentage drifting negative vs. positive, top gainers, top losers, and overall portfolio trajectory. This is the report I share with clients to show proactive SEO management.

    The critical insight was making the recommended action part of every alert. An alert that says “keyword X dropped 3 positions” is information. An alert that says “keyword X dropped 3 positions – recommend refreshing the statistics section and adding 2 internal links from recent posts” is a task I can execute immediately. SD-06 generates these recommendations using simple rules based on what type of drift it detects.
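    The tiering logic is small enough to show in full. A sketch using the thresholds above (the recommendation strings are illustrative, not SD-06's exact rule table):

```python
def classify(delta_7d, velocity_30d, lost_held_snippet):
    """Map drift metrics to the red/yellow/green tiers described above.
    lost_held_snippet: True if the URL lost a featured snippet it had
    held for 14+ consecutive days."""
    if delta_7d <= -5 or lost_held_snippet:
        return ("red", "Check for a technical issue, an algorithm update, "
                       "or a stronger competing page.")
    if velocity_30d < -0.05:
        return ("yellow", "Refresh stale sections and add 2+ internal links "
                          "from recent posts.")
    return ("green", "No action; roll into the monthly portfolio summary.")
```

    Returning the recommended action alongside the tier is what turns the alert from information into a task, per the point above.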

    What 90 Days of Drift Data Revealed

    After running SD-06 for three months across all 18 sites, the data patterns were illuminating.

    Content age is the #1 drift predictor. Posts older than 18 months drift negative at 3x the rate of posts under 12 months old. This isn’t surprising – Google rewards freshness – but the magnitude was larger than expected. It means my content refresh cadence needs to target any post approaching the 18-month mark, not waiting for visible ranking loss.

    Internal linking density correlates with drift resistance. Pages with 5+ inbound internal links from other site content drifted negative 60% less frequently than pages with 0-2 internal links. Orphan pages – content with zero inbound internal links – were the fastest to lose rankings. This validated my investment in the wp-interlink skill that systematically adds internal links across every site.

    Featured snippet loss is a 2-week leading indicator. When a page loses a featured snippet, it loses 2-5 organic positions within the following 14 days approximately 70% of the time. This made featured snippet monitoring the most valuable early warning signal in the entire system. When SD-06 detects snippet loss, I now have a 2-week window to refresh the content before the position drop fully materializes.

    Competitor content publishing causes measurable drift. Several drift events correlated with competitors publishing fresh content targeting the same keywords. Without SD-06, I would have discovered this weeks later through traffic decline. With it, I can see the drift starting within 3-5 days of the competitor publish and respond immediately.

    The Technical Stack

    DataForSEO API for SERP position tracking. The SERP API costs approximately .002 per keyword check. Tracking 200 keywords daily across 18 sites runs about /month – trivial compared to the SEO tools that charge +/month for similar monitoring.

    SQLite for historical data storage. Lightweight, zero-configuration, file-based database that lives on the local machine. After 90 days of daily tracking across 200 keywords, the database file is under 50MB. No server, no cloud database, no monthly cost.

    Python 3.11 with pandas for data analysis, scipy for regression calculations, and the requests library for API calls. The entire script is under 400 lines.

    Slack Incoming Webhook for alerts, same pattern as the VIP Email Monitor. One webhook URL, formatted JSON payloads, zero infrastructure.

    Windows Task Scheduler triggers the script at 6 AM daily. Could also run as a cron job on Linux or a Cloud Run scheduled task on GCP.
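    The webhook piece really is that small. A stdlib-only sketch (the webhook URL and message format are placeholders, not SD-06's actual payloads):

```python
import json
import urllib.request

def build_payload(level, keyword, position, action):
    """Format one alert; the marker matches the red/yellow/green tiers."""
    marker = {"red": ":red_circle:",
              "yellow": ":large_yellow_circle:",
              "green": ":large_green_circle:"}[level]
    return {"text": f"{marker} {keyword} now at #{position} - {action}"}

def send_alert(webhook_url, payload):
    """POST to a Slack Incoming Webhook: one URL, one JSON body, no infrastructure."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # Slack replies with plain "ok" on success
```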

    Why I Didn’t Just Use Ahrefs or SEMrush

    I’ve used both. They’re excellent tools. But they have three limitations for my use case.

    First, cost at scale. Monitoring 18 sites with 200+ keywords each on Ahrefs would cost +/month. SD-06 costs /month in API calls.

    Second, custom alert logic. Ahrefs and SEMrush send generic position change alerts. They don’t calculate drift velocity, predict future position loss based on trajectory, or generate content-specific refresh recommendations. SD-06’s alert intelligence is tailored to how I actually work.

    Third, integration with my existing workflow. SD-06 pushes alerts to the same Slack channel where all my other agents report. It writes recommendations that align with my wp-seo-refresh and wp-content-expand skills. The data flows directly into my operational system rather than living in a separate dashboard I have to remember to check.

    Frequently Asked Questions

    How many keywords should you track per site?

    Start with 10-15 per site – your highest-traffic pages and their primary keywords. Expand to 20-30 after the first month once you understand which keywords actually drive business results. Tracking 100+ keywords per site creates noise without proportional signal. Focus on the keywords that drive revenue, not vanity metrics.

    Can drift detection work without DataForSEO?

    Yes, but with less precision. Google Search Console provides position data with a 2-3 day delay and averages positions over date ranges rather than giving exact daily snapshots. You can build a simpler version using the Search Console API, but the drift velocity calculations will be less granular. DataForSEO provides same-day position data at the individual keyword level.

    How quickly can you reverse SEO drift once detected?

    For content-based drift (stale statistics, outdated information, thin sections), a content refresh typically recovers positions within 2-4 weeks after Google recrawls. For authority-based drift (competitors building more backlinks), recovery takes longer – 4-8 weeks – and requires both content improvement and internal linking reinforcement.

    Does this work for local SEO keywords?

    Absolutely. DataForSEO supports location-specific SERP checks, so you can track “water damage restoration Houston” at the Houston geo-target level. Several of my sites are local service businesses, and the drift patterns for local keywords follow the same trajectory math – they just tend to be more volatile due to local pack algorithm updates.

    The Principle Behind the Agent

    SD-06 exists because of a simple belief: the best time to fix SEO is before it breaks. Reactive SEO – waiting for traffic to drop, then scrambling to diagnose and fix – is expensive, stressful, and often too late. Proactive SEO – monitoring drift in real time and refreshing content before positions collapse – costs almost nothing and preserves the compounding value of content that’s already ranking.

    Every piece of content on a website is a depreciating asset. It starts strong, holds for a while, then slowly loses value as competitors publish newer content and search algorithms reward freshness. SD-06 doesn’t stop depreciation. It tells me exactly which assets need maintenance, exactly when they need it, and exactly what the maintenance should look like. That’s not magic. That’s operations.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The SEO Drift Detector: How I Built an Agent That Watches 18 Sites for Ranking Decay",
      "description": "Rankings don’t crash overnight – they drift. I built SD-06, an autonomous agent that monitors keyword positions across 18 WordPress sites using Data",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-seo-drift-detector-how-i-built-an-agent-that-watches-18-sites-for-ranking-decay/"
      }
    }

  • MP-04: The Agent That Turns Every Meeting Into Action Items Before I Close the Tab


    Meetings Produce Information. Most of It Evaporates.

    I sat in a client call last month where we agreed on three specific deliverables, a revised timeline, and a budget adjustment. Everyone nodded. Everyone agreed. Three days later, nobody could remember the exact numbers or who owned what. I had to dig through a transcript to reconstruct the meeting.

    This happens constantly. Meetings generate decisions, action items, and commitments at a rate that exceeds human note-taking capacity. Even when someone takes notes, the notes are incomplete, biased toward what the note-taker found interesting, and almost never get distributed in an actionable format. The transcript exists – most meetings are recorded now – but a 45-minute transcript is a 6,000-word wall of text that nobody will read.

    MP-04 solves this. It’s the fourth agent in my autonomous fleet, and its job is simple: take any meeting transcript, extract everything actionable, and route it to the right systems before the meeting fades from memory.

    What MP-04 Extracts

    The agent processes meeting transcripts through Ollama’s Llama 3.2 model with a structured extraction prompt. It pulls five categories of information:

    Action items: Anything that someone committed to doing. “I’ll send the proposal by Friday” becomes an action item assigned to the speaker with a Friday deadline. “We need to update the website copy” becomes an action item with no assignee – flagged for me to assign. The model distinguishes between firm commitments (someone said “I will”) and vague suggestions (“we should probably”) and tags them accordingly.

    Decisions: Any point where the group reached agreement. “Let’s go with Option B” is a decision. “The budget is ,000” is a decision. These get logged as immutable records – what was decided, when, and by whom. Decisions are critical for accountability. When someone later says “we never agreed to that,” the decision log settles it.

    Client mentions: Names of clients, companies, or projects discussed. Each mention gets cross-referenced against my client database to attach the meeting context to the right client record. If a client was discussed in three meetings this month, their record shows all three with relevant excerpts.

    Deadlines and dates: Any temporal commitment. “The launch is March 15th.” “We need this by end of quarter.” “Let’s review next Tuesday.” These get extracted with enough context to create calendar-ready events or task due dates.

    Open questions: Things raised but not resolved. “What’s the pricing for the enterprise tier?” with no answer in the transcript becomes an open question flagged for follow-up. These are the items that silently disappear after meetings if nobody tracks them.

    The Routing Layer

    Extraction is useful. Routing is what makes it operational.

    After extracting the five categories, MP-04 routes each item to the appropriate system:

    Action items become Notion tasks in my Tasks Database. Each task is pre-populated with the company (inferred from client mentions), priority (inferred from deadline proximity and language urgency), source (the meeting date and title), and a link back to the full transcript. I don’t create these tasks manually. They appear in my task board, ready to be triaged in my next planning session.

    Decisions get logged to the Knowledge Database in Notion. This creates a searchable decision history. Three months from now, when I need to recall what was agreed about the Q2 content strategy, I search the decisions log instead of scrubbing through transcripts.

    Client mentions update the Client Database with a meeting note. The note includes a 2-3 sentence summary of what was discussed about that client, automatically generated from the relevant transcript sections.

    Deadlines get posted to Slack with a reminder. If the deadline is within 7 days, it goes to my priority channel. If it’s further out, it goes to the weekly planning channel.

    Open questions become follow-up tasks in Notion, tagged with a “needs-answer” status that keeps them visible until resolved.
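
    The routing rules above reduce to a small dispatch function. A hedged sketch: the destination and field names are invented for illustration; only the 7-day deadline rule and the category-to-system mapping come from the description above.

```python
from datetime import date

def route_deadline(deadline: date, today: date) -> str:
    """Deadlines within 7 days go to the priority channel;
    anything further out goes to weekly planning."""
    return "#priority" if (deadline - today).days <= 7 else "#weekly-planning"

def route_item(category: str, item: dict, today: date):
    """Map one extracted item to its destination system."""
    if category == "action_items":
        return ("notion:tasks", {**item, "status": "triage"})
    if category == "decisions":
        return ("notion:knowledge", item)        # immutable decision log
    if category == "client_mentions":
        return ("notion:clients", item)          # meeting note on client record
    if category == "deadlines":
        return (f"slack:{route_deadline(item['date'], today)}", item)
    if category == "open_questions":
        return ("notion:tasks", {**item, "status": "needs-answer"})
    raise ValueError(f"unknown category: {category}")
```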

    The Technical Reality

    MP-04 runs locally on my Windows machine. The input is a text transcript – either pasted directly or loaded from a file. Most meeting platforms (Zoom, Google Meet, Teams) now generate transcripts automatically, so the input is free.

    The Ollama call uses a detailed system prompt that defines the extraction schema with examples. The prompt is about 800 tokens of instructions that tell the model exactly how to format each extracted item – as JSON objects with specific fields for each category. This structured output means the routing script can parse the results programmatically without any ambiguity.

    Processing time for a 45-minute meeting transcript (approximately 6,000 words): about 15 seconds on Llama 3.2 3B running locally. The Notion API calls to create tasks, update client records, and log decisions add another 5-10 seconds. Total time from transcript to fully routed outputs: under 30 seconds.

    Compare that to the manual process: read the transcript (15 minutes), identify action items (10 minutes), create tasks in Notion (5 minutes), update client records (5 minutes), set reminders for deadlines (5 minutes). That’s 40 minutes of administrative work per meeting, reduced to 30 seconds.

    The Client Name Guardrail Problem

    One unexpected challenge: client names in transcripts are messy. People use first names, company names, project codenames, and abbreviations interchangeably. “The Beverly project,” the company’s official name, and “Sarah’s account” might all refer to the same client.

    I built a name resolution layer that maps common references to canonical client records. It’s a JSON lookup table: a project codename like “Beverly” maps to the client’s canonical company record, a first name like “Sarah” maps to the full contact record at that company, and common abbreviations map to the full company name. The table has about 150 entries covering all active clients and common reference patterns.

    When the extraction model identifies a client mention, the name resolver checks it against this table before routing. If there’s no match, it flags the mention as “unresolved client reference” for manual review rather than creating a misattributed record. The guardrail prevents the worst outcome – action items attached to the wrong client – at the cost of occasionally requiring a 10-second manual resolution.
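
    A minimal version of the name resolution layer, assuming a flat alias table keyed by lowercase reference. The IDs and entries here are placeholders; the real table has roughly 150 entries.

```python
CLIENT_ALIASES = {
    # Illustrative entries only - not real client data.
    "beverly": "client-001",
    "sarah": "client-001",
    "cold storage": "client-002",
}

def resolve_client(mention: str):
    """Map a messy transcript mention to a canonical client ID.
    Returns None to flag an 'unresolved client reference' for manual
    review instead of risking a misattributed record."""
    key = mention.lower().strip()
    if key in CLIENT_ALIASES:
        return CLIENT_ALIASES[key]
    # Fall back to substring matching for phrases like "the Beverly project".
    for alias, client_id in CLIENT_ALIASES.items():
        if alias in key:
            return client_id
    return None
```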

    What Changed After 60 Days of Running MP-04

    The obvious win: I stopped losing action items. In the 60 days before MP-04, I estimate that about 20% of meeting commitments fell through the cracks – not from negligence, but from the gap between hearing a commitment and recording it in a system. In the 60 days after, that dropped to under 3% – the remainder being items the model misclassifies or that I manually deprioritize.

    The less obvious win: meeting quality improved. When you know every commitment will be automatically extracted and tracked, you’re more careful about what you commit to. Meetings became more precise. Fewer vague “we should probably” statements, more specific “I will deliver X by Y.” The agent didn’t just capture accountability – it created it.

    The unexpected win: the decision log became a strategic asset. Having a searchable history of every decision across every client turned out to be invaluable for quarterly reviews, contract renewals, and scope discussions. “Based on the decisions log, we’ve expanded scope three times without adjusting the retainer” is a powerful conversation to have with data behind it.

    Frequently Asked Questions

    What meeting platforms does MP-04 work with?

    Any platform that produces a text transcript. Zoom, Google Meet, Microsoft Teams, Otter.ai, and Fireflies all export transcripts. MP-04 doesn’t integrate with these platforms directly – it processes the transcript file. This keeps it platform-agnostic and avoids the complexity of OAuth integrations with every meeting tool.

    How accurate is the action item extraction?

    On my test set of 40 meeting transcripts, the model correctly identified 91% of action items I had manually tagged. The 9% it missed were typically very implicit commitments – things like “I’ll take care of that” without specifying what “that” refers to. It also occasionally generates false positives from hypothetical statements – “if we were to do X, we would need Y” getting tagged as a commitment. The false positive rate is about 5%, easily caught in the triage step.

    Can this work for meetings I didn’t attend?

    Yes – and that’s one of the most useful applications. Team members can drop a transcript into the processing queue and I get a structured summary with action items without having attended the meeting. This is especially valuable for the meetings I delegate but still need to track outcomes from.

    What about sensitive meeting content?

    Everything runs locally. The transcript is processed by Ollama on my machine, routed to my private Notion workspace, and posted to my private Slack channels. No third-party service sees the meeting content. This is critical for client meetings that discuss financials, legal issues, or strategic plans.

    The Agent Philosophy

    MP-04 embodies the principle that runs through my entire agent fleet: don’t automate decisions – automate the administrative overhead around decisions. The agent doesn’t decide what to prioritize or how to respond to a client request. It extracts the raw information, structures it, and routes it to where I can make those decisions quickly and with full context. The human judgment stays human. The administrative busywork disappears.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "MP-04: The Agent That Turns Every Meeting Into Action Items Before I Close the Tab",
      "description": "MP-04 processes meeting transcripts automatically – extracting action items, decisions, client mentions, and deadlines, then routing them to the right Not",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/mp-04-the-agent-that-turns-every-meeting-into-action-items-before-i-close-the-tab/"
      }
    }

  • The Client Name Guardrail: What Happens When AI Publishes Too Fast for Human Review

    The Client Name Guardrail: What Happens When AI Publishes Too Fast for Human Review

    The Mistake That Created the Rule

    I published 12 articles to the agency blog in a single session. World-class content. Properly optimized. Well-structured. And scattered throughout them were real client names – actual companies we serve, mentioned by name in case studies, examples, and operational descriptions.

    This was not malicious. It was the natural output of an AI that had access to my full operational context – including which companies I work with, what industries they are in, and what we have built for them. When I asked for content drawn from real work, the AI delivered exactly that. Including the parts that should have stayed confidential.

    I caught it during review. Every article was scrubbed clean within the hour. But the incident exposed a fundamental gap in AI-assisted content publishing: when AI can publish at machine speed, human review becomes the bottleneck – and bottlenecks get skipped.

    So I built the client name guardrail. A systematic prevention layer that catches confidential references before they reach a publish command, no matter how fast the content is being produced.

    The Protected Entity List

    The foundation is a maintained list of every client, company, and entity name that must never appear in published content without explicit approval. The list currently contains 20+ entries covering all active clients across every business entity.

    But names are not simple strings. People reference the same company in multiple ways. “The restoration client in Colorado” is fine. The company’s actual name is not. “Our luxury lending partner” is fine. The partner’s legal name is not. The entity list includes not just official company names but common abbreviations, nicknames, and partial references that could identify a client.

    The Genericization Table

    Simply blocking client names would break the content. If the AI cannot reference specific work, the articles become generic and lose the authenticity that makes them valuable. The solution is a genericization table – a mapping of specific references to anonymous equivalents that preserve the insight without revealing the identity.

    The cold storage client’s actual name becomes “our cold storage client.” The lender’s name becomes “a luxury lending partner.” The restoration company’s name becomes “a restoration company in the Mountain West.” Each mapping is specific enough to be useful but generic enough to protect confidentiality.

    The AI applies these substitutions automatically during content generation. It still draws from real operational experience. It still provides specific, authentic examples. But the identifying details are replaced before the content is written, not after.
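
    The genericization table reduces to a substitution pass over the draft. A sketch, with obviously fake client names standing in for the real entries:

```python
import re

GENERICIZATION = {
    # Placeholder names on the left; safe generic equivalents on the right.
    "ExampleCo Cold Storage": "our cold storage client",
    "ExampleCo Lending": "a luxury lending partner",
    "ExampleCo Restoration": "a restoration company in the Mountain West",
}

def genericize(text: str) -> str:
    """Replace protected names with their anonymous equivalents.
    Longest names first, so partial names never shadow full ones."""
    for name in sorted(GENERICIZATION, key=len, reverse=True):
        text = re.sub(re.escape(name), GENERICIZATION[name], text,
                      flags=re.IGNORECASE)
    return text
```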

    The Pre-Publish Scan

    The final layer is a regex-based scan that runs against every piece of content before a publish API call is made. The scan checks the title, body content, excerpt, and slug against the full protected entity list. If any match is found, the publish is blocked and the specific matches are surfaced for review.

    This scan catches edge cases the genericization table misses – a client name that slipped through in a quote, a URL that contains a company domain, or a reference the AI constructed from context rather than the entity list. The scan is the safety net that ensures nothing gets through even when the primary prevention layer fails.
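
    The pre-publish scan can be as simple as a word-boundary regex sweep over the fields named above. A sketch with placeholder entity names; the actual WordPress REST API publish call is elided.

```python
import re

PROTECTED = ["ExampleCo Cold Storage", "ExampleCo Lending"]  # placeholders

def scan_before_publish(post: dict) -> list:
    """Check title, body, excerpt, and slug against the protected list.
    Returns the matches found; an empty list means safe to publish."""
    blob = " ".join(post.get(k, "") for k in ("title", "content", "excerpt", "slug"))
    hits = []
    for name in PROTECTED:
        # Word-boundary match so short abbreviations don't fire inside words.
        if re.search(rf"\b{re.escape(name)}\b", blob, flags=re.IGNORECASE):
            hits.append(name)
    return hits

def publish(post: dict):
    """Block the publish and surface matches for review if the scan fires."""
    hits = scan_before_publish(post)
    if hits:
        raise RuntimeError(f"publish blocked, protected names found: {hits}")
    # ...only here would the publish API call be made
```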

    Why This Matters Beyond My Situation

    Every agency, consultancy, and service provider using AI for content creation faces this risk. AI models are trained to be helpful and specific. When given access to client context, they will use that context to produce better content. That is exactly what you want – until the specificity includes information your clients did not consent to having published.

    The risk scales with capability. A basic AI tool that generates generic blog posts will never mention your clients because it does not know about them. An AI system deeply integrated with your operations – reading your Notion databases, processing your email, accessing your WordPress sites – knows everything about your client relationships. That integration is what makes it powerful. It is also what makes it dangerous without guardrails.

    The pattern I built is transferable to any agency: maintain a protected entity list, build a genericization mapping, and scan before publishing. The implementation takes about 2 hours. The alternative – publishing client names and discovering it after the content is indexed by Google – takes much longer to fix and costs trust that cannot be rebuilt with a quick edit.

    Frequently Asked Questions

    Does the guardrail slow down content production?

    Negligibly. The genericization happens during content generation, adding zero time to the process. The pre-publish scan takes under 2 seconds per article. In a 15-article batch, that is 30 seconds of total overhead.

    What about client names in internal documents vs. published content?

    The guardrail only activates on publish workflows. Internal documents, Notion entries, and operational notes use real client names because they are not public-facing. The skill triggers specifically when content is being sent to a WordPress REST API endpoint or any other publishing channel.

    Can clients opt in to being named?

    Yes. The protected entity list supports an override flag. If a client explicitly approves being referenced by name – for a case study, testimonial, or co-marketing piece – their entry can be temporarily unflagged. The default is always protected. Opt-in is explicit.

    Has the guardrail caught anything since the initial incident?

    Yes – three times in the first week. All were subtle references the AI constructed from context rather than direct mentions. One was a geographic description specific enough to identify a client’s location. The scan caught it. Without the guardrail, all three would have been published.

    Speed Needs Guardrails

    The ability to publish 15 articles in a single session is a superpower. But superpowers without controls are liabilities. The client name guardrail is not about slowing down. It is about publishing at machine speed with human-grade judgment on confidentiality. The AI produces the content. The guardrail produces the trust.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The Client Name Guardrail: What Happens When AI Publishes Too Fast for Human Review",
      "description": "After publishing 12 articles that accidentally contained real client names, I built a guardrail system with a protected entity list, genericization table, and p",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-client-name-guardrail-what-happens-when-ai-publishes-too-fast-for-human-review/"
      }
    }

  • SM-01: How One Agent Monitors 23 Websites Every Hour Without Me

    SM-01: How One Agent Monitors 23 Websites Every Hour Without Me

    The Worst Way to Find Out Your Site Is Down

    A client calls. Their site has been returning a 503 error for four hours. You check – they are right. The hosting provider had a blip, the site went down, and nobody noticed because nobody was watching. Four hours of lost traffic, lost leads, and lost trust.

    This happened to me once. It never happened again, because I built SM-01.

    SM-01 is the first agent in my autonomous fleet. It runs every 60 minutes via Windows Task Scheduler, checks 23 websites across my client portfolio, and reports to Slack only when it finds a problem. No dashboard to check. No email digest to read. Silence means everything is fine. A Slack message means something needs attention.

    What SM-01 Checks

    HTTP status: Is the site returning 200? A 503, 502, or 500 triggers an immediate red alert. A 301 or 302 redirect chain triggers a yellow alert – the site works but something changed.

    Response time: How long does the homepage take to respond? Baseline is established over 30 days of monitoring. If response time exceeds 2x the baseline, a yellow alert fires. If it exceeds 5x, red alert. Slow sites lose rankings and visitors before they fully go down – response time degradation is an early warning.

    SSL certificate expiration: SM-01 checks the SSL certificate expiry date on every pass. If a certificate expires within 14 days, yellow alert. Within 3 days, red alert. Expired, critical alert. An expired SSL certificate turns your site into a browser warning page and kills organic traffic instantly.

    Content integrity: The agent checks for the presence of specific strings on each homepage – the site name, a key heading, or a footer element. If these strings disappear, it means the homepage content changed unexpectedly – possibly a defacement, a bad deploy, or a theme crash. This catches the subtle failures that return a 200 status code but serve broken content.
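
    The four checks reduce to a handful of threshold rules. A sketch of the classification logic using only the thresholds stated above; the fetch is illustrated here with the stdlib, while the real script uses the requests library.

```python
def classify_http(status: int) -> str:
    """200 is green; redirects are yellow; server errors are red."""
    if status in (500, 502, 503):
        return "red"
    if status in (301, 302):
        return "yellow"
    return "green" if status == 200 else "yellow"

def classify_response_time(elapsed: float, baseline: float) -> str:
    """Degradation against the 30-day baseline is the early warning."""
    if elapsed > 5 * baseline:
        return "red"
    if elapsed > 2 * baseline:
        return "yellow"
    return "green"

def classify_ssl_days_left(days: int) -> str:
    """Certificate expiry thresholds: 14 days yellow, 3 days red."""
    if days < 0:
        return "critical"  # already expired
    if days <= 3:
        return "red"
    if days <= 14:
        return "yellow"
    return "green"

def check_site(site: dict) -> dict:
    """One pass over one entry from the site-list JSON. Note that
    urllib follows redirects by default, so a requests-based script
    would disable redirects to observe 301/302 directly."""
    import time, urllib.request
    start = time.monotonic()
    with urllib.request.urlopen(site["url"], timeout=30) as resp:
        body = resp.read().decode("utf-8", errors="replace")
        status = resp.status
    elapsed = time.monotonic() - start
    return {
        "http": classify_http(status),
        "speed": classify_response_time(elapsed, site["baseline_s"]),
        "content": "green" if site["check_string"] in body else "red",
    }
```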

    The Architecture Is Deliberately Boring

    SM-01 is a Python script. It uses the requests library for HTTP checks, the ssl and socket libraries for certificate inspection, and a Slack webhook for alerts. No monitoring platform. No subscription. No agent framework. Under 250 lines of code.

    The site list is a JSON file with 23 entries. Each entry has the URL, expected status code, content check string, and baseline response time. Adding a new site takes 30 seconds – add an entry to the JSON file.

    Results are stored in a local SQLite database for trend analysis. I can query historical uptime, average response time, and alert frequency for any site over any time period. The database is 12MB after six months of hourly checks across 23 sites.

    What Six Months of Data Revealed

    Across 23 sites monitored hourly for six months, SM-01 recorded 99.7% average uptime. The 0.3% downtime was concentrated in three sites on shared hosting – every other site on dedicated or managed hosting had 99.99%+ uptime.

    SSL certificate alerts saved two near-misses where auto-renewal failed silently. Without SM-01, those certificates would have expired and the sites would have shown browser security warnings until someone manually noticed and renewed.

    Response time trending caught one hosting degradation issue three weeks before it became a visible problem. A site’s response time crept from 400ms baseline to 900ms over 10 days. SM-01 flagged it at the 800ms mark. Investigation revealed a database table that needed optimization. Fixed in 20 minutes, before any traffic impact.

    Frequently Asked Questions

    Why not use UptimeRobot or Pingdom?

    I have. They work well for basic uptime monitoring. SM-01 adds content integrity checking, custom response time baselines per site, and integration with my existing Slack alert ecosystem. The biggest advantage is cost at scale – monitoring 23 sites on UptimeRobot Pro carries a monthly subscription fee. SM-01 costs nothing.

    Does hourly checking miss short outages?

    Yes – an outage lasting 30 minutes between checks would be missed. For critical production sites, you could reduce the interval to 5 minutes. I chose hourly because my sites are content sites, not e-commerce or SaaS platforms where minutes of downtime have direct revenue impact. The monitoring frequency should match the cost of missed downtime.

    How do you handle false positives from network issues?

    SM-01 requires two consecutive failed checks before alerting. A single timeout or error is logged but not reported. This eliminates the vast majority of false positives from transient network blips or temporary DNS issues. If both the hourly check and the immediate recheck 60 seconds later fail, the alert fires.
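
    The two-consecutive-failures rule is a few lines of state. A sketch that captures only the gating logic; the real script performs the recheck 60 seconds after the first failure.

```python
class Debouncer:
    """Require two consecutive failed checks before alerting,
    filtering transient network blips and DNS hiccups."""

    def __init__(self):
        self.failing = set()

    def should_alert(self, site: str, ok: bool) -> bool:
        if ok:
            self.failing.discard(site)   # recovered: clear the strike
            return False
        if site in self.failing:         # second consecutive failure
            return True
        self.failing.add(site)           # first failure: log, recheck later
        return False
```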

    Monitoring Is Not Optional

    Every website you manage is a promise to a client. That promise includes being available when their customers look for them. SM-01 is how I keep that promise without manually checking 23 URLs every day. It is the simplest agent in my fleet and arguably the most important.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "SM-01: How One Agent Monitors 23 Websites Every Hour Without Me",
      "description": "SM-01 pings 23 websites every hour, checks HTTP status, SSL expiration, response time, and content integrity – then posts to Slack only when.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/sm-01-how-one-agent-monitors-23-websites-every-hour-without-me/"
      }
    }

  • NB-02: The Nightly Brief That Tells Me What Happened Across Seven Businesses While I Was Living My Life

    NB-02: The Nightly Brief That Tells Me What Happened Across Seven Businesses While I Was Living My Life

    The Morning Ritual That Replaced Checking 12 Apps

    My old morning routine: open Slack, scan 8 channels. Open Notion, check the task board. Open Gmail, triage the inbox. Open Google Analytics for each client site. Open the WordPress dashboard for any site that published overnight. Check the GCP console for VM health. That is 45 minutes of context-gathering before I do anything productive.

    NB-02 replaced all of it with a single Slack message that arrives at 6 AM every morning.

    The Nightly Brief Generator is the second agent in my fleet. It runs at 5:45 AM via scheduled task, aggregates activity from the previous 24 hours across every system I operate, and produces a structured briefing that takes 3 minutes to read. By the time I finish my coffee, I know exactly what happened, what needs attention, and what I should work on first.

    What the Nightly Brief Contains

    Agent Activity Summary: Which agents ran, how many times, success/failure counts. If SM-01 flagged a site issue overnight, it shows here. If the VIP Email Monitor caught an urgent message at 2 AM, it shows here. If SD-06 detected ranking drift on a client site, it shows here. One section, all agent activity, color-coded by severity.

    Content Published: Any articles published or scheduled across all 18 WordPress sites in the last 24 hours. Title, site, status, word count. This matters because automated publishing pipelines sometimes run overnight, and I need to know what went live without manually checking each site.

    Tasks Created: New tasks in the Notion database, grouped by source. Tasks from MP-04 meeting processing, tasks from agent alerts, tasks manually created by me or team members. The brief shows the count and highlights any marked as urgent.

    Overdue Items: Any task past its due date. This is the accountability section. It is uncomfortable by design. If something was due yesterday and is not done, it appears in bold in my morning brief. No hiding from missed deadlines.

    Infrastructure Health: Quick status of the GCP VMs, the WP proxy, and any scheduled tasks. Green/yellow/red indicators. If everything is green, this section is one line. If something is yellow or red, it expands with diagnostic details.

    How NB-02 Aggregates Data

    The agent pulls from four sources via API:

    Slack API: Reads messages posted to agent-specific channels in the last 24 hours. Counts alerts by type and severity. Extracts any unresolved red alerts that need morning attention.

    Notion API: Queries the Tasks Database for items created or modified in the last 24 hours. Queries the Content Database for recently published entries. Checks for overdue tasks.

    WordPress REST API: Quick status check on each managed site – is the REST API responding? Any posts published in the last 24 hours? This runs through the WP proxy and takes about 30 seconds for all 18 sites.

    GCP Monitoring: Instance status for the knowledge cluster VM and any Cloud Run services. Uses the Compute Engine API to check instance state and basic health metrics.

    The aggregation script runs in Python, collects data from all sources into a structured object, then formats it as a Slack message using Block Kit for clean formatting with sections, dividers, and color-coded indicators. Total runtime: under 2 minutes.
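
    The final formatting step might look like this: assembling Block Kit blocks from an aggregated summary and posting via an incoming webhook. The summary structure is an assumption; the header, divider, and section block types are standard Block Kit.

```python
import json

def build_brief_blocks(summary: dict) -> list:
    """Format the aggregated overnight data as Slack Block Kit blocks.
    Summary keys are illustrative, not NB-02's exact schema."""
    sev_icon = {"green": ":large_green_circle:",
                "yellow": ":large_yellow_circle:",
                "red": ":red_circle:"}
    blocks = [{"type": "header",
               "text": {"type": "plain_text", "text": "Nightly Brief"}}]
    for section, payload in summary.items():
        blocks.append({"type": "divider"})
        icon = sev_icon.get(payload.get("severity", "green"))
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn",
                     "text": f"{icon} *{section}*\n{payload['text']}"},
        })
    return blocks

def post_brief(webhook_url: str, blocks: list):
    """Send the brief to Slack via an incoming webhook."""
    import urllib.request
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"blocks": blocks}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```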

    The Behavioral Impact

    The nightly brief changed how I start every day. Instead of reactive context-gathering across multiple apps, I start with a complete picture and move directly into action. The first 45 minutes of my day shifted from information archaeology to execution.

    More importantly, the brief gives me confidence in my systems. When six agents are running autonomously overnight, processing emails, monitoring sites, tracking rankings, and generating content, you need a single point of verification that everything worked. NB-02 is that verification. If the morning brief arrives and everything is green, I know with certainty that my operations ran correctly while I slept.

    On the days when something is yellow or red, I know immediately and can address it before it impacts clients or deadlines. The alternative – discovering a problem at 2 PM when a client asks why their site is slow – is the scenario NB-02 was built to prevent.

    Frequently Asked Questions

    Can the nightly brief be customized per day of the week?

    Yes. Monday briefs include a weekly summary rollup in addition to the overnight report. Friday briefs include a weekend preparation section flagging anything that might need attention over the weekend. The template is configurable per day.

    What happens if NB-02 itself fails to run?

    If the brief does not arrive by 6:15 AM, that absence is itself the alert. I have a simple phone alarm at 6:15 that I dismiss only after reading the brief. If the brief is not there, I know the scheduled task failed and check the system. The absence of expected output is a signal.

    How long did it take to build?

    The first version took about 4 hours – API connections, data aggregation, Slack formatting. I have iterated on the format about 10 times over three months based on what information I actually use versus what I skip. The current version is tight – everything in the brief earns its place.

    Start Your Day With Certainty

    The nightly brief is the simplest concept in my agent fleet and the one with the most immediate quality-of-life impact. It replaces anxiety with data, replaces app-hopping with a single read, and gives you the operational confidence to start building instead of checking. If you build one agent, build this one first.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "NB-02: The Nightly Brief That Tells Me What Happened Across Seven Businesses While I Was Living My Life",
      "description": "Every morning at 6 AM, NB-02 compiles what happened overnight across all my businesses – agent activity, content published, alerts fired, tasks.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/nb-02-the-nightly-brief-that-tells-me-what-happened-across-seven-businesses-while-i-was-living-my-life/"
      }
    }

  • I Deployed a Client-Facing Chatbot on Vertex AI for Less Than a Penny Per Conversation

    I Deployed a Client-Facing Chatbot on Vertex AI for Less Than a Penny Per Conversation

    The Client Asked for a Chatbot. I Built Them an Employee.

    A restoration client wanted a website chatbot. Their brief was simple: answer common questions about services, capture lead information, and route urgent inquiries to their dispatch team. The expectation was a monthly-subscription SaaS widget with canned responses.

    I built them something better. A custom chatbot running on Google Vertex AI via Cloud Run, trained on their specific service pages, pricing guidelines, and service area boundaries. It handles natural language questions, qualifies leads by asking the right follow-up questions, and routes urgent water damage calls directly to dispatch with full context. Cost per conversation: $0.002. That is two-tenths of a penny.

    At 500 conversations per month, the total AI cost is $1. Add Cloud Run hosting for the container, and the total monthly infrastructure cost is a small fraction of the SaaS widget it replaces, for a chatbot that performs significantly better because it actually understands the business.

    The Architecture

    The chatbot runs on three components:

    Vertex AI (Gemini model): Handles the conversational intelligence. The model receives a system prompt loaded with the client’s service information, pricing ranges, service area (Houston metro), and qualification criteria. It responds conversationally, asks clarifying questions when needed, and structures lead information for capture.

    Cloud Run container: A lightweight Python FastAPI application that serves as the API endpoint. The WordPress site calls this endpoint via JavaScript when a visitor interacts with the chat widget. The container handles session management, conversation history, and the Vertex AI API calls. It scales to zero when not in use, so idle hours cost nothing.

    WordPress integration: A simple JavaScript widget on the client site that renders the chat interface and communicates with the Cloud Run endpoint. No WordPress plugin required. The widget is 40 lines of JavaScript that creates a chat bubble, handles user input, and displays responses.

    Why Vertex AI Instead of OpenAI

    Cost: Gemini 1.5 Flash on Vertex AI costs significantly less per token than GPT-4 or GPT-3.5. For a chatbot handling short conversational exchanges, the per-conversation cost difference is dramatic.

    Data residency: Vertex AI runs on GCP infrastructure where I already have my project. Data stays within the Google Cloud ecosystem I control. No third-party API means the conversation data, which includes client contact information, stays within my GCP project boundaries.

    Scale-to-zero: Cloud Run only charges when processing requests. During overnight hours when nobody is chatting, the cost is literally zero. OpenAI’s API has the same pay-per-use model, but coupling it with Cloud Run for the hosting layer gives me full control over the deployment.

    The System Prompt That Makes It Work

    The chatbot’s intelligence comes entirely from its system prompt. No fine-tuning. No RAG pipeline. No vector database. Just a well-structured system prompt that contains the client’s service descriptions, pricing ranges (not exact quotes), service area zip codes, qualification questions, and escalation triggers.

    The prompt includes explicit instructions for lead qualification. When someone describes a water damage situation, the chatbot asks: When did the damage occur? Is it an active leak or standing water? What is the approximate affected area? Is this a residential or commercial property? Do you have insurance? These questions mirror what the dispatch team asks on phone calls.

    When the qualification criteria indicate an emergency (active leak, less than 24 hours, standing water), the chatbot provides the dispatch phone number prominently and offers to notify the team. Non-emergency inquiries get scheduled callback options.
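    The escalation rule itself is simple enough to sketch as a small predicate. Field and action names here are mine, invented for illustration — in the real system this logic lives in the system prompt, not in code:

```python
from dataclasses import dataclass

@dataclass
class Qualification:
    hours_since_damage: float
    active_leak: bool
    standing_water: bool

def is_emergency(q):
    # The triggers described above: active leak, standing water,
    # or damage within the last 24 hours.
    return q.active_leak or q.standing_water or q.hours_since_damage < 24

def route(q):
    # Emergencies surface the dispatch phone number; everything else
    # gets scheduled callback options.
    return "dispatch_phone" if is_emergency(q) else "scheduled_callback"
```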

    Results After 90 Days

    The chatbot handled 1,400 conversations in its first 90 days. Of those, 340 were qualified leads (24% conversion rate from chat to lead). Of the qualified leads, 89 became paying customers.

    The previous chatbot solution (a SaaS widget with canned response trees) had a 6% chat-to-lead conversion rate. The AI chatbot quadrupled it because it can actually understand what someone is describing and respond helpfully rather than forcing them through a rigid decision tree.

    Total infrastructure cost for 90 days: approximately . Total value of the 89 customers: several hundred thousand dollars in restoration work. The ROI is not a percentage – it is a category error to even calculate it.

    Frequently Asked Questions

    Can the chatbot handle multiple languages?

    Yes. Gemini handles multilingual conversations natively. The Houston market has a significant Spanish-speaking population, and the chatbot responds in Spanish when addressed in Spanish without any additional configuration. This alone increased lead capture from a demographic the client was previously underserving.

    What happens when the chatbot cannot answer a question?

    The system prompt includes a graceful fallback: if the question is outside the defined scope, the chatbot acknowledges the limitation and offers to connect the visitor with a human team member via phone or scheduled callback. It never fabricates information about pricing or services.

    How hard is this to set up for a new client?

    About 3 hours. Create the Cloud Run service from the template, customize the system prompt with the client’s information, deploy, and add the JavaScript widget to their WordPress site. The infrastructure is templated – the customization is entirely in the system prompt content.

    The Bigger Point

    AI chatbots do not need to be expensive SaaS products with monthly subscriptions. The underlying technology – language models accessible via API – costs fractions of a penny per interaction. The value is in the deployment architecture and the domain-specific knowledge you embed in the system prompt. Own the infrastructure, own the intelligence, and the cost drops to near zero while the quality exceeds anything a canned-response widget can deliver.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "I Deployed a Client-Facing Chatbot on Vertex AI for Less Than a Penny Per Conversation",
      "description": "Using Google Vertex AI and Cloud Run, I deployed a production chatbot for a client site that handles FAQs, qualifies leads, and routes inquiries –.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/i-deployed-a-client-facing-chatbot-on-vertex-ai-for-less-than-a-penny-per-conversation/"
      }
    }

  • The Agency That Runs on AI: What Tygart Media Actually Looks Like in 2026

    The Agency That Runs on AI: What Tygart Media Actually Looks Like in 2026

    The Org Chart Has One Name and Seven Agents

    Tygart Media does not have employees. It has systems. The agency manages 18 WordPress sites across industries including luxury lending, restoration services, cold storage logistics, interior design, comedy, automotive training, and technology. It produces hundreds of SEO-optimized articles per month. It monitors keyword rankings daily. It tracks site uptime hourly. It processes meeting transcripts automatically. It generates nightly operational briefs.

    One person runs all of it. Not by working 80-hour weeks. By building infrastructure that works autonomously.

    This is not a hypothetical future state. This is what the agency looks like right now, in March 2026. And the operational details are more interesting than the headline.

    The Infrastructure Stack

    AI Partner: Claude in Cowork mode, running 387+ sessions since December 2025. This is the primary operating interface – a sandboxed Linux environment with bash execution, file access, API connections, and 60+ custom skills.

    Autonomous Agents: Seven local Python agents running on a Windows laptop: SM-01 (site monitor), NB-02 (nightly brief), AI-03 (auto-indexer), MP-04 (meeting processor), ED-05 (email digest), SD-06 (SEO drift detector), NR-07 (news reporter). Each runs on a schedule via Windows Task Scheduler.
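    The seven agents share one shape: a scheduled script that runs a check, alerts only on failures, and logs the run. A stripped-down sketch of that common cycle (the real agents are more involved, and these function names are mine):

```python
def run_agent(check, alert, log):
    """One scheduled cycle, as invoked by Task Scheduler.

    `check` returns (item, ok) pairs, `alert` posts to Slack, `log`
    writes the run log. Quiet runs produce a log line but no alert
    noise, which is what keeps the nightly brief readable.
    """
    results = check()
    failures = [item for item, ok in results if not ok]
    for item in failures:
        alert("RED: %s failed its check" % item)
    log("%d checked, %d failed" % (len(results), len(failures)))
    return failures
```

    SM-01's `check` issues HTTP requests against each site; SD-06's compares today's rankings to yesterday's. Only the check function changes between agents.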

    WordPress Management: 18 sites connected through a Cloud Run proxy that routes REST API calls to avoid IP blocking. One GCP publisher service for the SiteGround-hosted site that blocks all proxy traffic. Full credential registry as a skill file.

    Cloud Infrastructure: GCP project with Compute Engine VMs running a 5-site WordPress knowledge cluster, Cloud Run services for the WP proxy and 247RS publisher, and Vertex AI for client-facing chatbot deployments.

    Knowledge Layer: Notion as the operating system with six core databases. Local vector database (ChromaDB + Ollama) indexing 468 files for semantic search. Slack as the real-time alert surface.

    Content Production: Content intelligence audits, adaptive variant pipelines producing persona-targeted articles, full SEO/AEO/GEO optimization on every piece, and batch publishing via REST API.

    Monthly cost: Claude Pro () + GCP infrastructure (~) + DataForSEO (~) + domain registrations and hosting (varies by client). Total operational infrastructure: under /month.

    What the Daily Operation Actually Looks Like

    6:00 AM: NB-02 delivers the nightly brief to Slack. I read it with coffee. 3 minutes to know the state of everything.

    6:15 AM: Check for any red alerts from overnight agent activity. Most days there are none. Handle any urgent items.

    7:00 AM: Open Cowork mode. Load the day’s priority from Notion. Start the first working session – usually content production or site optimization.

    Morning sessions: Two to three Cowork sessions handling client deliverables. Content batches, SEO audits, site optimizations. Each session triggers skills that automate 80% of the execution.

    Midday: Client calls and meetings. MP-04 processes every transcript and routes action items to Notion automatically.

    Afternoon sessions: Infrastructure work, skill building, agent improvements. This is the investment time – building systems that make tomorrow more efficient than today.

    Evening: Agents continue running. SM-01 checks sites every hour. The VIP Email Monitor watches for urgent messages. SD-06 tracks rankings. I am either building, thinking, or on Producer.ai making music. The systems do not need me to be present.

    The Numbers That Matter

    Content velocity: 400+ articles published across 18 sites in three months. At market rates, that represents – in content production value.

    Site monitoring: 23 sites checked hourly, 99.7% average uptime tracked, 2 SSL near-misses caught before expiration.

    SEO coverage: 200+ keywords tracked daily across all sites. Drift detected and addressed before traffic impact on every flagged instance.

    Client chatbot: 1,400 conversations handled, 24% lead conversion rate, under /month in infrastructure costs.

    Meeting processing: 91% action item extraction accuracy. Zero commitments lost since MP-04 deployment.

    Total infrastructure cost: Under /month for everything. No employees. No freelancer invoices. No SaaS subscriptions over .

    What This Means for the Industry

    The traditional agency model requires hiring specialists: content writers, SEO analysts, web developers, project managers, account managers. Each hire adds salary, benefits, management overhead, and communication complexity. A 10-person agency serving 18 clients has significant operational overhead just coordinating between team members.

    The AI-native agency model replaces coordination with automation. Skills encode operational knowledge that would otherwise live in employees’ heads. Agents handle monitoring and processing that would otherwise require dedicated staff. The Notion command center replaces the project management overhead of keeping everyone aligned.

    This does not mean agencies should fire everyone and buy AI subscriptions. It means the economics of what one person can manage have changed fundamentally. The ceiling used to be 3-5 clients for a solo operator. With the right infrastructure, it is 18+ sites across multiple industries – and growing.

    Frequently Asked Questions

    Is this sustainable long-term or does it require constant maintenance?

    The system requires about 5 hours per week of maintenance – updating skills, tuning agent thresholds, fixing occasional API failures, and improving workflows. This is investment time that reduces future maintenance. The system gets more stable and capable every month, not less.

    What happens if Claude or Cowork mode has an outage?

    The autonomous agents run locally and are independent of Claude. They continue monitoring, alerting, and processing regardless. Content production pauses until Cowork mode returns, but operational infrastructure stays live. The architecture avoids single points of failure by design.

    Can other agencies replicate this?

    The infrastructure is replicable. The skills are transferable. The agent architectures are documented. What takes time is building the specific operational knowledge for your client portfolio – the credentials, workflows, content standards, and quality gates specific to each business. That is a 3-6 month investment. But once built, it compounds indefinitely.

    The Only Moat Is Velocity

    Every tool I use is available to everyone. Claude, Ollama, GCP, Notion, WordPress REST API – none of this is proprietary. The advantage is not in the tools. It is in having built the system while others are still debating whether to try AI. By the time competitors build their first skill, I will have 200. By the time they deploy their first agent, mine will have six months of operational data informing its decisions. The moat is not technology. The moat is accumulated operational velocity. And it compounds every single day.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The Agency That Runs on AI: What Tygart Media Actually Looks Like in 2026",
      "description": "No employees. 18 WordPress sites. 7 autonomous agents. 60+ skills. 387 Cowork sessions. /month in infrastructure. This is what a one-person AI-native.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/the-agency-that-runs-on-ai-what-tygart-media-actually-looks-like-in-2026/"
      }
    }

  • I Built an AI Email Concierge That Replies to My Inbox While I Sleep

    I Built an AI Email Concierge That Replies to My Inbox While I Sleep

    The Email Problem Nobody Solves

    Every productivity guru tells you to batch your email. Check it twice a day. Use filters. The advice is fine for people with 20 emails a day. When you run seven businesses, your inbox is not a communication tool. It is an intake system for opportunities, obligations, and emergencies arriving 24 hours a day.

    I needed something different. Not an email filter. Not a canned autoresponder. An AI concierge that reads every incoming email, understands who sent it, knows the context of our relationship, and responds intelligently — as itself, not pretending to be me. A digital colleague that handles the front door while I focus on the work behind it.

    So I built one. It runs every 15 minutes via a scheduled task. It uses the Gmail API with OAuth2 for full read/send access. Claude handles classification and response generation. And it has been live since March 21, 2026, autonomously handling business communications across active client relationships.

    The Classification Engine

    Every incoming email gets classified into one of five categories before any action is taken:

    BUSINESS — Known contacts from active relationships. These people have opted into the AI workflow by emailing my address. The agent responds as itself — Claude, my AI business partner — not pretending to be me. It can answer marketing questions, discuss project scope, share relevant insights, and move conversations forward.

    COLD_OUTREACH — Unknown people with personalized pitches. This triggers the reverse funnel. More on that below.

    NEWSLETTER — Mass marketing, subscriptions, promotions. Ignored entirely.

    NOTIFICATION — System alerts from banks, hosting providers, domain registrars. Ignored unless flagged by the VIP monitor.

    UNKNOWN — Anything that does not fit cleanly. Flagged for manual review. The agent never guesses on ambiguous messages.
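    Claude does the actual labeling; the dispatch on top of the label is deliberately dumb. A sketch of the routing table — category names are from the list above, action names are mine:

```python
ROUTES = {
    "BUSINESS": "reply_as_ai_partner",
    "COLD_OUTREACH": "reverse_funnel",
    "NEWSLETTER": "ignore",
    "NOTIFICATION": "ignore_unless_vip",
    "UNKNOWN": "flag_for_review",
}

def dispatch(label):
    # Anything the classifier returns outside the five known labels is
    # treated as UNKNOWN: the agent never guesses on ambiguous messages.
    return ROUTES.get(label, ROUTES["UNKNOWN"])
```

    Keeping the routing this explicit means a misclassified message fails safe — it lands in manual review rather than triggering an autonomous reply.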

    The Reverse Funnel

    Traditional cold outreach response: ignore it or send a template. Both waste the opportunity. The reverse funnel does something counterintuitive — it engages cold outreach warmly, but with a strategic purpose.

    When someone cold-emails me, the agent responds conversationally. It asks what they are working on. It learns about their business. It delivers genuine value — marketing insights, AI implementation ideas, strategic suggestions. Over the course of 2-3 exchanges, the relationship reverses. The person who was trying to sell me something is now receiving free consulting. And the natural close becomes: “I actually help businesses with exactly this. Want to hop on a call?”

    The person who cold-emailed to sell me SEO services is now a potential client for my agency. The funnel reversed. And the AI handled the entire nurture sequence.

    Surge Mode: 3-Minute Response When It Matters

    The standard scan runs every 15 minutes. But when the agent detects a new reply from an active conversation, it activates surge mode — a temporary 3-minute monitoring cycle focused exclusively on that contact.

    When a key contact replies, the system creates a dedicated rapid-response task that checks for follow-up messages every 3 minutes. After one hour of inactivity, surge mode automatically disables itself. During that hour, the contact experiences near-real-time conversation with the AI.

    This solves the biggest problem with scheduled email agents: the 15-minute gap feels robotic when someone is in an active back-and-forth. Surge mode makes the conversation feel natural and responsive while still being fully autonomous.
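    The interval logic behind surge mode can be sketched as a small state tracker. This is a minimal illustration of the rule described above, not the production implementation:

```python
from datetime import datetime, timedelta

STANDARD_INTERVAL = timedelta(minutes=15)
SURGE_INTERVAL = timedelta(minutes=3)
SURGE_WINDOW = timedelta(hours=1)

class SurgeTracker:
    """Tracks the last inbound reply per contact to pick a scan interval."""

    def __init__(self):
        self.last_reply = {}

    def record_reply(self, contact, when):
        self.last_reply[contact] = when

    def interval(self, contact, now):
        last = self.last_reply.get(contact)
        if last is not None and now - last < SURGE_WINDOW:
            return SURGE_INTERVAL  # active back-and-forth: 3-minute checks
        return STANDARD_INTERVAL   # surge never started, or the hour lapsed
```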

    The Work Order Builder

    When contacts express interest in a project — a website, a content campaign, an SEO audit — the agent does not just say “let me have Will call you.” It becomes a consultant.

    Through back-and-forth email conversation, the agent asks clarifying questions about goals, audience, features, timeline, and existing branding. It assembles a rough scope document through natural dialogue. When the prospect is ready for pricing, the agent escalates to me with the full context packaged in Notion — not a vague “someone is interested” note, but a structured work order ready for pricing and proposal.

    The AI handles the consultative selling. I handle closing and pricing. The division is clean and plays to each party’s strength.
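    The work order itself reduces to a checklist the agent fills in through conversation. A rough sketch, with field names of my own choosing rather than the actual scope template:

```python
from dataclasses import dataclass, field

# Scope areas the agent covers before escalating; the real template
# likely differs, but the gather-then-escalate pattern is the same.
REQUIRED_FIELDS = ("goals", "audience", "features", "timeline", "branding")

@dataclass
class WorkOrder:
    contact: str
    answers: dict = field(default_factory=dict)

    def record(self, name, answer):
        self.answers[name] = answer

    def missing(self):
        return [f for f in REQUIRED_FIELDS if f not in self.answers]

    def ready_to_escalate(self):
        # Hand off to a human for pricing only once the scope is complete.
        return not self.missing()
```

    Each email exchange fills in another field; `missing()` tells the agent what to ask next, and a complete order is what gets packaged into Notion.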

    Per-Contact Knowledge Base

    Every person the concierge communicates with gets a profile in a dedicated Notion database. Each profile contains background information, active requests, completed deliverables, a research queue, and an interaction log.

    Before composing any response, the agent reads the contact’s profile. This means the AI remembers previous conversations, knows what has been promised, and never asks a question that was already answered. The contact experiences continuity — not the stateless amnesia of typical AI interactions.

    The research queue is particularly powerful. Between scan cycles, items flagged for research get investigated so the next conversation starts better informed. If a contact mentioned interest in drone technology, the agent researches drone applications in their industry and weaves those insights into the next reply.
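    The profile read before composing is essentially a flattening step: turn the Notion record into a context block the model sees ahead of the draft. A sketch under that assumption — the profile keys here are illustrative, not the actual database schema:

```python
def build_context(profile):
    """Flatten a contact profile into the context read before replying."""
    parts = ["Background: " + profile.get("background", "none on file")]
    for req in profile.get("active_requests", []):
        parts.append("Open request: " + req)
    for note in profile.get("research_notes", []):
        parts.append("Research insight: " + note)
    return "\n".join(parts)
```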

    Frequently Asked Questions

    Does the agent pretend to be you?

    No. It identifies itself as Claude, my AI business partner. Contacts know they are communicating with AI. This transparency is deliberate — it positions the AI capability as a feature of working with the agency, not a deception.

    What happens when the agent does not know the answer?

    It escalates. Pricing questions, contract details, legal matters, proprietary data, and anything the agent is uncertain about get routed to me with full context. The agent explicitly tells the contact it will check with me and follow up.

    How do you prevent the agent from sharing confidential client information?

    The knowledge base includes scenario-based responses that use generic descriptions instead of client names. The agent discusses capabilities using anonymized examples. A protected entity list prevents any real client name from appearing in email responses.

    The Shift This Represents

    The email concierge is not a chatbot bolted onto Gmail. It is the first layer of an AI-native client relationship system. The agent qualifies leads, nurtures contacts, builds work orders, maintains relationship context, and escalates intelligently. It does in 15-minute cycles what a business development rep does in an 8-hour day — except it runs at midnight on a Saturday too.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "I Built an AI Email Concierge That Replies to My Inbox While I Sleep",
      "description": "An autonomous email agent monitors Gmail every 15 minutes, classifies messages, auto-replies to business contacts as an AI concierge, runs a reverse.",
      "datePublished": "2026-03-21",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/i-built-an-ai-email-concierge-that-replies-to-my-inbox-while-i-sleep/"
      }
    }