Category: Local AI & Automation

Building autonomous AI systems that run locally. Zero cloud cost, full data control, infinite scale.

  • 18 Sites, One Proxy: The Architecture That Makes Multi-Site WordPress Management Actually Work

    The Authentication Problem at Scale

    When you manage one WordPress site, authentication is simple. You store the Application Password, make a REST API call, and move on. When you manage eighteen WordPress sites across different hosting providers, different server configurations, and different security plugins, authentication becomes the single biggest source of friction in your entire operation.

    Every site has its own credentials. Every site has its own IP allowlist. Every site has its own rate limits. Every site has its own way of rejecting requests it does not like. I was spending more time debugging authentication failures than actually optimizing content.

    The proxy solved all of it. One endpoint. One authentication layer. Eighteen sites behind it. The proxy handles credential routing, request formatting, error normalization, and retry logic. My agents talk to the proxy. The proxy talks to WordPress. The agents never touch WordPress directly.

    How the Proxy Works

    The proxy is a Cloud Run service deployed on GCP. It accepts REST API requests with custom headers that specify the target WordPress site, the API endpoint, and the authentication credentials. The proxy validates the request, authenticates with the target WordPress installation, forwards the request, and returns the response.

    The authentication flow uses a proxy token for the first layer — proving that the request is coming from an authorized agent — and WordPress Application Passwords for the second layer — proving that the agent has permission to act on the specific site. Two layers of authentication, zero credential exposure in the agent code.
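A request through that two-layer flow can be sketched as follows. The endpoint URL and header names (`X-Proxy-Token`, `X-Target-Site`, `X-WP-Auth`, `X-WP-Endpoint`) are illustrative, not the proxy's actual contract:

```python
import base64

PROXY_URL = "https://wp-proxy.example.run.app/forward"  # hypothetical endpoint

def build_proxy_request(proxy_token: str, site: str, wp_user: str,
                        app_password: str, endpoint: str) -> dict:
    """Assemble a two-layer authenticated proxy call.

    Layer 1: the proxy token proves the caller is an authorized agent.
    Layer 2: the WordPress Application Password proves per-site permission.
    """
    wp_basic = base64.b64encode(f"{wp_user}:{app_password}".encode()).decode()
    return {
        "url": PROXY_URL,
        "headers": {
            "X-Proxy-Token": proxy_token,      # layer 1: agent authorization
            "X-Target-Site": site,             # routing key for the target site
            "X-WP-Auth": f"Basic {wp_basic}",  # layer 2: forwarded to WordPress
            "X-WP-Endpoint": endpoint,         # e.g. /wp-json/wp/v2/posts
        },
    }
```

The agent only ever sees the proxy URL; the WordPress credentials travel per-request in headers and are never hardcoded in agent logic.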

    Every request is logged with the target site, the endpoint, the response code, and the execution time. This gives me a complete audit trail of every API call made to every site in the portfolio. When something fails, I can trace the exact request that caused it.

    Why Not Just Use WordPress Multisite?

    WordPress Multisite solves a different problem. It puts multiple sites on one installation, which creates a single point of failure and makes it nearly impossible to use different hosting environments for different sites. My portfolio includes sites on dedicated servers, shared hosting, managed WordPress hosting, and GCP Compute Engine. Multisite cannot span these environments. The proxy can.

    The proxy also preserves site independence. Each WordPress installation is fully autonomous. It has its own plugins, its own theme, its own database. If one site goes down, the others are completely unaffected. The proxy is stateless — it does not store any WordPress data. It just routes traffic.

    Security Architecture

    The proxy runs on Cloud Run with no public ingress except the authenticated endpoint. The proxy token is a 256-bit hash that rotates on a schedule. WordPress credentials are passed per-request in encrypted headers — they are never stored on the proxy itself.

    Rate limiting is built into the proxy layer. Each site gets a maximum request rate that prevents accidental DDoS of client WordPress installations. If an agent goes haywire and tries to make 500 requests per minute to a single site, the proxy throttles it before the requests ever reach WordPress.

    The proxy also normalizes error responses. Different WordPress installations return errors in different formats depending on their server configuration and security plugins. The proxy catches these variations and returns a consistent error format to the agent, which simplifies error handling in every skill and pipeline that uses it.
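The normalization idea can be sketched like this: collapse the different shapes a WordPress install returns (core REST errors, plugin JSON, raw HTML from a security plugin) into one envelope. The field names here are assumptions:

```python
import json

def normalize_error(status: int, body: str) -> dict:
    """Return a consistent {ok, status, code, message} envelope."""
    try:
        data = json.loads(body)
    except (json.JSONDecodeError, TypeError):
        # Security plugins and WAFs often return an HTML block page, not JSON.
        return {"ok": False, "status": status,
                "code": "non_json_response", "message": body[:200]}
    # Core WP REST errors look like {"code": ..., "message": ...}.
    return {"ok": False, "status": status,
            "code": data.get("code", "unknown"),
            "message": data.get("message", "")}
```

Every skill and pipeline then branches on one `code` field instead of parsing eighteen different failure formats.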

    The Credential Registry

    Every site’s credentials live in a unified skill registry — a single document that maps site names to their WordPress URL, API user, Application Password, and any site-specific configuration. When a new site is onboarded, it gets a registry entry. When an agent needs to interact with a site, it pulls the credentials from the registry and passes them to the proxy.

    This centralization is critical for credential rotation. When a site’s Application Password needs to change, I update one registry entry. Every agent, every pipeline, every skill that touches that site automatically uses the new credentials on the next request. No code changes. No deployment. One update, instant propagation.
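The mechanism is simple: agents resolve credentials at call time instead of caching them, so one registry edit propagates instantly. A sketch with an invented schema and site name:

```python
# Hypothetical registry shape; in production this lives in a single
# document, not a Python dict.
REGISTRY = {
    "exploring-olympic-peninsula": {
        "wp_url": "https://example.com",
        "api_user": "agent-bot",
        "app_password": "xxxx xxxx xxxx xxxx",  # rotated here, nowhere else
    },
}

def credentials_for(site: str) -> dict:
    """Resolve credentials at request time; never store them in agent code."""
    entry = REGISTRY.get(site)
    if entry is None:
        raise KeyError(f"site not onboarded: {site}")
    return entry
```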

    Performance at Scale

Cloud Run auto-scales based on request volume. During a content swarm — when I am running optimization passes across all eighteen sites simultaneously — the proxy handles hundreds of concurrent requests without breaking a sweat. Cold start time is under two seconds, and warm instances add under 200 milliseconds of proxy overhead per request.

The total cost is remarkably low. Cloud Run charges per request and per compute second. At my volume — roughly 5,000 to 10,000 API calls per week — the proxy's monthly bill is negligible. That is the price of eliminating every authentication headache across eighteen WordPress sites.

    What I Would Do Differently

    If I were building the proxy from scratch today, I would add request caching for read operations. Many of my audit workflows fetch the same post data multiple times across different optimization passes. A short-lived cache at the proxy layer would cut API calls by 30 to 40 percent.

    I would also add webhook support for real-time notifications when WordPress posts are updated outside my pipeline. Right now, the proxy is request-response only. Adding an event layer would enable reactive workflows that trigger automatically when content changes.

    FAQ

    Can the proxy work with WordPress.com hosted sites?
    No. It requires self-hosted WordPress with REST API access and Application Password support, which means WordPress 5.6 or later.

    What happens if the proxy goes down?
All API operations pause until the proxy recovers. Cloud Run has a 99.95 percent uptime SLA, and this has not happened in production. The agents retry automatically.

    How hard is it to add a new site to the proxy?
    About five minutes. Add the credentials to the registry, verify the connection with a test request, and the site is live. No proxy code changes required.

  • Exploring Olympic Peninsula: How I Built a Hyper-Local AI Content Engine for Tourism

    The Hyper-Local Opportunity Nobody Is Chasing

    Every content marketer chases national keywords. High volume, high competition, low conversion. Meanwhile, hyper-local search terms sit wide open with commercial intent that national players cannot touch. That is the thesis behind Exploring Olympic Peninsula — a content site built entirely by AI agents that covers one of the most beautiful and underserved tourism regions in the Pacific Northwest.

    The Olympic Peninsula is a place I know personally. The rainforests, the hot springs, the coastal towns, the tribal lands, the seasonal rhythms that determine when you can access certain trails. This is not the kind of content that a generic AI can produce well. It requires local knowledge, seasonal awareness, and genuine familiarity with the terrain.

    So I built a system that combines my local expertise with AI-powered content generation, SEO optimization, and automated publishing. The result is a site that produces genuinely useful tourism content at a pace no human writer could sustain alone.

    The Content Architecture

    The site is organized around four content pillars: destinations, activities, seasonal guides, and practical logistics. Each pillar targets a different stage of the traveler’s journey. Destinations capture the dreaming phase. Activities capture the planning phase. Seasonal guides capture the timing decisions. Logistics capture the booking intent.

    Every article is built from a content brief that combines keyword research with local knowledge. The AI does not guess about trail conditions or restaurant quality. I seed every brief with firsthand observations, seasonal notes, and insider tips that only someone who has actually been there would know.

    The publishing pipeline is the same one I use across the entire portfolio: content brief, adaptive variant generation, SEO/AEO/GEO optimization, schema injection, and automated WordPress publishing through the Cloud Run proxy.

    Why Tourism Content Is Perfect for AI-Assisted Publishing

    Tourism content has two properties that make it ideal for AI-assisted production. First, it is evergreen with predictable seasonal updates. A guide to Hurricane Ridge hiking does not change fundamentally year to year — but it needs seasonal freshness signals that AI can inject automatically. Second, the long tail is enormous. Every trailhead, every campground, every small-town restaurant is a potential article that serves genuine search intent.

    The competition in hyper-local tourism content is almost nonexistent. National travel sites cover the Olympic Peninsula with one or two overview articles. Local tourism boards have outdated websites with poor SEO. The gap between search demand and content supply is massive.

    Building the Local Knowledge Layer

    The hardest part of this project is not the technology. It is the knowledge layer. AI can write fluent prose about any topic, but it cannot tell you that the Hoh Rainforest parking lot fills up by 9 AM on summer weekends, or that Sol Duc Hot Springs closes for maintenance every November, or that the best time to see Roosevelt elk is at dawn in the Quinault Valley.

    I built a local knowledge database in Notion that contains hundreds of these micro-observations. Trail conditions by season. Restaurant hours that differ from what Google shows. Road closures that recur annually. Tide tables that affect beach access. This database feeds into every content brief and gives the AI the context it needs to produce content that actually helps people.

    This is the moat. Any competitor can spin up an AI content site about the Olympic Peninsula. Nobody else has the local knowledge database that makes the content trustworthy.
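Mechanically, the knowledge layer is a lookup-and-merge step before generation. A sketch with an invented schema, standing in for the Notion database:

```python
# Invented schema for illustration; the real database lives in Notion.
KNOWLEDGE = [
    {"place": "Hoh Rainforest", "season": "summer",
     "note": "parking lot fills by 9 AM on weekends"},
    {"place": "Sol Duc Hot Springs", "season": "fall",
     "note": "closes for maintenance every November"},
]

def seed_brief(topic: str, place: str, season: str) -> dict:
    """Attach matching micro-observations so the model never guesses facts."""
    facts = [k["note"] for k in KNOWLEDGE
             if k["place"] == place and k["season"] == season]
    return {"topic": topic, "place": place, "season": season,
            "verified_facts": facts}
```

The generation step receives `verified_facts` as ground truth, so the prose is fluent AI output wrapped around verified local knowledge.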

    Monetization Without Compromise

    The site monetizes through affiliate partnerships with local businesses, display advertising, and eventually, a curated trip planning service. The key constraint is editorial integrity. Every recommendation is based on personal experience. No pay-for-play listings. No sponsored content disguised as editorial.

    This matters because tourism content lives or dies on trust. One bad recommendation — a restaurant that closed six months ago, a trail that is actually dangerous in winter — and the site loses credibility permanently. The local knowledge layer is not just a competitive advantage. It is a quality control system.

    Scaling the Model to Other Regions

    The architecture is designed to be replicated. The same content pipeline, the same publishing infrastructure, the same optimization framework can be deployed to any hyper-local tourism market where I have either personal knowledge or a trusted local partner. The Olympic Peninsula is the proof of concept. The model scales to any region where national content sites leave gaps.

    The vision is a network of hyper-local tourism sites, each powered by the same AI infrastructure, each differentiated by genuine local expertise. Not a content farm. A knowledge network.

    FAQ

    How do you ensure content accuracy for a tourism site?
    Every article is seeded with firsthand observations from a local knowledge database. The AI generates the prose, but the facts come from personal experience and verified local sources.

    How many articles can the system produce per week?
    The pipeline can produce 15-20 fully optimized articles per week. The bottleneck is not production — it is knowledge quality. I only publish what I can verify.

    What makes this different from other AI content sites?
    The local knowledge layer. Generic AI tourism content is easy to spot and easy to outrank. Content backed by genuine local expertise serves users better and ranks better long-term.

  • From Google Apps Script to Cloud Run: Migrating a Content Pipeline Without Breaking Production

    The Pipeline That Outgrew Its Home

    It started in a Google Sheet. A simple Apps Script that called Gemini, generated an article, and pushed it to WordPress via the REST API. It worked beautifully — for about three months. Then the volume increased, the content got more complex, the optimization requirements multiplied, and suddenly I was running a production content pipeline inside a spreadsheet.

Google Apps Script has a six-minute execution limit. My pipeline was hitting it on every run. The script would time out mid-publish, leaving half-written articles in WordPress and orphaned rows in the Sheet. I was spending more time debugging the pipeline than using it.

    The migration to Cloud Run was not optional. It was survival.

    What the Original Pipeline Did

    The Apps Script pipeline was elegantly simple. A Google Sheet held rows of keyword targets, each with a topic, a target site, and a content brief. The script would iterate through rows marked “ready,” call Gemini via the Vertex AI API to generate an article, format it as HTML, add SEO metadata, and publish it to WordPress using the REST API with Application Password authentication.

    It also logged results back to the Sheet — post ID, publish date, word count, and status. This gave me a running ledger of every article the pipeline had ever produced. At its peak, the Sheet had over 300 rows spanning eight different WordPress sites.

    The problem was not the logic. The logic was sound. The problem was the execution environment. Apps Script was never designed to run content pipelines that make multiple API calls, process large text payloads, and handle error recovery across external services.

    The Cloud Run Architecture

    The new pipeline runs on Google Cloud Run as a containerized service. It is triggered by a Cloud Scheduler cron job or by manual invocation through the proxy. The container pulls the content queue from Notion (replacing the Google Sheet), generates articles through the Vertex AI API, optimizes them through the SEO/AEO/GEO framework, and publishes through the WordPress proxy.

    The key architectural change was moving from synchronous to asynchronous processing. Apps Script runs everything in sequence — one article at a time, blocking on each API call. Cloud Run processes articles in parallel, with independent error handling for each one. If article three fails, articles four through fifteen still publish successfully.
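The failure-isolation idea can be sketched with a thread pool: each article gets its own try/except, so one failure never blocks the batch. This is a minimal illustration, not the production container code:

```python
from concurrent.futures import ThreadPoolExecutor

def publish_batch(articles, publish_fn, max_workers: int = 8):
    """Publish articles in parallel, recording a per-article outcome."""
    def attempt(article):
        try:
            return ("ok", publish_fn(article))
        except Exception as exc:  # isolate failures to this one article
            return ("error", str(exc))

    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for article, outcome in zip(articles, pool.map(attempt, articles)):
            results[article["id"]] = outcome
    return results
```

If article three raises, articles four through fifteen still publish, and the results map tells you exactly which one failed and why.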

    Error recovery was the other major upgrade. Apps Script has no retry logic beyond what you manually code into try-catch blocks. Cloud Run has built-in retry policies, dead letter queues, and structured logging. When something fails, I know exactly what failed, why, and whether it recovered on retry.

    The Migration Strategy

    I did not do a big-bang migration. I ran both systems in parallel for two weeks. The Apps Script pipeline continued handling three low-volume sites while I migrated the high-volume sites to Cloud Run one at a time. Each migration followed the same pattern: verify credentials on the new system, publish one test article, compare the output to an Apps Script article from the same site, and then switch over.

    The parallel period caught three bugs that would have caused data loss in a direct cutover. One was a character encoding issue where Cloud Run’s UTF-8 handling differed from Apps Script’s. Another was a timezone mismatch in the publish timestamps. The third was a subtle difference in how the two systems handled WordPress category IDs.

    Every bug was caught because I had a production comparison running side by side. This is the only safe way to migrate a content pipeline: never trust the new system until it proves itself against the old one.
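The comparison itself can be as simple as a field-by-field diff of the two systems' outputs for the same brief. A sketch, with field names chosen to illustrate (the timezone bug would surface exactly this way):

```python
def compare_outputs(old: dict, new: dict,
                    fields=("title", "category_ids", "published_at")) -> list:
    """Return the fields where the old and new pipelines disagree."""
    return [f for f in fields if old.get(f) != new.get(f)]
```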

    What Changed After Migration

    Publishing speed went from 45 minutes for a batch of ten articles to under eight minutes. Error rate dropped from roughly 15 percent (mostly timeouts) to under 2 percent. And the pipeline now handles 18 sites without modification — the same container, the same code, different credential sets pulled from the site registry.

    The biggest win was not speed. It was confidence. With Apps Script, every batch run was a gamble. Would it timeout? Would it leave orphaned posts? Would the Sheet get corrupted? With Cloud Run, I trigger the pipeline and walk away. It either succeeds completely or fails cleanly with a detailed error log.

    Lessons for Anyone Running Production Pipelines in Spreadsheets

    First: if your spreadsheet pipeline takes more than 60 seconds to run, it is already too big for a spreadsheet. Start planning the migration now, not when it breaks.

    Second: always run parallel before cutting over. The bugs you catch in parallel mode are the bugs that would have cost you data in production.

    Third: structured logging is not optional. When your pipeline publishes to external services, you need to know exactly what happened on every run. Spreadsheet logs are fragile. Cloud logging is permanent and searchable.

    Fourth: the migration is an opportunity to fix everything you tolerated in the original system. Do not just port the code. Redesign the architecture for the new environment.

    FAQ

    How much does Cloud Run cost compared to Apps Script?
Apps Script is free but limited. Cloud Run costs on the order of $30 per month at my volume, which is negligible compared to the time saved from fewer failures and faster execution.

    Do you still use Google Sheets anywhere in the pipeline?
    No. Notion replaced the Sheet as the content queue. The Sheet was a good prototype but a poor production database.

    How long did the full migration take?
    Three weeks from first Cloud Run deployment to full cutover. The parallel running period was the longest phase.

  • How AI Writes Its Own Instructions: The Self-Creating Skill System That Learns From Every Session

    The Recursion That Actually Works

    Most people think of AI as a tool you give instructions to. I built a system where the AI writes its own instructions. Not in a theoretical research lab sense. In a production business operations sense. The skill-creator skill is an AI agent whose sole job is to observe what works in real sessions, extract the patterns, and codify them into new skills that other agents can use.

    A skill, in my system, is a structured set of instructions that tells an AI agent how to perform a specific task. It includes the trigger conditions, the step-by-step procedure, the quality gates, the error handling, and the expected outputs. Writing a good skill takes deep domain knowledge and careful iteration. It used to take me hours per skill. Now the AI writes them in minutes, and the quality is often better than what I produce manually.

    How Skill Self-Creation Works

    The process starts with observation. During every working session, the AI tracks which actions it takes, which tools it uses, which decisions require my input, and which outcomes are successful. This creates a session log — a structured record of the entire workflow from start to finish.

    After the session, the skill-creator agent analyzes the log. It identifies repeatable patterns: sequences of actions that were performed multiple times with consistent success. It extracts the decision logic: the conditions under which the AI chose one path over another. And it captures the quality gates: the checks that determined whether an output was acceptable.

    From this analysis, the agent drafts a new skill. The skill follows a standardized format — YAML frontmatter with metadata, followed by markdown instructions with step-by-step procedures. The agent writes the description that determines when the skill triggers, the instructions that determine how it executes, and the validation criteria that determine whether it succeeded.
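A skill in that standardized format might look like the following. The skill name, triggers, and steps here are invented for illustration; only the shape (YAML frontmatter, then markdown procedure and validation sections) reflects the format described above:

```markdown
---
name: wp-publish-optimized
description: Publish a drafted article to a portfolio WordPress site after
  the optimization pass. Triggers on "publish", "push to WordPress".
triggers: [publish, wordpress, push live]
negative_triggers: [draft only, preview]
---

## Procedure
1. Run the SEO/AEO/GEO optimization pass on the draft.
2. Verify excerpt length and category assignment.
3. Publish through the Cloud Run proxy.

## Validation
- A post ID is returned and post status is `publish`.
- Schema markup is present in the rendered page.
```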

    The Quality Problem and How We Solved It

    Early versions of skill self-creation produced mediocre skills. They captured the surface-level actions but missed the contextual judgment that made the workflow actually work. The agent would write a skill that said “publish to WordPress” but miss the nuance of checking excerpt length, verifying category assignment, or running the SEO optimization pass before publishing.

    The fix was adding a refinement loop. After the agent drafts a skill, it runs a simulated execution against a test case. If the simulated execution misses steps that the original session included, the agent revises the skill. This loop runs until the simulated execution matches the original session’s quality within a defined tolerance.
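The loop reduces to: measure how many of the original session's steps the draft covers, and fold missed steps back in until coverage clears the tolerance. A toy sketch with invented names and thresholds:

```python
def refine_skill(draft: list, session_steps: list,
                 tolerance: float = 0.9, max_rounds: int = 5) -> list:
    """Revise the draft until it covers enough of the session's steps."""
    for _ in range(max_rounds):
        covered = [s for s in session_steps if s in draft]
        if len(covered) / len(session_steps) >= tolerance:
            return draft  # simulated execution matches within tolerance
        missing = [s for s in session_steps if s not in draft]
        draft = draft + missing[:1]  # revision: fold one missed step back in
    return draft
```

The real refinement loop revises prose instructions rather than appending step strings, but the convergence logic is the same: iterate until the simulation stops missing what the live session did.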

    The second fix was adding a description optimization pass. A skill is useless if it never triggers. The agent now analyzes the trigger conditions — the keywords, phrases, and contexts that should activate the skill — and optimizes the description for maximum recall without false positives. This is essentially SEO for AI skills.

    Skills That Write Better Skills

    The most recursive part of the system is that the skill-creator skill itself was partially written by an earlier version of itself. I wrote the first version manually. That version observed me creating skills by hand, extracted the patterns, and produced a second version that was more comprehensive. The second version then refined itself into the third version, which is what runs in production today.

    Each generation captures more nuance. The first version knew to include trigger conditions. The second version learned to include negative triggers — conditions that should explicitly not activate the skill. The third version added variance analysis — testing whether a skill performs consistently across different invocation contexts or only works in the specific scenario where it was created.

    This is not artificial general intelligence. It is not sentient. It is a well-designed feedback loop that improves operational documentation through structured iteration. But the output is remarkable: a library of over 80 production skills, many of which were created or significantly refined by the system itself.

    What This Means for Business Operations

    The traditional way to scale operations is to hire people, train them, and hope they follow the procedures consistently. The skill self-creation model inverts this. The AI observes the best version of a procedure, codifies it perfectly, and then executes it identically every time. No training decay. No interpretation drift. No Monday morning inconsistency.

    When I discover a better way to optimize a WordPress post — a new schema type, a better FAQ structure, a more effective interlink pattern — I do it once in a live session. The skill-creator agent watches, extracts the improvement, and updates the relevant skill. From that moment forward, every post optimization across every site includes the improvement. One session, permanent upgrade, portfolio-wide deployment.

    The Limits of Self-Creation

    The system cannot create skills for tasks it has never observed. It cannot invent new optimization techniques or discover new strategies. It can only codify and refine what it has seen work in practice. The creative direction, the strategic decisions, the judgment calls — those still come from me.

    It also cannot evaluate business impact. It knows whether a skill executed correctly, but it does not know whether the output moved a meaningful metric. That evaluation layer requires human judgment and time — traffic data, conversion data, client feedback. The system optimizes execution quality, not business outcomes. The gap between those two things is where human expertise remains irreplaceable.

    FAQ

    How many skills has the system created autonomously?
    Approximately 30 skills were created entirely by the skill-creator agent. Another 50 were human-created but significantly refined by the agent through the optimization loop.

    Can the system create skills for any domain?
    It can create skills for any domain where it has observed successful sessions. The more sessions it observes in a domain, the better the skills it produces.

    What prevents the system from creating bad skills?
    The simulated execution loop catches most quality issues. Skills that fail simulation are flagged for human review rather than deployed to production.

  • The Contact Profile Database: Building Per-Person AI Memory for Every Relationship in Your Network

    The CRM Is Dead. Long Live the Contact Profile.

    Traditional CRMs store records. Name, email, company, last activity date, deal stage. They are databases optimized for pipeline management, not relationship management. They tell you where someone is in your funnel. They tell you nothing about who they actually are.

    I built something different. A contact profile database that stores what matters: what we talked about, what they care about, what their business needs, what introductions would help them, what their communication preferences are, and what our shared history looks like across every touchpoint — email, phone, in-person, social media, and collaborative work.

    The database is powered by AI agents that automatically extract and update profile data from every interaction. When I send an email, the agent parses it for relevant updates. When I finish a call, I dictate a brief note and the agent incorporates it into the contact’s profile. When a social media post mentions a contact’s company, the agent flags it for context.

    The Architecture of a Contact Profile

    Each contact profile lives in Notion as a database entry with structured properties and a rich-text body. The structured properties capture the basics: name, company, role, entity tags that link them to specific businesses in my portfolio, relationship strength score, and last interaction date.

    The rich-text body is where the real value lives. It contains a chronological interaction log, a preferences section, a needs assessment, and a relationship context section. The interaction log captures every meaningful touchpoint with a date and a one-sentence summary. The preferences section tracks communication style, meeting preferences, topics they enjoy, and topics to avoid.

    The needs assessment is updated quarterly. It captures what the contact’s business needs right now, what challenges they are facing, and what opportunities I can see that they might not. This is the section I review before every call and every meeting. It turns every interaction into a continuation of a long-running conversation, not a cold restart.

    How AI Keeps Profiles Current

    Manual CRM updates are the reason most CRMs die within six months of implementation. Nobody wants to spend fifteen minutes after every call logging data into a form. The profile database eliminates manual updates entirely.

    The email agent scans incoming and outgoing email for contact mentions. When it detects a substantive interaction — not a newsletter, not a receipt, but a real conversation — it extracts the key points and appends them to the contact’s interaction log. The agent knows the difference between a transactional email and a relationship email because it has been trained on my communication patterns.

    After phone calls, I dictate a voice note that gets transcribed and processed. The agent extracts action items, updates the needs assessment if something changed, and flags any follow-up commitments I made. This takes me about 90 seconds per call — compared to the five to ten minutes that manual CRM entry would require.

    The Relationship Strength Score

    Each contact has a relationship strength score from one to ten. The score is calculated algorithmically based on interaction frequency, interaction depth, reciprocity, and recency. A contact I speak with weekly about substantive topics scores higher than a contact I exchange LinkedIn messages with monthly.

    The score decays over time. If I have not interacted with someone in 60 days, their score drops. This decay is intentional — it surfaces relationships that need attention before they go cold. Every Monday, the weekly briefing includes a list of high-value contacts whose scores have dropped below a threshold. These are my reach-out priorities for the week.

    The score also factors in reciprocity. A relationship where I am always initiating and never receiving is scored differently from one where both parties actively contribute. This helps me identify relationships that are genuinely mutual versus ones that are one-directional.
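A toy version of the score shows how frequency, reciprocity, and decay combine. The coefficients and caps here are assumptions; the production weighting is not published:

```python
def strength_score(interactions_90d: int, initiated_by_them: int,
                   days_since_last: int) -> float:
    """Relationship strength on a 1-10 scale (illustrative weights)."""
    base = min(interactions_90d * 0.5, 6.0)          # frequency, capped
    reciprocity = min(initiated_by_them * 0.5, 2.0)  # mutual beats one-way
    decay = max(0.0, (days_since_last - 60) * 0.05)  # kicks in after 60 quiet days
    return round(max(1.0, min(10.0, 1.0 + base + reciprocity - decay)), 1)
```

A weekly, mutual relationship scores near the top; a neglected one drifts toward the floor, which is exactly what surfaces it in the Monday briefing.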

    Privacy and Ethics

    This system stores personal information about real people. The ethical guardrails are non-negotiable. First, the database is private. No one accesses it except me and my AI agents. It is not shared with clients, partners, or team members. Second, the information stored is limited to professional context. I do not track personal details that are irrelevant to the business relationship. Third, any contact can request to see what I have stored about them, and I will show them. Transparency is the foundation of trust.

The AI agents are instructed to never use profile data in ways that would feel manipulative or like surveillance. The purpose is to serve people better, not to gain advantage over them. When I remember that someone mentioned their daughter’s soccer tournament three months ago and ask how it went, that is not manipulation. That is being a good human who pays attention.

    The Compound Value of Institutional Memory

    Six months into using the contact profile database, I can trace direct revenue to relationship insights that would have been lost without it. A contact mentioned a business challenge in passing during a call in October. The agent logged it. In January, I saw an opportunity that directly addressed that challenge. I made the introduction. It became a six-figure engagement.

    Without the profile database, that October mention would have been forgotten. The January opportunity would have passed without connection. The engagement would never have happened. This is the compound value of institutional memory: every interaction becomes an asset that appreciates over time.

    The system is still early. I am building integrations with calendar data, social media monitoring, and public company news feeds. The vision is a contact profile that updates itself continuously from every available signal, so that every time I interact with someone, I have the full picture of who they are, what they need, and how I can help.

    FAQ

    How many contacts are in the database?
    Currently around 400 active profiles. Not everyone I have ever met — only people with meaningful professional relationships that I want to maintain and deepen.

    How do you handle contacts who work across multiple businesses?
    Entity tags allow a single contact to be linked to multiple business entities. Their profile shows the full relationship context across all touchpoints.

    What tool do you use for the database?
    Notion, with AI agents that read and write to it via the Notion API. The same architecture that powers the rest of the command center operating system.

  • We Built 7 AI Agents on a Laptop for $0/Month. Here’s What They Do.

    Every AI tool your agency pays for monthly — content generation, SEO monitoring, email triage, competitive intelligence — can run on a laptop that’s already sitting on your desk. We proved it by building seven autonomous agents in two sessions.

    The Stack

    The entire operation runs on Ollama (open-source LLM runtime), PowerShell scripts, and Windows Scheduled Tasks. The language model is llama3.2:3b — small enough to run on consumer hardware, capable enough to generate professional content and analyze data. The embedding model is nomic-embed-text, producing 768-dimension vectors for semantic search across our entire file library.

    Total monthly cost: zero dollars. No API keys. No rate limits. No data leaving the machine.

    The Seven Agents

    SM-01: Site Monitor. Runs hourly. Checks all 23 managed WordPress sites for uptime, response time, and HTTP status codes. Windows notification within seconds of any site going down. This alone replaces a paid monthly monitoring service.

    NB-02: Nightly Brief Generator. Runs at 2 AM. Scans activity logs, project files, and recent changes across all directories. Generates a prioritized morning briefing document so the workday starts with clarity instead of chaos.

    AI-03: Auto Indexer. Runs at 3 AM. Scans 468+ local files across 11 directories, generates vector embeddings for each, and updates a searchable semantic index. This is the foundation for a local RAG system — ask a question, get answers from your own documents without uploading anything to the cloud.

    MP-04: Meeting Processor. Runs at 6 AM. Finds meeting notes from the previous day, extracts action items, decisions, and follow-ups, and saves them as structured outputs. No more forgetting what was agreed upon.

    ED-05: Email Digest. Runs at 6:30 AM. Pre-processes email from Outlook and local exports into a prioritized digest with AI-generated summaries. The important stuff floats to the top before you open your inbox.

    SD-06: SEO Drift Detector. Runs at 7 AM. Compares today’s title tags, meta descriptions, H1s, canonical URLs, and HTTP status codes across all 23 sites against yesterday’s baseline. If anything changed without authorization, you know immediately.

    NR-07: News Reporter. Runs at 5 AM. Scans Google News for 7 industry verticals, deduplicates stories, and generates publishable news beat articles. This agent turns your blog into a news desk that never sleeps.
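The semantic index behind AI-03 reduces to cosine similarity over embedding vectors. In production those vectors come from nomic-embed-text via Ollama; tiny hand-made vectors stand in here so the sketch is self-contained:

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(index: dict, query_vec, top_k: int = 3) -> list:
    """Rank indexed files by embedding similarity to the query."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(kv[1], query_vec), reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

Ask a question, embed it, rank your 468+ local files against it, and feed the top hits back to the model: that is the whole local RAG loop, with nothing leaving the machine.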

    Why This Matters for Agencies

    Most agencies spend thousands per month on SaaS tools that do individually what these seven agents do collectively. The difference isn’t just cost — it’s control. Your data never leaves your machine. You can modify any agent’s behavior by editing a script. There’s no vendor lock-in, no subscription creep, no feature deprecation.

    We’ve open-sourced the architecture in our technical walkthrough and told the story with slightly more flair in our Star Wars-themed version. The live command center dashboard shows real-time fleet status.

    The future of agency operations isn’t more SaaS subscriptions. It’s local intelligence that runs autonomously, costs nothing, and answers only to you.