What if you could build a complete music album — concept, lyrics, artwork, production notes, and a full listening experience — without a recording studio, without a label, and without months of planning? That’s exactly what we did with Red Dirt Sakura, an 8-track country-soul album written and produced by a fictional Japanese-American artist named Yuki Hayashi. Here’s how we built it, what broke, what we fixed, and why this system is repeatable.
What Is Red Dirt Sakura?
Red Dirt Sakura is a concept album exploring what happens when Japanese-American identity collides with American country music. Each of the 8 tracks blends traditional Japanese melodic structure with outlaw country instrumentation — steel guitar, banjo, fiddle — sung in both English and Japanese. The album lives entirely on tygartmedia.com, built and published using a three-model AI pipeline.
The Three-Model Pipeline: How It Works
Every track on the album was processed through a sequential three-model workflow. No single model did everything — each one handled what it does best.
Model 1 — Gemini 2.0 Flash (Audio Analysis): Each MP3 was uploaded directly to Gemini for deep audio analysis. Gemini doesn’t just transcribe — it reads the emotional arc of the music, identifies instrumentation, characterizes the tempo shifts, and analyzes how the sonic elements interact. For a track like “The Road Home / 家路,” Gemini identified the specific interplay between the steel guitar’s melancholy sweep and the banjo’s hopeful pulse — details a human reviewer might take hours to articulate.
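The analysis step can be sketched as a structured prompt plus a strict parser. This is a minimal sketch: the field names and the sample reply are illustrative assumptions (the exact schema isn't published here), and the actual Gemini upload-and-generate call is elided.

```python
import json

# Field names below are illustrative; the pipeline's exact schema isn't published.
ANALYSIS_PROMPT = (
    "Analyze this track's audio. Return JSON with keys: emotional_arc, "
    "instrumentation (list of strings), tempo_shifts, interplay_notes."
)

def parse_analysis(raw: str) -> dict:
    """Validate the model's JSON reply before it feeds the Imagen prompt
    and the listening-page copy downstream."""
    data = json.loads(raw)
    required = {"emotional_arc", "instrumentation", "tempo_shifts", "interplay_notes"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"analysis reply missing fields: {sorted(missing)}")
    return data

# A reply like the one described for "The Road Home / 家路" might look like:
sample_reply = json.dumps({
    "emotional_arc": "melancholy verses lifting into a hopeful chorus",
    "instrumentation": ["steel guitar", "banjo", "fiddle"],
    "tempo_shifts": "slow verse, half-time lift into the chorus",
    "interplay_notes": "steel guitar's melancholy sweep against the banjo's hopeful pulse",
})
analysis = parse_analysis(sample_reply)
```

Validating the reply at this boundary matters because two more models consume it downstream; a missing field fails loudly here instead of producing a blank artwork prompt later.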
Model 2 — Imagen 4 (Artwork Generation): Gemini’s analysis fed directly into Imagen 4 prompts. The artwork for each track was generated from scratch — no stock photos, no licensed images. The key was specificity: “worn cowboy boots beside a shamisen resting on a Japanese farmhouse porch at golden hour, warm amber light, dust motes in the air” produces something entirely different from “country music with Japanese influence.” We learned this the hard way — more on that below.
Model 3 — Claude (Assembly, Optimization, and Publish): Claude took the Gemini analysis, the Imagen artwork, the lyrics, and the production notes, then assembled and published each listening page via the WordPress REST API. This included the HTML layout, CSS template system, SEO optimization, schema markup, and internal link structure.
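The publish step maps onto the standard WordPress REST API pages endpoint. A minimal sketch, assuming Application Password authentication and a hypothetical station-hub parent page ID; the actual HTTP call is left as a comment.

```python
import base64
import json

SITE = "https://tygartmedia.com"  # assumed site root

def build_listening_page(title, slug, parent_id, html_body, basic_auth):
    """Build the POST /wp-json/wp/v2/pages request for one listening page,
    nested under the station hub via `parent` for clean URL structure."""
    payload = {
        "title": title,
        "slug": slug,
        "status": "publish",
        "parent": parent_id,   # station hub page ID (illustrative)
        "content": html_body,  # assembled lr- template markup
    }
    headers = {
        "Content-Type": "application/json",
        # WordPress Application Passwords use HTTP Basic auth.
        "Authorization": "Basic " + base64.b64encode(basic_auth.encode()).decode(),
    }
    return f"{SITE}/wp-json/wp/v2/pages", headers, json.dumps(payload)

url, headers, body = build_listening_page(
    "The Road Home / 家路", "the-road-home", 42,
    '<div class="lr-wrap">...</div>', "user:app-password",
)
# requests.post(url, headers=headers, data=body) would publish the page.
```

Setting `parent` on the payload is what produces the parent-child hierarchy described below: WordPress nests the page's URL under the hub automatically.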
What We Built: The Full Album Architecture
The album isn’t just 8 MP3 files sitting in a folder. Every track has its own listening page with a full visual identity — hero artwork, a narrative about the song’s meaning, the lyrics in both English and Japanese, production notes, and navigation linking every page to the full station hub. The architecture looks like this:
8 Listening Pages — one per track, each with unique artwork and full song narrative
Consistent CSS Template — the lr- class system applied uniformly across all pages
Parent-Child Hierarchy — all pages properly nested in WordPress for clean URL structure
The QA Lessons: What Broke and What We Fixed
Building a content system at this scale surfaces edge cases that only exist at scale. Here are the failures we hit and how we solved them.
Imagen Model String Deprecation
The Imagen 4 model string documented in various API references — imagen-4.0-generate-preview-06-06 — returns a 404. The working model string is imagen-4.0-generate-001. This is not documented prominently anywhere; we hit it on the first artwork generation attempt and traced it through the API error response. For future builds: use imagen-4.0-generate-001 for Imagen 4 via Vertex AI.
Prompt Specificity and Baked-In Text Artifacts
Generic Imagen prompts that describe mood or theme rather than concrete visual scenes sometimes produce images with Stable Diffusion-style watermarks or text artifacts baked directly into the pixel data. The fix is scene-level specificity: describe exactly what objects are in frame, where the light is coming from, what surfaces look like, and what the emotional weight of the composition should be — without using any words that could be interpreted as text to render. The addWatermark: false parameter in the API payload is also required.
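Both fixes land in the request itself. A minimal sketch of the Vertex AI predict call, using the working model string and the addWatermark flag; project and region are placeholders, and the request would be sent with an OAuth bearer token.

```python
import json

# Working model string — the preview string returns 404:
IMAGEN_MODEL = "imagen-4.0-generate-001"

def build_imagen_request(project: str, region: str, prompt: str):
    """Build the Vertex AI :predict call for one piece of track artwork.
    The URL follows the standard Vertex AI prediction endpoint shape."""
    url = (
        f"https://{region}-aiplatform.googleapis.com/v1/projects/{project}"
        f"/locations/{region}/publishers/google/models/{IMAGEN_MODEL}:predict"
    )
    payload = {
        "instances": [{"prompt": prompt}],
        "parameters": {
            "sampleCount": 1,
            "aspectRatio": "16:9",
            "addWatermark": False,  # required to avoid baked-in watermark artifacts
        },
    }
    return url, json.dumps(payload)

# Scene-level specificity, not mood words:
url, body = build_imagen_request(
    "my-project", "us-central1",
    "worn cowboy boots beside a shamisen resting on a Japanese farmhouse "
    "porch at golden hour, warm amber light, dust motes in the air",
)
# Send with a bearer token, e.g. from `gcloud auth print-access-token`.
```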
WordPress Theme CSS Specificity
Tygart Media’s WordPress theme applies color: rgb(232, 232, 226) — a light off-white — to the .entry-content wrapper, with enough selector specificity to override custom colors on child elements unless the child declaration uses !important. Custom colors like #C8B99A (a warm tan) are darker than the theme default, and on the dark page background the mismatch left text effectively invisible. Every custom inline color declaration in the album pages required !important to render correctly. This is now documented, and the lr- template system includes the override by default.
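In the lr- template the fix looks like this; the .lr-story selector is illustrative, and the theme rule is paraphrased.

```css
/* Theme rule (paraphrased) that wins on descendants of the content wrapper: */
.entry-content {
  color: rgb(232, 232, 226);
}

/* lr- template fix — every custom color carries !important: */
.lr-story {
  color: #C8B99A !important; /* warm tan, now actually rendered */
}
```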
URL Architecture and Broken Nav Links
When a URL structure changes mid-build, every internal nav link needs to be audited. The old station URL (/music/japanese-country-station/) was referenced by Song 7’s navigation links after we renamed the station to Red Dirt Sakura. We created a JavaScript + meta-refresh redirect from the old URL to the new one, and audited all 8 listening pages for broken references. If you’re building a multi-page content system, establish your final URL structure before page 1 goes live.
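The redirect itself is small. A sketch of what would be served at the retired URL; the new path shown here is a guess based on the station name, not the confirmed live URL.

```html
<!-- Served at the retired /music/japanese-country-station/ URL.
     The destination path is illustrative; substitute the live station URL. -->
<meta http-equiv="refresh" content="0; url=/music/red-dirt-sakura/">
<script>
  // Fallback for clients that ignore meta refresh
  window.location.replace("/music/red-dirt-sakura/");
</script>
```

Doubling up meta refresh and JavaScript covers both clients with scripting disabled and clients that ignore the meta tag; a server-side 301 would be cleaner where hosting allows it.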
Template Consistency at Scale
The CSS template system (lr-wrap, lr-hero, lr-story, lr-section-label, etc.) was essential for maintaining visual consistency across 8 pages built across two separate sessions. Without this system, each page would have required individual visual QA. With it, fixing one global issue (like color specificity) required updating the template definition, not 8 individual pages.
The Content Engine: Why This Post Exists
The album itself is the first layer. But a music album with no audience is a tree falling in an empty forest. The content engine built around it is what makes it a business asset.
Every listening page is an SEO-optimized content node targeting specific long-tail queries: Japanese country music, country music with Japanese influence, bilingual Americana, AI-generated music albums. The station hub is the pillar page. This case study is the authority anchor — it explains the system, demonstrates expertise, and creates a link target that the individual listening pages can reference.
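The schema markup mentioned as part of the assembly step could take a shape like this MusicRecording JSON-LD on each listening page; the types and field values here are assumptions, since the exact markup isn't specified.

```json
{
  "@context": "https://schema.org",
  "@type": "MusicRecording",
  "name": "The Road Home / 家路",
  "byArtist": { "@type": "MusicGroup", "name": "Yuki Hayashi" },
  "inAlbum": { "@type": "MusicAlbum", "name": "Red Dirt Sakura" },
  "inLanguage": ["en", "ja"]
}
```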
From this architecture, the next layer is social: one piece of social content per track, each linking to its listening page, with the case study as the ultimate destination for anyone who wants to understand the “how.” Eight tracks means eight distinct social narratives — the loneliness of “Whiskey and Wabi-Sabi,” the homecoming of “The Road Home / 家路,” the defiant energy of “Outlaw Sakura.” Each one is a separate door into the same content house.
What This Proves About AI Content Systems
The Red Dirt Sakura project demonstrates something important: AI models aren’t just content generators — they’re a production pipeline when orchestrated correctly. The value isn’t in any single output. It’s in the system that connects audio analysis, visual generation, content assembly, SEO optimization, and publication into a single repeatable workflow.
The system is already proven. Album 2 could start tomorrow with the same pipeline, the same template system, and the documented fixes already applied. That’s what a content engine actually means: not just content, but a machine that produces it reliably.
Frequently Asked Questions
What AI models were used to build Red Dirt Sakura?
The album was built using three models in sequence: Gemini 2.0 Flash for audio analysis, Google Imagen 4 (via Vertex AI) for artwork generation, and Claude Sonnet for content assembly, SEO optimization, and WordPress publishing via REST API.
How long did it take to build an 8-track AI music album?
The entire album — concept, lyrics, production, artwork, listening pages, and publication — was completed across two working sessions. The pipeline handles each track in sequence, so speed scales with the number of tracks rather than the complexity of any single one.
What is the Imagen 4 model string for Vertex AI?
The working model string for Imagen 4 via Google Vertex AI is imagen-4.0-generate-001. Preview strings listed in older documentation are deprecated and return 404 errors.
Can this AI music pipeline be used for other albums or artists?
Yes. The pipeline is artist-agnostic and genre-agnostic. The CSS template system, WordPress page hierarchy, and three-model workflow can be applied to any music project with minor customization of the visual style and narrative voice.
What is Red Dirt Sakura?
Red Dirt Sakura is a concept album by the fictional Japanese-American artist Yuki Hayashi, blending American outlaw country with traditional Japanese musical elements and sung in both English and Japanese. The album lives on tygartmedia.com and was produced entirely using AI tools.
Where can I listen to the Red Dirt Sakura album?
All 8 tracks are available on the Red Dirt Sakura station hub on tygartmedia.com. Each track has its own dedicated listening page with artwork, lyrics, and production notes.
Ready to Hear It?
The full album is live. Eight tracks, eight stories, two languages. Start with the station hub and follow the trail.
Restoration contractors are paying for Encircle. And PSA. And DASH. And a CRM. And a project management tool. And a call tracking service. And a reputation management platform. And an estimating integration. By the time you add it all up, a mid-size restoration company might be running eight separate software subscriptions, each with its own login, its own invoice, its own support line, and its own way of storing data that doesn’t talk to anything else.
I’ve been watching this stack accumulate for years. And I’ve been thinking about a question I haven’t seen anyone ask out loud:
Who owns the data when the job is done?
The data your business generates is the most valuable thing you produce. The question is who holds the keys.
What Software Companies Are Actually Selling
Encircle is a genuinely good product. So is PSA. So is DASH. I’m not writing this to trash them. They solved real problems — structured photo documentation that insurance carriers accept, drying logs that meet IICRC standards, scope writing that integrates with Xactimate. These things are hard to build from scratch and they matter in a claims-dependent business.
But here’s what all of them are also selling, whether they say it or not: a structured way to store your business’s data. Customer records. Job histories. Equipment logs. Photo sets. Communication trails. Every one of those platforms is capturing the operational intelligence of your company and holding it in their database, in their format, accessible through their interface.
The subscription isn’t just for the software. It’s for continued access to your own data.
That arrangement made sense when there was no alternative. You needed the structure, and the only way to get the structure was to accept the terms. The software vendor provided the architecture. You provided the data. The architecture stayed with them.
That’s the deal. It’s been the deal for twenty years. And it’s changing.
Eight subscriptions. Eight logins. Eight vendors. Nobody owns the whole picture — except the vendors.
What’s Actually Different Now
The thing that changed isn’t AI, exactly. It’s the integration layer.
For most of the software era, building custom business tools required engineering teams, expensive infrastructure, and months of development time. That’s why SaaS won — you couldn’t build it yourself, so you rented it from someone who could. The subscription model was the price of access to capability that was otherwise out of reach.
What’s different now: a single developer — or an operator who knows how to use modern AI tools — can assemble custom business infrastructure in days that would have taken a team months in 2019. A Google Cloud VM costs $60/month. A CRM custom-built on WordPress with webhooks firing into CTM, Slack, and a Firestore job log costs fractions of what PSA charges. An AI intake agent that handles emergency calls, qualifies the job, creates the customer record, and pings the on-call crew — built on Twilio and Claude on Vertex AI — costs less per month than most restoration companies spend on coffee.
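The webhook fan-out described above can be sketched in a few lines. Everything here is hypothetical infrastructure: the destination URLs and event field names are illustrative placeholders, not real vendor endpoints.

```python
import json

# Hypothetical fan-out for one qualified intake event. Destination URLs
# and field names are illustrative placeholders, not real vendor APIs.
DESTINATIONS = {
    "slack": "https://hooks.slack.com/services/T000/B000/XXXX",
    "job_log": "https://example.com/firestore-proxy/jobs",
}

def route_lead(event: dict) -> list:
    """Turn one intake event into the webhook calls to fire: a readable
    ping to the on-call channel, plus the raw record for the job log."""
    summary = f"New {event['job_type']} lead: {event['caller']} via {event['source']}"
    return [
        (DESTINATIONS["slack"], json.dumps({"text": summary})),
        (DESTINATIONS["job_log"], json.dumps(event)),
    ]

calls = route_lead({
    "job_type": "water damage",
    "caller": "+13045550100",
    "source": "google-ads",
})
# Each (url, body) pair would then be POSTed, e.g. with urllib.request.
```

The point is less the code than where it runs: on a VM you control, writing to a database you control, which is the whole ownership argument.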
The capability gap that justified the subscription is closing. Not for every business — not yet — but for businesses that have someone close enough to understand what they need and how to build it. And critically: when you build it, you own it. The data lives on infrastructure you control. It doesn’t leave when you cancel a subscription because there’s no subscription to cancel.
Dozens of disconnected tools, or one integrated system you own. The math is changing.
What Encircle Still Does That Matters
I said I wasn’t writing this to trash these companies and I meant it. So let me be specific about what they do that’s genuinely hard to replicate.
The compliance layer. Insurance carriers have specific documentation requirements. IICRC has drying log standards. Xactimate has a particular way of handling scope line items. Encircle has spent years building integrations with those systems, getting their formats accepted by carriers, making their documentation hold up in adjuster reviews and litigation. That institutional trust is not a feature you can code in a weekend. It’s accumulated credibility that took years to build and is worth real money to contractors whose revenue depends on claims getting approved.
The field mobile experience. Technicians in the field need something fast, offline-capable, and purpose-built for how they actually work — photos, moisture readings, equipment logs, job updates — all from a phone in a flooded basement. Generic platforms aren’t optimized for that workflow. Encircle is.
So no — the Company OS doesn’t make Encircle irrelevant for everything. What it makes irrelevant is the parts of Encircle — and PSA, and DASH, and the CRM, and the project management tool — that are really just coordination and data structure. The scheduling, the customer records, the communication trails, the job status tracking, the lead attribution, the revenue reporting. All of that can live in a system you own, wired together through APIs, with your data staying on your infrastructure.
You keep Encircle for what Encircle is uniquely good at. You stop paying for the eight other subscriptions that are just doing coordination work you could own.
The Model That Makes This Work
The reason most restoration contractors won’t build this themselves isn’t that they can’t afford it. It’s that they don’t have the time or expertise to architect it — and even if they did, they’d have to manage it forever. That’s not a restoration contractor’s job. Their job is running jobs.
The Company OS model I’ve been developing solves this by flipping the arrangement entirely. Instead of the contractor buying software subscriptions and managing a fragmented stack, I build and host the entire infrastructure — VM, CRM, call tracking, AI intake, content engine, ad management — and take a percentage of revenue I can prove I drove through the system. The contractor pays nothing upfront and nothing ongoing for the infrastructure. They pay on verified results.
The difference from the SaaS model: the data architecture belongs to the system I built, which is operated in the contractor’s interest and accessible to them. The attribution data, the customer history, the job records, the communication logs — all of it lives in a structure we both can see, verified by Call Track Metrics, not locked behind a vendor’s dashboard.
That’s not a software product. That’s an infrastructure partnership. And it produces a fundamentally different answer to the question of who owns the data when the job is done.
The data your business generates should be yours — organized, accessible, and not held hostage by a subscription renewal.
The Question Worth Sitting With
I want to be careful here about the scope of what I’m claiming. The vertical software companies — Encircle, Xactimate, PSA — aren’t going away. The contractors who need carrier-compliant documentation and field mobile tools will keep paying for them. The compliance layer is real and the field experience is real and those are genuinely hard problems.
What I think is ending — or at least what I think deserves to end — is the part of the software subscription economy built on the coordination tax. The $200/month CRM that stores your customer records in someone else’s database. The project management tool that knows your job pipeline better than you do. The reporting dashboard that shows you your own business through someone else’s lens. That category of software exists because the integration layer didn’t. Now it does.
So here’s the question I’d ask any restoration contractor right now: for every subscription you’re paying, do you own the data when you stop paying? Do you know exactly where your customer records live, who controls the schema, what happens if the vendor raises prices or shuts down?
Most contractors have never asked this because they’ve never had to. The subscription was the only option.
It isn’t anymore.
The question isn’t whether your software does the job. The question is who owns the data when the job is done.
Nobody sits down and says “I’m going to build an operating system for an entire industry.” That’s not how it starts. It starts with one client who needs a website. Then another who needs their Google Ads cleaned up. Then someone asks if you can help them figure out why their phone isn’t ringing.
You solve problems. You move on to the next one. You don’t zoom out.
I zoomed out recently — for the first time in a long time — and what I saw surprised me. I hadn’t been building a marketing consultancy. I’d been building a vertical operating system for the restoration industry, one problem at a time, without ever calling it that.
Every piece was built to solve a specific problem. Zoom out and it’s one system.
How It Actually Started
The first piece was SEO. A restoration contractor needed to show up when someone searched “water damage restoration” in their city. Straightforward enough. I built the content, optimized the site, tracked the rankings. It worked. They referred someone else. That someone else had a slightly different problem — their ads were running but the calls weren’t converting. So I looked at that.
Call Track Metrics came in because I kept running into the same argument: the client thought the calls were coming from one place, I thought they were coming from another, and neither of us could prove it. CTM solved that. Now every call is tagged to the source — the keyword, the page, the campaign, the full journey. Attribution stopped being a debate and became math.
Then I noticed that the calls were coming in but jobs weren’t closing at the rate they should. That’s not an SEO problem. That’s an operations problem. So I started looking at intake — how calls were answered, how follow-up happened, how estimates were scheduled. An AI intake agent started to make sense. Not because I was trying to build AI products, but because the gap was right there and I could see it.
The Restoration Golf League came from a completely different direction. Restoration contractors need referral relationships with insurance adjusters and property managers. That’s the commercial side of the business. A golf league is one of the best relationship-building structures that exists in professional services — relaxed, repeated contact, shared experience. It wasn’t a marketing idea. It was a relationship infrastructure idea that happened to use golf as the mechanism.
Each tool built for a specific job. The pattern only becomes visible when you step back.
The Inventory I Didn’t Know I Had
When I actually sat down and listed everything that exists right now across the work I’ve been doing, here’s what came out:
A content intelligence platform — a BigQuery knowledge base that logs every session, surfaces patterns, and drives automated publishing.
A lead tracking infrastructure built on Call Track Metrics, wired to every traffic source.
A referral network of restoration contractors meeting through a structured golf league across multiple cities.
A commercial compliance strategy using fire extinguisher inspections as a loss leader to get in the door with property managers.
An AI receptionist product purpose-built for restoration intake — Twilio, Claude on Vertex AI, Cloud Run, Firestore.
A Company OS model — a fully hosted GCP environment where I run a contractor’s entire revenue infrastructure and take a commission on verified results.
A WordPress CRM being built and dogfooded on my own site before being offered to clients.
A knowledge cluster of five interconnected websites building topical authority in the restoration and risk intelligence space.
None of those were planned in sequence. Each one was the answer to a specific question that kept coming up. But together they cover almost every layer of how a restoration business actually operates — lead generation, lead tracking, intake, conversion, referral relationships, commercial acquisition, operations tools, and content authority.
That’s not a service menu. That’s a stack.
Golf, AI, SEO, compliance, CRM — they look unrelated until you see the thread connecting them.
Why Accidental Might Be Better Than Planned
I’ve thought about whether it would have been better to plan this from the start. Design the full system upfront, build it in sequence, launch it as a coherent product.
I don’t think so. And here’s why.
Every piece of this was validated before the next one got built. The CTM infrastructure exists because attribution disputes are real and expensive. The AI intake agent exists because I watched calls get dropped after I’d already driven them. The golf league exists because I saw contractors lose commercial accounts to competitors who had better adjuster relationships, not better work. Each problem was visible because I was close enough to the industry to see it — not designing from a distance.
The version of this that gets designed upfront has a different failure mode: it’s theoretically complete but practically wrong. The problems you think exist from the outside are never quite the same as the ones that actually exist on the inside. Building problem by problem, staying inside the industry, means every piece of the stack is load-bearing because it was built under load.
There’s also something that happens when you’re not trying to build a system. You’re more honest about what’s actually needed. You don’t add things because they complete the picture — you add them because the gap is genuinely painful. The result is a leaner, more accurate stack than anything I could have designed in a planning session.
The Question I’m Sitting With
The thing I keep coming back to: is this replicable in other verticals, or is it only possible because of the depth of time I’ve spent inside restoration specifically?
I genuinely don’t know. The honest answer is probably both. The approach — stay close, solve real problems, let the system emerge — is transferable. But the specific inventory I ended up with is deeply shaped by restoration’s particular quirks: the insurance dependency, the emergency-driven intake, the adjuster relationship dynamics, the commercial vs. residential split, the franchise structures, the IICRC certification culture.
A different vertical would produce a different stack. HVAC has different intake patterns. Personal injury law has a completely different referral economy. Healthcare has different compliance requirements and trust dynamics. The method of paying attention and building toward what you see would be the same. The pieces that emerge would be different.
What I’m more confident about: you can’t fake the depth. The reason the stack works is because I know what it’s like to be a restoration contractor well enough to feel the pain of each layer. That knowledge isn’t transferable quickly. It’s accumulated. Someone who decided tomorrow to “build a vertical OS for HVAC” would be designing from the outside. They’d get some things right and miss the things that matter most, because those only become visible from inside.
Looking back, the pattern is obvious. In the moment, it was just the next problem to solve.
What This Changes
Naming a thing changes how you relate to it. Before this realization, I was a marketing consultant who did a lot of different things for restoration companies. That description is accurate but it undersells the coherence of what’s actually there.
Now I think of it differently: I’m a vertical infrastructure builder who happened to start in restoration and went deep enough that the full stack became visible. The individual services aren’t the product. The system is the product. Any one piece of it — just the SEO, just the CTM setup, just the AI intake — is less valuable than the whole because the whole is integrated in ways that individual pieces can’t be.
That changes what I build next, how I talk about what I do, and who I build it for. It also changes what “being done” means — because a vertical OS is never really done. Industries evolve, problems shift, new gaps appear. The work is staying close enough to keep seeing them.
I didn’t plan any of this. I just kept solving the next problem.