A flagship essay on AI hygiene: what to store, what to keep out, and how to have the conversation about it with the people in your life.
“What do you know about my girlfriend?”
Last night my partner Stef asked me a question she had a right to ask. She wanted to know what my AI knew about her.
I use Claude for hours a day. I run an agency on top of it. I have knowledge bases, project contexts, client stacks, and conversation histories going back years. She had watched me work with it enough to assume that by now, surely, the AI had a rich picture of her — her sense of humor, her work, the shape of our relationship, the running jokes, the small details a partner remembers. She handed me my phone as a test. Let it tell me what it knows.
The answer was almost nothing.
My name for her. That she lives here. A few passing references to a Notion chat room she once set up, a voice memo she sent me that we extracted some thinking from. No sense of who she is as a person. No running joke the model could finish. No model of her at all, really.
She was hurt in a flash, the way you get hurt by something that isn’t an injury but is still information. I was quietly proud, in a way I didn’t know how to explain in the moment. Both reactions were correct. That’s the thing I want to write about here — that the gap between her hurt and my pride is the shape of a whole category of questions almost nobody is asking out loud yet, and it is only going to get bigger.
We talked about it for a while. I tried to explain why the tool was empty of her on purpose. She let me try. And what came out of the conversation was the argument I’m about to make, which I’ll state up front so you can decide whether to keep reading:
Keeping the people you love out of your AI is not forgetting them. It’s a specific kind of care. And the conversation you have about why they’re not in there is how you close the gap between what the tool knows and what the relationship deserves.
If that argument lands at all, the rest of this is the why, the how, and the honest version of what I’m still getting wrong.
AI Memory Is Nuclear Power
Here’s the frame that has organized my thinking on this for the last year.
AI memory is nuclear power. Real civilization-scale utility on one side, real civilization-scale danger on the other, and almost nobody I’ve met is running a containment protocol worthy of the payload they’re storing.
The analogy holds all the way down. The fuel is useful because it’s concentrated — that’s the whole point of a persistent memory that remembers your business, your family, your finances, your health, your history. Concentration is what makes the tool powerful. Concentration is also exactly what makes a spill catastrophic. And the people celebrating the new reactor are almost never the people thinking about the waste.
The honest position on this, I’ve come to believe, is neither abstinence nor maximalism. It’s containment engineering. You build the reactor and the shielding. You use the tool and you design the protocol for when the tool fails. Pro-AI and pro-guardrail are the same position. Anyone telling you to choose one is selling you something.
What makes this hard is that the stakes are asymmetric in a way most people never sit with directly. For the platform, your memory is one row in a table of billions — a single unit of risk distributed across a huge population. For you, your memory is a map of your life. The platform’s worst-case scenario is a rough quarter, a settlement, a bad headline. Your worst-case scenario is a destroyed marriage, a leaked client list, a legal catastrophe, a career-ending screenshot. These are not remotely comparable events, and they don’t scale the same way, and they do not reach any kind of equilibrium where the platform’s good-faith security policy protects the individual worst case. The platform is optimizing for its risk profile. Its risk profile is not yours. You are the only person whose worst-case scenario is your worst-case scenario.
That asymmetry is why individual hygiene matters even when platform security is genuinely excellent. It’s why I don’t think this conversation is paranoid and I don’t think it’s solved and I don’t think you can outsource it.
Three Failure Modes. Which One Are You?
Most people running AI at any real depth fall into one of three failure modes, and most of them don’t know which one they’re in. As I describe each one, place yourself.
The over-loader. This is the person who treats the AI as a second brain and dumps everything into it — credentials, relationships, grievances, client details, medical history, the long rambling voice-memo of what happened at Thanksgiving. It feels like investment. It feels like the tool getting smarter about them. It mostly is. But it also means one breach, one nosy partner, one subpoena, one bad exit from the platform turns the tool into a weapon pointed directly at the user. The over-loader’s failure mode is invisible until it isn’t.
The under-loader. This is the person who keeps the tool so sterile it never reaches its potential — which is fine as far as it goes, except the humans in their life often discover, usually by accident, that they aren’t in the context at all. That discovery doesn’t land as safety. It lands as erasure. The under-loader’s failure mode is relational, not technical. The tool stays clean, and the relationships pay the cost the tool should have paid.
The unaware. This is, honestly, most people. No mental model of what’s stored, where, for how long, or under whose policy. They’re making operational decisions — business decisions, relationship decisions, identity decisions — on top of a foundation they have never inspected. They don’t know their AI has memory in six places, not one. They don’t know where the off switch is. They assume chat history is the whole story when chat history is maybe 20 percent of it.
The first hygiene move is always the same: figure out which mode you default to. Over-loaders need to prune. Under-loaders need to have a conversation with the humans they’ve been quietly protecting without telling them. The unaware need to spend thirty minutes mapping what they’ve actually agreed to.
I’ve been all three at different points. Most operators I respect have been too. The point of the diagnostic isn’t to shame. It’s to make the failure mode visible enough that you can actually work on it.
Clean Tool vs. Second Brain: The Choice You Might Not Know You’re Making
There are two coherent philosophies for how to use AI at depth, and they are genuinely in tension.
The Clean Tool approach says: the AI is an instrument. You keep it sharp by keeping it empty of identity. You bring the context you need into each session, do the work, and let the session close without leaving a permanent residue of who you were that day. The AI is like a great chef’s knife — it serves you best when it is exactly what it is, not a repository of everything you’ve ever cut with it.
The Second Brain approach says: the AI is an extension of cognition. The more of you it holds, the more it can do for you. The payoff scales with the investment. Loading your thinking, your projects, your relationships, your patterns into the model is not a liability — it’s the whole point. You are building a partner that knows you well enough to anticipate you. The AI is like a lifelong collaborator who has read every note you ever took.
Both are legitimate. Both have failure modes. The failure mode of the Clean Tool is that you never reach the depth of partnership that made you interested in AI in the first place — you end up with a very sharp instrument and no deep relationship with the work it enables. The failure mode of the Second Brain is that you build something you cannot leave, cannot audit, and cannot defend if it ever gets read by the wrong person.
I run Clean Tool. I should say that plainly. I do not believe it is the only right answer. I believe it is the right answer for how I work, what I work on, and who the people around me are. My work touches client data, confidential business strategy, and a personal life I want to keep intact. The cost of a Second Brain leak, for me, is catastrophic in a way I cannot price. The cost of the Clean Tool is friction — I reload context more often, I carry more of my own thinking in my own head, I refuse some of the tool’s offers of recall. That friction is the price of sleeping well.
I know thoughtful people who run Second Brain and run it well. They’ve built containment around it. They accept different tradeoffs. The worst place to be is the one most users actually occupy, which is a confused middle — enough invested that the data layer has weight, not enough discipline that the containment is real. You get the downsides of both and the upsides of neither.
So if you take one frame from this piece: the choice isn’t which philosophy is correct. The choice is which one you are running, consciously, with the guardrails appropriate to that choice. Drifting into either by accident is what produces the failure modes nobody wants.
The People Not in the Memory
I want to go back to Stef, because this is the part of the piece that matters most to me and I’m not sure I’d trust anyone else to write it the way I need to write it.
When Stef was hurt that the AI didn’t know her, I understood what she was feeling. The intuition beneath the hurt is simple and very human: you spend hours every day with this thing. It’s your work, your thinking, your hours. If you cared about me the way you care about the work, surely some of that care would show up in the tool. That intuition is not wrong in its values. It’s wrong in its mechanics.
AI proximity is not relational proximity. Time-on-tool is the worst possible proxy for trust. A person can spend ten hours a day with an AI and share less of themselves with it than they share in a two-minute phone call with their sister. The tool is near you. It is not close to you. Near and close are not synonyms and never have been, and confusing them is producing a whole new species of interpersonal hurt that our language doesn’t have good words for yet.
Here is what I believe about the people in my life and my AI’s memory. Stef is not in the tool because she does not need to be in the tool for the tool to do its job. She matters because she is a person, not because the system has modeled her. Putting her in the context would not deepen my relationship with her. It would reduce her to a row in a store I don’t fully control, governed by a policy I did not write, subject to a retention schedule I did not negotiate, accessible to whoever eventually gets to see my session — a partner who leaves, a discovery motion, a breach, a curious kid, a future version of the platform with different terms. None of those futures are certain. All of them are possible. The cost of her being in there, in any of those futures, is hers to pay, not mine.
And I love her. So she is not in there. That is the mechanism.
The thing I couldn’t explain to her in the moment, but want to say here, is that the emptiness isn’t neglect. It’s restraint. It’s the same impulse that makes me not tell certain stories at parties even when they’d get a laugh, because they are hers to tell. It’s the same impulse that makes me lock my phone when I step away, even though the odds that anything bad happens in the next ninety seconds are vanishingly small. It’s the practice of treating the people you love as if their information is theirs, which is the simplest expression of respect I know.
The conversation we had after her hurt was the actual repair. I told her why the tool was empty of her. I told her what was in the tool and what wasn’t. I offered to show her my memory settings, my projects, my contexts — not as a defensive move, but as a matter of domestic transparency. She didn’t take me up on it. The offer was enough. What closed the gap wasn’t the tool changing. It was me being able to say, out loud, you are not in there because I love you, and here is what I mean by that.
If you use AI at the depth I do and you have people in your life, I think you owe them some version of that conversation. It is not a hard conversation. It is mostly just a clarifying one. But it has to actually happen. The gap between what your tool contains and what your relationship deserves does not close on its own.
The Containment You Can Install Tonight
After five sections of framing, you deserve something to do. Here are five moves. None takes more than fifteen minutes. All five together take about an hour. If this is the only section of the piece you act on, you will be meaningfully safer tonight than you were this morning.
Read your memory. Open whatever interface your AI gives you for stored memories — Claude’s memory settings, ChatGPT’s memory panel, whichever surface your platform exposes. Read every entry top to bottom. For each one, ask three questions: is this still true, is this still relevant, would I be comfortable if this leaked tomorrow? Anything that fails any of the three gets deleted or rewritten. Most people have never read their own AI memory end to end. Doing it once is often the moment the rest of this starts to feel real.
Map the six surfaces. The chat history is not the whole memory. The whole memory is scattered across at least six surfaces: conversation history, persistent memory features, project knowledge bases, custom instructions, system prompts, and connected integrations (Drive, email, Notion, Slack). Each has its own retention policy and its own deletion path. No single UI shows you the total picture. Sit down once and write out, for your specific AI stack, where all six surfaces live, how long each one keeps what it holds, and how you would delete from it. This is a twenty-minute exercise that will clarify more than any article could.
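If a blank page makes the exercise harder, here is a minimal sketch of a template for it, in Python only because that is what I reach for. The surface names come from the list above; every other value is a placeholder you replace with whatever is true for your own stack.

```python
# A hypothetical template for the surface-mapping exercise. Every value here
# is a placeholder; the point is to force yourself to fill in each field.

memory_surfaces = {
    "conversation_history": {"lives_in": "chat history UI",         "retention": "?", "how_to_delete": "?"},
    "persistent_memory":    {"lives_in": "memory settings panel",   "retention": "?", "how_to_delete": "?"},
    "project_knowledge":    {"lives_in": "project knowledge files", "retention": "?", "how_to_delete": "?"},
    "custom_instructions":  {"lives_in": "profile / preferences",   "retention": "?", "how_to_delete": "?"},
    "system_prompts":       {"lives_in": "my own API code",         "retention": "?", "how_to_delete": "?"},
    "integrations":         {"lives_in": "Drive / Notion / Slack",  "retention": "?", "how_to_delete": "?"},
}

# The exercise is done when no field still says "?".
for surface, details in memory_surfaces.items():
    unknowns = [field for field, value in details.items() if value == "?"]
    if unknowns:
        print(f"{surface}: still unknown -> {', '.join(unknowns)}")
```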
Scope your projects. Stop running one giant context that holds everything. Split into scoped projects — one for client work, one for personal writing, one for household, one for finance if you use it that way. Each project holds only the context it needs. The blast radius of any single compromise stays inside that one project. This is the same least-privilege principle engineers use for software access, applied to context.
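For the engineers reading, a minimal sketch of what that scoping looks like on paper, with hypothetical project names and files. The only point it makes is that nothing appears in more than one scope.

```python
# A hypothetical illustration of least-privilege context scoping. Each project
# carries only what it needs; no single scope holds clients, family, and
# finance at once, so the blast radius of any one compromise stays small.

projects = {
    "client_acme":   ["acme_brief.md", "acme_site_map.md"],
    "personal_blog": ["essay_drafts.md", "style_notes.md"],
    "household":     ["grocery_preferences.md", "travel_dates.md"],
    "finance":       ["budget_categories.md"],
}

# A quick check that no piece of context leaked across scopes.
seen = {}
for project, files in projects.items():
    for f in files:
        assert f not in seen, f"{f} appears in both {seen[f]} and {project}"
        seen[f] = project
```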
Lock the handoff. The threat model that matters for most individual users is not a sophisticated hacker. It’s the moment someone else touches your unlocked device — a partner borrowing the phone, a kid looking for the calculator, a colleague glancing at your screen, a support agent on a screenshare. Install a short, specific protocol: screen lock by default, session close on context switch, and a named practice for what happens when someone else uses your device. The worst leaks come from the most ordinary moments. Plan for those, not for the movie villain.
Rotate what the AI has seen. Every credential that has ever appeared in an AI context — API key, password, token, connection string — goes on a rotation schedule the moment it enters. A ninety-day calendar reminder at minimum. Ideally, credentials never enter the AI directly at all; they live in a secrets manager and the AI calls through a proxy that holds the secret. Moving from the first version to the second is one afternoon of plumbing, and it is the single highest-leverage hygiene move an operator can make.
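To make the second version concrete, here is a minimal sketch of the proxy pattern, assuming a hypothetical WordPress drafting tool. The function name, endpoint, and auth scheme are illustrative; the load-bearing detail is that the token is resolved inside the proxy and never appears in anything the AI stores.

```python
import os
import requests

# Minimal sketch of the proxy pattern. The model is offered a tool called
# create_wordpress_draft; the credential lives in the environment (or a real
# secrets manager) and is read only here, inside the proxy, so it never
# enters an AI context that could be remembered, logged, or leaked.

def create_wordpress_draft(title: str, body: str) -> dict:
    token = os.environ["WP_API_TOKEN"]  # resolved at call time, never shown to the model
    resp = requests.post(
        "https://example.com/wp-json/wp/v2/posts",
        headers={"Authorization": f"Bearer {token}"},
        json={"title": title, "content": body, "status": "draft"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```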
These are not the whole practice. They are the starter kit. The practice compounds from here.
The Harder Layer: What I’m Still Getting Wrong
I want to write this section honestly because the alternative is writing it dishonestly, and there is no version of this piece that earns its argument if I pretend Tygart Media has this figured out.
So. Here are some real mistakes.
Earlier this month, the AI stack I use to automate WordPress work made an edit to a client site page without the kind of per-page human confirmation the situation deserved. The edit broke three live pages. The client was patient about it. The rollback worked. No business was lost. But the near-miss had the exact shape of the failure mode this whole piece warns about — capability ran ahead of containment, and a system I trusted made a change faster than my judgment could intervene. The lesson was immediate and I installed the guardrail that afternoon: any live-system action on a high-risk surface now requires explicit per-action confirmation. Read-only actions can run free. Destructive or irreversible actions cannot. The rule sounds obvious stated plainly. It was not in place before the near-miss, and that is on me.
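For what it’s worth, the guardrail itself is small. A minimal sketch, with hypothetical action names: read-only actions pass through, destructive actions stop and wait for an explicit yes, and anything unclassified refuses to run.

```python
# Minimal sketch of a per-action confirmation gate. Read-only actions run
# freely; destructive or irreversible actions wait for an explicit "y";
# anything not yet classified defaults to the cautious path and refuses.

READ_ONLY = {"get_page", "list_pages", "read_settings"}
DESTRUCTIVE = {"update_page", "delete_page", "change_settings"}

def run_action(name, execute, *args, **kwargs):
    if name in READ_ONLY:
        return execute(*args, **kwargs)
    if name in DESTRUCTIVE:
        answer = input(f"Confirm live change '{name}' {args or ''}? [y/N] ")
        if answer.strip().lower() != "y":
            raise RuntimeError(f"'{name}' was not confirmed; nothing was changed.")
        return execute(*args, **kwargs)
    raise RuntimeError(f"'{name}' is not classified as read-only or destructive; refusing to run it.")
```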
I have also, at various points, let credentials linger in AI contexts longer than I should have. Not dramatically. Not catastrophically. But in the honest audit I did after the incident above, there were tokens in project files older than the rotation schedule I would tell a client to use. I rotated them. I built the proxy pattern I should have built a year ago. I am closer to clean than I was, and I am not fully there yet.
There is a reason most operators don’t write sections like this one. The near-miss is pedagogically priceless and professionally embarrassing at the same time. The embarrassment is why the field learns slowly. The honesty, when someone offers it, is the most valuable content in the space — and it is almost never offered, because the incentive structure rewards the polished version over the useful one.
I am publishing this section anyway because I think the embarrassment is a smaller cost than the slow-learning tax the whole field pays when operators hide their misses. And because an article about hygiene that pretends its author doesn’t sweat is not an article I’d trust from anyone else. If you run AI at operator depth long enough, you will produce near-misses. Whether you learn publicly or privately is the only variable. I’d rather learn where it helps someone else avoid the same move.
The 2030 View
If everything in this piece feels a little optional in 2026, project the variables forward and see if the math still works.
Memory depth is going up, not down — meaningfully, as context windows expand and persistence shifts from opt-in to default. Cross-app memory is already arriving; by 2030 your AI will know what’s in your email and your calendar and your files and your shopping history and your health app, not as separate silos but as a fused picture. Agent autonomy is arriving faster than most people realize — the AI is moving from a thing you consult to a thing that acts on your behalf, which means the containment question shifts from “what does it know” to “what can it do.” Shared household AI layers are arriving, with multiple family members on the same account already common enough that the consent problem stops being individual and becomes governance. And the legal system will catch up to all of this, unevenly, painfully, and in ways you will not want to be the test case for.
Every problem in this article compounds under those conditions. The over-loader’s blast radius grows. The under-loader’s relational gap widens. The unaware’s foundation gets shakier. The moves that take an hour now will take a day then. The containment practices that feel precious today will feel obvious in five years, the way locking your front door and not leaving your wallet in the car feel obvious now.
There will be a public catastrophe. I don’t know whose. I don’t know whether it will be a major breach, a lurid divorce, a criminal discovery, or a platform failure that rewrites retention terms mid-flight. I know it will happen and I know it will reorganize how the rest of us think about this overnight. The people who built the practice before that moment will look prescient. They won’t have been prescient. They’ll have been paying attention.
I would rather pay attention now, while the stakes are small and the mistakes are cheap, than learn after the public catastrophe when the mistakes are not.
The Close
Everything in this piece argues for one small idea.
The tool is a tool. The person is a person. The hygiene is what keeps those two categories from collapsing into each other.
When the tool becomes a stand-in for cognition, memory, identity, or intimacy, it has exceeded what it was ever built to do, and the human pays the cost. When the person becomes a user-of-tools who still owns their own thinking, relationships, and responsibility, the tool does what tools are supposed to do — extend capacity without replacing character.
Every practical move in this article is a local case of that single principle. Every hygiene conversation in your life is an application of it. Every guardrail you install is the same principle, written down.
And the practice compounds or decays. Six months of deliberate attention makes the moves automatic. Six months of neglect means the muscle memory isn’t there when you need it. This is not a project you complete. It is a standing practice you keep, like locking the door, like reviewing your accounts, like calling the people you love.
Do one thing tonight. Read your memory. Map your surfaces. Call the person in your life your AI doesn’t know about and tell them why you kept it that way. Any of those. Whichever one feels least comfortable is probably the right one to do first.
The tool is a tool. The person is a person. The hygiene is what keeps them from becoming each other.
Start there.
