
  • Mobile AI in Notion: The Real Test of Whether Agents Are Ready for Daily Use

    The 60-second version

    The real test of any AI feature is whether it survives the move to mobile. Notion 3.2 made that move in January 2026 — agents on mobile, full Custom Agent support, the same auto-model selection across Claude, GPT, and Gemini. The honest assessment after a few months in the wild: it works, but mobile AI is best for consumption and quick interaction, not heavy production. Voice input for prompts is a desktop-only feature so far. Mobile is where you check on agent runs, approve drafts, and ask quick questions — not where you set up complex skills or build workflows.

    What works well on mobile

    Three patterns that genuinely shine on the phone:
    1. Quick agent queries during in-between moments. Walking between meetings, in line for coffee, on a train. “What’s the status of project X” or “summarize this thread for me.” Phone-sized interaction, phone-friendly output.
    2. Approving and editing agent output. Custom Agent runs overnight, drops a draft in your workspace, you wake up, you read on your phone, you tap-edit a few sentences, you send it. The mobile review pattern is solid.
    3. Quick capture into AI-enriched databases. A voice memo or quick note drops into a Notion database; Autofill fills in summary, tags, owner, date. The phone is the input device; the agent is the cleanup crew.

    What’s painful on mobile

    Equally important to name:
    Building skills. Notion Skills require defining instructions, scope, and triggers. The mobile UI for this is functional but slow. Build skills on desktop; run them everywhere.
    Long-context work. Mobile screens make it hard to verify whether the AI pulled from the right pages. If the task involves cross-referencing or fact-checking a synthesis, do it on desktop.
    Multi-step debugging. When an agent run goes sideways and you need to trace why, mobile makes it hard to inspect the trail. The fix is rarely on mobile.
    Voice input. Currently desktop-only on macOS and Windows. Even on those platforms, voice works only inside AI prompt fields, not for general document dictation. Mobile voice is on the roadmap but unannounced as of April 2026.

    How operators are actually using mobile AI

    Patterns that have settled into real use:
    The morning check-in. Open Notion on mobile first thing. Read the overnight Custom Agent digest. Approve, edit, or escalate. Closes the inbox before the day starts.
    The drive-time capture. Voice memo into a quick capture database during a drive. Agent processes it later. The phone is the input; the desktop is where you act on it.
    The travel survival mode. When your only device is your phone for a few days, Notion AI on mobile is enough to keep workflows running. Not optimal, but operational.

    The honest limitation

    Mobile AI is good. Mobile AI isn’t a desktop replacement.
    If you’re trying to make your phone the primary tool for Notion AI work, you’ll feel friction. The screen is the bottleneck — not the AI capability, not the model selection, not the agent. Reading multi-paragraph synthesis on a 6-inch screen is what creates the strain.
    The right mental model: desktop is where you build, mobile is where you maintain. Skills, complex prompts, agent configurations, Worker setup — desktop. Daily interaction, approvals, quick captures, drive-time inputs — mobile.

    What to expect next

    Voice input on mobile is the obvious next shoe to drop. The desktop version exists; extending it to mobile is engineering, not strategy. Reasonable timeline: by end of 2026.
    Beyond voice, the more interesting mobile question is whether Custom Agent triggers can fire from mobile-specific events — location, motion, calendar proximity. Notion hasn’t announced anything here, but the “agent that wakes up when I land at the airport” workflow is a natural mobile pattern.

    What to read next

    Corpus follow-ups: Auto Model Selection (how mobile picks models), Custom Agents foundation piece (mobile inherits all the same Custom Agent capabilities), and the Solo Operator workflow article (the real-world mobile pattern).

  • How Notion Skills Work: Turning Repeated Prompts Into Reusable Commands

    The 60-second version

    Skills are how you stop re-prompting. If you find yourself typing the same instructions to your Notion Agent every Friday — “summarize this week’s project updates in our team format with a green/yellow/red status and an action items list” — that’s a skill waiting to be saved. Once captured, you call it by name and the agent runs the workflow. Skills became prominent with Notion 3.3 in February 2026 and they’re the bridge between “I have an AI assistant” and “I have an AI teammate that knows how we do things here.”

    What a skill actually is

    A skill is three things bundled:
    1. A trigger phrase or name — what you call it when you want it run
    2. The instructions — the prompt logic the agent follows
    3. The context boundaries — which databases, pages, or sources the agent can pull from
    That last piece is what separates a skill from a saved prompt. A saved prompt is just text. A skill is text with scope. The agent knows where to look, what format to produce, and which pages to update.
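
    Notion defines skills through its UI rather than in code, but the three-part anatomy is worth holding as a structure. Here is a minimal Python sketch; the class and field names are illustrative, not any Notion API:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Skill:
        """Conceptual model of a Notion skill: a named trigger,
        prompt logic, and an explicit context scope."""
        name: str                   # the trigger phrase you call it by
        instructions: str           # the prompt logic the agent follows
        context_sources: list[str]  # databases or page trees the agent may read

    weekly_digest = Skill(
        name="weekly digest",
        instructions=(
            "Summarize this week's project updates in our team format, "
            "with a green/yellow/red status and an action items list."
        ),
        context_sources=["Projects DB", "Meeting Notes"],
    )
    ```

    A saved prompt would be the instructions string alone; context_sources is the scope that makes it a skill.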

    The four skills every operator should build first

    If you’re new to skills, these four pay back the time investment within a week.
    1. The weekly digest skill. Reads your project database, your meeting notes, and your Slack archive. Produces a one-page digest in your team’s format. Run it Friday afternoon. You stop writing weekly updates.
    2. The brief-prep skill. Triggered before a meeting. Pulls the relevant project page, the last meeting notes with this person or team, any open action items, and synthesizes a one-page brief. Run it 30 minutes before the meeting. You stop showing up cold.
    3. The inbox-to-action skill. Reads new entries in a specified database (support requests, sales leads, content pitches). Categorizes them, assigns owners based on rules you set, and drafts a first response. You stop processing inbound manually.
    4. The doc-reshape skill. Takes any document and reformats it into your team’s house style — your headings, your sections, your tone. Solves the “we have great content from a partner but it doesn’t read like us” problem.

    How to build a skill that actually works

    Three rules, learned the hard way:
    Be specific about format. “Summarize” produces wildly different outputs depending on the agent’s mood. “Produce a one-page summary with these five sections in this order, max two sentences per section, in active voice” produces consistent outputs. Specificity is the difference between a skill you trust and a skill you babysit.
    Bound the context tightly. The temptation is to give the agent access to everything. The result is slower runs, more credits consumed, and outputs that pull from irrelevant sources. Pin the skill to specific databases or page trees. You can always expand later.
    Test it five times before you trust it. Run the skill against five different inputs and look at the outputs side by side. The variance you see is the variance you’ll get in production. If the spread is too wide, tighten the instructions until the outputs converge.
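
    The five-run test can be made mechanical. A sketch of the comparison step, assuming you paste the five outputs in as strings; the required sections and length cap are illustrative and should match your own format rules:

    ```python
    REQUIRED_SECTIONS = ["Status", "Highlights", "Risks", "Action Items"]
    MAX_WORDS = 400  # rough proxy for "one page"

    def check_output(text: str) -> list[str]:
        """Return the format rules this output violates."""
        problems = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
        if len(text.split()) > MAX_WORDS:
            problems.append(f"over {MAX_WORDS} words")
        return problems

    outputs = [
        # paste the five outputs (one per test input) here as strings
    ]
    for i, out in enumerate(outputs, 1):
        issues = check_output(out)
        print(f"run {i}:", "converged" if not issues else issues)
    ```

    If any run fails a check, that is the variance the rule warns about: tighten the instructions and run the five again.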

    What skills can’t do well yet

    Skills inherit the limits of the underlying agent. They struggle with:
    Tasks that require fresh judgment. A skill that’s supposed to “decide whether this lead is qualified” produces inconsistent results because the criteria aren’t fully explicit. Better to have the skill score the lead on five named dimensions and let a human make the call.
    Long autonomous chains. A skill that triggers another skill that triggers another skill is a debugging nightmare. Keep skills atomic. Compose them in workflows outside the skill itself.
    Cross-workspace work. A skill in one Notion workspace can’t reach into another. If you operate across multiple workspaces, you need parallel skills, not one shared skill.

    Skills and the May 3 cliff

    After May 3, 2026, every Custom Agent run consumes Notion Credits. That includes skills run by Custom Agents. The implication: a well-built skill that takes 30 seconds to run is cheap; a sloppy skill that takes 8 minutes because the context isn’t bounded is expensive.
    This is why “specificity” and “context boundaries” graduated from style advice to financial advice. Tight skills cost less. Sloppy skills bleed credits. The audit you should be doing on your skills before May 4 is the same audit you’d do on any line item: is the output worth the cost?

    What to read next

    If skills are interesting to you, the natural follow-up reads in this corpus are the Custom Agents foundation piece (skills run on Custom Agents), the May 3 cliff (when skill costs become real), and the Building Your First Notion Skill walkthrough in Deep Technical (step by step).

  • AI Autofill Databases Explained: The Self-Maintaining Knowledge Base

    The 60-second version

    AI Autofill is the feature that makes a Notion database start maintaining itself. Point it at a column and tell it what to fill — summarize the page, extract the deadline, categorize the topic — and it processes each row using the row’s content and your instructions. Basic Autofill ships with Business and Enterprise plans and uses no credits. Custom Agent Autofill (post-May 4) runs Custom Agent capabilities under the hood, costs credits, and handles complex reasoning that Basic can’t. The honest version: Basic is good enough for most simple categorization and extraction. Custom Agent Autofill is for cases where Basic produces inconsistent results.

    What Autofill actually does

    Three categories of work it handles well:
    1. Summarization into a property. Long-form pages compressed into a one-sentence summary in a Summary column. Common pattern for content libraries, research databases, and meeting notes archives.
    2. Categorization. Tagging rows with categories based on content. Works well when categories are well-defined (e.g., “support ticket type,” “lead source”). Works less well when categories overlap or require judgment.
    3. Extraction. Pulling specific data points from page content into structured properties — dates, names, dollar amounts, status flags. Works well when the data is reliably present in the source.
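
    Autofill is configured per column in the Notion UI, but it helps to see the configuration as data: one instruction applied to each row's content. A conceptual sketch; the instruction strings are illustrative, and ask_model() is a hypothetical stand-in for the per-row model call, not a Notion function:

    ```python
    # One instruction per column; each is applied to a row's page content.
    AUTOFILL_COLUMNS = {
        "Summary":  "Summarize this page in one sentence.",
        "Deadline": "Extract the deadline as an ISO date; leave blank if absent.",
        "Category": "Pick exactly one of: support, sales, content.",
    }

    def ask_model(instruction: str, page_text: str) -> str:
        """Hypothetical stand-in for the model call run per row."""
        raise NotImplementedError

    def autofill_row(page_text: str) -> dict[str, str]:
        """Produce one value per configured column for a single row."""
        return {col: ask_model(instr, page_text)
                for col, instr in AUTOFILL_COLUMNS.items()}
    ```

    The "pick exactly one of" phrasing in the Category instruction is what keeps categorization well-defined; instructions that leave the label set open are where overlap and judgment calls creep in.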

    Where Autofill struggles

    Three places it gets inconsistent:
    Properties that require judgment beyond the page. “Is this lead qualified?” depends on context the page may not contain. Autofill will produce an answer, but consistency is poor.
    Multi-property dependencies. “Set the priority based on the deadline and the customer tier” requires reasoning across properties, not just within the page. Possible with Custom Agent Autofill, unreliable with Basic.
    Free-form output that needs to match a tone. “Write a customer-facing summary in our brand voice.” Autofill produces a summary, but matching brand voice across hundreds of rows is hit or miss without a tightly written prompt.

    Basic vs Custom Agent Autofill

    The split that matters:
    Basic Autofill — included, free, runs on each row when the AI is invoked. Good for clear single-step prompts (“summarize this page in 2 sentences”). It doesn’t have Custom Agent capabilities like richer context or multi-step reasoning.
    Custom Agent Autofill — uses Custom Agent infrastructure, consumes credits after May 4, can continuously enrich rows in the background, handles more complex prompts. Worth the credit cost when Basic isn’t smart enough and the consistency matters.
    A useful rule: try Basic first. If output quality is good enough, stop there. Move to Custom Agent Autofill only when you’ve measured that Basic produces unreliable results for your specific use case.

    Three Autofill patterns that work

    1. The intake form pattern. New rows arrive (from a form, an integration, or a manual entry). Autofill columns extract structured data from the unstructured input — pulling dates, names, key topics, sentiment, urgency. The intake desk staffs itself.
    2. The library catalog pattern. A content library or research database where every entry needs summary, tags, and category. Autofill keeps the catalog usable as it grows. Without it, large databases become unsearchable.
    3. The status synthesis pattern. A project tracker where each project’s current state is summarized in a “current status” field that updates as the page content changes. Stakeholders get a quick read without opening each project.

    Three patterns that don’t work

    1. Anything requiring fresh external data. Autofill works on what’s in the row. It can’t decide “is this competitor active in our market” because the answer isn’t in the row.
    2. Cross-row reasoning at scale. Autofill processes one row at a time. “Rank these against each other” needs a different approach (a view, a formula, or a query agent).
    3. Compliance-sensitive categorization. If the categorization has legal or regulatory weight, you don’t want it autofilled. Use Autofill to draft the suggested category; have a human confirm.

    The trustworthy database principle

    Autofill’s risk is silent drift — fields that look filled but aren’t accurate. Three guardrails:
    Always show the source. Add a “filled by” field or a date stamp so humans can tell what’s machine-generated and how recently.
    Spot-check 10% monthly. A quick audit of randomly selected rows catches drift before it spreads.
    Set a re-fill cadence for stale rows. Pages change. The Autofill output reflects the page at fill time. Rows older than 30 days that haven’t been re-checked should be flagged.
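
    The second and third guardrails reduce to a few lines once each row carries a filled-at timestamp. A sketch in plain Python; the row shape is illustrative, standing in for however you export or query the database:

    ```python
    import random
    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=30)
    SPOT_CHECK_RATE = 0.10

    rows = [  # illustrative rows from a "filled by" / date-stamp field
        {"id": "a1", "filled_at": datetime(2026, 2, 10)},
        {"id": "b2", "filled_at": datetime(2026, 4, 1)},
        {"id": "c3", "filled_at": datetime(2026, 3, 5)},
    ]

    now = datetime(2026, 4, 15)
    stale = [r["id"] for r in rows if now - r["filled_at"] > STALE_AFTER]
    audit = random.sample(rows, max(1, round(len(rows) * SPOT_CHECK_RATE)))

    print("flag for re-fill:", stale)  # ['a1', 'c3']
    print("spot-check this month:", [r["id"] for r in audit])
    ```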

    What to read next

    Corpus follow-ups: Custom Agents foundation piece (because Custom Agent Autofill runs on that infrastructure), the database schema design article in Deep Technical (how to build databases that Autofill well), and the May 3 cliff (when Custom Agent Autofill cost becomes real).

  • What Notion AI Agents Actually Are (And What They Aren’t)

    The 60-second version

    A Notion AI Agent isn’t a chatbot. It’s a worker that lives inside your workspace and acts on it. The base version waits for prompts. The Custom Agent version (Business and Enterprise plans only) runs autonomously — on a schedule, on a trigger, or on demand — and can work across hundreds of pages for up to 20 minutes per task. Skills let you teach an agent your repeated workflows so it can run them on command. Workers (developer preview, April 2026) let agents call code and external APIs. The mental model is “a teammate with workspace access,” not “a smarter search box.”

    Why the distinction matters

    Most coverage treats “Notion AI” as one thing. It isn’t. There are at least four layers, and confusing them leads operators to either underuse the platform or overspend on it.
    Layer 1: Notion AI in a doc. This is the inline AI you summon with the space bar or /. It rewrites, summarizes, and drafts inside the page you’re on. It’s a writing assistant. It doesn’t act outside the page.
    Layer 2: AI Autofill on databases. This populates or updates database properties based on row content. Basic Autofill is included on Business and Enterprise plans. Custom Agent Autofill uses Notion Credits for richer reasoning. It’s an enrichment layer, not an agent in the proactive sense.
    Layer 3: Standard Notion Agent. Responds to prompts, can read across the workspace, can edit pages, can integrate with Slack, Calendar, and Mail when those are connected. Reactive — it does what you ask, when you ask.
    Layer 4: Custom Agent. Proactive. Runs on schedule or trigger. Can work autonomously for up to 20 minutes. Can have skills attached. Can call Workers (in developer preview). This is the layer most people mean when they say “agents.” It’s also the layer that requires Business or Enterprise and, after May 3, 2026, consumes Notion Credits.
    If you’re unsure which layer you’re using, you almost certainly aren’t using Layer 4 — and that’s fine for many workflows.

    What agents are good at right now

    Three categories where agents earn their keep without much fuss:
    1. Database hygiene. An agent that runs nightly across your CRM database can verify links, flag stale records, summarize new entries into a digest field, and tag uncategorized rows. This is dull, repetitive work and it stops being your problem.
    2. Recurring document production. Weekly status updates, daily standups, meeting prep briefs. Anything where the format is stable and the inputs change. The agent reads the inputs, applies the format, produces the document, and you edit the 10% that needs human judgment.
    3. Cross-source synthesis. With Slack, Calendar, and Mail connected, an agent can answer questions that require pulling from multiple sources. “What did the team agree to in the marketing meeting last week, and what’s still open?” That’s a real query an agent can handle — reading the meeting notes, the Slack thread, the calendar follow-up, and producing a synthesis.

    What agents are not good at yet

    Equally important to name the gaps.
    Anything requiring judgment about people. Performance review drafting, hiring decisions, conflict mediation. The agent can summarize and surface; it shouldn’t decide.
    Compliance-sensitive output. Legal language, regulated medical content, financial guidance. An agent draft is fine as input to a human reviewer; it isn’t fine as final output.
    Novel reasoning under uncertainty. Agents do well when the pattern is established. They do worse when the situation has no precedent in your workspace. “Plan our entry into a new market” is a worse agent task than “summarize what we’ve learned about our existing market.”
    Stateful work across long timelines. Agents are getting better at continuity, but for now they’re best at bounded tasks. A 20-minute autonomous run is an upper bound, not a target.

    How to think about which layer you need

    A simple decision tree:
    – Just want help drafting? → Layer 1 (inline Notion AI).
    – Want a database to maintain itself? → Layer 2 (Autofill). Use Custom Agent Autofill only when basic isn’t smart enough.
    – Want to ask questions across your workspace and have the agent pull from pages and make edits? → Layer 3 (standard agent).
    – Want recurring autonomous work on a schedule? → Layer 4 (Custom Agent). Be ready to budget Notion Credits after May 3, 2026.
    Most operators land on a mix of Layers 1, 2, and 3. Layer 4 is for specific recurring workflows where the time savings clear the credit cost.
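
    The tree is small enough to write down directly. A sketch, purely to make the branching order explicit; the flags are illustrative:

    ```python
    def pick_layer(drafting: bool = False,
                   self_maintaining_db: bool = False,
                   workspace_questions: bool = False,
                   scheduled_autonomy: bool = False) -> str:
        """Return the lowest Notion AI layer that covers the stated need."""
        if scheduled_autonomy:
            return "Layer 4: Custom Agent (budget credits after May 3, 2026)"
        if workspace_questions:
            return "Layer 3: standard agent"
        if self_maintaining_db:
            return "Layer 2: Autofill (Custom Agent Autofill only if Basic falls short)"
        if drafting:
            return "Layer 1: inline Notion AI"
        return "Layer 1: start inline and escalate as needs appear"

    print(pick_layer(self_maintaining_db=True))
    ```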

    What to read next

    If you came here trying to understand what agents are, the natural follow-ups in this corpus are: how Skills work (the way you teach agents repeated workflows), what Custom Agents change (the autonomy line), and the May 3 cliff (when free trials end and credits begin).

  • High-Traffic GA4 Channels Delivering the Wrong Users — A Search Intent Diagnosis

    A page can rank on page one, receive consistent organic traffic, and still be failing. The failure is silent — visible only when you look at what arriving users actually do.

    When users search “how to apply for X” and land on a page about “what X is,” they leave immediately. The page ranked for the query but delivered the wrong content for the intent behind it. GA4 captures this as a short session with a high bounce rate — but it does not tell you which queries are driving the mismatch.

    Intent Mismatch Has a Specific Signature

    In GA4, intent mismatch produces a recognizable pattern: high organic traffic, low engagement rate, and short session duration on the same page. If a page is receiving 200 organic sessions a month and engaging 12% of them, something is wrong. Either the page ranked for queries it cannot answer, or the content addresses a different aspect of the topic than users are searching for.
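
    The signature is concrete enough to script against a landing-page export. A sketch using pandas; the column names and thresholds are illustrative, so adjust them to your property:

    ```python
    import pandas as pd

    # Illustrative export: one row per organic landing page.
    df = pd.DataFrame({
        "page":             ["/what-is-x", "/pricing", "/guide"],
        "organic_sessions": [200, 340, 90],
        "engagement_rate":  [0.12, 0.55, 0.61],
        "avg_duration_s":   [11, 95, 140],
    })

    # The mismatch signature: real traffic, weak engagement, short stays.
    misaligned = df[(df.organic_sessions >= 100)
                    & (df.engagement_rate < 0.25)
                    & (df.avg_duration_s < 30)]
    print(misaligned.page.tolist())  # ['/what-is-x']
    ```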

    The Silent Scream in Your Internal Search Data

    Internal site search is the most underused intelligence in GA4. When a user searches your site, they are explicitly telling you what they wanted and could not find. That is direct audience research, already collected in your property, almost never reviewed.

    The top 20 internal search terms for any content site are a ready-made content sprint list. No keyword tool produces a brief this precise — because no keyword tool knows which users already tried your site and left empty-handed.

    Your Intent Alignment Score

    Across your organic landing pages, some are well-aligned with search intent (high traffic, high engagement) and the rest are misaligned (high traffic, low engagement). The ratio of well-aligned to misaligned pages is your intent alignment score. Track it quarterly. If you are actively addressing misaligned pages through rewrites and new content, the score should improve. If it is flat, new misalignment is appearing faster than you are fixing old misalignment.
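
    Continuing the sketch above, the score itself is one ratio over pages with meaningful traffic; the 100-session and 25% cutoffs remain illustrative:

    ```python
    organic = df[df.organic_sessions >= 100]
    aligned = int((organic.engagement_rate >= 0.25).sum())
    score = aligned / len(organic)
    print(f"intent alignment score: {score:.0%}")  # the number to track quarterly
    ```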

    The methodology is the Books for Bots: GA4 Search Intent Alignment Kit.

    Learn more about the GA4 Search Intent Alignment Kit

  • GA4 New vs Returning Users: What the 14x Session Duration Gap Is Telling You

    Your GA4 new versus returning user data contains a ratio most teams are not monitoring: returning sessions as a percentage of total sessions. That ratio is your retention baseline. It tells you whether your content is building an audience or attracting drive-by traffic.

    The 14x Duration Gap

    In a live GA4 audit on a real content site, returning users averaged 4 minutes 12 seconds per session. New users averaged 18 seconds. Same site, same content, 14x difference. Returning users engaged at 61% versus 22% for new users, and viewed 3.8 pages per session versus 1.2.

    Every benchmark you track is a blend of these two completely different behaviors. The aggregate number hides both the strength of your retained audience and the weakness of your new user conversion to loyalty.
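
    Unblending the aggregate is a single groupby over a session-level export. A sketch with illustrative numbers shaped like the audit above:

    ```python
    import pandas as pd

    sessions = pd.DataFrame({
        "user_type":  ["new"] * 4 + ["returning"] * 2,
        "duration_s": [18, 12, 25, 17, 252, 252],
        "engaged":    [0, 0, 1, 0, 1, 1],
        "pageviews":  [1, 1, 2, 1, 4, 4],
    })

    by_type = sessions.groupby("user_type").agg(
        avg_duration_s=("duration_s", "mean"),
        engagement_rate=("engaged", "mean"),
        pages_per_session=("pageviews", "mean"),
    )
    print(by_type)  # the split the blended averages hide
    ```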

    Loyalty Anchors

    A small number of pages drive most return visits. These loyalty anchors share identifiable characteristics: comprehensive, addressing recurring needs rather than one-time questions, often counterintuitive enough to be memorable and worth recommending to others.

    Once identified, they deserve regular updates, protection from disruptive monetization, and prominent internal linking so new users can find them.

    Your Best Retention Channel Is Not Your Best Acquisition Channel

    Not all acquisition channels produce equal retention. Organic search frequently produces higher retention than social. Email from a curated newsletter produces some of the highest rates of all. The channel producing your returning users is often not the channel producing the most new users — and optimizing for acquisition volume without understanding retention means investing in the wrong channel.

    The methodology is the Books for Bots: GA4 New vs Returning Intelligence Kit.

    Learn more about the GA4 New vs Returning Intelligence Kit

  • GA4 Bounce Rate by Time of Day: The Scheduling Intelligence Most Teams Never Pull

    Most content teams publish when they have something ready. Almost none publish based on when their audience is paying attention. GA4 knows exactly when that window opens.

    Wednesday Is Not Random

    In a live GA4 audit on a real content site, Wednesday produced the highest engagement rate and longest session duration across all seven days. Saturday and Sunday dropped below 20% engagement. The site had been publishing on a Friday cadence for months.

    Wednesday readers are in work mode, researching, looking for answers they can act on before the week ends. Weekend readers browse at lower intent — shorter duration regardless of content quality.

    The Three Daily Windows

    Morning (7AM to 11AM) produces consistently elevated engagement from commuters and early researchers. Late afternoon (4PM to 7PM) shows another spike — users winding down work. Some hours in this window showed 100% engagement rates in the live data.

    Late night (10PM to midnight) is the most counterintuitive finding. Volume is low but depth is exceptional. Users arriving between 10PM and 11PM averaged over 15 minutes on page on the audited site. Nobody is publishing for them.
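
    Surfacing the windows is a sort over a day-by-hour export. A sketch; the columns are illustrative stand-ins for GA4’s hourly dimensions:

    ```python
    import pandas as pd

    hourly = pd.DataFrame({
        "day":             ["Wed", "Wed", "Fri", "Sat", "Wed"],
        "hour":            [9, 17, 12, 14, 22],
        "engagement_rate": [0.62, 0.58, 0.31, 0.18, 0.71],
        "sessions":        [120, 95, 80, 60, 12],
    })

    windows = hourly.sort_values("engagement_rate", ascending=False)
    print(windows.head(3))
    # Keep the thin late-night cell in view: low sessions with high
    # engagement is the depth pattern worth a look, not a row to discard.
    ```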

    The Scheduling Fix

    This is immediately actionable without creating new content. Move planned publishes to peak engagement windows — Wednesday over Friday, 9AM or 5PM over noon. Same content, more receptive audience.

    The full methodology is the Books for Bots: GA4 Time Intelligence Kit.

    Learn more about the GA4 Time Intelligence Kit

  • GA4 Exit Pages: Satisfied Reader or Lost Visitor

    GA4 shows you exit rate. It does not tell you whether that exit was a success or a failure.

    An 85% exit rate with three minutes of average duration means the page did exactly what it was supposed to do. Users arrived, found their answer, and left with their task complete. An 85% exit rate with four seconds of average duration means the page failed immediately. GA4 reports the same number for both.

    The Two Types of Exit

    A satisfied exit combines high exit rate with high duration — 90 seconds or more. The user read, completed their task, and left. Adding more CTAs to reduce this exit rate would interrupt a successful user journey.

    An abandoned exit combines high exit rate with low duration — under 30 seconds. The user found nothing useful and left. This page needs attention: wrong audience, wrong content, or missing next step.
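
    The two exit types reduce to a duration split above a fixed exit-rate floor. A sketch using the thresholds above; the 60% floor and the handling of the middle band are illustrative choices:

    ```python
    import pandas as pd

    exits = pd.DataFrame({
        "page":           ["/internships-guide", "/home", "/faq"],
        "exit_rate":      [0.85, 0.65, 0.80],
        "avg_duration_s": [200, 8, 55],
    })

    def classify(row) -> str:
        if row.exit_rate < 0.60:
            return "not an exit problem"
        if row.avg_duration_s >= 90:
            return "satisfied exit: leave it alone"
        if row.avg_duration_s < 30:
            return "abandoned exit: needs attention"
        return "in between: engaged but inconclusive, review by hand"

    exits["verdict"] = exits.apply(classify, axis=1)
    print(exits[["page", "verdict"]])
    ```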

    The Finding From a Live Audit

    The NYC Summer Internships guide on a real content site showed an 85% exit rate with 3m 20s average session duration. The page was succeeding — users read a comprehensive guide and left with the information they needed. The homepage showed 65% exit rate with 8-second duration. Lower exit rate, dramatically worse performance.

    Dead Ends and the Internal Link Fix

    A third pattern exists: dead ends. Users arrive with genuine interest, stay long enough to engage, but have nowhere obvious to go next. Adding one relevant internal link to these pages often produces measurable session depth improvement with zero content changes.

    Google Analytics Advisor can generate specific page pairing recommendations from your actual behavioral data. The methodology is the Books for Bots: GA4 Exit Intelligence Kit.

    Learn more about the GA4 Exit Intelligence Kit
