MemberIntel KB

spec

Synthesis

Phase-by-phase sequencing of all architectural commitments against the actual team ramp, surfacing seven friction points where the phased plan and architectural reality diverge — content lead timing, Phase 2 scope calibration, differentiation eval ship date, privacy counsel engagement depth, infra engineer hiring window, the model abstraction layer, and per-customer brain versioning.


This is the synthesis. Everything across the prior threads collapses into a sequencing question: given the phased plan’s team ramp, the SPEC’s milestone gates, and the architectural commitments we’ve made, what gets built when, and what depends on what.

I’m going to be direct about what doesn’t fit. The phased plan’s Phase 1 (May, two people) and Phase 2 (June, three to four people) are tighter than the SPEC suggests if you take all the architectural commitments seriously. Some things have to slip later, some have to start earlier than the plan currently has them, and a few things from the prior conversations are simply out of scope for the V1 launch and need to be flagged as deliberately deferred rather than forgotten.

Let me work through it phase by phase, then surface the friction points.


Phase 1 — May 2026 (Cindy + Seth, 4 weeks).

The phased plan has Phase 1 as scoping and foundation. The architectural commitments mean Phase 1 also has to include some real infrastructure decisions and prototype work, because Phase 2 starts the build and the team can’t be deciding GCP project structure on the first day of Phase 2.

Seth’s Phase 1 deliverables, ordered by dependency:

Hosting and project structure decisions land first. GCP confirmed. Project-per-environment with the four-project model (memberintel-prod, staging, dev, shared) under a Google Cloud Organization with production and non-production folders. This is a one-week Terraform scaffolding effort that Claude Code can drive. By end of week one, the projects exist, Workload Identity Federation is configured, the KMS keyring is set up, the VPC topology is decided. No application code yet.

The architectural decision records start getting written. The SPEC and Seth’s JD both call for ADRs on material choices. By end of Phase 1: ADRs on hosting (GCP), database (Cloud SQL Postgres), vector store (pgvector — recommend committing this in Phase 1 rather than deferring per Open Q9), per-tenant isolation (shared-schema with RLS), CI/CD (GitHub Actions), observability stack, secrets management approach. Each ADR is short — a page or two — but written down and committed. This is the documentation Phase 2 builds on.

The data schema design happens in Phase 1 even though no tables are created. The canonical schema with platform-agnostic fields (source_platform, source_id), the per-customer brain versioning, the audit log structure, the entitlement table shape with V1.5 trial-state extensions. The schema is a markdown document with the ER diagram and column-level annotations, reviewed by Seth and Blair, ready to translate into Phase 2 migrations.
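To make the platform-agnostic shape concrete, here is a minimal sketch of the canonical-member idea. Only source_platform and source_id come from the schema discussion above; every other column name is an assumption, and sqlite stands in for Cloud SQL Postgres purely so the sketch is self-contained.

```python
import sqlite3

# Illustrative only: production would be a Postgres migration, and the
# columns beyond (source_platform, source_id) are placeholder assumptions.
DDL = """
CREATE TABLE members (
    id INTEGER PRIMARY KEY,
    tenant_id TEXT NOT NULL,
    source_platform TEXT NOT NULL,   -- e.g. 'memberpress', 'stripe'
    source_id TEXT NOT NULL,         -- the record's ID in that platform
    email TEXT,
    UNIQUE (tenant_id, source_platform, source_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(
    "INSERT INTO members (tenant_id, source_platform, source_id, email) "
    "VALUES (?, ?, ?, ?)",
    ("tenant-a", "memberpress", "mp-101", "x@example.com"),
)
row = conn.execute(
    "SELECT source_platform, source_id FROM members WHERE tenant_id = ?",
    ("tenant-a",),
).fetchone()
```

The point of the composite uniqueness on (tenant_id, source_platform, source_id) is that the same canonical table can hold MemberPress and Stripe records side by side without ID collisions.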

Vendor decisions land. Anthropic API directly (already decided per SPEC). Stripe-customer-OAuth-only for V1 (revisiting Open Q4 with the recommendation from the auth conversation). Cloud Tasks + Cloud Scheduler + Cloud Run for the worker stack. Atlassian Statuspage or Better Stack Status for the status page. These are 30-minute decisions each but they need to be made before Phase 2 starts so the team isn’t blocked.

The Senior AI Engineer hire is the biggest deliverable; per the phased plan it closes mid-June. The interview process should test for cloud infra comfort, not just AI engineering, because by V1.5 this person is sharing ops load with Seth. The phased plan has this right; the only adjustment is that candidate evaluation should weight infrastructure-leaning AI engineers over pure-AI specialists.

The MP-side API work scoping with Paul Carter is a Phase 1 deliverable. The dedicated MemberIntel REST endpoint approach (Option A from the sync conversation), with versioning and incremental sync support, needs Paul Carter’s team’s commitment by end of May to start MP-side work in early Phase 2. Otherwise the sync pipeline is blocked.

Cindy’s Phase 1 deliverables, also ordered by dependency:

Outside privacy counsel engaged by June 1 — already a phased-plan deliverable, no change. The architectural decisions Phase 1 produces (CMEK, audit dataset, cross-pollination boundary, secrets management approach) become the input package for counsel’s review. The cleanest pattern is to schedule an introductory meeting mid-May, then a deep architectural review with counsel + Seth in late May.

Customer interviews per the SPEC’s 10-15 target, finishing by end of May. Output is a written synthesis document covering pricing validation, top pain points, dashboard KPI selection, brain scope, free/Pro feature split, beta cohort criteria. This becomes input to PRDs.

PRDs in priority order: Free → Pro upgrade flow + chat experience first (the two are intertwined; treat them as one PRD). Then the Free vs Pro entitlement matrix as a separate document — this becomes the data input for Seth’s entitlement service. Then the dashboard insight-card system. Three PRDs by end of Phase 1, drafted, ready for Blair’s review.

Decision-rights matrix signed off by Cindy, Seth, and Blair. Already a phased-plan deliverable.

Cross-functional kickoff meetings with Curt, Wray, Russ, Thomas, Paul Carter, Ally Roger, Danielle. Already a phased-plan deliverable.

The meta-decisions from the risk-review conversation get raised in Phase 1 with Blair: the question of whether Sarah arrives in Phase 1 instead of Phase 3, the differentiation eval subset becoming a tracked metric, the SOC 2 placeholder. These are conversations, not deliverables, but they need to happen now before momentum carries Phase 1 along.

The phased plan’s Phase 1 milestone gate is roughly right but should add explicit items: ADRs committed, schema design reviewed, MP API surface confirmed with Paul Carter, vendor decisions logged, Senior AI Engineer pipeline at 3+ qualified candidates in late-stage interviews. If any of these aren’t met by end of May, Phase 2 doesn’t start on schedule.


Phase 2 — June 2026 (Cindy + Seth + Ronald + Senior AI Engineer onboarding, 4 weeks).

This is where the phased plan and the architectural commitments collide most. The plan has Phase 2 as “build kickoff” with auth, data ingestion, brain seeding, initial chat, ToS drafts. That’s a lot for four weeks with a brand-new senior hire still ramping. Some of it slips.

What has to happen in Phase 2, in priority order:

The infrastructure foundation gets built. Terraform modules for the VPC, Cloud SQL with CMEK, Cloud Run services, Cloud Tasks queues, Secret Manager structure, Cloud Logging sinks, BigQuery datasets including audit. The CI/CD pipeline with the RLS test harness and the staging-only-synthetic-data discipline. The kill switch infrastructure. This is two weeks of focused work for Seth + the new hire, with Claude Code doing the heavy scaffolding and Seth reviewing.

The schema gets created with the migrations tooling. RLS policies with FORCE on every customer-data table. Two database roles. Middleware for tenant context. Integration tests that verify isolation. By end of week two of Phase 2, a developer can spin up a staging environment, create two synthetic tenants, and run the isolation test suite.
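The RLS pattern above can be sketched as follows. The policy DDL mirrors the commitment (FORCE on every customer-data table, tenant taken from a session setting); the helper shows the middleware idea of refusing any query without a tenant context. Policy, setting, and table names are assumptions, and the DDL is shown as strings because the real thing is a Postgres migration.

```python
from contextvars import ContextVar

# Assumed names throughout; production runs this DDL via migrations tooling.
RLS_DDL = """
ALTER TABLE members ENABLE ROW LEVEL SECURITY;
ALTER TABLE members FORCE ROW LEVEL SECURITY;  -- applies even to the table owner
CREATE POLICY tenant_isolation ON members
    USING (tenant_id = current_setting('app.current_tenant')::uuid);
"""

current_tenant = ContextVar("current_tenant", default=None)

def tenant_scoped(sql: str) -> list[str]:
    """Middleware sketch: prefix every statement with the tenant setting,
    refusing to run queries when no tenant context is set."""
    tenant = current_tenant.get()
    if tenant is None:
        raise RuntimeError("no tenant context set")
    # Note: parameterize in production; the f-string is for illustration only.
    return [f"SET LOCAL app.current_tenant = '{tenant}'", sql]

current_tenant.set("tenant-a")
stmts = tenant_scoped("SELECT * FROM members")
```

The isolation test suite then boils down to: set tenant A's context, write a row, set tenant B's context, and assert the row is invisible.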

The auth layer for MP-license-based authentication. The per-license signing key pattern, the banner-click flow, the unified user model with multiple identity sources. Standalone email/password fallback (Argon2id, NIST password rules, server-side sessions) in parallel.
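A minimal sketch of the email/password fallback's hashing shape. The commitment above is Argon2id (e.g. via argon2-cffi); this sketch substitutes stdlib scrypt purely so it runs without third-party packages, and the cost parameters are illustrative, not tuned.

```python
import hashlib
import hmac
import os

# scrypt stands in for Argon2id here; swap in argon2.PasswordHasher in
# production. Parameters (n, r, p) are placeholder values.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per-user salt
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
```

Server-side sessions and the NIST password rules (length over composition, breached-password screening) sit on top of this and are not sketched here.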

The entitlement service skeleton. Postgres tables, Redis (Memorystore) for hot counters, the check_and_consume API. Tier model wired up to the Free vs Pro entitlement matrix from Cindy’s PRD. Trial state fields present even though V1.5 doesn’t ship yet.
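The check_and_consume shape can be sketched like this. Tier names, quota numbers, and the daily window are assumptions; a plain dict stands in for the Memorystore (Redis) hot counters, and Postgres remains the source of truth in the real service.

```python
import time
from dataclasses import dataclass, field

QUOTAS = {"free": 20, "pro": 500}  # assumed per-day message quotas

@dataclass
class EntitlementService:
    counters: dict = field(default_factory=dict)  # stands in for Redis

    def check_and_consume(self, tenant_id: str, tier: str,
                          feature: str, amount: int = 1) -> bool:
        """Check remaining quota and consume it in one step; deny past the cap.
        In Redis this would be an atomic INCR against a day-scoped key."""
        day = time.strftime("%Y-%m-%d")
        key = (tenant_id, feature, day)
        used = self.counters.get(key, 0)
        if used + amount > QUOTAS[tier]:
            return False
        self.counters[key] = used + amount
        return True

svc = EntitlementService()
```

The day-scoped key is why quotas reset naturally at midnight without a cleanup job; expired keys just stop being read (and in Redis would carry a TTL).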

The MP sync pipeline (Option A endpoint), assuming Paul Carter’s team has shipped the MP-side endpoints by mid-Phase 2. If they haven’t, this slips. The sync uses the Cloud Tasks pattern with per-customer concurrency=1 and adaptive timeouts (basic version, not full adaptive logic). Customer-facing sync state visibility is a Phase 3 deliverable, not Phase 2.
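Two mechanics from the sync design are worth sketching: per-customer serialization (one Cloud Tasks queue per customer gives effective concurrency=1) and idempotent task handling, since Cloud Tasks may deliver a task more than once. Queue naming, task IDs, and the cursor field are assumptions.

```python
# Illustrative worker-side sketch; the real pipeline uses Cloud Tasks
# queues and persists cursors in Postgres rather than in memory.
def queue_name(tenant_id: str) -> str:
    return f"mp-sync-{tenant_id}"  # one queue per customer => serialized syncs

class SyncWorker:
    def __init__(self):
        self.applied: set[str] = set()    # idempotency keys already processed
        self.cursor: dict[str, str] = {}  # last-synced record per tenant

    def handle(self, tenant_id: str, task_id: str, records: list[dict]) -> int:
        if task_id in self.applied:       # duplicate delivery: no-op
            return 0
        for rec in records:
            self.cursor[tenant_id] = rec["source_id"]  # advance incremental cursor
        self.applied.add(task_id)
        return len(records)
```

The adaptive-timeout piece (basic version) would wrap handle() with a deadline scaled from the tenant's last sync duration; it is omitted here.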

The Stripe sync pipeline. Customer-OAuth flow, refresh token storage in Secret Manager, scheduled sync for Free tier polling. Webhook listener and signature verification for Pro tier real-time sync. Idempotent event processing.
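For the webhook listener, a sketch of the two properties named above: signature verification and idempotent event processing. In production you would call the Stripe SDK's construct_event rather than verify by hand; this shows the underlying scheme (Stripe signs "{timestamp}.{payload}" with the endpoint secret and sends "t=...,v1=..." in the Stripe-Signature header). The tolerance value is an assumption.

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, header: str,
                            secret: str, tolerance: int = 300) -> bool:
    parts = dict(p.split("=", 1) for p in header.split(","))
    ts, sig = parts["t"], parts["v1"]
    if abs(time.time() - int(ts)) > tolerance:  # reject stale/replayed events
        return False
    signed = f"{ts}.".encode() + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

processed: set[str] = set()  # event IDs; production persists these

def handle_event(event_id: str) -> bool:
    """Idempotent processing: Stripe retries deliveries, so dedupe by event ID."""
    if event_id in processed:
        return False
    processed.add(event_id)
    return True
```

Deduping by event ID rather than by payload hash matters because Stripe can re-send the same event with a fresh signature.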

Auth + sync working end-to-end means a real MP customer can click the banner, sign in, see their data in the per-customer warehouse. That’s the Phase 2 milestone the phased plan calls for.

What gets pushed to Phase 3 from the original Phase 2 plan:

The chat advisor with Haiku — what the phased plan calls “initial chat advisor working with Haiku model.” A bare version is realistic in late Phase 2, but the polish (citations, feedback capture, eval coverage) moves to Phase 3.

Global brain seed — indexing the MP docs is mechanically feasible in Phase 2, but the playbook authoring requires the content lead, who under the original phased plan isn’t on the team until Phase 3. This is one of the friction points.

Per-customer brain scaffolding (storage, retrieval, basic update mechanisms) gets built in Phase 2. Cross-pollination is Phase 3.

Public site analysis pipeline — feasible in Phase 2, but I’d push it to Phase 3 because it’s lower-priority than the auth + sync + entitlement core, and it’s the most expensive pipeline, so you want time to tune cost controls before it goes live.

The Phase 2 milestone gate as the phased plan defines it (auth and data ingestion working end-to-end, Senior AI Engineer onboarded and contributing, initial chat working with Haiku, ToS drafts in legal review) is achievable if you accept that “initial chat” means “a route exists that takes a question and returns a Haiku answer, no citations yet, no eval coverage yet.” Worth being honest about that with Blair before committing.


Phase 3 — July 2026 (team grows to 5-6 with Meo and Sarah, 4 weeks).

This is where the meta-decision from the risk review hits. The phased plan has Sarah arriving July 1. The risk review suggests the brain content depth at GA is the highest-leverage product risk and Sarah arriving in May would help. Three options:

Option A: Keep the phased plan as-is. Sarah starts July. Brain at GA has 50 hand-written playbooks plus indexed MP docs. Differentiation depends on retrieval and per-customer brain accumulation. Risk: differentiation underperforms at launch.

Option B: Move Sarah to Phase 1. She joins May, has ~5 months to author brain content before GA, plus the head start lets her synthesize customer interview output into early playbooks. Cost: Katelyn’s content team loses Sarah for an extra two months, and Sarah might be underutilized in Phase 1 before there’s a brain to write into.

Option C: Hire a second content lead earlier than V2 implies. The first content lead arrives early Phase 1 (someone other than Sarah, possibly), focused on brain authoring. Sarah joins on the original schedule for marketing-site copy and launch content, which is more her existing skill set. Two content leads by GA, not one.

Option C is probably the best architectural answer because it acknowledges that brain authoring and marketing-content authoring are different jobs that have been collapsed into one role. The cost is one additional hire, which Blair has to weigh against the differentiation risk. Worth raising with Blair in Phase 1 explicitly.

Whichever option wins, Phase 3’s deliverables shape accordingly. Assuming Option C with a brain-authoring content lead in Phase 1:

Phase 3 has the brain authoring content lead deep into playbook production, with maybe 30+ playbooks committed by end of Phase 3 (toward the 50+ target by GA). The cross-pollination job’s machinery doesn’t matter yet because there’s no per-customer brain content to draw from — that’s a post-launch concern.

The chat experience gets polished. Citations enforced. Feedback capture wired to the audit dataset. Tool calling stabilized — query_customer_metrics, search_global_brain, search_customer_brain, update_customer_brain, analyze_site all working. Sonnet routing for Pro paths working alongside Haiku for Free.

The Sonnet-tier chat experience ships, with tier-gated AI model routing fully operational and the entitlement service wired into every LLM-calling code path. Quota tracking works. The dashboard starts surfacing tier-gated insight cards (basic version — full insight prose is a Phase 4 deliverable).

The eval suite shipped in usable form. Maybe 80-100 scenarios at end of Phase 3 (toward 150 at GA). The differentiation subset (30-50 scenarios scored against baseline LLM) shipping by end of Phase 3 with a baseline measurement. This is the leading indicator I flagged in the risk review — it has to exist by end of Phase 3 to give the team time to course-correct if the gap is narrow.
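The differentiation subset's core computation is simple enough to sketch: score the same scenarios for the brain-backed product and a baseline LLM, and report the gap. The scoring scale, field names, and scenario IDs below are assumptions; the real suite is 30-50 scenarios with a defined rubric.

```python
from statistics import mean

def differentiation_gap(scenarios: list[dict]) -> float:
    """Positive gap = the brain-backed product outperforms the baseline.
    This gap is the tracked metric: it should widen as playbooks land."""
    product = mean(s["product_score"] for s in scenarios)
    baseline = mean(s["baseline_score"] for s in scenarios)
    return product - baseline

# Hypothetical scenario records, for illustration only.
scenarios = [
    {"id": "churn-risk-01", "product_score": 0.8, "baseline_score": 0.5},
    {"id": "pricing-02",    "product_score": 0.7, "baseline_score": 0.6},
]
```

The value of making this a single number is exactly the course-correction point above: a flat or narrowing gap at end of Phase 3 is an actionable signal, not a postmortem finding.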

Self-serve billing integration with Ally and Stripe. Plan upgrades, downgrades, cancellation. Failed-payment dunning. This is straightforward Stripe work but it has to be tested across the trial-to-paid, trial-to-failed, Pro-to-Free, Free-to-Pro paths.

Public site analysis pipeline shipped. Weekly cached, Pro-only on-demand with rate limiting. This is the place where cost discipline matters most and where the daily cost circuit breaker will earn its keep.

Customer-facing sync state visibility in settings. The “last sync, status, error message” surface from the sync conversation. Without this, sync failures become silent churn.

ToS / Privacy Policy / DPA in legal review with privacy counsel, near-final. Consent flow at signup designed and built. Cross-pollination opt-out surface in settings even though the cross-pollination job doesn’t run until Phase 4 or later.

Cindy’s Phase 3 work intensifies on PRDs (cross-pollination flow, weekly digest, advanced reports, downgrade flow), beta cohort identification, marketing site coordination, PR plan decision, brand identity locked.

Phase 3 milestone gate per the phased plan is mostly right — chat working with Haiku and Sonnet routing, dashboard with tier-gated cards, brand identity locked, marketing site copy approved, beta cohort identified. Add the differentiation eval subset shipped with baseline measurement, and 30+ playbooks committed.


Phase 4 — August 2026 (team grows to 6-7 with Kalpesh, 4 weeks).

Phase 4 in the phased plan is “website + beta launch.” The architectural commitments add a few things.

The website work is Kalpesh + Meo + Sarah. Mostly straightforward IPJ work, no architectural surprises.

Cross-pollination job starts running, with the three-roles model, k-anonymity floor, content lead review queue. This is the right time because by Phase 4 there are real per-customer brain entries from the beta cohort, and the content lead can start reviewing real candidates rather than synthetic ones. The cross-pollination output won’t be substantial until weeks of beta data accumulate, but the machinery has to be running.

Weekly digest email generation. Haiku for Free, Sonnet for Pro. Email infrastructure already exists at Caseproof per the SPEC.

AI eval suite completes to ~150 scenarios. Nightly drift detection running in production-equivalent staging. Differentiation subset reviewed monthly with Blair starting Phase 4.

Cost monitoring dashboards live. Per-customer token caps active. Daily global circuit breaker active. Weekly cost-per-cohort review starts.
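The two cost controls named above compose into one gate, sketched here. The cap and budget numbers are assumptions; in production the counters live in the billing/metrics pipeline, and the breaker resets daily.

```python
# Assumed limits, for illustration only.
PER_CUSTOMER_DAILY_TOKENS = 200_000
GLOBAL_DAILY_BUDGET_USD = 500.0

class CostGuard:
    def __init__(self):
        self.tokens: dict[str, int] = {}  # per-customer token usage today
        self.global_spend = 0.0
        self.tripped = False              # the daily global circuit breaker

    def allow(self, tenant_id: str, tokens: int, cost_usd: float) -> bool:
        if self.tripped:
            return False  # breaker is open: every LLM call is denied
        if self.tokens.get(tenant_id, 0) + tokens > PER_CUSTOMER_DAILY_TOKENS:
            return False  # per-customer cap hit; other customers unaffected
        self.tokens[tenant_id] = self.tokens.get(tenant_id, 0) + tokens
        self.global_spend += cost_usd
        if self.global_spend >= GLOBAL_DAILY_BUDGET_USD:
            self.tripped = True  # subsequent calls denied until daily reset
        return True
```

The asymmetry is deliberate: a per-customer cap is a soft limit on one abuser, while the global breaker is the blast-radius control for a runaway pipeline.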

Privacy controls shipped: data export (CSV/JSON), deletion pathway, all the GDPR/CCPA primitives. These are Phase 4 because they need to be production-quality before beta customers test them.

i18n ships (English, Spanish, German per the SPEC): UI translation, plus the AI chat handling input and output in the user’s language. The SPEC calls this out explicitly, so it has to ship.

Performance hardening, observability tightening. The OTel instrumentation that was put in during Phase 2 starts paying off — traces now show actual production-shape requests, latency baselines emerge, the alert thresholds get tuned based on real data rather than guesses.

Closed beta launches mid-Phase 4. The phased plan calls for 10-20 hand-picked customers across Free and Pro tiers. The architectural pieces required are all in place — auth, sync, entitlement, chat, dashboard, brain, eval suite, observability, cost controls, privacy controls. The remaining work is operational: weekly feedback collection, bug triage workflow, response time on critical issues.

Phase 4 milestone gate per the phased plan: closed beta running with weekly feedback flowing, memberintel.com live, AI eval suite passing on representative scenarios, cost-per-user dashboards operational, privacy compliance posture clean (counsel sign-off), sales + support enablement materials drafted. All of these line up.


Phase 5 — September through mid-October 2026 (full team plus PR firm, 6 weeks).

Phase 5 is launch ramp. The phased plan has the right structure. The architectural pieces from the prior conversations that matter most here:

Status page configured before GA, not after the first incident. Runbooks written for the five highest-stakes failure modes (API down, DB down, RLS violation, cross-pollination leak, payment failure spike). On-call rotation set up between Seth and the Senior AI Engineer with Ronald as secondary.

Final security review with privacy counsel. The audit log is queryable, the encryption-at-rest is verified, the secret rotation cadences are documented, the data deletion pathway is tested end-to-end. This is the conversation that determines whether counsel signs off in time for GA — and it should be a “review” not a “first look,” because counsel has been engaged since June and seen the architecture evolve.

Final eval suite full run, including the differentiation subset. The release-gate criteria are met or GA slips. The SPEC’s targets — hallucination rate <1%, brain showing measurable advantage over baseline — are validated against the suite.

Final cost projections for Free tier at projected adoption volumes. The SPEC requires stress-testing at 5K, 10K, 50K free users. By Phase 5 the actual cost-per-Free-user data from the beta cohort grounds these projections in reality rather than estimates.

The kill switch tested in staging. Every team member knows how to trip it. The runbook for “trip the kill switch” is on the wall (figuratively — in the runbook directory and linked from Slack).

Marketing campaign execution per the phased plan, no architectural changes from me there.

GA launch mid-October. The phased plan has the right shape for the launch-day operational structure: Cindy holds the gate, Seth on technical readiness, Santiago tracks delivery confidence, Blair makes the final go/no-go call.


The friction points worth surfacing to Blair.

If I’m summarizing this synthesis for Blair as a one-pager, these are the friction points where the phased plan and the architectural commitments don’t quite fit:

The content lead question. The phased plan has Sarah arriving in Phase 3 and being responsible for both brain authoring and marketing content. That’s both too late for brain depth and too much for one person. Recommend: hire a brain-authoring content lead in Phase 1, keep Sarah on the original schedule for marketing/launch content. One additional hire, but it directly addresses the SPEC’s #1 risk.

The Phase 2 scope is tight. The phased plan has auth, full data ingestion, initial chat, and ToS drafts in Phase 2. Realistic delivery in 4 weeks with Ronald + a brand-new Senior AI Engineer is auth + data ingestion + chat skeleton (no citations yet, no eval coverage yet). The “initial chat” claim should be calibrated explicitly with Blair so the milestone isn’t a surprise.

The differentiation eval has to ship by end of Phase 3. Currently the phased plan doesn’t call this out as a milestone item. It should be. If the gap-vs-baseline isn’t widening by month 3, the team has time to course-correct; if it isn’t measured until Phase 5, the discovery is too late.

Privacy counsel engagement needs to be substantive in Phase 1, not just contractual. The architecture review with counsel + Seth needs to be on the calendar in late May, not deferred to Phase 3 when “compliance work intensifies.” Counsel’s input shapes the cross-pollination boundary, the audit dataset, the secret management approach. Engaging them early saves rework later.

The infrastructure-leaning hire needs to be on the V1.5 hiring roadmap, not deferred to V2. Once the agent ships and the system has more failure surface, two engineers carrying ops on top of feature work is a burnout risk. Worth flagging in the budget conversation now.

The model abstraction layer is a Phase 1 architectural decision, not a “we’ll add it later” item. One day of work to wrap the Anthropic SDK in a thin abstraction. A month-plus refactor if it’s deferred and a strategic question forces a second provider.
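The "one day of work" claim is credible because the abstraction is genuinely thin — one interface, provider adapters behind it. A sketch, with all names as assumptions (this is not a real SDK surface, and the Anthropic adapter body is deliberately elided):

```python
from typing import Protocol

class ChatModel(Protocol):
    """The one interface every LLM-calling code path depends on."""
    def complete(self, system: str, user: str) -> str: ...

class FakeChat:
    """Test double; a real AnthropicChat adapter would wrap the SDK here."""
    def complete(self, system: str, user: str) -> str:
        return f"[stub] {user}"

def answer(model: ChatModel, question: str) -> str:
    # Application code takes ChatModel, never a vendor SDK type, so adding
    # a second provider later is a new adapter, not a month-plus refactor.
    return model.complete("You are MemberIntel's advisor.", question)
```

A side benefit worth noting: the same seam is what makes the eval suite and unit tests cheap, since FakeChat-style doubles replace live API calls.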

The per-customer brain versioning is a Phase 1 schema decision. Same reasoning. Add it now, costs nothing; add it after launch, costs real work.


The shape of what to do next.

If you took this synthesis to Seth and Cindy, the first conversation is probably not “approve all of this.” It’s “where do you disagree.” The places I’d expect pushback:

Seth might push back on the Phase 1 ADR scope — too much paper for too little code. Counter: the ADRs are 1-2 pages each, written quickly, and they prevent rehashing decisions during Phase 2 when team velocity matters.

Cindy might push back on the content lead question — adding a hire is real budget. Counter: it’s the highest-leverage hire for the SPEC’s #1 risk, and the phased plan was built before this risk was fully internalized.

Blair might push back on the Phase 2 scope calibration — “initial chat means basic chat” feels like a step back from the original phased plan. Counter: the original phased plan was optimistic about a brand-new senior hire’s ramp speed; the calibration is honest.

The shape of the deliverable from this conversation, if you wanted one, is a revised Phase 1/Phase 2 ramp document that adds the architectural milestones (ADRs, schema, infrastructure foundation, eval baseline, model abstraction, brain versioning, content lead alignment) to the existing phased plan, plus a brief one-pager flagging the friction points with recommendations.

Want me to draft the revised ramp document and the friction-points one-pager? They’d be the natural artifact to take to Seth and Cindy as a starting point for the discussion. Or we can keep going on architecture — the threads we haven’t touched include the in-MP-admin banner integration mechanics (front-end and security questions), the brain content authoring tooling (which is where the content lead hire question becomes concrete), or the data warehouse schema design (canonical tables, evolution, query patterns).

For: Seth Shoultes, Cindy Thoennessen, Blair Williams, Santiago Perez Asis