MemberIntel KB

Quarterly Architecture Review Template

A 90-minute fixed-agenda template for quarterly architectural health reviews — covering differentiation gap, cost-per-cohort economics, reliability, cross-pollination health, compliance posture, and a standing “one thing that worried me” round — starting Q4 2026 post-GA.

Drafted: May 5, 2026
Owner: Blair Williams, CEO
Cadence: Quarterly, 90 minutes, starting Q4 2026 (post-GA)
References: MemberIntel SPEC v1, Decision-Rights Matrix, Phased Plan Rev 2, Friction-Points One-Pager


What this review is

A 90-minute working session held four times a year to honestly assess the architectural and operational health of MemberIntel. Attended by Blair (CEO), Cindy (Product Lead), Seth (Lead Architect), and outside privacy counsel. Other attendees by invitation only — keep the room small enough for honest conversation.

The review is not a status meeting. The team doesn’t run dashboards at it; standing reports go to L10. This meeting exists because dashboards don’t catch judgment-call drift: Has the cross-pollination boundary held under real volume? Is the differentiation gap widening or narrowing? Are we accumulating ops debt that won’t show up until V1.5? Has anyone seen something that worried them but didn’t trigger an alert?

The review explicitly looks for things that are getting worse, not just for where things currently stand. A review that only ever reports green is a review that’s not doing its job.

What this review is not

  • Not an L10. Standing operational status goes there.
  • Not a roadmap meeting. Strategic roadmap discussion happens elsewhere.
  • Not a budget conversation, except where architectural decisions imply budget shifts.
  • Not a forum for re-litigating decisions already made (the SPEC, the decision-rights matrix, prior reviews). New evidence justifies revisiting; preference does not.

Inputs prepared in advance (one week before the meeting)

These get written up by the named owner during the week before the meeting and circulated at least 48 hours before the review. Reading them is pre-work, not in-meeting time.

Cindy’s prep

  • Differentiation gap report. The current eval suite differentiation subset score (MemberIntel vs baseline LLM) trended over the prior quarter. Specific scenarios where the gap narrowed or widened. The hypothesis for why.
  • Free → Pro conversion narrative. Quarter-over-quarter conversion rate, with a one-page narrative on what’s moving it. Not the dashboard data; the story.
  • Customer signal synthesis. Top three customer-reported issues from support tickets, beta cohort feedback, and thumbs-down feedback. Which of these are architecture problems vs content problems vs UX problems.
  • One thing that worried me but didn’t fire an alert. A required field. If Cindy can’t think of one, that’s worth examining.

Seth’s prep

  • Cost-per-cohort review. Free tier and Pro tier cost trends. Outliers. Whether the SPEC §5.4 targets ($1.10/Free, $6–12/Pro) are holding.
  • Reliability narrative. Production incidents in the prior quarter, root causes, what got fixed, what’s a pattern. Specifically: are we accumulating ops debt that the runbooks aren’t catching.
  • Architectural debt inventory. What’s been deferred that’s getting expensive to defer. Includes the model abstraction layer status, infrastructure-leaning hire status, anything else that was “we’ll come back to it.”
  • Cross-pollination job health. Volume of candidates drafted, approved, rejected. Rejection rate (target 10–15%; outside that range means upstream filtering is wrong). Any near-misses on re-identification.
  • One thing that worried me but didn’t fire an alert. Required field. Same logic.
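The rejection-rate band in Seth’s cross-pollination prep is a simple ratio check. A minimal sketch of that check, with the 10–15% band taken from this document and the function name and statuses purely illustrative:

```python
# Classify the quarter's cross-pollination rejection rate against the
# 10-15% target band; per this doc, a rate outside the band suggests
# upstream filtering is wrong. Names here are illustrative, not real code.
def rejection_rate_status(approved, rejected, low=0.10, high=0.15):
    """Return "in-band", "below-band", "above-band", or "no-data"."""
    total = approved + rejected
    if total == 0:
        return "no-data"
    rate = rejected / total
    if rate < low:
        return "below-band"
    if rate > high:
        return "above-band"
    return "in-band"
```

For example, 12 rejections out of 100 candidates lands in-band, while 20 out of 100 flags the upstream filter for discussion.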

Privacy counsel’s prep

  • Compliance posture review. Anything material since last review. Regulatory changes (GDPR enforcement actions, CCPA rule updates, FTC subscription disclosure rules, etc.) that affect MemberIntel’s posture.
  • Data deletion exercise result. Once per year, counsel and Seth jointly run a synthetic data deletion through the actual production system (a test tenant, real pathway). Whether the deletion works end-to-end and how long it takes.
  • Audit log spot-check. Counsel queries the audit dataset for a specific scenario (chosen by counsel, not announced in advance — for example, “show me every access to per-customer brain entries by service accounts other than the chat handler in the last 30 days”). Whether the data is queryable, complete, and defensible.
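Counsel’s spot-check amounts to filtering the audit dataset by resource, actor type, and time window. A minimal sketch of the example scenario above, assuming a hypothetical record shape (the real audit schema isn’t specified in this document):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical audit-record shape; field names are illustrative only.
def flag_unexpected_brain_access(records, window_days=30,
                                 allowed_actors=("chat-handler",)):
    """Return accesses to per-customer brain entries by service accounts
    other than the allowed ones, within the lookback window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    return [
        r for r in records
        if r["resource"] == "per_customer_brain"
        and r["actor_type"] == "service_account"
        and r["actor"] not in allowed_actors
        and r["timestamp"] >= cutoff
    ]
```

The point of the exercise is that a query like this is answerable at all: the data is queryable, complete, and defensible, whatever the actual storage is.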

Blair’s prep

  • One question I want answered this quarter. A single specific question Blair wants the review to address. The question is in the agenda as item 7 and gets explicit time.

The agenda (90 minutes, fixed structure)

0. Pre-meeting check (5 min)

  • Are all prep documents distributed and read? If not, reschedule.
  • Anyone want to add an item not on the standing agenda? Items added at the meeting are time-boxed and don’t displace the fixed sections.

1. Differentiation gap (15 min) — Cindy leads

The SPEC’s #1 risk. The eval differentiation subset is the leading indicator. Three questions:

  1. Is the gap-vs-baseline widening, narrowing, or stable?
  2. If narrowing, what’s the hypothesis — is the brain growing too slowly, is retrieval underperforming, is the baseline LLM getting better, or is it a measurement artifact?
  3. What’s the one action this quarter that would most improve the gap?

The brain content lead joins this segment of the review even if they’re not in the rest of the meeting. Together with Seth, they’re closest to the underlying mechanism.

2. Cost and conversion economics (15 min) — Seth + Cindy

The unit economics conversation. Two questions:

  1. Is cost-per-Free-user holding at or below SPEC target ($1.10/mo)? What’s driving variance?
  2. Is Free → Pro conversion holding at or above SPEC floor (5%, target 8–10%)? What’s the trend?

If both are healthy, this segment is short. If either is drifting, this is where the discussion happens.

3. Privacy, security, and compliance (20 min) — counsel leads

Three questions:

  1. Has anything changed externally — regulatory, vendor, threat — that affects our posture?
  2. Did the data deletion exercise (annual) or audit log spot-check (this quarter) reveal anything material?
  3. Are there architectural decisions on the team’s roadmap that need counsel input now rather than at next quarter’s review?

The cross-pollination boundary specifically: counsel asks Seth whether anything in the prior quarter felt close to the line. Honest answer required.

4. Reliability and ops debt (15 min) — Seth leads

Two questions:

  1. What incidents happened, what was the root cause, what’s the pattern?
  2. What’s accumulating that the runbooks and automation aren’t catching? Where is Seth’s time going that it shouldn’t be?

The infrastructure-leaning hire question lives here. If ops debt is accumulating, the V1.5 hire conversation moves up.

5. The “things that worried me but didn’t fire an alert” round (10 min) — all

Each attendee shares the one item from their prep. Format: 60-second statement of the concern, no immediate solving. The items get logged in the action register and someone owns following up.

This is the most important agenda item. It’s the antidote to “the dashboards are green so everything is fine.” If anyone has nothing to share, the meeting facilitator (rotating, not always Blair) probes specifically.

6. Decisions and action register (10 min) — Cindy facilitates

Walk the open action register from prior reviews:

  • Items completed since last review (acknowledge, close)
  • Items still open with progress (status update, ETA)
  • Items still open with no progress (decide: re-commit, re-scope, or abandon — explicit decision, no drift)

Then: any new actions or decisions from this review get logged here with owner and ETA.

7. Blair’s question (10 min) — Blair leads

The one question Blair brought in. Could be strategic (“are we differentiated enough that BuddyBoss launch lands well?”), tactical (“is the agent eval suite ready to gate V1.5?”), or personnel (“is Seth’s load sustainable?”). The discussion is whatever’s needed.


Standing roster of recurring concerns

These aren’t on the agenda every quarter, but they’re on a rotation so every concern gets explicit attention at least once a year. The review facilitator picks one or two to fold into the relevant agenda section each quarter.

| Concern | Last reviewed | Next due | Owner |
| --- | --- | --- | --- |
| Cross-pollination k-anonymity floor still appropriate? | — | Q1 2027 | Seth + counsel |
| Per-tenant isolation: spot-check the integration test coverage | — | Q1 2027 | Seth |
| Secret rotation cadence executing on schedule? | — | Q4 2026 | Seth |
| Eval suite drift detection — false positive / false negative analysis | — | Q1 2027 | Seth |
| MP plugin version compatibility — what happens if MP ships a breaking change? | — | Q4 2026 | Seth + Paul Carter |
| Anthropic dependency — what would a 24-hour Anthropic outage look like in practice? | — | Q1 2027 | Seth |
| Brain content depth — is the playbook count growing on schedule, and is quality holding? | — | Each quarter | Brain content lead + Cindy |
| Customer support volume and pattern — what are people actually struggling with? | — | Each quarter | Cindy + Wray |
| The differentiation subset eval — are the scenarios still representative of real customer asks? | — | Q2 2027 | Brain content lead + Cindy |
| ToS / Privacy Policy review — anything stale? | — | Q3 2027 (annual) | Cindy + counsel |
| Data deletion exercise (annual) | — | Q3 2027 | Seth + counsel |
| Cost economics stress test (5K, 10K, 50K Free users projection) | Phase 5 GA prep | Q2 2027 | Seth + Cindy |
| SOC 2 readiness assessment | — | Q3 2027 | Seth + counsel |

The table grows over time — every “things that worried me” item that warrants future attention gets added with a next-review-due date.


Action register format

Every action coming out of the review goes into a standing register. Format:

ID | Date opened | Description | Owner | Decision required | ETA | Status | Date closed

Status values: open, in progress, blocked, done, abandoned.

The register lives in the same repo as the architecture documents. Reviewed at the start of every quarterly review. Items in blocked for more than two reviews trigger a specific conversation about whether to re-scope or abandon.
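The blocked-item rule above is mechanical enough to sketch. A minimal illustration, assuming a hypothetical record shape (the real register is a table in the repo, and the field names here are invented for the example):

```python
# Sweep the register for items that have sat in "blocked" status for more
# than two consecutive reviews; these trigger an explicit re-scope-or-abandon
# conversation. "reviews_blocked" is a hypothetical counter field.
def stale_blocked_items(register, reviews_blocked_limit=2):
    """Return register items whose blocked span exceeds the limit."""
    return [
        item for item in register
        if item["status"] == "blocked"
        and item.get("reviews_blocked", 0) > reviews_blocked_limit
    ]
```

However the register is actually stored, the facilitator runs this check at the start of section 6 rather than letting blocked items drift quietly.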


Norms

These are binding for the review itself. They exist because group dynamics in 90-minute architecture meetings tend toward agreement rather than honest assessment.

  1. Honest pessimism is welcome. Soft-pedaling the differentiation gap, the cost trend, or the ops debt to keep the meeting calm only makes the next review’s news worse. The room rewards specificity, including specific concerns.

  2. Counsel speaks freely about uncomfortable items. The meeting has the structure and confidentiality to absorb privacy-related concerns counsel might soften in less private settings. Counsel doesn’t have to soften here.

  3. “I don’t know” is a complete answer. When Blair asks a question and the honest answer is “we haven’t measured that,” the answer is “we haven’t measured that,” followed by the action register entry to measure it. Don’t manufacture certainty.

  4. The action register is binding, not aspirational. Items get specific owners, specific ETAs. An item in the register without an owner is a process failure; the facilitator catches it before the meeting ends.

  5. Quarterly cadence is held. Skipping a review because “nothing’s wrong” is the failure mode the review is designed to prevent. Reschedule, don’t cancel.

  6. Decision rights still apply. This is a discussion forum, not a decision-rights override. Decisions still flow through the matrix; the review surfaces them and ensures they’re explicit, not implicit.

  7. The “things that worried me” round is a required field. Anyone showing up without one is questioned by the facilitator. The instinct to say “nothing concerning this quarter” is exactly the instinct the round exists to counter.


When to call a special review

Quarterly is the floor. A special review is called within 5 business days when any of the following occurs:

  • A confirmed RLS violation or any cross-tenant data exposure incident
  • A confirmed cross-pollination output containing identifiable customer data
  • A privacy incident requiring counsel notification under GDPR or CCPA
  • A cost-per-Free-user breach of 2x SPEC target sustained for 7+ days
  • A customer-data-loss event from a sync pipeline failure
  • A leaked or compromised credential affecting per-customer secrets
  • Anthropic announces a service deprecation or pricing change that materially affects unit economics
  • Two consecutive monthly differentiation eval reviews show the gap narrowing materially

Special reviews are scoped tight: 60 minutes, focused on the specific incident, with a written post-review summary distributed to the same audience as the quarterly. The next quarterly absorbs the action items.
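The cost trigger in the list above ("2x SPEC target sustained for 7+ days") is a consecutive-run check, not a monthly average. A minimal sketch, using the $1.10/Free target from this document; the function name and inputs are illustrative:

```python
# Special-review trigger: cost-per-Free-user above 2x the SPEC target
# ($1.10/mo) for 7 or more consecutive days. Takes one cost figure per day.
def sustained_cost_breach(daily_costs, target=1.10, multiple=2.0, min_days=7):
    """True if daily cost exceeded multiple * target for min_days in a row."""
    threshold = target * multiple
    run = 0
    for cost in daily_costs:
        run = run + 1 if cost > threshold else 0
        if run >= min_days:
            return True
    return False
```

Note the run counter resets on any day back under threshold, so a single recovery day restarts the 7-day clock; that is what "sustained" means here.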


First review setup — Q4 2026 (post-GA)

The first quarterly review happens approximately 6 weeks after GA, in early-to-mid December 2026. The agenda is the standard structure but with three calibrations specific to the post-launch timing:

  • Section 1 (differentiation gap) — first real production data on differentiation. Critical reading.
  • Section 2 (cost and conversion) — first real production data on Free-tier unit economics. Critical reading.
  • Section 4 (ops debt) — first real production data on what hurt during launch. The runbooks get updated based on this.

Cindy schedules this review by Phase 4 (mid-August), pre-blocking calendars, including counsel’s. Late-quarter scheduling is a chronic failure mode; pre-block the recurring slot.


Document version

Draft v1 — to be reviewed alongside the phased plan v2 and friction-points one-pager. The template itself is a living document; it gets revised after the first 1–2 actual reviews based on what’s working and what isn’t.

For: Seth Shoultes, Blair Williams, Santiago Perez Asis