MemberIntel KB


V1 Progress — May 15, 2026

What shipped in V1: site data sync, chat improvements, customer brain spec. What's next: per-customer brain implementation.

What shipped

Site data sync (May 13–15)

The full data pipeline from WordPress to dashboard is live:

  • WordPress plugin (memberintel-connect): 6 REST endpoints exposing members, transactions, subscriptions, memberships, stats, and system info from MemberPress sites. Fixed the memberships query (memberships are a WP custom post type, not a custom table), the subscriptions query (the table has no expires_at column), mp_version detection (get_file_data instead of get_option), and filtering out member_id: 0 records.
  • Backend sync service: Paginated fetch from all WP endpoints, upsert into PostgreSQL with UniqueConstraint(site_id, mp_id) dedup. Five new DB models: Member, Transaction, Subscription, Membership, SiteStats.
  • SPA dashboard: Stat cards (members, MRR, revenue, churn), transactions table, memberships table, sync button.
  • SPA settings: Connected sites list with sync/disconnect, account info, plan section.
  • Multi-site support: Topbar dropdown for switching between sites, activeSiteId in persisted app state.
  • Marketing site auth awareness: SPA sets .membersintel.com cookie, marketing /connect/ page redirects logged-in users to the app.
  • Cloud Run: Backend deployed at memberintel-api-staging.
  • Cloudflare Pages: SPA at app.membersintel.com, marketing at membersintel.com.
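
The sync service's dedup upsert can be sketched roughly as follows. This is a minimal illustration, not the actual code: the Member columns beyond site_id/mp_id and the function shape are assumptions; only the (site_id, mp_id) unique key comes from the notes above.

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Member(Base):
    __tablename__ = "members"
    id = Column(Integer, primary_key=True)
    site_id = Column(Integer, nullable=False)
    mp_id = Column(Integer, nullable=False)        # MemberPress ID from the WP site
    email = Column(String)                         # illustrative column, not the real schema
    __table_args__ = (UniqueConstraint("site_id", "mp_id"),)

def upsert_member_stmt(site_id: int, row: dict):
    """Build INSERT ... ON CONFLICT (site_id, mp_id) DO UPDATE for one synced row."""
    stmt = insert(Member).values(site_id=site_id, mp_id=row["id"],
                                 email=row.get("email"))
    return stmt.on_conflict_do_update(
        index_elements=["site_id", "mp_id"],       # matches the UniqueConstraint
        set_={"email": stmt.excluded.email},
    )
```

Re-running a sync is then idempotent: rows already present for a (site_id, mp_id) pair are updated in place rather than duplicated.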

Chat improvements (May 15)

Three critical fixes to the chat pipeline:

  1. Site data context injection: When a user has an active site, their stats, memberships, and recent transactions are injected into the system prompt. The AI can now answer site-specific questions like “How many members do I have?” and “What’s my MRR?”
  2. Conversation history: The LLM now receives the last 10 messages from the conversation, enabling multi-turn dialogue. Previously each message was standalone with no memory of previous turns.
  3. Voyage query embedding fix: embed_query() now uses input_type="query" instead of input_type="document", improving retrieval quality per Voyage’s documentation.

Customer brain spec (May 15)

Design spec for the per-customer persistent context system: 2026-05-15-customer-brain-design.

Four document types that make the AI advisor remember and improve:

Document  | Owner       | Purpose
SOUL      | User        | Preferences, thinking style, common questions
BIBLE     | Site        | Foundational truth about the site: what it is, who it serves
HEARTBEAT | Site        | Current state, auto-generated from sync data
MEMORY    | User + Site | Running notes from conversations, retrieved by vector search

SOUL + BIBLE + HEARTBEAT are always-on context in every system prompt. MEMORY is retrieved by vector search when relevant.
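
A sketch of how that assembly might look. The document names come from the spec; the section headers and function signature are assumptions.

```python
def build_system_prompt(soul: str, bible: str, heartbeat: str,
                        memories: list[str]) -> str:
    """SOUL + BIBLE + HEARTBEAT are always present; MEMORY only when retrieved."""
    sections = [
        "## SOUL (user preferences)\n" + soul,
        "## BIBLE (site foundations)\n" + bible,
        "## HEARTBEAT (current state)\n" + heartbeat,
    ]
    if memories:  # vector-search hits, injected only when relevant to the query
        sections.append("## MEMORY\n" + "\n".join(f"- {m}" for m in memories))
    return "\n\n".join(sections)
```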


What’s next

Phase A: Schema + HEARTBEAT (replaces current site-context injection)

  • Alembic migration: user_souls and site_contexts tables
  • generate_heartbeat() template function (replaces raw SQL build_site_context())
  • generate_initial_bible() template from site URL + memberships
  • Hook HEARTBEAT generation into sync_site()
  • Update system prompt to inject SOUL + BIBLE + HEARTBEAT
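
The generate_heartbeat() template might look like the sketch below: render the HEARTBEAT document from already-synced stats rather than running raw SQL at chat time. The stat and membership field names are assumptions.

```python
def generate_heartbeat(stats: dict, memberships: list[dict]) -> str:
    """Render the HEARTBEAT document from synced site stats (hypothetical fields)."""
    lines = [
        f"Members: {stats['member_count']}",
        f"MRR: ${stats['mrr']:,.2f}",
        f"Churn (30d): {stats['churn_rate']:.1%}",
        "Memberships:",
    ]
    lines += [f"- {m['title']}: {m['active']} active" for m in memberships]
    return "\n".join(lines)
```

Hooking this into sync_site() means the HEARTBEAT is refreshed every time site data lands, so the chat path only reads a pre-rendered string.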

Phase B: LLM tool calls (update_customer_brain)

  • Add Anthropic tool use support to call_stream() and call()
  • Implement update_customer_brain tool handler
  • SOUL/BIBLE updates (read-modify-write), MEMORY creation (new brain_entry)
  • Tests for tool call interception and persistence
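
A possible shape for the tool, as a sketch: the schema dict follows Anthropic's tool-use format, but the fields, the enum values mapping to the three writable documents, and the handler are assumptions about Phase B, not final design.

```python
UPDATE_BRAIN_TOOL = {
    "name": "update_customer_brain",
    "description": "Persist a change to the customer's SOUL, BIBLE, or MEMORY.",
    "input_schema": {
        "type": "object",
        "properties": {
            "target": {"type": "string", "enum": ["soul", "bible", "memory"]},
            "content": {"type": "string"},
        },
        "required": ["target", "content"],
    },
}

def handle_tool_call(tool_input: dict, store: dict) -> str:
    """Apply one tool call: append a new brain_entry for memory, overwrite otherwise."""
    target, content = tool_input["target"], tool_input["content"]
    if target == "memory":
        store.setdefault("memory", []).append(content)  # MEMORY: new brain_entry
    else:
        store[target] = content  # SOUL/BIBLE: write back after read-modify-write
    return f"{target} updated"
```

In the real pipeline, call_stream() would intercept tool_use blocks from the model, route them through a handler like this, and return a tool_result before continuing the stream.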

Phase C: Memory retrieval + API

  • Wire collection='memory' search into chat pipeline
  • Add brain API endpoints (GET/PUT soul, GET/PUT bible, GET heartbeat)
  • RAG re-ingestion from Hive Mind (ensure current data)
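
The memory retrieval step reduces to a top-k similarity search over the user+site's MEMORY entries. The real pipeline would query the vector store's memory collection; this self-contained sketch just shows the ranking logic with a plain cosine score.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_memories(query_vec: list[float],
                      entries: list[tuple[str, list[float]]],
                      top_k: int = 3) -> list[str]:
    """entries: (text, embedding) pairs already filtered to this user + site."""
    ranked = sorted(entries, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]
```

The returned texts would then feed the MEMORY section of the system prompt only when their scores clear a relevance threshold.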
For: Seth Shoultes