# V1 Progress — May 15, 2026
What shipped in V1: site data sync, chat improvements, customer brain spec. What's next: per-customer brain implementation.
## What shipped
### Site data sync (May 13–15)
The full data pipeline from WordPress to dashboard is live:
- WordPress plugin (`memberintel-connect`): 6 REST endpoints exposing members, transactions, subscriptions, memberships, stats, and system info from MemberPress sites. Fixed the memberships query (WP CPT, not a custom table), subscriptions (no `expires_at` column), `mp_version` (`get_file_data` instead of `get_option`), and `member_id: 0` filtering.
- Backend sync service: Paginated fetch from all WP endpoints, upsert into PostgreSQL with `UniqueConstraint(site_id, mp_id)` dedup. Five new DB models: Member, Transaction, Subscription, Membership, SiteStats.
- SPA dashboard: Stat cards (members, MRR, revenue, churn), transactions table, memberships table, sync button.
- SPA settings: Connected sites list with sync/disconnect, account info, plan section.
- Multi-site support: Topbar dropdown for switching between sites, `activeSiteId` in persisted app state.
- Marketing site auth awareness: SPA sets a `.membersintel.com` cookie; the marketing `/connect/` page redirects logged-in users to the app.
- Cloud Run: Backend deployed at `memberintel-api-staging`.
- Cloudflare Pages: SPA at `app.membersintel.com`, marketing at `membersintel.com`.
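The sync service described above can be sketched as a paginated fetch-and-upsert loop. This is a minimal illustration, not the actual implementation: `fetch_page` and `upsert` are stand-ins for the real WP REST client and DB layer, and the page size is an assumed default.

```python
from typing import Callable

def sync_endpoint(fetch_page: Callable[[int], list[dict]],
                  upsert: Callable[[dict], None],
                  page_size: int = 100) -> int:
    """Pull every page from one WP REST endpoint and upsert each record.

    Dedup is handled downstream by the (site_id, mp_id) unique constraint,
    so re-running a sync is idempotent: existing rows are updated rather
    than duplicated.
    """
    page, total = 1, 0
    while True:
        records = fetch_page(page)
        if not records:
            break
        for record in records:
            upsert(record)
        total += len(records)
        if len(records) < page_size:
            break  # short page means this was the last one
        page += 1
    return total
```

The same loop runs once per endpoint (members, transactions, subscriptions, memberships, stats), which keeps each endpoint's pagination independent.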
### Chat improvements (May 15)
Three critical fixes to the chat pipeline:
- Site data context injection: When a user has an active site, their stats, memberships, and recent transactions are injected into the system prompt. The AI can now answer site-specific questions like “How many members do I have?” and “What’s my MRR?”
- Conversation history: The LLM now receives the last 10 messages from the conversation, enabling multi-turn dialogue. Previously each message was standalone with no memory of previous turns.
- Voyage query embedding fix: `embed_query()` now uses `input_type="query"` instead of `input_type="document"`, improving retrieval quality per Voyage's documentation.
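The conversation-history fix amounts to windowing the stored transcript before each LLM call. A minimal sketch, assuming messages are stored as `{"role", "content"}` dicts; the function name is illustrative, but the 10-message window matches the fix described above.

```python
def build_llm_messages(history: list[dict], new_user_message: str,
                       window: int = 10) -> list[dict]:
    """Return the message list sent to the LLM: the last `window` turns
    from the stored conversation, plus the incoming user message.

    Truncating to a fixed window keeps multi-turn context without letting
    long conversations blow past the model's context budget.
    """
    recent = history[-window:]
    return recent + [{"role": "user", "content": new_user_message}]
```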
### Customer brain spec (May 15)
Design spec for the per-customer persistent context system: `2026-05-15-customer-brain-design`.
Four document types that make the AI advisor remember and improve:
| Document | Owner | Purpose |
|---|---|---|
| SOUL | User | Preferences, thinking style, common questions |
| BIBLE | Site | Foundational truth about the site — what it is, who it serves |
| HEARTBEAT | Site | Current state, auto-generated from sync data |
| MEMORY | User+Site | Running notes from conversations, retrieved by vector search |
SOUL + BIBLE + HEARTBEAT are always-on context in every system prompt. MEMORY is retrieved by vector search when relevant.
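That composition rule can be sketched as a prompt-assembly function. The section labels and function name here are illustrative, not from the spec; what it shows is the always-on vs. retrieved-when-relevant split.

```python
def build_system_prompt(soul: str, bible: str, heartbeat: str,
                        memories: list[str]) -> str:
    """Compose the system prompt: SOUL, BIBLE, and HEARTBEAT are always
    included; MEMORY entries appear only when vector search returned
    relevant notes for the current question.
    """
    sections = [
        ("SOUL (user preferences)", soul),
        ("BIBLE (site fundamentals)", bible),
        ("HEARTBEAT (current site state)", heartbeat),
    ]
    if memories:  # retrieved, not always-on
        sections.append(("MEMORY (relevant notes)", "\n".join(memories)))
    return "\n\n".join(f"## {label}\n{body}" for label, body in sections if body)
```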
## What’s next
### Phase A: Schema + HEARTBEAT (replaces current site-context injection)
- Alembic migration: `user_souls` and `site_contexts` tables
- `generate_heartbeat()` template function (replaces raw SQL `build_site_context()`)
- `generate_initial_bible()` template from site URL + memberships
- Hook HEARTBEAT generation into `sync_site()`
- Update system prompt to inject SOUL + BIBLE + HEARTBEAT
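A `generate_heartbeat()` template could look like the sketch below. The field names mirror the dashboard stat cards (members, MRR, revenue, churn), but the exact stats schema is an assumption.

```python
def generate_heartbeat(stats: dict) -> str:
    """Render a HEARTBEAT document from synced site stats.

    Runs at the end of each sync, so the document is plain rendered text
    rather than a live query: the prompt stays cheap to assemble and
    reflects the last successful sync.
    """
    return (
        f"Members: {stats['member_count']}\n"
        f"MRR: ${stats['mrr']:,.2f}\n"
        f"Total revenue: ${stats['revenue']:,.2f}\n"
        f"Churn (30d): {stats['churn_rate']:.1%}\n"
        f"Last synced: {stats['synced_at']}"
    )
```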
### Phase B: LLM tool calls (`update_customer_brain`)
- Add Anthropic tool use support to `call_stream()` and `call()`
- Implement `update_customer_brain` tool handler
- SOUL/BIBLE updates (read-modify-write), MEMORY creation (new `brain_entry`)
- Tests for tool call interception and persistence
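One way Phase B could shape up: a tool definition in the Anthropic tool-use format (`name` / `description` / `input_schema`) plus a handler that dispatches on document type. The schema fields and the `store` interface are assumptions for illustration, not the planned implementation.

```python
# Hypothetical tool definition, following Anthropic's tool-use schema shape.
UPDATE_CUSTOMER_BRAIN_TOOL = {
    "name": "update_customer_brain",
    "description": "Persist a change to the customer's SOUL, BIBLE, or MEMORY.",
    "input_schema": {
        "type": "object",
        "properties": {
            "document": {"type": "string", "enum": ["soul", "bible", "memory"]},
            "content": {"type": "string"},
        },
        "required": ["document", "content"],
    },
}

def handle_update_customer_brain(tool_input: dict, store) -> str:
    """Dispatch one tool call: SOUL/BIBLE are read-modify-write on a single
    document, MEMORY appends a new brain_entry. `store` stands in for the
    real DB layer."""
    doc, content = tool_input["document"], tool_input["content"]
    if doc == "memory":
        store.append_memory(content)         # new brain_entry row
    else:
        store.update_document(doc, content)  # overwrite soul/bible
    return f"{doc} updated"
```

The handler's return value would be sent back to the model as the tool result, letting it confirm the update in its reply.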
### Phase C: Memory retrieval + API
- Wire `collection='memory'` search into the chat pipeline
- Add brain API endpoints (GET/PUT soul, GET/PUT bible, GET heartbeat)
- RAG re-ingestion from Hive Mind (ensure current data)
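Wiring MEMORY search into the pipeline likely needs a relevance gate so that weak matches never pollute the prompt. A sketch under assumptions: `search` stands in for the real vector-search client, and the threshold and hit shape (`text`/`score`) are illustrative.

```python
from typing import Callable

def retrieve_memories(search: Callable[..., list[dict]], query: str,
                      top_k: int = 5, min_score: float = 0.75) -> list[str]:
    """Query the vector store's 'memory' collection and keep only hits
    above a relevance threshold. Returning an empty list means the
    MEMORY section is simply omitted from the system prompt.
    """
    hits = search(query, collection="memory", top_k=top_k)
    return [h["text"] for h in hits if h["score"] >= min_score]
```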