PRODUCT DEMO — MARCH 2026

SavirOS

The AI-Powered Relationship Operating System

Never walk into a meeting unprepared. Never forget what you discussed. Never drop a promise.

47
Autonomous Tasks
2
AI Agents
5
Memory Layers
9
Eval Suites
PMIA 10/10 · SavirAI 9/10 · Waves 0–7B Complete
saviros.com
SavirOS
Product · Pricing · About · Blog
Sign In · Get Started

Your AI-Powered
Relationship Operating System

Auto-generated intelligence briefs before every meeting. Relationship memory that compounds. Zero effort.

PMIA Agent
Auto-generates intelligence briefs before every meeting with 5 sub-agents
SavirAI Assistant
Your relationship brain — ask anything about contacts, meetings, promises
5-Layer Memory
Working, episodic, semantic, procedural, and prospective memory layers
SavirOS

Sign in to your account

🔵 Continue with Google
or sign in with email
Sign In
saviros.com/app/settings
Profile
Notifications
Integrations
Billing
Security
Data & Privacy

Integrations

Connect your tools to power SavirOS intelligence

📅
Google Calendar
✓ Connected
📧
Gmail
✓ Connected
📬
Outlook
Connect
🎥
Zoom
✓ Connected
📹
Google Meet
Connect
🗓️
Calendly
Connect
🎙️
Recall.ai
Connect
🔥
Fireflies.ai
Connect
💼
LinkedIn
Coming Soon

Connecting Google Calendar

Syncing events and activating intelligence...

Authentication verified
Calendar access granted
156 events synced (last 90 days)
89 unique contacts detected
PMIA activated — generating first briefs
5-layer memory initialized
Building relationship graph...
SavirOS — saviros.com/app PMIA Active
SavirAI
Online
Welcome back, Arham! You have 18 meetings this week. I've prepared briefs for all of them.
Show me my busiest day
Tuesday is your busiest with 5 meetings from 9am–5pm. Back-to-back from 10am–1pm. I'd suggest blocking a 15-min break.

March 2026

SUN MON TUE WED THU FRI SAT
Overview
March 2026
S M T W T F S
18
Meetings
18
Briefs
3
Overdue
Team Standup
9:00 AM · Google Meet
Sarah Chen — Investor
10:00 AM · Zoom · Brief Ready
Sarah → EIR intro (overdue)
Due: Jan 15
SavirOS — saviros.com/app PMIA Active
SavirAI
Online
What promises do I have with Sarah?
You have 2 promises with Sarah Chen:

Overdue: Sarah → Intro to Sequoia EIR (Dec 15)
Done: You → Send metrics (Jan 8)

I'd suggest following up in your meeting tomorrow.

Mar 2 – 8, 2026

SUN
1
MON
2
TUE
3
WED
4
THU
5
FRI
6
SAT
7
Week Overview
March 2026
S M T W T F S
Sarah Chen
Strong · Last: Dec 15
Raj Patel
Cooling · Last: Nov 2
Mike Johnson
At Risk · Last: Sep 18
SavirOS — saviros.com/app PMIA Active
SavirAI
Online
Good morning! 5 meetings today. The 10am with Sarah Chen is most important — overdue promise + role change signal.
Prep me for Sarah
Quick prep:
🔄 Sarah joined Sequoia Capital 3 weeks ago
📧 Positive email tone — "exciting progress"
Overdue: EIR intro (Dec 15)
💡 Congratulate on role, reference AI conversation, follow up on EIR intro

Tuesday, March 3

8 AM
9 AM
Team Standup
Google Meet · 30 min · 4 attendees
10 AM
Sarah Chen — Investor Check-in
Zoom · 45 min · Brief Ready · ⚠️ Overdue Promise
11 AM
12 PM
Product Review — Q1 Roadmap
Google Meet · 1 hr · 6 attendees · Brief Ready
1 PM
2 PM
Demo — Acme Corp (Enterprise)
Zoom · 30 min · Discovery Brief Ready
3 PM
4 PM
Advisor — Raj Patel
Google Meet · 30 min · Brief Generating...
Intelligence Brief
Meeting
Sarah Chen — Investor Check-in
10:00 AM · Zoom · 45 min
Key Signals
🔄 Role Change: Joined Sequoia Capital
📧 Email: Positive — "exciting progress"
LinkedIn: AI-native tools post
Promises
⏰ Sarah → EIR intro (overdue)
✓ You → Metrics (done Jan 8)
Strategy
Congratulate on Sequoia move. Reference AI conversation. Follow up on EIR intro. Share Q1 metrics.
SHARED BRAIN — BOTH AGENTS READ/WRITE

5-Layer Memory Architecture

Every meeting makes both agents smarter. Memory compounds across all interactions.

💭
Working Memory
Current conversation + active meeting context (in-prompt)
Store: In-context window
Real-time
📖
Episodic Memory
Structured meeting memories — narrative, decisions, commitments
Store: meeting_episodes
Per-meeting
🧠
Semantic Memory
Extracted facts + consolidated per-contact profiles (deduplicated, scored)
Store: knowledge_facts + contact_intelligence
Compounding
⚙️
Procedural Memory
Learned user preferences — "prefers bullet points", "always asks about family"
Store: procedural_memory
Behavioral
🔮
Prospective Memory
Future commitments — promises tracked, calendar events, reminders
Store: promises + calendar_events
Forward
KEY
Both PMIA and SavirAI read/write to all layers. Post-meeting processing extracts facts → consolidates into ContactIntelligence → enriches next brief + query.
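As a rough sketch, the four persisted layers might be typed as below (working memory lives in the prompt context, so it has no store). Field names are illustrative assumptions, not the production schema; the filter mirrors the rule that facts under 0.5 confidence are dropped at retrieval.

```typescript
// Illustrative store shapes for the memory layers (hypothetical field names).
interface MeetingEpisode {            // episodic: one record per meeting
  meetingId: string;
  narrative: string;
  decisions: string[];
  commitments: string[];
}

interface KnowledgeFact {             // semantic: extracted, scored, deduplicated
  contactId: string;
  fact: string;
  confidence: number;
}

interface ProceduralPreference {      // procedural: learned user habits
  userId: string;
  preference: string;                 // e.g. "prefers bullet points"
}

interface Commitment {                // prospective: promises + reminders
  from: string;
  to: string;
  description: string;
  dueDate: string;
  status: "active" | "overdue" | "done";
}

// Retrieval-side filter, mirroring the "<0.5 confidence filtered" rule.
function filterFacts(facts: KnowledgeFact[], floor = 0.5): KnowledgeFact[] {
  return facts.filter(f => f.confidence >= floor);
}
```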
PMIA — PRE-MEETING INTELLIGENCE AGENT

Autonomous 7-Phase Pipeline

12/12 pre-meeting tasks with zero user effort · Grounded in Andrew Ng's 4 Agentic Patterns

1. Context
2. Planner
3. Execution
4. Synthesis
5. Generator
6. Reflection
7. Delivery
🔍
Research
ReAct loop + 5 tools
🤝
Relationship
ReAct + RAG quality
📧
Email
Gmail + Outlook
💼
LinkedIn Delta
Profile changes
🔗
Mutual Conn.
Graph traversal
COST
$0.13/brief · 5–9 LLM calls
LATENCY
12–18s end-to-end
QUALITY
Reflection gate ≥ 7/10
PHASE 2 — ADAPTIVE PLANNING

LLM-Powered Planning Agent

Haiku analyzes meeting context, existing knowledge, and past eval scores to create adaptive execution plans

Planner Input
Meeting: Sarah Chen — Investor Check-in
Brief Type: Intelligence (returning)
Past Brief Score: 62/100 (weak on talking points)
Existing Data: LinkedIn profile (2 weeks old), 4 past meetings
Eval Weakness: SPECIFICITY (scored 5.2/10)
Planner Output (ExecutionPlan)
Stage 1: Relationship Agent (get timeline first)
Stage 2: Research (skip LinkedIn — recent), Email, LinkedIn Delta
Focus: News + email tone (compensate for past SPECIFICITY gap)
Refinement budget: 2 retries if completeness < 40
ReAct iterations: Research=2, Relationship=1
ADAPTIVE
LLM planner reasons about what's known vs unknown. Past eval scores drive focus areas. Heuristic fallback if LLM fails.
EVAL LOOP
Past brief accuracy → avgPastFeedbackScore → planner decisions → EVAL-BASED QUALITY GUIDANCE in generator prompt
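A minimal sketch of the heuristic fallback path: even when the LLM planner fails, past eval scores can still drive the plan, with the weakest dimension setting the focus. Type and function names are assumptions; the budgets match the numbers shown above.

```typescript
// Heuristic fallback planner: weakest past eval dimension drives focus.
interface EvalScores { [dimension: string]: number }   // e.g. { SPECIFICITY: 5.2 }

interface ExecutionPlan {
  focus: string;                                       // area to compensate for
  refinementBudget: number;                            // retries if completeness low
  reactIterations: { research: number; relationship: number };
}

function heuristicPlan(pastScores: EvalScores): ExecutionPlan {
  // Sort dimensions ascending by score; the lowest becomes the focus area.
  const weakest = Object.entries(pastScores)
    .sort(([, a], [, b]) => a - b)[0]?.[0] ?? "GENERAL";
  return {
    focus: weakest,
    refinementBudget: 2,                               // "2 retries if completeness < 40"
    reactIterations: { research: 2, relationship: 1 }, // "Research=2, Relationship=1"
  };
}
```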
PHASE 3 — MULTI-AGENT COLLABORATION

Shared State Blackboard

Sub-agents read each other's outputs mid-execution — not just their own inputs

Stage 1: Foundation
Relationship Agent runs first
→ Writes: timeline, promises, health, lastMeetingDate
→ Detects: "4 meetings, 1 overdue promise, cooling trend"
Stage 2: Intelligence
Research reads shared state → skips known data
Email reads lastMeetingDate + overduePromises → prioritizes relevant threads
LinkedIn Delta reuses research profiles → no duplicate API calls
Stage 3: Enrichment
Mutual Connections reads LinkedIn profiles from Research → enriches company data
All agents write signals to shared state → accumulated intelligence
SHARED STATE OBJECT (SharedAgentState)
signals: ["role_change: a16z→Sequoia", "overdue_promise: EIR intro"]
hypotheses: ["Re-engagement after career transition"]
research: { linkedInProfiles, companyData, news }
relationship: { timeline, promises, health, lastMeetingDate }
email: { threads, toneAnalysis, lastActivity }
quality: { completeness: 78, dataSourcesUsed: 4/5 }
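The shared state object above could be typed along these lines (an assumption based on the fields shown); the append helper illustrates why signals accumulate across sub-agents instead of being overwritten.

```typescript
// Hypothetical typing of the blackboard, from the fields shown on the slide.
interface SharedAgentState {
  signals: string[];
  hypotheses: string[];
  research: { linkedInProfiles: unknown[]; companyData?: unknown; news?: unknown };
  relationship: { timeline: unknown[]; promises: unknown[]; health?: string; lastMeetingDate?: string };
  email: { threads: unknown[]; toneAnalysis?: string; lastActivity?: string };
  quality: { completeness: number; dataSourcesUsed: string };
}

// Sub-agents append (with dedup) rather than overwrite, so intelligence accumulates.
function addSignal(state: SharedAgentState, signal: string): void {
  if (!state.signals.includes(signal)) state.signals.push(signal);
}
```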
SUB-AGENT INTELLIGENCE

ReAct Loops + Dynamic Tool Use

Agents reason about what to search, observe results, and adapt — not fixed API sequences

Research Agent — ReAct Loop (2 iterations, 15s budget)
ITER 0 (10s): Plan → search_linkedin("sarah@acme.com") → "Now at Sequoia" → search_company("sequoia.com") → search_news("sequoia capital 2026")
OBSERVE: LinkedIn ✓ Company ✓ News ✓ — Gap: no web search for person
ITER 1 (5s): Detect gap → planCompensatoryTools() → search_web("Sarah Chen Sequoia Capital") → Additional context found
WRITE: signals["role_change"], signals["company_news"] → shared state
Relationship Agent — Quality-Check ReAct
RETRIEVE: RAG query for past meetings with Sarah Chen
CHECK: timeline.length < 2 AND no pastSummaries?
RETRY: Broaden query with enriched names from shared state + add vector_transcripts source (5s timeout)
RESULT: 4 past meetings found, lastMeetingDate extracted → shared state
TOOL 1
search_linkedin
TOOL 2
search_company
TOOL 3
search_news
TOOL 4
search_web
TOOL 5
check_state
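A stripped-down skeleton of the ReAct pattern the sub-agents follow: act with each planned tool, observe the result, and carry unanswered gaps into the next iteration. Tool bodies and the gap-handling policy are simplified assumptions, not the production loop.

```typescript
// Each tool takes a query and returns an observation, or null on a miss.
type Tool = (query: string) => string | null;

function reactLoop(
  tools: Record<string, Tool>,
  initialQueries: Record<string, string>,
  maxIterations = 2                          // matches the 2-iteration budget
): Record<string, string> {
  const observations: Record<string, string> = {};
  let pending = { ...initialQueries };

  for (let iter = 0; iter < maxIterations && Object.keys(pending).length; iter++) {
    const retry: Record<string, string> = {};
    for (const [toolName, query] of Object.entries(pending)) {
      const result = tools[toolName]?.(query) ?? null;   // ACT
      if (result !== null) observations[toolName] = result; // OBSERVE
      else retry[toolName] = query;          // gap detected: carry into next iter
    }
    pending = retry;                         // compensate on the next pass
  }
  return observations;
}
```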
PHASE 4 — CROSS-SOURCE INTELLIGENCE

Synthesis Agent

Single Sonnet call cross-references all sub-agent outputs → produces intelligence, not data dumps

INPUT (Sub-Agent Outputs)
RESEARCH
Sarah → Sequoia Capital, AI-native tools post
RELATIONSHIP
4 meetings, cooling trend, 1 overdue promise
EMAIL
Positive tone, "exciting progress", +40% engagement
LINKEDIN DELTA
Title: Partner (was VP), Company: Sequoia (was a16z)
CROSS-MEETING
Pattern: "AI distribution" discussed in 3 of last 4 meetings
OUTPUT (SynthesisOutput)
Narrative
"Sarah transitioned to Sequoia as Partner 3 weeks ago. Her AI-native tools focus aligns with SavirOS. Relationship cooling due to career transition, not disengagement. Overdue EIR promise likely deprioritized during move."
Signals
role_change (critical) · email_positive (important) · ai_interest (fyi)
Hypotheses
"May want to rebuild network" · "Sequoia Series B = hiring angle"
Strategy
Open: congratulate. Avoid: pressuring on old promises. Watch for: fund alignment signals.
PHASE 6 — EVALUATOR-OPTIMIZER PATTERN

Reflection Agent

Haiku evaluates brief on 5 dimensions. Threshold: 7/10. Max 2 retries with specific feedback injection.

9
Specificity
Talking points specific to THIS meeting, not generic
8
Grounding
Every claim traces to source data, no hallucination
8
Actionability
Concrete actions, not vague suggestions
7
Completeness
All data sources utilized in final brief
9
Insight Density
Connects dots across sources, not just lists facts
8.2
PASS — Above 7/10 threshold
Brief proceeds to delivery · No regeneration needed
IF FAIL
Specific feedback → generator re-runs with QUALITY FEEDBACK section → re-evaluate (max 2x)
STORED
_reflectionScore + _reflectionIterations saved in brief metadata for eval tracking
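The evaluator-optimizer loop can be sketched as follows, with `generate` and `evaluate` as stand-ins for the Sonnet generator and Haiku judge; the 7/10 threshold and 2-retry budget come from the slide.

```typescript
interface Reflection { scores: number[]; feedback: string }

function reflectAndRetry(
  generate: (feedback?: string) => string,   // stand-in for the brief generator
  evaluate: (brief: string) => Reflection,   // stand-in for the 5-dimension judge
  threshold = 7,
  maxRetries = 2
): { brief: string; score: number; iterations: number } {
  let feedback: string | undefined;
  let brief = "", score = 0, i = 0;
  for (; i <= maxRetries; i++) {
    brief = generate(feedback);              // feedback injected on retries only
    const r = evaluate(brief);
    score = r.scores.reduce((a, b) => a + b, 0) / r.scores.length;
    if (score >= threshold) break;           // PASS: proceed to delivery
    feedback = r.feedback;                   // FAIL: specific feedback for next pass
  }
  return { brief, score, iterations: i + 1 }; // stored as _reflectionIterations
}
```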
AUTO-GENERATED PREP BRIEF

Sarah Chen — Investor Check-in

March 3, 2026 · 10:00 AM · Zoom · Quality: 8.2/10

Synthesized Intelligence
Sarah transitioned from a16z to Sequoia Capital as Partner 3 weeks ago. Her LinkedIn posts center on "AI-native enterprise tools" — strong alignment with SavirOS. Email analysis shows positive sentiment (+40% engagement). Relationship has an overdue EIR intro — likely deprioritized during career transition, not intentional.
Signals
🔄 Role Change: a16z → Sequoia (Partner) — 3 weeks ago
LinkedIn: "AI-native tools are the future" — 5 days ago
📧 Email Tone: Positive — engagement ↑40%
🤝 Mutual: 3 shared contacts inc. David Kim
Promises
⏰ Sarah → EIR intro (48 days overdue)
✅ You → Metrics deck (done Jan 8)
Strategy
1. Congratulate on Sequoia
2. Reference AI distribution conversation
3. Share Q1 traction
4. Gently follow up on EIR intro
5. Explore fund alignment
Data Sources
LinkedIn (Proxycurl)Gmail (12 threads)4 Past MeetingsNewsData.ioHunter.io
SAVIRAI — YOUR RELATIONSHIP BRAIN

4-Phase Agentic Pipeline

Triage-RAG agent with agentic retrieval, smart context assembly, and 6-dimension reflection

1. Understand
Query rewrite → Triage (Haiku) → Entity resolution (4-tier + Levenshtein) → Direct answer check
2. Retrieve
Two-phase retrieval planner → Firestore + Pinecone parallel → CRAG grade → Semantic fallback
3. Assemble
Intent-based ordering → Relationship narratives → Data gap markers → Quality band detection
4. Generate
Sonnet + GROUNDING RULES → 6-dim reflection (Haiku) → Max 2 retries → Follow-up suggestions
$0.04
per query
5-7s
latency
8
action types
6
reflection dims
KEY
SavirAI reads PMIA briefs + memory layers. Every answer is grounded in real data with anti-hallucination rules. Citations provided.
PHASE 1 — UNDERSTAND

Triage + Entity Resolution

Haiku classifies intent, extracts entities with 4-tier resolution + Levenshtein spell correction

ENTITY RESOLUTION (4-TIER)
Tier 0: Conversation Context
Recently mentioned contacts get boost — "her" → Sarah from last message
Tier 1: Exact Match
"Sarah Chen" → exact name/email lookup in contacts
Tier 1.5: Spell Correction
"Sarahh Chn" → Levenshtein similarity >75% → "Sarah Chen"
Tier 2: Fuzzy + Calendar
Partial match against calendar attendees as fallback
Tier 3: Relationship Weighted
Multiple "Sarah"s? Rank with a healthScore × 0.001 boost
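Tiers 1 and 1.5 can be sketched with a plain Levenshtein implementation and the >75% similarity cutoff; the contact list shape is a simplification of the real lookup.

```typescript
// Classic dynamic-programming edit distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
  return dp[a.length][b.length];
}

// Tier 1 (exact), then Tier 1.5 (spell correction above 75% similarity).
function resolveName(query: string, contacts: string[]): string | null {
  const q = query.toLowerCase();
  const exact = contacts.find(c => c.toLowerCase() === q);
  if (exact) return exact;
  let best: string | null = null, bestSim = 0.75;           // cutoff from slide
  for (const c of contacts) {
    const sim = 1 - levenshtein(q, c.toLowerCase()) / Math.max(q.length, c.length);
    if (sim > bestSim) { bestSim = sim; best = c; }
  }
  return best;
}
```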
TRIAGE INTENTS
query
"What promises do I have with Sarah?" → RAG retrieval pipeline
action
"Schedule follow-up with Raj" → Action handler + ActionContext
pmia
"Generate brief for tomorrow's meeting" → Trigger PMIA pipeline
chitchat
"Hello" / "Thanks" → Direct answer, skip retrieval
setup
"Connect my calendar" → Panel action: open_settings / open_onboarding
PHASE 2 — RETRIEVE

Agentic RAG with CRAG

Two-phase retrieval planning → quality grading → semantic fallback if needed

Step 1: Retrieval Planner (Intent-Based Source Routing)
Promise query → Primary: sql_promises · Secondary: contacts, events (if primary thin)
Temporal query → Primary: events + summaries · Secondary: vector search
Contact query → Primary: contacts + narratives · Secondary: prep_briefs
No-entity query → Primary: vector (semantic) · Secondary: structured fallback
Step 2: Parallel Retrieval (Firestore + Pinecone)
Firestore: contacts, promises, events, summaries, prep_briefs, conversations (structured)
Pinecone: semantic search with text-embedding-3-small (Redis-cached, 24hr TTL, SHA-256 key)
Memory: contact_intelligence, knowledge_facts, meeting_episodes, procedural_memory
Step 3: CRAG Quality Grading
GOOD (sufficient data) → Proceed to assembly
PARTIAL (some data) → Supplement with semantic fallback (broader 0.6 threshold) + dedup
BAD (no useful data) → Rewrite query + retry once → If still bad: DATA CONFIDENCE: LOW injected into prompt
ANTI-HALLUC
Quality bands (thin/partial/rich) detected and injected. Empty sections get "[No data available]" markers. Top 3 priority sections only.
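One way to express the CRAG grade as a function of retrieval counts; the two-source sufficiency rule below is an illustrative assumption, not the production heuristic.

```typescript
type Grade = "GOOD" | "PARTIAL" | "BAD";

// Grade the retrieval by how many sources returned anything useful.
function gradeRetrieval(resultsBySource: Record<string, unknown[]>): Grade {
  const populated = Object.values(resultsBySource).filter(r => r.length > 0).length;
  if (populated === 0) return "BAD";        // rewrite query + retry once
  if (populated >= 2) return "GOOD";        // assumption: two populated sources suffice
  return "PARTIAL";                         // supplement with semantic fallback
}
```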
PHASE 4 — ANTI-HALLUCINATION HARDENING

6-Dimension Reflection

GROUNDING has a hard floor — if grounding < 2, forces retry regardless of overall score

4/5
Relevance
Directly answers question
4/5
Specificity
Cites names, dates, facts
5/5
Data Usage
References provided context
4/5
GROUNDING ⚡
Hard floor: <2 = force retry
4/5
Actionability
Concrete next steps
5/5
Tone
Professional, warm
GROUNDING RULES (System Prompt)
1. NEVER fabricate meetings, facts, or interactions not in provided data
2. If data is missing, say "I don't have information about that" — do NOT guess
3. Distinguish between "stated" (direct data) and "implied" (inferred)
4. Low-confidence facts (<0.5) filtered from retrieval
5. Per-section data gap markers: [No promise data available]
6. Quality bands: thin (0-1 types) / partial (2-3) / rich (4+)
7. CRAG grade injected: DATA CONFIDENCE: LOW/MODERATE
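The grounding hard floor, made concrete: a sub-2 grounding score forces a retry even when the overall average would pass. The 4/5 pass mark is an assumption inferred from the displayed scores.

```typescript
interface ReflectionScores {
  relevance: number; specificity: number; dataUsage: number;
  grounding: number; actionability: number; tone: number;   // each out of 5
}

function needsRetry(s: ReflectionScores, passMark = 4): boolean {
  if (s.grounding < 2) return true;                          // hard floor, no exceptions
  const vals = Object.values(s);
  const avg = vals.reduce((a, b) => a + b, 0) / vals.length;
  return avg < passMark;                                     // otherwise, average decides
}
```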
RELATIONSHIP-ENRICHED ACTIONS

8 Action Types + ActionContext

Every action is enriched with relationship intelligence — communication style, past discussions, preferences

📅
Schedule
Enriched descriptions
🔄
Reschedule
Propose + confirm
📧
Draft Email
Style matching
🔔
Reminder
Promise-linked
🤝
Update Promise
Follow-up suggest
🚫
Block Time
Focus protection
🔗
Sharing
Brief toggle
📄
Gen Document
Context-aware
ActionContext Example: Draft Email to Sarah
contact: Sarah Chen (Sequoia Capital, Partner)
relationship: { health: "active", meetings: 4, style: "direct, prefers conciseness" }
intelligence: { recentRole: "Partner at Sequoia", topics: ["AI distribution", "enterprise tools"] }
recentMeetings: [{ date: "Dec 15", topic: "Q4 review" }]
promises: [{ title: "EIR intro", status: "overdue" }]
→ Email draft matches her communication style, references past discussions
PROMISE INTELLIGENCE

Never Drop a Promise

Auto-extracted from meetings, tracked across interactions, auto-completed via signal detection

Sarah Chen → Intro to Sequoia EIR program
Inbound · Made Dec 15 · Due Jan 15 · 48 days overdue
Overdue
You → Send Q1 metrics to David Kim
Outbound · Made Feb 28 · Due Mar 5
Active
You → Send metrics deck to Sarah Chen
Outbound · Made Dec 15 · Auto-completed Jan 8 (signal: email attachment detected)
Auto ✓
AUTO
detectPromiseSignals() + applyPromiseSignals() from post-meeting trigger. Signals >0.8 confidence auto-complete. Daily 9am overdue reminders.
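A sketch of the applyPromiseSignals() semantics: signals above the 0.8 confidence threshold flip a promise to done. Record shapes are illustrative, and the detector itself (e.g. "email attachment detected") is assumed to run upstream.

```typescript
interface PromiseRecord { id: string; status: "active" | "overdue" | "done" }
interface PromiseSignal { promiseId: string; confidence: number; evidence: string }

// Signals over the threshold auto-complete their promise; weaker ones are ignored.
function applySignals(
  promises: PromiseRecord[],
  signals: PromiseSignal[],
  threshold = 0.8                            // ">0.8 confidence auto-complete"
): PromiseRecord[] {
  return promises.map(p => {
    const hit = signals.find(s => s.promiseId === p.id && s.confidence > threshold);
    return hit ? { ...p, status: "done" as const } : p;
  });
}
```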
THE CONNECTIVE TISSUE

Cross-Agent Intelligence

Both agents read/write to shared memory. Every meeting makes both smarter.

PMIA Writes
Prep briefs, signals, research data, synthesis output, reflection scores
PMIA Reads
SavirAI conversations about contacts, user corrections, action history
5-LAYER
MEMORY
🧠
Shared Brain
SavirAI Writes
Answers, extracted facts, user corrections, actions taken, query patterns
SavirAI Reads
Past prep briefs, brief feedback scores, contact intelligence, episodes
FLOW 1
Past briefs → SavirAI retriever → richer answers about contacts
FLOW 2
Conversations → PMIA context → conversation-aware briefs
FLOW 3
Brief feedback → next brief planner → improving accuracy
FLOW 4
Post-meeting → memory extraction → both agents enriched
AUTOMATED POST-MEETING

Post-Meeting Processing

Auto-triggers after meeting ends. Extracts facts, updates memory, completes promises, analyzes patterns.

Detect End
Calendar time or Recall.ai bot status. postMeetingNotified flag prevents duplicates.
Extract
Fact extraction (Haiku). Episode builder. Promise signals. Dedup engine.
Store
knowledge_facts, meeting_episodes, contact_intelligence (atomic upsert).
Enrich
Cross-meeting patterns. Promise auto-completion. Relationship health update.
Every 3 min
Cron checks ended meetings
Daily 4am
Batch consolidate facts
0.8+
Auto-complete threshold
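The cron's duplicate guard can be sketched as a filter on end time plus the postMeetingNotified flag; the in-memory list here stands in for the real store.

```typescript
interface Meeting { id: string; endTime: number; postMeetingNotified: boolean }

// Every 3 minutes: pick up meetings that have ended and are not yet processed.
function meetingsToProcess(meetings: Meeting[], now: number): Meeting[] {
  return meetings.filter(m => m.endTime <= now && !m.postMeetingNotified);
}

// Flag flip prevents duplicate post-meeting runs on the next cron tick.
function markProcessed(m: Meeting): void {
  m.postMeetingNotified = true;
}
```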
QUALITY ASSURANCE

9 Eval Suites

Golden datasets + LLM-as-judge (Haiku) + deterministic evals. Scores feed back into planner decisions.

PMIA (4 SUITES)
Brief Specificity
LLM judge · Target: >80%
Factual Grounding
Auto citation count · >90%
Actionability
Action item extraction · >5
Prediction Accuracy
Brief vs actual · >60%
SAVIRAI (3 SUITES)
Triage Accuracy
200 golden queries · >85%
Entity Resolution
100 golden queries · >90%
Response Relevance
LLM judge · >85%
CROSS-AGENT (2)
Data Flow Verification
5 Firestore checks
Memory Compounding
Monotonic increase
FEEDBACK
Eval scores → loadPastEvalScores() → planner decisions → EVAL-BASED QUALITY GUIDANCE in generator. Weakest dimension drives focus.
GROWTH ENGINE

Built-In Viral Distribution

Every meeting = distribution event. Zero incremental CAC.

1
User Connects
Calendar triggers PMIA
2
Brief Generated
7-phase pipeline
3
All Attendees
Full briefs to users, teasers to non-users
4
Non-Users Click
See value → signup CTA
5
Cycle Repeats
New network exposed
MATH
25 meetings/week → 50-100 contacts/month → 5% CTR × 10% conv = 0.25-0.5 new users per active user
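The funnel math, spelled out with the slide's own numbers (50 to 100 briefed contacts per month, 5% CTR, 10% conversion):

```typescript
// new users per active user = contacts × CTR × conversion
function newUsersPerActiveUser(contacts: number, ctr = 0.05, conv = 0.10): number {
  return contacts * ctr * conv;
}
```

50 × 0.05 × 0.10 = 0.25 and 100 × 0.05 × 0.10 = 0.5, giving the 0.25 to 0.5 range per active user.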
LET'S BUILD THE FUTURE

Ready to Transform
Professional Relationships?

The AI-native intelligence layer for the modern workplace

$24.6-72.2B
TAM by 2034
Zero CAC
Viral Growth
80%+
Gross Margin

CONFIDENTIAL — 2026 · Arham Jain, Founder & CEO