
How to Use AI for Demo Personalization That Wins Deals [2026]

· 9 min read

Generic demos kill deals.

You walk in with your standard deck. You show features in your standard order. You use your standard case studies. The prospect politely nods along, asks a few questions, then ghosts you for three weeks.

Meanwhile, your competitor did their homework. They opened with the prospect's exact pain point. They showed the feature that solves it. They referenced a customer in the same industry with the same problem.

Guess who got the deal?

Demo personalization isn't optional anymore. But doing it manually for every prospect takes hours. AI changes that math entirely.

AI-powered demo personalization workflow

The Demo Personalization Problem​

What great demo prep looks like:

  • Research the prospect's company (news, earnings, job postings)
  • Understand each attendee's role and likely priorities
  • Identify their specific pain points from discovery call
  • Select relevant case studies and proof points
  • Customize deck with their logo, data, and challenges
  • Prepare for likely objections
  • Create custom follow-up materials

Time required: 2-4 hours per demo

What actually happens:

  • Skim their website for 5 minutes
  • Use the same deck you always use
  • Wing the objection handling
  • Hope for the best

Time spent: 10 minutes

AI lets you get 90% of the value in 10% of the time.

The AI Demo Prep Stack​

Overview​

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Pre-Demo     │────▶│   AI Research   │────▶│     Outputs     │
│    Trigger      │     │   & Assembly    │     │     For Rep     │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
    Calendar               Claude/Codex          - Research brief
    24h before             + Web search          - Custom slides
                           + CRM data            - Objection prep
                                                 - Attendee profiles

Trigger: 24 Hours Before Demo​

Set up a cron job that runs 24 hours before every scheduled demo:

# OpenClaw cron configuration
cron:
  - name: "Demo Prep Automation"
    schedule: "0 9 * * *"  # Daily at 9am
    prompt: |
      Check calendar for demos tomorrow.
      For each demo:
      1. Pull CRM data and discovery notes
      2. Research company and attendees
      3. Generate personalized brief
      4. Recommend case studies
      5. Create objection prep
      6. Send to rep via Slack

Component 1: Company Deep-Dive​

async def research_company(company_name: str, company_domain: str) -> dict:
    """Research a company for demo preparation."""

    research_prompt = f"""
Research {company_name} ({company_domain}) for an upcoming sales demo.

Find and summarize:

## Company Overview
- Industry and business model
- Size (employees, revenue if public)
- Recent funding or major events

## Current Challenges (likely)
- Based on their job postings, what are they building?
- Based on news, what problems are they solving?
- Industry-wide challenges affecting them

## Technology Stack
- What tools do they likely use? (from job postings)
- Integration opportunities

## Competitive Context
- Who are their competitors?
- What differentiates them?

## Trigger Events
- Recent news worth mentioning
- Leadership changes
- Product launches

Output as structured JSON for use in demo prep.
"""

    # Use Claude with web search capability
    response = await claude_with_search(research_prompt)
    return parse_research(response)

Component 2: Attendee Profiles​

async def research_attendees(attendees: list[dict]) -> list[dict]:
    """Research each demo attendee."""

    profiles = []
    for attendee in attendees:
        profile_prompt = f"""
Research {attendee['name']} ({attendee['title']}) at {attendee['company']}.

Find:
- Background (previous roles, education)
- Recent LinkedIn posts or activity
- Likely priorities based on their role
- How our product helps someone in their position
- Potential concerns they might have

Output: 3-paragraph brief for the sales rep.
"""

        profile = await claude_with_search(profile_prompt)
        profiles.append({
            'name': attendee['name'],
            'title': attendee['title'],
            'profile': profile,
            'suggested_talking_points': extract_talking_points(profile)
        })

    return profiles

Component 3: Personalized Slide Recommendations​

Based on the research, recommend which slides to use and in what order:

import json

def recommend_slides(research: dict, discovery_notes: str) -> list[dict]:
    """Recommend slide order based on prospect context."""

    prompt = f"""
Based on this prospect research and discovery notes,
recommend the optimal demo flow.

Research: {json.dumps(research)}
Discovery Notes: {discovery_notes}

Available slides:
1. Company Overview
2. Problem Statement (generic)
3. Problem Statement (industry-specific variations)
4. Product Demo - Visitor ID
5. Product Demo - SDR Playbook
6. Product Demo - Smart Dialer
7. Product Demo - AI Chatbot
8. Case Study - SaaS
9. Case Study - IoT
10. Case Study - Professional Services
11. Pricing Overview
12. Implementation Timeline
13. ROI Calculator

Output:
- Recommended order (list of slide numbers)
- For each slide: talking points personalized to this prospect
- Slides to skip and why
"""

    return claude_complete(prompt)

Component 4: Objection Preparation​

import json

def prepare_objections(research: dict, competitor_context: dict) -> list[dict]:
    """Prepare likely objections and responses."""

    prompt = f"""
Based on this prospect context, predict the top 5 objections
they're likely to raise and prepare responses.

Prospect Research: {json.dumps(research)}
Competitor Context: {json.dumps(competitor_context)}

For each objection:
1. The objection (verbatim how they'd phrase it)
2. Why they're likely to raise it (based on research)
3. Recommended response
4. Proof point or case study to reference
5. Question to ask back

Focus on objections specific to this prospect, not generic ones.
"""

    return claude_complete(prompt)

Demo preparation workflow with AI

The Demo Prep Brief (Output Format)​

Here's what the AI delivers to your rep 24 hours before the demo:

# Demo Prep: Acme Corp
**Demo Date:** Feb 9, 2026 at 2:00 PM CT
**Prepared:** Feb 8, 2026 at 10:00 PM CT

---

## 🏢 Company Overview
Acme Corp is a B2B SaaS company in the HR tech space,
~200 employees, Series B ($35M raised). They sell
performance management software to mid-market companies.

**Recent news:** Just launched AI-powered feedback feature
(Jan 2026). Hiring aggressively for sales (12 open SDR roles).

**Why they're talking to us:** Current lead management is
"chaotic" (Sarah's word from discovery). Using Apollo for
data but no workflow automation.

---

## 👥 Attendees

### Sarah Chen - VP Sales
**Background:** Former Gong, 8 years in sales leadership
**Likely priorities:** SDR efficiency, pipeline predictability
**Recent activity:** Posted about "SDR burnout" last week
**Talking points:**
- Reference the burnout post empathetically
- Focus on how our playbook reduces cognitive load
- She'll care about rep experience, not just metrics

### Mike Rodriguez - SDR Manager
**Background:** Promoted internally 6 months ago
**Likely priorities:** Proving himself, team performance
**Talking points:**
- He's new to management - position as making him look good
- Focus on coaching insights and team visibility
- Likely to ask detailed workflow questions

---

## 📋 Recommended Demo Flow

1. **Skip:** Generic company overview (they know who we are)
2. **Start with:** SDR Playbook - this is their pain point
3. **Show:** Visitor ID → "this is how you'd capture their
website visitors showing buying intent"
4. **Case Study:** CloudHR story (same industry, similar size)
5. **Skip:** Dialer demo (they're not ready for this)
6. **End with:** Implementation timeline (Sarah asked about this)

**Total demo time:** 25-30 minutes (leave 15 for Q&A)

---

## ⚠️ Likely Objections

### 1. "How is this different from Apollo?"
**Why they'll ask:** Currently using Apollo, know it well
**Response:** "Apollo gives you data. We give you a daily
action list. Here's the difference—[show playbook view]"
**Proof point:** CloudHR switched from Apollo, 40% more meetings

### 2. "What's the learning curve for SDRs?"
**Why:** Mike is worried about adoption with his new team
**Response:** "Most SDRs are productive in 2 days. Here's why—
we replace 5 tools, not add another one."
**Ask back:** "What's the biggest adoption challenge you've
seen with new tools?"

### 3. "Can you integrate with Salesforce?"
**Why:** They're a Salesforce shop (from job postings)
**Response:** "Native integration, bi-directional sync.
Let me show you exactly how activities flow back."

---

## 🎯 Key Messages to Land

1. "From 20 tabs to one task list" - Sarah mentioned tab chaos
2. "Your SDRs shouldn't have to think about who to call next"
3. "This is what CloudHR's team sees every morning" [show example]

---

## 📚 Resources to Send After

- CloudHR case study PDF
- ROI calculator (pre-filled with their team size)
- SDR playbook example screenshots

---

*Generated by AI • Review before demo*

Implementation: The 30-Minute Setup​

Step 1: Connect Your Calendar​

// Pull tomorrow's demos from Google Calendar
async function getTomorrowsDemos() {
  const calendar = google.calendar({ version: 'v3' });
  const tomorrow = new Date();
  tomorrow.setDate(tomorrow.getDate() + 1);

  const events = await calendar.events.list({
    calendarId: 'primary',
    timeMin: startOfDay(tomorrow).toISOString(),
    timeMax: endOfDay(tomorrow).toISOString(),
    q: 'demo' // free-text search is case-insensitive, so one variant is enough
  });

  return events.data.items.map(parseDemo);
}

Step 2: Connect Your CRM​

// Pull discovery notes and contact info from HubSpot
async function getCRMContext(companyId) {
  const company = await hubspot.companies.get(companyId);
  const contacts = await hubspot.contacts.getByCompany(companyId);
  const notes = await hubspot.notes.getByCompany(companyId);

  return {
    company,
    contacts,
    discoveryNotes: notes.filter(n => n.type === 'discovery')
  };
}

Step 3: Run the AI Pipeline​

// Main demo prep pipeline
async function prepareDemo(demo) {
  const crmContext = await getCRMContext(demo.companyId);

  const [companyResearch, attendeeProfiles] = await Promise.all([
    researchCompany(crmContext.company),
    researchAttendees(demo.attendees)
  ]);

  const slideRecommendations = await recommendSlides(
    companyResearch,
    crmContext.discoveryNotes
  );

  const objections = await prepareObjections(
    companyResearch,
    crmContext.competitorMentions
  );

  const brief = formatBrief({
    demo,
    companyResearch,
    attendeeProfiles,
    slideRecommendations,
    objections
  });

  await sendToSlack(demo.repSlackId, brief);
  await saveToNotion(demo.notionPageId, brief);
}

The Results​

Teams using AI demo prep report:

  • Over 80% less prep time (2 hours → 20 minutes of review)
  • Higher conversion rates (prospects feel understood)
  • Better discovery-to-demo handoffs (nothing falls through cracks)
  • Consistent quality (junior reps perform like seniors)

The AI doesn't replace preparation. It makes preparation possible at scale.

Start Today​

You don't need a complex system. Start with:

  1. Copy one discovery call transcript
  2. Paste into Claude with:
    Based on this discovery call, create a demo prep brief.
    Include: company context, attendee profiles, recommended
    demo flow, likely objections, and key messages to land.
  3. Use the output in your next demo
  4. Iterate on the prompt based on what was useful

Once you see the value, automate the pipeline.
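If you'd rather script that first test than paste by hand, here is a minimal sketch using the Anthropic Python SDK. It assumes `anthropic` is installed and `ANTHROPIC_API_KEY` is set; `build_demo_prep_prompt`, `generate_brief`, and the model string are illustrative names, not part of any product.

```python
def build_demo_prep_prompt(transcript: str) -> str:
    """Assemble the quick-start prompt from step 2 above."""
    return (
        "Based on this discovery call, create a demo prep brief.\n"
        "Include: company context, attendee profiles, recommended\n"
        "demo flow, likely objections, and key messages to land.\n\n"
        f"Transcript:\n{transcript}"
    )

def generate_brief(transcript: str) -> str:
    """Send the prompt to Claude and return the brief text."""
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,
        messages=[{"role": "user", "content": build_demo_prep_prompt(transcript)}],
    )
    return response.content[0].text
```

Swap the transcript source and prompt wording as you iterate; the pipeline sections above show where this slots in once you automate.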


Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Let AI Do the Research So You Can Close​

MarketBetter's AI-powered playbook gives your SDRs everything they need for every conversation—prospect context, recommended actions, and personalized talking points.

Book a Demo →


Related reading:

How to Automate Meeting Follow-Ups with AI Coding Agents [2026]

· 8 min read

The call ends. Now you have 15 minutes before your next one.

In that window, you're supposed to:

  • Write a personalized follow-up email
  • Update the CRM with notes
  • Create action items in your task manager
  • Share key insights with your team
  • Send relevant resources to the prospect

Reality? You update one line in the CRM, fire off a generic "great chatting" email, and hope you remember the details later.

This is exactly the problem AI coding agents were built to solve.

AI meeting follow-up automation workflow

The Manual Follow-Up Tax​

Let's do the math:

  • Average sales call: 30 minutes
  • Manual follow-up time: 15-20 minutes
  • Calls per day: 4-6
  • Follow-up time per day: 60-120 minutes

That's 1-2 hours daily on post-call admin. For an SDR making $60K/year, that's roughly $15,000/year in follow-up labor costs per rep.

Now multiply by your team size.

Manual follow-ups vs AI-automated comparison

What AI Meeting Follow-Ups Look Like​

Here's the workflow we've built:

  1. Call ends → Recording hits Gong/Fireflies/your tool
  2. Transcript ready → Webhook triggers automation
  3. AI processes → Extracts insights, action items, next steps
  4. Outputs generated:
    • Personalized follow-up email (draft in Gmail)
    • CRM updated with structured notes
    • Action items created in Asana/Linear
    • Slack notification with highlights
  5. Human reviews → Edit and send in 2 minutes

Total time: 2 minutes of review vs 20 minutes of creation.
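Step 2 of that workflow is just a small webhook receiver that fans out work. A framework-free sketch of the routing logic — the event type string and payload keys are hypothetical, so match them to your recording tool's actual webhook documentation:

```python
import json

def handle_transcript_webhook(raw_body: str) -> dict:
    """Route a 'transcript ready' webhook into the follow-up pipeline."""
    event = json.loads(raw_body)
    if event.get("type") != "transcript.ready":
        # Recording tools send many event types; only transcripts matter here
        return {"status": "ignored", "reason": event.get("type")}

    call_id = event["call_id"]
    # Fan out the downstream jobs the workflow lists: email draft,
    # CRM notes, task creation, Slack summary
    jobs = [
        {"job": "draft_email", "call_id": call_id},
        {"job": "update_crm", "call_id": call_id},
        {"job": "create_tasks", "call_id": call_id},
        {"job": "notify_slack", "call_id": call_id},
    ]
    return {"status": "queued", "jobs": jobs}
```

In production this would sit behind your web framework's POST handler and push `jobs` onto a queue rather than returning them.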

Building the Automation Stack​

Architecture Overview​

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Call Recording │────▶│  AI Processing  │────▶│     Outputs     │
│   (Gong, etc.)  │     │ (Claude/Codex)  │     │  (Email, CRM)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
    Transcript              Extract:                Create:
    + Metadata              - Key moments           - Email draft
                            - Action items          - CRM notes
                            - Objections            - Tasks
                            - Next steps            - Slack alert

Option 1: OpenClaw Cron Job​

If you're already using OpenClaw, add a cron job that checks for new call transcripts:

# In your OpenClaw config
cron:
  - name: "Process new call transcripts"
    schedule: "*/15 * * * *"  # Every 15 minutes
    prompt: |
      Check for new call transcripts from the last 15 minutes.
      For each new transcript:
      1. Generate a follow-up email draft
      2. Extract action items
      3. Update CRM with structured notes
      4. Send Slack summary to #sales

Option 2: Codex Script​

Use OpenAI Codex to build a dedicated processing script:

// process-call.js
const { OpenAI } = require('openai');
const openai = new OpenAI();

async function processCallTranscript(transcript, dealContext) {
  const response = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [
      {
        role: "system",
        content: `You are a sales operations assistant. Process call transcripts
and generate: follow-up emails, CRM notes, and action items.
Be specific and reference actual discussion points.`
      },
      {
        role: "user",
        content: `
## Call Transcript
${transcript}

## Deal Context
${dealContext}

## Generate
1. Follow-up email (personalized, reference specific moments)
2. CRM notes (structured: Summary, Key Moments, Objections, Next Steps)
3. Action items (owner, due date, description)
`
      }
    ]
  });

  return parseResponse(response.choices[0].message.content);
}

Option 3: Claude with Function Calling​

Claude excels at understanding nuance in sales conversations:

import anthropic

client = anthropic.Anthropic()

def process_sales_call(transcript: str, prospect_info: dict) -> dict:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        tools=[
            {
                "name": "create_follow_up",
                "description": "Generate a follow-up email",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "subject": {"type": "string"},
                        "body": {"type": "string"},
                        "resources": {
                            "type": "array",
                            "items": {"type": "string"}
                        }
                    }
                }
            },
            {
                "name": "update_crm",
                "description": "Update CRM with call notes",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "summary": {"type": "string"},
                        "next_steps": {"type": "array"},
                        "objections": {"type": "array"},
                        "champion_signals": {"type": "array"}
                    }
                }
            }
        ],
        messages=[
            {
                "role": "user",
                "content": f"""
Process this sales call and generate follow-up actions.

Prospect: {prospect_info}

Transcript:
{transcript}

Use the provided tools to:
1. Create a personalized follow-up email
2. Update the CRM with structured notes
"""
            }
        ]
    )

    return extract_tool_calls(response)

The Follow-Up Email That Wins​

Generic follow-ups:

Hi Sarah,

Great chatting today! I'll send over those resources we discussed.
Let me know if you have any questions.

Best,
Mike

AI-personalized follow-ups:

Hi Sarah,

Thanks for walking me through how your SDR team handles the
lead scoring issue you mentioned—sounds like the current 2-day
SLA is creating real friction with your demand gen team.

You asked about how we handle intent signals from anonymous
visitors. I'm attaching our visitor identification case study
(the Hologram example you asked about is on page 3).

For the budget conversation with David next week, here's a
one-pager comparing our pricing to the $35K/year tool you
mentioned. Happy to jump on a call beforehand to prep you.

Two questions from our discussion I want to circle back on:
1. The Salesforce integration timeline—are you targeting Q2?
2. Which 3 SDRs would pilot the tool first?

Does Thursday 2pm work for the technical deep-dive with your ops team?

Best,
Mike

The difference: specificity. The AI references actual discussion points, answers real questions, and moves the deal forward.

CRM Notes That Actually Help​

Bad CRM notes (what most reps write):

Good call. Interested in product. Will follow up next week.

AI-generated structured notes:

## Call Summary
30-min discovery call with Sarah (VP Sales) and Mike (SDR Manager).
Currently evaluating MarketBetter vs Warmly. Budget approved,
timeline is Q2 implementation.

## Key Moments
- [8:32] Sarah mentioned 2-day lead SLA causing "constant friction"
- [14:15] Mike asked specifically about Salesforce integration
- [22:40] Budget holder is David Chen (CFO), Sarah has soft approval
- [26:00] Competitor Warmly quoted $35K/year

## Objections Raised
1. Concerned about SDR adoption (Mike)
2. Integration with existing tech stack (Sarah)
3. Data accuracy compared to current provider

## Next Steps
- Send visitor ID case study (Hologram example)
- Schedule technical deep-dive with ops team
- Prep Sarah for budget conversation with David

## Champion Signals
- Sarah used "we need" language 4 times
- Unprompted mention of timeline pressure
- Asked about implementation support

This is what your AI should output. No more lost context between calls.

Action Item Extraction​

AI should automatically create tasks:

{
  "action_items": [
    {
      "task": "Send Hologram visitor ID case study",
      "owner": "Mike (rep)",
      "due": "2026-02-09",
      "priority": "high",
      "context": "Sarah specifically asked for this at 14:15"
    },
    {
      "task": "Schedule technical deep-dive",
      "owner": "Mike (rep)",
      "due": "2026-02-10",
      "priority": "high",
      "context": "Needs ops team involvement"
    },
    {
      "task": "Create CFO one-pager for Sarah",
      "owner": "Mike (rep)",
      "due": "2026-02-12",
      "priority": "medium",
      "context": "Budget conversation with David next week"
    }
  ]
}

These should auto-create in your task manager (Asana, Linear, Notion).
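Before they hit the task tool, that JSON needs normalizing into whatever shape your task API expects. A minimal sketch — the input keys follow the example payload above, while `parse_action_items` and the output field names are illustrative, not any vendor's API:

```python
import json
from datetime import date

def parse_action_items(payload: str, today: date) -> list[dict]:
    """Normalize the AI's action_items JSON into task-manager-ready dicts."""
    items = json.loads(payload)["action_items"]
    tasks = []
    for item in items:
        tasks.append({
            "title": item["task"],
            "assignee": item["owner"],
            "due_on": item["due"],
            "notes": item.get("context", ""),
            # Flag anything the AI dated in the past so the rep sees it first
            "overdue": date.fromisoformat(item["due"]) < today,
        })
    return tasks
```

Passing `today` explicitly keeps the function testable; in the pipeline you'd call it with `date.today()` and hand each dict to the Asana/Linear/Notion client.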

Implementation Checklist​

Prerequisites​

  • Call recording tool with API (Gong, Fireflies, Chorus)
  • CRM with API access (HubSpot, Salesforce)
  • Email tool with draft creation API (Gmail, Outlook)
  • AI API access (Claude, GPT-4, Codex)

Phase 1: Basic Automation​

  • Set up webhook for new transcripts
  • Build AI processing pipeline
  • Generate email drafts
  • Send to Slack for review

Phase 2: CRM Integration​

  • Extract structured data from AI output
  • Map to CRM fields
  • Auto-update deal records
  • Add activity logging

Phase 3: Task Management​

  • Parse action items
  • Create tasks in project tool
  • Assign owners and due dates
  • Link back to deal/contact

Phase 4: Optimization​

  • Add feedback loop (rep edits → training)
  • A/B test email templates
  • Track follow-up effectiveness
  • Refine extraction prompts

The ROI Calculation​

Before automation:

  • 20 min follow-up × 5 calls/day × 250 days = 417 hours/year
  • At $50/hour loaded cost = $20,850/year per rep

After automation:

  • 2 min review × 5 calls/day × 250 days = 42 hours/year
  • AI costs: ~$300/year (at $0.01/transcript)
  • Total: $2,400/year per rep

Savings: $18,450/year per rep

For a 10-person SDR team: $184,500/year back to selling.
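The same arithmetic as a quick sketch you can rerun with your own numbers (the function name and the hours-first rounding, which matches the figures above, are mine):

```python
def follow_up_roi(minutes_before: float, minutes_after: float,
                  calls_per_day: int, work_days: int,
                  hourly_cost: float, ai_cost_per_year: float) -> dict:
    """Annual follow-up labor cost before vs. after automation."""
    # Round to whole hours first, as in the figures above
    hours_before = round(minutes_before * calls_per_day * work_days / 60)
    hours_after = round(minutes_after * calls_per_day * work_days / 60)
    before = hours_before * hourly_cost
    after = hours_after * hourly_cost + ai_cost_per_year
    return {"before": before, "after": after, "savings": before - after}
```

With the post's inputs (20 min → 2 min, 5 calls/day, 250 days, $50/hour, $300/year of AI) it reproduces the $18,450/year per-rep savings.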

Start Today​

You don't need a perfect system. Start with:

  1. Export one transcript from your call recording tool
  2. Paste into Claude with the prompt:
    Generate a personalized follow-up email and structured CRM notes
    from this sales call transcript: [transcript]
  3. Review the output — is it better than what you'd write in 2 minutes?
  4. Automate once you see the quality

The time you save on follow-ups goes straight to more conversations.



Let AI Handle the Admin So You Can Sell​

MarketBetter's AI-powered playbook doesn't just track your deals—it tells your SDRs exactly what to do next. No more manual prioritization. No more missed follow-ups.

Book a Demo →


Related reading:

Building an AI Proposal Generator That Closes Deals [2026]

· 8 min read

A prospect asks for a proposal.

You say "I'll have it to you by end of day."

Then you spend 3 hours:

  • Finding the last proposal you sent
  • Swapping out company names (and missing one on slide 12)
  • Rewriting the "why us" section for the tenth time
  • Hunting for the right case study
  • Manually calculating pricing
  • Wondering if any of this is even what they asked for

Meanwhile, your competitor sends a personalized proposal in 45 minutes. They reference the specific challenges from the discovery call. They include a case study from the same industry. Their pricing is crystal clear.

Guess who looks more professional?

AI proposal generation isn't about replacing humans. It's about spending your time on strategy instead of formatting.

AI proposal generator workflow

What AI Proposal Generation Actually Looks Like​

Input:

  • CRM deal data
  • Discovery call notes
  • Prospect company info
  • Your pricing structure
  • Template library

Processing:

  • AI extracts key requirements
  • Matches pain points to features
  • Selects relevant case studies
  • Calculates custom pricing
  • Writes personalized sections

Output:

  • Draft proposal document
  • Personalized executive summary
  • Relevant case studies inserted
  • Pricing table pre-filled
  • Human reviews in 15-30 minutes

Time saved: 2-3 hours per proposal

The Architecture​

┌─────────────────┐
│    Trigger:     │
│    "Create      │
│    Proposal"    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐     ┌─────────────────┐
│  Pull Data      │────▶│  AI Process     │
│  - CRM deal     │     │  - Generate     │
│  - Notes        │     │    sections     │
│  - Company      │     │  - Select       │
│    research     │     │    case study   │
└─────────────────┘     │  - Calculate    │
                        │    pricing      │
                        └────────┬────────┘
                                 │
                                 ▼
                        ┌─────────────────┐
                        │  Output:        │
                        │  - Google Doc   │
                        │  - PDF          │
                        │  - Notion page  │
                        └─────────────────┘

CRM data flowing into AI proposal output

Building the Proposal Generator​

Step 1: Define Your Proposal Structure​

Start by mapping your standard proposal sections:

const proposalStructure = {
  sections: [
    {
      name: 'executive_summary',
      type: 'ai_generated',
      prompt: 'Write a 2-paragraph executive summary addressing {pain_points}',
      maxLength: 300
    },
    {
      name: 'understanding_your_needs',
      type: 'ai_generated',
      prompt: 'Summarize our understanding of {company}\'s challenges based on {discovery_notes}',
      maxLength: 500
    },
    {
      name: 'proposed_solution',
      type: 'template_with_variables',
      template: 'solution_template.md',
      variables: ['selected_products', 'implementation_timeline']
    },
    {
      name: 'case_study',
      type: 'selected',
      selector: 'matchCaseStudy({industry}, {company_size}, {pain_points})'
    },
    {
      name: 'pricing',
      type: 'calculated',
      calculator: 'calculatePricing({seats}, {products}, {term})'
    },
    {
      name: 'next_steps',
      type: 'template',
      template: 'next_steps.md'
    }
  ]
};

Step 2: Build the Data Pipeline​

async function gatherProposalData(dealId) {
  // Pull from CRM
  const deal = await hubspot.deals.get(dealId);
  const company = await hubspot.companies.get(deal.companyId);
  const contacts = await hubspot.contacts.getByDeal(dealId);

  // Pull discovery notes
  const notes = await hubspot.notes.getByDeal(dealId);
  const discoveryNotes = notes.filter(n =>
    n.type === 'discovery' || n.type === 'call_notes'
  );

  // Enrich with research
  const companyResearch = await researchCompany(company.domain);

  // Extract pain points using AI
  const painPoints = await extractPainPoints(discoveryNotes);

  return {
    deal,
    company,
    contacts,
    discoveryNotes,
    companyResearch,
    painPoints
  };
}

Step 3: Generate Each Section​

async function generateSection(section, data) {
  switch (section.type) {
    case 'ai_generated':
      return generateWithAI(section, data);

    case 'template_with_variables':
      return populateTemplate(section, data);

    case 'selected':
      return selectContent(section, data);

    case 'calculated':
      return calculateSection(section, data);

    case 'template':
      return loadTemplate(section.template);
  }
}

async function generateWithAI(section, data) {
  const prompt = interpolate(section.prompt, data);

  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: section.maxLength * 2,
    // The Messages API takes system instructions as a top-level
    // parameter, not as a "system" role message
    system: `You are writing a section of a B2B sales proposal.
Be professional but not stiff.
Focus on the prospect's specific needs.
No generic filler. Every sentence should matter.`,
    messages: [
      {
        role: 'user',
        content: prompt
      }
    ]
  });

  return response.content[0].text;
}

Step 4: Smart Case Study Selection​

async function selectCaseStudy(data) {
  const caseStudies = await loadCaseStudies();

  const selectionPrompt = `
Select the best case study for this prospect:

Prospect:
- Industry: ${data.company.industry}
- Size: ${data.company.employees} employees
- Pain points: ${data.painPoints.join(', ')}

Available case studies:
${caseStudies.map((cs, i) => `
${i + 1}. ${cs.title}
   Industry: ${cs.industry}
   Size: ${cs.companySize}
   Key results: ${cs.results}
   Pain points addressed: ${cs.painPoints}
`).join('\n')}

Output: The number of the best case study and why in one sentence.
`;

  const response = await claude(selectionPrompt);
  const selectedIndex = extractNumber(response);

  return caseStudies[selectedIndex - 1];
}

Step 5: Pricing Calculator​

function calculatePricing(data) {
  const { seats, products, term, discountCode } = data.deal;

  let pricing = {
    items: [],
    subtotal: 0,
    discount: 0,
    total: 0
  };

  // Base pricing
  for (const product of products) {
    const productPricing = pricingMatrix[product];
    const lineItem = {
      name: productPricing.name,
      quantity: seats,
      unitPrice: productPricing.perSeat,
      total: seats * productPricing.perSeat
    };

    if (term === 'annual') {
      lineItem.total = lineItem.total * 12;
      lineItem.termDiscount = lineItem.total * 0.15; // 15% annual discount
      lineItem.total -= lineItem.termDiscount;
    }

    pricing.items.push(lineItem);
    pricing.subtotal += lineItem.total;
  }

  // Apply discount code if present
  if (discountCode) {
    pricing.discount = calculateDiscount(discountCode, pricing.subtotal);
  }

  pricing.total = pricing.subtotal - pricing.discount;

  return pricing;
}

Step 6: Assemble the Document​

async function assembleProposal(data) {
  const sections = await Promise.all(
    proposalStructure.sections.map(section =>
      generateSection(section, data)
    )
  );

  // Create Google Doc
  const doc = await googleDocs.create({
    title: `Proposal - ${data.company.name} - ${formatDate(new Date())}`
  });

  // Apply template formatting
  await applyProposalTemplate(doc.id);

  // Insert sections
  for (const section of sections) {
    await insertSection(doc.id, section);
  }

  // Add company logo to header
  await insertLogo(doc.id, data.company.logo);

  return doc;
}

The Prompt That Powers Great Proposals​

Here's the core prompt for the executive summary:

You are writing the executive summary for a B2B sales proposal.

## Context
Company: {company_name}
Industry: {industry}
Decision maker: {contact_name}, {contact_title}

## Their Challenges (from discovery)
{discovery_notes}

## Our Solution
{products_recommended}

## Key Results We've Delivered
{relevant_metrics_from_case_studies}

## Write the Executive Summary
- 2 paragraphs maximum
- First paragraph: Acknowledge their specific challenges (use their words)
- Second paragraph: How we solve it and expected impact
- Be confident but not arrogant
- Reference specific numbers where possible
- End with a forward-looking statement

Do not:
- Use generic phrases like "industry-leading" or "best-in-class"
- Make claims you can't support
- Write more than 150 words total

Real Output Example​

Input:

  • Company: Acme HR (200 employees, HR tech)
  • Pain points: "SDR team spending too much time on research", "leads going cold because follow-up is too slow", "no visibility into what's working"
  • Products: SDR Playbook, Visitor ID
  • Contact: Sarah Chen, VP Sales

AI-Generated Executive Summary:

Acme HR's sales team is losing deals to slow follow-up. Your SDRs spend hours researching leads that should take minutes, and by the time they reach out, competitors have already made contact. Meanwhile, you have no clear visibility into which activities actually drive pipeline.

MarketBetter solves this with a daily SDR playbook that eliminates research time and tells your reps exactly who to contact, how to reach them, and what to say. Combined with our visitor identification, you'll know the moment a target account hits your website. Companies like CloudHR have cut their lead response time from 48 hours to under 1 hour—and increased qualified meetings by 40%. We're proposing a 90-day pilot with your 8-person SDR team to deliver similar results.

Time to generate: 12 seconds

Implementation Options​

Option 1: OpenClaw Agent​

# openclaw.yaml
agents:
  proposal-generator:
    model: claude-3-5-sonnet-20241022
    systemPrompt: |
      You generate B2B sales proposals.
      You have access to CRM data, case studies, and pricing.
      Generate professional, personalized proposals.
    tools:
      - hubspot_read
      - google_docs_create
      - case_study_search

Option 2: n8n/Make Workflow​

Build a visual workflow:

  1. Trigger: New deal reaches "Proposal Requested" stage
  2. Fetch: Pull CRM data
  3. AI: Generate sections with Claude API
  4. Create: New Google Doc from template
  5. Notify: Slack message to rep with link

Option 3: Custom Script​

# Run proposal generation
node generate-proposal.js --deal-id 12345 --output gdoc

# Output:
# ✅ Data gathered from HubSpot
# ✅ Executive summary generated
# ✅ Case study selected: CloudHR
# ✅ Pricing calculated: $24,000/year
# ✅ Google Doc created: [link]
# ✅ Sent to #sales-proposals

Quality Control Checklist​

Before sending any AI-generated proposal:

  • Company name correct everywhere (search for placeholder text)
  • Pain points match discovery notes
  • Case study is relevant (same industry or problem)
  • Pricing math is correct
  • No hallucinated features or claims
  • Contact names spelled correctly
  • Timeline is realistic
  • Legal/compliance review if required

The AI does 80% of the work. The human does the 20% of review that ensures quality.
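The first checklist item is the easiest to automate. A minimal sketch of a placeholder scan, assuming the placeholder tokens below match whatever your own proposal template uses:

```python
import re

# Tokens that indicate an unfilled template field. Adjust these patterns
# to match the placeholders your own proposal template actually uses.
PLACEHOLDER_PATTERNS = [r"\{\{.*?\}\}", r"\[COMPANY\]"]

def find_placeholders(text: str) -> list[str]:
    """Return any unfilled placeholder tokens left in a proposal draft."""
    hits = []
    for pattern in PLACEHOLDER_PATTERNS:
        hits.extend(re.findall(pattern, text))
    return hits

draft = "Dear {{contact_name}}, thanks for meeting with [COMPANY] last week."
leftovers = find_placeholders(draft)
# leftovers -> ['{{contact_name}}', '[COMPANY]']
```

Run it as a pre-send gate: if `leftovers` is non-empty, the proposal never leaves the drafts folder.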

ROI Calculation​

Before AI proposals:

  • Time per proposal: 3-4 hours
  • Proposals per week: 5
  • Weekly hours: 15-20
  • Monthly cost (at $75/hour): $4,500-6,000

After AI proposals:

  • Time per proposal: 30-45 minutes (review + customize)
  • Proposals per week: 5
  • Weekly hours: 2.5-4
  • Monthly cost: $750-1,200
  • AI costs: ~$50/month

Monthly savings: $3,250-5,200 (after AI costs)

Plus: faster turnaround means deals don't stall waiting for proposals.
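The arithmetic above is worth sanity-checking with your own numbers. A few lines that reproduce it, using the rates and hours stated above as inputs:

```python
def monthly_savings(hours_before: float, hours_after: float,
                    hourly_rate: float, ai_cost: float = 50.0,
                    weeks_per_month: float = 4.0) -> float:
    """Monthly dollars saved, net of AI tooling costs."""
    before = hours_before * hourly_rate * weeks_per_month
    after = hours_after * hourly_rate * weeks_per_month + ai_cost
    return before - after

# Best case: 20 hours/week drops to 2.5, at $75/hour.
best = monthly_savings(20, 2.5, 75)   # 6000 - (750 + 50) = 5200
# Worst case: 15 hours/week drops to 4.
worst = monthly_savings(15, 4, 75)    # 4500 - (1200 + 50) = 3250
```

Swap in your own hourly rate and proposal volume to get a number you can defend.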

Start Building Today​

  1. Document your proposal structure — What sections do you always include?
  2. Gather your inputs — What data goes into each section?
  3. Test with Claude — Paste a discovery transcript and ask for an executive summary
  4. Iterate — Refine prompts until output quality is consistently good
  5. Automate — Connect to your CRM and document tools

The best proposal is the one that arrives fast and speaks directly to what the prospect cares about. AI makes that possible at scale.


Free Tool

Try our AI Lead Generator — find verified LinkedIn leads for any company instantly. No signup required.

Want Proposals That Write Themselves?​

MarketBetter helps sales teams move faster at every stage. From finding leads to closing deals, our AI-powered platform handles the research so you can focus on selling.

Book a Demo →


Related reading:

The AI SDR Tech Stack: Tools We Actually Use at MarketBetter [2026]

· 8 min read
sunder
Founder, marketbetter.ai

AI SDR Tech Stack Diagram

Everyone talks about "AI for sales." Few share what they actually use.

At MarketBetter, we don't just build AI-powered SDR tools — we use them. Every day. Our entire GTM motion runs on an AI-first stack that handles everything from lead research to email personalization to competitor intelligence.

This isn't a theoretical "you could do this" post. This is our actual stack, with real tools, real workflows, and honest assessments of what works and what doesn't.

Why Build an AI-First GTM Stack?​

The math is simple:

Traditional SDR workflow:

  • 40% of time on research
  • 30% on manual data entry
  • 20% on email/call prep
  • 10% actually selling

AI-augmented SDR workflow:

  • 10% reviewing AI research
  • 10% approving personalized content
  • 10% on strategy and exceptions
  • 70% actually selling

The shift isn't about replacing humans. It's about removing the grunt work so SDRs can do what they're good at: building relationships and closing deals.

Our Core Stack: The Foundation​

1. OpenClaw (AI Agent Orchestration)​

What it does: Runs our AI agents as persistent assistants with memory, tools, and the ability to work autonomously.

How we use it: We have multiple specialized agents that handle different parts of our GTM motion:

  • Content research and creation
  • Competitor intelligence gathering
  • Lead enrichment and scoring
  • Email personalization

Why it matters: Without an orchestration layer, AI is just a chat interface. OpenClaw turns it into an actual worker that can remember context, access tools, and complete multi-step tasks without constant babysitting.

The honest take: Setup isn't trivial. You need technical chops to configure agents properly. But once it's running, the leverage is enormous. One well-configured agent can do the work of multiple human hours daily.

2. Claude (AI Reasoning Engine)​

What it does: The brain behind our agents. Handles complex reasoning, writing, and decision-making.

How we use it:

  • Writing personalized outreach
  • Analyzing competitor positioning
  • Summarizing call transcripts
  • Generating content briefs

Why Claude over GPT-4? For sales tasks specifically:

  • Better at following complex instructions
  • More natural writing style (less "AI-sounding")
  • Stronger at maintaining context across long conversations
  • More reliable at structured output

The honest take: Claude is more expensive than GPT-4 Turbo for high-volume tasks. We use Claude for quality-critical work (outreach, content) and GPT-4 Turbo for bulk processing where good-enough is fine.

3. HubSpot (CRM + Automation)​

What it does: Our central system of record for all customer and prospect data.

How we integrate AI:

  • AI agents read deal context before generating outreach
  • Automatic enrichment of new contacts with AI-gathered intel
  • Activity logging from AI workflows
  • Lead scoring enhanced with AI signals

Why not just use HubSpot's AI? HubSpot's native AI is improving, but it's generic. Our stack lets us:

  • Use custom prompts optimized for our ICP
  • Integrate signals HubSpot doesn't have
  • Control exactly how AI interacts with our data

The honest take: HubSpot's API is solid but rate-limited. We cache aggressively and batch operations to avoid hitting limits during high-activity periods.

The Research Layer: Where AI Shines Brightest​

4. Brave Search API (Real-Time Intelligence)​

What it does: Programmatic web search without the Google tax.

How we use it:

  • Real-time company news before outreach
  • Competitor monitoring (pricing changes, product launches, hiring)
  • Industry trend research for content
  • Finding contact info and social profiles

Why Brave over Google?

  • Better pricing for API access
  • Less aggressive rate limiting
  • Cleaner results without SEO spam

Pro tip: Combine search with web scraping. Search finds the pages; scraping extracts the data. AI then synthesizes it into usable intelligence.
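A minimal sketch of the search half of that pattern. The endpoint and `X-Subscription-Token` header follow Brave's public Search API; treat the details as assumptions and verify against the current docs before relying on them:

```python
import urllib.parse

# Brave's web-search endpoint (per their public API docs; verify before use).
BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_search_request(query: str, api_key: str) -> tuple[str, dict]:
    """Return the URL and headers for one Brave web-search call."""
    url = f"{BRAVE_ENDPOINT}?{urllib.parse.urlencode({'q': query})}"
    headers = {"X-Subscription-Token": api_key, "Accept": "application/json"}
    return url, headers

url, headers = build_search_request("Acme Corp funding news", "YOUR_KEY")
# Fetch `url` with these headers, scrape the result pages, then hand the
# extracted text to the model to synthesize into a research brief.
```

Keeping request construction separate from fetching makes it easy to batch queries and cache results against rate limits.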

5. LinkedIn Sales Navigator​

What it does: B2B prospecting and intent signals.

How we integrate AI:

  • AI reviews prospect activity before outreach
  • Automated analysis of shared connections
  • Content engagement tracking

The honest take: LinkedIn's API access is restrictive. We mostly use it manually but have AI help process and analyze the data we extract.

The Content Engine: AI-Generated At Scale​

6. Replicate (Image Generation)​

What it does: Creates custom images for blog posts and social content.

How we use it:

  • Workflow diagrams for tutorials
  • Quote cards for social sharing
  • Featured images for blog posts
  • Comparison graphics

Why Replicate?

  • Pay-per-image pricing (no subscriptions)
  • Fast generation via Flux
  • API-friendly for automation

The honest take: AI-generated images still need human review. About 70% are usable on first try; the rest need re-generation or light editing.

7. Our Blog Pipeline​

The workflow:

  1. AI agent receives content brief (topic, keywords, angle)
  2. Agent researches using web search
  3. Agent writes first draft in Docusaurus MDX format
  4. Agent generates 2-3 images
  5. Agent creates GitHub PR
  6. Human reviews and merges
  7. Auto-deploy to production

Volume: We're pushing 5+ blog posts daily during content sprints.

Quality control: AI writes, humans approve. Every piece gets a human eye before publishing. But the human review takes 5 minutes instead of the 2+ hours writing would take.

The Communication Layer: Personalization at Scale​

8. Email (Microsoft 365 + AI Drafts)​

The workflow:

  • AI researches prospect
  • AI generates personalized draft
  • Human reviews in drafts folder
  • Human sends (or edits then sends)

Why not fully automated sends? Trust. We want human judgment on anything that goes out under our name. AI proposes; humans dispose.

Personalization elements AI handles:

  • Recent company news references
  • Industry-specific pain points
  • Role-specific messaging
  • Timing recommendations

9. Slack (Internal Communication)​

How AI plugs in:

  • Automated alerts for important signals
  • Daily briefings from agents
  • Quick queries to AI from any channel

The honest take: The key is making AI accessible where work happens. Forcing people to switch contexts kills adoption.

The Intelligence Layer: Knowing Your Market​

10. Supabase (Data Lake)​

What it does: Stores and organizes all the intelligence our AI gathers.

What we track:

  • Competitor intel (pricing, features, positioning)
  • Customer insights (pain points, wins, objections)
  • Content performance (what's working)
  • Agent activity (what's been done)

Why Supabase?

  • PostgreSQL flexibility
  • Real-time subscriptions
  • Simple API
  • Generous free tier

The power move: When agents research a competitor, the insights go into Supabase. Next time anyone asks about that competitor, the answer is instant — no re-research needed.
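A sketch of what that cache write could look like. The table name and columns are illustrative, and the commented-out client calls follow the general supabase-py pattern (`create_client`, `table().upsert().execute()`); check the current supabase-py docs before wiring it up:

```python
from datetime import datetime, timezone

def build_intel_record(competitor: str, insights: dict) -> dict:
    """Shape one research result for upsert into a hypothetical 'intel' table."""
    return {
        "competitor": competitor,
        "insights": insights,
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }

record = build_intel_record("RivalCo", {"pricing": "raised 10% in Jan"})

# With a live client (assumes SUPABASE_URL / SUPABASE_KEY are set):
# from supabase import create_client
# client = create_client(SUPABASE_URL, SUPABASE_KEY)
# client.table("intel").upsert(record, on_conflict="competitor").execute()
```

Upserting on the competitor name is what makes repeat lookups instant: one current row per competitor instead of a pile of stale duplicates.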

What's NOT in Our Stack (And Why)​

We Don't Use: Automated LinkedIn Outreach Tools​

Why not: LinkedIn actively bans accounts that automate. The risk isn't worth it. We use LinkedIn for research and manual engagement only.

We Don't Use: AI Voice Callers (For Cold Outreach)​

Why not: The tech isn't there yet for cold calls. AI voice works for appointment reminders and simple transactions, but complex sales conversations still need humans.

We Don't Use: "All-in-One" AI Sales Platforms​

Why not: They're jacks of all trades, masters of none. Purpose-built tools connected by AI orchestration outperform monolithic platforms.

Results: What This Stack Delivers​

Since implementing this AI-first approach:

Research time: Down 80% (from 2 hours to 25 minutes per prospect deep-dive)

Email personalization: Every email is personalized. Previously, only high-value targets got custom messages.

Content output: 10x increase in blog production without adding headcount.

Competitor intelligence: Real-time vs. quarterly reports.

Lead response time: Under 5 minutes for inbound vs. industry average of 47 hours.

Building Your Own AI SDR Stack: Where to Start​

If You're Technical​

  1. Start with OpenClaw + Claude
  2. Connect to your CRM via API
  3. Build research workflows first (highest immediate ROI)
  4. Add content generation next
  5. Layer in communication drafting

If You're Not Technical​

  1. Start with ChatGPT/Claude directly for individual tasks
  2. Use Zapier to connect tools
  3. Focus on one workflow at a time
  4. Consider platforms like MarketBetter that package AI-powered SDR workflows without requiring technical setup

The Honest Assessment​

What AI does well:

  • Research and synthesis
  • First-draft writing
  • Pattern recognition across large datasets
  • 24/7 availability for routine tasks

What AI still struggles with:

  • Nuanced relationship building
  • Complex negotiation
  • Reading emotional cues
  • Knowing when rules should be broken

The winning formula: AI for scale and speed. Humans for judgment and relationships.

What's Next for Our Stack​

We're actively working on:

  1. Better lead scoring — Using AI to analyze intent signals across multiple sources
  2. Automated call prep — Briefing documents generated before every sales call
  3. Real-time competitive intel — Alerts when competitors make moves
  4. Predictive outreach timing — AI learning when prospects are most receptive

Free Tool

Try our Tech Stack Detector — instantly detect any company's tech stack from their website. No signup required.

Try It Yourself​

Building an AI-first GTM stack isn't about buying one magic tool. It's about connecting specialized tools with AI orchestration.

Start small. Pick your biggest time sink. Automate that one thing. See results. Expand.


Want to see AI-powered SDR workflows in action? Book a demo of MarketBetter to see how we turn intent signals into actionable playbooks for your SDRs — no AI expertise required.

How to Automate Lead Research with Claude Code [Step-by-Step Tutorial]

· 6 min read

The average SDR spends 6 hours per week researching prospects. That's 6 hours of:

  • Googling company names
  • Scanning LinkedIn profiles
  • Reading news articles
  • Looking for pain points to reference

What if you could do all that in 30 seconds?

Claude Code—Anthropic's AI with tool use and code execution—can turn a prospect name into a complete research brief automatically. Here's exactly how to set it up.

Claude Code researching prospects from multiple data sources

What Good Lead Research Actually Looks Like​

Before we automate, let's define what we're building. A great prospect brief includes:

  1. Company Overview: What they do, company size, industry
  2. Recent News: Funding, product launches, leadership changes
  3. Tech Stack: What tools they already use (if visible)
  4. Pain Point Signals: Job postings, complaints, market trends
  5. Personalization Hooks: Specific details for your outreach

This used to take 10-15 minutes per prospect. Now it takes seconds.

The Claude Code Approach​

Claude Code can:

  • Execute searches and aggregate results
  • Read web pages and extract key information
  • Structure unstructured data into useful formats
  • Reason about what matters for your use case

Here's a prompt template that generates complete prospect briefs:

Research this company for a B2B sales outreach:

**Company:** {{company_name}}
**Our Product:** AI-powered SDR platform that turns intent signals into pipeline

**Create a prospect brief with:**

1. **Company Overview**
- What they do (one sentence)
- Employee count and headquarters
- Industry and target market

2. **Recent Activity (Last 6 Months)**
- Funding or acquisitions
- Product launches
- Leadership changes
- Press coverage

3. **Sales-Relevant Signals**
- Are they hiring for SDRs, sales ops, or demand gen?
- Any complaints about lead quality or outbound efficiency?
- What CRM/sales stack do they use? (check job postings)

4. **Personalization Hooks**
- 3 specific details I can reference in an email
- Potential pain points based on their situation
- Suggested angle for outreach

5. **Recommended Next Step**
- Best channel to reach them (email, LinkedIn, phone)
- Suggested first message angle

Be specific. Use actual data, not generic statements.

Setting Up Automated Research​

Option 1: OpenClaw + Claude (Always-On)​

If you want research to run automatically when new leads come in:

# OpenClaw config
cron:
  jobs:
    - name: "New Lead Research"
      schedule:
        kind: every
        everyMs: 900000 # Every 15 minutes
      payload:
        kind: agentTurn
        message: |
          Check HubSpot for contacts added in the last 15 minutes.
          For each new contact, create a prospect brief and add it
          to the contact notes field.

This runs in the background, enriching leads as they arrive.

Option 2: Claude Code CLI (On-Demand)​

For manual research when you need it:

# Install Claude Code
npm install -g @anthropic-ai/claude-code

# Run research
claude-code research "Acme Corp"

Option 3: VS Code Extension​

If you work in VS Code, Claude Code integrates directly:

  1. Highlight a company name
  2. Cmd+Shift+P → "Claude: Research Prospect"
  3. Get a brief in your sidebar

Lead research funnel: Raw data to enriched profile

Real Research Output Example​

Here's what Claude Code actually produces for a real company:


Company: Hologram (hologram.io)

Overview: IoT connectivity platform providing global cellular for devices. ~150 employees, HQ in Chicago. Series B ($65M from Battery Ventures).

Recent Activity:

  • Feb 2026: Launched Hyper network for low-latency IoT
  • Jan 2026: Partnership with AWS IoT Core announced
  • Hiring: 3 open SDR roles, 2 demand gen positions

Sales Signals:

  • Job posting mentions "scaling outbound motion" and "improving lead quality"
  • Uses HubSpot (seen in job req), Outreach for sequences
  • Active on G2 responding to reviews (cares about buyer perception)

Personalization Hooks:

  1. Reference the Hyper launch: "Saw the Hyper network announcement—congrats"
  2. Note the hiring push: "Looks like you're scaling the SDR team"
  3. Connect to IoT/connectivity angle: "We work with several IoT companies..."

Recommended Approach: LinkedIn → Email sequence. Their team is active on LinkedIn. Reference specific content they've posted.


This took 15 seconds to generate. A human would need 10-15 minutes minimum.

Enrichment Sources Claude Code Can Access​

When you give Claude Code research tasks, it can pull from:

| Source | What It Finds |
| --- | --- |
| Company website | Products, pricing, team page |
| LinkedIn | Employee count, org structure, recent posts |
| Job boards | Hiring signals, tech stack clues |
| News sites | Funding, partnerships, launches |
| G2/Capterra | Reviews, complaints, competitor comparisons |
| Crunchbase | Funding history, investors, competitors |

The key is structuring your prompt to tell Claude what matters for your specific outreach.

Advanced: Building a Research Pipeline​

For high-volume prospecting, build a full pipeline:

[New Lead] 
↓
[Basic Enrichment]
- Company size, industry
- Contact title, seniority
↓
[ICP Scoring]
- Match against ideal customer profile
- Score 1-100
↓
[Deep Research] (if score > 70)
- Full prospect brief
- Personalization hooks
↓
[Routing]
- Hot leads → Slack alert + call queue
- Warm leads → Automated sequence
- Cold leads → Nurture list

Each step can be automated with Claude Code + OpenClaw.
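The score-then-route step in the middle of that pipeline can be sketched in a few lines. The thresholds and routes mirror the diagram; the scoring itself is a toy stand-in for real firmographic matching:

```python
def score_icp(lead: dict) -> int:
    """Toy ICP score (0-100). Replace the rules with your real profile."""
    score = 0
    if lead.get("industry") == "saas":
        score += 50
    if lead.get("employees", 0) >= 50:
        score += 30
    if "vp" in lead.get("title", "").lower():
        score += 20
    return score

def route(lead: dict) -> str:
    """Route a lead per the pipeline: deep research only above 70."""
    score = score_icp(lead)
    if score > 70:
        return "hot: Slack alert + call queue (run deep research first)"
    if score > 40:
        return "warm: automated sequence"
    return "cold: nurture list"

lead = {"industry": "saas", "employees": 120, "title": "VP Sales"}
# route(lead) -> "hot: Slack alert + call queue (run deep research first)"
```

The `score > 70` gate is what keeps deep research (the expensive step) reserved for leads that deserve it.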

Common Mistakes to Avoid​

1. Researching Every Lead Equally

Not every lead deserves 10 minutes of research. Use basic enrichment to score first, then deep-dive on high-potential prospects only.

2. Ignoring Negative Signals

Good research includes disqualifying information. If a company just laid off their sales team, that's important context.

3. Stale Data

Information decays. Set up refresh cycles for long-nurture prospects.

4. Over-Personalizing

Mentioning 5 specific details in an email feels creepy. Pick the ONE most relevant hook.
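"Pick the ONE most relevant hook" is itself automatable. A small sketch with an illustrative weighting (recency and specificity scores would come from your research step; the 0.6/0.4 weights are an assumption, not a rule):

```python
def pick_best_hook(hooks: list[dict]) -> str:
    """Keep one hook, scored by recency and specificity (each 0-1)."""
    best = max(hooks, key=lambda h: 0.6 * h["recency"] + 0.4 * h["specificity"])
    return best["text"]

hooks = [
    {"text": "Congrats on the Hyper launch", "recency": 0.9, "specificity": 0.8},
    {"text": "Saw you're hiring SDRs", "recency": 0.6, "specificity": 0.6},
    {"text": "We work with IoT companies", "recency": 0.2, "specificity": 0.3},
]
# pick_best_hook(hooks) -> "Congrats on the Hyper launch"
```

Everything except the winner gets discarded before the email draft is generated, which keeps the outreach pointed instead of creepy.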

Measuring Research Quality​

Track these metrics:

  • Time per lead: Should drop from 10-15 min to under 1 min
  • Reply rates: Better research β†’ better personalization β†’ higher replies
  • Qualification accuracy: Are AI-scored leads actually converting?
  • Rep adoption: Is your team actually using the briefs?

The MarketBetter Advantage​

MarketBetter does this automatically for every website visitor:

  1. Identify: Know which companies visit your site
  2. Enrich: Pull firmographic and technographic data
  3. Research: AI generates prospect briefs
  4. Prioritize: Score and route to the right rep
  5. Act: Get a daily playbook of exactly who to contact

No manual research required. No copy-pasting between tools.


Ready to automate your lead research? See how MarketBetter turns visitor identification into actionable prospect intelligence. Book a demo.

How to Build a Custom Sales Copilot with OpenClaw [2026 Tutorial]

· 7 min read

"Who should I call next?"
"What's the context on this deal?"
"Draft me a follow-up email."

What if you could ask these questions and get instant, accurate answers—based on YOUR CRM, YOUR email history, YOUR calendar?

That's what a sales copilot does. And with OpenClaw, you can build one for free.

Sales copilot workflow: User query to AI-powered recommendation

What is a Sales Copilot?​

A sales copilot is an AI assistant that knows your business context:

  • Your deals: Pipeline stages, deal values, next steps
  • Your contacts: Communication history, preferences, pain points
  • Your calendar: Upcoming meetings, prep needed
  • Your emails: Recent exchanges, follow-ups due

Unlike generic ChatGPT, a copilot doesn't need you to paste context every time. It already knows.

Why Build Your Own (vs. Buying)?​

Off-the-shelf AI SDR tools cost $35,000-50,000/year. They come with:

  • Features you don't need
  • Limitations on customization
  • Data that lives on someone else's servers

OpenClaw is free and open source. You:

  • Build exactly what you need
  • Own your data completely
  • Customize endlessly

Architecture Overview​

Here's what we're building:

[You via WhatsApp/Slack/Telegram]
↓
[OpenClaw]
↓
[Claude/GPT-4 API]
↓
┌─────────────────────┐
│ Your Data Sources   │
│ • HubSpot CRM       │
│ • Gmail/Outlook     │
│ • Google Calendar   │
│ • Company Docs      │
└─────────────────────┘
↓
[Context-Aware Response]

OpenClaw acts as the bridge, connecting the AI to your tools.

Data sources feeding into the sales copilot AI

Step 1: Install OpenClaw​

npx openclaw@latest init

This creates your workspace with:

  • Configuration files
  • Memory system
  • Agent definitions

Step 2: Connect Your Data Sources​

HubSpot CRM​

Create a Private App in HubSpot and add to your config:

# .openclaw/config.yaml
plugins:
  hubspot:
    enabled: true
    token: ${HUBSPOT_TOKEN}

Email (Microsoft 365)​

Set up Graph API access:

plugins:
  ms365:
    enabled: true
    clientId: ${MS365_CLIENT_ID}
    tenantId: ${MS365_TENANT_ID}
    clientSecret: ${MS365_CLIENT_SECRET}

Google Calendar​

plugins:
  gcal:
    enabled: true
    credentials: ${GCAL_CREDENTIALS_PATH}

Step 3: Define Your Copilot's Personality​

Create a SOUL.md file that defines how your copilot behaves:

# SOUL.md - Sales Copilot

## Who I Am
I'm your sales copilot. I know your pipeline, your contacts,
and your calendar. I help you sell smarter.

## How I Communicate
- Direct and actionable
- I give specific recommendations, not generic advice
- I cite my sources (which deal, which email, etc.)
- I flag urgency when it matters

## What I Can Do
- Pull deal info from HubSpot
- Summarize email threads
- Check upcoming meetings
- Draft follow-up messages
- Alert you to stale deals

## What I Won't Do
- Send emails without your approval
- Make changes to CRM without confirmation
- Share your data anywhere

Step 4: Create Core Commands​

Define common queries your copilot can handle:

"Who should I call today?"​

# In your agent config
prompts:
  call_priority:
    message: |
      Check my HubSpot pipeline and identify:
      1. Deals that haven't had activity in 7+ days
      2. Deals with meetings scheduled this week
      3. High-value deals (>$10K) in negotiation stage

      Rank by urgency. For each, tell me:
      - Company name and deal value
      - Last contact date and method
      - Suggested talking point based on history

"Prep me for my next call"​

prompts:
  meeting_prep:
    message: |
      My next meeting is in {{time_until}} with {{contact_name}}.

      Pull together:
      1. Company overview (from HubSpot + web research)
      2. Deal history and current stage
      3. All email exchanges in last 30 days
      4. Key talking points based on their pain points
      5. Potential objections to prepare for

      Format as a quick-reference briefing I can scan in 2 minutes.

"Draft a follow-up email"​

prompts:
  follow_up:
    message: |
      I just finished a call with {{contact_name}} at {{company}}.

      Based on our email history and CRM notes, draft a follow-up email that:
      1. Thanks them for the call
      2. Summarizes key points we discussed
      3. Proposes clear next steps
      4. Maintains my usual tone (check recent sent emails)

      Keep it under 150 words.

Step 5: Set Up Proactive Alerts​

Your copilot shouldn't just respond—it should reach out when needed:

# Cron jobs for proactive alerts
cron:
  jobs:
    - name: "Morning Pipeline Brief"
      schedule:
        kind: cron
        expr: "0 8 * * 1-5" # 8am weekdays
      payload:
        kind: agentTurn
        message: |
          Good morning! Here's your pipeline brief:

          1. Meetings today (with quick context)
          2. Deals needing attention (stale or slipping)
          3. Follow-ups due
          4. Any hot signals (new website visitors, email opens)

          Keep it to 5-7 bullet points max.

    - name: "Stale Deal Alert"
      schedule:
        kind: cron
        expr: "0 14 * * 1-5" # 2pm weekdays
      payload:
        kind: agentTurn
        message: |
          Check for deals over $5K that haven't been touched in 10+ days.
          If you find any, alert me with:
          - Deal name and value
          - Last activity
          - Suggested re-engagement approach

Step 6: Connect Your Messaging Channel​

OpenClaw supports multiple channels:

WhatsApp (Personal)​

openclaw whatsapp link
# Scan QR code with your phone

Slack​

plugins:
  slack:
    enabled: true
    token: ${SLACK_BOT_TOKEN}
    channel: "#sales-copilot"

Telegram​

plugins:
  telegram:
    enabled: true
    token: ${TELEGRAM_BOT_TOKEN}

Real Conversation Examples​

Here's what interacting with your copilot looks like:

You: Who needs follow-up today?

Copilot: 3 follow-ups due:

  1. Acme Corp ($24K) - Demo was 3 days ago, promised to send proposal. Send it now.

  2. Initech ($12K) - They went quiet after pricing. Suggest: value-focused check-in email.

  3. Hooli ($45K) - Contract review since Monday. Time for a nudge.

Want me to draft emails for any of these?


You: Yes, draft Acme proposal email

Copilot: Based on your call notes from Thursday:


Subject: Acme proposal as promised

Hi Sarah,

Great speaking Thursday about reducing your SDR ramp time. As promised, I've attached our proposal.

Key points covered:

  • Visitor identification for 5K monthly visitors
  • HubSpot integration (you mentioned this was critical)
  • 90-day pilot option

Want to find 15 minutes next week to walk through it?

Best, [Your name]


Send, edit, or scrap?

Advanced: Adding Custom Tools​

OpenClaw supports custom tools. Example: Add a "competitor check" tool:

// tools/competitor-check.js
module.exports = {
  name: 'check_competitor',
  description: 'Check if a prospect uses a competitor product',
  parameters: {
    company: { type: 'string', required: true }
  },
  async execute({ company }) {
    // Check BuiltWith, job postings, etc.
    const techStack = await fetchTechStack(company);
    return techStack.filter(t => COMPETITORS.includes(t));
  }
};

Now your copilot can answer: "Does Acme use any competitor products?"

Common Pitfalls​

1. Too Much Context Don't dump your entire CRM into every prompt. Be selective about what context matters for each query.

2. No Memory Structure Use OpenClaw's memory files to store persistent context (ICP, competitors, etc.) so you don't repeat yourself.

3. Forgetting Permissions Your copilot has access to sensitive data. Keep it on a secure channel. Don't expose it to shared workspaces.

4. Over-Automation Start with assisted workflows (copilot drafts, you approve). Don't go full autonomous until you trust the outputs.

What's Next?​

Once your basic copilot works, extend it:

  • Add competitive intelligence lookups
  • Connect to Gong/Chorus for call analysis
  • Build a deal coaching mode
  • Create a forecasting assistant

The foundation is the sameβ€”you're just adding more context and capabilities.


Don't want to build from scratch? MarketBetter comes with AI-powered playbooks built in. Visitor identification, lead prioritization, and recommended actions—no coding required. Book a demo.

Free Tool

Try our AI Lead Generator β€” find verified LinkedIn leads for any company instantly. No signup required.

Claude's 200K Context Window: Why It Changes Everything for Sales Teams [2026]

· 7 min read

Most AI tools choke when you need them most.

You're prepping for a demo. You want the AI to understand the prospect's entire journey—the 47-email thread, the Gong call transcript, the CRM notes from three different reps, their company's latest 10-K filing.

You paste it all in. The AI says: "This exceeds the maximum context length."

That's a 4K-32K context window in action. It's like trying to fit an enterprise deal into a Post-it note.

Claude's 200K token context window changes everything.

Claude's 200K context window visualization showing all sales data types

What is a Context Window (And Why Does Size Matter)?​

A context window is how much text an AI can "see" at once. Think of it as working memory:

  • 4K tokens (~3,000 words): One email thread, maybe
  • 32K tokens (~24,000 words): A few documents
  • 128K tokens (~96,000 words): A substantial research project
  • 200K tokens (~150,000 words): An entire deal history. Every touchpoint. Every document.

For sales, this isn't a nice-to-have. It's the difference between AI that knows your prospect and AI that guesses.

Context window size comparison across AI models

Real Sales Use Cases for 200K Context​

1. Complete Deal Context Before Every Call​

Load into a single prompt:

  • Every email exchange (all 47 of them)
  • Gong/Chorus call transcripts from discovery + demo
  • LinkedIn activity and posts from key stakeholders
  • Their company's recent earnings call
  • Competitor mentions from their 10-K
  • Internal Slack conversations about the deal
  • CRM notes from every rep who touched the account

Now ask: "What are the three objections most likely to come up in tomorrow's negotiation call?"

Claude doesn't guess. Claude knows.

2. Personalized Outreach at Scale​

Traditional AI personalization:

"I noticed you're the VP of Sales at {company}. I'd love to show you how..."

200K context personalization:

Load: Their last 10 LinkedIn posts, company blog, recent podcast appearance,
job postings, press releases, G2 reviews they've written

Generate: Hyper-personalized email referencing their actual stated priorities,
using their vocabulary, addressing their specific challenges

The difference is palpable. One feels like spam. The other feels like you've done your homework.

3. Competitive Battle Cards That Actually Help​

Instead of generic battle cards, load:

  • Your competitor's entire pricing page
  • Their G2 reviews (all of them, including the 1-stars)
  • Their recent changelog/releases
  • Job postings (reveals their priorities)
  • Customer complaints on Twitter/LinkedIn
  • Their sales team's LinkedIn posts (yes, really)

Ask: "Based on all of this, what are the three biggest weaknesses we should exploit, and how should we position against each?"

The output is specific, actionable, and current—not a PDF from six months ago.

4. Account Planning That Sees Everything​

For enterprise deals, load the entire account history:

  • All closed-won and closed-lost deals
  • Every support ticket
  • Product usage data
  • Expansion history
  • Key contact changes
  • Champion departures

Ask: "Create an account plan for the renewal. What's the risk level, who are our champions, and what expansion opportunities exist?"

How to Use Claude 200K in Your Sales Stack​

Option 1: Direct API Integration​

import anthropic

client = anthropic.Anthropic()

# Load all your deal context
deal_context = load_deal_context("acme-corp")  # Returns ~100K tokens

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=4096,
    messages=[
        {
            "role": "user",
            "content": f"""
Here is the complete deal context for Acme Corp:

{deal_context}

Based on all of this information, prepare me for tomorrow's
negotiation call. What objections should I expect? What
leverage do we have? What's the likely outcome?
""",
        }
    ],
)

Option 2: OpenClaw for Continuous Context​

OpenClaw maintains persistent context across conversations:

# openclaw.yaml
agents:
  sales-copilot:
    model: claude-3-5-sonnet-20241022
    systemPrompt: |
      You are a sales copilot with access to complete deal context.
      You remember all previous conversations about this account.
      You proactively surface relevant information.

The advantage: Context builds over time. Each interaction adds to what the AI knows.

Option 3: RAG + Full Context Hybrid​

For truly massive datasets (10+ deals, entire CRM):

  1. Use RAG to retrieve relevant chunks
  2. Load retrieved chunks + current deal context into 200K window
  3. Get responses grounded in both specific and broad context
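The steps above can be sketched end to end. Retrieval here is a toy keyword-overlap ranking standing in for a real embedding search, and the prompt layout is one reasonable way to combine the two context sources:

```python
def retrieve(corpus: list[str], query: str, k: int = 2) -> list[str]:
    """Toy retrieval: rank chunks by words shared with the query, keep top k.
    Swap in an embedding-based search for production use."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_hybrid_prompt(corpus: list[str], deal_history: str, question: str) -> str:
    """Retrieved chunks first, then the current deal's full history."""
    chunks = "\n".join(retrieve(corpus, question))
    return (f"## Related deals\n{chunks}\n\n"
            f"## This deal\n{deal_history}\n\n{question}")

corpus = [
    "lost deal: pricing objection on annual terms",
    "won deal: pilot converted after security review",
    "support ticket: latency complaint",
]
prompt = build_hybrid_prompt(corpus, "<full Acme history>",
                             "How do we handle the pricing objection")
```

The retrieved chunks give the model patterns from across your pipeline; the full history keeps it grounded in this specific deal.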

Context Window Comparison: Claude vs The Field​

| Model | Context Window | Real-World Limit | Best For |
| --- | --- | --- | --- |
| GPT-4 | 128K | ~100K usable | Single-deal deep dives |
| GPT-4 Turbo | 128K | ~100K usable | Cost-effective analysis |
| Claude 3.5 Sonnet | 200K | ~180K usable | Multi-deal, full history |
| Claude 3 Opus | 200K | ~180K usable | Complex reasoning + full context |
| Gemini 1.5 Pro | 1M | ~900K usable | Massive document analysis |

For most sales use cases, Claude's 200K hits the sweet spot: enough context for complete deal history without the latency and cost of 1M+ windows.

What Fits in 200K Tokens?​

To give you a sense of scale:

  • 1 email: ~200-500 tokens
  • 1 call transcript (30 min): ~5,000-8,000 tokens
  • 1 10-K filing: ~40,000-60,000 tokens
  • Complete deal history (6-month enterprise sale): ~50,000-80,000 tokens
  • 10 LinkedIn posts: ~2,000-3,000 tokens

You can fit an entire enterprise deal's documentation in a single prompt.
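You can turn those estimates into a quick fit check before assembling a prompt. The per-item token counts below are rough midpoints of the ranges above, not measured values:

```python
# Rough per-item token estimates (midpoints of the ranges above).
EST_TOKENS = {
    "email": 350,
    "call_30min": 6500,
    "ten_k_filing": 50_000,
    "linkedin_post": 250,
}

def fits_in_window(counts: dict, window: int = 200_000,
                   reply_budget: int = 4_096) -> bool:
    """True if the estimated input plus room for a reply fits the window."""
    total = sum(EST_TOKENS[item] * n for item, n in counts.items())
    return total + reply_budget <= window

deal = {"email": 47, "call_30min": 4, "ten_k_filing": 1, "linkedin_post": 10}
# 47*350 + 4*6500 + 50000 + 10*250 = 94,950 tokens -> fits comfortably
```

Even a 47-email enterprise deal with four call transcripts and a 10-K lands under half the window.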

The Prompt Pattern for Sales Context​

Here's a template that works:

```
# Account Context: {Company Name}

## Company Overview
{Paste company research, 10-K summary, news}

## Stakeholder Map
{Paste LinkedIn profiles, org chart notes}

## Conversation History
{Paste all email threads, meeting notes}

## Call Transcripts
{Paste relevant Gong/Chorus transcripts}

## CRM Data
{Paste deal stage, notes, activity history}

## Competitive Context
{Paste what you know about their evaluation}

---

# Task
Based on all of the above context, {your specific request}
```
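Assembling this template programmatically is just string composition over named sections. A minimal sketch (section names mirror the template; the data is placeholder):

```python
# Sections in the order the template presents them.
SECTION_ORDER = [
    "Company Overview",
    "Stakeholder Map",
    "Conversation History",
    "Call Transcripts",
    "CRM Data",
    "Competitive Context",
]

def build_account_prompt(company, sections, task):
    """Render the account-context template from a dict of section bodies.
    Missing sections are marked rather than silently dropped."""
    parts = [f"# Account Context: {company}"]
    for name in SECTION_ORDER:
        parts.append(f"## {name}\n{sections.get(name, '(none yet)')}")
    parts.append(f"---\n\n# Task\nBased on all of the above context, {task}")
    return "\n\n".join(parts)
```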

Common Mistakes to Avoid​

❌ Dumping Everything Without Structure​

Bad:

```
Here's everything: [massive text blob]
What should I do?
```

Good:

```
# Context organized by type
## Emails (chronological)
## Call transcripts
## Company research

# Specific question
What are the top 3 objections likely in tomorrow's call?
```

❌ Forgetting to Update Context​

Your 200K context is only as good as its freshness. Build systems that automatically pull:

  • New emails
  • New CRM notes
  • New call transcripts
  • New stakeholder LinkedIn activity

❌ Ignoring Token Economics​

200K tokens of input ≠ free. At ~$3/M input tokens for Claude 3.5 Sonnet:

  • 200K tokens = ~$0.60 per full-context request
  • Do it 100x/month per rep = $60/rep/month

Still cheaper than a bad deal, but worth optimizing.
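The math above generalizes to a one-line budgeting helper (the $3/M input rate is Claude 3.5 Sonnet's published price as of writing; verify current pricing before budgeting):

```python
def monthly_context_cost(context_tokens, requests_per_month, price_per_million=3.00):
    """Input-token cost of full-context requests at a given per-million rate.
    Returns (cost per request, cost per month) in dollars."""
    per_request = context_tokens / 1_000_000 * price_per_million
    return per_request, per_request * requests_per_month
```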

The Bottom Line​

Claude's 200K context window isn't a spec sheet number to brag about. It's a fundamental shift in what AI can do for sales.

When your AI knows everything about a deal (every email, every call, every document), it stops being a generic assistant and starts being a genuine copilot.

The question isn't whether to use large-context AI for sales. It's whether you can afford not to while your competitors do.


Free Tool

Try our AI Lead Generator β€” find verified LinkedIn leads for any company instantly. No signup required.

Ready to Put AI to Work for Your Sales Team?​

MarketBetter turns AI insights into daily SDR action. Our AI-powered playbook tells your reps exactly who to contact, how to reach them, and what to say, based on real buyer signals.

Book a Demo →


Related reading:

Claude vs ChatGPT for Sales Teams: Which AI Wins in 2026?

· 7 min read
sunder
Founder, marketbetter.ai

Your SDRs spend just 35% of their time actually selling. The rest? Research, data entry, writing emails, prepping for calls. Both Claude and ChatGPT promise to automate this busywork, but they take different approaches.

After running both AIs on real sales workflows at MarketBetter (and building an AI SDR with OpenClaw), here's what we learned about when to use each.

OpenAI Codex vs Claude Code vs ChatGPT: Complete GTM Comparison [2026]

· 6 min read

Three AI tools. All capable. But which one should your GTM team actually use?

With GPT-5.3-Codex dropping February 5, 2026, the landscape just shifted. Again. This guide breaks down OpenAI Codex, Claude Code, and ChatGPT for sales and marketing use cases, with specific recommendations for each workflow.

Comparison matrix: Codex vs Claude vs ChatGPT for GTM

Quick Summary: Which AI for Which Task​

| Use Case | Best Tool | Why |
|---|---|---|
| Code generation/scripts | Codex | Purpose-built, best performance |
| Long research/analysis | Claude | 200K context, better reasoning |
| Quick answers/chat | ChatGPT | Fast, good enough for simple tasks |
| Email personalization | Claude | Nuanced writing, follows instructions |
| Pipeline automation | Codex + OpenClaw | Agentic capabilities, mid-turn steering |
| Sales call prep | Claude | Better at synthesis and summary |
| Proposal generation | Claude | Longer document handling |

Now let's dig into the details.

OpenAI Codex (GPT-5.3-Codex)​

Released: February 5, 2026
What it is: OpenAI's most capable agentic coding model

Key Features​

  • 25% faster than GPT-5.2-Codex
  • Mid-turn steering: Direct the agent while it's working (killer feature)
  • Runs in Codex app, CLI, IDE extension, or Codex Cloud
  • Multi-file changes: Can edit entire codebases
  • Built for autonomy: Designed to work on complex tasks without constant prompting

GTM Use Cases for Codex​

  1. Building Sales Automation Scripts

    • Write HubSpot/Salesforce API integrations
    • Build custom lead scoring models
    • Create data sync workflows
  2. Pipeline Monitoring Systems

    • Alert systems for stale deals
    • Automated reporting dashboards
    • Integration scripts between tools
  3. Custom Sales Tools

    • Chrome extensions for LinkedIn
    • Email template generators
    • Proposal automation systems
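As a concrete flavor of the "alert systems for stale deals" idea, the core logic is just a date filter. The deal shape here is illustrative, not any specific CRM's schema:

```python
from datetime import date, timedelta

def stale_deals(deals, max_idle_days=14, today=None):
    """Flag open deals with no activity in the last `max_idle_days`.
    Each deal is a dict with name, stage, and last_activity (a date)."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_idle_days)
    return [
        d["name"]
        for d in deals
        if d["stage"] != "closed" and d["last_activity"] < cutoff
    ]
```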

Codex Pricing (2026)​

  • Codex CLI: Free tier available, pay per API call
  • Codex Cloud: ~$50/user/month (team features)
  • Enterprise: Custom pricing

When NOT to Use Codex​

  • Simple email writing (overkill)
  • Non-technical tasks (use Claude or ChatGPT)
  • Quick research (Claude's context window is better)

Claude Code (Anthropic)​

What it is: Claude 3.5/4 with tool use and code execution
Integrated into: VS Code, terminal, OpenClaw

Key Features​

  • 200K context window: Can analyze entire documents, codebases, or conversation histories
  • Precise instruction following: Better at nuanced tasks
  • Constitutional AI: More reliable safety guardrails
  • Tool use: Can browse web, execute code, interact with APIs

GTM Use Cases for Claude​

  1. Prospect Research

    • Deep-dive company analysis
    • Competitive intelligence reports
    • Personalization hook identification
  2. Email Writing

    • Personalized outreach at scale
    • Multi-touch sequence creation
    • Reply handling suggestions
  3. Document Analysis

    • Analyzing sales call transcripts
    • Extracting insights from RFPs
    • Summarizing long email threads
  4. Sales Coaching

    • Call analysis and feedback
    • Objection handling suggestions
    • Win/loss pattern identification

Claude Pricing (2026)​

  • Claude.ai: $20/month Pro, $30/month Teams
  • API: $3-15 per million tokens (varies by model)
  • OpenClaw: Free (bring your own API key)

When NOT to Use Claude​

  • Heavy code generation (Codex is faster)
  • Real-time chat (ChatGPT has lower latency)
  • Tasks requiring strict format adherence (can be verbose)

ChatGPT (GPT-4o/4-Turbo)​

What it is: OpenAI's general-purpose assistant
Best for: Quick tasks, brainstorming, general questions

Key Features​

  • Lowest latency: Fastest responses
  • Plugins and GPTs: Extensible for specific use cases
  • Web browsing: Built-in search
  • Voice mode: Conversational interface

GTM Use Cases for ChatGPT​

  1. Quick Research

    • "What does [company] do?"
    • "Who are [competitor]'s biggest customers?"
    • "What's the average deal size in [industry]?"
  2. Brainstorming

    • Subject line ideas
    • Objection responses
    • Campaign angles
  3. Light Automation

    • Simple data formatting
    • Template generation
    • Quick calculations

ChatGPT Pricing (2026)​

  • Free: Basic access
  • Plus: $20/month
  • Team: $25/user/month
  • Enterprise: Custom

When NOT to Use ChatGPT​

  • Complex, multi-step workflows (use Codex)
  • Long document analysis (Claude's context is better)
  • Tasks requiring precise formatting (can be inconsistent)

Three AI tools side by side with key differentiators

Head-to-Head: The Details​

Context Window​

| Tool | Context Window | Implication |
|---|---|---|
| Claude | 200K tokens | Can analyze ~500 pages at once |
| ChatGPT | 128K tokens | Good for most tasks |
| Codex | Varies by task | Designed for code, not documents |

Winner for GTM: Claude. When researching prospects or analyzing long conversations, context matters.

Instruction Following​

Claude excels at following precise instructions. If you say "write exactly 3 bullet points," you get 3 bullet points.

ChatGPT tends to add extra context or caveats.

Codex is excellent for technical instructions but can over-engineer simple requests.

Winner for GTM: Claude for content, Codex for technical tasks.

Agentic Capabilities​

Codex was built for autonomous work. The mid-turn steering feature lets you redirect it without starting over.

Claude can be agentic via OpenClaw but requires more setup.

ChatGPT's agentic features are limited.

Winner for GTM: Codex for automation, Claude via OpenClaw for custom agents.

Speed​

| Tool | Response Time | Throughput |
|---|---|---|
| ChatGPT | Fastest | Best for high-volume |
| Claude | Medium | Good for quality |
| Codex | Varies | Designed for complex tasks |

Winner for GTM: Depends on use case. ChatGPT for quick tasks, Codex for batch processing.

The Best Stack for GTM Teams​

Based on our analysis, here's the optimal setup:

For SDRs​

  • Primary: Claude (via OpenClaw for automation)
  • Secondary: ChatGPT (quick questions)
  • When needed: Codex (building custom tools)

For Sales Ops​

  • Primary: Codex (building automation)
  • Secondary: Claude (analysis and research)
  • When needed: ChatGPT (quick prototyping)

For Marketing​

  • Primary: Claude (content and research)
  • Secondary: ChatGPT (brainstorming)
  • When needed: Codex (programmatic SEO, automation)

Integration Comparison​

OpenClaw Compatibility​

| Tool | OpenClaw Support | Setup |
|---|---|---|
| Claude | Native | Add API key |
| GPT-4 | Native | Add API key |
| Codex | Via API | Requires custom config |

OpenClaw works best with Claude due to Anthropic's tool use design.

CRM Integration​

  • Codex: Best for building custom integrations
  • Claude: Best for enrichment and research tasks
  • ChatGPT: Limited native integration

Real-World Performance: Email Personalization​

We tested all three on the same task: Write a personalized cold email for a VP of Sales at a 200-person SaaS company.

Claude Output​

Followed the template exactly. Referenced specific company details. Professional but warm tone. 94 words (as requested).

ChatGPT Output​

Added extra context we didn't ask for. Good personalization but verbose. 147 words (missed the target).

Codex Output​

Technical and formal. Suggested code-like structures. Not ideal for email writing.

Winner: Claude for email personalization.

The Bottom Line​

There's no single best tool. The right answer depends on your workflow:

  • Building automation? → Codex
  • Writing content? → Claude
  • Quick questions? → ChatGPT
  • Running AI agents 24/7? → OpenClaw + Claude

The smartest teams use multiple tools for different tasks rather than forcing one tool to do everything.


Want AI that's already integrated? MarketBetter combines visitor identification, AI-powered playbooks, and automated outreach in one platform. No prompt engineering required. Book a demo.


OpenAI Codex Mid-Turn Steering: The Killer Feature for GTM Teams [2026]

· 6 min read

When GPT-5.3-Codex dropped on February 5, 2026, everyone focused on the "25% faster" headline. But the real game-changer? Mid-turn steering.

This feature lets you redirect an AI agent while it's working, not after it finishes. For GTM teams running complex automation, this changes everything.

Codex mid-turn steering: Human directing AI mid-task

What is Mid-Turn Steering?​

Traditionally, when you ask an AI to do something, you wait until it's done to give feedback. If it goes off track, you:

  1. Wait for completion
  2. Read the output
  3. Write a correction prompt
  4. Start over

Mid-turn steering breaks this pattern. You can intervene during execution:

```
You: Build a lead scoring model based on our HubSpot data

Codex: [starts working]
- Pulling contact fields...
- Analyzing conversion patterns...
- Building scoring criteria...

You: Actually, weight company size more heavily than title

Codex: [adjusts mid-task]
- Updating weight for company_size field...
- Recalculating score thresholds...
[continues with adjustment]
```

No restart. No lost work. Just a course correction.

Why This Matters for GTM​

1. Complex Automation Doesn't Fail Silently​

When building sales automation, you often don't know exactly what you want until you see the first attempt. Mid-turn steering lets you:

  • Watch the agent's approach in real-time
  • Correct misunderstandings immediately
  • Guide toward edge cases as they appear

Without this, a 20-minute automation task might need 3-4 full restarts to get right.

2. Better Collaboration with AI​

Mid-turn steering makes AI feel less like a black box and more like a collaborator. You're not just prompting and praying; you're actively directing.

For sales leaders building complex workflows, this means:

  • Faster iteration cycles
  • More precise outputs
  • Higher confidence in automation

3. Reduced Token Waste​

Every restart burns tokens. Mid-turn steering reduces:

  • Repeated context loading
  • Duplicate work
  • Prompt engineering overhead

For teams running Codex at scale, this adds up.

Human giving mid-task feedback with course correction

GTM Use Cases for Mid-Turn Steering​

Building Custom Lead Scoring​

Traditional approach:

  1. Ask Codex to build a lead score
  2. Wait 10 minutes
  3. Realize it weighted "email opened" too heavily
  4. Start over with clarification
  5. Wait another 10 minutes

With mid-turn steering:

  1. Ask Codex to build a lead score
  2. Watch it start weighting criteria
  3. "Wait, de-emphasize email opens, focus on website visits"
  4. Codex adjusts in real-time
  5. Get the right model in one pass

Generating Email Sequences​

Traditional approach:

  1. "Write a 5-email nurture sequence"
  2. Wait for all 5 emails
  3. Email 3 is too salesy
  4. Restart or write complex follow-up prompt

With mid-turn steering:

  1. "Write a 5-email nurture sequence"
  2. After email 2: "Make these more educational, less pitch-focused"
  3. Codex adjusts emails 3-5 accordingly
  4. Done

Building Pipeline Dashboards​

Traditional approach:

  1. "Build a pipeline dashboard showing X, Y, Z"
  2. Wait for completion
  3. Visualizations aren't quite right
  4. Describe changes in detail
  5. Hope it understands

With mid-turn steering:

  1. "Build a pipeline dashboard"
  2. See the chart types being chosen
  3. "Actually, use bar charts for that, not pie"
  4. Watch it switch mid-build
  5. "Add a filter for deal size"
  6. Done with all adjustments in one session

How to Use Mid-Turn Steering​

In Codex CLI​

```bash
# Start a task
codex run "Build a HubSpot integration that syncs new contacts"

# While it's running, type to intervene
> Also add error handling for rate limits
> Skip the logging for now, we'll add that later
```

In Codex Cloud (Web UI)​

The Codex dashboard shows real-time execution. A sidebar lets you:

  • See what the agent is currently doing
  • Type interventions
  • Pause/resume execution
  • Save partial progress

Via API​

```javascript
const session = await codex.createSession({
  task: "Build lead enrichment pipeline",
  onProgress: (state) => console.log(state),
  allowSteering: true,
});

// Intervene mid-task
await session.steer("Use Apollo for enrichment instead of Clearbit");
```

Best Practices for Mid-Turn Steering​

1. Let It Start Before Steering​

Don't intervene in the first 10 seconds. Let Codex show its approach firstβ€”you might learn something.

2. Be Specific with Corrections​

❌ "That's not quite right"
✅ "Use percentage instead of raw numbers for the conversion column"

3. Steer Early, Not Late​

If you see it going the wrong direction, intervene immediately. Don't wait until it's 80% done.

4. Save Checkpoints​

For complex tasks, tell Codex to checkpoint progress: "After each major step, commit and show me the current state"

This lets you roll back if steering doesn't work.

5. Don't Over-Steer​

Every intervention has overhead. If you're steering every 30 seconds, your initial prompt probably wasn't clear enough.

Mid-Turn Steering vs. Other Approaches​

| Approach | Pro | Con |
|---|---|---|
| Single prompt | Fast for simple tasks | No correction possible |
| Chain of prompts | More control | Context lost between prompts |
| Agent loops | Autonomous | Hard to intervene |
| Mid-turn steering | Best of both worlds | Requires Codex |

Mid-turn steering gives you the autonomy of agents with the control of manual prompting.

Real Example: Building a Competitor Alert System​

Here's a real session transcript (abbreviated):

```
Me: Build a system that alerts me when competitors publish new content

Codex: Starting. I'll:
1. Set up RSS feeds for competitor blogs
2. Create a daily digest
3. Send via email

Setting up RSS parser...

Me: Actually, send via Slack not email

Codex: Switching to Slack webhook...
Setting up #competitor-intel channel post...

Me: Also check their Twitter, not just blogs

Codex: Adding Twitter API integration...
Will monitor @Warmly_AI, @CommonRoom...

Me: Add @6sense too

Codex: Added. Continuing with alert formatting...

[5 minutes later]

Codex: Done. System checks hourly, posts to #competitor-intel
when new content detected.
```

That would have been 3-4 restarts without mid-turn steering.

Limitations to Know​

1. Not All Tasks Support Steering​

Some operations (like API calls mid-flight) can't be interrupted. Codex will tell you when steering isn't possible.

2. Token Cost Still Applies​

Steering doesn't reduce total tokens; it just uses them more efficiently.

3. Requires Real-Time Attention​

If you're not watching, you can't steer. For hands-off automation, traditional approaches might be better.

The Bottom Line​

Mid-turn steering is Codex's competitive moat for complex GTM automation. It transforms AI from "prompt and pray" to "collaborative building."

For teams building:

  • Custom integrations
  • Complex workflows
  • Multi-step automation

This feature alone justifies using Codex over alternatives.


Want AI that works out of the box? MarketBetter combines visitor identification, automated playbooks, and AI-driven outreach, with no prompting required. Book a demo.
