
How to Build a Lead Scoring Model Without a Data Scientist

· 12 min read
MarketBetter Team
Content Team, marketbetter.ai

Most B2B teams know they should be scoring their leads. Few actually do it well. According to Gartner, only 25-30% of B2B companies have a functioning lead scoring model, even though the data consistently shows that teams with scoring see 30% higher close rates and significantly shorter sales cycles.

The reason is not that scoring is conceptually hard. It is that most guides on the topic assume you have a data science team, a mature data warehouse, and six months to build a predictive model. The reality for most growing B2B teams: you have a CRM, some intent data, and you need something working by Friday.

This guide gives you exactly that. A practical scoring framework you can build in a spreadsheet, validate against your own pipeline data, and deploy into your daily SDR workflow, all without writing a single line of Python.

Two-axis lead scoring framework mapping account fit against buying intent

Why Most Lead Scoring Models Fail

Before building anything, it helps to understand why so many scoring models end up ignored. The failure modes are remarkably consistent:

Over-engineering the model. Teams spend months building 50-variable scoring algorithms, only to discover that three variables explain 80% of their conversions. Start simple. Add complexity only when you have data proving it helps.

Scoring individuals instead of accounts. In B2B, buying decisions involve 6-10 stakeholders on average. A single champion visiting your pricing page matters, but three people from the same company each reading different case studies matters more. Score at the account level, then identify the right contacts within high-scoring accounts.

Ignoring signal decay. A pricing page visit from yesterday is worth far more than one from six weeks ago. Without time decay built into your model, your "hot" list fills up with accounts that were interested in Q1 and have since signed with a competitor.

Set and forget. The scoring model that worked when you sold to mid-market SaaS companies may not work when you expand into healthcare. Recalibrate quarterly at minimum: compare your scored predictions against actual closed-won deals and adjust weights.

The Two-Axis Framework: Fit vs. Intent

The most reliable scoring approach for teams without a data science function is a two-axis model. Every account gets two scores:

Fit score: how well does this company match your ideal customer profile? This is relatively stable and based on firmographic and technographic data.

Intent score: how actively is this company researching solutions like yours right now? This is dynamic and based on behavioral signals.

Plotting accounts on these two axes gives you four quadrants:

         | High Intent | Low Intent
High Fit | Tier 1: Work immediately | Tier 3: Nurture (they match but are not ready)
Low Fit  | Tier 4: Deprioritize (interest without fit wastes cycles) | Tier 5: Ignore

There is also a Tier 2 for accounts with high fit and medium intent; these get sequenced within 24 hours rather than worked immediately.

This framework works because it forces a critical distinction: fit tells you who could buy, intent tells you who might buy soon. Most SDR teams over-index on one or the other. The teams that consistently hit quota work the intersection.

Step 1: Define Your Fit Score (30 minutes)

Your fit score should be based on 4-6 firmographic and technographic attributes that genuinely predict whether a company will buy from you. Not aspirational attributes, but actual ones based on your existing customer base.

Pull a list of your last 20-30 closed-won deals and look for patterns across these dimensions:

Core Fit Attributes

Attribute | Example Criteria | Points
Company size (employees) | 50-500 employees | 25
Industry | SaaS, Fintech, Professional Services | 20
Revenue range | $5M-$100M ARR | 15
Geography | US, UK, DACH region | 10
Tech stack signals | Uses Salesforce + outbound tooling | 15
Funding stage | Series A through Series C | 15

Total possible fit score: 100

How to calibrate

Look at your closed-won deals and reverse-engineer the scores. If your best customers are almost always 100-300 person SaaS companies using Salesforce, then those attributes get the highest weights. If geography rarely affects whether a deal closes, give it fewer points or drop it entirely.

The goal is that your top 20 customers would all score 70+ on your fit model. If they do not, your attributes or weights are wrong.

Tier thresholds:

  • 70-100: High fit (worth pursuing with active intent)
  • 40-69: Medium fit (pursue only with strong intent)
  • Below 40: Low fit (deprioritize regardless of intent)
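To make the calibration concrete, here is a minimal sketch of the fit calculation in Python. The attributes, thresholds, and point values mirror the example table above, but they are placeholders; swap in whatever your closed-won analysis actually surfaces.

```python
# Illustrative fit scoring: attributes and weights are the example
# values from the table above, not a recommendation for your ICP.
TARGET_INDUSTRIES = {"saas", "fintech", "professional services"}

def fit_score(account: dict) -> int:
    score = 0
    if 50 <= account.get("employees", 0) <= 500:
        score += 25  # company size
    if account.get("industry", "").lower() in TARGET_INDUSTRIES:
        score += 20  # industry
    if 5_000_000 <= account.get("revenue", 0) <= 100_000_000:
        score += 15  # revenue range
    if account.get("geo") in {"US", "UK", "DACH"}:
        score += 10  # geography
    if "salesforce" in {t.lower() for t in account.get("tech_stack", [])}:
        score += 15  # tech stack signal
    if account.get("funding_stage") in {"Series A", "Series B", "Series C"}:
        score += 15  # funding stage
    return score

def fit_tier(score: int) -> str:
    if score >= 70:
        return "high"    # worth pursuing with active intent
    if score >= 40:
        return "medium"  # pursue only with strong intent
    return "low"         # deprioritize regardless of intent
```

Run your last 20-30 closed-won accounts through a function like this; if most do not land in the high band, adjust the attributes or weights before trusting the model.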

Step 2: Define Your Intent Score (30 minutes)

Intent signals fall into two categories: first-party (what they do on your properties) and third-party (what they do elsewhere that suggests buying interest).

Signal Weights

Signal | Category | Points | Decay
Demo request / pricing page visit | First-party | 30 | 7 days to zero
Case study or comparison page | First-party | 20 | 14 days to zero
Multiple stakeholders visiting your site | First-party | 25 | 14 days to zero
Third-party intent surge (Bombora, G2 research) | Third-party | 25 | 14 days to half, 30 days to zero
Champion job change (new role at target account) | Third-party | 30 | 30 days to zero
Blog / content engagement | First-party | 5 | 7 days to zero
Email opens / link clicks | First-party | 3 | 5 days to zero
LinkedIn engagement (ad clicks, profile views) | Third-party | 10 | 14 days to zero

Total possible intent score: 148 (though realistically, accounts showing 50+ are strongly in-market)
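One simple way to implement the decay column is a linear fade: each signal's points fall to zero over its window. The sketch below uses the signal names and windows from the table; linear decay is an assumption (the third-party surge's "14 days to half, 30 to zero" is simplified to a single 30-day linear window), not the only valid curve.

```python
# (points, days_to_zero) per signal, mirroring the table above.
# Linear decay is an illustrative choice, not a requirement.
SIGNALS = {
    "pricing_visit":       (30, 7),
    "case_study_visit":    (20, 14),
    "multi_stakeholder":   (25, 14),
    "third_party_surge":   (25, 30),  # simplified from the two-stage decay
    "champion_job_change": (30, 30),
    "content_engagement":  (5, 7),
    "email_click":         (3, 5),
    "linkedin_engagement": (10, 14),
}

def decayed_points(signal: str, days_ago: int) -> float:
    points, window = SIGNALS[signal]
    if days_ago >= window:
        return 0.0  # fully decayed
    return points * (1 - days_ago / window)

def intent_score(events: list[tuple[str, int]]) -> float:
    # events: [(signal_name, days_ago), ...] for a single account
    return sum(decayed_points(sig, age) for sig, age in events)
```

A pricing page visit today scores the full 30 points; the same visit a week old contributes nothing, which is exactly what keeps Q1 tire-kickers out of your hot list.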

The critical insight: signal stacking

No single signal is reliable on its own. A pricing page visit could be a competitor doing research. A content download could be a student writing a paper. But when signals stack (a pricing page visit, plus a G2 comparison search, plus two stakeholders viewing case studies), the probability of genuine buying interest compounds rapidly.

6sense's published benchmarks show that accounts identified as "in-market" through stacked intent signals convert at 6x the rate of traditional list-based outbound. Even if you discount vendor benchmarks by half, 3x is transformational for SDR productivity.

Tier thresholds:

  • 50+: High intent (actively evaluating)
  • 20-49: Medium intent (early research)
  • Below 20: Low intent (no meaningful signals)

Step 3: Build the Scoring Spreadsheet (15 minutes)

You do not need a custom data platform to start. A well-structured spreadsheet gets you 80% of the value.

Column structure

Column | Source | Notes
Company Name | CRM / enrichment |
Domain | CRM / enrichment | Primary key for deduplication
Employees | Enrichment (Fiber, Lusha, ZoomInfo) |
Industry | Enrichment |
Revenue | Enrichment |
Tech Stack | Enrichment | Comma-separated key technologies
Fit Score | Calculated | Sum of attribute points
Last Site Visit | Analytics / visitor ID |
Pages Visited | Analytics / visitor ID | Flag pricing, case study, comparison
Third-Party Intent | Bombora, G2, etc. | Surge score or category
Stakeholder Count | Analytics / visitor ID | Unique visitors from same domain
Champion Changes | UserGems, LinkedIn, etc. |
Intent Score | Calculated | Sum of weighted signals with decay
Combined Tier | Calculated | Lookup against the quadrant matrix
Last Updated | Auto | For decay calculations

The formula

For each account, the combined tier is a simple lookup:

  • Fit >= 70 AND Intent >= 50 → Tier 1
  • Fit >= 70 AND Intent 20-49 → Tier 2
  • Fit >= 70 AND Intent < 20 → Tier 3
  • Fit 40-69 AND Intent >= 50 → Tier 2
  • Fit 40-69 AND Intent 20-49 → Tier 3
  • Everything else → Tier 4 or 5 (deprioritize)
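The lookup above is a direct transcription into a small function (a nested IF in your spreadsheet works just as well):

```python
def combined_tier(fit: int, intent: float) -> int:
    # Transcribes the tier lookup rules from the text.
    if fit >= 70:
        if intent >= 50:
            return 1
        if intent >= 20:
            return 2
        return 3
    if fit >= 40:
        if intent >= 50:
            return 2
        if intent >= 20:
            return 3
    return 4  # "everything else": Tier 4/5, deprioritize
```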

This takes 15 minutes to set up. The ongoing work is feeding it fresh data, which is where automation becomes essential.

Step 4: Automate the Data Pipeline

A manual spreadsheet works for validating your model. It does not work at scale. Within two weeks, you will stop updating it and your SDRs will go back to working whatever list feels right.

The automation you need:

Visitor identification feeds first-party intent signals automatically. When someone from a target account visits your site, their company, pages viewed, and visit frequency should flow into your scoring model without anyone copying data from Google Analytics.

Multi-source enrichment keeps fit data current. Company size changes. Funding rounds close. Tech stacks evolve. Pulling from multiple providers (Fiber for technographics, Lusha for contact data, Exa for AI-extracted company intelligence) ensures your fit scores reflect reality, not last quarter's snapshot.

Account-level signal aggregation rolls individual visitor behavior up to the buying committee level. Three people from Acme Corp each visiting your comparison page in the same week is a far stronger signal than one person visiting three times. Your scoring system needs to distinguish between these.

Automated decay prevents stale signals from polluting your prioritization. If an account showed strong intent 45 days ago and has gone quiet, they should not still be sitting in Tier 1.
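As an illustration of account-level rollup, the sketch below groups raw visitor events by domain and counts distinct stakeholders within a lookback window. The event fields (visitor_id, domain, days_ago) are hypothetical names, not any particular analytics schema.

```python
from collections import defaultdict

def aggregate_by_account(events: list[dict], window_days: int = 14) -> dict:
    # Roll individual visitor events up to the account (domain) level.
    accounts = defaultdict(lambda: {"stakeholders": set(), "page_views": 0})
    for e in events:
        if e["days_ago"] > window_days:
            continue  # stale events have decayed out of the window
        acct = accounts[e["domain"]]
        acct["stakeholders"].add(e["visitor_id"])  # distinct people, not visits
        acct["page_views"] += 1
    return {
        dom: {"stakeholder_count": len(a["stakeholders"]),
              "page_views": a["page_views"]}
        for dom, a in accounts.items()
    }
```

Distinguishing stakeholder_count from page_views is the point: three people from one domain is a committee forming, while three visits from one person is just one engaged reader.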

This is exactly what MarketBetter was built to do. The platform combines visitor identification, multi-provider enrichment, and signal intelligence into a single account scoring and prioritization layer. Your SDRs open one dashboard and see accounts ranked by the intersection of fit and intent, updated in real time, with signal decay built in.

Automated account scoring pipeline flowing from data sources through scoring to SDR prioritization

Step 5: Validate Against Real Pipeline Data

A scoring model is a hypothesis. It needs validation.

After running your model for 30 days, pull two reports:

Report 1: Score-to-meeting conversion. Of the accounts your SDRs worked in each tier, what percentage converted to a meeting? If Tier 1 accounts are not converting at a meaningfully higher rate than Tier 3, your scoring weights are wrong.
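The first report is easy to compute from an export of worked accounts. A sketch, assuming each record is a (tier, got_meeting) pair:

```python
def meeting_rate_by_tier(records: list[tuple[int, bool]]) -> dict[int, float]:
    # records: (tier, got_meeting) per worked account
    worked: dict[int, int] = {}
    meetings: dict[int, int] = {}
    for tier, met in records:
        worked[tier] = worked.get(tier, 0) + 1
        meetings[tier] = meetings.get(tier, 0) + int(met)
    return {t: meetings[t] / worked[t] for t in worked}
```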

Report 2: Missed opportunities. Look at deals that closed-won in the past quarter. What tier would your model have assigned them when they first showed intent? If good deals are consistently landing in Tier 3 or Tier 4, you are missing important signals or underweighting key attributes.

Target benchmarks

These numbers vary by industry and deal size, but directionally:

Tier | Expected Meeting Rate | Expected Close Rate
Tier 1 (high fit + high intent) | 15-25% | 5-10%
Tier 2 (high fit + medium intent) | 8-15% | 2-5%
Tier 3 (nurture) | 3-8% | 1-2%
Tier 4-5 (deprioritize) | < 3% | < 1%

If your Tier 1 meeting rate is below 10%, the model needs tuning. If it is above 20%, you may be too conservative: consider loosening thresholds to capture more volume.

Step 6: Operationalize Into Daily SDR Workflow

A scoring model only creates value if SDRs actually use it. The operational integration matters more than the model sophistication.

The daily workflow

  1. Start the day with Tier 1 accounts. These get personalized, research-backed outreach within hours of the signal firing. No templates. Reference the specific behavior ("I noticed your team has been evaluating visitor identification tools") because these accounts deserve your best effort.

  2. Sequence Tier 2 accounts. These get a well-crafted sequence that balances personalization with efficiency. Three to five touches over two weeks, mixing email and LinkedIn.

  3. Feed Tier 3 into nurture. Marketing automation handles these: drip campaigns, retargeting, content invitations. When their intent score rises, they are automatically promoted to Tier 2.

  4. Ignore Tier 4-5. This is the hardest part. SDRs are conditioned to prospect broadly. But every minute spent on a low-fit, low-intent account is a minute not spent on Tier 1. Discipline here is what separates quota-hitting teams from the rest.

Weekly calibration

Every Friday, spend 15 minutes reviewing:

  • Which Tier 1 accounts converted? Which did not? Why?
  • Are any signals consistently over- or under-weighted?
  • Has your ICP shifted based on recent wins?

Adjust weights incrementally. Do not overhaul the model based on one bad week.

Common Mistakes to Avoid

Treating all intent signals equally. A pricing page visit is not the same as a blog view. Weight accordingly.

Ignoring negative signals. A company that visited your pricing page, started a trial, and churned after three days should be scored down, not up. Build in negative scoring for churn indicators and competitor-switch signals.

Too many tiers. Three to four actionable tiers is ideal. Five is the maximum. Beyond that, SDRs cannot remember the playbook for each tier and default to treating everything the same.

Not involving SDRs in model design. The people working accounts daily have intuitions about what signals matter that no amount of data analysis will surface. Interview your top performers before setting weights.

Waiting for perfect data. Start with whatever signals you have today โ€” even if it is just website visits and company size. A rough model that SDRs actually use beats a perfect model that lives in a data engineering backlog.

Getting Started Today

You do not need a six-month implementation plan. Here is the minimal viable scoring model you can have running by end of day:

  1. Export your last 20 closed-won deals. Note company size, industry, and how they first engaged.
  2. Define 4-5 fit attributes based on patterns in those deals. Assign points.
  3. List the intent signals you can currently access. Even if it is just website analytics and email engagement, that is enough to start.
  4. Build the spreadsheet. Use the column structure above. Score your current pipeline.
  5. Have your SDRs work Tier 1 accounts first for two weeks. Measure meeting rates by tier.
  6. Iterate. Adjust weights based on what you learn.

Once you outgrow the spreadsheet (and you will), platforms like MarketBetter automate the entire pipeline: visitor identification, multi-source enrichment, account-level signal aggregation, automated scoring with decay, and a prioritized daily work queue for your SDR team. The scoring model you build manually today becomes the logic that runs automatically at scale.

The teams that hit quota consistently are not the ones with the most sophisticated models. They are the ones that have any model at all, and actually use it every morning.


Want to see account scoring and signal intelligence in action? Book a demo and we will show you how MarketBetter prioritizes your total addressable market automatically.
