If you’re serious about qualifying leads, you’ve probably realized the usual “spray and pray” approach is a waste of time. You want a model that fits your unique business—one that flags real opportunities, not just anyone who filled out a form. This guide walks through setting up a custom classification model in Tamr to score B2B leads, with an eye toward what actually works (and what’s just window dressing). If you’re technical, but not a data scientist, this is for you.
Why Use Tamr for Lead Scoring? (And Should You?)
Let’s not kid ourselves: Tamr isn’t the only tool for machine learning or lead scoring. But it does have strengths if:
- Your customer/prospect data is messy and scattered.
- You want to mix in machine learning without building everything from scratch.
- You need explainability (so sales actually trusts the model).
If you just want a quick-and-dirty scoring rule (“if industry is finance, +10 points”), Tamr’s probably overkill. But if you’re wrangling data from CRM, marketing, and third-party sources, and you want a maintainable workflow, it’s worth a look.
Step 1: Get Your Data in Order
Custom classification models are only as good as your data. Tamr’s big draw is its data unification—so use it. Here’s how to set yourself up for success:
- Connect your sources: Pull in leads from CRM (Salesforce, HubSpot), marketing events, enrichment vendors, whatever you’ve got. The more complete, the better.
- Standardize fields: Make sure you’re comparing apples to apples. “Company Name” vs. “Account Name” vs. “Organization”—pick one and stick to it.
- Deduplicate: Tamr can help here, but do a sanity check; duplicates mess with your training labels.
- Handle missing data: Don’t just ignore blanks—decide if you want to impute, flag, or drop them. Garbage in, garbage out.
Pro tip: Don’t go overboard with enrichment. More data isn’t always better—quality beats quantity. If you can’t explain a feature to your boss, think twice about using it.
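If you do any of this prep in Python before loading into Tamr, the pattern is short. Here's a minimal pandas sketch of the standardize, dedupe, and flag-missing steps above; every file and column name is a stand-in for whatever your sources actually use:

```python
import pandas as pd

# Hypothetical inputs: leads exported from your CRM and a marketing events list.
crm = pd.read_csv("crm_leads.csv")
events = pd.read_csv("event_leads.csv")

# Standardize fields: pick one canonical name per concept and stick to it.
crm = crm.rename(columns={"Account Name": "company_name"})
events = events.rename(columns={"Organization": "company_name"})
leads = pd.concat([crm, events], ignore_index=True)

# Light normalization so "Acme, Inc." and "ACME INC" can match on dedupe.
leads["company_key"] = (
    leads["company_name"]
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.strip()
)

# Deduplicate: keep the most recently touched record per company + email.
leads["last_touch_date"] = pd.to_datetime(leads["last_touch_date"])
leads = (
    leads.sort_values("last_touch_date")
    .drop_duplicates(subset=["company_key", "email"], keep="last")
)

# Handle missing data explicitly: flag it, then impute, rather than
# silently dropping rows.
leads["employee_count_missing"] = leads["employee_count"].isna()
leads["employee_count"] = leads["employee_count"].fillna(
    leads["employee_count"].median()
)
```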
Step 2: Define What a “Good Lead” Means for You
Before you even touch the model, clarify what “good” actually is. Otherwise, you’ll train a model that optimizes for the wrong thing.
- Work with sales: Ask them which leads actually closed, and why. Don’t just guess.
- Label your data: Tag records as “good” (converted to opportunity or customer) or “not good.” This step takes time, but it’s essential.
- Avoid bias traps: Don’t just use “leads we called” as positive examples—that creates feedback loops.
What not to do: Don’t auto-label based on form completions or email opens. These aren’t the same as real sales opportunities.
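To make the labeling concrete, here's a hypothetical pandas sketch that derives labels from CRM opportunity history instead of engagement signals. The column names and stage values are assumptions; swap in your own:

```python
import pandas as pd

# Hypothetical exports: your cleaned leads plus opportunity history from CRM.
# Assumed columns: lead_id on both files, and a stage field on opportunities.
leads = pd.read_csv("leads_clean.csv")
opps = pd.read_csv("opportunities.csv")

# Label on real sales outcomes (converted to opportunity or customer),
# not on proxies like form fills or email opens.
good_ids = set(
    opps.loc[opps["stage"].isin(["Opportunity", "Closed Won"]), "lead_id"]
)
leads["is_good_lead"] = leads["lead_id"].isin(good_ids)

# Eyeball the label distribution before going any further.
print(leads["is_good_lead"].value_counts(normalize=True))
```

Tying labels to outcomes, not activity, is also what keeps you out of the feedback-loop trap above.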
Step 3: Set Up Your Classification Project in Tamr
Now you’re ready to build. Tamr’s interface is pretty straightforward, but don’t expect it to hold your hand. Here’s the process:
- Create a new project: Choose “Categorization” or “Classification” (naming varies by version).
- Import your labeled dataset: You’ll need enough positive and negative examples for the model to learn. If you have hundreds, that’s a good start. Fewer than 50? You’ll get sketchy results.
- Select features: Choose columns that actually signal lead quality (industry, company size, engagement history). Avoid leaking “future” info or IDs.
- Configure target labels: Tell Tamr which column is your “good lead” label.
- Tune hyperparameters (optional): Most folks can leave defaults, but if you know what you’re doing, tweak as needed.
Reality check: The “auto-magic” here is decent, but not foolproof. Tamr won’t tell you if your labels are bogus, or if your features make no sense. It’s on you to sanity-check.
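A quick pre-import script can catch the worst of it. This sketch checks label counts and flags columns that look like IDs or leaks; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical training export; "is_good_lead" is whatever label column
# you configured above.
labeled = pd.read_csv("labeled_leads.csv")
label_col = "is_good_lead"

# Enough examples on both sides? Fewer than roughly 50 of a class is a red flag.
for cls, n in labeled[label_col].value_counts().items():
    warning = "  <-- too few, expect sketchy results" if n < 50 else ""
    print(f"{cls}: {n} examples{warning}")

# Screen for likely leaks: raw IDs and columns unique to every row carry
# no real signal. A heuristic, not a substitute for thinking about each field.
suspect = [
    col for col in labeled.columns
    if col.lower().endswith("_id") or labeled[col].nunique() == len(labeled)
]
print("Columns to reconsider as features:", suspect)
```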
Step 4: Train and Evaluate the Model
Here’s where the rubber meets the road. Tamr will train a model and spit out metrics.
- Check class balance: If 95% of your leads are “not good,” your model will just predict “not good” all the time. You want a reasonable mix.
- Look at metrics that matter: Precision and recall are more useful than accuracy. For lead scoring, false positives waste sales time; false negatives leave money on the table.
- Review feature importance: Tamr will show which features drive predictions. Gut-check these with your sales team—if “Zip Code” is #1, something’s off.
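If you export predictions and join them back to labels you held out, a quick scikit-learn check covers the class-balance and precision/recall points above. File and column names here are assumptions:

```python
import pandas as pd
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical export: Tamr's predictions joined back to held-out labels.
df = pd.read_csv("predictions_vs_labels.csv")
y_true = df["is_good_lead"]
y_pred = df["predicted_good_lead"]

# Precision: of the leads we flagged, how many were actually good?
# Low precision means sales wastes time chasing junk.
print("precision:", precision_score(y_true, y_pred))

# Recall: of the actually good leads, how many did we flag?
# Low recall means money left on the table.
print("recall:", recall_score(y_true, y_pred))

# The confusion matrix shows both failure modes at a glance.
print(confusion_matrix(y_true, y_pred))
```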
Don’t get cute: Resist the urge to over-optimize. If your model works decently and makes sense, move on. You’re not building a Nobel-winning algorithm—just something better than a spreadsheet.
Step 5: Deploy and Integrate Predictions
A model sitting in Tamr isn’t helping anyone until its scores reach the right people.
- Publish results: Tamr lets you export predictions, usually via CSV, API, or direct integration with your CRM.
- Add scores to lead records: However you do it, get the score in front of sales, not just your analytics dashboard.
- Set up feedback loops: Let sales flag junk leads or “missed gems.” Feed this data back into your next training cycle.
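Here's a rough sketch of the CSV-export path, plus an optional API push. The CRM endpoint, field names, and auth are placeholders, since every CRM handles this differently:

```python
import pandas as pd
import requests

# Hypothetical: Tamr's published predictions, exported as CSV.
# Assumed columns: lead_id and score.
scores = pd.read_csv("tamr_predictions.csv")

# Simplest path: write an import file your CRM admin can load directly.
scores[["lead_id", "score"]].to_csv("crm_score_import.csv", index=False)

# Or push scores through your CRM's REST API. The URL, field name, and
# auth token below are placeholders; check your CRM's API docs.
CRM_API = "https://crm.example.com/api/leads"
for row in scores.itertuples():
    requests.patch(
        f"{CRM_API}/{row.lead_id}",
        json={"lead_score": row.score},
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        timeout=10,
    )
```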
Pro tip: Don’t hide the model’s logic. If sales thinks the scoring is a black box, they’ll ignore it. Share the top signals—even if they’re obvious.
Step 6: Monitor, Retrain, Repeat
No model is “set and forget.” Lead scoring models decay fast—markets change, people game the system, data drifts.
- Track performance: Are good leads being missed? Is sales complaining more? Don’t just look at numbers—listen to users.
- Retrain regularly: Monthly or quarterly is usually enough, unless your business changes overnight.
- Prune features: If a data field stops being useful, drop it. Keep things lean.
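One lightweight way to track all this: log every scored lead, backfill outcomes as they resolve, and watch precision by month. A sketch, with column names and the drift threshold assumed:

```python
import pandas as pd

# Hypothetical scoring log: one row per scored lead, with the eventual
# outcome backfilled once the opportunity closes or goes stale.
# Assumed columns: scored_month, predicted_good (bool), is_good_lead (bool).
log = pd.read_csv("scoring_log.csv")

# Monthly precision: of the leads we flagged, how many panned out?
monthly = (
    log[log["predicted_good"]]
    .groupby("scored_month")["is_good_lead"]
    .mean()
)
print(monthly)

# A sustained slide is your retraining signal. The 20% drop used here is
# an arbitrary starting point; tune it to your tolerance.
if monthly.tail(3).mean() < 0.8 * monthly.head(3).mean():
    print("Precision has slipped noticeably. Time to relabel and retrain.")
```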
What to ignore: Fancy “AI explainability” dashboards. They might look impressive, but if you can’t act on the output, it’s just noise.
Honest Takes: What Works—and What Doesn’t
- Works: Using your own success criteria, involving sales in labeling, keeping features simple.
- Doesn’t: Overfitting, trusting default labels, or expecting Tamr to fix bad data.
- Ignore: Hype about “AI-powered everything.” Lead scoring is still about understanding your customers and your process.
Keep It Simple, Iterate, and Don’t Overthink
Getting a custom classification model running in Tamr isn’t rocket science, but it is a process. Focus on clean data, clear definitions, and regular feedback. Don’t worry about perfection—ship something simple, see how it does, and improve from there. If your model helps sales have better conversations and wastes less time, you’re winning. Everything else is window dressing.