If you're sending cold emails, follow-ups, or any kind of outbound sales campaign, you know the drill: you write what you think is a killer email, send it out, and then...crickets. Or maybe you get some replies, but you have no idea what really worked. That’s where A/B testing comes in, and why Reply.io’s built-in tools for testing email templates are worth a closer look.
This guide is for anyone running outreach or sales campaigns—SDRs, founders, marketers—who actually cares about getting answers, not just sending more emails. I’ll walk you through how to use Reply.io’s A/B testing features step-by-step, what’s genuinely useful, and what not to waste your time on.
1. What is A/B Testing, and Why Bother?
Let’s keep it simple: A/B testing is just sending out two (or more) versions of an email to see which one gets better results. You might change the subject line, the opening sentence, or even the call to action. Instead of guessing what works, you measure it.
Why bother? Because small tweaks—like a more specific subject line or a shorter email—can literally double your reply rates. And because your intuition about what “sounds good” is usually wrong. A/B testing tells you what actually works for your audience.
2. How Reply.io Does A/B Testing
Reply.io bakes A/B testing right into its sequence builder. You don’t need to hack together spreadsheets or juggle multiple tools. You can set up alternative versions of your emails directly where you build your campaign, and Reply.io will automatically split your contacts across the variants.
How it works:

- In any step of your sequence (email steps only, not calls or tasks), you can add A, B, C (and sometimes D) variants.
- Contacts are randomly assigned to a variant. You don't have to do anything.
- Reply.io tracks open, reply, click, and bounce rates for each variant.
What it's good for:

- Testing subject lines
- Testing different intros ("Hey John," vs. "Hi John,")
- Trying different CTAs ("Let's book a call" vs. "Interested in chatting?")
- Measuring the impact of length or tone
What it's not good for:

- Multivariate testing (changing lots of things at once)
- Super granular targeting (like only testing on a certain industry segment, unless you manually segment first)
- Long-term, always-on testing (A/B tests are per sequence step, not global)
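To build intuition for what "randomly assigned to a variant" means, here's a minimal sketch of a uniform random split. This is just an illustration of the idea, not Reply.io's actual implementation:

```python
import random

def assign_variants(contacts, variants=("A", "B")):
    """Assign each contact to one email variant, uniformly at random."""
    return {contact: random.choice(variants) for contact in contacts}

# Hypothetical contact list for illustration
leads = [f"lead{i}@example.com" for i in range(10)]
assignments = assign_variants(leads)
# Each lead gets "A" or "B". Over a large list the split approaches
# 50/50, but a small list can come out lopsided purely by chance.
```

That last point is exactly why tiny lists give noisy results: randomness only averages out at volume.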
3. Setting Up Your First A/B Test in Reply.io
Let’s get into the weeds. Here’s how to actually set up an A/B test on an email template in Reply.io:
Step 1: Create or Edit a Sequence
- Go to your main dashboard and either create a new sequence or pick an existing one.
- Make sure you’re editing a step that sends an email (not a call or task).
Step 2: Add a Variant
- In the email step, look for the “Add Variant” or “A/B” button. This adds an alternate version—your “B.”
- You can add more than two variants, but don’t go overboard. Testing two at a time is usually plenty.
Pro tip: If you’re just starting out, stick to A/B (two versions). More variants = slower results and more confusion.
Step 3: Write Your Email Variants
- Write out your “A” version.
- In the “B” version, change just ONE THING. If you change the subject and the body, you’ll never know what mattered.
- Typical things to test:
- Subject line
- First sentence or “hook”
- Call-to-action
- Email length
- Use of personalization
What to avoid: Testing multiple things at once. If you change five things, you won’t know which (if any) mattered.
Step 4: Assign Contacts and Launch
- When you launch the sequence, Reply.io will automatically randomize who gets A and who gets B.
- You can’t (and shouldn’t) try to control who gets which version. Let it randomize.
- If you have a very small list (<100 contacts), don’t expect statistically significant results. But do it anyway—some data beats no data.
4. Measuring Results and Making Decisions
A/B testing is pointless if you don’t actually check the results and act on them. Here’s how to do that in Reply.io:
Step 5: Track Performance
- After your emails have started sending, check the “Step Stats” in your sequence.
- Look at:
- Open rates (for subject line tests)
- Reply rates (for everything else)
- Click rates (if you’re including links)
- Bounce rates (mostly to catch mistakes)
Don’t obsess over opens. Email opens are notoriously unreliable—privacy changes and Apple Mail have made open tracking pretty shaky. Focus on replies.
Step 6: Decide What’s Worth Keeping
- If one variant clearly outperforms the other (say, 10% vs. 4% reply rate), switch all future sends to the winner.
- If the results are close (e.g., 8% vs. 7.5%), it’s probably just noise. Move on, or run a new test.
- Don’t run a test forever. If you’ve sent a few hundred emails and have a clear winner, call it.
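To put numbers on "clearly outperforms" versus "probably just noise," you can run a quick two-proportion z-test on your step stats. The sketch below assumes 200 contacts per variant (swap in your real counts); a p-value under 0.05 is the conventional bar for "probably a real difference":

```python
import math

def reply_rate_p_value(replies_a, n_a, replies_b, n_b):
    """Two-proportion z-test: how likely is a gap this big to be pure chance?"""
    p_a, p_b = replies_a / n_a, replies_b / n_b
    p_pool = (replies_a + replies_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (erf is in the stdlib)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 10% vs. 4% on 200 sends each: p-value well under 0.05 -- a real winner.
print(reply_rate_p_value(20, 200, 8, 200))
# 8% vs. 7.5% on 200 sends each: p-value far above 0.05 -- just noise.
print(reply_rate_p_value(16, 200, 15, 200))
```

You don't need to be rigorous about this for every test; the point is to have a sanity check before you crown a winner on a half-percent gap.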
Step 7: Keep Iterating, But Don’t Overcomplicate
- After you pick a winner, feel free to test again—just change one new thing.
- Don’t get stuck in “test paralysis.” The goal is better emails, not endless tweaking.
- Document what worked (a simple Google Doc or Notion page will do). That way, you and your team don’t repeat old mistakes.
5. What Actually Matters (And What to Ignore)
Focus your tests on:
- Subject lines: These still impact open rates, even if open tracking isn’t perfect.
- First sentences: People skim. The first line can hook them or lose them.
- CTAs: Sometimes “Let me know if you’re interested” gets more replies than “Book a call here.”
- Personalization: Try swapping in specific details vs. generic intros.
Skip or approach with skepticism:
- Testing emojis or gimmicks: Rarely moves the needle in B2B. Can even hurt you.
- Changing fonts or colors: Most people won’t notice or care.
- Over-automating: The more you personalize and sound human, the better your results.
Honest take: Don’t chase tiny improvements at the expense of sending more emails. Sometimes, a test just doesn’t matter—move on.
6. Common Mistakes to Avoid
- Changing too many things at once: You want clarity, not chaos.
- Testing with tiny lists: If you only have 30 contacts, results are basically random.
- Ignoring the results: A/B testing is useless if you never check the stats.
- Letting tests run too long: Pick a winner, move on.
- Blindly trusting open rates: Spammers and privacy tools mess with these numbers. Prioritize replies.
7. Advanced Tips (If You’re Ready)
- Segment your lists first: If you serve multiple industries, test separately for each.
- Test at different steps: Sometimes, the third email in a sequence is the real magic—don’t just test the first touch.
- Export your data: For deeper analysis, export results and look for patterns. But honestly, most people don’t need this.
Keep It Simple and Keep Testing
A/B testing in Reply.io isn’t rocket science, but it’s easy to overthink. Don’t let “analysis paralysis” stop you from sending better emails. Start with small, focused tests, check your stats, and make changes. Most important: keep iterating. Email outreach is about steady, small improvements—not magic bullets.
If you’re ever unsure what to test, just ask yourself: “What’s the simplest thing I could change that might actually help?” Then test that. And don’t let anyone tell you there’s a “perfect” template—just a better one than you had before.