Best Practices for A/B Testing Marketing Campaigns in Iterable

A/B testing is supposed to give you straight answers: What works? What doesn't? But too often, tests are set up wrong, or the results don't mean much. If you're running marketing campaigns in Iterable, and you want to actually learn something from your A/B tests (instead of just going through the motions), this guide is for you.

Below, you'll find realistic, field-tested advice on how to get A/B testing right in Iterable—without wasting time on busywork or chasing “best practices” that don’t actually help.


Why A/B Testing in Iterable Matters (and What to Avoid)

A/B testing is all about making decisions based on data, not gut feelings or the highest-paid person in the room. Iterable makes it relatively easy to set up A/B tests for emails, push, or other messaging campaigns. But here’s the thing: just because you can test anything doesn’t mean you should.

Common mistakes to avoid:

  • Testing every little thing. Don’t waste time A/B testing trivial changes (like the color of a button) unless you’re sending at massive scale.
  • Running tests with too few people. Small sample sizes give you noisy, unhelpful results (the quick sketch at the end of this section puts numbers on just how noisy).
  • Chasing “statistical significance” without understanding it. If you don’t know what that means, or how Iterable calculates it, you’ll fool yourself.
  • Ignoring the setup. If your control and test groups aren’t truly random, your test is already broken.

If you want to actually move the needle, focus on tests that have a shot at making a real difference—like subject lines, offers, or entire templates.
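
To put numbers on the "too few people" problem, here's a minimal sketch in plain Python (nothing Iterable-specific; the 20% open rate is just an assumed baseline) showing the rough 95% margin of error on an observed open rate at different audience sizes:

    import math

    # Rough 95% margin of error for an observed open rate, using the
    # normal approximation: 1.96 * sqrt(p * (1 - p) / n).
    def margin_of_error(open_rate, recipients):
        return 1.96 * math.sqrt(open_rate * (1 - open_rate) / recipients)

    baseline = 0.20  # assumed open rate, purely for illustration

    for n in (200, 1_000, 5_000, 25_000):
        moe = margin_of_error(baseline, n)
        print(f"{n:>6,} recipients per variant: {baseline:.0%} ± {moe:.1%}")

    # Typical output:
    #    200 recipients per variant: 20% ± 5.5%
    #  1,000 recipients per variant: 20% ± 2.5%
    #  5,000 recipients per variant: 20% ± 1.1%
    # 25,000 recipients per variant: 20% ± 0.5%

With a few hundred recipients per variant, the measurement itself wobbles by five points or more, which is why tiny tweaks are invisible on small lists and why bold changes are the ones worth testing.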


Step 1: Define Your Goal (Know What You Actually Care About)

Before you even open Iterable, get clear on what you want to learn. This sounds obvious, but skipping this step is the #1 way teams waste time on bad tests.

Pick ONE clear goal per test, such as:

  • Increase email open rates
  • Boost click-throughs on a specific offer
  • Drive more purchases from a series

Don’t: Try to measure everything at once. You’ll muddy the results and won’t know what worked.

Pro tip: If your team can’t agree on one key metric, the test isn’t ready.


Step 2: Decide What to Test (And Keep It Simple)

Once you’ve got a goal, decide what you’ll actually change between A and B. Remember: The best A/B tests isolate one variable.

Some high-impact things to test:

  • Subject lines (short vs. long, question vs. statement, with/without emoji)
  • Email body copy (concise vs. detailed, different value propositions)
  • Images or layout (big hero image vs. none)
  • Call-to-action (CTA) language
  • Sender name (brand vs. individual)

What not to test (unless you have a huge list):

  • Tiny visual tweaks no one will notice
  • Useless personalization (like adding {{firstName}} just because you can)
  • Tests with so many variants that you can't tell what's working

Be ruthless: If you can't explain why a test might move your chosen metric, skip it.


Step 3: Set Up Your Test in Iterable

Now you’re ready to get into Iterable. The platform offers built-in A/B testing for emails, push, SMS, and more. Here’s how to set up a basic A/B email test:

  1. Create a new campaign or workflow.
  2. In the campaign setup, look for the “A/B Test” option—Iterable calls these “Experiments.”
  3. Add your variants (usually just A and B—don’t get fancy unless you have a big audience).
  4. Set your audience split (the percentage of recipients each variant gets). Iterable handles the random assignment; your job is to make sure each variant's share is big enough to learn from.
  5. Define your winning metric (open rate, click rate, conversion, etc.).
  6. Decide if you want to “auto-pick” a winner after a set time, or just run the test across your whole list.

A few honest notes:

  • Don’t use more than two variants unless you’ve got tens of thousands of recipients. Otherwise, each group is too small to matter (a rough way to check is sketched just after these notes).
  • If you’re testing subject lines, set the winner to “open rate.” If you’re testing content or CTA, use “click rate” or actual conversions.
  • Iterable’s “auto-winner” feature is handy, but don’t let it lull you into not looking at the data yourself.
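
On the "tens of thousands" point, a back-of-the-envelope check helps before you add variants. This is a sketch in plain Python using the standard two-proportion sample-size formula; the 20% baseline and two-point lift are assumptions for illustration, not Iterable defaults:

    import math

    def recipients_per_variant(p_control, p_variant):
        """Approximate recipients needed per variant to detect a lift from
        p_control to p_variant (two-sided test at 95% confidence, 80% power)."""
        z_alpha = 1.96  # two-sided 95% confidence
        z_power = 0.84  # 80% power
        variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
        return math.ceil((z_alpha + z_power) ** 2 * variance
                         / (p_control - p_variant) ** 2)

    # Example: detect an open-rate lift from 20% to 22% (assumed numbers).
    print(recipients_per_variant(0.20, 0.22))  # roughly 6,500 per variant

Roughly 6,500 recipients per variant just to reliably spot a two-point lift in open rate, and every extra variant adds another group that size. That's the math behind keeping most tests to a simple A vs. B.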

Step 4: Send, Wait, and Don’t Peek Too Soon

Patience is the hardest part. Most people want to peek at results within an hour or two, but you’ll fool yourself if you do.

Best practices:

  • Give it at least 24-48 hours for email tests, longer if your audience opens mail slowly.
  • Don’t end the test early just because one variant is “winning” after a few hours. Results can flip, and the simulation at the end of this step shows how often peeking alone produces a fake winner.
  • Iterable will show you stats in real time, but resist the urge to call it early unless you’re dealing with massive numbers.

What to ignore: People who say you need to wait “exactly X days” for “statistical significance.” Use your brain: if 90% of opens happen in 24 hours, that’s usually enough.
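
To see why calling it early goes wrong, here's a minimal simulation in plain Python (standard library only, with assumed numbers: a 20% open rate for both variants and 10,000 sends each). The two variants are identical, so any "winner" is pure noise; the simulation compares looking once at the end against peeking every 1,000 sends and declaring a winner at the first significant-looking lead:

    import math
    import random

    random.seed(7)

    def z_score(opens_a, n_a, opens_b, n_b):
        """Pooled two-proportion z-score for the gap between two open rates."""
        pooled = (opens_a + opens_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return 0.0 if se == 0 else (opens_a / n_a - opens_b / n_b) / se

    TRUE_RATE = 0.20      # both variants identical, so any "winner" is noise
    PER_VARIANT = 10_000  # total sends per variant
    CHECK_EVERY = 1_000   # peek after every 1,000 sends per variant
    TRIALS = 1_000

    false_winner_peeking = 0  # declare a winner at the first significant peek
    false_winner_final = 0    # look only once, after everything has gone out

    for _ in range(TRIALS):
        opens_a = opens_b = 0
        peek_found_winner = False
        for sent in range(CHECK_EVERY, PER_VARIANT + 1, CHECK_EVERY):
            opens_a += sum(random.random() < TRUE_RATE for _ in range(CHECK_EVERY))
            opens_b += sum(random.random() < TRUE_RATE for _ in range(CHECK_EVERY))
            if abs(z_score(opens_a, sent, opens_b, sent)) > 1.96:
                peek_found_winner = True
        if peek_found_winner:
            false_winner_peeking += 1
        if abs(z_score(opens_a, PER_VARIANT, opens_b, PER_VARIANT)) > 1.96:
            false_winner_final += 1

    print(f"Single look at the end: {false_winner_final / TRIALS:.0%} false winners")
    print(f"Peeking at every check: {false_winner_peeking / TRIALS:.0%} false winners")
    # Expect roughly 5% for the single look vs. 15-20% with repeated peeking.

The single look wrongly crowns a winner about 5% of the time, as expected; peeking at every checkpoint pushes that to three or four times as often. That's the statistical reason to let the test finish before you call it.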


Step 5: Analyze the Results (No Magic, Just Math)

After your test has run, dig into the results. Iterable gives you the raw numbers—opens, clicks, conversions, etc.—for each variant.

How to actually interpret results:

  • Look for meaningful differences. If Variant A’s open rate is 23.1% and B’s is 23.4%, that’s probably just noise—unless your list is huge (a quick way to check is sketched just below).
  • Don’t chase statistical significance unless you understand it. If the difference is big and the sample size is decent, that’s usually enough for marketing.
  • If the “winner” only wins by a hair, don’t overhaul your whole strategy based on it. Run the test again or try something bolder.
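
If you do want a quick, honest check on whether a gap like 23.1% vs. 23.4% is more than noise, a standard two-proportion z-test is enough; nothing fancier is required. Here's a minimal sketch in plain Python (the send and open counts are made-up examples, not Iterable output):

    import math

    def two_proportion_test(opens_a, sends_a, opens_b, sends_b):
        """Two-sided z-test for the difference between two open rates.
        Returns (difference, p_value) using the pooled normal approximation."""
        rate_a, rate_b = opens_a / sends_a, opens_b / sends_b
        pooled = (opens_a + opens_b) / (sends_a + sends_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
        z = (rate_a - rate_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
        return rate_a - rate_b, p_value

    # Made-up numbers: 23.1% vs. 23.4% open rate on 5,000 sends per variant.
    diff, p = two_proportion_test(1155, 5000, 1170, 5000)
    print(f"difference: {diff:+.1%}, p-value: {p:.2f}")  # difference: -0.3%, p-value: 0.72

A p-value around 0.7 means a gap that size shows up most of the time even when the variants perform identically, which is the formal version of "probably just noise." If, instead, the difference is large and the p-value is tiny, you can roll out the winner without second-guessing it.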

Pro tip: Keep a spreadsheet or doc with your past tests. Patterns matter more than one-off “wins.”


Step 6: Roll Out the Winner (But Don’t Assume It’ll Work Forever)

Once you’ve picked a winner, make it your new default—but don’t get too attached. What works today might not work in a month.

What to do next:

  • Update your campaign or workflow with the winning variant.
  • Keep an eye on performance over time—sometimes the “winner’s curse” kicks in (regression to the mean) and the lift disappears.
  • Plan your next test, building on what you learned (don’t just keep testing subject lines forever).

A/B testing isn’t a one-and-done deal. Treat it as ongoing maintenance, not a single project.


Other Practical Tips & Honest Takes

  • Segment your audience if needed. Sometimes, what works for new subscribers doesn’t work for long-timers. Iterable can handle segments, but only use them if you have a real reason.
  • Avoid “multi-armed bandit” hype. Iterable offers automated optimization, but unless you’ve got massive volume (think: millions), simple A/B tests are more reliable.
  • Don’t obsess over small uplifts. If a test result is within a couple percentage points, that’s probably noise. Focus on bold changes that can make a real impact.
  • Document what you don’t test. Sometimes, saying “we’re not testing X because we don’t think it matters” is just as important.
  • Don’t let the tool make the decisions for you. Iterable is powerful, but it won’t save you from bad strategy.

Keep It Simple and Keep Testing

A/B testing in Iterable isn’t rocket science, but it does require discipline and a willingness to ignore distractions. Test things that matter, wait for results, and don’t let minor wins go to your head. Most of all, don’t get paralyzed by “best practices”—the best thing you can do is keep experimenting and learning. Stay curious, stay skeptical, and let your results speak for themselves.