So you’re running Hyperise campaigns and keep hearing how “personalized images boost conversions.” Sure, but how do you actually know if that fancy image with a prospect’s logo or first name moves the needle? That’s where A/B testing comes in.
This guide is for marketers, SDRs, or anyone using personalized images in outreach—especially if you’re tired of vague advice and want to see what actually works for your audience. I’ll walk you through practical, step-by-step instructions to A/B test personalized images in your Hyperise campaigns without getting lost in the weeds.
Let’s get your results talking, not just the sales copy.
Step 1: Get Clear on What You Want to Test
Before you touch a single image, decide what you’re actually comparing. A/B tests only work if you’re testing one thing at a time.
What you can test:
- Personalized image vs. no image
- Personalized image vs. static (non-personalized) image
- Two different styles of personalized images (e.g., one with a logo, one with a name)
- Different calls-to-action within the image
What to avoid:
- Changing too many variables at once. Don’t swap the image, email copy, and subject line all at the same time—you won’t know what made the difference.
- Overly minor changes (e.g., tweaking a shade of blue). Unless you have massive traffic, you probably won’t see meaningful results.
Pro tip:
Write down your hypothesis in plain language. For example: “I think adding a personalized image with the recipient’s company logo will increase click rates.” This keeps you honest about what you’re really testing.
Step 2: Prep Your Images in Hyperise
Time to create the images you’ll use. Hyperise makes it pretty easy, but there are a few things worth watching out for.
A. Create Your Image Variants
- Log into Hyperise and head to your dashboard.
- Create your base image—this is your control. For example, an image with generic messaging or a plain banner.
- Clone that image to make your variant. This is where you’ll add the personalization layer (name, logo, etc.), or tweak the CTA.
B. Add Personalization Elements (if testing them)
- Use Hyperise’s merge tags to pull in each contact’s info—first name, company, etc.
- Double-check your data fields. If you’re missing info (like a logo for some contacts), Hyperise can show a fallback image. Use this, or you’ll end up with broken images for some people.
C. Keep Images Consistent
- Size, format, and placement should be the same between variants.
- Don’t introduce new distractions. If the only difference is the personalization, you’ll get a cleaner read.
What to ignore:
Don’t get hung up on making the “perfect” design. Version 1 is almost never your winner anyway.
Step 3: Set Up Your Campaign Split (The Right Way)
You want your A/B test to be fair—no stacking the deck. Here’s how to do it:
A. Use a Platform That Supports A/B Testing
- If you’re sending emails through a tool that integrates with Hyperise (like Mailshake, Lemlist, or Outreach), check if it has built-in A/B testing. Most do.
- If not, you’ll have to manually split your list (more on that below).
B. Randomly Split Your Audience
- Random is key. Don’t, for example, put all your hot leads in one group and cold ones in another.
- Most platforms let you auto-split your list. If not, export to CSV, randomize the rows in Excel/Google Sheets, then upload the two halves as separate lists (or use the short script below).
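If you'd rather not shuffle rows by hand, a few lines of code do the same job. This is a minimal sketch, not a Hyperise feature: it assumes your export is a file called contacts.csv, and all the filenames are placeholders.

```python
# Randomly split an exported contact list into two equal groups.
# "contacts.csv", "group_a.csv", and "group_b.csv" are placeholder names.
import csv
import random

with open("contacts.csv", newline="", encoding="utf-8") as f:
    contacts = list(csv.DictReader(f))

random.shuffle(contacts)  # random assignment is what keeps the test fair
midpoint = len(contacts) // 2

for filename, rows in [("group_a.csv", contacts[:midpoint]),
                       ("group_b.csv", contacts[midpoint:])]:
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=contacts[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```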
Example: - Group A gets the control image. - Group B gets the personalized image.
C. Insert the Right Hyperise Image into Each Variant
- Each email version should have its own unique Hyperise image URL.
- Double-check that the merge tags correspond to the right fields in your CRM/email tool.
Pro tip:
Send test emails to yourself, using contacts both with and without data in the personalization fields, to catch any glitches. There’s nothing worse than sending “Hi {{FirstName}}” with a broken image. The sketch below flags empty fields before you ever hit send.
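A quick scan of your export catches most of these gaps automatically. A minimal sketch; the field names below are examples, so swap in whatever merge fields your Hyperise images actually reference.

```python
# Flag contacts that are missing personalization data before you send.
# The field names are illustrative examples, not fixed Hyperise names.
import csv

REQUIRED_FIELDS = ["first_name", "company", "logo_url"]

with open("contacts.csv", newline="", encoding="utf-8") as f:
    for line_no, row in enumerate(csv.DictReader(f), start=2):  # line 1 is the header
        missing = [field for field in REQUIRED_FIELDS
                   if not (row.get(field) or "").strip()]
        if missing:
            print(f"Line {line_no}: missing {', '.join(missing)}")
```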
Step 4: Decide What You’ll Measure
Don’t just measure what’s easy—measure what matters to your business.
Common metrics:
- Click-through rate (CTR): Are more people clicking on your CTA?
- Response rate: Are more people replying to your email?
- Conversion rate: Are more people booking calls, signing up, etc.?
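Each of these is just events divided by delivered emails. A tiny sketch with made-up counts:

```python
# Compute the rates above from raw counts (illustrative numbers only).
def rate(events: int, delivered: int) -> float:
    return 100 * events / delivered

delivered = 500
print(f"CTR:        {rate(12, delivered):.1f}%")  # clicks on the CTA -> 2.4%
print(f"Response:   {rate(7, delivered):.1f}%")   # replies received  -> 1.4%
print(f"Conversion: {rate(3, delivered):.1f}%")   # calls booked etc. -> 0.6%
```

Whichever denominator you pick (delivered is a common choice), use the same one for both variants, or the comparison is meaningless.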
Be careful with open rates.
They’re notoriously unreliable because of email privacy tools (Apple’s Mail Privacy Protection, for instance, pre-loads images and inflates opens) and image blockers. Focus on actions, not just opens.
Set a time frame:
Run the test long enough to get meaningful data (a week or two for most outbound campaigns). Don’t stop early because you “feel” like one is winning.
Step 5: Launch and Monitor
Hit send and watch the numbers—but keep your cool.
A. Monitor for Errors
- Spot-check your sends. Are all images rendering? Any broken merge tags?
- Watch for unusually high bounce or unsubscribe rates—could be a sign your images are triggering spam filters.
B. Wait for a Real Result
- Don’t call a winner after 20 sends. You need enough data for a real difference to show up.
- A good rule of thumb: at least 100 sends per variant, and ideally far more. The sketch below shows why small lifts need big samples.
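If you want a firmer number than a rule of thumb, the standard two-proportion sample-size formula gives one. A minimal sketch at 95% confidence and 80% power; the baseline and lift are illustrative, not benchmarks.

```python
# Rough estimate of sends needed per variant to detect a given lift
# in click rate (two-proportion formula, 95% confidence, 80% power).
from math import sqrt, ceil

def sends_per_variant(p1: float, p2: float,
                      z_alpha: float = 1.96,  # 95% confidence, two-sided
                      z_beta: float = 0.84) -> int:  # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a jump from a 2% to a 4% click rate:
print(sends_per_variant(0.02, 0.04))  # about 1,140 sends per variant
```

Note how fast this grows: detecting a 2% to 2.5% lift instead would need roughly 14,000 sends per variant.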
What to ignore:
A couple of early replies saying “Nice image!” can be encouraging, but they’re anecdotal. Wait for real numbers.
Step 6: Analyze and Decide
Numbers in, now what?
A. Compare Your Metrics
- Look at your pre-defined success metric across both variants.
- Is there a clear winner? Or are the results basically identical?
Example:
- Group A (control): 2.1% click rate
- Group B (personalized): 3.2% click rate
That’s a meaningful jump. But if it’s 2.1% vs 2.2%, don’t waste time over-optimizing.
B. Check If the Difference Is Real
- Use a simple statistical significance calculator (there are free ones online), or run the quick test below, to see if your results are likely due to chance.
- If you don’t have enough data for significance, just note that and move on. Don’t pretend you’ve found a “winner” if you haven’t.
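Those online calculators typically run a two-proportion z-test, which you can reproduce yourself. A minimal sketch using statsmodels (pip install statsmodels), plugging in the example rates from above as if each group got 1,000 sends:

```python
# Two-proportion z-test on click counts per variant.
from statsmodels.stats.proportion import proportions_ztest

clicks = [21, 32]     # Group A (control) vs. Group B (personalized)
sends = [1000, 1000]  # delivered emails per variant

z_stat, p_value = proportions_ztest(clicks, sends)
print(f"p-value: {p_value:.3f}")
print("Likely a real difference" if p_value < 0.05 else "Could still be chance")
```

Worth noting: with these numbers the p-value comes out around 0.13, so even a 2.1% vs. 3.2% gap isn’t statistically significant at 1,000 sends per variant. That’s exactly the “not enough data” case above: note it and move on.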
C. Look for Surprises
- Did the personalized image hurt results? Sometimes people find them gimmicky or spammy. Don’t force it.
- Did you see more replies, but not more clicks? That could be a win, depending on your goals.
Step 7: Roll Out, Rinse, Repeat
If you found a clear winner, great—use it as your new control and keep testing. If not, don’t sweat it.
A few honest takes:
- One winning test doesn’t mean every future campaign will see the same results. Audiences, offers, and seasons all change things up.
- Sometimes, personalized images make a big difference; other times, they don’t. The only way to know is to test.
- Don’t chase a 0.1% improvement forever. Your time’s better spent testing bigger changes or improving your list quality.
Final Thoughts
A/B testing personalized images in Hyperise is straightforward—if you keep things simple and focus on what actually moves the needle. Don’t get lost in the details or the hype. Set up your test, measure what matters, and let the data guide you. If you keep iterating (and avoid overcomplicating things), you’ll get results that actually help, not just look good in a slide deck.
Happy testing.