The digital marketing world is constantly evolving, and new marketing techniques are always on the rise. That makes it crucial to use A/B testing to confirm that your techniques are actually improving your conversion rates. You may be wondering: what is A/B testing? A/B testing, also known as split testing, combines scientific methodology with marketing. It involves testing one version of your email newsletter against another version in which you have changed a single element, then measuring the results to see which version yielded the better conversion rate. If you ask us, it is definitely worth launching A/B tests for your email newsletters, because you will learn which strategies are working and can use that knowledge to create better-performing campaigns. Done well, A/B testing also translates into more generated leads, sales and revenue.
Before you get started with split testing, you need to lay the groundwork for a smooth operation. Use the following steps as a guideline.
Use website analytics and visitor behaviour analytics tools to identify existing strengths as well as problem areas in your email newsletters. This will provide insight into customer behaviour so you know which segments you should A/B test. You will also need to track your current metrics to use as a baseline for the test.
After your analysis, the main starting point will be your hypothesis. Creating a split testing hypothesis is crucial for email conversion optimization. When you build your hypothesis, you should make observations about the current scenario, speculate on possible reasons, suggest improvements to fix the situation, and explain how you will measure the results. For maximum optimization, you should create a hypothesis for every element you plan to test.
In most cases, you will want to split test your entire list of clients. However, depending on the type of email newsletter you want to send, it would be wise to segment your email list, because not all of your clients and subscribers will have the same interests and level of interaction with you. You can segment according to several categories such as age, gender, geographic location, industry, website behaviour and many others.
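Segmenting before a split test amounts to filtering your list by one or more of those categories. Here is a minimal Python sketch; the subscriber records and field names are invented for illustration:

```python
# Hypothetical subscriber list; in practice this would come from your
# email platform's export or API.
subscribers = [
    {"email": "a@example.com", "age": 29, "country": "US", "industry": "retail"},
    {"email": "b@example.com", "age": 41, "country": "DE", "industry": "finance"},
    {"email": "c@example.com", "age": 35, "country": "US", "industry": "finance"},
]

# Build one segment: US-based subscribers working in finance.
us_finance = [s for s in subscribers
              if s["country"] == "US" and s["industry"] == "finance"]

print([s["email"] for s in us_finance])  # the segment to run this test on
```

The same pattern works for any combination of fields your platform tracks.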
When setting up an effective A/B test, you must know the potential variables you want to test in order to get accurate results and to maximize the A/B test significance. While you can split test almost all the different elements, here are the main ones you should focus on.
The first thing you will want to split test is your subject line. After all, it is your secret weapon for getting your subscribers to open your email. You can split test subject lines for numerous things such as length, word order, capitalization and even the use (or absence) of emojis. Through A/B testing, you can take different approaches to figure out what works best for your audience to increase their open rates. Maybe using their names drives up the open rates, while using a statement instead of a question results in a dip.
For the actual email copy of your newsletter, you can experiment with size, content, headlines and even images. You can test whether or not email personalization yields better results or if long form content is preferred over short form content. With A/B Testing, you can also choose better placements for your client testimonials and website links. Furthermore, it lets you effectively assess techniques and tones of voice for the newsletter so that you can create a sense of urgency when needed.
After your subscribers have opened your email, it is imperative to put your best effort into the next logical step: converting them. For that to happen, calls to action (CTAs) are crucial, as they dictate whether someone buys your product or clicks through to a landing page. For CTAs, you can try varying the colour, size, text and type of button in your split tests. You can also vary the design of the social media buttons that often accompany email signatures. Experimenting properly and effectively with CTA button copy is one of the most effective ways to drive more conversions.
First impressions do, to a certain extent, influence your subscriber’s actions. If the overall design of your newsletter doesn’t leave a lasting first impression, it will do little to make your subscriber take action. On the other hand, if you manage to impress them, you will get better traffic. The only way to know whether they will be impressed is by A/B testing the design. You can experiment with images and text to see which fares better. You could try using columns for comparisons, or different placements for different elements in a carefully crafted newsletter. These days, you should definitely split test the entire layout, style and template on all devices to ensure that your newsletter displays well on each of them.
The time of delivery of your email newsletter has a significant impact across many metrics, especially the conversion rate. No one really knows what the best time to send your email for your specific audience is but one of the main benefits of A/B testing is that you will at least narrow down the delivery window that works best for them. This way, you will figure out the day of the week, time of the day and time zone that works best for your target audience.
For email marketing testing, you have to define why you’re doing this. Is it to generate more engagement, leads and/or sales? Is it to know whether your campaigns are actually working? Or, most commonly, is it to increase your open and conversion rates? The goals vary from organization to organization, and only after you’ve defined them can you get started on the process. Whatever they are, when you actually sit down to set the goals for your email campaign testing, make sure they are realistic for your company.
While running the A/B test, you need to figure out which emails get more people interested in your products. To help with that, we’ve laid out some metrics that will give you a clear idea of what you should aim to improve in your tests. Here, we dive into what each one means and how to go about improving it to reach the goal your company has set.
The Open Rate, as the name suggests, is the portion of recipients who opened your email. If your open rate was 10%, your goal might be to increase it to 15%. While a better open rate does mean that more people will see what the newsletter has to offer, it doesn’t necessarily mean that those extra 5% will buy, or even click through to your product. If a newsletter has a good open rate but isn’t bringing in more customers, you might need to change other bits and pieces of the newsletter and do some more A/B testing.
The Click-Through Rate (CTR) is the ratio of people who click on a link in your newsletter to the people who viewed it. The CTR is an important metric to keep tabs on, as clicks are what bring people in to view what you have to offer. That said, even though a high CTR means your product is reaching more people, conversions will not necessarily increase in step.
The Response Rate is the ratio of people who responded to the email or took part in an offer to the total number of people in the sample. The greater the response rate the better, as it enriches your database and amplifies your engagement with people.
The Conversion Rate is the main metric that shows whether your email newsletter is doing well. The goal of each newsletter is to bring in as many customers as it can, and if the conversion rate is low, the newsletter needs to be tweaked and played around with. Even if the CTR is high, people will not convert unless the message in the newsletter and the message on your site both interest them. The newsletter should carry a compact version of your message that does not stray from what your product is. To maximize your sales, A/B testing should be done with the aim of improving the conversion rate.
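The four metrics above are simple ratios over your campaign counts. Here is a minimal Python sketch using the bases as described in this article (the counts are made up, and note that different email tools sometimes define the bases differently, e.g. clicks per delivery rather than per open):

```python
def email_metrics(delivered, opened, clicked, responded, converted):
    """Rates as defined in this article; each metric's base is noted inline."""
    return {
        "open_rate": opened / delivered,           # opens vs. emails delivered
        "click_through_rate": clicked / opened,    # clicks vs. opens (views)
        "response_rate": responded / delivered,    # responses vs. whole sample
        "conversion_rate": converted / delivered,  # purchases vs. whole sample
    }

# Illustrative campaign: 2,000 delivered, 500 opens, 120 clicks,
# 60 responses, 30 purchases.
metrics = email_metrics(delivered=2000, opened=500, clicked=120,
                        responded=60, converted=30)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

Computing all four side by side makes it obvious when, say, opens rise but conversions stay flat.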
For the most part, a warmer colour palette, a welcoming message or something else that attracts the attention of your target audience will help boost your open rate and click-through rate. Clickbaiting is one way of doing so, but we advise you not to promise what you can’t deliver or mislead your audience.
As for the response rate, it helps to make offers the audience can take up without worrying about handing over personal data. For example, a limited-time discount code for newsletter recipients, or a short survey about the products they might be interested in, often boosts response rates.
If a person is interested in your product and clicks the link on the newsletter, then they expect to see similar products on your website. If your website has a different tone than your newsletter, then it will deter them from associating themselves with your company, which won’t help the conversion rate. While adjusting your site to match the interests of your target demographics will help maintain interest and lure them into using your product, having a clear mission and quality product will go a long way for your company.
There are numerous A/B Testing marketing tools available right now that help you run these tests so you don’t have to do everything on your own.
MailChimp has a built-in tool called “A/B Split Campaign” that lets you pick a sample from your list of subscribers and run the test on just them. After you select the percentage of people you want to test (MailChimp recommends between 20% and 50%), you simply choose the metric and timings, and voila, you’re done. The tool not only tells you which variation performed best but also sends the winning variation to the portion of your subscribers who didn’t receive the test.
When using Campaign Monitor’s A/B testing option, you get to work with their customizable email templates. You can select the type of test you want to run, the email content, and the subset of your list to run the test on. You also pick the test settings and the method by which the winner will be selected. Furthermore, even while the test is in progress, you can monitor the results with their A/B Test Report.
Through SendGrid’s tool, you can test up to six different variations in your A/B test campaign. Alongside the usual type of test, sample size and winning criteria, you can also pick the test duration, from 30 minutes to 24 hours. After it decides the winner, you can choose whether that version is sent automatically or you pick it manually.
A successful A/B test has to be statistically significant and Kissmetrics’ A/B Significance Test calculates that for you. This tool is different from the other tools mentioned above in the sense that rather than running the test for you, it tells you if the changes you make will have a significant statistical impact on your conversion rates or not. The calculator tells you the difference in conversion rates, whether the changes will yield a higher rate and if that in itself is statistically significant.
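Significance calculators of this kind typically run a standard two-proportion z-test under the hood. Here is a minimal Python sketch with hypothetical counts; this is the textbook pooled z-test, not any particular vendor's actual code:

```python
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    Returns (rate_a, rate_b, p_value); a p-value below 0.05 is the
    conventional threshold for statistical significance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, built from math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical test: version A converted 120 of 2,400, version B 165 of 2,400.
rate_a, rate_b, p = significance(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.3f}")
```

If the p-value comes out above 0.05, the observed lift could easily be noise, which is exactly the warning these calculators give you.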
There are many sample size calculators available from different brands. They compute the minimum sample size required for each version of your test based on your audience, helping you reach the confidence level needed to detect the desired change in your conversion rates. Some split test calculators even tell you the total duration you should run your A/B test for.
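As a rough illustration of what such calculators do, here is a sketch of the standard two-proportion sample size formula. The baseline rate and hoped-for lift are example inputs, and the default z-values correspond to the common 95%-confidence / 80%-power setup:

```python
def sample_size_per_variant(baseline, min_lift, z_alpha=1.96, z_beta=0.84):
    """Rough minimum sample per variant for a two-proportion test.

    baseline: current conversion rate (e.g. 0.03 for 3%)
    min_lift: smallest relative lift you want to be able to detect
    """
    p1 = baseline
    p2 = baseline * (1 + min_lift)  # the rate we hope the variant reaches
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. 3% baseline conversion, hoping to detect a 20% relative lift
print(sample_size_per_variant(baseline=0.03, min_lift=0.20))
```

Notice that detecting a smaller lift requires a far larger sample, which is why these calculators often surprise people with how many subscribers a trustworthy test needs.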
While performing A/B tests on newsletters, we suggest that you assign the variants at random while changing only one detail, such as a different colour palette, a bolder headline or a different picture. By doing this, you can examine how each change affects how people interact with your newsletter, and study what gets more clicks and ultimately more conversions on your website. If there are too many differences between the newsletters, you can’t pinpoint what improved or diminished your sales.
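The random assignment described above can be sketched as a simple shuffle-and-split. The subscriber addresses and subject lines below are invented for illustration:

```python
import random

# Hypothetical subscriber list.
subscribers = [f"user{i}@example.com" for i in range(1000)]
random.shuffle(subscribers)        # randomize order before splitting

half = len(subscribers) // 2
variant_a = subscribers[:half]     # receives the original newsletter
variant_b = subscribers[half:]     # receives the one changed element

# Only the subject line differs between the two sends.
campaign = {
    "A": {"subject": "Our spring sale starts now", "recipients": variant_a},
    "B": {"subject": "Don't miss our spring sale", "recipients": variant_b},
}
print(len(campaign["A"]["recipients"]), len(campaign["B"]["recipients"]))
```

Because the groups are random and everything except the tested element is identical, any difference in results can be attributed to that one change.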
Prioritizing your A/B tests helps your company have a clearer vision of what changes it wants to make, what it expects from each change and what future tests will look like. According to experts, prioritizing removes emotion from the decision, helping you and your team make an informed choice. While prioritizing, there are a multitude of factors to keep in mind, from the ease of implementation to how noticeable the change is, and even adding important data or removing distractions from your newsletter. There are various models for prioritizing A/B tests: the PIE framework (Potential, Importance and Ease), the ICE Score (Impact, Confidence and Ease), ICE Version 2 (Impact, Cost and Effort) and so on. No single prioritization works for every company, so the best model depends on how yours runs.
It’s important to test variations, but it’s even more important to analyze the data you receive and tweak your variations accordingly. If a test is getting fewer clicks than before, you should analyze why it didn’t do as well and change it to improve the conversion rate. While analyzing the data, compare how it’s faring against the goals you’ve set. And even if the test isn’t doing well in the beginning, don’t end it prematurely: run your test for at least a week so you have a bigger sample size. Aside from the number of clicks a change brings, you also need to keep track of customer engagement, from sign-ups to free-trial registrations, as well as how much traffic the site gets. You can improve your sales if you analyze your data and make changes appropriately.
Suppose 15 out of a sample of 100 people bought your product, giving an apparent conversion rate of 15%. With a sample that small, the next 100 people could easily produce 10% or 20% instead. With a much larger sample, say 100,000 people, the observed rate shifts towards its real value. So, having enough data to run a statistically significant test will help you analyze the data better.
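This effect is easy to simulate. A small sketch assuming a hypothetical true conversion rate of 15%:

```python
import random

random.seed(42)       # fixed seed so the simulation is repeatable
TRUE_RATE = 0.15      # hypothetical "real" conversion rate

def observed_rate(n):
    """Simulate n subscribers and return the measured conversion rate."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# Small samples bounce around 15%; the large one lands very close to it.
for n in (100, 1000, 100_000):
    print(f"n={n:>6}: observed {observed_rate(n):.3f}")
```

Running this a few times shows the 100-person samples scattering widely while the 100,000-person sample barely moves, which is the whole argument for adequate sample sizes.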
One key part of A/B testing is running your tests simultaneously. When two different instances of the same newsletters are being compared, they should be exposed to the same conditions. Time changes a lot of factors, such as the growth of your company, quality of your products, demand of your products and so on. If the tests are run simultaneously, then the factors listed above will not be influencing the results.
By running A/B tests, your company can develop its optimal marketing strategy; they can really make a difference in helping your company accomplish its marketing goals. However, keep in mind that you have to plan your A/B tests well in advance, and we definitely advise you not to just “wing it”.