A/B testing is a critical component of any marketer’s arsenal these days. Marketing as a whole is becoming much more reliant on data-driven decisions that are derived from split testing, which provides insight into what we should and shouldn’t do. Conversion rates are becoming much more important to all levels of management, especially as testing tools get easier to implement and allow us to be extremely agile in our marketing efforts. Here are seven A/B testing lessons that may give you some ideas of what to test to help increase your conversion rate.
1. Good title vs. a title that will always get clicked
In marketing it is important to evaluate how you showcase your product in front of potential buyers. In fact, there is a common saying: “Sell benefits, not features.” While writing the title of your landing page, ask yourself if it really sells the benefit.
Recently, Neil Patel launched an all-in-one SEO analyzer on his personal blog, QuickSprout.com. Though the tool created a buzz in digital marketing circles, Neil still A/B-tested the title of the tool’s landing page.
Title A: Are You Doing Your SEO Wrong?
Title B: Do You Want More Traffic?
Both titles were good, but the second one is catchier. Neil Patel hasn’t revealed how much impact this test had on his conversion rate; perhaps it is too early to tell.
Another great example: Movexa, a natural joint-relief supplement by Vitamin Boat Corp., ran an A/B test on its product page. The original title was too vague, so the company improved it to catch people’s attention.
Title A: Natural Joint Relief
Title B: Natural Joint Relief Supplement.
Test hypothesis & the impact: The idea behind these tests was to improve the clarity of the titles. Since the title is one of the first things users see, improving it can have a big impact on conversion.
Movexa has reported that improving clarity of their title on the product page has increased its sales by 90 percent.
Lesson to learn:
Think like your customer. Make sure you are selling benefits and not features. Split test with different elements and find which works best for you.
2. Description vs. Overview
A recent study about how users read on the web reveals that users read, at most, 28 percent of the copy on the page. This means, instead of reading the full copy, users will typically scan the content.
A couple of months ago, Keep&Share conducted an A/B test with 4 variations of titles and descriptions.
Variation 1: Longer title with brief overview
Variation 2: Longer title with longer overview
Variation 3: Longer title with a brief overview, different from the two above
Variation 4: Shorter title with shorter overview
Test hypothesis & the impact: The objective was to analyze which of the 4 variations works best in terms of conversion. The test was fast to run and easy to analyze since only the title and description changed.
The winner? Variation 4, with a 103 percent improvement in conversion rate.
Lesson to learn: It doesn’t matter how well you describe the product; unless users are genuinely interested in reading the copy, results won’t vary. Give a brief overview of what you provide and make sure the copy is easily digestible. Remember, less is almost always better.
3. Making your call to action prominent
Fiverr.com is the world’s largest marketplace for small services. Unlike other websites, in a service-based marketplace, users are more likely to learn how the website works before creating an account.
That said, the area above the fold on Fiverr’s previous homepage design was intended mainly to teach people how the site works.
Clicking the “how does it work” button on the homepage leads to a much better explanation of the product.
As you browse through different categories, you can see interesting gigs and read purchase reviews. There is no need yet to create an account. Therein lies the problem. The call to action on the homepage was not prominent on the previous design.
The good news is that, unlike the older page, the call to action on the current design is prominent.
Similar to Fiverr, the old design of Consolidated Label didn’t have any call to action on their page. So they A/B tested by placing a prominent call to action on their test page.
Test hypothesis & the impact: Since the call to action is prominent in the new design, the expectation from the test is increased conversion.
Fiverr hasn’t revealed the conversion rate of its new design yet. Consolidated Label confirmed a 62 percent increase in conversions.
Lesson to learn: You may want to let your users know more about your product or services. However, make sure your call to action is prominent on homepages.
4. Checkout process – Single step vs. multiple steps
Recently, HostGator.com reduced its checkout process from 2 steps to 1. Earlier, users chose the domain name and discount coupon in the first step and entered billing information in the second; now, everything is merged into a single step.
Test hypothesis & the impact: The idea here is to simplify the checkout process as much as possible to reduce the possibility of hitting the back button or going elsewhere.
A study conducted by Getelastic.com found that a single-page checkout process increased the conversion rate by 21.8 percent.
However, reducing the number of steps does not always increase the conversion rate.
CrazyEgg.com saw a 10 percent lift in its conversion rate when it changed its checkout process from 2 steps to 3 steps.
Lesson to learn: Research shows that single-page checkout outperforms the multi page checkout in terms of conversion rate. However, it depends on the type of the product and the target market. Make sure you split test and learn what works for you. For a multi-page checkout, it is better to provide a visual indicator, which shows the user’s checkout progress.
5. Explanation – Slides vs. videos
If a picture speaks a thousand words, how many words does a video speak? After all, we are lazy. As mentioned earlier, users read at most 28 percent of web copy on a given page. So why not create an explainer video instead of putting all your effort into writing and improving the copy?
A nice explainer video not only showcases what your product is all about, but it makes your product stand out from your competitors.
Test & the impact:
According to a Statistic Brain study, the average attention span of a visitor on a web page is 8 seconds. However, it would be harder to explain the purpose of a product in 8 seconds, especially if the idea behind a product is somewhat complicated.
This is why an explainer video is vitally important. Research shows that, on average, explainer videos are watched for more than 2 minutes. An explainer video can therefore help increase your conversion rate.
CrazyEgg.com reported a 64 percent increase in conversion rate.
Work.com and Dropbox.com have also found increases in conversion rates by 20 percent and 10 percent, respectively.
Lesson to learn: No matter if you use slides or explainer videos, make sure the end users understand what your product is all about.
Explainer videos are great, however there are some exceptions where the videos would be less than ideal.
For instance, videos might not be ideal when:
- Visitors have poor Internet connectivity
- Products are simple and the main message is easy to convey
- Visitors are in a hurry and don’t want to watch an entire video
6. How long should the landing page be?
A common way of capturing prospects is to bring them to a smaller squeeze page with few or no distractions. However, in some cases you may want a longer sales page to further convince users to sign up. Syed Balkhi, the founder of WPBeginner.com, uses both concepts together in a single landing page.
Users can click on a CTA button and they are directed to a contact form that needs to be filled out. Users are also able to click on information that will show them more details of the services provided before signing up.
Test and the impact: Longer pages will be needed if it is harder to convince the prospect to buy a product or if the product is costlier.
Quicksprout.com has observed an increase in conversion rates of 67.2 percent with a shorter squeeze page, where the prospects were just asked to submit their email address.
FitnessWorld.dk found 11 percent more conversions when it reduced the size of its page; the gym is well known and the offer is simple and inexpensive.
Lesson to learn: Split test different kinds of landing pages and use the one that works better for you. If you are not sure, combine the shorter and longer page concepts into a single landing page so users can choose for themselves.
7. Split testing – the downside
Split testing can be a great way to learn what works best for your website; however, there are drawbacks if you don’t use it effectively.
- Duration: Make sure you conduct A/B tests long enough. Shorter duration tests may not hold true for the long run.
- Number of conversions: For reliable results, don’t stop the test until each variation has received a sufficient number of conversions. This means that if you have a low-traffic website, you’ll have to keep the test running until statistical significance is achieved.
- SEO: There are concerns that split testing can hurt SEO. One of the major problems with long-term A/B tests is duplicate content. Make sure your test URLs are not indexed by Google.
Lesson to learn: Despite having some drawbacks, A/B tests are vital for conversion rate optimization. Tools like Visual Website Optimizer can give you a rough idea of how long you should run the A/B tests by inputting elements such as the number of visitors, number of variations and expected improvements in the conversion rate.
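Tools like Visual Website Optimizer handle the statistics for you, but the underlying check for "enough conversions" is typically a standard two-proportion z-test. Here is a minimal sketch of that test; the function name and all conversion figures are hypothetical, for illustration only:

```python
from statistics import NormalDist

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variation B's conversion rate
    significantly different from variation A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return z, p_value

# Hypothetical numbers: 200 vs. 260 conversions out of 10,000 visitors each
z, p = ab_test_z(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```

With a low-traffic site, the p-value stays high for a long time, which is exactly why stopping a test early gives misleading winners.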
It is common to have negative results when you run A/B testing on your website. Don’t be discouraged from further testing. Remember, options that worked for someone else may not always work for you.
It’s 3 p.m. You’ve promised to deliver a blog post by EOB today, either to your own blog under a self-imposed deadline or to a client’s blog. So far, you’ve come up with…nothing.
There is so much noise on the Internet that you’ve convinced yourself that if your content isn’t astoundingly brilliant (as in “Oh my gosh, this has prompted me to change my business model” brilliant), then there’s no point in publishing it. You think back to the myriad blog posts you’ve read and thought, “What a waste of time…” You don’t ever want to be that author.
Which is why, at now 3:05 p.m., you’ve still got nothing.
One big mistake a content marketer (myself included) can make is to assume that people will do nothing more than read your content and then buy your product. Even though the term “online conversation” has been around for a few years now, it’s still easy to monologue your way through blog posts and webinars, assuming no one will offer feedback. With that mindset, it’s easy to think you’d have to produce nothing short of brilliance to stand out online.
One of the greatest things about digital marketing is the ready availability of metrics. Compared to decades past, we can know so much more about how many people respond to our marketing activities and how, when, and where they respond. And it’s hard to find a CMO who hasn’t read a slew of articles about Big Data in the past year, wondering how it should relate to her marketing team’s efforts.
Agile Marketing practitioners understand that metrics are key for all marketing activities – “data over opinions” is part of the Agile Marketing Manifesto. And we all know it’s hard to improve what you can’t measure. Unfortunately, many marketers still fail to do the important work of connecting the dots between metrics associated with individual tactics and metrics that the business readily understands and values such as revenue. However, oftentimes marketers can’t connect those dots because they don’t know how their customers connect the dots between their sales and marketing tactics to begin with.
This one-hour web seminar, The Importance of Buyer Behavior for Agile Marketing Metrics, was presented by Chris Barron on Friday, May 23, 2014. Attendees reviewed common marketing metrics and how they can be combined with a marketing funnel, or better yet a buyer’s journey, to connect the dots to the bottom line.
Missed this seminar? Download the recording and slides here!
Learn to apply Agile Methods to Marketing in order to get more done, adapt to change and see immediate measurable results in Agile marketing training from ASPE-ROI.
Last time, I talked about how some Agile Marketing teams use Scrum but don’t use User Stories. And, as with other symptoms of Scrumbut, how that can come about for both good and bad reasons. Which leads us to our next symptom:
“We use Scrum, but we don’t do relative sizing.”
If you went through any formal Scrum training, you were probably exposed to this relatively awkward concept – pun totally intended. In a nutshell, relative sizing systems simply allow you to say this is bigger/smaller than that. That’s why it’s called “relative” sizing – you’re not holding something up to an absolute scale to determine how large or small it is. And there are several “styles” of relative sizing, the most commonly used being:
- Modified Fibonacci sequence: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100
- Shirt sizes: S, M, L, XL, XXL
- Dogs: various breeds from Chihuahua to Great Dane
My favorite is the modified Fibonacci (Fib for short). But before digging into why you should use it, let’s take a quick look at what people are doing instead of relative sizing:
- Rough estimates in hours and/or days
Agh! Run! Once you start talking in terms of hours and days it’s hard to keep “rough” in perspective. People have an ingrained habit of trying to be precise and accurate when they’re discussing time estimates. Before you know it, someone starts creating a Gantt chart to make sure they’re coming up with an accurate “rough” estimate. Actually, they’ll wind up creating a precise but very inaccurate estimate in that case, but I digress.
Unfortunately, it’s also reinforced by the people who ask for the estimates. I remember a GM asking me how long it would take to put together an analysis and proposal for a new product opportunity, and I told him about 3 to 4 weeks. And without missing a beat, he asked me what it would take to make sure it took 3 instead of 4. Ugh.
Statistically, estimating something and winding up within ±10 percent sounds pretty good in many applications. However, when it comes to time estimates, taking 10 percent longer is NOT the same as taking 10 percent less time than estimated – you get hammered a lot harder for missing a deadline than for beating it. And even if you always finish within your estimates, you eventually get accused of intentionally sandbagging and are pressured to lower your future estimates.
Relative sizing breaks these habits by forcing you to deal with units of measure that can’t be readily compared with actual time. Relative sizes encompass not only time but also risk, complexity, and general uncertainty. When you decide to actually do the work, e.g. in your sprint planning meeting, you will break down items into tasks with actual hours – but at least then we’re waiting until the last responsible moment and having the actual team members estimate the actual tasks involved. Doing a time-based estimate before this point is just asking for trouble.
So how does relative estimation solve these problems? If I ask a given marketing team to say whether campaign A or campaign B is more work than the other, we’ll probably have a short discussion, then they’ll decide. Then if I ask that same team later to estimate whether a new campaign, called ingeniously enough campaign C, is more work than A or B, we’ll have some discussion and they’ll decide. And I could keep asking the same team to give me a relative ranking of different campaigns until we have agreement that, roughly speaking, these over here are the smallest, those over there are the largest, and we have a couple groups of campaigns that we’ve ordered between those two extremes.
Now if I start having that team actually implement those campaigns during sprints, we’ll start to see how many of the relatively largest campaigns we can get done in a sprint, how many of the relatively smallest, etc. So over the course of a few sprints, just by using a consistent relative sizing system combined with the actual productivity of the marketing team, I can ask them to give me a relative size of any new campaign, and after a short discussion, get a rough estimate of how long it would take to get done.
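That last step, turning relative sizes plus observed productivity into a rough forecast, can be sketched in a few lines. All campaign names, point values, and sprint numbers below are made up for illustration:

```python
import math

# Hypothetical backlog of campaigns with relative sizes (modified Fibonacci points)
backlog = {"campaign A": 3, "campaign B": 8, "campaign C": 5, "campaign D": 13}

# Points the team actually completed in each of the last few sprints
completed_points = [18, 22, 20]
velocity = sum(completed_points) / len(completed_points)  # average points per sprint

total_points = sum(backlog.values())
sprints_needed = math.ceil(total_points / velocity)
print(f"{total_points} points at ~{velocity:.0f} points/sprint -> about {sprints_needed} sprints")
```

The forecast is deliberately coarse: the point of relative sizing is that precision comes from observed velocity over time, not from up-front hour estimates.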
It’s commonly understood in the social media/content marketing world: ask questions to get better engagement. Your audience is flattered that you’re asking for their opinions. They feel more connected because they’ve given their input. They’re more likely to engage when you move the conversation toward a sale. It makes sense…but it’s not entirely true.
If you’ve used the “asking questions” approach to increase your follower levels, number of likes, etc., you’ve likely noticed how difficult it can be. I’ve worked at a social media marketing agency for two years. During those two years, I’ve always tried to implement the “ask questions” rule. I can definitively say that coming up with questions isn’t the hard part — the hard part is getting people to answer them!
Unless you boost your posts, only a small portion of your Facebook or Twitter audiences are going to see your question in the first place. The engagement you’ll supposedly receive by asking questions (comments, likes, retweets) should boost your “staying power” on your audience’s newsfeed, but how will your audience engage if they don’t see your content? It’s a never-ending cycle.
Does this mean the question asking should cease? Not at all; it simply means you need to rethink the way you go about it. In my two years at Shelten Media, the following mistakes always led to frustration. Avoid them, and you’ll get much better response rates from your audience.
Mistake #1: Only asking the question once
Tactical Metrics in Your Agile Marketing Backlog
In my last post about your Marketing Backlog, I talked about how to approach events as an Agile Marketer and useful metrics that will allow you to adapt and improve the effectiveness of your events. Likewise, in this post I’ll use a new product campaign as an example for additional tactical metrics. And just like the metrics I mentioned for events, there’s no rocket science or black magic here – it’s a matter of defining metrics that are meaningful, straightforward to collect, easy to understand and then having the discipline to track them over time to make improvements.
Let’s say we’re the marketing team at Insane-o Corporation, a company I’m totally making up for this blog post. And we’re going to launch a new product, the Maxi-Mini, that will expand our existing Maxi product line. What does it do? What are its benefits? Ehh – this is just an example, so I’ll leave all the details up to your imagination.
At Insane-o Corporation, we’ve categorized our campaign activities into four marketing backlog buckets – awareness, consideration, decision, and loyalty. If you’re familiar with the buyer’s journey concept then these buckets will be familiar too. If you’re a funnel gal or guy, you might’ve called them TOFU, MOFU, conversion and retention. And because we’re Agile Marketers, we’re focused on getting the minimum viable campaign out the door as soon as possible with real metrics so we can start learning, adapting, and improving. Don’t know what a minimum viable campaign is? Did I mention I teach a class on Agile Marketing (hint, hint)?
Let’s start with awareness. Because Insane-o is a household name, we don’t have to worry about branding at that level. The Maxi product line is well known in its existing markets, but Maxi-Mini is going after a segment that we haven’t served before. And the Maxi-Mini product is certainly new, so we should do some activities to build product awareness. Furthermore, our buyer persona research showed us that most of our potential buyers will search for keywords related to their pain-points.
- We’ve done a lot of effective PPC campaigns in the past, and we know specific keywords from our buyer persona research. So we’ll do a PPC campaign that takes people to content that raises awareness of Maxi-Mini.
- We’ll use multiple versions of the content to test what works best.
- What will we measure? We’ll measure standard PPC-related metrics like impressions, rank, CPC and CTR.
- We’ll also measure the overall impact on Maxi-Mini awareness. The simplest way is with a single survey question: “List the products you think of for… “ and here’s where you insert whatever pain-points you imagined for Maxi-Mini. Note where Maxi-Mini ranks in the answers, if it appears at all. Repeat this survey periodically.
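As a rough illustration of how these tactical metrics roll up, here is a short Python sketch. The impression, click, spend, and survey figures are entirely hypothetical:

```python
# Hypothetical PPC results for one Maxi-Mini ad variation
impressions = 50_000
clicks = 1_250
spend = 812.50  # total cost in dollars

ctr = clicks / impressions  # click-through rate
cpc = spend / clicks        # cost per click
print(f"CTR = {ctr:.2%}, CPC = ${cpc:.2f}")

# Unaided-awareness survey: each respondent lists products for a pain-point;
# note where Maxi-Mini ranks in each answer, if it appears at all
answers = [["BrandX", "Maxi-Mini"], ["BrandY"], ["Maxi-Mini", "BrandX", "BrandZ"]]
ranks = [a.index("Maxi-Mini") + 1 for a in answers if "Maxi-Mini" in a]
mentioned = len(ranks) / len(answers)  # share of respondents mentioning us
print(f"Mentioned by {mentioned:.0%}; average rank {sum(ranks) / len(ranks):.1f}")
```

Tracking these same numbers sprint over sprint is what lets the team see whether the PPC variations are actually moving awareness, not just clicks.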