3 posts on 3 topics

6 03 2008

Edit: I fixed all the links in this post. Copying and pasting is getting the best of me!

I recently came across a few great posts that I enjoyed and wanted to pass on to you all. The first is from Tim Ash, who has written a great book on Landing Page Optimization. One of his more recent entries discusses how to write effective copy to increase conversions.

One of my favorite bloggers, Avinash Kaushik, tells marketers to embarrass their managers in order to succeed at their campaigns. Testing tops that list of course, but his other techniques are great methods for “working the system.”

Lastly, Lenny de Rooy wrote a guest post at SEO Scoop about 5 misconceptions of Google Website Optimizer. It goes slightly beyond GWO itself and into testing methodology.





How to get ideal test conditions (and results)

4 03 2008

A big mistake in testing is to overlook variables inside and outside of the test that impact results. In an ideal test, the only variables would be the ones you are testing on your page. That usually isn’t possible, but as long as you account for the other variables in your analysis, you will still get correct and actionable information.


If you test a seasonal page, the optimal page you find for that season probably won’t perform once the season ends. By not paying attention to those kinds of variables, you set yourself up to think you’ve found the optimal page when you haven’t. The same mistake is made by grouping e-mail, print, SEM, and event traffic together, unless you know they all react the same way to your changes.

Even within segments, there might be more segments to uncover. Your only limitation should be traffic; don’t segment so granularly that you can’t run a decent-sized test in a decent amount of time.

One of my clients doesn’t get a lot of traffic, but the traffic he does get comes from two very distinct sources. One converts in the single digits and the other converts in the teens. Although combining them would get me more data, it would be very confusing data, since they convert so differently.
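To make that concrete, here’s a minimal sketch with hypothetical numbers showing why pooling two segments like that muddies the results: the blended conversion rate moves with the traffic mix, not just with the page.

```python
# A minimal sketch (hypothetical numbers) of why mixing two distinct
# traffic segments muddies test results: the blended conversion rate
# shifts with the traffic mix even when nothing on the page changes.

def blended_rate(visits_a, rate_a, visits_b, rate_b):
    """Overall conversion rate when two segments are pooled together."""
    conversions = visits_a * rate_a + visits_b * rate_b
    return conversions / (visits_a + visits_b)

# Segment A converts in the single digits, segment B in the teens.
rate_a, rate_b = 0.05, 0.15

# Week 1: traffic splits 50/50 between the segments.
week1 = blended_rate(500, rate_a, 500, rate_b)   # 0.10
# Week 2: same page, same per-segment rates, but more of segment A shows up.
week2 = blended_rate(800, rate_a, 200, rate_b)   # 0.07

print(f"week 1 blended: {week1:.1%}")   # 10.0%
print(f"week 2 blended: {week2:.1%}")   # 7.0%
# Nothing about the page changed, yet the pooled number dropped three
# points -- which is why these segments are better tested separately.
```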

A few things to look out for:

  • The ads or offers visitors see beforehand
  • Interactions between your factors (if you aren’t testing interactions)
  • Technical problems
  • Problems that occur before or after the tested page

A note about the last bullet: the problems can range from technical glitches to problems with the overall funnel. If people get different experiences in the funnel that drastically impact whether or not they convert, it can add noise to your test. Some examples are different checkout processes for registered and non-registered users, or users being ineligible for the service.

The purpose of testing is to find out if a certain element performs well under the conditions you provide. If you aren’t paying attention to all of those conditions, the results you derive will be incorrect without you even knowing it.





3 steps to quickly make a good multivariate test

21 02 2008

Having great testing technology puts a lot of power in your hands. You can test anything and everything you want. However, like any other tool, to use it effectively you have to use it right. There are a lot of best practices and thought that go into test design, but following these three rules will get you a good test in most situations.

Steps
  1. Maximize your traffic: Pack as much as you can into a test for the amount of traffic you have, while keeping the test short (see the sketch after this list for a rough way to size one). Using Widemile’s platform that’s 2 weeks to be safe; with Google Website Optimizer you should allow at least a month (explanation).
  2. Test opposites: If you test content that’s similar, it will all perform about the same. So find the general theme you should be following first by testing opposites (B2B vs B2C, podcast vs ebook, descriptive vs benefits).
  3. Learn from the previous test: Always line up your tests so that each one teaches you something you can use in the next, whether that’s refining a winner or exploring something new.
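For step 1, here’s a rough sketch of how I’d size a test before launching it. This is my own back-of-the-envelope normal-approximation math, not Widemile’s or Google’s formula, and the traffic numbers are hypothetical.

```python
# Rough sample-size estimate for a conversion test (normal approximation,
# ~95% confidence, ~80% power). Back-of-the-envelope only.
import math

def visitors_per_variation(baseline_rate, relative_lift,
                           z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect the given lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical page: 5% baseline conversion, hoping to detect a 20% lift,
# testing 8 page variations with 2,000 visitors a day.
per_variation = visitors_per_variation(0.05, 0.20)
total = per_variation * 8
print(per_variation, "visitors per variation,", total, "total")
print("roughly", math.ceil(total / 2000), "days at 2,000 visitors/day")
```

The point isn’t the exact number; it’s that the math tells you quickly whether the test you’ve packed together will finish in weeks or drag on for months.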

The goal of these three steps is to make the most of your time by testing as much as possible while minimizing the time spent on suboptimal content. For example, if I were selling iPods and I tested 2 images of people running with the iPod, one with a man and the other with a woman, I might think that was a good test. However, I could have totally missed out on an image that worked better, such as an iPod next to a PC. I could test that out after the initial test, but then I would have wasted an entire test run. The right way would be to test one sports image versus one PC image and find out which direction to go. From there I could test other opposing images or refine the PC image.
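If it helps to picture it, here’s a small sketch of what “testing opposites” looks like as a multivariate design. The factors and levels are made-up examples, not a real Widemile test plan.

```python
# A small sketch (hypothetical content) of "testing opposites": each
# factor pits two contrasting themes against each other, and the page
# recipes are the cross product of the factor levels.
from itertools import product

factors = {
    "hero_image": ["person running with an iPod", "iPod next to a PC"],
    "headline":   ["benefit-driven copy", "feature/spec copy"],
    "offer":      ["free engraving", "free shipping"],
}

recipes = list(product(*factors.values()))
print(len(recipes), "page recipes")          # 2 x 2 x 2 = 8
for recipe in recipes:
    print(dict(zip(factors, recipe)))
```

Whichever theme wins becomes the direction for the next, more refined round of testing.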

The only warning I’d throw in is that if you’re trying to test a lot of things at once, you might want to scale back. Pick 2-4 themes, depending on your test size, and stick to testing them. Don’t mix and match.

Follow these steps and you’re on your way to getting not just quick tests, but efficient ones.





What’s an average conversion rate? 40%!

1 02 2008


Would you believe that? And if it were true, would it really mean anything to you? It shouldn’t.


I get asked this question fairly often and at first glance it seems like a logical question to ask, but really the focus should be elsewhere.

From my experience, conversion rates range from less than a percent all the way up to 30% or more. Does knowing that help me optimize my clients’ pages? No. Every page has so many internal and external variables that worrying about the average conversion rate makes little sense.

The goal of your page, the differences between your product or service and the competition’s, the audience you’re trying to reach, the channels you advertise through, and numerous other factors all affect your conversion rate. A competitor having a higher conversion rate than you does not mean you’re doing something wrong. Set the baseline for yourself and keep improving it. That’s how marketers should approach their conversion rate.

If you’re testing, you’ll find out whether your campaign is performing suboptimally and what the optimum is at the same time.

Pretty amazing huh?

I don’t tell clients I’m going to get their conversion rates above industry averages; I tell them that I’m going to make their campaign as successful as possible. Do that and you’ll be ahead of the competition and ahead of where you were when you first started.





Doubling conversion rates: MarketingSherpa case study

23 01 2008

MarketingSherpa is giving us a lot of love recently with The Weather Channel case study and now one on Smartsheet. If you were too shy to sign up for our case study on the Widemile website, then please check out the one at MarketingSherpa.

I especially like the last 4 points in the story:

#1. Don’t stand pat on your conversion rates. Colacurcio didn’t know going in that her original 5%-7% was above the industry average, and she is glad she didn’t. “If I would have said, ‘My conversion rate is pretty good’ and done nothing, I would have totally missed the opportunity to double it.”

#2. Conducting just a few multivariate tests and applying the findings to a greater number of landing pages works. Simply put, you don’t have to test each landing page individually. “Sometimes it seems overwhelming when you think of multivariate testing, but you can cut corners. There are a lot of things that are low-hanging fruit — things that can be applied across landing pages.”

#3. Redesigning shouldn’t stop with your team’s new ideas. “You really have to get your organization into the mindset: ‘We are testing. We are not just going to spit out the next five Web pages.’ ”

#4. Even if your higher-ups are impressed with the initial results, Colacurcio says, marketers should expect to face organizational barriers when they start their second round of testing.

Don’t forget to check out the before and after in the creative samples.

If none of that catches your eye, Janet Meiners at MarketingPilgrim wrote a great summary of the case study. She also makes a great point: “I’ve been in heated debates about the best course of action but testing works best – assuming you’re humble enough to go with the data over your ego.”

Trust your data and watch your numbers build your ego.





Test nothing and get results

18 01 2008

Now, this is a blog about testing, right? So why test nothing? Because nothing is powerful.

I don’t mean don’t test at all; I mean test having less on your page. Those awesome sub-headlines your copywriter created? Or those testimonials? Your audience might not care about them or even look at them.

Best practices say to use trust logos, reviews, awards, and a whole lot of other content, but you never know if any of it really helps. In fact, one of the biggest lifts ever at Widemile came from removing everything but the core material on the page: one hero shot, one headline, one description, and one button.

It was plain and simple.

To me and everyone else, it was an empty page filled with white space, but it converted at an extremely high rate compared to the previous page. No one at my office or at our client’s could believe it was the optimal page, but we couldn’t argue with the data.

While this might be an extreme example, it also shows the opportunities you may be missing if you don’t test turning off (hiding) elements on your page. So don’t just test something; test nothing too.
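In practice, “testing nothing” just means giving each optional element an off level so the stripped-down recipe competes like any other page. Here’s a tiny sketch with hypothetical markup; it isn’t how any particular platform stores its recipes.

```python
# A tiny sketch (hypothetical markup) of what an "off" level means in
# practice: the recipe simply omits any element set to None, so the
# bare page is one hero shot, one headline, one description, one button.
def render(recipe):
    """Build a page from a recipe dict, skipping hidden (None) elements."""
    return "\n".join(html for html in recipe.values() if html is not None)

bare_page = {
    "hero":         '<img src="hero.jpg" alt="product">',
    "headline":     "<h1>One clear headline</h1>",
    "description":  "<p>One short description.</p>",
    "button":       '<a class="cta" href="/buy">Buy now</a>',
    "sub_headline": None,   # hidden in this variation
    "testimonials": None,   # hidden in this variation
}

print(render(bare_page))
```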





How multivariate testing can change your whole business

15 01 2008

My boss, Frans Keylard, taught me one lesson that exponentially increased my respect for the power of multivariate testing.

While on the outside multivariate testing is about finding the best version of a page, once you know how to test, it can do a whole lot more than that. The information you glean from multivariate testing can shift the whole direction of your product, your service, and your business in general.

Multivariate testing does help you find good headlines, the right images and other content, but it also acts as a survey about your product/service that your visitors don’t even know they are taking.

For example, my company deals with a lot of companies selling to both business and home users. Traditionally, to figure out which market was more popular, they would survey people and ask, “Would you buy this product for your home or for your business?” Then they would count up the responses and go with whichever answer came out on top.

At Widemile, I accomplish the same thing using a multivariate test. I serve some visitors business messaging and others home messaging and they respond by buying or not buying. If the page with business messaging has more conversions, then that is the way to go, otherwise go with the home messaging.

(Better yet, if they are both significant in size, find ways to segment them and do more testing.)
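Reading the answer out of that kind of test is straightforward. Here’s a minimal sketch with hypothetical counts using a plain two-proportion z-test; it’s not Widemile’s actual analysis engine, just the textbook way to check that the gap is real.

```python
# A minimal sketch (hypothetical counts) of reading the "survey" a test
# gives you: did business or home messaging win, and is the gap real?
import math

def z_test(conv_a, visits_a, conv_b, visits_b):
    """Two-proportion z-test; returns (rate_a, rate_b, z score)."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    return p_a, p_b, (p_a - p_b) / se

# Hypothetical: 2,400 visitors saw business messaging, 2,400 saw home.
rate_biz, rate_home, z = z_test(168, 2400, 120, 2400)
print(f"business {rate_biz:.1%} vs home {rate_home:.1%}, z = {z:.2f}")
# |z| above roughly 1.96 means ~95% confidence the difference is real;
# here business messaging wins, which answers the "survey" question.
```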

In both situations, you’re asking a question and getting an answer. While multivariate testing asks the question less directly, it gets the most direct answer possible, a conversion, from the most direct audience possible, live traffic. This deals with the weakness inherent in surveys; people’s answers and actions don’t always match up.

I’m not saying multivariate testing replaces surveys in all situations, but you get genuinely valuable and actionable information from testing.

It is like killing two birds with one stone: one small bird (your landing page) and one huge bird (your overall marketing and business plan).

Some marketers already do this with their PPC and banner ads, seeing what people respond to and adjusting their overall marketing strategy to what works best. Multivariate testing is an extension of this, but it requires an actual conversion by the visitor.

Start taking surveys of your audience using multivariate testing. All you have to do is key in on a few messages that you think might work and try them out. You’ll learn how to improve your web pages and your business at the same time.

Questions? Comments? This is my favorite topic, so I encourage you to leave a note for me.