3 posts on 3 topics

6 03 2008

Edit: I fixed all the links in this post. Copying and pasting is getting the best of me!

I recently came across a few great posts that I enjoyed and wanted to pass onto you all. The first is from Tim Ash, who has written a great book on Landing Page Optimization. One of his more recent entries discusses how to write effective copy to increase conversions.

One of my favorite bloggers, Avinash Kaushik, tells marketers to embarrass their managers in order to succeed at their campaigns. Testing tops that list, of course, but his other techniques are great methods for “working the system.”

Lastly, Lenny de Rooy wrote a guest post at SEO Scoop about 5 misconceptions of Google Website Optimizer. It goes slightly beyond GWO itself and into testing methodology.





How to get ideal test conditions (and results)

4 03 2008

A big mistake in testing is to overlook variables inside and outside of the test that impact results. In an ideal test, the only variables would be the ones you are testing on your page. That usually isn’t possible, but as long as you account for the other variables in your analysis, you will get correct and actionable information.


If you test a seasonal page, the optimal page you find for that season probably won’t perform once the season ends. By ignoring those kinds of variables, you set yourself up to believe you’ve found the optimal page when you haven’t. The same mistake is made by lumping together e-mail, print, SEM campaign, and event traffic, unless you know they all react the same way to your changes.

Even within segments, there might be more segments to uncover. Your only limitation should be traffic: don’t segment so granularly that you can’t run a decently sized test in a decent amount of time.
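To make that traffic limitation concrete, here is a quick sketch using the standard normal-approximation sample-size formula for comparing two conversion rates. The function name and the 5% → 6% numbers are my own illustration, not from the post:

```python
# Rough sample-size sketch (normal approximation, two-proportion test).
# Hypothetical scenario: a 5% baseline where we hope to detect a lift
# to 6%, at roughly 95% confidence and 80% power.
import math

def visitors_per_variation(p_base, p_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect p_base -> p_lift."""
    variance = p_base * (1 - p_base) + p_lift * (1 - p_lift)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_lift - p_base) ** 2)

needed = visitors_per_variation(0.05, 0.06)
# Over 8,000 visitors per variation: a segment getting a few hundred
# visits a week simply can't support a test of this sensitivity.
```

Slice a segment in half and each slice needs that many visitors on its own, which is why over-granular segments stretch tests out indefinitely.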

One of my clients doesn’t get a lot of traffic, but the traffic he does get comes from two very distinct sources. One converts in the single digits and the other in the teens. Although combining them would give me more data, it would be muddled data, since they convert so differently.
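To see why blending such different sources muddles the data, here is a small sketch. The numbers are hypothetical, chosen to echo the single-digits-vs-teens split: neither source’s conversion rate changes between the two pages, only the traffic mix does, yet the blended rate moves and looks like a test result.

```python
# Hypothetical illustration: two traffic sources with very different
# conversion rates. Per-source rates are identical across both pages;
# only the mix of traffic shifts -- yet the blended rate drops.

def blended_rate(segments):
    """segments: list of (visits, conversions) tuples."""
    visits = sum(v for v, _ in segments)
    conversions = sum(c for _, c in segments)
    return conversions / visits

# Page A: 1,000 visits each from a 5% source and a 15% source.
page_a = blended_rate([(1000, 50), (1000, 150)])   # blends to 10%
# Page B: same per-source rates, but the mix tilts toward the weak source.
page_b = blended_rate([(1500, 75), (500, 75)])     # blends to 7.5%
```

Analyzed blindly, Page B looks like a loser even though it changed nothing for either audience, which is exactly the kind of confusion segmentation prevents.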

A few things to look out for:

  • The ad or offers visitors see beforehand
  • Interactions between your factors (if you aren’t testing interactions)
  • Technical problems
  • Problems that occur before or after the tested page

A note about the last bullet: the problems can range from technical glitches to issues with the overall funnel. If visitors get different experiences in the funnel that drastically affect whether they convert, it adds noise to your test. Examples include different checkout processes for registered and non-registered users, or users who turn out to be ineligible for the service.

The purpose of testing is to find out whether a certain element performs well under the conditions you provide. If you aren’t paying attention to all of those conditions, the results you derive will be wrong without you knowing it.





What can your data really tell you?

6 12 2007

Online testing is a bit different from other marketing data. It uses live traffic to find out what works. Analytics is the same: it measures what’s happening right now. So why is that important? Well, you can infer all you want from surveys, usability studies, and demographics, but in the end you can’t argue with what real users are actually doing.

Avinash Kaushik, a popular analytics blogger, summed up the juiciest bits of a presentation Jim Novo gave at eMetrics. In it, Jim asked, “What data yields insights that can be actioned the most?” The answer:

Data pyramid

“[A]ctionability, relevance of insights that can be actioned decreased as you go down the slide”

He makes the point that the farther you get from the top of the pyramid, the harder it is to accurately predict your users’ actions. Yet too much value is often placed in the bottom levels of the pyramid. Even when marketers do test and measure actual behavior, they often go about it the wrong way: they cling so tightly to all that other data that they end up testing variations that are all alike, defeating the purpose of testing.

Think about it: can you really tell whether a red button will work better than a blue one if all you have are demographics? There is a place for each of these types of data, but there should be no fear of actual behavioral data. Yes, we are using live traffic; yes, the data is driven by technology (online visitors, JavaScript, cookies, rather than people filling out a survey); but those numbers tell a story unlike any other data.
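As a concrete (and entirely hypothetical) illustration of letting real behavior settle the question, here is a minimal two-proportion z-test on made-up red-vs-blue button counts. The function and all numbers are my own sketch, not from the post:

```python
# Two-proportion z-test on hypothetical button-test data -- the kind of
# behavioral evidence demographics alone can never give you.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p) for the difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Red button: 120 conversions out of 2,400 visits; blue: 156 out of 2,400.
z, p = two_proportion_z(120, 2400, 156, 2400)
# p comes in under 0.05: live visitors, not inference, decide red vs blue.
```

No survey or demographic profile could have produced that answer; only measured clicks from real traffic can.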

Make the most of all the types of data available to you, but don’t live or die by any one of them. Use what’s best for each situation, and realize that you will never know you are right until you test it.