Kaveh Moravej

Assume Nothing, Test Everything

You might remember a mention several weeks ago of the web analytics and testing carried out by the Obama campaign. Well, we've been fortunate enough to see similar information trickling out from the Romney side, and even though Romney's campaign failed, there are still plenty of marketing intelligence insights to be drawn from it. I find these fascinating for the way they tie design and psychology together.

This also makes for a nice game of spot the difference: take a look at the online campaign banner below and see if you can identify what's changed.

Romney campaign banner

As you might have spotted, the only difference here is that the button in the top-right corner has had its text changed from "donate" to "contribute". This small change alone accounted for a 10% increase in sign-ups.
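For anyone curious how a test like this is typically wired up, here's a minimal sketch of deterministic variant bucketing: each visitor ID is hashed into a bucket, so the same person always sees the same button label. The variant names and the 50/50 split are my own assumptions for illustration, not details from the campaign.

```python
import hashlib

# Hypothetical variants for the button-copy test; the campaign's real
# configuration was never published.
VARIANTS = [
    ("donate", 50),      # control: original button label, 50% of traffic
    ("contribute", 50),  # treatment: new button label, 50% of traffic
]

def assign_variant(visitor_id: str, experiment: str = "button-copy") -> str:
    """Deterministically map a visitor to a variant.

    Hashing (experiment name + visitor ID) means a returning visitor
    always lands in the same bucket, so they never see the label flip.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number in 0..99
    threshold = 0
    for label, weight in VARIANTS:
        threshold += weight
        if bucket < threshold:
            return label
    return VARIANTS[-1][0]

print(assign_variant("visitor-12345"))  # e.g. "contribute"
```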

The screenshot below is another version of the campaign's homepage, with a slideshow displaying featured content.

Romney Campaign

Simply moving the sign-up form to a prominent position on the home page led to a massive 632% increase in sign-ups.
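Numbers like 10% and 632% are relative lifts: the change in conversion rate divided by the original rate. As a rough sketch, here's how you might compute the lift and a two-proportion z-test for a result like this. The visitor and sign-up counts below are invented purely for illustration, since the campaign never published its raw data.

```python
from math import sqrt, erf

def lift_and_significance(n_a, conv_a, n_b, conv_b):
    """Relative lift of B over A, plus a two-proportion z-test p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = (p_b - p_a) / p_a
    # Pooled standard error under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# Invented counts: 10,000 visitors per arm, sign-up rate jumping from 1% to ~7.3%.
lift, p = lift_and_significance(10_000, 100, 10_000, 732)
print(f"lift: {lift:.0%}, p-value: {p:.2g}")  # roughly a 632% lift
```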

Lastly, the testing team used IP data to customise a splash page.

Romney Campaign

As you can see, people were much more responsive to a customised message that included their state than to a generic one.
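The splash-page test is essentially IP geolocation feeding a template. Here's a minimal sketch of the idea, assuming MaxMind's free GeoLite2 database and the geoip2 Python package; the database path and the greeting copy are my own placeholders, not the campaign's.

```python
import geoip2.database
import geoip2.errors

# Path to a local copy of MaxMind's GeoLite2-City database (assumed).
reader = geoip2.database.Reader("GeoLite2-City.mmdb")

def splash_headline(ip_address: str) -> str:
    """Return a state-specific headline, falling back to a generic one."""
    try:
        response = reader.city(ip_address)
        state = response.subdivisions.most_specific.name  # e.g. "Ohio"
    except (geoip2.errors.AddressNotFoundError, ValueError):
        state = None
    if state:
        return f"Stand with Mitt in {state}"  # hypothetical copy
    return "Stand with Mitt"                  # generic fallback

print(splash_headline("203.0.113.10"))
```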

Although Romney ultimately lost, few can point the finger of blame at his digital team, who seem to have done an excellent job of testing and optimising everything.

What's so interesting about all this is how susceptible we all are to being unconsciously influenced by such minor changes in design and wording - almost like hypnotised subjects responding to trigger words or visuals.

Interestingly, Google's own research in this area demonstrates that people don't always consciously know what they prefer. They can tell you one thing, but in practice end up doing the complete opposite. For example, when Google asked people directly how many search results they wanted to see per page - 10, 20, 25, or 30 - users said they'd like more results, but testing showed otherwise. The problem was that showing more results meant slower loading times, which in turn made users less willing to search.

I think we're still seeing only the tip of the iceberg as far as this type of testing is concerned, but any time we can throw out assumptions and work with real data, it's good for business.

With thanks to Optimizely for providing source images.