When you run an email test, do you test for the right reasons and in a way that will deliver real results?
At Alchemy Worx, we’d be the first to agree that testing can be an extremely valuable investment of time – we certainly see it as an essential element of any long-term email marketing strategy. But the most valuable tests require a significant amount of resource across all elements – from planning, design and HTML production, to deployment and analysis of the results.
This poses a challenge to email marketers, who for years have been told that testing is the ‘responsible’ way to make changes to their email marketing programme. Because the channel allows it, the received wisdom goes, email marketers are expected to prove a concept before implementing any new idea.
Unfortunately, with the sort of time and resource constraints that marketing departments are experiencing in these times of austerity, any significant investment in testing is seen as a luxury that the business can’t afford, or at least not on any ongoing, consistent basis.
The result is an attitude of extreme caution – a reluctance to try any new approach without first testing it – that can be counter-productive. Companies that don’t have time to plan and execute a proper test miss opportunities and tread water instead of diving in and making a change.
In this way, the mantra of ‘Test, Test, Test’ can hold back real results and is another symptom of Fear and Self-Loathing in Email Marketing, a condition that Dela Quist, Alchemy Worx CEO, explores in his new book of the same name. Fear and Self-Loathing – the misplaced anxiety that we are emailing too much or overloading inboxes, for instance – can have an adverse impact on a wide variety of strategic decisions, from permission and opt-ins to frequency, content – and testing.
Go with your gut instinct
We’re all agreed: done right, testing can uncover unexpected opportunities and, where possible, should be an integral part of your long-term email marketing strategy. BUT: don’t let it stop you from trying new things. Testing minimizes risk and certainly helps to optimize elements of your campaigns, but no matter how well planned and executed, no test will be 100% accurate. External influences such as the weather and the economic climate cannot be controlled, for instance, and can have a significant impact on campaign performance; and running a test on a single campaign will not uncover the long-term impact of the change being tested.
If you don’t have the time to run a statistically significant test, why not take a risk instead? You know your subscribers, and you have valuable instincts about what will and won’t work. So go with your gut and try something different. Making excuses about the lack of resources to run the perfect test will likely cost you more in the long term than the risk of trying something new.
What, after all, is the worst that could happen? Most of the uplifts seen from testing are small and steady, and even a test that produces a positive result means sending the less effective version to part of your audience for as long as it runs. The potential benefits of trying something different are likely to outweigh any possible losses – if you can bring yourself to just take a risk.
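Part of the reason a ‘proper’ test is so resource-hungry is simple arithmetic: reliably detecting a small uplift takes a surprisingly large audience. Here’s a minimal sketch of the standard two-proportion sample-size estimate – the baseline open rate and uplift figures are purely illustrative assumptions, not real campaign data:

```python
# Rough sample-size estimate for a two-proportion test
# (standard z-test formula at 95% confidence / 80% power).
import math

def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96,  # 95% confidence
                            z_beta: float = 0.84) -> int:  # 80% power
    """Approximate recipients needed in EACH variant to detect p1 -> p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Illustrative example: detecting a lift from a 20% to a 22% open rate
print(sample_size_per_variant(0.20, 0.22))  # ~6,500 recipients per variant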
Work that test plan
When you do get the opportunity to do some proper testing, here’s how to develop an email test plan in 5 easy steps:
Define your hypothesis
Clearly defining what you’re hoping to achieve through testing will help focus your efforts and keep you on track throughout the process.
Mine your historical data
To pinpoint the areas that are likely to have the biggest impact – and minimize the number of tests you need to do – take a look at what’s been done before.
Design your test plan
Take a long-term view and develop a test plan that involves making small, regular changes to your campaign – but build in enough flexibility to respond to the results as they emerge.
Deploy your campaigns
Remember to allow for the extra resource that testing will require.
Analyze your results
Wait as long as you can to assess the results – then update your hypothesis and start the whole cycle again.
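When you do reach the analysis stage, the core significance check is straightforward. Here’s a minimal sketch using a two-sided two-proportion z-test on open counts – the numbers are invented for illustration, and the same check applies equally to clicks or conversions:

```python
# Minimal sketch of the analysis step: a two-sided two-proportion
# z-test comparing open rates between a control and a variant.
import math

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Return (z score, two-sided p-value) for the difference in rates."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: control opens at 20%, variant at 22%
z, p = two_proportion_z_test(opens_a=2000, sent_a=10000,
                             opens_b=2200, sent_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 means significant at 95%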