Creating an effective direct marketing campaign begins with determining the metrics for success. By understanding what constitutes an acceptable response, you can construct the campaign against an established benchmark and measure improvement over it.
Understanding the analytics and data points related to the campaign is important to measuring which campaigns perform and which fall short. By creating versions of the campaign and then monitoring response, you can build a predictive model that can be employed for future direct marketing programs.
A/B and multivariate testing work best when the sample is large enough to support multiple versions. At Wilde Agency, we've created A/B tests for our own programs as well as for clients' programs. For example, when we sent invitations last year for our Marketing Summit, a percentage of the invitations said "FREE" and a percentage said "COMPLIMENTARY". We tested the value of each word as a psychological trigger. On first analysis, "FREE" outperformed "COMPLIMENTARY" by roughly 30%.
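Whether a lift like that 30% is meaningful depends on how many pieces were mailed. As a sketch (the invitation counts and response numbers below are hypothetical, not our actual campaign figures), a standard two-proportion z-test can check whether the difference between two versions is likely real or just noise:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is version A's response rate
    reliably different from version B's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled response rate under the "no difference" assumption
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 5,000 invitations per version,
# "FREE" drew 260 responses (5.2%), "COMPLIMENTARY" drew 200 (4.0%)
z, p = two_proportion_z(260, 5000, 200, 5000)
```

With these made-up counts the difference clears the conventional 95% confidence threshold (p < 0.05); with much smaller mailings, the same 30% lift could easily be chance.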
Creating an opportunity to test triggers during a campaign provides feedback for future efforts. In our example, we sent a second wave of invitations and split the audience by offering a VIP gift with one version and no gift with the other. In a third wave, we tested a limited-time offer. Each time, we had a hunch or hypothesis about which version would outperform the other, yet testing is the only reliable way to ensure that personal expectations do not color the actual result.
By creating tests and measuring the results, we are able to keep improving conversions and ultimately lower the cost per conversion, resulting in a greater return on marketing investment.
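To make that math concrete (the spend and response counts below are invented for illustration), cost per conversion is simply total spend divided by conversions, so the same budget at a higher response rate costs less per conversion:

```python
def cost_per_conversion(total_spend, conversions):
    """Dollars spent to acquire each responder."""
    return total_spend / conversions

# Hypothetical: $12,500 spent on 5,000 invitations
baseline = cost_per_conversion(12500, 200)  # 4.0% response -> $62.50 each
improved = cost_per_conversion(12500, 260)  # 5.2% response -> ~$48.08 each
```

The winning version here would cut roughly $14 from the cost of every conversion without spending a dollar more, which is where the return-on-investment gain comes from.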
Have you done any testing on campaigns? Have you had results that surprised you?