Does your nonprofit organization have a direct mail fundraising program?  Are you testing consistently in your program?

If you’re like a lot of organizations, you might not be sure when you should test, what you should or shouldn’t test, or even which metrics to measure. Here are some simple thoughts on testing that I hope will help you improve your program.

When is the best time for testing?
Always!  Each time you mail, you should be testing something.  You don’t have to test multiple variables with each mailing.  And it doesn’t have to be a major test each time.  But you should be learning something new with each mailing you send out.

Test things that make a difference
By that, I mean, test important things.  Don’t test inconsequential factors that won’t have a significant impact on your results.  For my money, the first place I’d test is audience – who is receiving your solicitations.  If you can refine your audience through segmentation or modeling, you can potentially reduce your marketing expenses and increase your results.  In acquisition, list testing can also have a big impact on your results.  Testing different list categories could open up entirely new audiences for your organization, and testing different selects of lists can improve ROI and lower your cost per donor.

After audience tests, I’d focus next on offer tests – what it is you’re asking the reader to do, and what their action will accomplish.  You may find a new offer that doubles the performance of your current control offer.

Also test things like package formats (#10 package vs. a 6×9 or 9×12; 6-page letter vs. 1-page, etc.), gift array, teaser copy, four color vs. black and white, inclusion or exclusion of a package component (e.g., a buckslip, bounceback, or premium item), etc.

And if you haven’t done so yet, you need to be testing single-channel vs. multi-channel campaigns.  That is, test the performance of direct mail alone vs. combined direct mail/e-mail and/or direct mail/telemarketing.  Multi-channel fundraising campaigns almost always deliver better results than single-channel campaigns.

Don’t waste your time on tests like these
The tests above all have the potential to seriously improve your results.  But not all tests are created equal.  Don’t waste your time on things like font tests, signature colors, or where on the reply device you ask for an e-mail address.  These variables are so minor that they won’t significantly increase performance.

Important test metrics
The goal with any test (aside from just learning what works better than the control) is improving performance.  This means you need to track several different metrics to figure out whether your test beats your control, and if so, whether the test performance was strong enough to warrant becoming your new control.

You should measure Return on Investment (ROI), Cost Per Dollar Raised (CPDR), Cost Per Donor (CPD) and Net Yield Per Donor (NYPD).  In addition to those standard metrics, you’ll also want to track the number of additional responses you’ll need to offset any increase in costs from the test.  Jump on over to Roy Jones Reports for more on direct mail test metrics.

Understand your testing environment
If you were in the mail with a test when the market crashed a few years back, or the day Hurricane Katrina made landfall, or when the towers fell on 9/11, I’d bet the results of any tests would be suspect.  Don’t blindly trust the results of any tests that are impacted by extraordinary environmental events like these.  The same could be said for any locally relevant issues.  Several years ago I was working for an organization whose Chairman was indicted just a few days after a mail drop.  The Chairman’s arrest made national news, and was covered nonstop for more than a month.  The organization’s direct marketing programs were seriously impacted by this situation.  Any tests that were in the market would look like serious failures – but could easily have fallen victim to the overall negative environment.  In situations like this, it’s worth re-testing.