Demographics are Garbage

July 20, 2016

Post by: Kevin Schulman


This is absolutely the type of headline we would write in order to be contrarian and provocative (not to mention, accurate).  Alas, we can’t take credit.  It is the headline of a recent article about Netflix, which reads, in full,

“Netflix says geography, age and gender are garbage for predicting taste.” 

Blame Netflix for rendering the vast majority of demographic data append services worse than useless.

(Note: A nod to Nick Ellinger, newest member of the DonorVoice team who alerted us to the article)

Or thank them. Or us…

What gives?  How is this possible?  You know your donors are older and that older donors stick around much longer than younger ones, so how can age, among other demos, be garbage?

What Netflix knows from their mountain of data (movie selection behavior mostly) is that people are in fact different but not in the ways you’d imagine.  Your donors are also different but not in any of the ways they are typically segmented – e.g. by RFM buckets or acquisition source.

Here is the critical nugget – the Netflix mountain of data, just like your mountain of data, is 99% garbage, 1% gold.

In the garbage pile are age, gender and geography.  Think about this: geography is what makes up the cluster codes (e.g. Prizm), the rationale being that people who live near each other are similar.

That this is partly true misses the much larger reality: the similarities with our neighbors are dwarfed by the differences from them.

To recap, here is the Netflix reality and yours: differences within a general, stereotypical group (e.g. females) are greater than the differences between two general, stereotypical groups (e.g. females and males).

But what about romantic comedies, i.e. “chick-flicks”?  An entire genre tagged by gender.  It is absolutely true that if Netflix served up romantic comedies to a group of females and males who otherwise looked demographically the same (e.g. age, geography, income) they would see much greater uptake by the female group.

Done, declare victory, a brilliant A/B test proves the generalization.  Not so fast.

The fact that more females ‘converted’ on romantic comedies than males obscures the missed opportunity; there are females who don’t like ‘chick-flicks’ and some males who do.  In the former case you’ve served up irrelevant content and in the latter, missed a sales opportunity.
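The arithmetic behind that missed opportunity is easy to sketch.  Here is a toy Python example – the population sizes and preference rates are our assumptions for illustration, not Netflix data – showing how a gender rule can win the A/B test while still mistargeting a big chunk of the audience:

```python
# Toy numbers (assumed, not from the article or Netflix): 60% of the women
# and 25% of the men in this population like romantic comedies.
women = {"like": 600, "dislike": 400}   # 1,000 women
men = {"like": 250, "dislike": 750}     # 1,000 men

# The A/B view: serve the genre to everyone and compare uptake by gender.
female_uptake = women["like"] / 1000    # 0.60
male_uptake = men["like"] / 1000        # 0.25
# "Females convert more" - true, and it wins the A/B test.

# What the aggregate obscures:
irrelevant_to_women = women["dislike"]  # 400 women served content they dislike
missed_men = men["like"]                # 250 men who liked it but were skipped
mistargeted = irrelevant_to_women + missed_men

print(f"female uptake {female_uptake:.0%}, male uptake {male_uptake:.0%}")
print(f"mistargeted by the gender rule: {mistargeted} of 2000")
```

The gender split “wins” on average and still gets 650 of 2,000 people wrong; a preference-based rule would, by construction, get all of them right.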

This same reality applies to the gender based testing done in the charity sector.  For example, the mailing label test using generalized, stereotypical assumptions about male preference – e.g. anchors and boats – and female preference –  e.g. flowers.

Seeing that females respond better than males to mailing labels with flowers is an example of the very limited, skim the surface type of mentality that keeps the charity sector on a slow decline trajectory while alternative philanthropic options grow like crazy – e.g. direct to donor models, social good stock funds, social good bonds.

Using these weak, demographic profiles and thinking you have some unique insight is akin to renting names to mail and thinking you have a unique universe of donors.  Everybody does this, it is a ‘best practice’ to nowhere.

What is the alternative?  Here is the upshot: Netflix does have customer clusters (i.e. segments).  Those clusters are based on likes (and by extension, dislikes).  In short, preferences.

Think about the groups of donors on your file who:

  • Have disease X that you are in the business of curing
  • Like big predator birds (not small ones)
  • Know someone who was a beneficiary of your charity
  • Have a strong religious motivation to give even though your charity is secular

If you profile these groups by their demographics you will see that they skew this way or that way (e.g. older, female, whatever…).  That is as inevitable as it is irrelevant.

Why in the world is it beyond the capacity of a charity or any business to group folks based on preferences like the examples raised?
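It isn’t a technology problem.  A minimal Python sketch – the donors and field names are hypothetical – groups a file by stated preference instead of demographics, and a donor can legitimately sit in more than one group:

```python
from collections import defaultdict

# Hypothetical donor records: demographics are present but not used to segment.
donors = [
    {"id": 1, "age": 72, "gender": "F", "prefs": {"has_disease_x"}},
    {"id": 2, "age": 45, "gender": "M", "prefs": {"big_predator_birds"}},
    {"id": 3, "age": 68, "gender": "F", "prefs": {"knows_beneficiary"}},
    {"id": 4, "age": 51, "gender": "M", "prefs": {"religious_motivation",
                                                  "has_disease_x"}},
]

# Group by preference, not by age/gender/geography.
segments = defaultdict(list)
for donor in donors:
    for pref in sorted(donor["prefs"]):  # sort for a stable, repeatable order
        segments[pref].append(donor["id"])

for pref, ids in sorted(segments.items()):
    print(pref, ids)
```

Each segment then gets content matched to the preference that defines it.  Profiling the segments afterwards by age or gender will show skews, but the skew is a byproduct of the grouping, not its basis.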

This is a mindset shift, not one of resources or database functionality.  It requires not thinking of donors as widgets on an assembly line.

Or maybe it is two assembly lines; males and females…or is it old people and not old people…


Is Donor-Centric Real or Unicorn?

Post by: Kevin Schulman


In our prior post we stipulated that donors want to donate. They don’t want massive frustration and irritation in doing so, which is precisely what the volume of asks causes.  There is a study here proving that point.  And it doesn’t irritate some tiny minority of folks who really aren’t “good” donors anyway; it irritates the majority of your donors.

In short, most give in spite of the fundraising volume machine and the irritation it causes, not because of it.

So why do they give?  To clarify, most don’t give again – hence the dreadful first-year retention rates.

But, for those who elect to stick around, what is causing their giving – if not the constant asking – and how would the business look different if charities organized mindset, metrics and process around these reasons?

If we need or want a label, and of course we do, enter the elusive donor-centric term.  It is our sector’s unicorn: never actually seen, and yet claims to know where to find one (e.g. how to become donor-centric) or to be one (e.g. “we are donor-centric”) abound.

If most claiming to be or know how to be donor-centric are a million miles from it (they are) then what in the name of unicorns, rainbows and pots of gold is the answer?

No more theory and platitudes damn it.  Details, please.

Here is an actual example that we’ve modified for confidentiality reasons.

We identified (using this method) two different types of donor identities for a community-based, social services charity – people who are into helping children (e.g. tutoring, preventative wellness) and those into helping adults (e.g. job training, financial literacy).

These identities, just like your identity as a parent, employee, sports fan or hobbyist, are core to who these folks are, but they don’t need this charity to reinforce and ‘live out’ this particular identity.

Make no mistake however, the “help children” and “help adults” identities are the number one motivators for why these folks will give to this charity, and failure to market in a way that reinforces this identity will result in them leaving.  The current “control world” for these donors is identical.  It serves neither group well and any reinforcing of identity is done by accident.

The new world order starts with these two segments and a separate, dare we say it, journey for each.  Note how fundamentally different this segmentation scheme is from what is far more typical – past giving (for current donors) or even worse, acquisition channel/campaign (for new donors).

How do we build the journey from there?

Enter step two for this client – asking folks to self-identify at the point of acquisition as either being more focused on children or adults.  Getting this information is as important as the donor contact and bank details.


Because knowing it means we actually have permission to send the next something.  Not legal permission, not moral or ethical permission, but internal mindset permission that says: I now know something about you because you told me.  My next job is to send you something that is responsive.

Without this everything is likely to feel more like well-intentioned spam than ‘donor-centric, relevant’ content that the unicorns purport to have figured out.

The welcome kit/onboarding is a perfect example.  This is, almost without exception, the send-them-everything (which in effect means we send nothing) set of communications.  The proverbial kitchen sink.

In its place for these donor experience test groups will be what is internally branded as the “getting to know you” communications.

The entire purpose is to learn at least one additional thing about the donor in order to have additional permission – in fact, obligation – to send something in return and be responsive.

But, the additional something is not random.  This is not the ubiquitous, sham ‘donor survey’ asking them to identify issues they care about or some other rhetorical, fraudulent tactic suggesting we actually care about their individual preferences.

In this case, and based on additional donor insights, the client will ask donors to indicate if they have interest in advocacy actions, volunteering or what we call being “quiet supporters” – meaning they want low behavioral involvement (not to be mistaken for low emotional involvement) but will donate regularly, with very little spend required by the charity or desired by the donor.

What is this all about?  It is a recognition that not all “child” (or “adult”) folks are the same.  They share the same identity, yes.  But how they want to manifest that with this charity and the specific assets it offers will differ.  In short, there are sub-segments within the “child” group based on charity-specific preferences/interests.

At this point we start to shift donors into sub-tracks or journeys – still living in their child or adult identity cohort – that are built out based on donor expressed preference.

(A note: we can and will augment self-expressed preference with behavior data – e.g. they opened, clicked, replied or otherwise indirectly expressed a preference.  The cautionary note is that a behavior-based choice is not the same as a stated preference.)

All the content that goes into these sub-tracks or journeys already exists.  In fact, folks in the control get all of it.  All of it….

Less is more when the “less” is determined by understanding donor identity and preference and serving up content, offers, interactions, communications that match it.

If this is starting to seem a lot like personalized relationship building, that is because it is.

And it needs to get more personalized, down to the 1 to 1 level.

Why?  Because the individual interactions donors have with your business – i.e. their experiences – are experienced individually and the determination of whether the interaction was good, bad or in-between differs between Donor X and Y.  Further, the quality of those interactions, from the donor’s perspective, will play a big role in determining whether Donor X and Y stay or go.

Do you know, at the individual donor level and in aggregate, how good or crappy your donor experiences are?  Are they doing the intended job or not?  For example,

  • Is your online donation process easy?
  • What was the quality of the recruiting process?
  • Is the “welcome” reinforcing their decision to give (probably not…)?
  • Is the ‘thanking’ making them feel valued?
  • Did the phone call with the donor service rep fully address their concern?
  • Did the e-appeal or direct mail piece reinforce their identity?

Don’t pretend you can answer any of these questions with behavior data.  You cannot.

Double negative alert:  If you can’t answer these questions your charity is not donor-centric.

As part of the test for this client they are getting into the game of measuring and managing donor experiences (using this tool) at the individual supporter level by asking for feedback right after the interaction and acting on it.  This is not asking for asking’s sake.  Donors who have a crappy experience will get a different follow-up than those who had a positive one.
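What “acting on it” can look like operationally is easy to sketch.  The scale, thresholds and follow-up labels below are our illustration, not the client’s actual system:

```python
# Route the follow-up based on the score a donor gave right after an
# interaction (hypothetical 1-5 scale; 1 = awful, 5 = great).
def follow_up(score: int) -> str:
    if score <= 2:
        return "service recovery: apology and a fix, no ask"
    if score == 3:
        return "light touch: thank-you plus one clarifying question"
    return "reinforce: thank-you plus content matched to stated preference"

print(follow_up(1))  # service recovery: apology and a fix, no ask
print(follow_up(5))  # reinforce: thank-you plus content matched to stated preference
```

The point is the branch itself: the feedback changes what the donor gets next, rather than disappearing into a report.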

The analogy here is your life as a consumer, where you are likely getting inundated with requests for feedback after you shop online, fly on a plane, stay in a hotel, buy a car, buy a pack of gum…  Why? Because the corporate sector knows the quality of these experiences dictates a large part of the decision to stay or go, and the only way to measure them is to ask.

Said differently, this data has enormous value to the business. It is worth far more than the next appeal you will send.

So, what do you think will win?

Control World: Sending 40, 50, 60-plus push communications (mail, phone, email) over the course of the year that adhere to an internally crafted production schedule and cover any and all aspects of the business.


Test World: Matching what is sent to donor identity and preference and asking for and acting on individual level feedback tied to interactions they have with the business.  This will, in most cases, result in far fewer push communications going out but only as an incidental outcome of using a different strategy, not because sending less is “the” strategy.

That may seem like a loaded question.  Who is going to pick the Control World?

Well, the vast majority of charities choose it every day and donors, unfortunately, experience it every day.

Or maybe folks will pick the Control World as the answer because they can cite the results of a “cadence” pilot they did showing that when they sent less they made less.  You can see here for details on the many, many flaws with that testing and mindset.

The bottom line is this:

Most charities have chosen to make the ask their economic engine, the volume model.  To make this engine “work” doesn’t require knowing anything other than some very, very limited past behavior to make some slight efficiency improvements for future volume. 

In reality – your current, unrealized reality – the ask deserves very little credit in actually raising money (16% by our attribution models) and more of the blame for current world order.

The donor is giving in spite of the asking machine, not because of it. 

P.S. The test group wins.  But this isn’t even the best part, though it is pretty damn good.  The best part is that we’ve changed the economic engine. The way to scale and grow this pilot has nothing to do with sending out more stuff.

Donor-centric can be operationalized and made real.  Just don’t go asking any unicorns how to pull it off.


Being donor-centric is not a function of volume (even though the volume biz model is horribly broken)

May 16, 2016

Post by: Kevin Schulman


Let’s be provocative from the start: donor-centricity used to be an empty, vacuous platitude.

This, in and of itself, is not a big deal and in fact preferable to the definitions that seem to be filling the empty, vacuous void.

First and foremost, volume (i.e. cadence) seems to have quickly stepped in to become “a”, and probably “the”, defining trait of donor-centric; namely, if you send a lot of solicitations you are not donor-centric, and if you send less – or want to, or are open to it, or don’t reject it out of hand – you are on your way to being more donor-centric.

Not to be outdone however, one large, prominent charity claims to epitomize donor-centricity and so much so that it increased the number of annual mail solicitations from 24 to 27.

How to justify sending more in the name of being donor-centric?  Enter the relevance red herring, often a built-in excuse to keep doing what one is doing and, if possible, increase it. Nobody is arguing for sending out irrelevant content, so why in the world does everybody feel the need to make the obvious point that it be relevant?

More importantly, who makes that judgment?  It is always an internal one, with the justification being: we sent 3 more appeals and each netted money, and we used the correct pronouns and storytelling and made the donor the hero and talked about the impact of their past donation and made a compelling, simple case for additional need with a very clear ask, repeated 6.3 times.

There was a seminal study done analyzing the language, style and structure of fundraising copy relative to other bodies of work (e.g. academic) and it was an indictment nobody read – fundraising copy reads more like an academic abstract than personal, emotional narrative.

Everyone thinks their copy is ‘relevant’ and we all apparently live at Lake Wobegon.

Most of it is crap – per this study – and even if it weren’t, nobody at this aforementioned charity is likely arguing for sending out 54 appeals. Why not?  They are all, by their own accounting, relevant.  What line in the sand says 27 is ok but 54 is crazy talk?


Let us stipulate for the record: sending 27 DM appeals in a year is a massive waste of money, and yes, we know appeals 25, 26 and 27 all netted money. You can read the full, empirical case here but the short version is this reality,

Sending more raises more, with sharply decreasing ROI each time, and it cannibalizes future donations.  If you send 6 appeals and get 1.5 gifts a year and think you can send 12 and get 3 the next year, you will be profoundly disappointed.  You will also be profoundly disappointed to see just how stubborn the 1.5 average gifts per 12-month period is (for current, multi-year donors in particular) against the wave of asks you throw at it.  And by “it” we mean “them” – the donors you are frustrating the hell out of.
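The “send 12, get 3” fallacy is easy to make concrete.  Here is a toy saturation model in Python – the ceiling and response parameters are ours, purely illustrative, and not the attribution model mentioned later in this post:

```python
# Model annual gifts per donor as saturating toward a ceiling: each extra
# appeal converts a fixed share (k) of whatever giving potential remains.
def expected_gifts(appeals: int, ceiling: float = 1.7, k: float = 0.3) -> float:
    return ceiling * (1 - (1 - k) ** appeals)

six = expected_gifts(6)      # ~1.50 gifts per donor from 6 appeals
twelve = expected_gifts(12)  # ~1.68 gifts per donor from 12 appeals, not 3

print(f"6 appeals  -> {six:.2f} gifts/donor")
print(f"12 appeals -> {twelve:.2f} gifts/donor")
```

Doubling the appeals here adds about 0.18 gifts, not 1.5; each additional ask mostly re-solicits giving that would have happened anyway, which is the cannibalization in miniature.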

What is an alternative to send more, (sort of) make more?  Another charity did a year-long test with a 65% reduction in asking that resulted in almost the same total revenue and more net. Importantly, they advertised this change to folks at the beginning of the year-long test to explain the change and rationale; namely, they heard and are now being responsive to all the complaints. This sensitized donors to the new reality and they adjusted their giving behavior to give roughly the same amount despite 65% fewer requests.

Donors want to donate. They don’t want massive frustration and irritation in doing so, which is precisely what volume causes.  There is a study here proving that point.  And it doesn’t irritate some tiny minority of folks who really aren’t “good” donors anyway; it irritates the majority of donors.

In short, most give in spite of the fundraising volume machine and the irritation it causes, not because of it.

The deceptively simple answer to how to ask less and make more is to give donors what they want and stop giving them what they don’t.

But, if a hammer (i.e. volume) is your only tool, everything looks like a nail.

However, as we stipulated at the outset, defining donor-centric as fixing the mess we created really misses the point, even though volume is horribly broken as a model and the “send less to make more” approach is a vast improvement in donor experience.

An alternative?  What if your entire business were built around answering the following types of questions and then building out product, offers, communications and dare we say it, journeys, based entirely on this intelligence about your donors?

What is their connection to your cause?

Said differently, what are the underlying motivations for giving to you?  What personal identity are they trying to reinforce and what specific activities do they want to undertake with your business to manifest that identity?

How good or crappy are your donor experiences?  Are they doing the intended job or not?  For example,

  • Is your donation process easy?
  • What was the quality of the recruiting process?
  • Is the welcome reinforcing their decision to give?
  • Is the ‘thanking’ making them feel valued?

Don’t pretend you can answer any of these questions with behavior data.  You cannot.

Double negative alert:  If you can’t answer these questions your charity is not donor-centric.

The logic for why this matters is as simple as it is irrefutable.

Your donors have identities (e.g. bird enthusiast, cancer survivor, caregiver of someone with mental health issues, member of local community, religious person who is giving in concert with their faith) that explain why they chose to give to you.

You are simply choosing to ignore them.  The donor, however, won’t.  They will very quickly realize your charity has no sense of who they are or what they want.

Your donors are also having experiences with every interaction – passive or active – they have with your business. These experiences are good, bad and in-between and include pedestrian, un-sexy things like the registration process for Event X as well as actually participating in or attending Event X.  These experiences matter; they are being mentally registered by your donors and totally unaccounted for by your business.

This total lack of alignment between your business model – volume – and their motivation and experiences driving behavior is the reason for lousy retention.  This is mindset and mindset drives choice.

Your charity, in all likelihood, has chosen to make the ask your economic engine, the volume model.  To make this engine “work” doesn’t require knowing anything other than some very, very limited past behavior to make some slight efficiency improvements for future volume.

In reality – your current reality – the ask deserves very little credit in actually raising money (16% by our attribution models) and more of the blame for current world order.

The donor is giving in spite of the asking machine, not because of it.  Simply reducing the volume and speed of the asking machine lessens the aforementioned irritation, but it doesn’t do anything to get at motivation or the myriad of other, non-volume-related experiences playing out every day.  Your business model and why they give (and stop) are still out of whack.

This post is already way long so look for a Part II that offers more detail and specific examples of what should be your new business model.


Want a good donor experience and better retention? Start with understanding donor identity.

December 9, 2015

Post by: Kevin Schulman


Why do donors give?  Seems like a question worth knowing the answer to if you are in the business of trying to affect that giving behavior.


One of the main reasons they give has nothing to do with your specific charitable brand but rather, their using your charitable brand to deliver on or otherwise reinforce their innate sense of self.  Said another way, this is what they “bring to the party” having nothing to do with your marketing or fundraising efforts per se.

Seem soft and fuzzy?

Or maybe you think you are already doing this by using “you” pronouns instead of first-person “we”, or because you work hard to make the donor the hero and show how their gift is having an impact?   While all marginally on target and marginally useful, this is, at best, pretty weak tea and lowest-common-denominator type activity.

Or maybe you’ve done a lovely attitudinal segmentation and identified six (or four, or five…) segments who have very different profiles and maybe even differences in what might pass for “motivation” to give.

And maybe you have even managed to link this segmentation to your database and discovered that, lo and behold, these segments seem to behave differently when you look in the rearview mirror.

Don’t mistake effort, time and complexity for progress. We’ve seen many of these efforts mistake correlation for causation (i.e. seeing some differences in behavior by segment) because they start from a flawed attitudinal segmentation that has no theoretical basis and, as a result, is random.

Random never works except randomly…

But, let’s be generous and assume you do have some groups on your file that differ based on their motivation or reason to give.  Now what?  What is the experience you are going to serve up for these segments?  Is it changing a few marketing messages to be more in concert with their motivation for giving?  What else?

Still pushing out fundraising ask after fundraising ask in as many channels as possible?  The end result is likely still lousy retention because even if you got the identity/motivation right, you missed on translating that starting point into an ending point of a very different donor experience.

What is the alternative?  Let’s start with an intuitive and illustrative definition of ‘identity’.

People have an identity for each distinct network (social, professional) in which they ‘participate’.  A person has multiple identities and can consider themselves a parent, a Cowboys fan, an American, a Southwest Airlines employee, a golfer and a person with Type 1 Diabetes.

We all have multiple identities but they are only relevant in certain contexts.  A person’s sports fandom is totally irrelevant to their charitable giving.  Make no mistake however, the identity that is relevant is a major, causal driver of behavior (and attitudes).  And knowing which identity they are bringing to the party when interacting with your charity should dictate a lot more than copy changes that attempt to sing the right notes.

By way of specific example, consider health charities and those donors who are also current or potential beneficiaries of the charity.  In short, they either have the disease/ailment/handicap your charity is fighting or are a care-giver for someone who does.

This is the relevant identity. It is, without exception, the number one reason they support you.  Seems so obvious doesn’t it?  But here comes the provocative part – there isn’t a single health charity we know of (they may exist but we’ve yet to come across them in our work in the US or UK) that has a meaningfully different experience for those with a direct connection versus indirect.

But make no mistake, the needs and preferences of these “direct connection” supporters are very different from the indirect connection folks.  And yet, the marketing and fundraising communications and touchpoints look more identical than not.

What is required to successfully match the identity and associated needs and preferences is simultaneously:

  • Knowable
  • No more difficult to deliver on and yet,
  • Requiring such a mental sea change in approach that most will dismiss it out of hand.

The sea change to successfully raise money from those with a direct connection to a health charity can be boiled down to two complementary statements:

  • The number one reason they donate is because of the services side of the business.
  • The number one way to raise more money is…here comes the mindset shift…for the fundraising/marketing team to deliver services (i.e. information, access, promotion of service outlets).

“But we are marketing/fundraising, we don’t do services.”  No, you don’t.  But you do attempt to deliver relevant information designed to elicit giving behavior.  Generic appeals attempting to invoke emotion and inspire action and support are not winning the day.

What these people need to match their identity is significant, consistent and without exception, recognition of who they are. This results in very different copy to be sure.  But, as importantly, it requires pulling the content not from the Fundraising 101 best practices handbook but instead, the services side of the business.

In short, market the informational hotline, not the sad story of a victim who they can help.  Read the latter version (just review any health charity appeal for Exhibit A) from the perspective of someone who actually has a direct connection and it instantly seems insensitive, even offensive.

Meet their needs.  They will meet yours.  In some ways it is that simple and tough all at the same time.

Identity matters.  Don’t make the mistake of thinking some random group of attitudinal clusters is identity that translates into donor motivation, needs and preferences.  And even if you have that part nailed, the next question of how you treat these folks and what you do – beyond copy tweaks – is the difference between delivering a great donor experience and a crappy one.


The Charity “Membership” Ruse

December 4, 2015

Post by: Kevin Schulman


Imagine you signed up for an annual gym membership in January and paid with a check (or cash for the Mafioso readers).  Now further imagine that you also signed up for the gym’s “healthy eating” monthly email.

What does the term annual connote to you?  What about membership?  Would you expect to have the gym ask you to renew your annual membership?  Sure.  Would you expect them to do it sometime before the membership actually expired and you were no longer allowed access?  Absolutely.

In fact, as a regular user you’d probably appreciate a reminder and maybe even a few.

What about the manager asking you to renew in July?  Does that seem a bit premature and annoying?

What about the manager asking you to renew 12 times in the course of the year?  What if you renewed in November and got 3 or 4 more requests to renew before January rolled around?

And what if, on top of these requests to renew your membership that seem to be coming fast and furious and without regard to whether you have already renewed or not, you also got solicited with emails (remember you signed up for the healthy eating email) asking you to bump up your membership to the next level to include 1 free yoga class a month.  And let’s suppose you responded to one of these emails, gave your credit card to avoid the nightmare (hopefully) of all the requests to renew your membership and upgraded at the same time.

But the renewal requests from the manager keep coming…

Somewhere in corporate gym land a marketing manager will point out, correctly, that the number of renewals goes up with each request and that in the email channel there was a big increase in the gym membership upgrades compared to last year because of the decision to automatically solicit people who signed up for healthy eating newsletter.

Further up in corporate gym land some VP of marketing who has worked at this gym for many years will lament, again, the lousy (and slowly declining) retention rates and wonder if there isn’t some better way to get people to renew their annual membership and separately or as part of that process, consider upgrade options.

This VP will actually pose the borderline heretical question of whether the renewal process and lack of coordination with the email marketing effort is simultaneously the reason for the marginal increase in membership renewal and the attrition rate.

This same VP, as a final act before leaving for the gym down the road (which does everything exactly the same), will put together a plan to do a better job of honoring the social contract with members who sign up for an annual membership: only asking them to renew 30 days out from the actual renewal date and making it a top priority to ensure there is no confusion from cross-marketing or data glitches.   As part of the plan there will be a prominent ‘mea culpa’ letter and email sent to all current (and former) members telling them about the new renewal process and apologizing for having unintentionally created such a crappy experience that detracted from the main, workout experience.

As a postscript in the communication all members will be alerted to a new member feedback system that will make their experiences and preferences a priority.

The plan indicates an anticipated reduction in marketing cost, an increase in net revenue and more word of mouth referrals from happy, highly satisfied gym members – especially those who used to belong to other gyms, including the one down the road, but quit because the renewal process was so annoying.

This plan is ultimately rejected as too risky by corporate.  The VP heads to the gym for one final workout to deal with the frustration before starting her new gig on Monday at the gym down the road…

We have collected thousands of donor comments about membership (and linked them to donor behavior data) and from that work we’ve discovered:

…If you are a charity selling “membership” there is very high likelihood the gym metaphor is the world you are creating for people who trusted your social contract, want to support your mission and are doing so in spite of your marketing/fundraising, not because of it.  Of course, most went to the gym down the road already…




Is your nonprofit a “watch, observe and guess” organization or a “listen and act” one?

October 7, 2014

Post by: Kevin Schulman


Donor transactional data tells you zero about “Why” something happened.  Maybe the sector doesn’t care…

All the quantitative data from your website, email marketing tools and direct mail response data… opens, clicks, visits, time on site, response rate, conversion rate, social media shares and on and on and on.

All those reports slicing and dicing and creating metrics, because if it is easy to count it must be good… All critical data for getting more efficient at analyzing the world we elected to "cook up and serve," which likely bears very little resemblance to the "meal" and diet the donors want.

All critical data to watch and observe our supporters and then guess at root cause.

However, all this transactional data cannot, no matter how much you torture it, tell you Why something happened.  Nor can it listen to your supporters and answer questions like these:

  • Why do people click on one thing and not another?  Is it location, lack of interest, lack of understanding?
  • Why do those who don’t click on anything behave that way?  Did they still get value from it?
  • Why do visitors spend x amount of time on our site? Are they really engaged or does our navigation suck?
  • Why do we only have a one percent conversion rate?
  • Why did 99% of people take no action?

So how do these seemingly critical questions get answered now?

Typically the response is: "I think this is happening…" or "The last client we worked with did it this way" or "The red button got a .02% better click-through rate so it is the best choice…" or "When I was the ruler of the world this is why people did this…" (that's the Highest Paid Person's Opinion, the HiPPO, speaking).

We overlay our own opinions and experiences and preferences.

Unfortunately we are not our supporters.

In fact being as close to our organization and our marketing activity as we are, it is quite likely that we are the worst possible people to understand our supporters.

So why not ask them? You know, the donors.   Yes, a survey.  One that is short, purposeful and tied to a specific interaction or experience and served up at the point and time of that interaction (e.g. exiting your website, having a donor service interaction, reading a content page on your website).

This kind of listening is continuous, not discrete.  It is also ubiquitous in the commercial sector. Think about the last time you bought something, had a meal, traveled or shopped online… anywhere.  Chances are the company was asking you for feedback about that experience.  Heck, lots of them even provide incentives for that feedback.

They measure, manage and spend against what they assign value to.   Does the non-profit sector?

Getting into listening and acting is simple, low cost and scalable.   You can find out more about how here and here.

Even if it weren't low cost and simple (it is), how do we continue to ignore the "must-know-but-rarely-do-we-with-the-way-we-currently-operate" answer to the Why question?




Stop the Direct Mail Testing

September 25, 2014

Post by: Kevin Schulman


Is your direct mail testing on auto-pilot?  Are you testing out of habit?  We hear a lot of very smart, sophisticated direct marketers working for big non-profit brands tell us this.

If you are one of them it is time to get off the merry-go-round and stop testing (with the current approach).

These same marketers we hear from are looking at the time, effort and cost of their habitual, auto-pilot testing and the associated return and making the smart decision to cut way back on the number of tests.  It is hard after all to beat the control.

To address the "now what?" question that remains if you find yourself in this boat, we put together this 10-point framework for testing.   The guiding principle here is that non-profits should think of all the time, effort and money going into habitual, rote testing as their pot of money for innovation.

We believe this testing protocol will lead to far fewer, more meaningful tests (a big plus) and more definitive decision making on outcomes (another big plus).

1)      Allocate 25% of your acquisition and house file budget to testing.

2)      Of the 25%, put 10% into incremental ideas and 15% into big ideas.

  • An important corollary: some of this money should go into researching ideas, or paying others to do it.  You can even use the online environment to pre-vet ideas with small, quick tests to gather data.

3)      Set guidelines for expected improvement. 

  • Any incremental idea must deliver a 5% (or better) improvement in house and 10% in acquisition (we'll see why the difference in a minute).
  • Any breakthrough idea must deliver a 20% (or better) improvement.

4)      Any idea – incremental or breakthrough – must have a ‘reason to believe’ case made that relies on theory of how people make decisions, publicly available experimental test results or past client test results. 

  • The reason to believe must include whether the idea is designed to improve response or average gift or both – this will be the metric(s) on which performance is evaluated.
  • A major part of this protocol is guided by the view that far more time should be spent on the generation of test ideas and, therefore, on creating the necessary "rules" and incentives to produce that outcome.
  • This may very well result in 3 to 5 tests per year.  If they are well conceived and vetted that is a great outcome.

5)      Determine test volume with math, not arbitrary "best practice" test panels of 25,000 (or whatever).

  • Use one of many web based calculators (and underlying, simple statistical formulas).  Here is one we like but there are plenty – all free.
  • Input past control performance and desired improvement (i.e. the 5% or 20%).  Do not use arbitrary 25k and 50k test sizes.
  • An acquisition example: if our control response rate is 1% and we want to be able to flag a 5% improvement – i.e. a response rate greater than 1.05% – as real, the test panel would need to be 626,231 (at 80% power, 95% confidence, two-tail test).  That is not a typo.  How many acquisition test panels in the history of non-profit DM have produced meaningless results because of all the statistical noise?  A sizeable majority, at least…  If we only need to flag a 10% improvement – i.e. better than 1.1% – as meaningful, the test panel drops to 157,697.  For most large charities this size is very doable, but only if the math behind it is understood.

 6)      Do not create a “random nth” control panel that matches the test cell size for comparison.

  • We are unsure how many charities employ this approach, but it can lead to drawing exactly the wrong conclusion about whether the test lost or won.
  • The problem with a "random nth" control panel of equal size to the test – e.g. two panels drawn random nth at 25,000 each – is that it creates a point of comparison with its own statistical noise, and far more of it than the main control carrying all the volume.  There are a few retorts that have surfaced in defense of this practice, but they are simply off-base.
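The extra noise is easy to quantify by comparing the standard error of the measured lift under the two designs.  This is a sketch assuming the 1% response rate and 25,000-panel example above; the 500,000 full-control volume is an illustrative assumption, not a figure from any particular program.

```python
from math import sqrt

P = 0.01  # assumed control response rate

def se_of_lift(n_test, n_control, p=P):
    """Standard error of the observed difference in response rate
    between a test panel and a control panel."""
    return sqrt(p * (1 - p) / n_test + p * (1 - p) / n_control)

# Equal-size 25k "random nth" control vs. comparing against the full control
se_small = se_of_lift(25_000, 25_000)    # noise from TWO small panels
se_full = se_of_lift(25_000, 500_000)    # noise now mostly from the test panel
print(se_small / se_full)                # roughly 1.4x noisier
```

With the full-volume control, almost all the remaining noise comes from the test panel itself; the equal-size control inflates the standard error of the measured lift by roughly 40%, which is how a losing test gets declared a winner (or vice versa).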

7)      Determine winners and losers with math, not eyeballing it.

  • Use one of many web-based calculators: input test and control performance and statistically declare a winner or loser.

8)      Declare a test a winner or loser

  • Add results to the "reason to believe" document and maintain a searchable archive.

9)      All winners go to full-volume rollout.

 10)   Losers can be resurfaced and changed with a revised “reason to believe” case.