Read this by … whenever

June 22, 2017

Post by: Nick Ellinger


We use deadlines for matching gifts, end of year, Giving Tuesday, and end of fiscal year, and sometimes we just make deadlines up.  All of this is because urgency is a fundraising superpower, one of the key principles of influence.

Here at DonorVoice, we aim to focus our tactics on what is meaningful to the individual donor and on general principles of behavioral science.  But who can be against a direct marketing tactic that works?

If it works?  Unfortunately, deadlines may have no effect or even be counterproductive.

A Dutch charity looked at the deadline for their matching gift.  They sent emails (with the subject line “Lokwang is grilling rats” — tell me you wouldn’t open that!) at three different time periods: 3 days before their deadline, 10 days before, and 34 days before.  Similarly, with texts, they did one two days before and another three days before.

The results?  All the shorter deadlines did was increase unsubscribe rates (or, more precisely, they were correlated with an increase in unsubscribe rates, not necessarily causally).  The only thing that increased giving was a reminder email, which increased giving rates by 50% (or, to put it another way, the second email was half as effective as the first).

The authors of this study hypothesize that this counterintuitive result occurred because email and text are “now or never” media: even 2-3 days was too long a deadline.  This was borne out by most of the responses coming in on the first day.

So perhaps a mail program will show deadlines working.  Another study looked at New Zealand donors.  They were mailed an invitation to a five-minute online survey on charitable giving, with a $10 donation going to either World Vision or the Salvation Army if they completed it.  The letters carried one of three deadline conditions: one week, one month, or no deadline at all.

And the response rates?

  • One week: 6.3%
  • One month: 5.5%
  • No deadline: 8.3%

That’s right: a deadline was associated with suppressed response.  You actually got a better response with no urgency attached.

Why?  Looking at when the gifts came in, there were no responses to the one-week deadline after day nine.  But responses to the no-deadline piece came in until day 27.  And pity the poor one-month deadline, the worst of both worlds: not urgent enough to prompt action, but urgent enough to prompt distaste.

All in all, this puts the world of invented deadlines on its ear.

When I asked Dr. Kiki Koutmeridou, DonorVoice’s behavioral scientist, about these results, she counseled:

  • Do not have an RSVP date in the initial mailing; use only a generic message like “Response needed” or “Respond now.”
  • If you really need to have an answer by a certain date, introduce a short deadline in the reminder mailing. Ideally, the deadline should be a week away; that’s the amount of time that will prompt people to action.
  • If you don’t need to have people’s answers by a certain date, then don’t have an RSVP date. That way, people may still respond after a few weeks have gone by.

And if you really want to stimulate responses, how about that “learning about what your donors want” idea?


Positioning a nonprofit

June 15, 2017

Post by: Nick Ellinger


Jack Trout passed away last week. And if you don’t know the name, you almost certainly know his concept: positioning.

Trout coined the term “positioning” and first used it in a 1969 article.  He then co-authored Positioning: The Battle for Your Mind with Al Ries, a classic ranked as the top media and marketing book of all time by (an unscientific poll of) Ad Age readers.

Trout’s central insight is one we can still draw from today: the success of a product is not based on its attributes as conceived of by its creators; it’s based on the perception that consumers have of it.

If you could travel back to 2007, along with placing profitable stock or sports bets and stopping the Hollywood writers’ strike, you’d see the branding success of the Toyota Prius.  When hybrids entered the car market, most tried to look as much like any other car as possible, hiding that they were hybrids for fear that car buyers would think them less powerful.

Toyota went the other direction, making the Prius look different and wearing the hybrid, eco-friendly label with pride.  Consumers responded, with 57% of Prius owners saying they bought the car because it “makes a statement about me.”  (The New York Times article on this is called “Say ‘Hybrid’ and Many People Will Hear ‘Prius,’” which says what you need to know.)

This wouldn’t have been possible pre-Jack-Trout.  Cars were sold on attributes, not the positions they held in people’s minds.

So how can a look into positioning help your nonprofit?  Well, it’s instructive to look at the types of positioning Trout and Ries articulated:

Functional positioning: when you are solving problems and providing concrete benefits.  Think here of the relationship that a health charity has with those who suffer from the disease the charity aims to remediate or eliminate.  The charity provides counseling, special diets and exercises, access to the latest research, and more.  I, for example, leaned heavily on Autism Speaks when my daughter was diagnosed. Their positioning, for me, is one of both trusted authority and a debt that I owe.

The Norwegian Cancer Society recently redesigned their website and increased their donations by 250%.  And, shockingly, it involved making the donate button harder to find.  Instead, they focused on their functional positioning and asked for donations only after they had solved the problems of the people coming to their site.

Symbolic positioning: the Prius is an example. What does my support of your organization say about me, and how does it make me feel? There’s even a neural reason for this: our brains like it when problems are solved; we like it even better when we are the ones who solve the problems (a gross oversimplification of a great study, but it’s a rich topic into which we’ll delve another time).

When we worked with the U.S. Olympic and Paralympic Foundation, their previous controls focused on functional positioning: your gift helps get gold medals.  However, our commitment study and pre-test of their donors found that wasn’t what donors were responding to.  Rather, they wanted the feeling of belonging that comes from being part of the team.  When the message changed accordingly, giving increased significantly, in both response rate and average gift.

Experiential positioning: what experiences do you have that are unique to your organization?  With every organization that uses our feedback platform, we’ve seen the way that both good and bad experiences can shape a donor journey.  Even the most committed donor can be turned off by a highly negative experience with an organization.  Likewise, providing a unique experience can cause someone who may have made the first gift on a lark to make the second with commitment.

A good example here is Catholic Relief Services, which has mailed three feedback mailings to their donors of the past two years.  Each mailing talks about the value of that individual donor, announces changes they have made to the donor experience, and asks donors for advice and feedback.  They do not ask for money at all; not even a soft ask.

Yet each of these mailings has netted positive revenue.  The feedback has helped them shape the donor experience.  And they are netting more money on fewer mailings because they are creating a consistent positive experience for which they hope to be known.

One thing you’ll note in all of these examples is that they each require a knowledge of why people give (or why they would give) at a deeper level than “because we ask.”  Trout and Ries did not promise that it would be easy, just that it would be better.

So thank you, Jack Trout, for your contributions to our marketing expertise.  And let’s put his insights to work for our organizations and for the general good.


What your donation buys: How to get a 42% lift in revenue

June 14, 2017

Post by: Dr. Kiki Koutmeridou


Charities are very good at making the abstract act of giving more tangible by explaining what different levels of donations will buy. In the example below, UNHCR USA informs people that $45 can buy high thermal blankets while $85 can buy a heating stove. These symbolic gifts make the abstract donation amount feel more real and increase engagement. So far so good.

But there’s a catch.

Donation decisions, especially the decision about the amount, can be highly influenced by the way we ask. This asymmetrical structure, where a higher donation amount is linked to a completely different symbolic gift, is the standard practice for many charities. But is it the most effective one?

In our latest case study, we worked with UNHCR USA to find a more effective way to present these symbolic gifts on their donation page.

The result? A 42% income increase and a Best Insight Nudge Award.

 

What we did: the role of perceived impact

Donors may care about the cause, but they care more about their individual contribution.  They want to be sure their donation is not just a drop in the ocean.  However, they rarely examine the actual impact their gift had; instead, they focus solely on information about how their money could be used: the symbolic gifts.

As mentioned above, charities typically link a different symbolic gift with each donation amount – what we call an asymmetrical benefit structure.  For example, $85 could buy a stove and $60 could buy an electric radiator.  With this structure, it’s unclear whether a higher amount leads to more impact, because we are comparing apples and oranges.  Is a stove better than a radiator?  Is it worth giving the extra $25?

What if we made that comparison easier?  What if the different donation amounts didn’t buy different symbolic gifts but more of the same gift, e.g., $45 could buy 5 blankets and $85 could buy 9 blankets?  With this symmetrical benefit structure, it’s easier to make direct comparisons, and the additional impact of the higher amount is very clear.  This could make people choose a higher donation amount in order to increase their impact.
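To make that intuition concrete, here is the arithmetic using the amounts above; this is a toy illustration, not code or data from the study itself:

```python
# Symmetrical structure: both amounts buy the same kind of gift,
# so the marginal impact of the higher gift is explicit.
blankets_at_45 = 5
blankets_at_85 = 9
extra_dollars = 85 - 45                            # 40
extra_blankets = blankets_at_85 - blankets_at_45   # 4
print(f"An extra ${extra_dollars} buys {extra_blankets} more blankets")

# Asymmetrical structure: $85 buys a stove, $60 buys a radiator.
# There is no common unit, so a donor cannot tell whether the
# extra $25 actually buys more impact.
```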

For our study, we used two online donation pages: UNHCR USA’s original page above, featuring an asymmetrical benefit structure (the control), and a version using a symmetrical benefit structure, shown below.

Online advertising, organic search from Google, and direct marketing drove people to these webpages.  Each visitor was randomly presented with one of the two conditions.  Testing stopped after we had 100 donations in each condition.
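For those who like to see the mechanics, here is a minimal sketch of that random assignment and stopping rule; the names and structure are hypothetical, not the actual test harness used in the study:

```python
import random

CONDITIONS = ["asymmetrical_control", "symmetrical_test"]
TARGET_DONATIONS_PER_CONDITION = 100  # the study's stopping rule

def assign_condition():
    """Randomly assign an incoming visitor to one of the two donation pages."""
    return random.choice(CONDITIONS)

def test_is_complete(donation_counts):
    """Stop the test once every condition has reached the target donation count."""
    return all(donation_counts[c] >= TARGET_DONATIONS_PER_CONDITION for c in CONDITIONS)

# Example bookkeeping: record a donation under the visitor's assigned condition.
donation_counts = {c: 0 for c in CONDITIONS}
visitor_condition = assign_condition()
donation_counts[visitor_condition] += 1  # recorded when that visitor completes a gift
if test_is_complete(donation_counts):
    print("Both conditions have reached 100 donations; stop the test.")
```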

 

Results

Our test donation page outperformed the current standard practice. Presenting symbolic gifts in a symmetrical way – more money buys more of the same benefit – made the impact of a higher amount more salient and generated more income.

The symmetrical test page yielded $12,820 in total revenue, while the control yielded only $9,005.  These results were due to an increase in revenue per donor: the control page brought in $83 per donor, while the symmetrical test page yielded $111 per donor.
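The headline lift follows directly from those figures; here is a quick back-of-the-envelope check, using only the numbers reported above:

```python
control_revenue = 9_005       # asymmetrical control page, total revenue ($)
test_revenue = 12_820         # symmetrical test page, total revenue ($)
revenue_lift = (test_revenue - control_revenue) / control_revenue
print(f"Total revenue lift: {revenue_lift:.0%}")          # ~42%

control_per_donor = 83        # reported revenue per donor, control ($)
test_per_donor = 111          # reported revenue per donor, test ($)
per_donor_lift = (test_per_donor - control_per_donor) / control_per_donor
print(f"Revenue-per-donor lift: {per_donor_lift:.0%}")    # ~34%
```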

 

Remember: this 42% increase in total revenue was achieved merely by changing the way the symbolic gifts are presented.  Isn’t that a cheap win?

 


Committed donors are from Mars; non-committed donors are from Venus

June 13, 2017

Post by: Nick Ellinger


A new NonProfit Pro article about how your committed donors may be entirely different from your non-committed donors is now but a click away…


Better blog posts than mine

June 8, 2017

Post by: Nick Ellinger


Every Thursday, I try to bring a little bit of wisdom, of humor, of random neural firings, and of things you can act on for your nonprofit direct marketing.

But this week, I wasn’t going to do any better than a couple of my colleagues, so I must cede the floor:

Hope you enjoyed them as much as I did!


3 free June webinars to boost your second half results

June 6, 2017

Post by: Nick Ellinger


Whether you want to learn what your donors are thinking, who your donors are, how to create donor journeys that will beat your control, or how to get people to opt in to your communications, we have something for you:

Missing From Your CRM: The Quality Data Showing What’s Killing Your Retention

How would you know if one of your major donors had a bad experience at your last gala event? Or your new online donor had trouble entering their info?  Or you had a vendor acquiring donors who will never retain?

This information is available, but often ignored at every interaction.  Few organizations collect and act on this missing quality data from donor feedback.  Even fewer combine it with existing behavior and transactional data to achieve the seemingly mythical 360-degree view of their donors. In this session, you’ll learn the how and why with real examples of charities reaping financial reward from a full view of their donors.

This webinar is June 22 at 11 AM Eastern/10 AM Central/4 PM London time; please sign up today!

Making “Radical” Change with High Reward and Low Risk: how to develop control-beating journeys for cash and monthly donors

Effective donor journeys start with deep knowledge of your donors (their motivations, their needs, and their pain points) and with deep planning using modern marketing tools fit for the purpose.

This webinar will show you how to craft your donor journeys and get the insights you need, using specific examples from Catholic Relief Services, the UN Refugee Agency, the Royal Society for the Protection of Birds, Amnesty International, and more. These organizations shifted to true, donor-centric journey development, and their control-beating journeys represent nothing short of ‘radical’ change. Because the changes are evidence-based and empirical, they greatly lowered the perceived (and real) risk, which, in turn, greatly increased internal buy-in.

This webinar is June 29 at 11 AM Eastern/10 AM Central/4 PM London time; please sign up today!

The Behavioural Science Behind Consent

European nonprofits are moving toward an opt-in model for donor communications, spurred by GDPR (General Data Protection Regulation). But getting people to opt in to communications is valuable everywhere in the world. So please join Blackbaud and DonorVoice to talk about how to maximize your communications, and design calls to action to improve your opt-in rates.

This webinar is June 21 at 9 AM Eastern/8 AM Central/2 PM London; by now you know the drill: please sign up today!


Get accurate results from your donor surveys

June 1, 2017

Post by: Nick Ellinger


I’m a fan of the Freakonomics books and podcasts.  While they occasionally feature some howlingly bad science (e.g., most solar panels are not black, and the assumptions that go into their oft-quoted contention that drunk walking is more dangerous than drunk driving are profoundly odd), their contrary points of view often cause me to reassess.  That’s a useful exercise even if you end up at the same place you were.

Recently, they’ve done two podcasts that talk about how surveys can be misleading: “Are the Rich Really Less Generous Than the Poor?” and one about how we type different things into Google than we would say in public, which I won’t name as it may make this post NSFW.

But I would say this doesn’t mean that donor surveys are bad.  It just means that bad donor surveys are bad.  So how do you craft a donor survey that gets you the results you need?  We’ve done another good post on that here, but let’s start by looking at the reasons, drawn from the podcasts, that people mislead in their survey responses:

The social desirability effect.  People say what they want other people to believe in surveys (in fact, they also say what they want to believe about themselves).  This is certainly true of donations – almost two-thirds of people overreport their giving on mail surveys.  This is due largely to self-deception and the degree of benefit people will get from overreporting.  (In other words, if you think people are bad on mail surveys, wait until you see their tax returns).

The experimental demand effect. When people know they are part of an experiment, they may behave differently from how they would in real life.  Specifically, they will try to do what they believe to be the appropriate behavior.  Even a picture of human eyes can increase donations or, in a lab setting, the likelihood of reported giving. In short, we aim to please researchers (whether they are there or not).  In fact, even researchers aim to please other researchers, funders, or journal editors, consciously or subconsciously, which is why double-blind studies, in which neither the researcher nor the participant knows who is getting which treatment, are the gold standard for research.

Selection bias.  This happens when surveyors don’t get a random sample from the population they are trying to study.  For academic studies, this is often because they are done on undergraduates, who tend to be WEIRD (Western, educated, industrialized, rich, and democratic) and cluster in similar economic and demographic zones.  One thing we are seeing in some more recent studies is that some things that were considered holy writ (e.g., that you need to have sad faces in your fundraising) may only be true for acquisition audiences (since that’s where they were tested).

Metacognition challenges.  Asking people why they do what they do is a fool’s errand.  Not only would the answers be biased by everything above, but people literally have no idea why they do what they do (this is a metacognition problem; it sounds more fun as a big word, no?), so they couldn’t tell you even if they wanted to.

To these challenges, I’d add filter effects, the idea that our leaders will only accept results that agree with their preconceived notions.  While it is not technically a problem with surveys, I’ve seen good surveys die this way.

All this makes it sound like you can’t trust people in a survey.  That conclusion is as flawed as taking survey answers as holy writ.  As we’ve discussed, there are some concepts, like loyalty and preference, that you simply can’t assess any other way.

So how do you craft a survey that gives you valuable results?  A few thoughts from some of our commitment studies and pre-test tool surveys that can predict how messaging will perform before the first email is sent or the first stamp applied:

  • Select your audience very carefully. When we want to assess how a direct mail audience will respond to messaging, we will select a panel of people who are 50-55+ years old and have given to similar organizations in the past year.  While this has some selection bias, it biases the panel toward what an acquisition audience will look like.
  • But don’t worry too much about channel. We recently collected a large sample of offline donors for an international relief organization and compared them to the online donors.  There was no substantive difference.  The people who have self-selected into donating to your organization are more like each other (for your purposes) than they are like their demographic counterparts.
  • Don’t tip what result is desired. With our pre-test tool surveys, we randomize different aspects of a mail piece or email to the A or B condition.  This way, there is no preferred condition that we might subtly suggest to the donor.  (See the sketch after this list for what that randomization can look like.)
  • Have professionals help with your survey design. The largest set of errors comes in the design of the survey itself.  There are plenty of questions that might sound good but lead to bad results.  There are simple traps like leading questions (e.g., “How much do you like our organization?”) and double-barreled questions (e.g., “What do you think of the type and amount of communications you get from us?”) that most know how to avoid, but there are hundreds of pitfalls like these, and even seemingly minor changes (e.g., whether you include an “I don’t know” option) can have major impacts.
  • Never ask why. This doesn’t mean you can’t figure out why.  By asking about people’s ratings of various aspects of their relationship with you and their overall satisfaction, you can accurately create a model around what creates satisfaction for your donors.  In fact, if you marry this to behavioral data, you can even see the value of increasing satisfaction, loyalty, or commitment.
  • Set your hypothesis and assessment criteria first. If you can, get it in writing.  This will guard against leaders only accepting the results that bolster their claims.
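As noted in the bullet above about not tipping the desired result, here is a minimal sketch of randomizing message elements across survey respondents so that no preferred version is signaled; the element names are hypothetical examples, not an actual DonorVoice pre-test instrument:

```python
import random

# Each element of the piece being pre-tested has an A and a B version.
MESSAGE_ELEMENTS = {
    "subject_line": ["A", "B"],
    "ask_amount": ["A", "B"],
    "story_frame": ["A", "B"],
}

def assign_respondent(respondent_id):
    """Randomly assign each message element to its A or B version for one
    respondent, so no single 'preferred' version is signaled to the donor."""
    rng = random.Random(respondent_id)  # seeded per respondent for reproducibility
    return {element: rng.choice(versions) for element, versions in MESSAGE_ELEMENTS.items()}

print(assign_respondent(1001))  # e.g. {'subject_line': 'B', 'ask_amount': 'A', 'story_frame': 'B'}
```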

We’d love to help design a survey for you – let us know how we can help.
