Proving ROI can be difficult for marketing and brand loyalty initiatives, and communities focused on these kinds of objectives are no different. After all, it can be almost impossible to separate out the many factors that influence customers' decisions. How much of a customer's purchase is that 30-second TV or radio ad really responsible for?
To address this question, advertisers on broadcast media often implement some kind of controlled testing or staggered rollout to isolate the influence of their campaign from other factors. For campaigns composed of limited bursts of content, like traditional media, this can be an effective strategy. But as the media mix and the frequency of interactions increase, the effects become harder and harder to disentangle.
Particularly for online communities and other ongoing social media, control and testing methodologies are ineffective and even dangerous to apply. Dangerous, because the attempt to isolate and control factors such as audience size and duration can actually be detrimental to building a thriving community of members, which prevents you from achieving your objectives. Alastair Ray talks about the problems of measuring integrated campaigns in an article on thinkbox.tv titled simply "Return on Investment":
"And with the rise of the integrated campaign where all media work together and often run at the same time that is making evaluation of the different elements including TV more difficult... And while the consensus is that integrated campaigns are indeed more powerful than non-integrated ones it’s vitally important to ensure the evaluation doesn’t distort the communications plan it’s designed to assess."
He was speaking of integrated marketing campaigns across multiple media, but I would argue that this applies to any ongoing program or initiative where factors are difficult to isolate. Or, as Jeremy Griffiths, Effectiveness Director at MediaCom, was quoted in the same article: “We tend to avoid compromising the plan simply for the ability to get a better measurement of it.”
It can be hard to prove a true causal relationship between community and objectives like increased purchases. After all, it may be that the people most likely to buy are also the ones most likely to join a community (in which case community membership alone won't indicate a true cause of increased purchases among those members). And if you are trying to influence not only the highly visible, active members of a community but the silent majority of passive participants as well, measurement becomes harder still. So if testing and control methodologies don't work well for measuring community success, what should marketers do to validate their investments?
I haven't seen an easy answer to this question yet. The most rigorous methodologies today use statistical analysis to measure both the likely impact of your particular campaign and what would have occurred had you done nothing at all (factoring in the downward pressure of your competitors' efforts). But that level of detailed analysis often requires expertise and resources that today's leaner marketing departments may not have access to.
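To make that counterfactual idea concrete, here is a minimal difference-in-differences sketch in Python, one common technique of this kind. All the numbers are invented, and the assumption that a control group approximates "doing nothing at all" is exactly the kind of assumption such an analysis rests on:

```python
# Hypothetical average monthly sales figures (purely illustrative).
# "treated" = customers exposed to the campaign; "control" stands in
# for what would have happened had you done nothing at all.
treated_before, treated_after = 100.0, 130.0
control_before, control_after = 100.0, 112.0

def diff_in_diff(t_before, t_after, c_before, c_after):
    """Campaign effect = treated change minus the change expected anyway."""
    return (t_after - t_before) - (c_after - c_before)

effect = diff_in_diff(treated_before, treated_after,
                      control_before, control_after)
print(f"estimated campaign effect: {effect:+.1f} units")  # +18.0 here
```

The control group's change (+12) captures market-wide forces such as competitors' efforts; only the excess change in the treated group (+18) is attributed to the campaign, and then only under the technique's assumptions.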
An effective compromise we've seen employed is to baseline the current behavior of existing customers, then track how that behavior changes (if it does) once they join the community, measured against the objectives you are trying to achieve. You can also capture awareness trends and qualitative anecdotes after the community has launched to help describe what changes you are seeing and why they occur.
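As a rough sketch of this baseline-then-track approach, the snippet below compares average monthly purchases before and after customers join a community. The customer IDs and purchase counts are hypothetical, standing in for whatever your CRM or commerce system records:

```python
from statistics import mean

# Hypothetical per-customer monthly purchase counts, before and after
# joining the community (all names and figures are illustrative).
customers = {
    "cust_001": {"before": [2, 1, 2], "after": [3, 2, 4]},
    "cust_002": {"before": [0, 1, 0], "after": [1, 1, 2]},
    "cust_003": {"before": [4, 3, 4], "after": [4, 4, 3]},
}

def baseline_vs_post(records):
    """Return (baseline mean, post-join mean, delta) across all customers."""
    before = [v for c in records.values() for v in c["before"]]
    after = [v for c in records.values() for v in c["after"]]
    return mean(before), mean(after), mean(after) - mean(before)

base, post, delta = baseline_vs_post(customers)
print(f"baseline={base:.2f}, post-join={post:.2f}, delta={delta:+.2f}")
```

Note that, per the caveat above, a positive delta shows correlation with joining the community, not proof of causation; the qualitative data is what helps you explain why the change occurred.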
Finally, be sure you understand exactly what it is you are trying to achieve, and make sure you are gathering data that will measure it appropriately. Look closely at what additional value you expect from community members: perhaps instead of measuring the increase in purchases among the most active members, the greater value is that those members are influencing the purchases of others in the community. Using web analytics, you might then track who is viewing this content and whether that ultimately ends in a lead-generating activity or even a purchase.
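As an illustration of the kind of funnel tracking described above, this sketch scans a hypothetical event stream (the visitor IDs and event names are invented, standing in for whatever your analytics tool exports) and computes the share of community-content viewers who went on to a lead-generating activity or purchase:

```python
# Hypothetical clickstream: (visitor_id, action) pairs in time order.
events = [
    ("v1", "viewed_community_post"),
    ("v1", "requested_demo"),       # a lead-generating activity
    ("v2", "viewed_community_post"),
    ("v3", "viewed_community_post"),
    ("v3", "purchase"),
    ("v4", "browsed_catalog"),      # never saw community content
]

def community_conversion_rate(events,
                              conversions=("requested_demo", "purchase")):
    """Share of community-content viewers who later converted."""
    viewers, converted = set(), set()
    for visitor, action in events:
        if action == "viewed_community_post":
            viewers.add(visitor)
        elif action in conversions and visitor in viewers:
            converted.add(visitor)
    return len(converted) / len(viewers) if viewers else 0.0

rate = community_conversion_rate(events)
print(f"{rate:.0%} of community-content viewers converted")
```

In this toy data, two of the three viewers converted; comparing that rate against a non-viewer baseline is how you would argue that active members are influencing others' purchases.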
Ultimately, rigorous study and evaluation of enterprise community performance are good things for your business. You just want to make sure that you are not limiting your chances of success for the sake of measurement.