Last Thursday night I attended a Boston Interactive Media Association (BIMA) panel on Data Integration & Operations called "How to Avoid Paralysis of Analysis."
I was hoping to glean some insights into how local marketing professionals are tackling the challenges of measurement, analytics and ROI in a business environment where the return on marketing is increasingly scrutinized.
The panel was moderated by Shar VanBoskirk, Senior Analyst at Forrester Research, Inc., and made up of the following individuals:
If you're in the business of marketing...or any business that has a marketing function...you have likely seen the studies showing the need for greater marketing analytics, and the numerous articles about marketing accountability and measurement standards. They are hotly contested topics, to be sure.
Yet the BIMA conversation centered on the more traditional tactics of measuring page views and visit durations, and the need to track only a manageable number of data points to avoid analysis paralysis. Panelists noted that ad servers have become a crutch for measurement: we've come to expect a level of data that just isn't available from newer channels (or traditional ones, for that matter). This is where I'd hoped the discussion would turn to alternative measurement techniques like Blogpulse, IMMI, or Marqui's notion of sentiment analysis.
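For those who haven't bumped into sentiment analysis, the idea is simply to score what's being said about a brand as positive or negative, rather than just counting impressions. Here's a toy Python sketch of keyword-based scoring; the word lists and the scoring rule are my own illustrative assumptions, not Marqui's (or anyone else's) actual methodology:

```python
import re

# Toy keyword-based sentiment scorer for brand mentions. Purely illustrative:
# the word lists and the scoring rule are assumptions, not a vendor's method.
POSITIVE = {"love", "great", "recommend", "impressed", "useful"}
NEGATIVE = {"hate", "broken", "disappointed", "useless", "avoid"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in one blog post or comment."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

mentions = [
    "I love the new campaign, really great work",
    "Totally disappointed, the landing page is broken",
]

for m in mentions:
    label = "positive" if sentiment_score(m) > 0 else "negative/neutral"
    print(f"{label}: {m}")
```

Crude, yes, but it measures something page views never will: what the audience actually thinks.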
Little was shared by way of actual case studies illustrating how one company tackled the challenge of measurement...and potentially reaped rewards. Even less time was spent on more complex measurement techniques, like appropriate metrics for a Web 2.0 world, or assessing campaign performance over time.
I recognize that none of this is easy. As new channels evolve, audiences fragment, and integrated campaigns become even more far-reaching, the challenges of measurement and analytics will multiply. One panelist mentioned that measurement is hard because there are no standards. I would argue that measurement shouldn't have standards, and that it isn't so much hard as it is laborious.
A marketer's success metrics should be as unique as their product and service offerings, market position, and business objectives on any given day. It is critical at the outset of any marketing initiative that the client and agency stakeholders come to agreement on the business goals and success metrics. The agency should then determine the most appropriate way to track those metrics (even through proxy events if need be) and set up the appropriate infrastructure (or pre/post test, or whatever technique is agreed upon). That's the relatively easy part.
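To make that concrete, here's a minimal sketch of what an up-front measurement plan might look like if you wrote it down as a simple structure. Every goal, metric, and proxy event below is a hypothetical example; by the argument above, your plan should look different:

```python
# A minimal sketch of the "agree on goals and metrics up front" step, written
# down as a plain data structure. Every name, metric, and proxy event here is
# a hypothetical example -- the whole point is that your plan will differ.
measurement_plan = {
    "business_goal": "grow qualified leads from the spring campaign",
    "success_metrics": [
        {"metric": "lead form completions",
         "tracking": "direct (site analytics)"},
        {"metric": "offline sales lift",
         "tracking": "proxy (coupon-code redemptions, pre/post test)"},
    ],
    "review_cadence": "weekly for the first quarter, then monthly",
    "agreed_by": ["client stakeholder", "agency account lead"],
}

print(f"Goal: {measurement_plan['business_goal']}")
for item in measurement_plan["success_metrics"]:
    print(f"- {item['metric']}: tracked via {item['tracking']}")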
The more difficult part is faithfully monitoring those metrics, analyzing the results, and optimizing the initiative over time. It sounds like common sense, but in the frenzy to get things in market (and after the collective sigh of relief when a new initiative finally does launch), the planning and follow-up on the measurement front often slip through the cracks. Or there's a changing of the guard and the new regime switches success metrics mid-stream (bad) or loses interest in measurement altogether (even worse).
Data is only powerful if you can turn it into knowledge, and that takes diligence. In my experience, many clients don't have the time or inclination to interpret the data and optimize their marketing initiatives accordingly. This is the critical step where you analyze the data - whatever it may be - so that you can evaluate campaign performance, develop audience segments, fine-tune your messaging and offers, and take a differential investment approach to improving success metrics.
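As an illustration of the differential investment idea, here's a small sketch that weights next period's budget by each segment's conversions per dollar. The segment names and figures are made up, and the reallocation rule is deliberately naive:

```python
# Illustrative "differential investment" pass: weight next period's budget by
# each segment's conversions per dollar. Segment names and figures are made up.
segments = {
    "returning visitors": {"spend": 10_000, "conversions": 240},
    "search referrals":   {"spend": 10_000, "conversions": 150},
    "display prospects":  {"spend": 10_000, "conversions": 60},
}

total_budget = sum(s["spend"] for s in segments.values())
efficiency = {name: s["conversions"] / s["spend"] for name, s in segments.items()}
total_eff = sum(efficiency.values())

# Reallocate in proportion to efficiency (a deliberately simple rule).
for name, eff in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    next_spend = total_budget * eff / total_eff
    print(f"{name}: {eff:.4f} conversions/$ -> next budget ${next_spend:,.0f}")
```

In practice you'd dampen the swings and account for diminishing returns, but even a crude pass like this forces the conversation about where the next dollar should go.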
I suppose avoiding analysis paralysis really requires an organizational commitment to stand by the agreed-upon success metrics and revisit them methodically. ROI may be achieved quickly, but more often than not it will take time. I know there are success stories out there (I hope to post a few of mine here at some point), but maybe the lack of well-documented cases is what keeps us talking about it.