
BIMA: Data Integration and Operations

Last Thursday night I attended a Boston Interactive Media Association (BIMA) panel on Data Integration & Operations called "How to Avoid Paralysis of Analysis."

I was hoping to glean some insights into how local marketing professionals are tackling the challenges of measurement, analytics and ROI in a business environment where the return on marketing is increasingly scrutinized.

The panel was moderated by Shar VanBoskirk, Senior Analyst at Forrester Research, Inc., and made up of the following individuals:

If you're in the business of marketing...or any business that has a marketing function...you have likely seen the studies showing the need for greater marketing analytics, and the numerous articles about marketing accountability and measurement standards. They are hotly contested topics, to be sure.

Yet the BIMA conversation centered on the more traditional tactics of measuring page views and visit durations, and the need to track only a manageable number of data points to avoid analysis paralysis. Panelists commented on the fact that ad servers have become a crutch for measurement, in that we expect a certain level of data that just isn't available from newer channels (or traditional ones, for that matter). This is where I'd hoped a discussion would ensue around alternative measurement tools like BlogPulse and IMMI, or Marqui's notion of sentiment analysis.

Little was shared by way of actual case studies illustrating how one company tackled the challenge of measurement...and potentially reaped rewards. Even less time was spent on more complex measurement techniques, like appropriate metrics for a Web 2.0 world, or assessing campaign performance over time.

I recognize that none of this is easy. As new channels evolve, audiences fragment, and integrated campaigns become even more far-reaching, the challenges of measurement and analytics will multiply. One panelist mentioned that measurement is hard because there are no standards. I would argue that measurement shouldn't have standards, and that it isn't so much hard as it is laborious.

A marketer's success metrics should be as unique as his/her product and service offering, market position, and business objectives on any given day. It is critical at the outset of any marketing initiative that the client and agency stakeholders come to agreement on the business goals and success metrics. The agency should then determine the most appropriate way to track those metrics (even through proxy events if need be) and set up the appropriate infrastructure (or pre/post test, or whatever technique is agreed upon). That's the relatively easy part.

The more difficult part is faithfully monitoring those metrics, analyzing the results and optimizing the initiative over time. Sounds like common sense, but in the frenzy to get things in market (and after the collective sigh-of-relief that occurs when a new initiative finally does launch), the planning and follow-up on the measurement front often slips through the cracks. Or there's a changing of the guard and the new regime switches success metrics mid-stream (bad) or loses interest in measurement altogether (even worse).

Data is only powerful if you can turn it into knowledge, and that takes diligence. In my experience, many clients don't have the time or inclination to interpret the data and optimize their marketing initiatives accordingly. This is the critical step where you analyze the data - whatever it may be - so that you can evaluate campaign performance, develop audience segments, fine-tune your messaging and offers, and take a differential investment approach to improving success metrics.

I guess the avoidance of analysis paralysis really requires an organizational commitment to stand by the agreed-upon success metrics and methodically revisit them. ROI may be achieved quickly, but more often than not it will take time. I know that there are success stories out there (I hope to upload a few of mine here at some point), but maybe the lack of well-documented cases is what keeps us talking about it.

Comments

Charlie Ballard

Hey, Stephanie. Charlie here from the event; Jeremi mentioned you wrote it up. Thanks -- it's the first real feedback I've seen other than two emails from BIMA saying it happened, and it's exactly the kind of discussion I'd love to see more of.

I think I recall you attended with the guy from DTAS who asked about how to measure a campaign correctly over time. Good question. As you point out, the planning and follow-up on measurement often can slip through the cracks, and as a former analyst at his current employer, that was always frustrating to watch happen.

Thankfully, I've started to see this happen less and less. I've seen the *potential* for dropping the ball, and I've seen how reporting could get lost in the hype to launch the next campaign -- at least when all we're doing is reporting metrics like clickthrough and visits, the "who really cares?" stats.

In these situations, though, I have to wonder where the pro forma is in it all. Where is the "expected" versus "actuals to date" comparison? Where are the "how we're doing" and "implications / next steps" components of the mid-campaign reporting? While they couldn't care less about the basic stats, I've never met an exec who is anything but a crack fiend for knowing how on-track a campaign is compared to what was expected from it. If we're missing our marks, they're going to look forward to burning someone; if we're blowing away expectations, they're going to wet themselves in anticipation of letting their higher-ups know. Best of all, a forecast requires a model, and any deviation from expectations is only going to require an improvement to the model. And who doesn't love a great model?
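To make that concrete, here's a minimal sketch of the expected-versus-actuals check in Python. The metric names, forecast, and actuals below are purely hypothetical, not pulled from any real campaign or model:

```python
# Compare a campaign forecast ("pro forma") against actuals to date and flag
# any metric that deviates from plan by more than a set threshold.
# All figures are invented for illustration.

forecast = {"visits": 50000, "conversions": 1500, "cost_per_conversion": 40.0}
actuals = {"visits": 42000, "conversions": 1680, "cost_per_conversion": 33.5}

def variance_report(forecast, actuals, threshold=0.10):
    """Print percent deviation from forecast and flag anything beyond the threshold."""
    for metric, expected in forecast.items():
        actual = actuals[metric]
        deviation = (actual - expected) / expected
        status = "REVIEW" if abs(deviation) > threshold else "on track"
        print(f"{metric:>20}: expected {expected:>10,.1f}  actual {actual:>10,.1f}  "
              f"{deviation:+7.1%}  [{status}]")

variance_report(forecast, actuals)
```

The point isn't the arithmetic; it's that the report leads with how we're doing against what we said we'd do, which is the part an exec will actually read, and any gap points straight back at either the execution or the model.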

The greatest difference I see between average measurement and the analysis that lasts and changes things is that the average measurement usually seems to be put out there just for fun. It’s a checkbox on the SOW. Whether we’re talking envelope open rates or Feedburner link popularity, most measurement will have some cool trendlines, it’ll have a bunch of proportions, some of it might even have an interaction rate or two, but none of it includes recommendations for what needs to change based on what’s going on -- nor does it discuss how the recommendations made last time were carried out and how they affected the latest numbers.

I would have *loved* to talk about my team's current efforts to use BlogPulse, the Conversation Gap, OpinMind, and other Web 2.0 tools to help paint a picture of how our primary client is being perceived in the blogosphere. We've begun a very preliminary investigation into how relevant such online discussions are to the company's perception among the general public, and hey, that stuff's just cool – it's what you discuss with the other Blues Schmooze attendees when you have on your trendy rectangular glasses and triangular bike messenger backpack. I would have loved to go on about what we've found to date, which is, sadly, very little, and I know you would have too, as one of my colleagues heard you mention your disappointment. Wish you'd brought it up.

But as we saw in most of the questions asked at the event, people just don't seem close to ready for any of this. My greatest challenge at the moment isn't investigating what blog, RSS, and social networking measurement tools can tell us, but rather how the hell to explain why cost per conversion should matter to a client lead who has trouble grasping why making her creative clickable might just be good practice.
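(For the record, the math itself is trivial -- cost per conversion is just media spend divided by the conversions it drove, so a hypothetical $5,000 flight that produces 100 sign-ups works out to $50 a sign-up. Explaining why that number should drive decisions is the hard part.)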

At times we in the marketing community – especially interactive – sit in our ivory towers dreaming about a world where everything's behavioral, where web-based software aggregates everything without the slightest IT involvement, and all we have to do is sit back and watch our brilliant next-generation strategies generate endless revenue for our clients at a high, automatically managed ROI.

But the reality is that at the moment we have to deal with key decision makers who have to be spoon-fed the most obvious decisions daily, and we need to get a LOT better at making our results have a purpose. I died a little inside when the CEO in the back mentioned that his analysts regularly send him a dashboard showing that nothing they do has the slightest effect on his sales. He didn't even put it as a question or seem to be asking for suggestions on what they're doing wrong. It was almost just, "in my experience, a lot of times none of this even matters."

But what matters is not the numbers – the numbers are only a reflection of behavior, both in terms of the marketing actions taken and the consumer response to those actions. And at all times these numbers need to paint a picture of this back and forth, this "customer conversation," to use a Forrester cliché. As you say, data is only powerful if you can turn it into knowledge, but I'd argue that there's an imperative next step that is even more frequently lacking – that your knowledge needs to be presented in such a way that real action can be taken, that the consequences of not taking those actions need to be quantified, and that we need to get that right before we can even begin to delve into how the Google Grid is going to reshape the way we manage online content preferences:
http://www.albinoblacksheep.com/flash/epic
