
Monitoring Measurement Effectiveness

August 01, 2016

The job of marketing measurement isn’t complete when reports are produced – measurement needs to be acted upon to facilitate improved performance

The original Pennsylvania Station in New York City was considered an architectural masterpiece. Photographs of its 1910 opening, which took place after nine years of planning and construction, show a breathtaking exterior and concourse. But despite its grandeur, the station fell into disrepair and was demolished just 53 years later, prompting New York Times architecture critic Ada Louise Huxtable to lament, “The tragedy is that our own times not only could not produce such a building, but cannot even maintain it.”

Ambitious b-to-b marketing measurement and reporting initiatives are often planned, developed and launched – only to be neglected before long and discontinued without achieving a lasting impact. Measurement teams can avoid this fate by focusing on the evolution phase of the SiriusDecisions Aligned Measurement Process Model, in which post-launch monitoring of measurement effectiveness takes place. In this issue of SiriusPerspectives, we identify three categories of monitoring metrics, using the SiriusDecisions Metrics Spectrum as a guide.

Operations of Monitoring

Some level of monitoring is necessary to understand whether measurement is contributing to improved performance, yet too few b-to-b organizations have consistent measurement monitoring programs in place. A primary reason for this absence is a lack of clarity surrounding the operations of monitoring. Consider the following questions when setting up a measurement monitoring program:

  • What should be monitored? Significant reporting deliverables should be monitored, so start by identifying what those deliverables are. Prioritize deliverables associated with highly scrutinized areas of the business (e.g. deliverables that relate to areas of strategic priority, high investment or historic underperformance). Newly developed and released reporting should be considered significant until it attains its anticipated level of adoption. Because monitoring and analysis require resources, measurement teams must strike a balance between identifying deliverables that should be monitored and implementing a manageable level of monitoring.

  • Who does the monitoring? In most b-to-b organizations, a marketing operations or marketing analytics team is responsible for developing and producing marketing measurement. These teams are also responsible for monitoring measurement. Organizations that lack a marketing operations or marketing analytics function typically struggle to monitor measurement, because no team owns the monitoring component. While measurement is still produced in these organizations, its impact on performance remains unknown.

  • Who needs to see monitoring results? The marketing operations or marketing analytics teams that generate reporting deliverables are the primary and regular users of monitoring information, as they need to take action in response to monitoring results. Marketing leadership may also be interested in understanding general trends surrounding the adoption of measurement, because this can indicate progress toward a cultural change regarding measurement.

Category One: Utilization

Measurement leaders are frequently dissatisfied with insufficient use of reporting, so it’s essential to monitor the utilization of significant reporting deliverables. Capture the following activity metrics to understand the degree to which intended audiences are engaged with reporting deliverables:

  • What to measure. Start by measuring report utilization levels by user role, possibly creating segmentation matched to the structure of the business (e.g. geographies, departments, business units). Quantify the total instances of reports being accessed and average utilization frequencies (e.g. the average user refers to the report three times per month); these figures indicate how integral the information has become to audiences' workflows. Measure the percentage of each segment of the reporting audience that is accessing the deliverable to identify whether specific groups of intended users are underutilizing reporting. Identifying these low-usage groups helps focus deeper investigation and remediation efforts.

  • How to measure it. Methods of measuring utilization depend on the mechanisms through which reporting is delivered to its users. Many commercially available business systems and dashboarding tools include utilities that report on dashboard access by named users. When reports are posted on internal Web pages, Web analytics can be used to count page views, downloads and unique users. Consider extending the use of marketing automation beyond tracking the behavior of external buyers to understand utilization by internal users. For reporting that is distributed via email, use marketing automation or email systems to gather clickthrough data. Organizations that provide some level of support for measurement questions, via phone or email, may choose to log the volume of calls or messages being handled as a proxy for utilization. A simple rollup of access data into these utilization metrics is sketched below.
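
To make these utilization metrics concrete, the sketch below rolls up a hypothetical report-access log into total accesses, average accesses per active user per month and the percentage of each audience segment that has opened the report. The column names, roster and figures are illustrative assumptions rather than a prescribed data model; most dashboarding tools and Web analytics platforms can export equivalent access data.

```python
import pandas as pd

# Hypothetical access log: one row per time a user opened the report
access_log = pd.DataFrame({
    "user":   ["ann", "ann", "bob", "bob", "bob", "carla", "dev"],
    "role":   ["demand_gen", "demand_gen", "field_mktg", "field_mktg",
               "field_mktg", "demand_gen", "marketing_ops"],
    "region": ["NA", "NA", "EMEA", "EMEA", "EMEA", "NA", "APAC"],
    "month":  ["2016-06", "2016-07", "2016-06", "2016-06", "2016-07",
               "2016-07", "2016-07"],
})

# Hypothetical roster of intended users for the report, by role and region
intended_audience = pd.DataFrame({
    "user":   ["ann", "bob", "carla", "dev", "eve", "frank"],
    "role":   ["demand_gen", "field_mktg", "demand_gen", "marketing_ops",
               "field_mktg", "demand_gen"],
    "region": ["NA", "EMEA", "NA", "APAC", "EMEA", "NA"],
})

# Total instances of the report being accessed
total_accesses = len(access_log)

# Average utilization frequency: accesses per active user per month
per_user_per_month = access_log.groupby(["user", "month"]).size()
avg_monthly_frequency = per_user_per_month.groupby("user").mean().mean()

# Percentage of each intended-audience segment that accessed the report
active_users = set(access_log["user"])
intended_audience["accessed"] = intended_audience["user"].isin(active_users)
segment_adoption = (
    intended_audience.groupby(["role", "region"])["accessed"]
    .mean()
    .mul(100)
    .rename("pct_of_segment_accessing")
)

print(f"Total accesses: {total_accesses}")
print(f"Average accesses per active user per month: {avg_monthly_frequency:.1f}")
print(segment_adoption)
```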

Category Two: User Satisfaction

Producers of measurement need to learn how the users of that measurement feel about it – including how well it enables those users to perform their jobs effectively. Look at the following output metrics periodically to gauge user satisfaction, which will provide insight into potential areas for improvement:

  • What to measure. For significant reporting deliverables, satisfaction metrics should begin with accessibility, accuracy, ease of interpretation and utility. Supplement these standard elements with questions specific to the use cases that the measurement was constructed to address. For example, if measurement was intended to provide visibility into tactic effectiveness, measure how well users feel it meets that objective.

  • How to measure it. Informal interviews or conversations with a sample of stakeholders can provide the most immediate feedback. Use phone calls or face-to-face meetings to ask individual users how useful they find the reporting and why. Look to uncover issues around awareness, accessibility and perceived value. Interviewing is especially useful for reporting deliverables that are targeted at small user audiences, while surveying can be effective as a more scalable method for soliciting regular feedback from larger groups of users. Consider using online survey tools to obtain perception data. Aim for quarterly feedback and rotate the users approached during each survey period to avoid survey fatigue. A simple rollup of survey scores by dimension is sketched below.
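
If survey responses are collected on a simple numeric scale, the rollup can be equally simple. The sketch below assumes 1-to-5 ratings on the standard dimensions named above plus one use-case-specific question, and flags dimensions whose average falls below a threshold; the scale, column names and threshold are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical quarterly survey responses for one reporting deliverable,
# scored 1 (poor) to 5 (excellent) on each dimension
responses = pd.DataFrame({
    "respondent":             ["u1", "u2", "u3", "u4"],
    "accessibility":          [4, 3, 5, 2],
    "accuracy":               [5, 4, 4, 4],
    "ease_of_interpretation": [3, 2, 3, 3],
    "utility":                [4, 4, 5, 3],
    # Use-case-specific question, e.g. "shows tactic effectiveness"
    "tactic_effectiveness":   [3, 2, 4, 2],
})

# Average score per satisfaction dimension
dimension_scores = responses.drop(columns="respondent").mean().round(2)

# Flag dimensions that fall below an (assumed) acceptable average of 3.5
needs_attention = dimension_scores[dimension_scores < 3.5]

print("Average score by dimension:")
print(dimension_scores)
print("\nDimensions to investigate:")
print(needs_attention)
```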

Category Three: Decision Adoption

The purpose of reporting deliverables is to help stakeholders within an organization make better business decisions. To see if reporting is having the desired effect, evaluate the following impact metrics, which indicate whether measurement is being relied upon to drive decisionmaking:

  • What to measure. While it isn't feasible to consistently measure the quality of decisions made because of measurement, it is appropriate to look for evidence that the organization's decisionmaking process relies on reporting. Periodically (e.g. quarterly, biannually) collect outputs that show whether the organization is increasingly counting on measurement to drive decisions – a cultural change surrounding the role measurement plays in steering business operations. To some extent, the business impact of decision adoption may be discernible by comparing the relative performance of business segments with high levels of measurement utilization to that of segments with low levels of utilization.

  • How to measure it. Informal means can be used to observe the frequency with which reporting is cited as justification for decisionmaking during planning meetings. Observations made by the measurement team should focus on citations by stakeholders as well as the expectations leadership communicates about what is necessary to justify a change. More formal methods of assessing adoption include surveying stakeholders to determine if and how the use of measurement has changed over time. To make impact comparisons across user segments, identify relevant impact metrics, such as pipeline creation or retention rates, and correlate changes over time in those measures with reporting utilization levels; a simple version of this comparison is sketched below.
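
The sketch below illustrates one way to make such a comparison, assuming each business segment reports an average utilization figure and a quarter-over-quarter change in pipeline created. The segment names, figures and choice of impact metric are illustrative assumptions, and any correlation observed is directional evidence rather than proof that measurement caused the change.

```python
import pandas as pd

# Hypothetical per-segment figures for one quarter
segments = pd.DataFrame({
    "segment":             ["NA", "EMEA", "APAC", "LATAM"],
    # Average report accesses per user per month
    "utilization":         [6.2, 4.8, 1.9, 0.7],
    # Quarter-over-quarter change in pipeline created (%)
    "pipeline_change_pct": [12.0, 9.5, 2.0, -1.5],
})

# Correlation between utilization and the change in the impact metric;
# treat this as directional evidence only, not causation
corr = segments["utilization"].corr(segments["pipeline_change_pct"])
print(f"Utilization vs. pipeline change correlation: {corr:.2f}")

# Simple high- vs. low-utilization comparison, split at the median
median_util = segments["utilization"].median()
segments["utilization_group"] = segments["utilization"].gt(median_util).map(
    {True: "high", False: "low"}
)
print(segments.groupby("utilization_group")["pipeline_change_pct"].mean())
```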

The Sirius Decision

After the original Pennsylvania Station was demolished, New York City established the Landmarks Preservation Commission to monitor the maintenance of significant structures and protect them from destruction. The preservation of measurement efforts also depends on monitoring. When the monitoring effort uncovers gaps or shortfalls in measurement utilization, user satisfaction or decision adoption, the team should respond appropriately – e.g. ramping up training efforts and developing new ways to communicate why and how to utilize reporting to drive better decisionmaking.