Early in my career, I had the misfortune to be in the room when the CEO of a multi-billion dollar corporation posed this question to the enablement team: If you stop training the sales reps, will they stop selling? I say misfortune because it gnawed at me for years – not so much the question itself (which was a bit loaded) as the fact that the team couldn’t effectively answer it, and the ramifications that followed. The heart of the matter is: Can you ever definitively tie enablement to sales results?
In this situation, the training organization had recently undertaken the massive effort of identifying all learning and development resources across the enterprise and centralizing the function to standardize offerings and eliminate redundancy – all good things to do. The training industry bestowed national awards on the programs, and the reps put smiley faces on their class surveys.
However, as a result of consolidation, there was a full view into the entire cost of sales training. Despite significant reductions in overall spend and a blended learning program in the works to cut travel, this hefty figure became a frequent target of overzealous finance analysts and cost-cutting consultants. Early on, the VP of learning and development became adept at tracking activity and revenue at the learner level to show results for significant learning events or new-hire time to quota.
What Is Wrong With This Picture?
They could show that the onboarding program appeared to get most reps to quota within the acceptable timeframe. They could point to increasing revenue in a service offering after they provided a class on how to sell it. What they could not do was demonstrate a clear before-and-after picture. They could not point to previous trends in results and show how the training intervention had directly improved the KPIs, including the CEO’s most important objective at that time – more revenue. They could not definitively say the reps were any better off with training than without it. As a result, the learning and development budget was cut by 35 percent, and dozens of positions and programs were eliminated.
The problem was partly a lack of data – the “before” picture did not exist in a way that could be tracked and, understandably, there was no group that could serve as a control (i.e., a group that received no training). However, we frequently see enablement organizations that do little or no tracking, that track indicators which cannot be clearly tied to the impact of enablement activities, or that fail to show before-and-after trends to tell the full story.
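To make the before-and-after idea concrete, here is a minimal sketch of the comparison the team could not produce: the average revenue change for trained reps set against an untrained control group (a simple difference-in-differences). This is an illustration, not the team’s actual method; it assumes pandas is available, and every column name and figure is a hypothetical placeholder, not data from the story.

```python
# A minimal before-and-after comparison with a control group
# (a simple difference-in-differences). All field names and numbers
# are hypothetical; in practice they would come from your CRM and LMS.
import pandas as pd

# One row per rep: quarterly revenue before and after the training
# window, plus a flag for whether the rep attended the training.
reps = pd.DataFrame({
    "rep_id":     [1, 2, 3, 4, 5, 6],
    "trained":    [True, True, True, False, False, False],
    "rev_before": [410_000, 385_000, 402_000, 395_000, 420_000, 388_000],
    "rev_after":  [468_000, 441_000, 459_000, 404_000, 431_000, 396_000],
})

# Per-rep revenue change across the training window.
reps["delta"] = reps["rev_after"] - reps["rev_before"]

# Average change for trained reps vs. the untrained control group.
lift = reps.groupby("trained")["delta"].mean()
trained_lift = lift.loc[True]
control_lift = lift.loc[False]

# The difference between the two is the change attributable to the
# training, net of whatever moved the whole market in the same period.
print(f"Trained reps:   +${trained_lift:,.0f} avg revenue change")
print(f"Control group:  +${control_lift:,.0f} avg revenue change")
print(f"Estimated training effect: +${trained_lift - control_lift:,.0f} per rep")
```

The point of the sketch is the structure, not the arithmetic: without the “before” column and the untrained rows, there is no way to separate the effect of training from everything else that happened that quarter, which is exactly the trap the team fell into.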
The Moral of the Story
The time to think about how to effectively measure the impact of enablement programs is not when the CEO asks you to justify your existence. Like most morals, this is easier said than done. Here are some guidelines to prepare you for the unfair questions that may await you in that budget meeting:
Now, to answer that burning question: Yes, you can tie enablement to defined, measurable outcomes, including revenue, if you focus on a specific initiative or tool and clearly map the path from the enablement activity to the business result.