
Resetting a Failed Lead Scoring Model

February 01, 2015

Successful lead scoring demonstrates marketing’s ability to produce sales-ready leads, increasing demand for marketing support and improving cross-functional alignment.

A lead scoring model can be reset when it is not performing as intended and an assessment reveals that the problems are severe and cannot be easily fixed. In this issue of SiriusPerspectives, we describe a three-step process for resetting a failed lead scoring model, managing the new model’s rollout, and making ongoing refinements to drive future lead scoring success.

One: Reset the Model

After concluding that a lead scoring model has failed, organizations must assess the failure's severity and causes to determine whether a reset is the best corrective action. To reset the model, take the following steps:

Verify stakeholder support. First, identify all stakeholders and ensure they are committed to the lead scoring reset. Obtain agreement on the reasons the current model failed, explain how the new model will differ, and identify each stakeholder's role in the reset process. If necessary, adjust stakeholders' expectations of lead scoring.

Redefine and build the model. When creating a new lead scoring model, base scoring factors (e.g., demographic and firmographic data, prospect behavior, readiness to buy) on historical deal outcomes to the greatest extent possible, rather than relying solely on assumptions. Analyze recent closed/won and closed/lost deal data to understand correlations between deal outcomes and attributes. The model must also comply with service-level agreement (SLA) expectations and the technology requirements of the marketing automation platform (MAP) and sales system of record.
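
As an illustration of this analysis, the sketch below fits a simple logistic regression to closed deal data to surface which attributes correlate with won deals. The file name, column names, and the use of pandas and scikit-learn are assumptions for the example, not a prescribed implementation.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # One row per closed deal; attribute columns are hypothetical examples
    # of demographic, firmographic and behavioral scoring factors.
    deals = pd.read_csv("closed_deals.csv")
    features = ["employee_count", "industry_fit", "webinar_attended", "pricing_page_visits"]
    X = deals[features]
    y = deals["closed_won"]  # 1 = closed/won, 0 = closed/lost

    # Coefficients indicate how strongly each attribute correlates with a
    # won deal and can seed the point weights in the new scoring model.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.2f}")

Coefficients are only a starting point; translate them into point values the receiving functions can sanity-check against their experience.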

Simulate and validate. Test the reset model by scoring fictional leads; each score suggests a corresponding action. If the receiving function agrees sufficiently often (e.g., for 85 percent of the test leads) that the suggested action is appropriate, the model is identifying and prioritizing the right leads for the right actions, and it can be considered validated and ready for rollout.
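
A minimal sketch of this validation check, assuming the simulation results are captured as pairs of suggested and reviewer-chosen actions (the data structure and action names are illustrative):

    # Compare the action each test lead's score suggests with the receiving
    # function's judgment; require 85 percent agreement before rollout.
    test_leads = [
        {"suggested": "route_to_sales", "reviewed": "route_to_sales"},
        {"suggested": "nurture", "reviewed": "nurture"},
        {"suggested": "route_to_sales", "reviewed": "nurture"},
        # ... one entry per fictional lead scored in the simulation
    ]

    agreements = sum(1 for lead in test_leads if lead["suggested"] == lead["reviewed"])
    agreement_rate = agreements / len(test_leads)
    print(f"Agreement rate: {agreement_rate:.0%}")
    if agreement_rate >= 0.85:
        print("Model validated; ready for rollout.")
    else:
        print("Below threshold; revisit attributes and weighting.")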

Two: Implement the New Model

Once the new model has been built in the MAP and validated, we recommend the following best practices for rollout:

Train the team. Introduce the new model to a small pilot group comprising the original stakeholders as well as select representatives from marketing and receiving function(s). Once it is confirmed that actual leads are being scored according to the model and pilot participants in the receiving functions are acting on converted leads in compliance with the SLA, roll out the model to the rest of each team. Communicate why changes were made, what exactly has changed, and stakeholders’ roles and responsibilities.

Manage change. Any failed initiative can be difficult for an organization to overcome. However, sometimes failure is necessary for all parties to get involved and buy into the remedy. Throwing out the lead scoring model and starting over requires discipline and internal change management. Stakeholders are less likely to be skeptical of the new model if they are involved in the process, their feedback is welcomed, and the changes they suggest are promptly incorporated.

Build support and adoption. In many b-to-b organizations, the relationship between marketing and sales is fragile, and the previous lead scoring model’s failure may have increased friction and created an anti-scoring bias. The process owner managing the lead scoring project must work to minimize bias, manage expectations, and help marketing and sales leaders overcome resistance to lead scoring and its associated SLA.

Three: Test, Re-Evaluate, Iterate

Lead scoring models must be revisited often to ensure success. Because changes in the model may have a noticeable impact on receiving functions, involve leaders or representatives from these functions in the testing and iteration process, and keep them informed whenever changes are made. Follow these guidelines to successfully manage the testing, re-evaluation and feedback process:

Test the entire model. Many organizations approach lead scoring model testing like email A/B testing (e.g., changing one lead scoring attribute at a time, analyzing the results, and then moving to the next attribute). Conducted over multiple sales cycles, this process can be tedious and still fail to give marketers the information they need. To determine whether the model's attributes and weighting are working together to identify qualified leads, marketers can iterate on multiple variables simultaneously using regression analysis or predictive models, assuming lead volume is high enough. Another option is to run multiple models simultaneously: receiving functions use one model while others run in the background, to determine which is most effective at prioritizing the leads with the highest propensity to convert or buy.
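
A minimal sketch of the run-multiple-models option, assuming each lead carries a score from the live model, a score from a background challenger, and an observed conversion flag (the file name and column names are illustrative):

    import pandas as pd

    leads = pd.read_csv("scored_leads.csv")  # includes an observed 'converted' flag

    def top_tier_conversion(df: pd.DataFrame, score_col: str, quantile: float = 0.8) -> float:
        """Conversion rate among leads the given model places in its top tier."""
        threshold = df[score_col].quantile(quantile)
        return df[df[score_col] >= threshold]["converted"].mean()

    # 'score_live' drives routing today; 'score_challenger' runs in the background.
    for col in ["score_live", "score_challenger"]:
        print(f"{col}: top-tier conversion {top_tier_conversion(leads, col):.1%}")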

Re-evaluate lead scoring effectiveness. Lead scoring should be re-evaluated at least annually, and preferably every six months. Analyze the success metrics established for the model, review conversion rates, and measure any improvement over the baseline. A significant decrease in acceptance rates by receiving functions, a change in the target market or demand type, or the introduction of a new product should trigger an immediate re-evaluation.
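
As a sketch of this baseline comparison, assuming acceptance and conversion rates were recorded when the model rolled out (the metric values and the 10 percent drop tolerance are illustrative assumptions):

    # Compare current metrics against the baseline recorded at rollout and
    # flag any relative drop large enough to trigger an immediate re-evaluation.
    baseline = {"acceptance_rate": 0.72, "conversion_rate": 0.18}
    current = {"acceptance_rate": 0.58, "conversion_rate": 0.17}
    TOLERANCE = 0.10  # flag a relative drop of more than 10 percent

    for metric, base in baseline.items():
        drop = (base - current[metric]) / base
        status = "re-evaluate now" if drop > TOLERANCE else "ok"
        print(f"{metric}: baseline {base:.0%}, current {current[metric]:.0%} -> {status}")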

Establish feedback loops. Marketing typically creates the lead scoring model with input from receiving functions and with reference to SLAs between marketing and these functions. The same team should be responsible for analyzing scoring effectiveness and testing improvements to the model. Receiving functions must adhere to agreed-upon processes for accepting and further qualifying leads and providing feedback based on expected outcomes. We recommend creating a committee to formalize this feedback process. During committee meetings, identify lead flow obstacles, determine whether higher-scoring leads are converting more frequently into closed/won deals, and discuss disqualification reasons for scored leads. The outcome of these conversations should inform the team whether the lead scoring model is working and where additional testing may be necessary.
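
One way to check the committee's core question, whether higher-scoring leads convert more frequently into closed/won deals, is to compare closed/won rates across score bands; the banding thresholds and column names below are illustrative assumptions:

    import pandas as pd

    leads = pd.read_csv("scored_leads.csv")  # needs 'score' and 'closed_won' columns

    # Closed/won rates should rise from band to band if the model is working.
    leads["band"] = pd.cut(leads["score"], bins=[0, 40, 70, 100], labels=["low", "mid", "high"])
    print(leads.groupby("band", observed=True)["closed_won"].mean())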

The Sirius Decision

In many cases, initial lead scoring success allows for additional experimentation with lead scoring (e.g., incorporating more advanced scoring models that include predictive capabilities, or applying advanced testing methods). However, no single scoring or testing method should replace the others; we recommend applying multiple methods that inform and cross-check one another. Also keep in mind that a lead scoring model that works today might not remain effective in the future. Continually re-evaluate lead scoring success, and adjust the model to adapt to changing conditions and continue improving results.