Conduct pilot programs to evaluate demand initiatives before committing significant time and money and incurring opportunity costs
Imagine the classic TV comedy Gilligan’s Island if the series had remained true to its pilot episode – with the castaways including a high school teacher instead of the Professor, and two secretaries instead of movie star Ginger and farm girl Mary Ann. Television networks use a pilot episode to test a new series and gauge its chances for success. Shows that are accepted by the network are often changed significantly after the pilot episode – with new characters, different sets and altered storylines – to ensure a fresh and unique approach.

B-to-b marketers use pilot programs to assess the viability and impact of proposed demand creation initiatives. Applying lessons from a pilot program increases an organization’s chances of being successful when the new process, technology or project is implemented on a larger scale. In this issue of SiriusPerspectives, we present a three-step process for effectively piloting demand initiatives.
A pilot program tests a proposed change in a smaller controlled environment before introducing that change more broadly. Pilot programs should be considered when the truth of a hypothesis is unclear (e.g. a proposed answer to a problem, or the expected outcome of a proposed initiative) and can be feasibly and accurately tested with a limited-scale implementation. Three types of change that demand teams can test with pilots include:
Resource allocation. Initiatives in this category that may be tested through a pilot include: adding a function (e.g. teleprospecting) to the organization, adding headcount to the team, outsourcing resources (e.g. using a demand creation agency to outsource marketing automation platform [MAP] tasks) and bringing a currently outsourced resource or function into the company as full-time employees.
Process. Initiatives in this category that may be tested through a pilot include: creating a new service-level agreement (SLA) between teleprospecting and marketing to determine if lead handoff rules are working, and changing an existing SLA between field sales and marketing to determine if sales qualified leads that have stopped engaging should be recycled back to marketing for further nurturing.
Technology. Initiatives in this category that may be tested through a pilot include: implementing a new technology platform or migrating from one platform to another – e.g. adding an event management platform to an existing marketing technology stack to manage logistics, or determining whether it makes sense to migrate from one MAP to another.
A pilot requires the same amount of planning as any other project a team would typically implement. However, many b-to-b marketing organizations have a limited period of time to get a project up and running, often with multiple and competing priorities. As a result, they often end up putting processes, people or technology in place too quickly. Although those involved may be doing their best, they may not necessarily have the experience to know what does and doesn’t work. Use the following steps to create a plan that addresses the pilot’s objectives, scope, tools, implementation or execution plan, team member responsibilities, budget and timeline:
State the goal. Be specific about the pilot’s goals. The hypothesis is that the proposed action can help move the organization from the current state to a future state, so make sure the desired future state is clearly stated.
Set expectations. Pilots are experiments, and experiments are riddled with uncertainty, so set clear expectations, gather stakeholder input and have team members work together to define what outcomes would constitute a successful or unsuccessful pilot. Poorly defined expectations lead to misaligned objectives, differing perceptions of the pilot’s success or outcome, and/or disagreements about strategy. Some differences of opinion may not be related to the outcome and the results at all, but rather to a stakeholder’s resistance to change or lack of involvement during the process.
Control the variables. Carefully consider all of the factors that may inadvertently influence the outcome of the pilot. Control for all possible explanations, and ensure that teams are not misattributing success. For example, if the organization is changing a sales process, better results may be caused by salespeople changing their behavior for a short period of time before reverting to their previous habits. The SLA pilot might be the reason people paid attention, but not the direct reason for improvement.
Review assumptions. The hypothesis is built on many previously made assumptions (e.g. the team’s level of knowledge and capabilities). Review these assumptions when planning for the pilot, then adjust them based on the outcome of the pilot before implementing the initiative on a larger scale.
Define test scenarios. Well-defined scenarios make it easier to determine the pilot’s success and to evaluate and minimize the risk associated with full-scale deployment. For example, an organization may believe it needs a new MAP because its business model has changed from an e-commerce transactional sales model to an enterprise sales model. The assumption is that the existing platform cannot support this new scenario, but a pilot that exercises the existing technology’s capabilities under the new business model can test whether a move to a new MAP is actually necessary.
Collect and track the data. Define what kind of data will be collected and how it will be collected. This may be done through automated systems (MAP, sales force automation [SFA] platform). Another method is to gather subjective or anecdotal metrics and perceptions by giving pilot participants the opportunity to provide feedback through surveys or focus groups. The same data points must be tracked for all pilot participants or scenarios throughout the pilot.
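To make the consistency requirement concrete, the sketch below enforces that every pilot participant record tracks the same data points. It is a minimal illustration, not a prescribed implementation; the metric names (e.g. leads_generated) are hypothetical placeholders for whatever data points the team agrees to track.

```python
from dataclasses import dataclass, field

# Hypothetical metric schema; the names are illustrative, not prescribed.
REQUIRED_METRICS = {"leads_generated", "leads_accepted", "days_to_handoff"}

@dataclass
class PilotRecord:
    participant: str
    metrics: dict = field(default_factory=dict)

    def validate(self) -> bool:
        """Ensure every record tracks the same agreed-upon data points."""
        missing = REQUIRED_METRICS - self.metrics.keys()
        if missing:
            raise ValueError(f"{self.participant} is missing metrics: {sorted(missing)}")
        return True

records = [
    PilotRecord("rep_a", {"leads_generated": 42, "leads_accepted": 30, "days_to_handoff": 3}),
    PilotRecord("rep_b", {"leads_generated": 35, "leads_accepted": 28, "days_to_handoff": 5}),
]
assert all(r.validate() for r in records)
```

In practice this validation would sit on top of whatever the MAP or SFA platform exports; the point is simply that a record missing an agreed data point should fail loudly rather than silently skew the comparison.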
Identify the participants. Select participants carefully, ensuring that the hypothesis can accurately and effectively be proven successful or unsuccessful. For example, if a new process is being piloted with the teleprospecting team, include newer team members as well as veterans who have used the existing process.
Define the duration. Pilots need time to produce significant and relevant results and reveal potential problems prior to a full-scale implementation. For example, it takes time to test the hypothesis that adding tele-resources increases pipeline. However, it’s not always realistic to run a pilot for an extended period of time. Define a proxy that can be used to indicate a positive outcome and evaluate the pilot’s success or failure. Set clear checkpoints along the way to define and document agreed-upon changes. At the end of the pilot, compare the metrics and outcomes against this timeline.
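The checkpoint-and-proxy idea above can be sketched as a simple schedule: review the proxy metric at fixed intervals and compare it against the pre-agreed target at each checkpoint. The 90-day length, 30-day interval and the notion of a qualified-lead proxy are assumptions for illustration only.

```python
from datetime import date, timedelta

# Hypothetical plan: a 90-day pilot reviewed every 30 days against a proxy
# metric (e.g. qualified leads), since full pipeline impact may take longer
# than the pilot itself to materialize.
def checkpoint_dates(start: date, length_days: int = 90, interval_days: int = 30):
    """Return the review dates agreed for the pilot."""
    return [start + timedelta(days=d) for d in range(interval_days, length_days + 1, interval_days)]

def on_track(observed: int, target: int) -> bool:
    """Compare the proxy metric at a checkpoint against the agreed target."""
    return observed >= target

dates = checkpoint_dates(date(2024, 1, 1))  # three checkpoints over 90 days
```

At the end of the pilot, the per-checkpoint readings provide the timeline against which the final metrics and outcomes are compared.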
Once assumptions have been reviewed, scenarios defined, data collection methods established, and participants and duration determined, it’s time to conduct the pilot. When the pilot reaches its agreed-upon end date, the team must collect the findings and produce a report that includes the original hypothesis, the data gathered and the results.
Conduct the pilot. During the pilot phase, the support of management, stakeholders and participants is required. To ensure that participants focus on the task at hand, create a repository where new ideas and future hypotheses that arise during execution can be stored for consideration at a later date, minimizing the risk of slowdowns and interruptions.
Evaluate the results. If the results are consistent with the hypothesis, they support the assumptions that back it up. Consider whether other factors could have been responsible for the result. If other factors cannot be ruled out, consider extending the pilot while controlling for this new factor. If the results were not consistent with the hypothesis, the pilot should not necessarily be written off immediately. It may mean the hypothesis should be adjusted, and another pilot conducted to prove or disprove the new hypothesis.
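A minimal sketch of this evaluation step, assuming the team has agreed on a single uplift threshold as its success criterion: compare the pilot group's result against the baseline and return a coarse verdict. The 10% threshold and pipeline-uplift metric are hypothetical; a real evaluation would also rule out confounding factors, as noted above.

```python
# Hypothetical success criterion: the pilot is judged consistent with the
# hypothesis if the pilot group's uplift over baseline meets a pre-agreed
# minimum threshold.
def evaluate_pilot(baseline: float, pilot: float, min_uplift: float = 0.10) -> str:
    """Return a coarse verdict; confounds must still be ruled out separately."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    uplift = (pilot - baseline) / baseline
    if uplift >= min_uplift:
        return "consistent with hypothesis"
    return "adjust hypothesis and consider a follow-up pilot"
```

For example, an 18% pipeline uplift against a 10% threshold would come back "consistent with hypothesis", which prompts the confound check rather than ending the analysis.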
Determine next steps. In addition to the results, the communications and knowledge transfer that occur throughout the pilot provide valuable feedback that informs next steps. Although a successful pilot does not automatically mean the project should move forward to full-scale implementation, it does give the organization the information required to make a well-informed decision – thus mitigating the level of risk the organization faces when choosing whether to invest in the proposed change.
B-to-b marketing organizations typically do not conduct pilots, most likely because they take time and involve multiple stakeholders, often from different functional roles. Although the marketing function is typically short on time and resources, pilots should be established as a standard process. Any significant change in demand creation resource allocation, processes or technology should be evaluated for the level of uncertainty in its anticipated outcomes and its suitability for a limited-scale, experimental implementation. To get the most out of a pilot, marketers must avoid drawing immediate conclusions without exploring root causes. Many variables may contribute to pilot results, so follow-up investigations must be thorough in order to yield an accurate understanding of the factors contributing to the pilot’s success or failure.