Monitoring the Execution of Strategic Plans – From the IAF Interview


Magate Wildhorse and I thank the Association for Strategic Planning and the International Affairs Forum (IAF) for the opportunity to speak on the topic of Monitoring Strategic Plans ahead of the ASP Conference 2018, whose theme this year is “Bridging the Strategy Execution Gap”.

Below, Meegan Scott shares with Dimitri Neos of the International Affairs Forum the Magate Wildhorse approach to monitoring the implementation of strategic plans. The pre-conference interview addresses the monitoring process for driving strategy execution.

Our approach to pre-implementation evaluation of strategic plans was shared in the previous post, and our approach to post-implementation plan evaluation will be shared in the post following this.

IA-Forum: What is your approach to monitoring and evaluating strategic plans?

The monitoring, which happens pretty soon after planning as we progress along the strategy process, supports evaluation and is part of the control function. Again, the approach we take depends on our role in the process. Our work focuses largely on monitoring for results rather than merely tracking implementation status. We tend to use a blended approach in which we focus on utilization and empowerment of the client. We are proactive about our emphasis on learning from the monitoring and performance measurement process. The aim is to get the best in terms of corrective action to avoid disaster, and also learning and improvement, which helps with risk management and mitigation.

Those are the approaches; now let’s discuss the steps.

The first step would be to conduct a monitoring readiness assessment to decide why and what to monitor.
If we led the planning, the monitoring readiness assessment would not be an exhaustive process, because we would have done that in the prior organizational assessment. In that case it would be more about knowing how ready the client is to take on the challenge, and preparing them to use monitoring, tracking, and reporting tools.

The next thing we focus on is identifying the best tools for them. For that, we consider the type, level, and quality of the organization’s monitoring and reporting experience.

We play the role of evangelist for performance management and measurement. That helps us as well as the organization, and it makes planning easier. We ensure everyone understands the role of monitoring and evaluation and its importance in demonstrating and ensuring accountability. We highlight its importance in providing data for getting stakeholder buy-in and ensuring relevance, as well as for validating and making a judgment about the effectiveness of their programs. It is always interesting when a group walks up and says, “we do well because we did this and that”. But when you begin to speak with founders, management, funders, and staff, they come to understand that what they were taking for doing well is a different story. Or maybe what they were beating themselves up for was not all that bad.

We also emphasize the importance of monitoring for providing information that may lead to winning more funding, getting messages for marketing, and driving innovation. There is also the possibility of using the monitoring process to strengthen resource mobilization, as you may gather evidence that you are an attractive partner for another entity, so you can collaborate, share funds, and drive strategy success. It is part of the reason the often less vigorously pursued monitoring of the external environment is so important. When clients hear of these possibilities, they become more eager to own performance management.

We deliberately include provision for monitoring and evaluation with every strategy planning engagement.

That leads us to the next two steps in the process: deciding together with the client what outputs and outcomes will be monitored, and developing or adapting indicators to monitor the delivery of those outcomes. Our plans are generally supported by a performance management framework and systems, which include key information for monitoring implementation such as the inputs, the outputs, the outcomes, the agreed impacts, and key performance indicators, both for the organization as a whole and for funders and related accountability needs. This assists the organization in getting a handle on its own results and in strengthening or advancing its overall strategy.
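To make the shape of such a framework concrete, here is a minimal sketch of how a plan line item might carry its results chain and indicators, assuming a simple Python representation; the class and field names are illustrative assumptions, not the firm’s actual tooling.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indicator:
    """A key performance indicator attached to a plan line item."""
    name: str
    unit: str                   # e.g. "workshops", "% on-time delivery"
    baseline: Optional[float]   # may be unknown for a brand-new program
    target: float               # agreed target for the planning period
    data_source: str            # where the evidence will come from
    frequency: str              # e.g. "monthly", "quarterly"

@dataclass
class PlanLineItem:
    """One line of the strategic plan with its results chain and indicators."""
    objective: str
    inputs: List[str]
    outputs: List[str]
    outcomes: List[str]
    intended_impact: str
    indicators: List[Indicator] = field(default_factory=list)

# Hypothetical example of one line item and its indicator
outreach = PlanLineItem(
    objective="Expand community outreach",
    inputs=["2 outreach officers", "training budget"],
    outputs=["40 workshops delivered"],
    outcomes=["Participants adopt improved practices"],
    intended_impact="Stronger community resilience",
    indicators=[Indicator("Workshops delivered", "workshops", baseline=12.0,
                          target=40.0, data_source="activity logs",
                          frequency="monthly")],
)
```

Keeping the results chain and its measures together in this way is what allows the same plan document to be expanded later into monitoring reports.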

We can now look at the next step in the monitoring process — gathering baseline data.
This includes historical organizational performance data or program-specific data. At times we don’t have baseline data, especially with new clients, or when the data gap is limited to a particular program they are just rolling out. If the organization doesn’t have baseline data, we might be able to gather data from other sources such as statistics from government departments, interviews, or literature reviews, and use that to start a baseline. We might even find some in a pre-planning organizational or environmental assessment. If not, the client knows that the monitoring and performance data gathered further along in the process will help to provide the baseline going forward.

From there we move toward the heart of the matter: ensuring monitoring is planned and works to deliver improvement or the desired impact. It means we have to lead the team in setting targets that are tied to the intended change or results, whether we are developing a new monitoring system or adjusting an existing one. This is done with the vision, desired results, or end state in mind.

It involves a review of cost information, budgets, funding, infrastructure and technology capability, human resources, and the timing and frequency of data collection, responsibility for data collection, levels of effort, method of collection, quality measures for indicators, and so on.
The format of data or evidence is also important. The information gathered is used in setting the monitoring targets as well as for tracking and analysis during monitoring. Targets are often set during planning or during the quarterly reporting process, but they could also be set for a specific assignment, such as a one-off results monitoring intervention, with the contracting organization left with guidance for continuing the process.

The process involves the integration of indicators, providing for intended users and their information needs, and for the desired impact at the global, national, organizational, customer, beneficiary or client, compliance, and other stakeholder levels. Both long- and short-term targets and indicator targets must be included. Data collected is verified, in some instances sources are verified, tools and methods may be tested and adjusted, samples may be tested, and so on.

We capture that kind of information to a large extent in our plan documents. We do so for each line item of the plan, allowing for expansion of those items to ensure the strategy is broken out properly, supported by indicators and measures, and trickled down to the operations plan. That makes monitoring, or designing the monitoring process, easier.

So, when we get to the monitoring activity itself and the creation of a monitoring system, our plan documents can be tailored to create performance monitoring and management reports. They can be further adjusted to create the monthly report as well. Into it you input cumulative performance information from the day-to-day activity processes that is suitable for capture at a strategic level, as well as higher-level indicators.

We generally end up with something that can be adjusted easily for monthly, quarterly, and annual reports. Each quarterly report carries cumulative performance data forward into the next quarter and the final quarter, and the third-quarter report is adjusted to form the fourth-quarter, or annual, report.
That forms a key part of the monitoring system. A compendium of indicators with a dictionary is also important in monitoring, so we generally give a complimentary compendium of indicators whenever we develop a strategic plan or monitoring system.
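As a rough illustration of that cumulative roll-up, a minimal sketch follows; the figures and the single hypothetical indicator are invented for illustration and are not drawn from any client report.

```python
from collections import defaultdict

# Hypothetical monthly actuals for one indicator, keyed by (year, month)
monthly_actuals = {
    (2018, 1): 3, (2018, 2): 4, (2018, 3): 5,     # Q1
    (2018, 4): 2, (2018, 5): 6, (2018, 6): 4,     # Q2
    (2018, 7): 5, (2018, 8): 3, (2018, 9): 4,     # Q3
    (2018, 10): 6, (2018, 11): 5, (2018, 12): 7,  # Q4
}

def quarterly_rollup(monthly):
    """Sum monthly actuals into quarterly totals."""
    quarters = defaultdict(int)
    for (year, month), value in monthly.items():
        quarters[(year, (month - 1) // 3 + 1)] += value
    return dict(sorted(quarters.items()))

cumulative = 0
for (year, quarter), total in quarterly_rollup(monthly_actuals).items():
    cumulative += total
    print(f"{year} Q{quarter}: quarter total = {total}, cumulative to date = {cumulative}")

# The cumulative figure at the end of Q4 doubles as the annual total,
# which is why the fourth-quarter report can be adjusted into the annual report.
print(f"Annual total: {cumulative}")
```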

The actual monitoring focuses on tracking progress, quality, standards, and status on the performance targets and results. It is the Check that helps drive improvement in the PDCA (Plan, Do, Check, Act) process. We raise alarms or present green lights, and point out areas and opportunities for improvement when communicating the findings. The comparison of planned versus actual results is key to helping teams understand their progress and the level of urgency for making adjustments.
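For example, the planned-versus-actual comparison at the heart of the Check step can be reduced to something as simple as the sketch below; the thresholds and indicator names are illustrative assumptions, not a prescribed standard.

```python
def check_progress(indicator, planned, actual, warn_below=0.9, alarm_below=0.7):
    """Compare actual results against plan and flag the status (the PDCA 'Check')."""
    ratio = actual / planned if planned else 0.0
    if ratio >= warn_below:
        status = "GREEN LIGHT"   # on or near target
    elif ratio >= alarm_below:
        status = "WATCH"         # an area or opportunity for improvement
    else:
        status = "ALARM"         # corrective action needed urgently
    return f"{indicator}: planned {planned}, actual {actual} ({ratio:.0%}) -> {status}"

# Illustrative quarterly figures
print(check_progress("Workshops delivered", planned=10, actual=10))
print(check_progress("New partnerships signed", planned=4, actual=3))
print(check_progress("Funding proposals submitted", planned=6, actual=3))
```

Output like this makes the level of urgency visible at a glance, which is the point of comparing planned versus actual results.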

We combine the findings from the surveillance we have performed through reviewing reports, documents, and other sources. Here we are looking for consistency and truth in what is reported. We conduct surveys, interviews, sampling, and meetings to monitor the validity and reliability of the information, as well as to identify any problems. Because the team is busy performing management duties, they don’t have time to take on those activities. We do that, tie everything together, and then develop a total analysis with recommendations.

Out of that process, we end up identifying any emergent strategies. We analyze problems, differences between plan theory and reality, deviations from planned activities, and even deviations from how the program logic said it would work. The information gathered is then used to come up with suggestions and recommendations, or to draw out corrective measures and follow-ups. We identify gaps that may have been overlooked and new gaps that arise out of the implementation process, and we gain insights for updating the strategic plan.

Sometimes you may set up a business process improvement team, or something similar, to handle an improvement, make a change, or push through an initiative you see falling behind. It could be a business process improvement, but it could also be something for resourcing a planned activity that is not happening for some reason. That is an example of the Act. We also look to see whether the annual review was done, or whether evaluations and reports were completed and submitted if the work is for an organization implementing donor-funded or government-funded projects. We do that to see whether the external evaluation was done when it was scheduled to be done.

We generally host monitoring and reporting meetings that allow for management response and input. At this point we have already identified internal champions, and if we are an adjunct external or internal team member, we play a key role in championing and facilitating the performance measurement meetings, or in leading them with strong backing from the leadership of the organization. Leading the meetings alongside leadership allows us to guide the floor and facilitate the meetings toward learning and improvement. At times we might drive the process in order to get the leadership into the driver’s seat for championing performance.

In our plans (strategic or monitoring) we ensure there is a performance measurement calendar and visuals that support the PDCA (Plan, Do, Check, Act) framework for continuous improvement, and results-based management or other blends. The visuals consist of two separate triangles. One triangle has expected outcomes and impacts at the level of the organization, which can also include the societal level of impact. In the middle are the program, initiative, service, and product indicators related to outcomes, plus others related to output, efficiency, cost, and effectiveness. At the base of the triangle are the internal and individual measures of performance coming from input and output processes. To tie those up, compliance and quality improvement measures run up and down the sides, so the team can see they have to be included.

The other triangle includes objectives and strategies, levels of effort, and a timeline of responsibilities and targets running down it. It is a PDCA visual to remind them to communicate findings, to act, and to improve.

Last, to ensure utility, we package the information and communicate the findings so that the potential impacts, the feedback and information for driving improvement, any decisions, and the risks are communicated. We facilitate moments for reflection and look at what worked, what went well, what didn’t go well, and why. We also look at what we could have done better.

Overall, our monitoring approach involves establishing the importance of monitoring and reporting for organizational growth and accountability. We look at information needs in terms of assessments, the targets and indicators that were agreed, quality and standard measures, and the outcomes to monitor. We also look for other data sets that emerge during execution. We provide training in the use of performance monitoring and measurement frameworks and tools. We establish teams and assign roles for performance management and measurement, including the board and its responsibility for the corporate strategy plan. And we consider and design the reports and reporting tools to suit the different needs and to ensure improvement processes are supported.

If we take a step back to look at how we plan to support monitoring, you would understand why we include the annual report and ensure that the AGM and the financial reporting are on the calendar in the plan document.

Hosted by IAF ahead of ASPConf2018.

See also:

Interview Transcript on IAF

Post-execution evaluation of the corporate strategy plan

Pre-implementation evaluation of strategic plans

Strategy Execution Challenges–“Bridging the Strategy Execution Gap”

———————

Something good happened in Rosemont, Chicago. The Association for Strategic Planning Conference 2018.

Copyright © 2018 International Affairs Forum, Association for Strategic Planning, Magate Wildhorse, Meegan Scott
All Rights Reserved
