Monitoring the Execution of Strategic Plans – From the IAF Interview


Magate Wildhorse and I thank the Association for Strategic Planning and the International Affairs Forum (IAF) for the opportunity to speak on the topic of Monitoring Strategic Plans ahead of the ASP Conference 2018. This year’s conference theme is “Bridging the Strategy Execution Gap”.

Below, Meegan Scott shares with Dimitri Neos of the International Affairs Forum the Magate Wildhorse approach to monitoring the implementation of strategic plans. The pre-conference interview addresses the monitoring process for driving strategy execution.

Our approach to pre-implementation evaluation of strategic plans was shared in the previous post, and our post-implementation plan evaluation approach will be shared in the next post.

IA-Forum: What is your approach to Monitoring and Evaluating strategic plans?

Meegan Scott: The monitoring, which happens pretty soon after planning as we progress along the strategy process, supports evaluation and is part of the control function. Again, the approach that we’ll take depends on our role in the process. Our work focuses largely on monitoring for results versus merely tracking implementation status. We tend to use a blended approach where we focus on utilization and empowerment of the client. We are proactive about our emphasis on learning from the monitoring and performance measurement process. The aim is to get the best in terms of corrective actions to avoid disaster, and also for learning and improvement, which helps with risk management and mitigation.

Those are the approaches; now let’s discuss the steps.

The first step would be to conduct a monitoring readiness assessment to decide why and what to monitor.
If we led the planning, the monitoring readiness assessment would not be an exhaustive process, because we would have done that in the prior organizational assessment. In that case it would be more about knowing how ready the client is to take on the challenge, and preparing them to use monitoring, tracking, and reporting tools.

The next thing that we would focus on is identifying the best tools for them. For that, we consider the type, the level, and the quality of the organization’s monitoring and reporting experience.

We play the role of evangelist for performance management and measurement. That helps us as well as the organization, and it makes planning easier. We ensure everyone understands the role of monitoring and evaluation and its importance in demonstrating and ensuring accountability. We highlight its importance in providing data for getting stakeholder buy-in and ensuring relevance, as well as for validating and making a judgment about the effectiveness of their programs. It is always interesting when a group walks up and says, “we do well because we did this and that”. But when you begin to speak with founders, management, funders, and staff, they come to understand that what they were taking for doing well is a different story. Or maybe what they were beating themselves up over was not all that bad.

We also emphasize the importance of monitoring for providing information that may lead to winning more funding, supplying messages for marketing, and driving innovation. There’s also the possibility of using the monitoring process to strengthen resource mobilization, as you may gather evidence that you’re an attractive partner for another entity. So you can collaborate, share funds, and drive strategy success. It is part of the reason the often less vigorously pursued monitoring of the external environment is so important. When clients hear of these possibilities it makes them more eager to own performance management.

We deliberately include provision for monitoring and evaluation with every strategy planning engagement.

That leads us to the next two steps in the process: deciding together with the client what outputs and outcomes will be monitored, and developing or adapting indicators to monitor the delivery of those outcomes. Our plans are generally supported by a performance management framework and systems, which include key information for monitoring implementation such as the inputs, the outputs, the outcomes, the agreed impacts, and key performance indicators, both for the organization as a whole and for funders and related accountability needs. This assists the organization with getting a handle on its own results and with strengthening or advancing its overall strategy.
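Purely as an illustration of the kind of information such a framework carries per result line (the field names and example values below are hypothetical, not the actual Magate Wildhorse framework), one line could be modeled as a simple record:

```python
from dataclasses import dataclass, field

@dataclass
class ResultLine:
    """One line of a performance management framework (illustrative fields only)."""
    outcome: str                    # the agreed result being monitored
    inputs: list                    # resources committed
    outputs: list                   # deliverables produced
    indicators: list                # organization-wide key performance indicators
    funder_indicators: list = field(default_factory=list)  # funder/accountability KPIs

# A hypothetical line from an NGO's plan
line = ResultLine(
    outcome="Increased youth employment in the region",
    inputs=["2 trainers", "training budget"],
    outputs=["12 workshops delivered"],
    indicators=["% of trainees employed within 6 months"],
)
print(line.outcome)
```

Structuring each line this way is what lets the same plan document later be filtered into organization-wide versus funder-specific reports.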

We can now look at the next step in the monitoring process — gathering baseline data.
This includes historical organizational performance data or program-specific data. At times we don’t have baseline data, especially if they are new clients, or maybe the data gap is just for a particular program they’re rolling out. If the organization doesn’t have baseline data, we might be able to gather data from other sources, such as statistics from government departments, interviews, or literature reviews, and use that to start a baseline. We might even find some in a pre-planning organizational or environmental assessment. If not, they know that the monitoring and performance data gathered further in the process will help to provide the baseline going forward.

From there we move towards the heart of the matter: ensuring monitoring is planned and works to deliver improvement or the desired impact. It means we have to lead the team in setting targets that are tied to the intended change or results, developing or adjusting the system as needed in order to conduct the monitoring. This is done with the vision, desired results, or end state in mind.

It involves a review of cost information, budgets, funding, infrastructure and technology capability, human resources, the timing and frequency of collecting data, responsibility for data collection, levels of effort, methods of collection, quality measures for indicators, and so on.
The format of data or evidence is also important. The information gathered is used in setting the monitoring targets as well as for tracking and analysis during monitoring. Targets are often set during planning, or during the quarterly reporting process. But it could also be a specific assignment for a one-off results monitoring intervention, with the contracting organization left with guidance for continuing the process.

The process involves the integration of indicators, providing for intended users and their information needs, and the desired impact at the global, national, organizational, customer, beneficiary or client, compliance, and other stakeholder levels. Both long- and short-term targets and indicator targets must be included. Data collected is verified; in some instances sources are verified, tools and methods may be tested and adjusted, and samples may be tested.

We capture that kind of information to a large extent in our plan documents. We do so for each line item of the plan, allowing for expansion of those items to ensure that the strategy is broken out properly, supported by indicators and measures that support the strategy and the strategic plan, and trickled down to the operations plan. It makes monitoring, or designing the monitoring process, easier.

So, when we get to the monitoring activity itself and the creation of a monitoring system, our plan documents can be tailored to create performance monitoring and management reports. They can be further adjusted to create the monthly report as well. Into it you input cumulative performance information that comes from the day-to-day activity processes and is suitable to be captured at a strategic level, as well as higher-level indicators.

We generally end up with something that can be adjusted easily for monthly, quarterly, and annual reports. Each quarterly report displays performance data cumulated from the preceding quarters, and the third-quarter report is adjusted to form the fourth-quarter or annual report.
That forms a key part of the monitoring system. A compendium of indicators with a dictionary is also important in monitoring, so we generally give a complimentary compendium of indicators whenever we develop a strategic plan or monitoring system.
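The cumulative roll-up from monthly figures into quarterly and annual reports can be sketched as follows (a generic illustration with hypothetical figures, not the firm’s actual report template):

```python
# Monthly actuals for one indicator over a plan year (hypothetical figures)
monthly = [10, 12, 9, 14, 11, 13, 15, 12, 10, 16, 14, 12]

# Each quarterly report shows data cumulated to the end of that quarter
quarterly_cumulative = [sum(monthly[: 3 * (q + 1)]) for q in range(4)]

# The annual figure is simply the fourth-quarter cumulative total,
# which is why the Q3 report can be extended to form the annual report
annual = quarterly_cumulative[3]

print(quarterly_cumulative)  # [31, 69, 106, 148]
print(annual)                # 148
```

Because each quarter only adds the latest three months to an already-cumulated total, the same report shell can serve every reporting period.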

The actual monitoring focuses on tracking progress, quality, standards, and status against the performance targets and results. It is the Check that helps drive improvement in the PDCA (Plan, Do, Check, Act) process. We will raise alarms or present green lights, and point out areas and opportunities for improvement, in communicating the findings. The comparison of planned versus actual results is key to helping teams understand their progress and the level of urgency for making adjustments.
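The planned-versus-actual comparison behind those alarms and green lights can be sketched as a simple achievement ratio with a traffic-light rating (the thresholds below are hypothetical, chosen only for illustration):

```python
def rag_status(planned: float, actual: float,
               amber_below: float = 0.9, red_below: float = 0.75):
    """Return (achievement ratio, red/amber/green status) for one target."""
    ratio = actual / planned
    if ratio < red_below:
        return ratio, "red"      # raise the alarm: corrective action needed
    if ratio < amber_below:
        return ratio, "amber"    # watch closely; adjust if the trend continues
    return ratio, "green"        # on track

print(rag_status(planned=200, actual=130))  # (0.65, 'red')
print(rag_status(planned=200, actual=190))  # (0.95, 'green')
```

In practice the thresholds would come from the targets and quality measures agreed with the client during planning.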

We combine the findings from the surveillance that we’ve performed through reviewing reports, documents, and other sources. Here, we’re looking for consistency and truth in terms of what is reported. We’ll perform surveys, interviews, sampling, and meetings to monitor the validity and reliability of information, as well as identify any problems. Because the team is busy performing management duties, they don’t have time to take on those activities. We do that, tie everything together, and then develop a total analysis with recommendations.

Out of that process, we end up identifying any emergent strategies. We analyze problems; differences between the plan’s theories and reality; deviations from planned activities; even deviations from how the program logic said it would work. The information gathered is then used to come up with suggestions and recommendations, or to draw out corrective measures and follow-ups. We identify gaps that may have been overlooked and new gaps that arise out of the implementation process, and gain insight for updating the strategy plan.

Sometimes you may set up a business process improvement team or something similar to handle an improvement, make that change, or push through an initiative if you see it’s falling behind. It could be a business process improvement, but it could also be something for resourcing a planned activity that’s not happening for some reason. That is an example of the Act step. We look to see if the annual review was done, or if evaluations and reports were completed and submitted when the work is for an organization implementing donor-funded or government-funded projects. We do that to see if they did their external evaluation in accordance with when it was scheduled to be done.

We generally host monitoring and reporting meetings that allow for management response and input. At this point, we have already identified internal champions, and if we are an adjunct external or internal team member, we play a key role in championing and facilitating the performance measurement meetings, or in leading them with strong backing from the leadership of the organization. Leading the meetings alongside leadership allows us to guide the floor and facilitate meetings towards learning and improvement. At times we might be driving the process in order to get the leadership into the driver’s seat for championing performance.

In our plans (strategic or monitoring) we ensure there’s a performance measurement calendar and visuals that support the PDCA (Plan, Do, Check, Act) framework for continuous improvement, and results-based management or other blends. The visuals comprise two separate triangles. One triangle has expected outcomes and impacts at the level of the organization; this can also include the societal level of impact. In the middle of it are the program, initiative, service, and product indicators related to outcomes, plus others related to outputs, efficiency, cost, and effectiveness. At the base of the triangle are the internal and individual measures for individual performance coming from input and output processes. To tie those together, compliance and quality improvement measures stretch up and down the sides, so the team can see they have to be included.

The other triangle includes objectives, strategies, levels of effort, and a responsibility-and-targets timeline running down it. This is a PDCA visual to remind them to communicate findings, to act, and to improve.

Last, to ensure utility, we package the information and communicate the findings so that the potential impacts, feedback and information for driving improvement, any decisions, and risk are communicated. We facilitate moments for reflection and look at what worked, what went well, what didn’t go well, and why. We also ask what we could have done better.

Overall, our monitoring approach involves establishing the importance of monitoring and reporting for organizational growth and accountability. We look at information needs in terms of the assessments, targets and indicators that were agreed, quality and standard measures, and the outcomes to monitor. We also look for other data sets that emerge during execution. We provide training in the use of performance monitoring and measurement frameworks and tools. We establish teams and assign roles for performance management and measurement, including the board and its responsibility for the corporate strategy plan. And we design the reports and reporting tools to suit the different needs and to ensure improvement processes are supported.

If you take a step back to look at how we plan to support monitoring, you will understand why we include the annual report, and ensure that the AGM and the financial reporting are on the calendar in the plan document.

Hosted by IAF ahead of ASPConf2018.

See also:

Interview Transcript on IAF

Post-execution evaluation of the corporate strategy plan

Pre-implementation evaluation of strategic plans

Strategy Execution Challenges–“Bridging the Strategy Execution Gap”

———————

Something good happened in Rosemont, Chicago. The Association for Strategic Planning Conference 2018.

Copyright © 2018 International Affairs Forum, Association for Strategic Planning, Magate Wildhorse, Meegan Scott
All Rights Reserved


Strategic Plan Evaluation – From IAF Interview


Magate Wildhorse and I thank the Association for Strategic Planning and the International Affairs Forum (IAF) for the opportunity to speak on the following topics ahead of the ASP Conference 2018. The theme of this year’s conference was “Bridging the Strategy Execution Gap”. We had a wonderful time in Chicago. We met many strategy experts from various industries and countries, learned and shared during various conference sessions, and came home with a prize. Ha ha, see why you can’t afford to miss the ASP Conference?

Following are the full-length scripts from my pre-conference interviews hosted by Dimitri Neos of the IAF.

The interviews are shared in three parts and cover the following topics:

  • Part 1: Pre-Implementation Evaluation of the Strategic Plan
  • Part 2: Monitoring of the Strategic Plan
  • Part 3: Post-Execution Evaluation of the Strategic Plan

We also discussed specific challenges related to closing the strategy-to-execution gap.

From our ASPConf2018 pre-conference interviews.

Part 1: Pre-Implementation Evaluation of the Strategic Plan

IA-Forum: What is your approach to Monitoring and Evaluating strategic plans?

MEEGAN SCOTT: Our approach to the evaluation of strategic plans depends on the stage in the strategy process, the purpose of the evaluation, and the terms of reference for the evaluation. Our role depends on whether we are adjunct internal or external consultants, and on who commissioned the evaluation.

I mentioned internal adjunct because that’s a service we provide where we are adjunct to a team, not just involved for a couple of days.

The evaluation would involve examining the context as well as the basis and logic of the strategy contained in the plan. We would also be comparing expected to actual results, identifying emergent strategies, making recommendations for corrective actions, and developing recommendations for performance improvement.

How it actually plays out at Magate Wildhorse depends on whether we are doing the evaluation prior to execution. If it’s before execution, we focus on the content of the plan and we look for the typical consistency, balance, consonance, feasibility, advantage, completeness, clarity, and actionability. When we look at consistency, we are looking at strategic intent, the framework, and so forth.

Depending on the strategy, the framework, and the context, we look at questions that focus on things like: Are the strategic intent and strategy framework consistent? Was a value chain considered, and to what extent, for improving products or service delivery, or both? Has the strategy identity of the organization been clearly articulated? Is it relevant? Is there a mission statement; vision, values, and culture statements? Is there a value proposition? We also look to answer the question: were there provisions and powerful messages for communicating the strategy identity?

Does the plan provide for building human resource and leadership capacity in response to internal gaps or a desired future state of the organization?

Another thing we look at is what the planned activities and initiatives are, and how solid they are. For example, activities and initiatives for retaining or growing the membership or customer base, as well as those for capturing non-users. We also ask: Is there a timeline, or calendared activities and processes, for strategy renewal and updates? What is the frequency of updating the strategy plan, and is there version control? What is the efficiency of the plan development process?

That last question helps us, as well as the client, to do better in the future.

You have to learn what you could have done differently, what you need to get the clients to do differently, and what they could have done differently.

During the analysis of the processes, we would use methods such as customer satisfaction assessments. We also look at time sheets, schedules and journals for assessing the time and process. We review our own reflexive journal for every strategic planning exercise; I get a completely new journal.

Organizational history and a review of assumptions and sharing are important. You have to examine when and why the organization started. How has it changed, what has impacted the change, and what is the client or organization looking for in the future?

That process can really aid organizational learning as well as raise flags or clarify concerns related to mission drift.
We ask clients to answer questions related to history and the processes that they can do away with. We also look for mechanisms for ongoing surveillance as well as balance in accordance with the strategic framework. Key questions here would consider: How well does the plan address risk, the assumptions, and the contingencies? Or does it address them at all?

To see what’s in the control kit, or better, what else is in the control kit, we look at: timelines, levels of effort, performance standards, quality standards, and performance indicators.

The mix of lead, lagging, SMART, and SMARTER indicators, SMART being Specific, Measurable, Attainable, Realistic, and Time-bound, with SMARTER adding Extending and Rewarding.
For ensuring the plan is actionable, we also look at work plans and budgets: how well do they support each other?

In the next post we share the discussion on Monitoring the Strategic Plan.

Hosted by IAF ahead of ASPConf2018.

See also:

Bridging the Strategy Execution Gap — Select Challenges

Monitoring strategic plans

Evaluating the corporate strategy plan post-execution (post-implementation)

Magate Wildhorse, The Noësis & Artificial Intelligence

————————–

ASPConf2018, the Association for Strategic Planning Conference, an event for your professional development and business calendars.



Post-Execution Evaluation of Strategic Plans – IAF Interview



Magate Wildhorse and I thank the Association for Strategic Planning and the International Affairs Forum (IAF) for the opportunity to speak on the topic of Evaluating Strategic Plans Post-Execution ahead of the ASP Conference 2018. This year’s conference theme is “Bridging the Strategy Execution Gap”.

Below, Meegan Scott shares with Dimitri Neos of the International Affairs Forum the Magate Wildhorse approach to evaluating strategic plans post-execution. The pre-conference interview addresses post-implementation strategic plan evaluation, a best practice for driving strategy execution success.

Our approach to pre-implementation evaluation of strategic plans was shared in the previous post. In the second post of the series we addressed monitoring the implementation of strategic plans.

IA-Forum:  What about the post-execution evaluation process?

Meegan Scott: The task at hand in post-implementation evaluation is to make a judgement about the strength of the organization at a milestone review period (when we’re asked to do the evaluation).

It could be a Mid-term Review of a Plan or at the end of a Plan Period.

We ask if the organization is stronger at the end of that milestone period or plan period than when the plan was created and at the start of execution. Was the strategy executed successfully? This is an attempt to assess the effectiveness of the plan in guiding the organization towards achieving improved performance. We look at that in terms of effectiveness, efficiency, relevance, financial viability, and cost-effectiveness; and for some types of entities, we would go deeper into quality aspects.

For that type of plan, let’s say for a manufacturer of clothing, we may use the Hoshin Planning Model to add those related lines of questioning to the evaluation. In general, we also look at how the plan helps the organization adjust to changes in the environment. These include political factors, social factors, competitors, inflation, interest rates, legislation, and even ecological factors. Sometimes we’ll find that entities do not know all the governing legislation affecting them. So we normally place a table at the front of the plan that lists the governing legislation.

We also look at the plan logic and the premises and predictions. How did those work according to plan? Were they accurate, to what extent, and what needs to be adjusted? We look at whether or not the plan helped to improve motivation in a desired culture and advance the mission of the organization. We also look at its impact on the organization and on its history.

Did it help to create new history? Did the plan carry out what was meant to be done? Did it help to strengthen and improve financial management, partnerships, program management, leadership at different levels, and HR capacity to support both the present and the desired future? Analyzing HR capacity is in part to help management retain tacit knowledge in the organization rather than simply waiting to hire staff.

Typically, the evaluation will take the form of a self-assessment. Even though it may be a request for an evaluation by a donor, we try to make it into a self-assessment so that the client can benefit from owning and growing that culture of performance measurement and improvement.

The client also benefits from receiving information for decision-making related to their strategic choices for strategy updates and reformulation for the next plan and milestone period. That adds much more benefit than if you did the evaluation with merely accountability in mind.  We therefore approach the evaluation with a view to gathering performance information, meeting accountability requirements, and to guide resource allocation.  Another important thing we look at is the infrastructure for delivering strategy.

If the organization is implementing multiple programs, as would be the case for a government department and many NGOs, the approach would be heavily influenced by the terms of reference. Those come with a call for proposals, versus the case where the organization itself came up with the idea and asked for a proposal. In the latter case we are left with greater leverage in designing what it is that we will be doing.

Our approach includes a blend of evaluation approaches. This depends on the competence of the organization in collecting and using performance information, and on the information needs outlined in, or that we glean from, the call for the evaluation and the intended users. That blend would involve components of the utilization-focused evaluation approach.

It would include consultations for ensuring that the information collected will be of benefit and is what is desired by the organization and its stakeholders.  We may even use a theory-based evaluation approach for assessing the logic for addressing a particular problem, the effectiveness, and the context.

We would want to look at the theory of change, how it’s holding up against what was expected, the participants and their attitude, and how their participation impacts the outcomes for them. We could also use a more all-inclusive strategic evaluation approach: a strategic evaluation into the outcomes and the impact on the target population.

Irrespective of the evaluation approach or blend thereof, we would consider planned results against actual results and unintended results.

So, going back to the strategic evaluation: we’d look at the results and service levels, as well as whatever they are creating, selling, or giving away. For outcomes, our examination would be in terms of their relevance and effectiveness. For outputs, the focus would be the products or services and how efficient the organization was in producing them. Other output-related questions to answer would be along the lines of how cost-effective it was to deliver the solutions, and the quality of the outputs that were delivered.
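The efficiency and cost-effectiveness questions about outputs reduce to a couple of simple ratios. As a generic illustration (all figures below are hypothetical):

```python
# Hypothetical program figures for one output line
total_cost = 50_000.0      # spend on delivering workshops
outputs_delivered = 40     # workshops actually delivered
outputs_planned = 50

cost_per_output = total_cost / outputs_delivered     # efficiency of production
delivery_rate = outputs_delivered / outputs_planned  # share of planned outputs delivered

print(f"Cost per workshop: {cost_per_output:.2f}")   # Cost per workshop: 1250.00
print(f"Delivery rate: {delivery_rate:.0%}")         # Delivery rate: 80%
```

The same ratios, compared against benchmarks or prior periods, feed the judgment about how efficiently and cost-effectively the solutions were delivered.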

You also want to analyze the internal management and leadership as they relate to the output processes involved and to developing them.

So you perform an individual-level assessment and review of measures. This can be an area of challenge or resistance; the moment you begin to ask for job descriptions and such, expect a break or stop in information flow. At the plan level, we review measures, indicators, strategy identity, et cetera.

A key component of the exercise is the management response session that we’ll lead for discussing the findings, the recommendations, and the judgment. From that, you’ll get feedback on how management feels about the judgment and the findings. This may result in some insight about the context and maybe some adjustment. You will also draw out of that process actions for improvement and try to get some calendar and resource commitments towards them.

A review of external literature and internal organizational documents is part of the process. External literature includes literature on the external environment. When looking at internal literature, we examine their reporting and administrative documents, the operations plan, the corporate strategy plan, performance reports, and minutes from board meetings. Other methods or lines of evidence for data analysis could include conducting surveys, interviews, and consultations. In some instances we must calculate, develop estimates, or undertake social media searches. At times we even have to look at or conduct lab research or participatory research.

The output would typically be an organizational assessment and development report. This would include the proposed strategic options and choices for informing the development of a new strategy plan, a strategy update, or a plan for the next plan period.

Let’s turn to the kinds of questions we would ask. Different evaluations may have their own unique questions, but in general we’d ask questions such as: What were the goals and objectives in the plan? How did the organization perform based on the strategic intent stated in the plan and its related goals? We’d also ask how effectively the organization used the plan to manage the delivery of its results; that is, the priorities, the focus areas, the approaches, and accountability. Another question is: how easy or difficult did the plan make the performance management and measurement process? Because if it is just a summary of a plan and not fully elaborated and supported by measurable indicators, there’s going to be trouble at the execution stage.

An important question to ask is: does the plan include an alignment mechanism for cascading and aligning? We also look at whether or not the major initiatives and commitments were delivered on time and on budget. If there were deviations, how wide was the spread and what needs to be changed? Whether they finished before schedule, late, or on time, we want to understand the reasons. We also look at the overall workings of the plan logic based on the theory or theories of change, the strategy maps and strategy framework, or the input-output map, or any combination of them. We also look at whether or not the scope of operations is made clear by the plan. How suitable the initiative or initiatives were for building capacity and advancing the strategic direction articulated in the plan is another area we examine.

So, to arrive at a judgment we’ll use multiple sources of qualitative and quantitative evidence. We look at the use of the plan and the process, and at the annual and periodic review and strategy update. Are they following that guide and are they updating the plan? We ask if a priority trade-off happened and, if so, why. We also look at the effectiveness of the plan in communicating to the board, to management, to partners, to funders, and to staff. Do they understand the plan or do they find it to be a burdensome document? Does it address the value chain and how they’re going to build product relationships and leverage them in terms of the products, services, or the supply chain?

Moreover, we analyze how effective the plan is in articulating the strategic identity.  So that’s how we do that post-implementation evaluation.

See also:

Interview Transcript on IAF

Strategy Execution Challenges

Pre-implementation evaluation of the corporate strategy plan

Monitoring of the corporate strategy plan

—————–


