Evaluation is evaluation: if you know how to conduct an evaluation, you just need to apply that knowledge to other circumstances.
Although that "case closed" or "end of story" perspective may appear obvious, it is a risky and false assertion. Baking a cake and designing an evaluation have two things in common: the approach taken can be simple or complex, and both will fail to achieve their intended purpose if the appropriate method is not selected relative to the desired result.
Cappuccino Cheesecake – Source: Wilton, http://www.wilton.com
Just as the method and approach for making a cheesecake differ from those for making a rich fruit cake, so too do the methods and approaches for conducting evaluations.
Evaluation design and methods vary according to the stage in the life cycle of the evaluand, the availability and condition of data, and the suitability of the method for providing solid evidence on the issue that is central to the evaluation.[1] Budget, ethics, objectivity, relevance to decision-making, and time are among several other factors that shape the environment in which an evaluation is conducted; together with the factors mentioned above, they serve as the basis for selecting the most appropriate evaluation method.
To save time deciding on an evaluation design, you should revisit the Terms of Reference (TOR) to determine the level of program results, and what is involved at the output and outcome levels.[2] If you have passed the proposal-writing stage, you should revisit not only the TOR but also engage in dialogue with the commissioners of the evaluation in order to fully understand the real purpose of the evaluation and how the results are intended to be used. Once you have identified the levels of results, you can then identify and categorize the evaluation issues, which are usually grouped into three main categories: 1) the continued need for and relevance of the intervention, 2) results, and 3) cost-effectiveness.
The evaluation issues discovered will influence your choice of evaluation strategy; the strategy will determine the quality of the evidence to be gathered and, combined with the concerns or nature of the decision-making environment, will help you decide on the most appropriate evaluation methods. One generally begins with a thorough understanding of the program theory. We do this by building a logic model (typically tracing inputs and activities through outputs to immediate, intermediate, and ultimate outcomes), which includes, where possible, both a working theory of change and a working theory of action. Next, we identify evaluation issues and questions to understand whether the program theory is defensible and valid, whether the expected results are or are not being achieved, and whether there is an ongoing connection between needs and results.
With respect to methods of inquiry, an evaluation issue that turns on a factual or descriptive question will call for the design of a comparison and supporting measurements as the approach to answering the evaluation question. Conversely, a strategic or normative issue that requires a judgment will usually call for an evaluation comparison comprising the evidence required or anticipated, and lines of inquiry for delivering the best evidence to support decision-making on the identified questions. In all cases, a combination of evaluation approaches and methods is often required to ensure accuracy and to validate the strength of the evidence.
Rich Fruit Cake – Source: Grace Foods, http://www.gracefoods.com
It would appear, therefore, that evaluation is "not" evaluation after all. By now it should be evident that your choice of evaluation approach will play a major role in shaping the evaluation design and the data collection and analysis techniques you use. It is therefore important to note that different kinds of evaluation methods lend themselves to different kinds of data analysis and statistical testing, factors that have implications for internal validity, reliability, and precision, for example. It pays to know the different evaluation methods that may be used, and which of them are appropriate for providing useful information that can effectively inform policy or program decision-making.
There are three broad categories of evaluation design, each with several relevant methodologies: experimental designs, quasi-experimental designs, and non-experimental (implicit) designs. Randomized or experimental designs are considered the most rigorous and preferred methods, and are often referred to as the gold standard.
The last category requires a lesser degree of rigour, though it is a popular choice for constructivist-type evaluations, for example. Non-experimental designs may be converted into implicit quasi-experimental designs, enabling the evaluator to draw stronger inferences and make stronger propositions, with greater strength of conclusion and validity, than would be possible using a traditional non-experimental approach.
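To make the contrast between these categories concrete, here is a simple illustration of our own (it is not drawn from the Treasury Board source): one widely used quasi-experimental comparison is the difference-in-differences estimate, which compares the before-and-after change in an outcome for program participants against the same change for a non-randomized comparison group:

$$\widehat{\Delta} = \left(\bar{Y}_{P,\text{after}} - \bar{Y}_{P,\text{before}}\right) - \left(\bar{Y}_{C,\text{after}} - \bar{Y}_{C,\text{before}}\right)$$

where $\bar{Y}_{P}$ and $\bar{Y}_{C}$ are the average outcomes for the participant and comparison groups, respectively. The strength of the inference rests on the assumption that both groups would have followed parallel trends in the absence of the program; in a randomized experimental design, that assumption is secured by random assignment, which is part of why such designs are regarded as the gold standard.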
Viewed from the perspective of a tool for passing judgment on the worth or importance of a program, product, project, policy, or action, it is reasonable to say that evaluations are one and the same: evaluation is therefore evaluation. When it comes to designing and conducting an evaluation, however, the task must be approached with the knowledge that evaluations differ significantly. How and why they differ depends on their purpose; the life stage of the intervention; the issues central to the evaluation; environmental factors, including data quality and budget; stakeholders' understanding of and stake in the evaluation; and the scope of impact of the decisions that will be made based on the evaluation's findings.
Wedding Cake – Source: Selena Cakes, http://selenacakes.com
We have seen that a myriad of factors influence and shape the design and conduct of an evaluation; the result is that no two evaluations are the same. A cheesecake will never satisfy the taste buds of someone craving a rum-laced fruit cake, and vice versa; the best-made cheesecake will ruin a wedding for a bride and guests who are expecting a fruit cake. The same applies to evaluations and their stakeholders, except that serving the wrong dish at the evaluation table can have far more dire consequences for all stakeholders involved.
You are invited to follow this article for a table that summarizes the three main categories of evaluation designs, their methodologies, sources for gathering evidence, and a select listing of tests used in data analysis.
We thank you for stopping by, and thank you for sharing your thoughts on the topic.
Scott, M. E., and Shepherd, R. P. Copyright © 2015 Magate Wildhorse. All rights reserved.
Robert P. Shepherd, PhD, CE is Associate Professor and Supervisor, Diploma in Policy & Program Evaluation at the School of Public Policy and Administration, Carleton University.
[1] Program Evaluation Methods: Measurement and Attribution of Program Results, Public Affairs Branch, Treasury Board of Canada Secretariat. [Online]. Available at: <http://www.tbs-sct.gc.ca/cee/pubs/meth/pem-mep03-eng.asp>. Accessed: 21 March 2015.
[2] Public Affairs Branch, Treasury Board of Canada Secretariat, op. cit.