The intended use of the evaluation information will determine the type of evidence you need. Research studies that seek to develop generalizable knowledge on effective program models require a high degree of certainty and sophistication (e.g. statistical significance, robust attribution, and causal links based on methods that incorporate control groups). By contrast, evaluation in support of uses such as program improvement or generating impact information for marketing purposes can draw conclusions from more feasible methods, as long as any areas of uncertainty (including potential sources of bias) are clearly documented.
In brief:
- The overall purpose of evaluation is to get useful information.
- Information is useful when it results in a sufficiently better answer to a question.
- An answer is sufficiently better when the reduction in uncertainty around the information is enough to make a decision or take action.
The credibility and value of all evaluation information (evidence) can be enhanced by clear communication covering two areas:
- Explain why the evaluation strategies used were appropriate given available resources (time and money) and program context, including considerations of participant burden and program effectiveness (e.g. giving participants tests of knowledge may discourage participation, which would interfere with program delivery and possibly introduce bias)
- Acknowledge what biases might exist as a result of the evaluation process, and therefore what uncertainty might remain around the program questions.
Listed below are possible evaluation uses, their primary audiences, and some thoughts on the level of certainty (i.e. the strength of the evidence) the evaluation needs to provide for each purpose or use.
Program improvement
Audience: internal
Level of certainty/evidence needed
- various levels of certainty can provide useful information to shape the program
- useful information even without statistical significance
- useful information for internal comparisons (e.g. areas where knowledge gain was higher or lower) from reasonably valid questions with similar possible biases (e.g. self-report of knowledge level)
- useful information even with possible nonresponse bias
- useful information without control group
- useful information even when it is difficult to establish counterfactuals with confidence
Research/knowledge development
Audience: external
Level of certainty/evidence needed
- generally high need for certainty
- accepted methodologies vary by discipline
- methods should support a high degree of validity
- often need to establish attribution and causation
- perhaps more flexible level of certainty in areas where there are big knowledge gaps (i.e. high existing uncertainty)
Marketing
Audience: external
Level of certainty/evidence needed
- the audience likely will not require high certainty (generally an uncritical audience)
- the university or organization needs the information to meet standards consistent with its reputation, e.g.:
- methods and assumptions should be documented and justified
- presentation should not be misleading
Performance management/assessment
Audience: internal
Level of certainty/evidence needed
- organizational standards may vary; evidence standards may not be explicit
- the required level of evidence will probably align with what is attainable given available resources
- important to document and justify choices of methods and assumptions, including references to feasibility if applicable
External reporting
Audience: external
Level of certainty/evidence needed
- specified standards may vary and may not be explicit
- important to document and justify choices of methods and assumptions, including references to feasibility if applicable
- may need a formal evaluation; in some cases the evaluation may need to be external. If so, the process will generally be led by the external funder.