Policy and programme evaluation: principles and objectives


Editorial

Arash Rashidian 1

1 Director of Information, Evidence and Research, WHO Regional Office for the Eastern Mediterranean, Cairo, Egypt


In May 2012, the World Health Assembly endorsed the World Health Organization (WHO) evaluation policy, emphasizing that evaluation is an essential part of the work of the Organization (1). The evaluation policy was intended to bring to the fore a few important principles of WHO’s work: strengthening accountability, providing systematic oversight of performance, and ensuring that the Organization continues to learn from its experiences. The endorsement of the policy was followed by important and immediate actions, most notably the publication of the WHO Evaluation Practice Handbook (2) and the establishment of the WHO Evaluation Office.

“Evaluation” is a widely used term in management and policy circles, often as part of project “monitoring and evaluation”. Understandably, WHO has a long history of advocating and promoting the evaluation of the programmes it has envisaged (3,4). Hence, the evaluation policy in effect sought to go beyond the notion of “evaluation” as part of the project lifecycle, and to streamline thinking on evaluation within the Organization. It also focused further on the work of WHO itself as the subject of evaluation, as well as the need for more systematic assessments of its implemented programmes. At the global level, following this approach, an evaluation of the impact of WHO publications has been concluded and a number of potentially high-impact evaluations are ongoing, including: the WHO Secretariat’s contribution to the health-related Millennium Development Goals (MDGs), WHO reform, and WHO presence in countries (5).

WHO has a wide mandate and a multitude of tasks and responsibilities. Evaluation projects are expensive and require effort to ensure their results are valid, timely and beneficial. The cost and time required for evaluation are also affected by the evaluation methodology, which in turn affects the validity of the findings. Hence, answering the two questions of “what to evaluate” and “how to evaluate” is a major determinant of the long-term success of the evaluation policy at the global as well as the regional level. While the answer to the “what” question rests largely on senior management policy oversight and Member States’ expectations, some general guidance on the “how” question can be borrowed from the public policy analysis literature and its different “streams of thought”, which also link the two questions (6).

Evaluation of a policy or major programme of work, at the national or international level, can be categorized into three overarching streams of thought: 1) why decision-makers make their decisions, and what the main intentions of the policy are; 2) how policies are developed and implemented; and 3) what the impact or effects of the policies have been (6).

The first stream, in a nutshell, is interested in questions such as why Member States lead WHO towards certain objectives among alternative objectives, or what the comparative roles of different United Nations agencies are in health. Although evaluations of this nature might be rare in WHO, arguably the recent overarching “Report of the Ebola interim assessment panel – July 2015” was an example of such an evaluation project (7). Similarly, an evaluation of this nature might assess, at the global level, different aspects and effects of the move from the MDGs to the Sustainable Development Goals (SDGs).

The second stream of thought deals with understanding how policies are developed and function in practice. An evaluation of this nature will be interested in knowing how the policy contents are developed (e.g. are equity concerns included?); how policies are turned into action through the development of strategic and operational plans, and the implementation of those plans; and how contextual factors (e.g. health system design) and different policy actors interact with the implementation of the policy.

While the former two streams of thought focus on why and how policies are developed and put into practice, the third stream deals with two categories of equally important propositions. First, was the policy or programme developed based on sound research evidence? And second, did the policy actually achieve its objectives, and did it have unintended effects? Since 2007, WHO has demonstrated a serious commitment to improving the use of research evidence in its recommendations, resulting in noticeable improvement in its processes (8,9,10). Still, evaluation might be warranted to see whether the required standards have been followed in all the recommendations of interest. More importantly, WHO is keen to demonstrate the effects of its policies on improving population health outcomes. Such evaluations might be of great importance, but they are inherently difficult to conduct given the complexity of the environments in which WHO works, the multitude of determinants – including social and political determinants – that affect health outcomes, and the timeline of the intended effects of the policies.

Finally, in each evaluation the selection of the outcome measures is important. As eloquently put before, “not everything that counts can be counted”. For example, while the number of individuals who attend a training event is important (and can be counted), the aim of such training is usually to change practice and health outcomes (which are difficult to count). Additionally, the outcomes of interest may not be limited to the programme itself. Implementation of one particular programme may divert attention from other areas of work, or result in unintended (positive or negative) outcomes in other programmes. As a rule of thumb, while it may be justifiable when evaluating a focused programme of work to measure only the direct costs and outcomes of the programme, for overarching programmes a more systemic approach that covers other potentially relevant outcomes is needed (11).

In the end, it is important to repeat what was highlighted at the start: there is an opportunity cost to any evaluation project. Hence, evaluation projects should focus on the most pertinent areas of work and assess the outcomes that really matter. As Tukey noted in 1962: “far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise” (12).

References

  1. Evaluation policy. Geneva: World Health Organization; 2012 (Information Note 28/2012).
  2. WHO evaluation practice handbook. Geneva: World Health Organization; 2013 (http://apps.who.int/iris/handle/10665/96311, accessed 30 January 2017).
  3. Global Malaria Programme. Training module on malaria control: planning and managing programmes. Geneva: World Health Organization; 2011.
  4. Joint external evaluation tool and process overview. Geneva: World Health Organization; 2016 (http://apps.who.int/iris/bitstream/10665/252755/1/WHO-HSE-GCR-2016.18-eng.pdf, accessed 30 January 2017).
  5. Evaluation matters: newsletter of the WHO Evaluation Office. Issue 3, January 2017 (http://who.int/about/finances-accountability/evaluation/evaluation_matters_issue_3_Jan2017.pdf, accessed 30 January 2017).
  6. Knoepfel P, Larrue C, Varone F, Hill M. Public policy analysis. Bristol: Policy Press; 2007.
  7. Report of the Ebola interim assessment panel – July 2015. Geneva: World Health Organization; 2015 (http://www.who.int/csr/resources/publications/ebola/ebola-panel-report/en/, accessed 30 January 2017).
  8. Oxman AD, Lavis JN, Fretheim A. Use of evidence in WHO recommendations. Lancet. 2007;369:1883–9. doi:10.1016/S0140-6736(07)60675-8.
  9. Ansari S, Rashidian A. Guidelines for guidelines: are they up to the task? A comparative assessment of clinical practice guideline development handbooks. PLoS ONE. 2012;7(11):e49864. doi:10.1371/journal.pone.0049864.
  10. WHO handbook for guideline development, 2nd edition. Geneva: World Health Organization; 2014 (http://apps.who.int/medicinedocs/documents/s22083en/s22083en.pdf, accessed 30 January 2017).
  11. de Savigny D, Adam T. Systems thinking for health systems strengthening. Geneva: World Health Organization; 2009 (http://www.who.int/alliance-hpsr/resources/9789241563895/en/, accessed 30 January 2017).
  12. Tukey JW. The future of data analysis. Annals of Mathematical Statistics. 1962;33:1–67.