Why evaluate?
- Accountability to stakeholders – not only to donors, but also to beneficiaries, partners and others.
- Learning from past experience
- Improvement. Even though this is not frequently mentioned, I would underline it. For me, learning is not enough unless it is translated into action. Based on a utilisation-focused evaluation, stakeholders would keep doing the things that work, change what needs to be changed and start doing what is missing (for example, concentrating on the area with the highest HIV/AIDS prevalence rate).
Terms of Reference – How to plan an evaluation?
First of all, discuss with your project team and ideally with key stakeholders (donor, other institutions, community):
- Background: project name, identification, history, objectives, results, key activities and progress over time (add the logical framework if you wish – you will get more tailored proposals); the organisational, social and political context of the evaluation; and the main stakeholders involved in the project, including target groups, beneficiaries, partners and donors.
- Purpose: What would you like to get out of the evaluation? What would you like to learn? Why are you doing the evaluation now? (For example, you plan a subsequent project on a bigger scale, you would like to apply the approach elsewhere, or you would like to find out why your approach did not work in a certain context.)
- Use: What are the expected outputs (debriefing, presentation, report, videos, posters – printed or online)? Who would use the evaluation outputs, when and how? Who else should know? Should the evaluation report be made public? How, or why not? (For example, if you work with a remote community without access to technology, publishing the report online would probably not help inform them about the project achievements.)
- Scope and focus: What are the project details? Which components, geographical areas, periods of time etc. would you like to evaluate? What is the project's logical framework or theory of change? Who are the key stakeholders? What monitoring and evaluation data are available?
- Evaluation criteria and questions: What exactly do we want to know? The OECD/DAC evaluation criteria are still the most frequently used (they are also required by the European Commission, together with additional ones). OECD/DAC also suggests some very general evaluation questions. Try not to ask too many; 7–10 is just fine. There are three main types of evaluation questions – descriptive, normative and cause-and-effect. Be careful to choose an appropriate evaluation design for each evaluation question (check the presentation below)!
- Methodology: How will the evaluation questions be answered? In an external evaluation, you would probably expect the evaluator to propose this. Nevertheless, it is a good exercise to think of the indicators for each question, the sources of information (documents, stakeholders), the data collection methods (quantitative, such as surveys, or qualitative, such as interviews) and the data analysis / reporting. It gives you an idea of the evaluation timeline, budget and other aspects.
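To make the analysis step concrete, here is a minimal sketch of how one quantitative indicator could be computed from survey responses. The question, villages and answers are invented for illustration only – your own indicators would come from your logical framework.

```python
# Minimal sketch: turning raw survey answers into one evaluation indicator.
# The survey question, villages and answers below are hypothetical examples.

# Each record is one respondent's answer to:
# "Has your access to health services improved since the project started?"
responses = [
    {"village": "A", "improved": True},
    {"village": "A", "improved": False},
    {"village": "B", "improved": True},
    {"village": "B", "improved": True},
]

# Indicator: share of respondents reporting improved access.
improved = sum(1 for r in responses if r["improved"])
share_improved = improved / len(responses)

print(f"Respondents reporting improved access: {share_improved:.0%}")  # → 75%
```

Even a rough sketch like this helps you see what baseline data you need and how big a sample is realistic within your timeline and budget.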
- Timeline: Who needs to participate in the evaluation planning? Who will be approached by the evaluator during the inception phase to get a project overview and to plan logistics in detail? Who will be involved in data collection, such as interviews, group discussions and surveys? When do important events take place that the evaluator can join? When are key stakeholders (not) available? When can the final debriefing take place? Is this realistic, taking into account the above?
- Budget: Given all the above, what is the estimated budget for the evaluator's remuneration and expenses, as well as the expenses incurred by you and other stakeholders? What budget do you have available, and thus what is realistic?
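As a quick illustration of the arithmetic, the sketch below adds up a hypothetical evaluation budget. Every figure is a made-up placeholder, not a recommended rate – plug in your own numbers.

```python
# Rough evaluation budget estimate.
# All figures are hypothetical placeholders, not benchmark rates.
daily_fee = 400            # evaluator's daily rate
working_days = 20          # inception, field work, analysis, reporting
travel_and_lodging = 1500  # flights, local transport, accommodation
data_collection = 800      # enumerators, printing, venue hire

remuneration = daily_fee * working_days          # 400 * 20 = 8000
total = remuneration + travel_and_lodging + data_collection

print(f"Estimated evaluation budget: {total}")   # → 10300
```

Comparing such an estimate with the budget you actually have available quickly shows whether an external evaluation is realistic, or whether the scope needs to shrink.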
- Human resources: Do we have the capacities (expertise, money, time) to do the evaluation internally? If we hire an external evaluator, what should the key requirements be? Who will coordinate the evaluation with the evaluator, project partners and others? Who is the evaluator accountable to?
Based on the above, you would normally develop the evaluation Terms of Reference (ToR). I find this UNDP Handbook and this Guide from New Zealand useful for developing ToRs – they provide detailed instructions and examples. If you implement a EuropeAid / Devco project, you may also want to check their Guidelines – see page 126 onwards.
Evaluability – are we ready?
Before you start the evaluation, I recommend checking whether you are ready – the evaluability factors are discussed further in the book The Road to Results by Linda Imas and Ray Rist, and in Making Evaluations Matter: A Practical Guide for Evaluators by Cecile Kusters et al.
- Are we clear why we do the evaluation?
- Do we have an (updated) logical framework?
- Do we have sufficient (baseline, monitoring) data available?
- Do we have accessible reliable information sources?
- Do we have sufficient funds for an internal/external evaluation? Will the evaluation be cost-effective, will it bring reasonable benefits vs. costs?
- Is it likely that the evaluation will be used to improve actions in the future? Can stakeholders influence the evaluation decisions? Will they accept and use the findings? Is there strong leadership to put the recommendations into practice?
- Are we free of major factors that could hinder the evaluation? Are staff members or other stakeholders overloaded with other priorities? Are there any tendencies that would affect impartiality?
Internal or external evaluation?
Below are some advantages and disadvantages of internal and external evaluations. Although it is often claimed, an external evaluation is not necessarily independent, especially if the evaluator is paid by the implementer or if the implementer decides on changes to the final evaluation report. Furthermore, even an external evaluation consumes the project team's time. Ultimately, much depends on WHO the evaluator is.
Internal evaluation
- May have a better understanding of the project, context and policies
- Develops organisational capacities
- Higher ownership of recommendations by the organisation
- Usually cheaper
- May not be able to see alternative perspectives and solutions (may be biased)
- More influenced by the implementing organisation (evaluators want to keep their jobs)
- May be less credible to stakeholders
- May be time-consuming
External evaluation
- May bring a new perspective or special (technical, evaluation) expertise
- More independent from the implementer – may facilitate better between stakeholders (across hierarchies, or in case of mistrust)
- Usually perceived as more credible
- May not be able to fully comprehend the project due to time or other constraints
- Usually more expensive
Mixed or participatory evaluation
- A mixed or participatory evaluation can combine both of the above and utilise the advantages of each.
Where to learn more?
- NoNIE Guidance on Impact Evaluation is available free here.
- A list of other useful resources on impact evaluation is at the OECD/DAC website here.
- Participatory Program Evaluation Manual by Judi Aubel is here.
- IPDET Handbook on Evaluation Ethics, Politics, Standards, Principles is here.
- OECD/DAC Evaluating Development Cooperation – key norms and standards at OECD/DAC website here.
- UNDP Evaluation Policy is here.
- United Nations Evaluation Group norms and standards.
- The Czech Code of Ethics for Evaluators and evaluation standards are available here.
- National evaluation standards and standards of different organisations are listed here.
- IDEAS Competencies Framework for International Development Evaluators, Managers, and Commissioners is here.
- Other evaluation standards and codes are available at Mande.co.uk.
Specific methods and toolkits
- Betterevaluation.org contains a number of evaluation options (methods or tools) and approaches.
- EvalPartners offers manuals, handbooks, a roster of evaluators, evaluation jobs and trainings.
- The Most Significant Change Guide is free to download here. A website full of resources relevant to this method is here. You can share your experiences and ask questions in a Yahoo group.
- The toolkit on gender equality results and indicators by SEAChange is free to download here; the website also has many other resources on gender, including case studies, policy briefs etc.
- Gender in Development Matters is a resource book and training kit for development practitioners; check the data collection tools and examples of indicators on pages 60–64.
How to advocate for evaluations on the national level
Advocating for Evaluation: A toolkit to develop advocacy strategies to strengthen an enabling environment for evaluation is free to download here.
Other key websites
- The OECD/DAC website on evaluating development programmes is here. Check the evaluation criteria, principles, glossary, norms, quality standards, etc.
- International Initiative for Impact Evaluation (3ie) supports quality impact evaluations and provides funds to implement these. Check also their resources here.
- Innovations for Poverty Action (IPA) promotes the use of randomized controlled trials for assessing impact and has produced policy papers based on them. Check poverty-action.org and J-PAL.
- The World Bank's Development Impact Evaluation Initiative – check their database of evaluations and resources here.
- The Center for Effective Global Action (CEGA) also promotes randomized controlled trials and other rigorous forms of evaluation. Check their analyses here.
- 3ie also offers impact evaluation methodology, a database of evaluations, funding for impact evaluations, jobs, conferences and more. Join their mailing list here.
This website is for implementers of social, educational, awareness-raising or international development projects.
Let's explore together how to plan projects, monitor them to make informed decisions, meaningfully engage diverse people... and conduct evaluations that matter – evaluations that are used to improve people's lives and sustain our environment.
Feel free to contact me to share your experiences or ask for help. I am eager to hear your stories!