Evaluation Revisited — the intriguing title of the conference immediately spoke to me. Rethinking what we do, and constantly asking whether we evaluators can also do better, is close to my convictions. After all, that is what we would like others to practice when they follow the lessons drawn from evaluations. The conference proved a great opportunity to learn while practising, using participatory tools that demonstrated their usefulness through their very use. The lively exchange of views and ideas contrasted various approaches that, taken together, are all valuable parts of an evaluation toolkit.
The conference “re-convinced” me of the value and importance of mixed methods: of accepting different evaluation tools for different purposes, and of combining them to generate a more complete understanding of a complex mosaic of mutually influencing factors. Such complexity involves many unknowns, including relationships between stakeholders that are constantly changing in nature, and it reaffirms that we need to look at various facets to put together a multi-dimensional perspective on the changes we observe. I came away from the conference with a renewed and deeper sense of the need for awareness: of our own values, which influence how we evaluate, and of what is unknown, both now, at the time of evaluation (in hindsight), and at the time people decided on their projects, policies, or whatever social transformation processes they embarked on.
The conference showcased many community-based tools that can help develop evaluative capacities in communities, so that they can better understand their social values and the social transformation they undergo through the assistance received. In these cases, evaluation becomes part of the social transformation process, part of the “intervention” in the community and fully integrated into the “doing”, rather than a more independent exercise. I believe there is room for both: for communities to be empowered to understand their own development processes, especially if they are empowered to take decisions about participating, opting out, or changing their course of action, but also for an “arm’s-length” evaluation that reflects and learns from a different perspective.
Caroline Heider (head of evaluation, WFP)
Impressions from Ricardo, independent consultant, who presented ‘Developmental Evaluation’ during the methods session.
At the end of May, the international conference brought together over 150 evaluation professionals, commissioners, and theorists (many working on innovative evaluation practices) to debate what rigour is, peer-review specific examples in focused case clinics, and weigh the pros and cons of emerging innovative methodological options.
Outputs from this event are emerging and can be found at this website and others. It is a rich source of information!
- All case materials (http://evaluationrevisited.wordpress.com/cases/cases-2/)
- All method summaries (http://evaluationrevisited.wordpress.com/cases/methods/)
- A lively video impression of the two days, created by Mirte van den Oosterkamp, offers a sense of the energy that characterised the event (http://www.youtube.com/watch?v=D-fQGljmDT8)
- Several video interviews with participants during the event (http://www.youtube.com/watch?v=D-fQGljmDT8)
- Full videos of the entire plenary sessions over the two days (http://evaluationrevisited.wordpress.com/video-3/plenary-video/)
- And we are adding participant blogs as they come in — reflections after the dust has settled. Today we feature blogs from Caroline Heider (head of evaluation at WFP), Giel Ton (researcher at LEI), and Ricardo Wilson-Grau (independent consultant).
Several participants of the conference have blogged independently about the event.
Sarah Cummings of Context and IKM, the Netherlands, wrote two blogs:
Jess Dart of Clear Horizon, Australia – http://www.clearhorizon.com.au/discussion/rigorous-evaluation-practice-that-embraces-complexity/
Adrian Gnagi of SDC, Switzerland – http://www.sdc-learningandnetworking-blog.admin.ch
Jan Brouwers, Irene Guijt and Cecile Kusters (organising committee)
The conference “Evaluation Revisited: improving the quality of evaluative practice by embracing complexity” focused on how evaluative practice can be improved, given the need to view much of development as a process of societal transformation and, therefore, as complex. Current evaluation practice has not yet embraced the full implications of assessing ‘the complex’, and existing approaches often fall woefully short. During the conference, participants explored concrete evaluation practices that reconcile an understanding of complex societal change processes with quality standards, including rigour, ethical concerns, appropriateness, and feasibility. Here is a visual impression of the two days.