Thinking back to the end of May… Sonal Zaveri (COE)
The Evaluation Revisited Conference was about making sense of what we do. It raised pertinent questions not just about methodologies but also about how we perceive and value evaluation. And it did so in a wonderfully participative, engaging manner that involved all of us in debate and discussion, not providing definitive answers but exploring a plethora of ideas and approaches that have attempted to push the envelope in understanding the complex world we try to evaluate.
For those of us who have long lived and worked in the hurly-burly of real-world interventions, it has always been a challenge to understand what works and what does not. We know that our problems of poverty, gender, education and health are complex and intertwined, long-standing and difficult to resolve. And so it was with some consternation that one has listened, at other conferences and venues, to claims of single-bullet interventions producing far-reaching impacts. Because we know that is not how the real world works. Funders are constrained in what they can fund and for how long, but no intervention happens in a vacuum – it is influenced by what has happened before, by the presence of multiple, interconnected issues, and by the many ‘new’ relationships that the ‘single’ intervention unexpectedly releases. That is the nature of any activity or intervention implemented with real people in the real world. To then say that we can identify what that single intervention does or does not do is simplistic and arrogant, and ignores the many ripples of change it has produced. The conference reminded us of the need to address these complex ‘ripples’: we as evaluators must first acknowledge their existence and then make sense of them using our evaluation tools, methods and approaches. As one of the speakers, Sheila Patel from India, mentioned, we ignore the deeper changes and are satisfied with evaluating the tip of the iceberg. What is worse, we take the tip-of-the-iceberg evaluation to represent the whole iceberg. Such evaluations serve the narrow needs of budgets and timelines and selectively (sometimes erroneously) identify effects, but worst of all, they lose out on evaluating the richness of the human change that has occurred.
The Conference, with its breakout groups, case presentations and plenaries conducted in a seamless, thoughtful process, enabled us participants to share our thoughts and argue about the various paths that evaluation can follow – from the complex to the simple and definitive. The discussion also turned to whether evaluation thinking could sit somewhere in between, recognizing that work plans and budgets need bottom-line clarity about effects and impacts, but also that there is an equal need for ‘space’ to meander and explore the many contributing factors that affect, and are affected by, development work. The conference message was crystal clear in encouraging us to explore these undefined boundaries of evaluation.
Most importantly, the conference reassured those of us who work day in and day out in these complex situations that our voice is heard loud and clear: evaluation must ‘value’ and be ‘responsible’ not only to those who provide the funds but to those whose lives are directly and indirectly affected by the programs and projects. Our evaluative decisions will have far-reaching effects on their lives, and that should at least remind us, in all humility, to apply the best ethical and rigorous standards at our disposal – to learn from them and make sense of what works, what doesn’t, and most importantly, why. This deep sense of commitment is what I took away from the Evaluation Revisited Conference.
Sonal Zaveri, Community of Evaluators (COE) South Asia