A sense of disquiet … by Catherine Kell (Twaweza initiative, East Africa)

July 27, 2010

Reflections on “Evaluation Revisited: Improving the quality of evaluative practice by embracing complexity” – a Conference on Evaluation for Development.

I am currently working on the design of an evaluation of a large-scale, complex initiative (and a complicated and chaotic one – or that’s how I am experiencing it)! As my colleagues and I think about the learning that we hope will come from this initiative, and about the evaluation as a kind of animator of that learning, we are grappling with what sometimes seem to be different and mutually exclusive epistemologies, each with its own ‘truth claims’ (or, for the more modest, partial or provisional truth claims). Some amongst us would wish to see an evaluation that can make claims about impact by addressing a set of hypotheses using experimental or quasi-experimental methods involving a counterfactual. Others would prefer a more mixed approach that combines quantitative and qualitative methods.

In attending this conference I wished to equip myself better with arguments for why a diversity of approaches can yield more generative learning, how such diversity can help us learn in our context, and how we can share that learning more widely. I had little need to be convinced of this myself; what I needed was stronger arguments and specific examples to assist in debates with those (sometimes called the randomistas) who believe that rigour is evidenced only in randomized controlled trials (RCTs) of development initiatives (see this paper for an example of the way in which the term ‘rigorous’ is coupled with the experimental approach). In retrospect I found the conference very interesting, extremely well organised and timely, yet I left with a sense of disquiet. There seem to me to be three main sources of it. I’ll discuss them, aware that this is a provocative reflection but, I hope, an honest one. Along the way I’ll point out what I saw as highlights of the conference.

Firstly, it seemed to me that there was an elephant in the room – or perhaps a herd of them, i.e. the randomistas! This resulted in a sort of assertion of “complexity” in the face of the elephants’ silent trumpeting of the “counterfactual”. I found Patricia Rogers’ account of the unhelpful ways in which complexity is used very illuminating – but in my view the concept of complexity was not made clear enough to really equip participants to argue strongly for why (rather than simply assert that) a diverse range of approaches is needed and is better, and what these diverse approaches can offer that an RCT cannot. And because the experimental approach and its growing influence were only really hinted at, the broader reasons why this strong argument is needed were also not clear. So there was a kind of celebration of complexity, but without sufficient substance to it; I was left feeling that ‘taken-for-granted’ assumptions were being confirmed rather than understandings being deepened, challenged or sharpened. A number of economists working only with quantitative data are articulating very robust critiques of RCTs and analyses of their limitations, a recent one coming from the World Bank itself. I think the conference needed to bring such debates out more explicitly.

Patricia made the incredibly valuable point throughout her presentation that complexity can only be understood in retrospect. However, while I found them really useful, each of the three cases/methods sessions I attended was of a study that is either in progress or still at the planning stage. The randomistas present a wide array of completed studies with measurable results that take on lives of their own. An alternative approach needs something similar – not measurable results, but completed accounts that simply say, for example: “this is what we did, this is what it showed, and this is why understandings of complexity and emergence are important”. Other case studies may have provided these, but the summaries do not give sufficient detail to know. I also recognise that RCTs can be undertaken on relatively short time-scales, whereas retrospective accounts would need much more time. But I think it is important to demonstrate the work rather than talk about values and standards, or state what “should” or what “needs” to be done. Descriptions rather than prescriptions can better prove the point.

Maarten Brouwers’ keynote presented, I thought, an excellent framing of global trends in development. Although he hardly used the word complexity (if my notes are correct), his explanation of these trends (increasing uncertainty, increasing connectivity, and the shift from a 1.0 society to a 2.0 one) illuminated the need to capture multidimensionality. Instead of a polarization of approaches (randomistas vs relativistas, or even quantitative vs qualitative), he suggested how different methods can address the needs for different types of learning. I was also struck by his point that these three trends link, such that real-time data collection opens up a possible merging of monitoring with evaluation.

My sense of disquiet came, secondly, from the sense that a tacit agreement was conveyed that everyone in the room subscribed to a set of ideas best seen in the programme’s statements that “the conference will further methodological democracy” through the “choice of examples and advocating for methodological diversity that do justice to the existing diversity of development modalities and contexts”, and that it will contribute to development by showing how “evaluation can influence societal transformation”. This felt to me like a ‘taken-for-granted’ assumption again – as if we were all activists in the same cause. So I felt I was being recruited to a position rather than being strengthened in my ability to argue for it.

And yet issues of politics and power seemed strangely absent from the proceedings, apart from Sheela Patel’s excellent keynote, which I felt posed a powerful challenge to the way evaluation tends to get done. By putting the story of Slum Dwellers International (SDI), rather than evaluation, first, she was able to express her frustration at being evaluated through the ‘project lens’, her wish for learning to be tracked throughout SDI’s long, long history, and her own mystification about evaluation and its fads: “Northern organisations have very short memory spans” and “I’m often the historian of many Northern NGOs’ initiatives with us”. For me, there were two challenging and deeply political points in her presentation. The first was about the importance of scale – both in the sense of ‘scaling-up’, which SDI manages so well, and in the sense of ‘scale-jumping’, in that SDI can take a debate that is intensely local and amplify and project its voice across scales and internationally. The second was about how scaling can be designed when activities are thought of as both means and ends – which seemed to imply a critique of the separation of activities and outcomes, and instead the idea that activities themselves are outcomes: a very profound and un-technicist idea. (I wonder: does this idea link with Maarten Brouwers’ point that new forms of connectivity enable real-time evaluation by citizens, that this can lead to the merging of monitoring and evaluation, and that this can perhaps enable us to see activities as outcomes in themselves?) I felt that some of Sheela’s observations could have offered framings for much of the discussion about societal transformation and the role of evaluation, perhaps even disrupting the rest of the conference and leading to emergent themes – yet they were not picked up.

This leads to my third sense of disquiet. A critique articulated in various places holds that the RCT approach puts the choice of method before clarifying the question the method needs to answer, and this point was also made at the conference (not in relation to RCTs but in a more general sense). As a relative newcomer to the field, I felt that a bewildering array of methods was put forward at the conference, especially with the concept of the “methods market”. The reliance on methods and tools (which are often packaged and presented as applicable across all contexts) seems to fall into the trap of elevating them above the purpose of evaluation, which I see as finding principled ways of exercising judgement for illumination. Too much reliance on methods, tools and techniques takes away from judgement and can cloud illumination.

In summary, I find that Sarah Cummings’ thought-provoking blog post about “being caught in a cleft stick”, together with Chris Mowles’ response, really got to the heart of what my disquiet was about. Rather than trying pre-emptively to match or fit theories of complexity with current evaluation practices (or to “embrace” complexity), perhaps evaluators need simply to live with that gap. Staying close to Sheela’s provocative ideas, I offer an admittedly somewhat quirky set of issues that could be explored instead of trying to find a fit (and I know that much of this is already happening): deepening understanding of the theory and the trends through arguments and counter-arguments, while at the same time trying to:

  • address the limitations of ‘the project lens’
  • attempt to track learning across much longer time-scales, and before and after the time/space boundaries set around projects
  • search for approaches that do justice to people’s own attempts to up-scale and scale-jump
  • consider the possibility that activities can be both means and ends
  • experiment with real-time data collection that shortens feedback loops, increases citizen participation and merges monitoring and evaluation
  • focus on descriptions rather than prescriptions
  • become critical historians of our own initiatives
  • scrutinize any agglomerations of new ideas, methods and techniques for faddishness!

Many thanks to the conference organizers for providing this forum and for opening up the space for honest reflection!

Catherine Kell (Twaweza initiative, East Africa)

July 23, 2010

