The infatuation with randomized controlled trials that has swept the field of international development in recent years has not been without clear benefits. The language of “causation” versus “correlation,” “attribution” versus “contribution,” and so forth has helped both donors and practitioners think more clearly about the impacts of different aspects of programming. But what many RCTs have shown – and the comic provides one example of this – is just how complex international development is, including the education sector. This complexity is particularly pronounced in crisis and conflict-affected areas, where rapidly changing conditions tend to rule out any meaningful experiment of the traditional kind.
In fact, experiments are particularly ill-suited to provide what may be the most useful type of information: a rich description of the context that helps one navigate complex social relationships and anticipate how their evolution is likely to affect interventions. So, what are our alternatives for generating rich and reliable evidence when researching in crisis and conflict-affected environments?
In crisis and conflict contexts, observational and descriptive studies play a critical role. Both quantitative and qualitative descriptive studies can shed light on the validity of proposed theories of change, help identify which assumptions are likely to hold and which are not, document prevailing perceptions on issues important for programming, map out stakeholders and their influence on one another and on the topic of interest, and so forth. Qualitative methodologies, in particular, can be helpful for collecting evidence about the context; they were originally developed to attain an in-depth understanding of complex behaviors and social relationships.
A broader understanding of evidence as inclusive of both qualitative and quantitative descriptions of context opens up a range of other program documentation – such as midterm reviews, quarterly and annual reports, and performance evaluations – as sources of data. Such information is often exactly what’s needed not only to make sense of the disparate data points that make up the “evidence pool” for any particular country or area, but also to produce entirely new meaning by connecting those dots.
USAID-ECCN is working toward establishing criteria to determine the quality and relevance of the information contained in such documents, relying heavily on the BE2 guide on assessing the strength of evidence in the education sector. This allows us to broaden our definition of evidence so that researchers and practitioners alike can access useful education research resources that would otherwise be excluded under a stricter definition of what constitutes “evidence.”
What are your views on this topic? Do you have experience with, or good examples of, studies or programs that incorporate both qualitative and quantitative data? Please share them with us in the comment section below.
Stay tuned for work under USAID ECCN’s Defining Evidence theme on this topic.