
What’s Good Enough? A Webcast on Strategies for Data Collection and M&E in Conflict Zones

September 26, 2017 @ 10:00 am - 11:00 am

Webcast

Have you faced challenges in data collection efforts while evaluating activities in conflict or crisis zones? Do you wonder what the best research design is, given the constraints you’re facing? Have you considered how to adapt findings for stakeholders with differing agendas?

On September 26, 2017, ECCN hosted a webcast about navigating challenges in monitoring and evaluation (M&E) when working in crisis and conflict zones. This webcast was part of an ongoing series of events ECCN is hosting about M&E strategies and practices for education programs in crisis and conflict settings (EiCC). The webcast considered issues around assessment development and adaptation, enumerator recruitment and supervision, data collection, and funders’ measurement expectations in unstable environments. The panel, featuring development practitioners from Chemonics, School-to-School International, Creative Associates, and USAID, shared lessons learned from reporting on Afghanistan’s first national EGRA, as well as other stories and lessons from crisis and conflict zones, including experiences shared by attendees. ECCN also communicated updates on a new guidance tool for data quality considerations and on indicator development efforts aimed at improving equity in EiCC programming.

What's Good Enough? Slide deck

Slides from the webcast.

The collaborative webcast on M&E standards in crisis and conflict was both thought-provoking and candid. Field experts challenged participants to rethink what counts as “good enough” in data collection, as well as the realities of sampling schools in crisis and conflict zones.

The webcast slides are available here, and the recording is available above.

Resources

You might also be interested in:

  • Our July 25 webcast on Adapting M&E Tools for Crisis and Conflict Settings. Access the recording and resources on our website.
  • ECCN’s Rapid Education and Risk Analysis (RERA) Toolkit, which helps implementing partners collect timely inputs about their environment and communities in conflict or crisis settings to inform program design and implementation. RERA works within USAID’s Collaborating, Learning, and Adapting (CLA) framework. In December, ECCN will offer in-depth training on the revised version of the toolkit (RERA 2.0).
  • You can read more about STS’s recent experience in navigating common M&E challenges in crisis and conflict settings in a new blog post.

Webcast Presenters

Moderator: With over 30 years’ experience in international education development, including in crisis and conflict zones, Dr. Mark Lynd brings in-depth skills and knowledge in the design and implementation of large-scale education initiatives. Since 2002, Lynd has served as president of School-to-School International (STS), working to build educational systems and improve learning outcomes in contexts where safety, security, and stability were at times uncertain. These contexts include post-war northern Uganda, southern Sudan before independence, northern Nigeria, the Democratic Republic of the Congo, Mali, Guinea, Pakistan, and Afghanistan. He has designed and implemented Early Grade Reading Assessments (EGRAs), Early Grade Mathematics Assessments (EGMAs), impact evaluations, experimental and quasi-experimental studies, monitoring and evaluation systems, and fidelity of implementation research across the globe.

Mark Lynd

Dr. Jordene Hale is a monitoring, evaluation, and education specialist with over 25 years’ experience in strategic planning and project management. Hale recently joined Chemonics as an education technical director. Prior to this, she served as chief of party on the USAID-funded READ M&E project in Ethiopia. Hale has provided oversight to projects in counter-terrorism with the Department of Defense (DOD) and Department of State (DOS), diplomacy in the Lower Mekong region, and democracy and human rights in Pakistan, among others. She has extensive experience in education, mother tongue instruction, Early Grade Reading Assessment (EGRA), and gender. She has worked in several areas of crisis and conflict, including Liberia, Mali, and Sierra Leone. She holds an EdD in education from the University of Massachusetts Amherst.

Jordene Hale

Dr. Sarah Jones is a senior monitoring and evaluation (M&E) advisor for USAID’s evidence team in E3/Education. Jones brings to the position over 15 years of experience in research and evaluation of social reform programs domestically and internationally, with specializations in research methods, education, and youth. In her previous position as technical director at Social Impact, she focused on the evaluation of education and youth programs. She has also worked across sectors on complex evaluations, including serving as a technical lead on the Impact Evaluation of the Malawi CDCS Integration Initiative and as the qualitative specialist on the Food for Peace baseline studies in Uganda, Niger, and Guatemala during her time at ICF International. Both professionally and personally, her primary objective is to increase learning and improve education and youth-based programming to better meet the needs of children and youth (especially in countries affected by conflict or crisis). Jones holds a BA in Spanish and Italian from the University of Wisconsin, Madison (1996), a master’s (2000) and a PhD (2004) in Sociology, and a postdoctorate in Education (2005) from the University of California, Santa Barbara.

Sarah Jones

Casey McHugh, a program manager at School-to-School International (STS), brings over six years of experience in international education, gender, program management, and monitoring and evaluation. She has experience in applied research and monitoring and evaluation, specializing in both quantitative and qualitative data collection and analysis. McHugh managed STS’s subcontract on the USAID-funded Resources, Skills, Capacity Building in Early Grade Reading in Afghanistan (RSC-EGR), implemented by Chemonics International. Collaborating closely with Chemonics and a local data collection firm, McHugh managed the assessment design and data collection processes for Afghanistan’s first nationally-representative Early Grade Reading Assessment (EGRA) and accompanying School Management Effectiveness and Safety (SMES) survey, including a sample of over 1,200 schools and over 19,000 students across Afghanistan. She coordinated and supported the training of EGRA and SMES assessors through a mix of in-country and remote technical support, with 238 assessors trained over three rounds, including 89 MOE participants from the provincial education directorates.

Casey McHugh

Karen Tietjen has 35 years’ experience in international education, including her current work leading the design and implementation of early grade reading (EGR) programs for Creative Associates. She has supported education program design and implementation for USAID and other donors in a range of countries, including Benin, Ethiopia, Ghana, Guinea, Kenya, Lesotho, Liberia, Mali, Namibia, Nigeria, Rwanda, Senegal, South Africa, Sudan, Uganda, and Zambia. Tietjen has implemented programs, conducted research, and developed monitoring and evaluation (M&E) systems in several conflict countries, including Afghanistan, Haiti, Nigeria, Pakistan, South Sudan, and Yemen. She specializes in education planning and design, early grade reading, institutional and systems development, research and M&E, and policy development and reform. She holds an MS in Economics of Education from Florida State University.

Karen Tietjen

Webcast Questions and Answers

Ahmed: Which tool is better in this situation: a table, a chart, or any other?

Mark: Again, not sure what they were looking for. If Ahmed would like to elaborate, I’d be happy to respond.

Lam: Could you elaborate on 'recce' as part of the data collector security as mentioned in slide 3?

Karen: I apologize for the jargon.  “Recce” is a military term which refers to “reconnaissance” or “reconnoiter”.  This has been adopted by security personnel in conflict settings.  Many organizations working in conflict environments will have established security teams or networks, who will keep an eye on security threats and levels in project intervention areas. They can provide information for and guidance on data collection team security.  In some cases, they may visit an area in advance of the team or consult informants in the field to ascertain whether team deployment is feasible.

Autumn: In measuring dosage--we have challenges collecting individual level student attendance data and tracking it over time. Do you have any methods that have been successful? Or know of any innovative approaches in this area?

Karen: We had to confront the issue of tracking individual student attendance head-on as part of our Early Warning System (EWS) intervention for USAID’s School Dropout Prevention Pilot Program; attendance was also a key outcome measure in our research plan. In our treatment schools, we trained teachers and school directors on the importance of attendance, how to take it, and how to use the information (this last activity was part of the EWS intervention). We provided attendance registers where they were not available, and implementation staff did periodic spot-checks to ensure that the information was up to date and accurate (comparing the attendance register with attendance on the day of their visit). Note that we also trained our control schools. This approach worked, but wasn’t perfect; for example, in some cases we had to track down registers at teachers’ homes or found that they had “migrated” with teachers to different schools. Additional triangulation measures used to estimate dosage included: taking student and teacher head counts at the beginning and end of the school day on the day of the data collection team’s visit; asking the school director the days and dates when the school was closed and verifying with parents or older students; asking the school director and teachers whether and which events interrupted school operations and for how long (e.g., elections, health campaigns); and, most importantly, including attendance questions in the student, parent, and teacher interviews that are part of the sample. The respondents were surprisingly open.
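
As a minimal sketch (in Python, with made-up numbers and field names, not data from the program Karen describes), this is the kind of register-versus-spot-check comparison she mentions, flagging schools whose registers diverge sharply from the head count taken on the day of the visit:

spot_checks = [
    # (school_id, attendance recorded in the register, head count on the day of the visit)
    ("SCH-001", 42, 35),
    ("SCH-002", 58, 57),
    ("SCH-003", 31, 22),
]

for school_id, registered, observed in spot_checks:
    gap = (registered - observed) / registered
    flag = "REVIEW" if gap > 0.15 else "OK"  # the 15% threshold is illustrative only
    print(f"{school_id}: register={registered}, observed={observed}, gap={gap:.0%} -> {flag}")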

Mark: In a project we’re supporting that’s financed by DFID, spot checks are considered mandatory for attendance tracking. We’ve also found that when tracking students within a year (e.g., baseline to endline in a given school year), we get about 20% attrition (i.e., kids assessed at baseline cannot be found on the day of the endline). If tracking goes beyond a given school year, attrition can be 30% or higher.
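
One practical implication of those attrition rates is for sample size planning: the baseline sample needs to be inflated so that enough students remain at endline. A minimal sketch using the rough figures Mark mentions (the target of 400 endline students is hypothetical):

import math

def inflate_for_attrition(n_endline_needed, attrition_rate):
    # Baseline sample size needed so that, after the expected attrition,
    # roughly n_endline_needed students can still be found at endline.
    return math.ceil(n_endline_needed / (1.0 - attrition_rate))

print(inflate_for_attrition(400, 0.20))  # 500 students at baseline (within-year tracking)
print(inflate_for_attrition(400, 0.30))  # 572 students at baseline (multi-year tracking)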

Tim: What problems do you think technologies like satellite imagery and GIS can help overcome?

Sarah: One creative way that AAAS is using satellite imagery is to track displaced populations. They do so by seeing which locations are “habitable” or show signs of life. Perhaps this is one way we can locate displaced populations and identify locations for implementing alternative education programs. This may also allow us to creatively track whether or not schools that are built are still standing. Another thought is to use photographs to track school conditions and maintenance.

Mark: I think I mentioned during the webcast that in one country, our school verification exercise included getting digital pictures of schools, then attaching them to data sets so when you click on the name of the school, the picture comes up. This gives headquarters staff an idea of what “school” means – building, under a tree, etc. Also, in time, I could see this linking to Sarah’s idea (above) of tracking school conditions. Eager to hear what other applications you’re familiar with!
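
A minimal sketch of what attaching school photos to a dataset could look like; the file layout and column names here are hypothetical, not taken from the project Mark describes:

import pandas as pd

schools = pd.DataFrame({
    "school_id": ["SCH-001", "SCH-002", "SCH-003"],
    "school_name": ["Example School A", "Example School B", "Example School C"],
})

# One verification photo per school, named by school ID.
schools["photo_path"] = "verification_photos/" + schools["school_id"] + ".jpg"

# Export so that reviewers can click through from a record to its image,
# e.g., by turning photo_path into a hyperlink in a shared spreadsheet.
schools.to_csv("school_verification.csv", index=False)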

Lam: Do you have examples of solutions when organizational M&E capacity is limited (I.e. data collection is just one 'hat' that organizational staff wear and they are stretched pretty thin)?

Karen:  In my experience, it is seldom feasible to ask program implementation staff to conduct extensive M&E or data collection tasks.  It’s also not always desirable, as it can introduce bias.  Hiring outside data collectors is generally the best way to proceed.  In some cases, you can hire them directly; I have worked on many surveys where we have directly recruited enumerators, such as university students. If you have the resources, the easier route is hiring a local data collection firm to collect the data (and take on the logistical arrangements of transport, per diem, etc.).  In some cases, your counterpart ministry may make staff available for data collection.  In every scenario, you will need to take primary responsibility for training the trainers and/or data collection teams; if you are not training directly, your M&E staff should be present to ensure quality and answer questions.   You will also need to be involved in field supervision and plan to do quality assurance spot checks during the data collection activity.   This is where you can often productively and efficiently involve implementation staff, so that they are engaged in the process and have a stake in the data.

Mark: Unfortunately, there’s no easy answer to your question about M&E capacity. M&E by nature requires people who can design M&E programs and tools, test and validate them, use them to collect data, enter and manage data, and if possible, analyze and report on the data. And as Karen noted above, train others to do all this while assuring quality. In our experience, M&E programs are usually understaffed and are rarely carried out as planned. I think the answer is to raise your kids to be M&E officers – there will be plenty of work for them! Until they arrive, though, I think the main thing is to keep it simple – e.g., clearly stated goals, no more than 5 top line indicators, basic calculation of progress (sums & percentages), etc.
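
To make the “keep it simple” point concrete, here is a minimal sketch of basic progress reporting against a handful of top-line indicators (the indicator names and targets below are made up):

# Hypothetical top-line indicators: (name, achieved so far, target)
indicators = [
    ("Students reached", 8200, 10000),
    ("Teachers trained", 310, 400),
    ("Schools with attendance registers in use", 95, 120),
]

for name, achieved, target in indicators:
    print(f"{name}: {achieved}/{target} ({achieved / target:.0%} of target)")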

Sabeen: Are there examples where solely/primarily qualitative methods were used in EiCC settings and were deemed successful and relatively reflective of the larger setting/context? (Over mixed-methods or quantitative methods?)

Sarah: Great question. I think this is less about the data collection method and more about the sampling approach, and also about what it is that you are trying to accomplish. Given the right sample size, qualitative data can be generalized in a similar way that quantitative data can. The size that is needed depends on the population you are trying to generalize to. So, if you are trying to generalize to the nation as a whole, you likely need a fairly large sample. But if you are only trying to generalize to a school, then the sample needed is smaller. In evaluation, we usually find ourselves in between. Sample size is also driven by the level at which you are trying to report your findings. So, if you want to look at program impact on different age groups, ethnic groups, school levels, gender, etc., then your sample size is typically larger so that you can report findings at those levels in a statistically significant way.

Rather than thinking about generalizability, I think it is important to think about what questions you are trying to answer. So, if you are looking at successes and challenges, or the diverse methods by which an activity is implemented, rather than whether or not a particular approach worked, then the sample size is less relevant than capturing a diverse set of voices. In that case you want differing responses. This is one of the biggest downfalls we see in qualitative research: folks tend to report numbers when, in most cases, qualitative work involves a purposive sample and is neither representative nor generalizable. In situations where we have purposively sampled a group of individuals, if 8 individuals have one answer and a single individual says something different, the two responses should be given equal weight. This is because the sample is small enough that we can’t guarantee that the individuals selected were representative. We have a tendency to want to report “trends” or “themes” in the data, but in qualitative research with small samples, all voices are important, even if only a single individual voices a response. The idea of saturation is that you are not hearing ANY new responses, not that you keep hearing the same response over and over. This is another misunderstanding about qualitative research.

As I said in the presentation, what is important is making sure that your research design (methodology) fits the purpose you are trying to achieve. If it turns out that the approach you need to undertake is not feasible given your constraints, then it is important to talk this through with your AOR/COR to determine what is feasible. And then if you face limitations, which always happens, they should be clearly discussed in the report. This isn’t seen as a weakness, but rather as important context for understanding the findings you are presenting.

Mark: I agree with Sarah: talking to your funder (e.g., AOR/COR) is key, especially at the beginning of the project, when there may be a possibility of defining, or redefining, research questions and indicators. The issue is that most M&E questions are based on results frameworks and indicators, which are outputs- or outcomes-based, which usually means some kind of counting and therefore the need to represent something (a region, a country) somehow (census or sample). If in the beginning a balance can be struck between reporting countable things and learning about processes, rationales, perceptions, etc. via more qualitative approaches, then a conversation about alternative research designs is possible. Incidentally, I’m encouraged by the emphasis CLA (Collaborating, Learning, and Adapting) places on learning processes and feedback loops, including methods like focus groups and reflective sessions with stakeholders. https://usaidlearninglab.org/library/collaborating%2C-learning%2C-and-adapting-cla-framework-and-maturity-matrix-overview

Daniel: Re: sampling: overall sample size is a function of structure, i.e., cluster size. Any approaches, guidance, or experiences for estimating a plausible ICC?

Sarah: ICC is a really tough one, because it depends on a lot of factors. While we tend to think of an impact evaluation as a “Gold Standard” or a perfect science, there is actually a lot of subjectivity that goes into the approaches we take, including the assumptions we use for our power calculations. For ICC, I typically start by reviewing the literature to see what ICCs others have used for that particular setting. Sometimes this even means going outside of the education literature to better understand the population the study covers. Basically, when thinking about ICC you are thinking about two things: intra-cluster correlation and inter-cluster correlation, that is, how similar or dissimilar individuals are within a cluster and between clusters. So if there is not another study that has taken place, then you can consider what the population looks like and try to make an educated guess. When working in conflict and crisis settings this can be particularly challenging if you have a fairly mobile population. So I will be curious to hear what approaches others have taken in estimating their ICC.
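
For readers who want to see how the ICC assumption feeds into sample size, here is a minimal sketch using the standard design-effect formula for cluster samples, DEFF = 1 + (m - 1) * ICC; the cluster size, ICC values, and target of 400 are illustrative only:

import math

def design_effect(cluster_size, icc):
    # Standard design effect for a cluster sample with average cluster size m
    # and intra-cluster correlation rho: DEFF = 1 + (m - 1) * rho.
    return 1.0 + (cluster_size - 1.0) * icc

def clustered_sample_size(n_srs, cluster_size, icc):
    # Sample size needed under cluster sampling to match the precision
    # of a simple random sample of size n_srs.
    return math.ceil(n_srs * design_effect(cluster_size, icc))

# Illustration: 10 students assessed per school, plausible ICCs from a literature review.
for icc in (0.1, 0.2, 0.3):
    print(icc, clustered_sample_size(400, 10, icc))  # 760, 1120, 1480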

Mark: We’ve also used IIEP’s guidance to help us think about this question:

http://unesdoc.unesco.org/images/0021/002145/214550E.pdf

However, this question leads me back to the question of how good is good enough? ICC is part of the calculation for generalizability, which usually means a 95% confidence level and 5% margin of error. I wonder if others have views on conditions under which lower confidence levels or higher margins of error would be acceptable – e.g., in high conflict zones, in pilot stages of development, etc. This could make a big difference in sample size and the logistical difficulties and costs associated with it.
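
To give a rough sense of the trade-off Mark raises, here is a minimal sketch using the standard sample-size formula for estimating a proportion (n = z^2 * p(1-p) / e^2, worst case p = 0.5), ignoring design effects and finite-population corrections; the specific levels shown are illustrative:

import math
from statistics import NormalDist

def sample_size_for_proportion(confidence, margin_of_error, p=0.5):
    # Two-sided z value for the requested confidence level.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size_for_proportion(0.95, 0.05))  # ~385 per reporting group
print(sample_size_for_proportion(0.90, 0.05))  # ~271
print(sample_size_for_proportion(0.90, 0.10))  # ~68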

Chemonics: One of the presenters mentioned including students or teachers who questioned why they weren't included in the surveys. Does this negatively affect the statistical design of the monitoring/surveying in random sampling?

Karen: Thanks for letting me clarify my earlier response. I was not suggesting that more students or teachers be added to the sample, or that they be administered the instruments designated for the respondents in the survey/research design. Doing so would absolutely negatively affect any rational sampling plan! Instead, my solution has been to convene informal group discussions, once data collection has been concluded with the sampled respondents, with the non-sampled teachers, parents, or community members who seem eager to express their opinions or aggravated that they were not among the sample. By doing so, you assuage feelings of exclusion and provide protective cover to the actual respondents, who might otherwise be targeted as snitches or “informers.” This need not be done at every school-community, only when the situation warrants. As such, the discussions provide useful insights but are unlikely to qualify as representative. Of course, if you have the time and resources, you could build a regimen of formal focus group discussions into your design.

3 Comments
  1. afzal shah 7 months ago

    Mark Lynd, sorry for the delayed response; I only went through your comment now. The Volunteer Education System (VES) was devised to address the problem of damaged schools. During the period of uncertainty in the area where I work, school buildings and infrastructure were under severe threat and were mostly blown up. The main cause, we observed in the end, was weak linkages between communities and the public sector and a limited understanding of the initiatives. We therefore designed the VES concept, in which a local community member with at least a graduate qualification begins educating 25-40 children within a community building (the number varied case by case due to geographical divisions). These volunteers were provided with incentives based on the children’s assessment results and the number of children registered. The state was responsible for providing books, salaries, and other incentives according to the number and grades of the students. After a set period of four years, and based on progress, the school was supposed to be recommended for infrastructure, i.e., a building in an appropriate place. But this did not last long due to political and economic shifts. The Federally Administered Tribal Areas Education Foundation and the National Commission for Human Development, Pakistan, were the platforms where we drafted this model for implementation.

  2. afzal shah 9 months ago

    The research is excellent, and the methodology chose language as the source of investigation. But one thing is confusing: assessing children at schools seems unwise at this stage, when the system itself is not working and EMIS records are not available; in that case, communities should come under direct support through citizen-led initiatives. What if the research were household-based, covering out-of-school children and never-enrolled children? Dari, Pashto, or other languages on their own do not serve the Sustainable Development Goals; they need to be paired with geographical arrangements using simple location-database techniques. The focus should be locations, not schools, since this may be why most of the children did not participate in school. Early science, mathematics, and other topics of interest should be the focus of research at this phase, especially around the buildings and the communities’ houses and hamlets.
    In conflict areas, one concept that was developed is the Volunteer Education schooling system. This ensures the involvement of each and every stakeholder, including communities and beneficiaries.

    • Mark Lynd 9 months ago

      Hi Afzal, Mark here. I completely agree with you: in order to meet the Sustainable Development Goals, research like the types we described in the webcast should go beyond formal schools to include out-of-school youth and their communities, as well as language minority groups. I’d love to learn more about the Volunteer Education schooling system, so please send information. I’m aware of other volunteer efforts, including ASER/UWEZO assessments conducted largely by community volunteers, and the BRAC model of education, which draws extensively on local citizens and community volunteers to staff its schools, often with very good results. I agree that these and similar volunteer- and community-based initiatives should be part of the conversation, along with the research issues that arise in these contexts. For the webcast, we focused on some of our immediate experiences in formal schools, but as you say, this is not the whole picture. Thank you for your observation!


