In this guest blog, Els Lecoutere from the University of Antwerp in Belgium describes how she assessed the impact of improving intrahousehold decision-making on the efficiency and equity of smallholder farming among HRNS’ member coffee farming households in Uganda and Tanzania.
A randomised evaluation is a type of impact evaluation that uses random assignment to allocate participants to a program as part of the study design. Its main purpose is to determine whether a program has a causal impact, and to quantify how large that impact is.
More: The Poverty Action Lab
The design of the randomised control trial
The scientific ideal is random assignment to a treatment and a control group, which in reality is not as simple as it sounds…
The program promoting a more participatory way of intrahousehold decision-making among coffee farming households hinges on a two-stage approach: an initial, less intensive couple seminar followed by an intensive coaching package for a selection of couples. Going straight for a random selection of couples for the intensive coaching while skipping the couple seminars, or randomly selecting participants for the couple seminars, was not desirable, because the less intensive couple seminars are needed to create awareness at the community level and to identify couples with an interest in intrahousehold decision-making.
The Hanns R. Neumann Stiftung and I reached a compromise: couples would be randomly selected for the intensive coaching package from among the couples participating in the couple seminars. The toughest negotiations, however, revolved around abandoning the self-selection of seminar couples into the intensive coaching package. As a researcher, I emphasised the objective of establishing the impact of the intensive coaching package – which required some form of randomisation. This package is scientifically the most interesting part, because it is where participatory intrahousehold decision-making is actually introduced, whereas the couple seminars are more about raising awareness. The initial fear that a random selection of couples for the intensive coaching package would not be effective proved unfounded, as we do observe impact; but the question of whether random or self-selection of couples is more effective at engendering change remains unresolved.
The next hurdle was the fact that we cannot force people into an intensive coaching package about intrahousehold decision-making – I can think of farmers who are not ready for that. Besides, from an ethical perspective, and from the perspective of the Hanns R. Neumann Stiftung, which needs to maintain sound social relations with its members, it is not desirable to bar couples we did not select from the intensive coaching if they really insist. Thus, a randomised encouragement design it had to be: a couple that is randomly encouraged to take the intensive coaching can opt out, and a couple that is not encouraged can opt in.
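Because couples can opt in or out, an encouragement design is typically analysed by comparing groups by their randomly assigned encouragement status (the intention-to-treat effect) and then rescaling by the difference in take-up to recover the effect among compliers (the Wald, or local average treatment effect, estimator). A minimal sketch with simulated data – the sample size, take-up rates, and effect size are purely hypothetical, not the study's data or its actual analysis code:

```python
# Sketch of how a randomised encouragement design can be analysed.
# All numbers below are hypothetical, for illustration only.
import random
from statistics import mean

random.seed(1)
N = 20_000
TRUE_EFFECT = 2.0  # hypothetical effect of the coaching on an outcome score

enc_outcomes, ctl_outcomes = [], []  # outcomes by ENCOURAGEMENT status
enc_takeup = ctl_takeup = 0          # how many actually took the coaching

for _ in range(N):
    encouraged = random.random() < 0.5
    # couples can opt in or out: encouragement raises take-up (here 10% -> 80%)
    takes_coaching = random.random() < (0.8 if encouraged else 0.1)
    outcome = random.gauss(10.0, 1.0) + (TRUE_EFFECT if takes_coaching else 0.0)
    if encouraged:
        enc_outcomes.append(outcome)
        enc_takeup += takes_coaching
    else:
        ctl_outcomes.append(outcome)
        ctl_takeup += takes_coaching

# Intention-to-treat: compare by encouragement status, not by actual take-up
itt = mean(enc_outcomes) - mean(ctl_outcomes)
takeup_diff = enc_takeup / len(enc_outcomes) - ctl_takeup / len(ctl_outcomes)
# Wald / LATE estimator: rescale the ITT by the difference in take-up
late = itt / takeup_diff

print(f"ITT effect:  {itt:.2f}")
print(f"LATE (Wald): {late:.2f}")
```

The point of the sketch is that comparing couples by whether they actually took the coaching would be biased by self-selection, whereas comparing by the random encouragement preserves the experiment.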
The implementation of the randomised control trial and data collection
Proud of our well-crafted encouragement design, we still had to implement it. I made sure all field staff and enumerators involved grasped the idea of an RCT with my – by now memorable – example of randomly selected chickens used to test an egg-booster vitamin. The implementation hinged on swift communication between myself and the gender officers, who sent me the attendance lists whenever they had conducted couple seminars. In turn, I randomly selected the couples to be encouraged and not encouraged for the intensive coaching package and transferred the lists of wives and husbands to be interviewed to the enumerators as soon as possible. Believe me, the gender officers made lists, and I randomly selected couples and compiled lists on weekends, in evening hours, in early mornings…
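The randomisation step itself is simple to script. A minimal sketch of how an attendance list might be split into encouraged and non-encouraged couples – the function name, list structure, and numbers are illustrative, not the actual field procedure:

```python
# Sketch of the randomisation step: splitting a couple-seminar attendance
# list into encouraged and non-encouraged couples. Illustrative only.
import random

def randomise_couples(attendance_list, n_encouraged, seed=None):
    """Randomly pick couples to encourage for the intensive coaching."""
    rng = random.Random(seed)       # seeded RNG makes the draw reproducible
    shuffled = attendance_list[:]   # copy, so the original list is untouched
    rng.shuffle(shuffled)
    return shuffled[:n_encouraged], shuffled[n_encouraged:]

# Hypothetical attendance list from one couple seminar
seminar_attendees = [f"couple_{i:03d}" for i in range(1, 31)]
encouraged, not_encouraged = randomise_couples(seminar_attendees, 15, seed=7)
```

Fixing the seed keeps each draw reproducible and auditable, which matters when randomisation happens in batches, seminar by seminar, as described above.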
The scientific ideal is a strictly scheduled sequence of the different activities that make up the intensive coaching package, and it is very important that every couple gets the same ‘dose’. Rain – downpours like you can have them in Uganda and Tanzania – budgets, illness, and the temporary unavailability of respondents made the implementation and data collection schedule somewhat messier than planned. But regular communication – our WhatsApp hotline – and strict monitoring of the activities and their timing by the gender officers and myself made it all work out.
After the challenge of managing the implementation and baseline data collection, the endline data collection was a piece of cake with its predetermined lists and schedule. Tracing people can be difficult, but the lab-in-the-field experiment with real payoffs that we organised as part of the endline was a motivating factor…
Sharing results with Hanns R. Neumann Stiftung
During feedback meetings with the field staff, we had detailed discussions about the observed positive and negative effects, and about absent effects, immediately followed by reflection: How can we deal with negative or absent effects? What can be changed? I, as the researcher, had a role to play as an informed third party. But an impact evaluation only evaluates the current program; it does not say anything about possible adaptations and why they would work, or work better. Suggesting adaptations took me out of my comfort zone as a researcher. And something I had not really thought about was the stress the impact evaluation caused some of the gender officers: “Did our efforts make a change?” “What if our program is not effective? Or worse, has negative impacts?” I think they slept better after seeing positive impacts…
When feeding back results at the headquarters of the Hanns R. Neumann Stiftung, there was an obvious interest in the detailed results, the learning, and the possible improvements, but also a great interest in communicable facts and figures in digestible formats, fitted within a communication strategy aimed at partners, donors and the general public. As a social science researcher, I am more at ease presenting facts in a nuanced way, listing all the possible caveats, but I understand the power of punchy key facts.
What I also noticed is that, with strategic planning on the agenda, the advocates of the Gender Household Approach see the impact evaluation as an opportunity to show its potential and to strengthen and scale out the approach. It feels good that my little contribution to promoting cooperation and equity within farming households by studying them is taken up to make a change for a whole lot of men and women farmers.
The design of a monitoring tool
Together with a team of Hanns R. Neumann Stiftung staff from the headquarters and the field offices, we co-designed a flexible but solid, updated monitoring tool with a clear definition of key expected outcomes, inspired by some of the research tools used in the study and adapted to the intervention logic and reality.