The COVID-19 pandemic pushed data collection for research from in-person to remote methods. Researchers and practitioners alike were rapidly mobilized to learn new systems, adapt instruments, and engage in a great deal of trial and error. Today, nearly two years after the first shutdowns, the educational research sector is cautiously moving back to in-person activities with new expertise and new questions. What will we continue to do remotely? What can’t we do remotely? What does this mean for our future interventions? Answers will vary by organization and context. Our experience comes from piloting a new intervention, the SMS-based Parental Nudges Program (PNP), in northern Ghana.
“The goal was to improve caregiver engagement in educational activities and beliefs on the value of investing in children – all with a focus on gender equity.”
The northern regions of Ghana struggled to achieve equitable education outcomes before the pandemic, with the lowest enrollment, attendance, and literacy rates in the country, and wide gender inequalities. Engaging parents in child education became a policy goal in these regions, especially during school closures. This motivated us to partner with Innovations for Poverty Action and Movva Technologies to develop and test the impact of short behavioral “nudges” in the form of text messages. These nudges were based on an existing evidence-based program (Ready4K!) and adapted to the Ghanaian context. The goal was to improve caregiver engagement in educational activities and beliefs on the value of investing in children – all with a focus on gender equity. An example message to parents was: “Your daughters have a lot to say! Sitting down to talk as a family strengthens your relationship and creates a space for children to be guided.” Parents received a follow-up message a few days later prompting them to have a conversation with their daughters and sons, and two additional messages the following week encouraging the same activity. Although the intervention was designed for remote implementation in other settings, it had never been used in northern Ghana.
Piloting an intervention usually employs user-centered methodologies, which involve repeated interaction with the user(s) of the product or service. Although remote user design and research are common for digital products, the unreliable internet connectivity of rural northern Ghana made remote user-testing tools challenging to use. We therefore created a piloting team in northern Ghana that met regularly over Zoom with the remote team based in Tamale, Ghana.
“Remotely piloting an intervention came with a lot of challenges but was necessary during country-wide restrictions in Ghana.”
We used an Improvement Science technique for implementing and testing a change, consisting of Plan-Do-Study-Act cycles: we planned the program, tried it, observed the results, and acted on what was learned. The first round of messages we piloted had a high delivery rate – around 90% of text messages were successfully sent – but only 32% of caregivers remembered receiving a message. Caregivers told us that the timing of the messages was critical, and that the language needed to be simpler and easier to understand across reading levels. In the next round of piloting we sent messages at individualized times according to each caregiver's stated preference and simplified the language. Engagement rates rose substantially, and feedback was more positive.
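The gap between the two figures above matters: a message can be successfully delivered by the carrier yet never register with the caregiver, so delivery and recall must be tracked separately. A minimal sketch of how such pilot metrics might be tallied, using invented example records (the field names and numbers are illustrative, not the study's actual data):

```python
# Illustrative only: invented pilot records, not the study's dataset.
# Each record notes whether the SMS was delivered by the carrier and
# whether the caregiver later reported remembering it in a follow-up call.
records = [
    {"delivered": True, "recalled": True},
    {"delivered": True, "recalled": False},
    {"delivered": True, "recalled": False},
    {"delivered": False, "recalled": False},
]

def rate(records, key):
    """Share of records where `key` is True, as a percentage."""
    return 100 * sum(r[key] for r in records) / len(records)

delivery_rate = rate(records, "delivered")  # carrier-level success
recall_rate = rate(records, "recalled")     # user-level engagement

print(f"Delivered: {delivery_rate:.0f}%  Recalled: {recall_rate:.0f}%")
```

With these invented records, delivery looks healthy while recall lags behind it, the same pattern the first pilot round surfaced.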
Remotely piloting an intervention came with a lot of challenges but was necessary during country-wide restrictions in Ghana. It could not replace the in-person interactions that help us understand how to meet a user’s needs beyond simple program modifications. Observing users engaging with the program and establishing a trusting relationship for candid feedback requires face-to-face interactions that simply cannot be replaced by phone calls and the analysis of delivery rates.
“Piloting remotely, with co-creation, can offer some value in ensuring that the program is appropriate for the user.”
Nonetheless, our pilot still offered value. We think there are two critical aspects to a successful remote pilot. The first, and most crucial, is to co-create the plan, metrics, measures, and changes with those closest to the users, and ideally with the users themselves. Members of our pilot team were from communities in northern Ghana and were as close to the caregivers as the remote team could get given the unpredictable network connectivity.
The second is to define success and related measures during the piloting phase, such as the percent of caregivers who remembered receiving the messages. Doing so allowed us to quickly integrate those survey items into subsequent data collection. The midline survey for the larger study, for example, showed that 71% of caregivers remembered receiving the messages. This indicates improvement over the initial 32%, but also shows there is room to reach more caregivers.
Piloting is a crucial part of developing and implementing an intervention, especially in a new context. It remains to be seen whether remote piloting can be as effective as in-person piloting. But piloting remotely, with co-creation, can offer some value in ensuring that the program is appropriate for the user.