Africa’s Voices helps government decision-makers, development agencies, service-providers and humanitarians design programmes that are grounded in people’s everyday realities. We do this by combining interactive media, mobile technology, and cutting-edge data and social science to listen intelligently to citizen feedback.
Since Africa’s Voices’ earliest days as a pilot research project at the University of Cambridge, we have been preoccupied with the phrase “valuing voices”. It’s a loaded phrase, one with all sorts of meanings and motivations – an ethical commitment to citizens as agents, a call to arms in an era of big data, a methodological challenge for analysing rich local-language text at scale, and a test for human-centred technological innovation. It is also, and above all, synonymous with how we understand and value citizen feedback.
If listening to citizens is at the heart of effective development and governance, then valuing voices, i.e. valuing citizen feedback, becomes a humanising act, an act of dignity.
Where citizens are better heard by decision-makers, they access more timely, relevant and valuable services. Yet development, humanitarian and governance initiatives often extract from citizens rather than engage with them. Extractive surveying methods are abstracted from, or of little relevance to, how citizens perceive their priorities and opportunities. Similarly, the prevalent use of closed-answer questions constrains what citizens are able to express. As a result, development is done to people rather than for them and with them. Feedback – citizen consultation – becomes sidelined or deprioritised, a luxury afforded only to those with unrestricted budgets who are unhampered by tight project deadlines.
In truth, across aid and governance in developing country contexts, the incentives are often aligned against listening and being accountable to citizens.
Where feedback is practised, it is often done in an extractive, one-way manner. This is an old conundrum in international development.
Africa’s Voices is trying to reverse this, to close the feedback loop between citizens and service-providers. Below are three lessons from our journey so far:
- Voices are more than data points. To “value voices” is to place “voice” at centre stage and to juxtapose it with the dominant frame of “data”. Feedback originates from individuals who exercise voice as social agents within a social reality they care about. At Africa’s Voices, we veer away from extractive feedback-collection methods like perception surveys that prescribe how citizens may respond. For example, we use interactive radio (radio debates driven by citizen input through SMS to a free short code) to curate conversations in local languages and allow audiences to drive them. This way, feedback is the output of the real, discursive social spaces that emerge. A recent deployment of our interactive radio in Mogadishu, consulting citizens on solutions to the city’s displacement crisis, produced the surprising finding that crowdfunding (broadly understood) was seen as a good way to provide relief to displaced communities. It is hard to imagine an aid organisation working in IDP settlements asking the kind of open question that would have surfaced this finding.
- Feedback is a two-way conversation. Feedback, especially in the context of humanitarian emergencies, is often practised as a one-way exercise. Is this not equally extractive? But what if we were able to sustain a two-way channel of communication with citizens, directly and at scale? We tested this with cash transfer recipients in Somalia, using SMS channels to engage them in responsive, two-way feedback processes that enable transparency in programme decision-making as well as complaint resolution. Not only did this improve the accountability of cash transfers, but it also fostered a greater sense of trust, inclusivity and agency among beneficiaries, who felt consulted in decision-making.
- Interpretation of feedback matters, and it should be done by people. There is a surge in tech solutions that claim to extract insights from people’s feedback by structuring, analysing and synthesising big data. Most rely on reductive, and often dangerous, applications of machine learning to automate data analysis. Our perspective is that bots are not the solution: interpretation of feedback matters, and it should be done by people rather than machines – people deserve to be heard and made sense of by other people. This is why we use machine learning and artificial intelligence to augment the interpretive capacities of human researchers rather than to automate data analysis. Humans, unlike computers, can register doubt, curiosity, emotion and authenticity – qualities that are critical when reviewing feedback meant to drive decisions that affect human lives.
We know that people value this approach to feedback. In 2018, we used interactive radio to consult Somali citizens on humanitarian priorities. At the end of the process, we asked whether it had made them feel involved in decision-making. One female participant responded:
“I feel involved because community consultation is always the best thing to do and I personally believe that I am part of the decisions in the community and we appreciate a lot those who made this safe spaces to talk like the radio presenters, the leaders involved and those aid organisations who are involved as well.”
The original post appeared on Feedback Labs’ Three Things Thursday blog in June 2019.