Understanding the impacts of civilian peacebuilding efforts remains a fairly information-light space. While researchers face practical challenges in the kinds of contexts that necessitate the building of peace, complexity is not unique to the field. Less-recognised challenges persist, though, relating to unit-of-analysis problems, roll-out designs and even the choice of outcome indicators, and these might actually explain more of the blockage. While none of these challenges is unique, together they may play a uniquely dominant role in holding our field back. Allow me to try to explain my experience working against (and with) some of these problems.
To do so, I’ll start with a bit of macro.
The receipt of development assistance and other forms of aid appears to reduce violence at the aggregate level. In jarring contrast, individual interventions, if they do anything at all, might make matters worse (at least when we look at those run at the scale of the standard treatment, rather than at rarer, state-level interventions). This outcome is superficially disappointing but, analytically, it shouldn't be surprising. In an aggregate-level analysis, all kinds of funds doing all kinds of things are pooled together. Some build infrastructure; some fund livelihoods, human-capital or resilience programming; some boost food security; some support water, sanitation and hygiene (WASH); and some go to conventional peacebuilding projects, like transitional justice or support for local mediation processes.
Analytically and practically, it makes sense to ask if the aggregate outcome of those strategically pooled programmes influences the level of violence. Both externally and internally, conflicts have a range of causes that determine their onset, intensity and duration. Asking if multiple strands of programming, jointly, influence that multi-causal phenomenon is logical.
It is less clear why a project that targets only one cause should be measured against the same outcome. This, however, is what some donors seem to demand and what some researchers attempt to do. Testing the impact of something that, at best, targets a subset of a multitude of causes is not only unfair; it sets the evaluation up to fail. All of this means we need to be very careful about the expectations we form for individual projects.
Rather than asking the extent to which a programme has reduced violence, we should consider its outcomes, both against what it is feasible for it to achieve and against the relationship between those outcomes and peace. No one should expect $5m spent training mediators to resolve an entire war; still less should anyone expect a $5m vocational training programme to do the same.
Even if both are integral parts of a broader strategy, evaluating each component in isolation against violence asks too much. We might be better served by asking a slightly different question, such as whether a programme contributes to building a peaceful environment – for example, whether it addresses the extent to which individuals have competing or overlapping interests, or (group-based) interpersonal differences.
A failure to reframe this question affects the relevance of many analyses because it sets them off looking for effects at a level inappropriate for the programme. For example, in some recent work in Jordan and Lebanon, where we evaluated a fairly small technical and vocational education and training (TVET) programme, we would have found nothing by looking at measures of violence – not least because neither country has been particularly violent in recent times. By contrast, by looking at interpersonal behaviours and group-based interactions, we gave ourselves the chance to study something related to conflict that might plausibly move as a consequence of a small TVET programme. What we saw was members of refugee communities becoming more generous to members of their host community following participation in the co-ed training workshops.
Of course, the problem we will always face is that such findings cannot offer conclusive proof that co-ed TVET builds peace. They do, however, take a step towards demonstrating the kinds of social outcomes such programmes need to have if they are to contribute to a peaceful environment.
External concerns, however, are also relevant, particularly because they systematically influence the kinds of interventions that are evaluated in the first place. Although not unique to the peacebuilding field, the design of many standard interventions within it is at odds with the needs of impact evaluation. How one can apply a standard impact evaluation approach to a national-level transitional justice programme, for example, is unclear.
Standard approaches are similarly ill-equipped to deal with roll-out to a small number of large clusters, as might be required for meaningful land-reform projects. Analysing only the specific subset of programmes deemed suitable for impact evaluation is therefore a recipe for disappointment, because parts of what does the lifting are systematically excluded. Indeed, given that conflict exists as much at the collective level as the individual one, programmes with the right roll-out structure for impact evaluation are likely to be among the least effective at moving community-level indicators.
This might well go some way to explaining why results in this field do not match the hopes and expectations of programme designers and implementers, while still allowing us to remain optimistic about the gains possible from civilian peacebuilding.
All of this implies that only a subset of peacebuilding interventions are suitable for evaluation, at least with standard tools: specifically, those that target individuals or discrete communities, rather than those that target a conflict as a whole.
Because of the demand for attribution, there is a tendency to look at a small number of inputs that can easily be separated in implementation. Although not always the case, interventions of this form might have a shorter reach: they affect individuals rather than communities, or some communities but not all within a place, and they provide a narrow and limited range of support. In turn, it is rational to expect a weaker, or narrower, set of outcomes from analyses of this sort, and to expect such programmes to be less likely to shift the multi-causal, aggregate indicator they are tested against.
This adds to the problem, especially since it builds an expectation of null, or adverse, findings, which it is rational to expect will reduce the desirability of future assessments. As a consequence, the problem compounds.
This calls both for new thinking about how we evaluate the kinds of programme that do match the standard requirements of a rigorous impact evaluation, and for new approaches to analysing promising interventions that do not. These approaches can keep experimental and quasi-experimental quantitative methods at their core, but they need to be adapted to situations where it is not easy to define treatment and control groups, or baseline and endline phases – for example, by borrowing ideas from encouragement designs to vary "exposure" to treatment within treatment locations. It might also mean, at least as a first step, foregoing attribution in order to understand the impacts of broader, multi-faceted strategies.
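To make the encouragement-design idea concrete, here is a minimal toy simulation. All numbers in it are invented for illustration: everyone sits in a "treated" location, a random half receives extra encouragement to take up training, and the effect of actual take-up on a generosity-style score is recovered with a simple Wald (instrumental-variables) estimate – the intention-to-treat difference divided by the take-up gap. This is a sketch of the general technique, not a description of any specific evaluation.

```python
import random
import statistics

# Toy encouragement-design simulation (all parameters are hypothetical).
random.seed(42)
n = 10_000

# Everyone lives in a treated location, but only a random half receives
# extra encouragement (say, a personal invitation) to join the training.
encouraged = [random.random() < 0.5 for _ in range(n)]

# Encouragement raises take-up from 20% to 70% in this invented setup.
take_up = [random.random() < (0.7 if z else 0.2) for z in encouraged]

# Outcome: a generosity-style score that rises by 1.0 (the assumed "true"
# effect) for actual participants, plus noise.
outcome = [1.0 * d + random.gauss(0.0, 1.0) for d in take_up]

def group_mean(values, flags, flag):
    """Mean of `values` where the matching entry of `flags` equals `flag`."""
    return statistics.fmean(v for v, f in zip(values, flags) if f == flag)

# Intention-to-treat effect of encouragement on the outcome ...
itt = group_mean(outcome, encouraged, True) - group_mean(outcome, encouraged, False)

# ... and the "first stage": how much encouragement shifts take-up.
first_stage = (group_mean([float(d) for d in take_up], encouraged, True)
               - group_mean([float(d) for d in take_up], encouraged, False))

# Wald / instrumental-variables estimate of the effect of take-up itself.
iv_estimate = itt / first_stage
print(f"first stage: {first_stage:.2f}, IV estimate: {iv_estimate:.2f}")
```

The point of the sketch is that randomising encouragement, rather than the programme itself, lets the analyst recover an effect of participation even when everyone in a location is nominally eligible for treatment.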
This will, hopefully, open the door to better evaluations of the more promising projects, and baskets of projects, that are a key component of the peacebuilding toolkit, as well as allowing the field to demonstrate the enormous contribution it makes to reducing conflict.
The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of Vision of Humanity.