Unravelling the 2016 Global Slavery Index. Part two

Slave ship, J.M.W. Turner

The Global Slavery Index emerged out of a worldwide social and political movement that seeks to expose and eliminate the exploitation of human beings for private profit. While the Index’s efforts to quantify the problem of modern slavery are laudable, this should not preclude careful scrutiny of its methodology and how that methodology is applied. This article, the second in a two-part series, analyses two of the Index’s key measurements: prevalence (how many slaves are present in each country) and the quality of government responses. It then turns attention to the apparent reluctance to critically engage with the Index on the part of those who are well placed to identify its technical limitations and guide improvements. What is behind this reticence? And does it even matter? Is there perhaps some value to the argument that it is unfair and counterproductive to criticise those who are acting in good faith, who are genuinely seeking to do good and useful work to end human exploitation?

Assessing country-level prevalence

Prevalence, in this context, refers to the number of persons within a given country who have been subject to modern slavery, whether at home or abroad – or, in the liberal language of the GSI: persons whose “individual freedom has been restricted in order to exploit them”. For 25 of the 167 countries assessed by the GSI, prevalence figures are based on information secured through random sample surveys conducted during the period 2014 to 2015. Around one thousand people were interviewed in each country. Of the total pool of nearly 29,000 respondents, just 459 affirmed that they or someone in their immediate family were or had been in a situation of forced labor or forced marriage sometime in the past five years. The modest nature of these figures must be highlighted because, as discussed below, they provide the raw data on which the entire prevalence structure of the Index is built.
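
To make the stakes of those small raw numbers concrete, here is a minimal sketch, in Python and with entirely hypothetical figures, of how a national prevalence estimate is extrapolated from a random sample survey of this kind: the affirmative share of the sample is treated as the population rate and scaled up. It is an illustration of the basic arithmetic, not a reproduction of the Index's actual weighted calculation.

```python
# Illustrative back-of-the-envelope extrapolation from survey responses to a
# national prevalence estimate. The figures below are hypothetical and do NOT
# reproduce the GSI's actual (weighted and adjusted) calculation.

def naive_prevalence_estimate(affirmative, sample_size, population):
    """Scale the sample's affirmative rate up to the national population."""
    rate = affirmative / sample_size  # share of respondents reporting forced labor/marriage
    return rate, rate * population

# Hypothetical country: 18 affirmative answers out of ~1,000 respondents,
# national population of 50 million.
rate, estimate = naive_prevalence_estimate(affirmative=18, sample_size=1000,
                                            population=50_000_000)
print(f"estimated rate: {rate:.2%}, estimated victims: {estimate:,.0f}")
# A handful of responses either way shifts the national figure by tens of
# thousands of people, which is why the small raw numbers matter so much.
```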

For the United Kingdom and the Netherlands, slavery prevalence was determined on the basis of two studies that examined rates of human trafficking (which is not the same as ‘modern slavery’). Both studies used ‘multiple systems estimation’ (MSE), a statistical methodology popular in human rights and conflict research that utilizes overlaps between incomplete information sources to estimate what might be termed the ‘dark figure’ (in this case, the number of victims who have not come to public attention and are therefore not on any available list). MSE is considered an alternative when suspected low rates of prevalence make low-number random surveys unworkable. Application of this methodology by the UK National Crime Agency in 2013 supported a conclusion that the ratio between the number of detected and undetected victims is around one in four. The GSI uses that information to assert that there are presently 11,500 victims of ‘modern slavery’ in the UK – or 0.02% of the population. In relation to the Netherlands, the Index relied on a study commissioned by the UN Office on Drugs and Crime. Using the same methodology, that study found prevalence rates to be more than five times higher, at 0.104%. These substantial differences between two countries with very similar risk profiles (together with the very different ways in which States identify and record victims in the first place) indicate that the MSE methodology requires further interrogation before it is endorsed as the go-to backup in situations where the already-problematic standard survey approach cannot be presented as a credible option.
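
The simplest form of multiple systems estimation is a two-list capture-recapture calculation. The sketch below, with invented list sizes, shows the basic logic of estimating the ‘dark figure’ from the overlap between two incomplete victim lists. Real MSE applications, including the UK and Netherlands studies, use several lists and log-linear models, so this is an assumption-laden illustration of the principle rather than the method actually applied.

```python
# Two-list capture-recapture (Lincoln-Petersen), the simplest version of the
# 'multiple systems estimation' logic described above. The list sizes here are
# invented for illustration only.

def lincoln_petersen(list_a, list_b, overlap):
    """Estimate total population size from two incomplete lists and their overlap."""
    if overlap == 0:
        raise ValueError("No overlap between lists: estimate is undefined.")
    return (list_a * list_b) / overlap

# Hypothetical: 1,200 victims known to law enforcement, 900 known to NGOs,
# 300 appearing on both lists.
total = lincoln_petersen(list_a=1200, list_b=900, overlap=300)
dark_figure = total - (1200 + 900 - 300)  # victims appearing on neither list
print(f"estimated total: {total:.0f}, estimated undetected: {dark_figure:.0f}")
# The estimate is highly sensitive to the size of the overlap and to the
# assumption that the lists are independent, one reason results can vary so
# much between otherwise comparable countries.
```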

The remaining 139 countries – those without any survey or other data point – were then divided into 12 groups, each considered, with reference to the vulnerability data, to have a common or at least similar risk profile. Each of the 12 groups was then assigned one or more of the 27 data points (the 25 country survey results, the UK MSE estimate and a six-year-old survey of sexual violence in the Democratic Republic of the Congo that replaced the Netherlands MSE data). Regrettably, the detailed methodology attached to the Index does not tell us which of the 139 countries made it into which group, or how multiple data points within each group were used to calculate individual country prevalence. For example, we are only told that Group 1 comprises 13 unspecified countries, the prevalence estimates of which are based on survey data from Pakistan and Nigeria and the sexual violence study from the Democratic Republic of the Congo.
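
As far as the published methodology allows us to reconstruct it, this extrapolation step amounts to combining the prevalence rates of the few ‘donor’ data points assigned to a group and applying that combined rate to every country in the group. The sketch below uses invented rates and group membership to show how strongly each member country’s estimate depends on which donors it happens to share a group with; it is a plausible reading of the method, not the Index’s disclosed calculation.

```python
# Illustrative group-based extrapolation: apply the average prevalence rate of
# a group's 'donor' data points to every country assigned to that group.
# Group membership and rates below are invented; the GSI does not publish
# which of the 139 countries fell into which group.

donor_rates = {
    "Country A survey": 0.0075,
    "Country B survey": 0.0102,
    "Conflict-zone study": 0.0210,
}  # hypothetical prevalence rates

group_members = {"Country X": 30_000_000, "Country Y": 8_000_000}  # populations

group_rate = sum(donor_rates.values()) / len(donor_rates)

for country, population in group_members.items():
    print(f"{country}: {group_rate:.2%} -> {group_rate * population:,.0f} estimated victims")
# Swapping a single donor data point changes every member country's estimate,
# which is why the undisclosed group allocations matter.
```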

Additional refinements (affecting 40 of the 139 countries for which there was no survey or other data available) sought to adjust the raw group allocations to take account of unique or particular factors such as conflict and the existence of state-imposed forced labor. For example, the prevalence estimate for Uzbekistan, which would have been calculated with exclusive reference to the survey results for Russia, was recalibrated to account for the known existence of forced laborers in the agricultural sector. Other untestable adjustments were made when the results of the extrapolation exercise failed to produce the desired results – or, in the words of the Index, when “selected stakeholders and country experts” felt such adjustments were necessary to “better reflect known realities”. For example, while the vulnerability data on China indicated that prevalence should be measured with reference to countries such as Myanmar and India, factors such as high internet penetration, stronger infrastructure and a dramatically different level of economic growth were considered sufficiently relevant for it to be measured instead with reference to Vietnam, whose country survey indicated much lower levels of vulnerability. But the reasoning behind most of the expert adjustments is much less specific, and in some cases it doesn’t make sense. For example, Iceland, Finland, Portugal and Italy were all subject to unspecified “geopolitical adjustments” on the basis of their “geopolitical similarities to Western Europe”. Unfortunately, the Index’s authors do not reflect on what the perceived need for such adjustments tells us about the validity of the underlying methodology and the data it produced.

Assessing government responses

It is impossible to explore this aspect of the Index without reference to its unacknowledged lodestar: the annual U.S. State Department Trafficking in Persons (TIP) reports which, since 2001, have been assessing and ranking national responses to trafficking in every country, every year. Many States understandably object to heavy-handed scrutiny and criticism from the United States Government, but few would question the reports’ influence on legal, institutional and attitudinal change in this area. The assessment criteria used in the Reports are set out in the relevant federal statute and cover a range of factors that all fall loosely under the headings of prevention, protection and prosecution. As I have explored elsewhere, the criteria, which have been refined over the years, more or less track the international legal obligations that govern State conduct in this area. The standard of the reports has improved steadily over the years, and the individual country assessments paint a necessarily incomplete but generally realistic picture of the trafficking situation and of the quality of the relevant government’s response.

The GSI’s assessment methodology, while clearly based on the US approach, is much more complex. It is structured along the lines of a logical framework for a development project – complete with five overarching milestones (broad goals, sometimes confusingly referred to in the narrative as “outcomes”); several dozen intermediate outcomes (more specific goals, sometimes referred to in the Index as “activities”); almost one hundred general indicators of “good practice” attached to the outcomes; and many more specific ‘description’ indicators that explain exactly what should and should not be taken into account.

Throughout, there appears to be an in-built bias in the assessment tool towards wealthier countries. At the most basic level, the Index places a high premium on information availability, which is directly related to both capacity and resources: if information relevant to an indicator is not available, the country receives a zero score against that indicator, the same as if it did not meet the indicator at all. Wealthy countries benefit in other ways: only they can realistically be expected to meet indicators such as funding or undertaking research, or developing and implementing expensive victim support programs. And one entire milestone, “governments stop sourcing goods and services linked to modern slavery”, seeks to capture an advanced response that is understandably not on the agenda of any poor or even middle-income country. A table that rates government response against GDP provides important additional perspective and balance: for example, serving to highlight Singapore’s relatively poor performance when measured against capacity – and the Philippines’ relatively strong one. But this one table cannot remove the in-built distortions that, just like the TIP Report, will almost always see wealthy countries of destination come out on top. This matters because it obscures the reality that wealthy countries of destination stand to reap the greatest benefits that flow from human exploitation.

Despite these weaknesses, the framework for assessing government responses appears to broadly reflect the current, imperfect state of our knowledge about what can be done to address ‘modern slavery’ and what might make it worse. But the Index projects the framework as something much more precise and definitive. This approach is highly problematic, not least because complexities in the methodology obscure multiple assumptions and value judgments.

These problems are brought into sharp relief when considering how country ratings are calculated. Groupings of indicators (outcomes) are evenly weighted and, with just a few exceptions, the 98 indicators also receive the same weighting. What this means is that a core outcome such as “victim-determined support is available for all identified victims” (the holy grail of a victim-centered response that has not yet been achieved anywhere in the world) has the same value for assessment purposes as “domestic legislation is in line with international conventions” (which is extremely complex to assess and can be of greater or lesser significance depending on myriad factors). It also means that both these outcomes have the same value as the wide-ranging “safety nets exist for vulnerable populations”. Furthermore, equivalence in indicator weighting means that the indicator “government facilitates and funds research” has the same absolute value as the much more important indicator “government facilitates and funds labor inspections”, and that ratification of the highly contested and substantively problematic UN Migrant Workers Convention has the same value as ratification of the centrally important UN Trafficking Protocol.
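
A minimal numerical sketch of what flat weighting implies follows. The indicator names paraphrase the outcomes discussed above, and the scores are invented; the point is simply that whether a country meets a demanding, high-impact indicator or a comparatively trivial one moves its rating by exactly the same amount.

```python
# Invented scores showing the effect of equal weighting across indicators of
# very different difficulty and importance. Weights and scores are
# illustrative only, not the Index's published data.

indicators = {
    "victim-determined support available to all identified victims": 0,  # met nowhere to date
    "domestic legislation in line with international conventions": 1,
    "safety nets exist for vulnerable populations": 1,
    "government facilitates and funds research": 1,
    "government facilitates and funds labor inspections": 0,
}

score = sum(indicators.values()) / len(indicators)  # every indicator weighted equally
print(f"rating: {score:.0%}")
# Funding research and funding labor inspections count the same; so do the
# hardest indicator on the list and the easiest one.
```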

The problem is not just the false assumption of like for like across both outcomes and indicators, but also the Index’s transformation of qualitative information into quantitative data. In order to calculate precise rankings, the Index applies a 1=yes, 0=no formula to situations, conduct or actions that can never be captured in this binary way. For example, the indicator “government interventions … are evidence based” (or, as the attached description clarifies: “interventions are based on strategies or theories of change identified by research”) is impossible to measure on a yes-no scale for myriad reasons, including the fact that research in this area is of an uneven and often very low standard and, critically, that governments themselves rarely articulate the evidence base or theory of change on which their intervention is based. Furthermore, as mentioned previously, a score of zero may be given on the basis of no available data, but a score of zero can also be given if the information available “explicitly demonstrated the government did not meet any indicators”.
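
The binary coding compounds the problem. The sketch below, again with invented categories and data, shows how “no information available”, “unassessable” and “explicitly did not meet the indicator” all collapse to the same zero once the 1/0 formula is applied.

```python
# Illustrative binary coding of qualitative evidence, with missing information
# scored the same as an explicit failure, as described above. The indicator
# names and evidence categories are invented for the example.

def score_indicator(evidence):
    """1 = yes, 0 = no; missing or unassessable evidence also scores 0."""
    if evidence == "met":
        return 1
    return 0  # "not met", "no data" and "unclear" are indistinguishable

observations = {
    "interventions are evidence based": "unclear",
    "victims are not detained or deported": "no data",
    "national action plan exists": "met",
}

scores = {name: score_indicator(evidence) for name, evidence in observations.items()}
print(scores, "-> total:", sum(scores.values()))
# A country that publishes nothing and a country shown to have failed the
# indicator receive identical scores.
```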

And the devil is also in the detail of how the assessment framework is actually applied. The US Government flexes its considerable political muscle to effectively coerce governments into providing certain information on their response to trafficking. This information is checked and supplemented through in-country work by the State Department that mobilizes the abundant resources and contacts of US embassies throughout the world. The GSI assessment of government responses against each indicator uses a range of sources: desk reviews of publicly available information (including, clearly, the TIP Reports themselves); information received from governments that responded to a questionnaire (38 of the 161 countries assessed); and other information obtained through “in-country experts” and online survey responses from and interviews with “NGOs and partners”.

Certainly it can be expected that this process yields useful insights, but there is no way to assess the quality of different information sources or indeed, to replicate the results. Thus, it is extremely difficult to work out why the Netherlands (which was awarded the highest ranking overall) is so much better than the United Kingdom (which received the third-highest ranking overall). It is equally difficult to determine exactly how the Governments of Saudi Arabia, Lao PDR and China are performing better than the Governments of Singapore, South Korea and Timor-Leste when it comes to responding to modern slavery. They may well be, but we have to take the Index’s word for it. The United States received the highest absolute score but was relegated to second place on the basis of a disqualifying factor that victims have been arrested in and deported from that country. It is difficult to resist speculating that this peculiar and under-explained recalibration of the Index’s own calculations sought to avoid the politically unpalatable result of the United States – widely criticized for its self-appointed role as global anti-trafficking sheriff - being named best-performing country in the world on this issue.

A complicity of silence?

The lack of critical engagement with the Global Slavery Index, with only very limited exceptions, is troubling. But it’s also understandable. The Walk Free Foundation, the organization that produces the GSI, and its many offshoots have woven a dense web of private philanthropy that few organizations or individuals working in the ever-ballooning anti-slavery sector, with the striking exception of the Vatican, can afford to ignore or offend. This has rendered the GSI virtually immune to criticism. Such deference is bad for the Index and for the anti-slavery movement as a whole, trapping us all in an echo chamber of like-minded and right-thinking souls who provide each other little incentive or encouragement to really interrogate how we are thinking and working.

And then there is the accusation, devastating in its chilling effect, that attacking good works is ungenerous and unsporting. Or, as the argument has often been put to me: “we are doing our best: why are you trying to undermine us?” In public, GSI founder and funder Forrest has dismissed his critics as mere “academics” and, in his preface to the 2016 Index, implies that those who point out the imperfections of the Index without offering alternatives are obstructing the “emancipation journey”. Such sentiments are not uncommon and, in the field of anti-trafficking and modern slavery, have sometimes been channeled in ways that do damage to trust and credibility, even of organisations and individuals who are clearly acting for good purposes and in good faith. A major Australian child protection charity exposed for recruiting a Cambodian child to impersonate a miserable, mud-smeared sex worker for a fund-raising campaign reacted swiftly to charges of exploitation, accusing its critics of misunderstanding what was needed to “bring the horror home”. A US-based organization that stages high-profile, ethically compromised ‘rescue’ operations in impoverished countries responded with anger and bewilderment (and eventually the threat of a defamation lawsuit against the present author) when its actions were publicly challenged. This is all wrong. Good intentions matter but they are not enough, especially in an area where the capacity to do harm – to make bad situations even worse – is demonstrably real. There are many examples, and they include my own abundant, first-hand experience of how well-meaning pressure on dysfunctional criminal justice systems to prosecute traffickers has led directly to unfair trials and other serious miscarriages of justice.

Of course, constructive criticism implies contemplation of alternatives. In the present case the search for something better should start with recognizing that quantifying and measuring every aspect of the ‘modern slavery’ problem is, in the end, just one of many possible approaches to improving our understanding of the problem and our capacity to influence positive change. Done well, it will undoubtedly make a useful contribution. But why not openly admit the limitations of country-level, state-focused assessment of a phenomenon that is essentially transnational in its operation and impact? Why not redirect the massive resources currently devoted to obsessive quantification of slavery into forensic examination of the industries and sectors that we already know are nurturing exploitation? Such a deep and strategically targeted focus could help us to understand, better than any prevalence or vulnerability assessment, the structures that support exploitation; the myriad government, corporate and other interests that reap the benefit of exploitation; and the possible pathways and targets for intervention.

There are tentative but perceptible signs that we are on the verge of a transformative social revolution - a fundamental, worldwide shift in our common understanding of what one human being owes to every other. In this new moral and political universe, the idea and practice of human exploitation for private profit will be unthinkable. Andrew Forrest, Roosevelt’s “man in the arena”, deserves credit for daring greatly and striving valiantly in the cause of this revolution. But none of us working in this space is safe from the risk of mistake, overreach and hubris. It follows that no aspect of our work should be considered off-limits to scrutiny, criticism and debate. The revolution deserves nothing less.