‘Let’s not get hung up on the numbers’
Listening to Radio 4’s The Moral Maze http://www.bbc.co.uk/programmes/b01l0kcc last night (on the government’s ‘troubled families’ initiative) brought home to me a paradoxical feature of contemporary public debate. Policy issues of all kinds are routinely framed in terms of statistics, often selected and publicised by the government or other interested parties. But when those statistics are challenged – as misleading, or irrelevant, or methodologically weak – a common response is to say, as several panellists said last night, ‘let’s not get hung up on the figures’. Indeed, one of the panellists responded to Ruth Levitas’s forensic demolition of the government’s statistics by saying ‘This is a moral discussion’, as if querying the numbers were some sort of diversionary tactic.
You can’t have it both ways. If the numbers are irrelevant to the issues being discussed, then they shouldn’t be in play at all: if they are relevant, they merit the same level of critical scrutiny as the moral arguments being advanced. To say ‘let’s not get hung up on the numbers’ so as to get down to the real issues is indefensible when those issues are defined in terms of what is claimed to be quantitative evidence. And the evidence offered by government, as Ruth and others have shown, is junk. (Links to articles by Ruth and Jonathan Portes here: http://lartsocial.org/Pickles )
I don’t want to get into the merits of the Troubled Families Initiative or of family intervention more generally. (For that, I’d recommend http://childrennortheast.blogspot.co.uk/2012/07/more-trouble-with-troubl... and http://blogs.spectator.co.uk/coffeehouse/2012/07/troubled-families-polic... which are broadly sympathetic to the policy if not its presentation.) I do want to offer five general arguments as to why numbers, and criticism of numbers, are often central rather than peripheral to ‘moral’ debates on public policy, using ‘Troubled Families’ as an example. The arguments turn respectively on the moral aspects of the production of statistics, on the undesirability of policy debates being conducted in terms of thought experiments, on the challenge to intuitive responses provided by quantified evidence, on the competence and trustworthiness of powerful institutions, and on the use of statistics to frame, rather than inform, debate.
(To recap: in this case, controversy surrounds a figure of 120,000, which the government wrongly claims is the number of families presenting a combination of behavioural problems (truanting, criminality, anti-social behaviour) requiring multi-agency intervention. Newspapers have gone even further, associating the 120,000 figure with extremes of family dysfunction (child abuse, drug and alcohol addiction etc.), drawing on a report by the director of the Troubled Families Unit based on interviews with a mere 16 families.)
People who produce statistics for government, the ONS and academic institutions work to a moral code. To adopt Alasdair MacIntyre’s http://en.wikipedia.org/wiki/Alasdair_MacIntyre phraseology, the production of statistics is a practice with ‘internal goods’ – ethical standards which are specific to that discipline and inherent to its objectives. Analysts endeavour to ensure that the data they are in charge of is, among other things, accurate, statistically significant and free of ambiguity: they would not be doing their jobs otherwise. (The force of this professional morality will be well known to anyone who has ever asked a statistician to push analysis further than the data will bear, as many who work for government will have done at some point.) The credibility of statistical evidence is to a great extent dependent on the code to which analysts operate. Abuse of that evidence by others is therefore parasitical on the moral standards observed by professional analysts. Non-statisticians are not perhaps obliged to follow the statisticians’ code (even were they able to), but they are surely not entitled to exploit the authority conferred by that code to blow smoke in the public’s eyes by using data in a misleading fashion. (In fact, government does have obligations in this area, through National Statistics standards and the Civil Service Code.)
The question about the limits to state intervention in family life is not just a hypothetical thought experiment to test out different moral theories. At least as presented on last night’s programme, it is a real-world question about the sort of policies the UK government might adopt, including some coercive policies which have been advocated, it would seem, by the head of the Troubled Families Unit. In issues of public policy, numbers are often of central importance because the public and decision-makers need a sense of how big the problem being addressed is in order to form a judgment on whether the policy response is justified: for example, whether it is proportionate, or whether the associated risks outweigh the benefits. These are moral questions, but they are the sort of moral question which can only be addressed on the basis of reliable evidence. Are the 16 or so families described in Louise Casey’s report representative of a larger group of extremely dysfunctional families, and if so, how large is that group? To say that it makes no moral difference to the merits of a controversial policy proposal whether that number is 16 or 1,000 or 120,000 or 5 million would surely be extravagant.
Numbers act as a check on intuitive responses and unacknowledged assumptions. Bringing quantified evidence to bear forces us to stand back from generalisations (in this case, about large numbers of lower-income families) and to ask questions that might not otherwise have been raised. In this case, it should, among other things, alert us to the dangers of social stereotyping. To take an example cited by Matt Barnes http://natcenblog.blogspot.co.uk/2011/08/is-helping-troubled-families-an... only 10% of the 11-15-year-old children in the multiply-deprived families making up the 120,000 figure have been in trouble with the police. Put another way, 90% haven’t. The sort of simplistic equation of multiple deprivation with problematic behaviours advanced by some politicians and journalists is wrong. We thus learn two morally relevant things: first, an empirical relationship between two characteristics (multiple deprivation and being in trouble with the police) which should make us query any claim to be able to predict problem behaviours using evidence on multiple deprivation – claims which could in turn influence policy decisions regarding coercive intervention; and second, the extent to which some people who claim to speak with authority on these issues rest on erroneous unacknowledged assumptions.
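The base-rate point can be made concrete with a back-of-the-envelope calculation. The 10% figure below is the one Barnes cites; the cohort size is an invented round number, purely for illustration – a sketch of the logic, not a claim about the actual data:

```python
# Sketch: how weak 'multiple deprivation' is as a proxy for trouble
# with the police, using the 10% proportion cited by Matt Barnes.
# The cohort size is hypothetical and chosen only for easy arithmetic.

deprived_children = 100_000   # assumed cohort size (illustrative, not real data)
rate_in_trouble = 0.10        # share who have been in trouble with the police

in_trouble = round(deprived_children * rate_in_trouble)
not_in_trouble = deprived_children - in_trouble

# A policy that treated deprivation as a predictor of problem behaviour
# would wrongly flag the 90% who have never been in trouble.
share_wrongly_flagged = not_in_trouble / deprived_children

print(f"In trouble with the police: {in_trouble}")        # 10000
print(f"Never in trouble: {not_in_trouble}")              # 90000
print(f"Share wrongly flagged: {share_wrongly_flagged:.0%}")  # 90%
```

Nine out of ten children would be misclassified by such a proxy, which is exactly why the equation of deprivation with problem behaviour cannot survive contact with the numbers.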
The credibility of the government in this policy area is a morally relevant consideration, and the way in which this number was abused is relevant to the government’s credibility on this issue. I take credibility here as a matter of ability/competence on the one hand and trustworthiness/honesty on the other (analogous to the ‘accuracy’ and ‘sincerity’ of the late Bernard Williams’s definition of ‘truthfulness’ http://press.princeton.edu/titles/7328.html ). One of the odder aspects of the ideological politics of this issue is that it is the political right which seems to assume that government is not only benevolent but omniscient and omnicompetent. But we could hardly look for a better example of the limitations to government’s knowledge and competence in this area than the way in which it has pretended to be able to ‘identify’ families presenting combinations of highly specific problems in each local authority area in Britain, using survey data which does not concern those problems and which would not enable any such identification even if it did. If ministers and civil servants think they have succeeded in this, they are incompetent; if they do not, they are disingenuous. Either way, credibility is undermined. I am not saying that this is a morally decisive consideration, but does anyone really think it is morally irrelevant?
In public debate, statistics are often advanced as framing devices rather than for any genuine insight they offer on the issues. (If government were to say ‘some families have extreme problems’ this would not be a news story: putting a figure on it is essential to getting coverage, which is one of the reasons so many manufactured statistics are in circulation.) Government has a privileged ability to frame debate through the attention that its assertions will command in the media. Bringing genuine evidence to bear is a way of addressing a power asymmetry which threatens to distort public understanding. If a government wants to impose a frame, that is reason enough to want to push back against it: all the more if that frame depends on the misleading use of statistics. Again, the criticism of what claims to be quantified evidence turns out to be morally central, not peripheral.
It’s not my aim here to challenge the positive/normative distinction, or to urge commentators on issues of public policy to learn some basic statistics (although they should: it really isn’t that difficult, and a little goes a long way). But trying to debate the merits of policy proposals in abstraction from the evidence is a waste of breath: when someone says ‘Let’s not get hung up on the numbers’, it is usually a way of saying ‘Let nothing get in the way of my intuitive response.’ They’re perfectly entitled to say this, of course, but if they do they can hardly expect to be taken seriously. Numbers are morally central to any intelligent discussion of the issues raised by the ‘Troubled Families’ initiative, and of many other areas of government policy. We expect dubious assertions about morality to be shot down by opponents in programmes like The Moral Maze: we should expect nothing less when it comes to dubious (or, in this case, false) statistical claims.