Radical Statistics


Comments on the draft Code of Practice for National Statistics

Ray Thomas

What are National Statistics?

A notable point made in the draft Code of Practice blurs the distinction between National Statistics and other official statistics. In his Introduction Len Cook points out that some will find this distinction 'hard to determine', and argues that this means that 'building trust in National Statistics will mean building trust in all other official statistics, and vice versa' (p 5). Len Cook goes on to say that 'some of the standards applicable to National Statistics should be met by all official statistics. At minimum .. guaranteeing confidentiality, using objective methodology, .. transparency in release practices, meeting quality and value for money standards ..' (foot of p 5).

This blurring takes us back to the Green Paper of 1998 that tentatively put forward the idea of National Statistics as the 'production of statistics intended for public use' (para 4.3). It would be difficult to argue that the government should publish statistics not intended for public use, and equally difficult to make any general case for statistics to be produced but not published. But somehow this tentative idea of National Statistics got enshrined in the 200 page National Statistics Work Programme published this year.

The distinction between National Statistics and other official statistics seems to be limited to matters of authority within the machinery of government. The RPI is not included in National Statistics because the government itself claims authority over the RPI. On the other side, some statistics fall outside the National label because the departments producing them simply do not concede that the ONS has authority over these statistics.

It seems doubtful that the Statistics Commission, for example, would decline to take up a matter simply because the relevant statistics did not formally belong to 'National Statistics'. The draft Code gives support to such a position.

The Scope and Coverage of National / Official Statistics

The scope of National Statistics is summarised in Principle 1 of the Draft Code of Practice (page 8). Principle 1 states that: 'National Statistics are statistics that inform decisions of significance in government, businesses and the wider community. Within available resources, National Statistics will meet the needs and aspirations of users'.

This expression of the scope of National Statistics represents a narrowing of previous articulations. The Green Paper of 1998 said that National Statistics were those intended for public use 'describing the state of the nation and measuring the performance of government' (Cm3882, Para 4.3). Tony Blair was more specific in the White Paper of 2000, saying that official statistics should 'allow people to judge whether the Government is delivering on its promises' (Building Trust in Statistics, Cm4412, p iv). The current ONS website goes further than any of these and claims that National Statistics provide 'an up-to-date, comprehensive and meaningful description of the UK's economy and society'.

These four articulations are different enough from each other to suggest that the Government, the ONS and the Government Statistical Service all lack a clear idea of what the coverage of National Statistics, or of official statistics in general, should be.

A former editor once described the advertised claims of comprehensiveness for the 'Social Trends' annual as 'legitimate puffery' (puffery = exaggerated promotional praise). But puffery cannot be expected to give guidance to members of the Government Statistical Service as to what their statistical priorities should be. Ruth Levitas in her book 'Interpreting Official Statistics' concluded that official statistics 'embody the interests of the state rather than those of citizens'. Is not Levitas's generalisation closer to reality than the 'comprehensive' claim?

Is it not reasonable to expect that a Code of Practice should expand on Principle 1 quoted in the first para above? Should not the Code acknowledge the extent to which official statistics measure governmental activity, and to which official statistics should focus on the measurement of governmental activity? Should not the Code discuss the principles that underlie the nature and priorities in the production of statistics that are not measures of governmental activity?

Many individuals and groups have an interest in government statistics giving a fair and balanced picture of conditions in society and measuring the performance of government in dealing with social and economic problems. This interest goes beyond the needs of 'users' of government statistics. National Statistics should have a scope and coverage that aims to meet these wider societal needs. The ONS and other components of the GSS should have responsibility for drawing the attention of government to trends in society that should be taken account of in formulating government policies, as well as to those that are the result of government policies.

The scope of official statistics will doubtless be a matter of endless debate. But the 'within the available resources' qualification would make that debate lopsided by disguising a major feature of statistics production. If the GSS had to provide all statistics about society there would not of course be enough money. But the GSS does not provide all statistics, and should not see itself in that kind of role.

It seems easy for the GSS to get involved in work that goes well beyond the traditional concerns of government. Recent examples include the construction of satellite accounts, the measurement of social capital, and the conduct of attitude surveys. There is no intention here to question the value or quality of this kind of work. But it is not clear that these kinds of work belong to the ONS or the GSS. They might well fall within the responsibilities of academics or independent survey organisations.

Just over twenty years ago Sir Derek Rayner complained of a tendency for the GSS to regard itself as the 'universal provider of statistics'. The Draft Code displays the same tendency in seeing the GSS as the only available resource for statistics production. There is no mention of the production of statistics outside the GSS. If the GSS sees itself as the only source of statistics it condemns itself to direct financial dependence on government. Should not the Draft Code include a statement that would indicate the nature of any self-imposed limits by the GSS on the unachievable ambition of giving a 'comprehensive' description of society?

Non-Government Producers of Statistics

The draft Code of Practice omits any mention of producers of statistics other than the Government Statistical Service. This omission is important and significant. The omission is also curious in the light of the emphasis given by Tony Blair in the White Paper to the function of statistics in assessing the performance of the Government.

All politicians, and many others, use the 'opinion polls' to help assess the performance of government. Some journalists appear to think that ministers use nothing else but the opinion polls to assess the performance of the Government of the day. But the 'opinion polls' are not produced by the GSS. We all, surely rightly, assume that such politically sensitive statistics should be produced quite independently of government.

The 'opinion polls' are not an isolated example of statistics produced independently of government. Virtually all statistics for the media industries are produced independently of government and the Government Statistical Service. If the government wants to know the audience size for a party political broadcast, for example, they depend upon statistics produced by BARB (Broadcasters Audience Research Board) that has no connection with the government machine. Many other organisations produce and publish statistics and many organisations produce statistics that are not published.

Local government is a major producer of statistics. The academic community is a major producer of statistics, as well as being archivist for major governmental datasets. The household panel survey conducted at Essex University has, with justification, been described by Gordon Marshall, Chief Executive of the ESRC, as the equivalent of the Hubble telescope for the social world. Philanthropic bodies, notably the Rowntree Foundation in recent years, are prolific producers of statistics. Many other organisations make significant contributions to the production and dissemination of statistics by publishing compilations derived mainly from official statistics. The Code of Practice is intended to support the establishment of a statistical service independent of government. So why is there no mention in the Code of this range of independent producers that already exists?

The Code of Practice should acknowledge the contribution to the development of statistics of organisations outside the GSS. The independence of the Government Statistical Service from the Government of the day would be reinforced if the Code required the Government Statistical Service to establish and maintain collegiate relationships with other organisations concerned with statistics. These other organisations include producers, and potential producers, of statistics in the commercial sector (including, for example, the whole of the media industry), local government, professional organisations (such as the Royal Statistical Society), philanthropic bodies (such as the Rowntree Foundation), and the academic community (particularly the ESRC and disciplines associated with the social sciences). Organisations of these kinds make significant contributions to the production and dissemination of statistics - including compilations of official statistics and statistics derived from official statistics.

Establishing collegiate relationships with organisations where these relationships do not already exist, and maintaining such relationships, should not impose significant costs on government.

Such collegiate relationships could be expected to contribute to the range and variety of publicly available statistics without giving the false impression that the government is the universal provider of statistics. They should also involve the encouragement of the exchange of information and the identification of complementary roles in the production and dissemination of statistics.

If there is no clear definition of the coverage of official statistics, as pointed out in the preceding Section, how does the GSS take into account this range of independent producers in determining its statistical priorities?

Who has Control Over the Production of Statistics?

One of the similarities between the GSS and many other producers of statistics is in their organisational structure. Members of the GSS describe the structure of the GSS as 'decentralised'. What is meant is that individual Departments have control over the statistics produced for the activities covered by the Department. The organisational structure might be more accurately described as departmental.

Such a decentralised or departmental structure is characteristic of the production of statistics for many other areas of activity. The BARB organisation that produces statistics of TV audiences, for example, is set up and controlled by the BBC and the independent TV companies. The production of audience statistics is centralised in operational terms. The contract is given to an outside market research organisation. But control over who gets the contract and what is produced lies with the BBC and the independent TV companies that finance BARB.

This pattern is characteristic of the production of statistics outside government. Local authorities exchange information to produce volumes of statistics, like those published by CIPFA (Chartered Institute of Public Finance and Accountancy), some of whose series have been published for more than half a century. One of the main functions of many trade associations is to produce statistics for the industry as a whole. The SMMT (Society of Motor Manufacturers and Traders) publishes the annual UK Motor Industry Facts. The Brewers and Licensed Retailers Association produces an annual Statistical Handbook including more than seventy tables. Many smaller trade associations are believed to exist largely in order to produce statistics for the industry as a whole. But such statistics are often not published beyond the membership of the trade association.

In all of these cases there is centralisation in the production of statistics but certainly not centralisation of control over what is produced. What is produced is under the control of the supporting organisations, not the producer of the statistics.

This matter of decentralisation is important because most of the changes made in pursuit of an independent statistical service have been towards centralised control. What the Code of Practice states as the responsibilities of the National Statistician has to be read to be believed. There are seventeen clauses that give the National Statistician responsibility for everything that happens in the GSS, and for what ought to happen in the GSS. It would not be out of character for one of these clauses to specify that the National Statistician is responsible for ensuring that the loos in GSS offices are kept clean!

Why does the Code go against the decentralised control pattern evident in the production of statistics outside and independently of the GSS? Why is the centralisation of control in a National Statistician seen as the way for the GSS to achieve independence from Government?

The Status of Members of the GSS: Civil Servants or Public Servants?

One of the differences between the Government Statistical Service and other producers of statistics is in the status of their staff. Members of the GSS are all civil servants and so subject to the Official Secrets Act, which makes it difficult for them to say anything to anyone outside unless it is prescribed in their job description and has been approved by their Department. The staff of other organisations producing statistics are employees with varied job descriptions.

The roles and responsibilities of the National Statistician are expressed in seventeen specific clauses that relate to the UK, and thirty-seven clauses that relate to the devolved administrations of Scotland, Wales, and Northern Ireland (pp 17-22). There are in addition general responsibilities, such as that the National Statistician will provide 'a balanced and comprehensive view of economic and social progress and conditions over time ..' (p 12), has 'authority for all statistical professional matters in government', and 'can provide assurance to the public and Parliament on the integrity of official statistics'.

These responsibilities give the National Statistician a role that goes well beyond that of other civil servants - while remaining subject to inconveniences like the Official Secrets Act. It seems fair to say that the role of the National Statistician has been defined as that of a public servant responsible for applying professional standards to give a statistical picture of society and the economy.

This public servant function is delineated as far as the National Statistician is concerned. And it could be argued that the public servant role is also delineated in the descriptions of the responsibilities of the Chief Statisticians of the Devolved Administrations and those of the Heads of Statistics in UK Government Departments. But the public servant function does not appear to get any acknowledgement further down the hierarchy of the GSS.

It is worth recalling aspects of the statistical scandal about unemployment statistics that led to the election pledge for an independent statistical service. Many believed that the unemployment statistics were being fiddled in the 1980s. Junior statisticians, speaking at RSS meetings at that time, pointed out that the statistics measured the numbers entitled to receive unemployment benefit and that the decision as to who was entitled to benefit was a governmental responsibility not under the control of statisticians. But this 'explanation' was not being given by the GSS at that time and it appears that such junior statisticians would have been sacked had they given publicity to such reasoned explanations.

The underlying cause of the belief that the unemployment statistics were being fiddled did not become apparent until the mid-1990s. That cause was the drift of a hundred thousand or so a year from unemployment benefit to sickness benefits (a drift whose results are a great embarrassment to the present Government). Senior statisticians were apparently unaware of this drift. There must have been many people lower in the hierarchy in the DHSS and DfEE who knew what was happening, but without some kind of public servant function being enshrined in their job descriptions, there was no reason for them to put their heads above the parapet and speak out.

The widespread use of performance indicators will give rise to many pressures on statisticians far from the National Statistician to fiddle the figures in the interests of the local organisation. And under this draft Code they will not be obliged to do anything to prevent this happening. The ordinary member of the GSS is not obliged 'to ensure that all outputs are produced in accordance with the standards set out in the Code'. That kind of ensuring belongs to the National Statistician, not to individual members of the GSS.

How can independence be assured unless the public servant role is made explicit and accorded to all government statisticians? Perhaps the question that ought to be asked is whether the job descriptions of government statisticians will be changed to accord with that of the National Statistician.

Debate

In an earlier email discussion of this point David Byrne of Durham University commented:

...can I record my amazed surprise that 'senior statisticians' were unaware of the 'drift' from unemployment to invalidity benefit in the 80s and early 90s. Everybody at the sharp end - Employment Service Staff, GPs (who had a certification role), welfare rights advisors and even people who were just local councillors like me interpreted this as an active if unstated policy and it was a common discussion point on the number 53 bus in Gateshead. The social scientific point is surely that one crucial element in interpreting any count is a detailed knowledge of what happens at the point of classifying and recording.

Dave Byrne's amazed surprise is fully justified. But what was common knowledge in the North East, and in some other regions, was not acknowledged in other parts of the country. More relevant to the Draft Code of Practice, the drift was not recorded in official statistics in any meaningful way.

The justifiably famous Royal Statistical Society Report on the measurement of unemployment published in 1995 includes a Table listing 'Changes in unemployment counting methods since 1979'. The table includes 30 different items most of which give estimates of the effect of the different ways in which entitlement to unemployment benefit was tightened. But there is no mention of the drift to sickness benefit.

The RSS Report was made with 'the full co-operation' of what was then the Employment Department Statistical Services Division. The then Director of ED statistics led the discussion with a 2000-word contribution that does not mention the drift to sickness benefit. So it appears that government statisticians in the North East (and some other areas) either did not tell their colleagues in the HQ what was happening in their region, or what they said was not listened to, or the ED was deliberately suppressing what should have been public knowledge.

The important point is that it is fanciful to expect that such problems can be overcome by giving responsibility just to the National Statistician, and Chief Statisticians of government departments. As Dave Byrne says 'The social scientific point is .. that a crucial element in interpreting any count is detailed knowledge of what happens at the point of classifying and recording.' That knowledge often only belongs to the statistician in the locality who is in fairly direct contact with what is happening at the point of recording. It does not belong to heads of statistics.

What is ‘Fit For Purpose’?

The Code of Practice baldly states that National Statistics 'will be fit for purpose'. But the Code says nothing about how the purpose of particular statistics might be defined, nor who might define the purposes, nor even how anyone might find out about the purpose of particular statistics. The lack of any discussion of purpose robs fitness-for-purpose of any practical meaning.

Most social scientists would assume that fitness for purpose ultimately has to be decided by the user. The range of possible purposes has no practical limit. Fitness for a particular purpose depends upon whether categorisations and data collection processes are validly related to the purpose. Fitness also depends on a variety of other factors such as accuracy and reliability, sampling error, possible bias, etc. Ultimately only the user, drawing on knowledge of how the statistics have been produced, can determine whether the statistics are fit for the purpose the user intends.

Official statistics are presumably produced with a range of purposes in mind. But these purposes are usually taken for granted rather than made explicit. The National Statistics Work Plan, a 200 page volume aiming to cover the period up to 2004, discusses developments, scope, users' needs, key outputs, etc., etc., but there does not appear to be any significant discussion of purposes.

How can fitness for purpose be assessed if there is no specification of purpose? Should not the Code of Practice require that the producers of official statistics specify the range of intended purposes for the statistics produced? Should not the Code require that producers of official statistics also specify any necessary limitations and qualifications that should be made in using the statistics for the intended purposes?

The draft Code links the concept of fitness to a promised protocol on quality. But this is at best irrelevant where there is no specification of purpose.

The relevant protocol is that promised for 'Customer Services'. The Customer Services Protocol should surely require the ONS/GSS to specify the purposes that a set of statistics is intended to serve. The Protocol should indicate the nature of invalid purposes, and give full metadata details designed to enable users to gauge the extent to which the statistics would be fit for the particular purposes the user has in mind.

Administrative and Survey Statistics

The Draft Code includes clauses (Section 10, page 17) that point to the importance of the use of administrative statistics in reducing the burden on respondents.

These clauses should be welcomed. There is considerable scope for development of statistics from administrative records. A major example is the returns made monthly by employers relating to PAYE and National Insurance Contributions, which could substitute for surveys of employers that are costly to government and, in terms of form-filling, to employers. An employment series based upon these PAYE/NIC returns could be more reliable and meaningful than the existing survey-based series for employment, and (with some checking of individuals' addresses) could yield more reliable and meaningful employment statistics for local areas.

The PAYE/NIC returns are made to the Inland Revenue, a part of the Treasury. The Treasury controls the Office for National Statistics. But curiously the Inland Revenue has long jealously and steadfastly defended the PAYE and NIC returns against any use for the production of statistics by the ONS - or its predecessor, the Central Statistical Office. The first signs of any incursion have come only very recently, when it was reported that the PAYE returns would be used to investigate the matter of equal pay for women and men.

On the other side, the major expansion of the activities of the GSS in recent decades has been in the conduct of national surveys. There are now major social surveys conducted for all major areas of economic and social activity in the UK - and most of them are conducted by the GSS. They include the Family Expenditure Survey (1953-4), the National Travel Survey (1965), the General Household Survey (1971), the Crime Survey (1981), the Labour Force Survey (1984) and the Family Resources Survey (1993) (now amalgamated with the National Food Survey). These surveys have come to be seen as an indispensable part of the statistical scene, complementing existing administrative series.

In the case of the LFS and the Crime Survey, at least, the statistics provided are often seen as superseding or replacing administrative statistics (e.g. the workforce employment series, the claimant unemployment series, and statistics of recorded crime). This tendency for survey statistics to be seen as a substitute for administrative statistics points to the need for the GSS to achieve a balance in its programme between statistics derived from administrative records and those derived from surveys.

The Code of Practice could well require members of the ONS and GSS to seek to satisfy statistical needs as far as practicable on the basis of administrative records. The production of statistics from administrative records can generally be expected to be substantially less costly than production based on surveys involving interviews. And it is difficult to imagine ways in which organisations other than the ONS and GSS could produce statistics based on the administrative records of government.

This limitation does not apply to social surveys. The Code of Practice could well make the point that there are a number of organisations in Britain with the capability of conducting large scale social surveys (and of course some of these organisations conduct surveys on behalf of the ONS and the GSS).

The Relationship of Administrative and Survey Data

The draft Code does not discuss the relationship between administrative and survey statistics. But this could be covered by the Protocol promised for Statistical Integration. A Protocol for Statistical Integration could well state that surveys should be designed in ways that complement existing administrative statistics. There are a number of important examples where survey statistics are not consistent with related administrative statistics.

The Labour Force Survey, for example, fails to provide statistics on Claimant unemployment that are consistent with the Count of Claimants, and so also fails to provide statistics on non-Claimant unemployment. The Family Resources Survey undercounts the number of recipients of a number of government benefits. It is difficult to take claims of quality for these surveys seriously when they fail to provide reliable statistics on the very populations that are their main subject matter, and fail to provide reliable evidence on the individuals and households who are the principal recipients of public expenditure. The ONS keeps very quiet about these quality defects.

It is recognised that there may be intractable problems in achieving complete consistency between surveys such as the LFS and FRS and related administrative statistics. But the Code of Practice should make clear that National Statistics should aim to achieve consistency. Unless a problem is recognised and acknowledged it will never be solved. The Protocol for Statistical Integration should reinforce the aim of achieving consistency and indicate ways in which the problems should be tackled.

Statistics Being Used Against Respondents

The draft Code of Practice does not use the term 'ethical issues'. But there are at least two areas where it might be considered that ethical issues are involved. One is the matter of statistics not being used against respondents; the other is the matter of confidentiality.

Len Cook's Introduction to the draft Code takes up the first issue. Cook argues that to gain the confidence of respondents a key factor is

to ensure that statistical information is not used against respondents, and to ensure them that this will not happen ... the quality of official statistics depends upon the suppliers of data perceiving that ... they do not risk being harmed commercially or in any other way by supplying the data.

Cook's argument may be related to the question of 'informed consent' that is generally upheld as a necessary condition for ethical social science research. Such issues are rarely discussed and are not discussed elsewhere in the Draft Code of Practice.

An example of where these kinds of issues might be seen as important is in the use of household surveys to measure employment and unemployment. The questions used derive from ILO criteria. Respondents are classified, for example, according to whether they work for an hour or more a week. Someone working two hours a week might consider himself or herself as unemployed. But they are counted as in employment. This procedure can hardly be regarded as involving informed consent because respondents are not asked if they are in employment or unemployment and are not told how they have been classified. Is this a case where 'statistical information is used against respondents?'
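The classification rule at issue can be sketched in a few lines of code. This is a simplified illustration only - the actual Labour Force Survey derivation uses many more questions, and the function name and argument names here are invented for exposition - but it makes the point that the published status is assigned by rule, not by the respondent's own description:

```python
# Simplified sketch of the ILO classification rule discussed above.
# Illustrative only: the real LFS derivation uses many more questions.

def ilo_status(hours_worked_last_week, sought_work, available_to_start):
    """Assign a (simplified) ILO labour-market status to a respondent."""
    if hours_worked_last_week >= 1:
        return "employed"  # even one hour of paid work in the week counts
    if sought_work and available_to_start:
        return "unemployed"
    return "economically inactive"

# A respondent working two hours a week, who is seeking more work and
# considers himself or herself unemployed, is nonetheless classified:
print(ilo_status(2, sought_work=True, available_to_start=True))  # employed
```

The respondent is never asked 'are you employed?' and is not told how the answers have been combined, which is the basis of the informed-consent concern raised above.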

This example is fairly important. The ILO criteria are used in the Labour Force Survey, the Census of Population, and are being extended under the harmonisation programme to all government surveys.

There is no intention here to challenge the production of statistics in accordance with the ILO criteria for defining unemployment. Such statistics are necessary for the specific purpose of international comparisons. What is being suggested is that the production of such statistics should be subject to quality checks with regard to their use for other purposes. Why should the ILO's way of measuring unemployment dominate the way social investigations are carried out in the UK?

One of the quality checks that should be made of surveys is whether the statistics are meaningful to respondents. If they are not, is there not a violation of the principle of informed consent? And is it not a case where the statistics are being used against respondents?

Confidentiality of Records

The Code of Practice is very strong on the confidentiality of responses to surveys and census data. Such data will only be used for 'justifiable statistical and research purposes ... any usage will not harm the individuals involved ..'. Provisions requiring staff to sign confidentiality agreements that apply without time limit reinforce the point.
Then follows: 'Where data are obtained for National Statistics as a by-product of an agency's administrative functions, all obligations to preserve the confidentiality of the administrative information will apply equally to the statistical data'. (Principle 5, p 14).

This is not so easy to understand. The provisions that apply to individual records of survey responses seem straightforward. The detail of individual survey responses should be sacrosanct for, at least, the very practical reason that, if the detail is not sacrosanct, the response rate and the quality of responses to individual questions are at risk.

But it would be counter-productive to assure people that admin records are equally sacrosanct. Administrators access admin records. So do fraud investigators. And it is widely known that such access is given. What exactly is the point of assuring people that statisticians are bound to respect confidentiality in ways that do not apply to administrators and fraud investigators?

The danger of such assurances is that they detract from the credibility of the assurances of confidentiality of surveys responses.

The wording of the Code is also confusing. Why is the assurance given that confidentiality will apply to the statistical data as well as to the administrative information? In general, if the 'administrative information' is confidential, then there is surely little possibility that statistics would reveal this 'administrative information'?

The footnotes to this section of the Code do not clarify matters. I wonder if this confused section reflects the situation that was revealed at the consultation meeting on social and welfare statistics in November.

At that meeting it was stated that the ONS do sometimes use admin records as a sampling frame for surveys. But when this is done, letters are first sent to those sampled to ask if they are willing to be interviewed. This practice is, of course, a violation of the basic principle of sampling that each member of the population should have an equal chance of being selected, and so can be expected to lead to bias in survey results. It appears from footnote 2 that the wording of the Code of Practice might be an attempt to rationalise this practice.
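The bias at issue here can be illustrated with a toy simulation. All the numbers below are invented for illustration and are not drawn from any actual ONS survey: the trait being measured (say, benefit receipt) is assumed to be correlated with willingness to agree in advance to an interview, so a sample restricted to those who reply 'yes' to a letter underestimates the true rate.

```python
import random

random.seed(42)

# Hypothetical population drawn from an administrative register.
# Assumed (invented) parameters: 30% of the population is 'on benefit',
# and claimants are less willing to consent in advance to an interview.
N = 100_000
population = []
for _ in range(N):
    on_benefit = random.random() < 0.30          # assumed true rate: 30%
    consents = random.random() < (0.50 if on_benefit else 0.80)
    population.append((on_benefit, consents))

# A simple random sample interviewed regardless of advance consent
srs = random.sample(population, 2_000)
srs_rate = sum(b for b, _ in srs) / len(srs)

# A sample restricted to those who said 'yes' to an advance letter
consenters = [p for p in population if p[1]]
opt_in = random.sample(consenters, 2_000)
opt_in_rate = sum(b for b, _ in opt_in) / len(opt_in)

print(f"assumed true rate : 0.300")
print(f"SRS estimate      : {srs_rate:.3f}")
print(f"opt-in estimate   : {opt_in_rate:.3f}")  # biased downwards
```

Under these assumed consent rates the opt-in sample's expected benefit rate falls to about 0.21 (0.15/0.71), well below the true 0.30, even though the sample is large and randomly drawn from the consenters.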

The practice of sending a letter in advance appears to pose a conflict between confidentiality and quality issues. The Code of Practice should surely clarify such matters?

Consultation with Ministers

The Protocol on Consultation with Ministers is a real advance, but how relevant this is to the establishment of an independent statistical service is unclear.

The problem of conflict between statisticians and government is a relatively well explored area. The leading paper is 'Politics and Statistics' by William Seltzer, published by the United Nations in 1994. It seems fair to say that the main theme of Seltzer's paper is discussion of conflicts between statistical offices and their governments. Seltzer sees such conflicts as inevitable if only occasional, but does not suggest how such conflicts should be resolved.

It can be expected that Seltzer would approve of the Protocol for Consultations with Ministers in the Code of Practice, because the Protocol does address his concerns. Without going into detail it can be said that the Protocol requires that, where conflicts cannot be resolved, there will be public statements and involvement of the Statistics Commission. It seems likely that the existence of this Protocol would ensure that most conflicts are resolved by negotiation, because no government would want to be seen publicly in conflict with its statistical office.

But it has to be said that conflicts arising from the Government imposing its will against the ONS/GSS do not appear to be a major problem in the UK.

In the labour market field the problem seems to be the reverse: the ONS/GSS is reluctant to show any independence. Labour market statistics are still dominated by the ideologies of the former DfEE, where it appears to be believed that the function of statistics is to support government policies rather than provide objective evidence about them (see, for example, the DfEE memorandum to the Education and Employment Committee of April 2000).

A major problem that is now well identified is that of having more people 'on the sick' than 'on the dole'. The scale of this problem stems from a failure of the GSS to report on a trend that started in the early 1980s but was not publicly noted until the Sheffield Hallam group published 'The Real Level of Unemployment' in 1997. The fact that the DE encouraged claimants to switch from unemployment to sickness benefit for more than a decade must have been known about by members of the GSS, but, as discussed earlier, it appears that the GSS did not see it as part of their function to inform the government of the day that this was happening.

The origin of the call for an independent statistical service goes back to the 1980s when 'fiddling the unemployment statistics' became a national joke against statisticians. The joke led to abuse of statisticians in Parliament. 'If the DoE statisticians were a football team they would be banned for bringing their profession into disrepute'. The RSS report of 1995, from which this quote is taken, records that the MP perpetrator was invited to the CSO for a chat.

Even at that time it appears members of the GSS were unwilling to be seen in conflict with the government by explaining in a public way, rather than in an office chat, that entitlement to unemployment benefit was a responsibility of government not the GSS.

So most of Seltzer's discussion seems fairly irrelevant to the matter of an independent statistical service in Britain. Seltzer took it for granted that the Chief Statistician and his staff are all civil servants. He did not consider the 'public servant' role that, I have argued, has in effect been written into the job description of the National Statistician. And so he did not consider that the public servant role might be extended to all members of statistics offices.

Release Protocols

I don't want this discussion to be a monologue. So I'm jumping ahead of logical order to the Release Protocol, because this Protocol even gets members of the RSS jumping up and down with excitement (if you can imagine that!). Members of the RSS fear that the Release Protocol shows that the government is not serious in its support for an independent statistical service. So I hope that others, and particularly members of the RSS and GSS, will join in the discussion on at least this issue.

Members of this list should not, by the way, construe any of the messages I have posted as derogatory of Len Cook, the National Statistician. In my view he has done a great job in putting together this Draft Code of Practice. Does any other country in the world have such a Code? But inevitably the Draft Code largely reflects existing GSS practices and ways of thinking, and reveals the character of the GSS, which manages to be both insular and fragmented in its practices and patterns of thought.

The Protocol on release dates can be used to illustrate the point. The point of controversy is ministerial access before publication. The Protocol points out that it is customary in the UK for ministers to have early access. 'The sole purpose is for Ministers to be able to respond completely when questions arise at the time of release'. The Protocol proposes 40.5 hours early access for regularly published statistics and up to five days for special reports. The justification for the 40.5 hours and five days is not spelt out.

Surely 24 hours is enough for any Minister to be prepared to deal adequately with questions?

It is therefore difficult to disagree with the critics who say that this Protocol shows that the governmental machine is not giving the Office for National Statistics full control over publication. But I find it difficult to agree that this is a substantial matter related to the creation of an independent statistical service.

Who exactly in the government machine wants this five days' early access? Is it ministers, like John Prescott, who want five days to study a report so that they can create their own spin on the statistics? Or is it government departments who want five days to make sure that they can give the correct spin to their minister? If the latter, then it appears we are back to the decentralised/centralised battle within the GSS that has very little to do with a statistical service independent of government.

The Role Of Consultation Exercises

A remarkable feature of the Draft Code is the inclusion of a list of eleven protocols that will determine how the code is implemented. It is arguable that these protocols may be just as important as the Code itself in determining future practice. If so it could be pointed out that publication of the Draft Code represents only half a consultation rather than a proper consultation!

The Draft Code of Practice does not, for example, discuss the role that consultation might play in the production of statistics. But the Draft does promise a Protocol on consultation.

The draft Code of Practice specifies that 'The planning framework and priority setting process for National Statistics will be transparent' (p 12). And it seems appropriate that the proposals for transparency in planning and priority setting for statistics should also apply to consultation exercises on statistics. If that happened the Consultation Protocol would lay down procedures rather different from those currently followed. The procedures could well follow just three principles:

1 Consultation exercises should aim to involve a wide public. Advance notice should be given of the date of publication of a consultation document. The consultation documents should be designed to appeal to a wide range of users, potential users, and critics. The document should state the purposes served by existing statistics and what are seen as their strengths and weaknesses, and explain the kind of proposals being made and the reasons for them.

2 Consultation procedures should be made transparent in the same way. The consultation document should invite comments and suggestions on the web, and comments and suggestions made in printed form should also be published on the web. Members of the ONS should be encouraged to participate in informal but informative discussion of the comments and suggestions made.

3 The outcome of the consultation exercises should be published including the reasons for accepting, or not accepting, comments and suggestions by those who have responded to the exercise.

It cannot be said that the consultation on the Code of Practice has followed anything like these three principles. But if, as promised, 'the planning framework and priority setting process for National Statistics will be transparent', the question needs to be asked whether this promise will also apply to future consultation exercises.

Ending Departmental Problems

What are the really crucial points to be made about the Code of Practice? The core problem is almost the same as that often self-diagnosed by members of the GSS. For more than two decades members of the GSS have complained that the UK suffers from a decentralised/departmental statistical system. That diagnosis should be supported. Nearly all the problems identified in this paper stem from lack of coherence in statistical policy and lack of coordination and consistency between different parts of the GSS.

The implied solution is that the system should be centralised. This kind of solution is expressed in the idea of giving the National Statistician responsibility for everything. It parallels the 'Great Man' theory of history, and the currently popular idea that you can turn an organisation round if you appoint a new Chief Executive at a million a year. The National Statistician solution will not end departmentalisation. If nothing else is done, Len Cook will go back to NZ in two years' time with an honour hanging from his collar, but with his tail between his legs.

The fact of the matter is that the system is departmental because the job descriptions of members of the GSS are specific to their departmental responsibilities. In behaving departmentally, members of the GSS are just doing what they are paid to do.

The main component of a solution is extraordinarily simple: change the job descriptions of members of the GSS! They should all parallel that of Len Cook. Len Cook, as National Statistician, has been given a role that can be summarised as that of a public servant rather than a civil servant. All members of the GSS should have such a role. Their job descriptions should echo that of the National Statistician and should emphasise their responsibility to produce statistics fit for public use.

Such emphasis on the public responsibilities of members of the GSS can be expected to contribute to consistency, cooperation and unity within the GSS. All could be singing from the same hymn sheet. And if the GSS were united instead of being fragmented, there would be no departmental problem, and there would be little problem in achieving a statistical service that is seen as independent of government.

Can I have a million pounds please??

Ray Thomas
