Alison Wolf

October 12, 2007

King's College London, like most UK universities, has a large number of students on one-year masters programmes; and "one year" means what it says. One cohort hands in its finished dissertations at pretty much the time that the next group starts trooping in to register. Other people and institutions may be more organised, or harder-hearted, but I find myself giving a lot of intensive feedback on drafts at exactly the time I am trying to get away on holiday. I am also impressed, when I get back and mark the final products, by what a frenetic and sleepless August can achieve.

The usual argument for the importance of dissertations in masters programmes is that they make greater all-round demands than anything else most students have done. Students need to develop coherent and answerable questions, grapple with methodology, pull together the literature, and integrate this with their own findings. All of these are indeed important skills. But what I observe, and increasingly value, is what students learn about the specific process of measurement and the nature of data.

Students - people - have a tendency to assume that quantitative data must be out there waiting to be found: on the web, organised and collated. How the figures get there and who collected the data and analysed them are not questions they seem to ask. Nor do they probe definitions (let alone response rates) - or not unless and until they start trying to locate, manipulate and integrate a variety of data on a specific subject. In many cases and in many disciplines, that is exactly what a dissertation involves.

The process of collecting and classifying observations is enormously important in learning to treat data cautiously. If one deals only with large secondary data sets, the numbers take on a life of their own. One assumes that samples are representative; that the people who filled in questionnaires all did so seriously, truthfully and in full; that there was no ambiguity in physical measurements; that whoever coded and entered the data knew what they were doing; and that it was self-evident what all the observations meant.

In fact, of course, "raw" data are nothing like so well behaved. To take a recent, masters-level example, suppose one is comparing something as apparently simple and objective as how many times different sorts of music are played in "top" venues. What defines "top"? If one is interested in changes over time, what does one do about the way that new sorts of music emerge? How does one even count? Do four short Schubert lieder count as four different classical pieces, and a Mahler symphony as one?

Or maybe, like one of my recent students, you want to track how many history graduates from a given university enter teaching over a 20-year period. What do you do when "teaching" as a recorded student destination sometimes includes further education as well as schools, sometimes includes higher education as well, sometimes does neither? What counts as a "history" graduate, for that matter? And all this before you start the actual analysis.

Working with data also means that, every year, a good few students learn another, rather darker lesson: that following trends, at least in the UK, can be very difficult indeed, even with large "official" data sets compiled by government agencies.

First, statistics that were routinely calculated can suddenly disappear. (The Office for National Statistics' sudden decision to stop calculating average non-manual earnings caused enormous problems for one of my students this year.) But also, and more frequently, definitions change constantly, in ways that seem to be dictated by changing government priorities. This makes it enormously difficult to see what is happening over time. In education, for example, different qualifications are lumped together (and then separated again); or statistics are reported in terms of performance targets that keep altering. The definition of "unemployment" has been changed so frequently that comparisons over time (let alone with other countries) become virtually impossible.

This might not matter terribly if it were just a matter of student learning or academic inquiry. But good statistics are at the heart of governmental accountability, as well as good policymaking. The ONS is being reorganised in a way that the Government claims will increase its independence, but which everyone else thinks will do the opposite. And the quality of the ONS staff is threatened by its forced move to Newport. The bigger our Government, the more important it is that citizens understand the whole process of statistics gathering; so long live the masters dissertation.

- Alison Wolf is Sir Roy Griffiths professor of public sector management at King's College London.
