Friday, February 27, 2009

Estonia Winter School

Here is an advert for Christos Papadimitriou's talk at Liverpool next week. To my shame, I will not be there; I will be giving lectures at the 14th Estonia Winter School in Computer Science. This school is for research students mainly from Eastern Europe, and my lectures will be an introduction to game theory and computational complexity, including an introduction to the complexity classes of total search problems introduced by Christos in the early 90s.

The school is located at Palmse Manor, a remarkably isolated-looking spot (based on checking Google Maps) about 80km east of Tallinn.

Sunday, February 22, 2009

A mild defense of bibliometrics

In an ideal world, bibliometrics would be a character from Asterix (the village librarian, presumably). In real life, they are one of the dark clouds that hang over the academic world, and give it cause for concern. They are going to be used in Britain's Research Excellence Framework, which may be enough to antagonize a few people. And yet, I'd like to propose that they have some merit; to make a half-hearted case in their favour. I don't have a particularly good bibliometric standing myself, so this defense is not motivated by self-interest.

Here are two places where I have recently seen bibliometrics coming under fire. This post at Luca Aceto's blog has a go at them, and links to a statement by the editorial board of a journal that criticises them. More recently, I have been following a debate that was begun by a letter to the Times Higher a week ago that criticised the emphasis on the economic benefit of research proposals in funding decisions; one place this debate has been continued is here at Steven Hill's blog. Hill commends bibliometrics as evidence that Britain's scientific prowess is undiminished; his opponents challenge this evidence. Hill's status as a funding chief marks him out as the villain of this particular battle, and thus bibliometrics, as a measure of a person's scientific contribution, are further tainted.

So what do I like about them, then? Actually, as a measure of scientific contribution, they are indeed inaccurate. What I like is not so much the way they measure a researcher, as the way they incentivise one. Let's accept as a given that one's scientific output must, from time to time, be assessed for quality. The act of measuring will inevitably affect behaviour, since the measurement criteria are announced ex ante, so that measurement-as-incentive is just as important as measurement for the sake of measurement, if not more so. Now, to boost your citation count, what must you do? Write stuff that other people find interesting, seems like an obvious answer. This seems positively virtuous. (An alternative way to measure a person's research output, which is not unfamiliar to most of the scientific community, is to compute his research grant income. Grant income may indeed correlate with research quality, but it seems clear that the pursuit of grant income is by no means as socially virtuous as the pursuit of citations.)

To develop this observation about measurement-as-incentive in more detail, consider the h-index, for which I will now include a definition for the sake of completeness. A person's h-index is the largest value of N such that he/she has written at least N papers, each of which has been cited at least N times. Again, as a measure there are problems with this - if my h-index is 10 and I write a new paper that picks up fewer than 11 citations, it cannot improve my h-index. But surely that paper should make a positive contribution to my research output? Yes, but the h-index encourages a researcher to "raise his game" as his h-index increases; the better you do, the more ambitious your next effort should be -- and this seems like a good thing to encourage.
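The definition above is easy to state as a computation; here is a minimal sketch, where `citations` is a hypothetical list of per-paper citation counts:

```python
def h_index(citations):
    """Largest N such that at least N papers have at least N citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited 10, 8, 5, 2 and 1 times give an h-index of 3;
# adding a new paper with only 2 citations leaves it unchanged,
# illustrating the point made above.
print(h_index([10, 8, 5, 2, 1]))     # 3
print(h_index([10, 8, 5, 2, 2, 1]))  # still 3
```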

Now, let me turn to the most obvious fault of bibliometrics, which is their weak correlation with research quality. My hope, as a technocrat, is that technology will in due course alleviate this problem. Let me give one example: their current failure to distinguish between a high-quality and a low-quality citation. By way of example (I exaggerate to make the distinction obvious), a paper may be cited in a context like "...finally, the papers [4,5,6,11,12,13,17,24] also study minor variations of this model, thus providing useful evidence that this is a 'hot topic'", or it may be cited in a context like "We make repeated use of a fundamental and masterful theorem of Haskins [13], without which the present paper could never have been written." Both of these sentences would result in [13] picking up a single citation; clearly, one would like the second to carry more weight. In the future, better computer analysis of citation contexts may well allow this distinction to be made. One may also hope that better citation analysis may be able to detect other undesirable artifacts, such as a flurry of mediocre papers that all cite the previous ones but have little or no outside influence. Another idea I have is a simple one - take the age of a cited paper into account. It should be a more valuable citation when you cite a 10-year-old paper than when you cite a 1-year-old one.
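That last idea can be sketched as a toy scoring function. The linear weighting and the 10-year cap below are purely illustrative assumptions on my part, not any established bibliometric:

```python
def weighted_citation(age_years, cap=10):
    """Weight a citation by the age (in years) of the cited paper.

    Older cited papers earn the citing act more weight, on the theory
    that citing long-established work signals lasting influence.
    The linear ramp and the cap are illustrative choices only.
    """
    return min(age_years, cap) / cap

# Under this scheme, citing a 10-year-old paper is worth ten times
# as much as citing a 1-year-old one.
print(weighted_citation(10))  # 1.0
print(weighted_citation(1))   # 0.1
```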

Finally, a possible strength of bibliometrics, acknowledged in Oded Goldreich's (otherwise critical) essay On the Evaluation of Scientific Work, is that the errors they make are the work of many, not the work of a few, as is the case for expert evaluation. So, can "the wisdom of crowds" be harnessed more effectively? Perhaps. Indeed, making it work could turn out to be quite a technically interesting problem.

Tuesday, February 17, 2009

Overseas research proposals good, domestic ones bad

Simon Rattle once observed that Liverpool is a city that turns its back to England and looks outward to the rest of the world. At any rate, I vaguely recall some such sentiment being attributed to him in a concert programme that I was perusing a few months ago. If there's anything to it, I offer the following in the best Liverpudlian spirit.

I got an email asking me to review a research proposal submitted to a foreign country's equivalent of EPSRC. Without having perused the proposal in any detail, I feel much happier about having to review it than I would one submitted to EPSRC. EPSRC, like probably most other national research funding organisations, is biased towards getting reviews from within its own country. But there are good reasons for seeking them from outside.

The main reason: Foreign reviewers do not have a conflict of interest. This problem is particularly acute in the UK, where the value of research grants has become extremely inflated as a result of "full economic costing". In the zero-sum game of research funding, when a rival institution attracts a grant, one's own institution has been disadvantaged as a direct consequence. This is especially true in the present economic climate, which may be roughly characterised as "everyone is running out of money". Ignoring this problem increasingly requires one to take up residence in an ivory tower, and become the sort of complacent academic who imagines that higher education is recession-proof. The other obvious source of conflict of interest is that one's own taxes are being used to fund the proposed research. Clearly, this problem goes away when one has to review a foreign research proposal.

It enlarges the pool of expert reviewers. Most individuals' research interests are highly specialised, and in a globalised scientific community there is no particular reason to assume that there exist any reviewers within one's own country who are competent to do the job. I've got an example in mind; the less said about that the better.

It exposes a national research community to external scrutiny. This is related to the previous point, but by no means the same. A national research agenda can become misguided, or a line of research of questionable value can thrive unchecked, within a system where such scrutiny is absent.

There are practical hurdles to the globalisation of research-grant reviewing that I am advocating. An EPSRC official was asked, at a meeting I attended a few years ago, whether it was OK to include overseas reviewers in the usual list of potential reviewers that forms part of a proposal, and he or she advised that they had trouble getting reviews from foreign reviewers. Still, that is how we review research papers. One might also object that a foreigner might fail to appreciate the "grade inflation" whereby a proposal that is not rated very highly is likely to fail. But anything that reverses that grade inflation and allows reviewers to use the rating system in a more balanced way would surely be a fine thing.