Tuesday, October 27, 2009

question about random sequences

OK, a respite from my last two posts about the Research Excellence Framework. The following question occurred to me; I make no promise that it is interesting.

Suppose you repeatedly roll a fair die and write down the sequence of numbers you get. I'm going to delete numbers from the sequence according to the following rule. Let X be the first number in the sequence. I delete the first block of consecutive X's (which will usually just be a single X, of course), then I delete the second block of consecutive X's, retaining everything in between. Then I apply the same rule to the rest of the sequence of dice rolls (everything after the second deleted block): it will start with some Y not equal to X, so I get rid of the first two blocks of consecutive Y's, and continue in this way.

For example, I would delete the bracketed elements of the following:

[4 4] 2 5 [4 4] [2 2] 5 [2] 5 ...
And the question is: is the sequence of undeleted numbers indistinguishable from a completely random sequence? It seems to work for 2-sided dice (coins).
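One way to probe the question empirically is a quick simulation. The sketch below assumes one particular reading of the rule (the retained in-between elements are kept for good but not reprocessed, and the process restarts on everything after the second deleted block); it applies the deletion to a long run of fair die rolls and prints the frequency of each face among the survivors:

```python
import random
from collections import Counter

def thin(rolls):
    """Delete the first two maximal blocks of the current value X,
    keep everything between them, then restart on whatever follows
    the second deleted block."""
    kept = []
    i, n = 0, len(rolls)
    while i < n:
        x = rolls[i]
        while i < n and rolls[i] == x:   # delete the first block of X's
            i += 1
        while i < n and rolls[i] != x:   # retain everything up to the next X
            kept.append(rolls[i])
            i += 1
        while i < n and rolls[i] == x:   # delete the second block of X's
            i += 1
    return kept

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(200_000)]
survivors = thin(rolls)
freq = Counter(survivors)
for v in sorted(freq):
    print(v, round(freq[v] / len(survivors), 4))
```

Of course, the marginal frequencies coming out at roughly 1/6 each is guaranteed by symmetry alone (relabelling the faces leaves the process unchanged); "indistinguishable from random" is a stronger claim, and would also require checking that there are no correlations among the surviving elements.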

Monday, October 19, 2009

Research Excellence Framework (part 2)

Continuing from my previous post; once again, foreigners who are not into schadenfreude should read no further. Some new web links follow. First, a couple of petitions:
A new collection of web pages highlighting this topic: The Danger of Assessing Research By Economic Impact

The topic has led to a flurry of emails on the CPHC mailing list. Opinions there are divided: some don't mind the proposal, some are defeatist, and some object to it. Some discussion has addressed whether one should push for a broader definition of "impact" beyond economic impact, and also the general burden of assessment -- it is costly to have to stop what you're doing every 5 years and enter into an episode of acute navel-gazing, and the present Government does not propose to compensate us for that! Outside of CS, there is a stronger consensus against the proposed definition of "impact". Do not be defeatist: the REF is a version of (and probably largely plagiarized from) Australia's Research Quality Framework (RQF), a similar Gradgrind-like model which was cancelled in December 2007 due to a change in government. Notice that in the UK, prospects for a change in government are very strong -- let's see if history repeats itself!

And finally, let me quote from an article in yesterday's Guardian by Madeleine Bunting, which addresses the point that market theory has closed down public discourse about injustice, and that we urgently need to describe what we should value. From the article:
But don't look to economists to get us out of this hollow mould of neoliberal economics and its bastard child, managerialism – the cost-benefit analysis and value-added gibberish that has made most people's working lives a mockery of everything they know to value.

Friday, October 16, 2009

Research Excellence Framework

Readers from outside the UK may wish to stop reading at this point, unless they are into schadenfreude. I recommend that readers from the UK sign this petition, which is sponsored by the University and College Union (UCU). It relates to the Research Excellence Framework (REF). The following text accompanies the petition; below it I add some comments of my own.

The latest proposal by the higher education funding councils is for 25% of the new Research Excellence Framework (REF) to be assessed according to 'economic and social impact'. As academics, researchers and higher education professionals we believe that it is counterproductive to make funding for the best research conditional on its perceived economic and social benefits.

The REF proposals are founded on a lack of understanding of how knowledge advances. It is often difficult to predict which research will create the greatest practical impact. History shows us that in many instances it is curiosity-driven research that has led to major scientific and cultural advances. If implemented, these proposals risk undermining support for basic research across all disciplines and may well lead to an academic brain drain to countries such as the United States that continue to value fundamental research.

Universities must continue to be spaces in which the spirit of adventure thrives and where researchers enjoy academic freedom to push back the boundaries of knowledge in their disciplines.

We, therefore, call on the UK funding councils to withdraw the current REF proposals and to work with academics and researchers on creating a funding regime which supports and fosters basic research in our universities and colleges rather than discourages it.

It is not only the UCU that is expressing grave concerns about the REF; universities and societies that represent academic disciplines are similarly concerned, and I will give examples of these in later posts. For the moment, the REF seems to be doing the impossible, namely making us feel nostalgic for the Research Assessment Exercise (RAE). At least the RAE was exactly that, a research assessment exercise. It did not set out to distort the meanings of the words it uses, such as "impact" and "excellence".

The REF -- in its proposed form -- discriminates against theoretical work and imposes an artificial incentive to do work that has short-term economic impact. And you know what? I've got nothing against economic impact. But if a certain kind of research is able to make money, that should be its own reward; government-funded money-making is ridiculous. And in any case, don't call it "research excellence"; it's not the same thing.

Some articles

Article in the Independent, Against The Grain: 'I didn't become a scientist to help companies profit', by Philip Moriarty

See the comments that follow this article in the Guardian (the comments that get highly recommended are correct)

Monday, October 05, 2009

State the open problems!

A few incoming paper-review requests remind me of the following long-standing gripe.

A nice feature of theoretical computer science is that the results we obtain tend to raise lots of very well-defined research questions. I'm very much in favour of the practice of stating these open problems in detail, in our research papers. Indeed, a thorough discussion of future work is arguably as important as including references to all appropriate previous papers -- the latter puts you in touch with the past and the former makes the connection with the future. Some papers take a lot of care to spell out what the open problems are, but some (most?) don't bother. Maybe the authors think it should be obvious what the open problems are. Occasionally they perhaps don't want the reader to pick up on an open problem they've got lined up for a follow-up paper.

Let's consider the case where the open problem being raised is "obvious". For instance, suppose you give an algorithm that approximates some objective within a factor of 2.5. Clearly, the problem of achieving an approximation ratio better than 2.5 is (implicitly) raised by the paper's result, so why bother to point it out? I would say it usually is worth pointing out, partly just to confirm that you really would find further progress interesting, and partly because you might have some useful additional discussion to add, such as a consideration of the prospects for improving beyond a ratio of, say, 2.

If someone writes a paper that can claim to have solved an open problem stated explicitly in a previous work, that is usually a good piece of relatively objective evidence that the result is interesting. Most papers do not manage to achieve this: they usually solve some variant of an open problem posed previously, and indeed "posed" may just mean implicitly rather than explicitly. That is, there is a scarcity of "official" open problems (those that have been raised and deemed interesting in a published paper).

(There are some web sites that help, e.g. Comp geometry problems, which has links to other open problem pages, and Open problem garden, a wiki for general math problems.)