The Wikipedia page on Computer Science quotes the folklore one-liner that Computer Science is no more about computers than astronomy is about telescopes. On the other hand, it turns out that algorithmic mechanism design is sometimes about telescopes — provided that it’s a telescope that everyone wants to use.
The paper Telescope time without tears – a distributed approach to peer review (link to journal version) proposes a mechanism for allocating telescope time to astronomers, and this mechanism is being tested by the NSF in their review process for grant applications. There is a critique and discussion here at Michael Mitzenmacher’s blog. The general approach of the mechanism is that each competitor gets to rank some of the other competitors; these rankings are aggregated to get an overall ranking; then (the controversial bit) a competitor gets a small bonus if their partial ranking has good agreement with the overall ranking. The risk is that a competitor tries to second-guess the consensus rather than express their honest opinion.
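To make those three steps concrete, here is a minimal Python sketch of a mechanism of this shape. To be clear, the details are my own illustrative assumptions, not the paper’s: I use a Borda count for the aggregation step, a Kendall-tau-style pairwise count for “good agreement”, and an arbitrary `bonus_weight`; the reviewer and proposal names are made up.

```python
import itertools

def aggregate_borda(partial_rankings, proposals):
    """Combine the reviewers' partial rankings into one overall ranking.

    Borda-style (an illustrative choice): a proposal ranked first in a
    list of n earns n points, second earns n - 1, and so on; proposals
    a reviewer did not rank earn nothing from that reviewer.
    """
    scores = {p: 0.0 for p in proposals}
    for ranking in partial_rankings.values():
        n = len(ranking)
        for position, proposal in enumerate(ranking):
            scores[proposal] += n - position
    return sorted(proposals, key=lambda p: scores[p], reverse=True)

def agreement_bonus(partial, overall, bonus_weight=0.1):
    """Small bonus for a partial ranking that agrees with the consensus.

    Agreement is the fraction of pairs in the partial ranking that the
    overall ranking orders the same way (a Kendall-tau-style count);
    bonus_weight is an arbitrary scale, not a value from the paper.
    """
    position = {p: i for i, p in enumerate(overall)}
    pairs = list(itertools.combinations(partial, 2))
    if not pairs:
        return 0.0
    concordant = sum(1 for a, b in pairs if position[a] < position[b])
    return bonus_weight * concordant / len(pairs)

# Each competitor ranks a subset of the other competitors' proposals...
partials = {
    "reviewer1": ["A", "C", "B"],
    "reviewer2": ["B", "A", "D"],
    "reviewer3": ["A", "D", "C"],
}
# ...the partial rankings are aggregated into an overall ranking...
overall = aggregate_borda(partials, proposals=["A", "B", "C", "D"])
# ...and each competitor earns a bonus for agreeing with the consensus.
bonuses = {name: agreement_bonus(ranking, overall)
           for name, ranking in partials.items()}
```

In the mechanism as described, that bonus presumably improves the standing of the competitor’s own proposal; what matters for this discussion is only that the bonus is computed from agreement with the consensus, which is exactly where the temptation to second-guess the consensus comes from.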
The scheme being proposed by the NSF is very faithful to the one proposed in the Telescope Time paper. I read the paper — it is very readable and does a great job of articulating well-known problems in the scientific world, such as the difficulty of finding reliable referees, the lack of incentive to be a reliable referee (the main reward for which is that you get more refereeing to do), and the problem of allocating scarce resources to competing proposals, where the resource could be telescope time, funding, or presentation time at a conference. It makes lots of points that will resonate with many readers, for example that the ideal of a correct ranking of proposals in order of merit is “at best an over-simplification and at worst a fantasy”. The paper also carefully considers the limitations and shortcomings of the approach. Regarding the main concern noted above, they write:
Perhaps the greatest potential concern is that this procedure will drive the allocation process toward conservative mediocrity, since referees will be dissuaded from supporting a long-shot application if they think it will put them out of line with the consensus. This effect would need careful monitoring, but it could also be addressed explicitly in the instructions given to applicants as to the criteria they should apply in their assessments: if they are aware that all applicants have been encouraged to support scientifically-exciting speculative proposals, then they are likely to rank such applications highly themselves to remain in line with the consensus.
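To put a toy number on that worry, continue the hypothetical sketch above: a reviewer who honestly places a long-shot proposal at the top of their partial ranking loses most of the agreement bonus relative to one who reorders the same subset to match the consensus. The rankings below are invented purely for illustration.

```python
# Reusing the hypothetical agreement_bonus from the sketch above.
overall = ["A", "B", "C", "D", "E"]  # suppose this is the consensus ranking

honest = ["D", "A", "B"]        # reviewer genuinely rates long-shot D first
conformist = ["A", "B", "D"]    # same subset, reordered to match consensus

print(agreement_bonus(honest, overall))      # 1/3 pairs concordant -> ~0.033
print(agreement_bonus(conformist, overall))  # 3/3 pairs concordant -> 0.1
```

The fix suggested in the quote amounts to shifting the consensus itself: if everyone expects everyone else to reward speculative proposals, then conforming to the consensus and supporting long shots no longer pull in opposite directions.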
The following quote gives some background on how the paper came about:
In his attempts to put off assessing telescope proposals by learning more about the subject, MRM came into contact with this work’s co-author (DGS) who has extensive expertise in both astrophysics and the mathematical theory of voting and its associated complexities.
Finally, here is a link to an attempt I made to address a similar problem, in the context of reviewing papers for conferences.