Monday, July 11, 2011

Prepayments on student tuition fees

There is pleasure (but not much) to be had from the spectacle of a Government department caught up in a web of its own contradictions. The Dept of Business, Innovation and Skills (BIS) is currently consulting on whether there should be a scheme for early repayment of tuition fees, and what the rules should be.

Recall that the current plan is that interest on student loans is to be charged at Retail Price Index plus 3%. BIS say:
We are committed to the progressive nature of the repayment mechanism. It is therefore important that those on the highest incomes after graduation are not able unfairly to buy themselves out of this progressive mechanism by paying off their loans early. That is why we are consulting on potential early repayment mechanisms – similar to those paid by people who pre-pay their mortgages.
This quote is an admission that the Government is rubbing its hands at the prospect of lots of lovely interest being paid on its loans, and doesn’t want to let people pay them off early. The word “progressive” has been hijacked to refer to a scheme for keeping us all in debt for as long as possible. (Should people who can buy houses without mortgages be prohibited from “unfairly” buying them without incurring interest payments?)

Digging deeper into the consultation document, we get:
These mechanisms would need to ensure that graduates on modest incomes who strive to pay off their loans early through regular payments are not penalised. For example, a five per cent levy might be charged on additional repayments each year over a specified amount such as £1,000 or £3,000. Alternatively, those on higher incomes (e.g. over £60,000) who made an additional repayment could be required to pay a five per cent levy on this sum.

Further down, they admit that there’s a case in favour of allowing early repayments, in that “It allows graduates on modest incomes to pay off their loans early.” Here they flat-out contradict themselves by admitting that people should actually be encouraged to pay their debts. Provided, that is, you’re on a “modest income” — if someone who could actually afford to repay tries to do so, well, that would just not be cricket.

I could go on, but there’s no need to; for anyone who wants to read further criticism there are some good responses on the web site; I recommend the comment by Tim Leunig.

Wednesday, July 06, 2011

hat tricks

Among the many things the internet has been blamed for is the phenomenon of people only ever reading material they agree with, thereby reinforcing their opinions without having them challenged. By analogy, do we spend too much time hanging out with our familiar research communities? At the 23rd British Combinatorial Conference, where I had been invited to give a talk, I found myself struggling to come to terms with research on topics that have obvious computational questions, but where it’s other questions that get considered.

One talk was on a topic that is, at a stretch, vaguely game-theoretic: “Hat problem on a graph” by Marcin Krzywkowski relates to a kind of puzzle described here1, in particular the 3 hat problem. Three players are fitted with hats; each hat may be either red or blue, with the colours chosen independently and uniformly at random. Of course, you can see the other players’ hats but not your own. Each player may (simultaneously) make a guess as to his own hat colour, or else abstain from guessing. They win if at least one player makes a correct guess and no player makes an incorrect guess. For this 3-player case there is a simple guessing scheme whereby their win probability is 3/4. As hinted in this version of the puzzle, if the players can flip coins, there is a randomized scheme that lets them win with probability 3/4 even if the hats are allocated by an adversary — but the adversary should not be able to eavesdrop on their coin-flipping before they get fitted with the hats. (A question: suppose the adversary can eavesdrop on them beforehand but then they can flip coins after the hats have been fitted. The players can win with probability 1/2 by delegating one of them to guess at random. I don’t think they can do better, and would guess there is a nice proof that works for any number of players, but I haven’t figured one out.)
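For the 3-player case, the well-known scheme (if the two hats you see match, guess the opposite colour; otherwise abstain) can be verified by enumerating all eight hat assignments. A minimal sketch of my own, not code from the paper:

```python
from itertools import product

def play(hats):
    # Strategy: if the two visible hats match, guess the opposite colour;
    # otherwise abstain (None). Colours are 0 and 1.
    guesses = []
    for i in range(3):
        others = [hats[j] for j in range(3) if j != i]
        guesses.append(1 - others[0] if others[0] == others[1] else None)
    correct = any(g == h for g, h in zip(guesses, hats))
    wrong = any(g is not None and g != h for g, h in zip(guesses, hats))
    return correct and not wrong

# Enumerate all 8 equally likely hat assignments.
wins = sum(play(h) for h in product([0, 1], repeat=3))
print(wins, "wins out of 8")  # 6 out of 8, i.e. probability 3/4
```

The scheme loses exactly on the two monochromatic assignments, where all three players guess (wrongly) at once; on the other six it wins with a single correct guess.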

Back to the paper — suppose there are n players and suppose that there are some restrictions on which players can see each others’ hats. It is natural to represent this with a graph on the players, in which a player can see only his neighbours, so that the basic hat problem would assume the complete graph. Then, one can ask how well the players can do for various kinds of graph. If I recall correctly, the players can win with probability 1/2 if the graph is a path; other kinds of graph are also considered.

1I thank Sophie Huczynska for drawing my attention to this web site.

Monday, June 20, 2011

Computer games as a lens on higher education policy

Trying out the Facebook app "global warfare", it seems that a player can equip his town with various facilities, one of which is a university. Of course, my own biases being what they are, I promptly acquire one and proceed to spend all my resources on upgrading it (which is not something that usually happens in real life, but computer games are there to allow players to indulge their fantasies). Anyway, given a university, a player can then click on "research" and select from a list of topics to which to devote one's (virtual) academic efforts. Unfortunately, "computational complexity" was not on the list. Instead, I settled for "drilling", which, when completed, causes your oil production to increase by 10%. This helps to make sense of the various calls for proposals that emanate from EPSRC; I have a vivid vision of our political leaders clicking on topics like "construction" or "military science" in an effort to obtain a quick reward. Maybe next time I will select "espionage", which could perhaps lead to cryptography, and thence to computational complexity.

Monday, June 06, 2011

nchum

Taking a break from marking exam scripts, I am intrigued by this news story about the New College for the Humanities, www.nchum.org, reported at the BBC: Academics launch £18,000 college in London, the Guardian: Richard Dawkins heads line-up at private £18,000-a-year university (with howls of dismay in the comments), discussed at Mary Beard's blog here (with some skepticism that I share), and see this new post at the Exquisite Life blog for some more sensible skepticism: “rushed and half-baked”.

I first spotted the story yesterday in a headline in the Sunday Times while at the supermarket, a clue that while this story has sprouted legs, it is not going to go the distance. Surely we should have had some preliminary reports in the Times Higher ages ago (they now highlight it: “Top names, top whack: new humanities-focused institution to charge £18K fees”.) A new venture of this nature is something that will be interesting to watch; I am not dismayed like the UCU (Launch of new private arts and humanities college is proof government is entrenching 'inequality'), but....

Various things don’t quite compute. For one thing, the numbers. 18k per student isn’t enough to bring in all these top academics, at least not with the sort of commitment from them that might be imagined by an uninformed reader. The USA has better options. Plus, the people who invested in this venture will want some return on investment. The name of the web site seems... poorly considered. “nchum” will get pronounced “en-chum”... also, why use the .org suffix as opposed to .edu, or .ac.uk? Over at the Daily Telegraph, Boris Johnson is still enthusiastic:
The trouble with Britain today, he [A C Grayling, the new master of nchum] said, was that we simply didn’t have enough elite university provision – and especially not in the humanities subjects, where teaching budgets are under such pressure.
But, that’s got more to do with the definition of “elite” — the whole point of elite provision is that there shouldn’t be enough of it. If you create more elite provision, you end up killing the thing you love.

All right, back to the marking.

(Added 7.6.11: this article raises the above financial concern in more detail. This critique of the "money-grubbing dons" has attracted over 500 comments but doesn't address the concern about viability.)

Wednesday, June 01, 2011

Watching one’s own talks on video

For readers who don't get beyond the first sentence, the take-home message is that it’s a great idea to watch videos of your own talks, for the purpose of improving your style and presentation.

The papers we write are very restricted in format. Indeed, a paper is pretty much determined by the result you’ve obtained. You get to name the variables, but that’s about it. Thereafter, you follow the well-worn path of introduction, model, results, and conclusions.

Contrast that with the activity of giving a talk on the work. Do you make jokes? Do you dress up? How much technical detail do you include? Do you compromise on accuracy for the purpose of conveying the intuition? Do you include cute animations in your slides? Do you stand primly to one side of the screen, or do you prefer to pace around? Do you sound informal and chatty, or grand and authoritative? Do you memorize any key passages? Decisions, decisions!

And, there are no right answers to the above questions; the answer depends on who you (and the audience) are, and what you’re talking about. Different approaches work for different people. And here’s where watching oneself on video can help.

I watched the video of my talk at the iAGT workshop mentioned in the previous post (and also here :-). And —this is the key point— various mistakes in the delivery of which I was blissfully unaware were suddenly exposed to the harsh light of day as a result. I then tracked down a video of a talk I gave at Microsoft Research (Cambridge) a few months ago, just for the purpose of gathering more data. The only previous time I watched myself give a talk was at a training session on lecturing quite a while ago, where a group of us had to give short fragments of undergraduate lectures, that were recorded and played back. At the time, the equipment was cumbersome and analogue, so you did not get to study your performance at leisure, at a later date. Also, it’s worth taking in a video lasting half an hour or so, to see if your style changes over time.

As it gets more common to record talks, you hopefully get more chances to do this. If not, maybe you should get a colleague to record one of your undergraduate lectures, or any other similar technical presentation.

Sunday, May 29, 2011

iAGT workshop, May 22nd-26th

I just returned from the workshop on Innovations in Algorithmic Game Theory at Hebrew University, Jerusalem. This blog post is to warmly thank the organisers, Noam Nisan and Michal Feldman, and the student “volunteers” (that word with its inverted commas appeared on their official T-shirts, along with a picture of a chess pawn). (Noam himself has not so far mentioned it at his blog; I guess that being too busy would be a valid excuse.) Anyway, it was a great meeting; indeed, at a time when there is much to complain about in academic life, it was a reminder of why I went into this business (i.e., you get to meet up with interesting people and exchange interesting ideas). Later I’ll hopefully get around to discussing in more detail some topics that came up. There are videos of the talks and panel discussion here (and here is the link to presentations 21-37).

Saturday, April 30, 2011

SAGT and WINE

In the unlikely event that someone reading this hasn't already seen these at Noam’s blog: I was also asked to publicize them, so here goes. Publicizing them here may, at least, help to boost their rank in Google searches, which is especially worthwhile for WINE.

Giuseppe Persiano asked me to post a reminder that the submission deadline is coming up (May 9th) for SAGT 2011, the 4th international Symposium on Algorithmic Game Theory (meeting in Amalfi, Italy, Oct. 17-19). Edith Elkind asked me to advertise WINE 2011, the 7th Workshop on Internet and Network Economics (submission date July 31st; meeting in Singapore, Dec. 11-14).

Thursday, April 28, 2011

AV referendum: it's the results, stupid!

Winston Churchill said "However beautiful the strategy, you should occasionally look at the results." But that observation has had little impact on the debate about which voting system to support in the forthcoming referendum. That is to say, there is much discussion about which voting system better represents the will of the people, or would give us strong versus weak governments, or would make our MPs work harder. But those of us who support AV should address the possibility that the present voting system may, despite the objections, actually produce good outcomes, where "good outcomes" means not strong governments or representative governments, but social welfare.

At the risk of coming on like David Cameron, my general impression is that the current system has let us down. Right now, being a citizen of the UK makes me feel like a shareholder of a company that is underperforming, and I'm watching its price steadily go down. My sense is that we're getting things wrong where other countries are getting them right. This is not the place to review examples in detail; in contrast to David Cameron, I'll simply test this claim by taking a look at the obvious evidence: our performance in quality-of-life rankings. The results do not reflect well on the status quo!

2007: down 2 places to 17th
2010: down 5 places to 25th
2009: quality of life poor relative to other EU countries
2009: child poverty: European league table: the UK came 24th, well below countries of similar affluence (despite 10 years of a Govt that supposedly tried to improve it!)
2008: Zut! France leapfrogs UK in economic table

What we are seeing here is not just poor performance, but poor and worsening performance. So you can't just blame the weather! The obvious culprit is bad public policy. AV offers a genuine change to the system by which the electorate gets to influence public policy. The time has come to vote for Churchill, not Cameron.

(Added 6.5.11: OK I admit it, the reason why Britain is declining is that we just have a lower average IQ than most other places.)

(Added 6.6.11: UK slipping down the global rankings, Centre for Policy Studies warns.)

Monday, April 18, 2011

Alternative Vote referendum

To see why you should vote in favour of the move to Alternative Voting on the 5th of May, look no further than the contorted arguments of its opponents. Here is David Cameron on the topic:
Too often debates about AV are less like political arguments, and more like scientific discussions, where people get lost in a language of proportionality and preferences, probabilities and possibilities.

Of course, some of these things are important. But for me, politics shouldn't be some mind-bending exercise. It's about what you feel in your gut – about the values you hold dear and the beliefs you instinctively have. And I just feel it, in my gut, that AV is wrong.

To think that we (the academic community) once had him eating out of our hands, studying for a degree in Philosophy, Politics and Economics. And what a colossal failure of education that the above is DC's take on social choice theory. Not that Oxford PPE is very strong on social choice theory; I took a look at the course information page; it's a nicely-designed web site, but I think you'd be lucky to graduate with a knowledge of Condorcet's theorem. And I would like to know what could possibly be more central to PPE.

(Added 23.4.11: This article in the Guardian contains what is for me the first sighting of the Gibbard-Satterthwaite theorem in a national newspaper. This blog post by Tim Gowers proves that by voting yes, I am in good company!)

Friday, March 18, 2011

Incentivizing reviewers to make sincere predictions

I'm not sure that there's a research problem to be derived from the following discussion, but it was fun to write. Our main finding is that two reviewers are better than one, on the grounds that they can be incentivized to provide accurate forecasts of the acceptability of a paper by a program committee that is arbitrarily unlikely to read it.

We regard the reviewer’s review as a forecast of whether the paper will be accepted, rather than an input to the decision. This has the advantage that the reviewer is not required to make a value judgment on the paper; a forecast is just a guess as to whether the paper will be accepted. The obvious problem is that if the forecast gets used to help with the decision, it is at risk of becoming a self-fulfilling prophecy.

We model the paper selection process as follows. Any paper has an associated quantity p∈ [0,1] that represents the probability with which it ought to be accepted, and furthermore, this value is revealed to a careful reader. In an ideal world, the PC passes the paper to a reviewer, who reads it and reports this number p to the PC, and the PC proceeds to accept the paper with probability p. (We assume that the PC has access to an unlimited source of randomness, which appears to be a realistic assumption.)

Suppose now that a reviewer knows in advance that his review will be ignored by the program committee, who will instead read the paper and accept it with the correct probability. In that case, if the reviewer reports probability p, he/she should be given a reward of log(p) if the paper is accepted, and log(1-p) if it is rejected. (These quantities are negative, but we do not claim that reviewing papers is rewarding.) These rewards incentivize the reviewer to report the correct probability.
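This truthfulness property of the logarithmic rewards can be checked numerically. In the sketch below (my own illustration, with an arbitrary true acceptance probability p = 0.3), the reviewer's expected reward over a grid of possible reports is maximized at the truthful report:

```python
import math

def expected_reward(p, q):
    # True acceptance probability p, reported probability q:
    # reward log(q) on acceptance, log(1-q) on rejection.
    return p * math.log(q) + (1 - p) * math.log(1 - q)

p = 0.3
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda q: expected_reward(p, q))
print(best)  # 0.3: truthful reporting maximizes expected reward
```

The underlying fact is that the logarithmic scoring rule is strictly proper: the derivative p/q - (1-p)/(1-q) of the expected reward vanishes exactly at q = p.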

Now, suppose when the PC has received the review (the number p), they then read the paper with some probability r. If they read the paper, they accept with the correct probability, and if they don’t read it, they accept with probability p. The problem is that if r is very small, and the reviewer finds that the paper should be accepted with probability about 1/2, the above rewards tempt him to go to extremes and report (say) 0⋅01 or 0⋅99. Important note: the reward should depend only on the review (the reported p) and the (binary) acceptance decision, since you don’t want to reveal any secret discussions to the reviewer. So the PC doesn't have the option to read the paper with some small probability and punish him if they then find he lied.
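The temptation to go to extremes when r is small can be made concrete. In this sketch (my own illustration), the paper's true acceptance probability is 1/2, the PC reads it with probability r = 0.01, and an extreme report of 0.99 earns a higher expected reward than the truthful 0.5:

```python
import math

def expected_reward(q, r, p_true=0.5):
    # The PC reads the paper with probability r (then accepts w.p. p_true);
    # otherwise it trusts the report and accepts w.p. q.
    accept = r * p_true + (1 - r) * q
    # Logarithmic rewards: log(q) on acceptance, log(1-q) on rejection.
    return accept * math.log(q) + (1 - accept) * math.log(1 - q)

r = 0.01
print(expected_reward(0.5, r))   # about -0.69
print(expected_reward(0.99, r))  # about -0.08: the extreme report pays better
```

Intuitively, when the reported q itself drives the acceptance decision, the reviewer nearly controls the outcome, so an extreme report is almost never "caught".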

Given 2 reviewers, we can exploit their professional rivalry to make them tell the truth, by carefully aggregating their forecasts into a single probability, as follows. A basic requirement for a reward scheme is that if a reviewer has obtained value p from a paper, and the other reviewer is truthfully reporting that value p, then the remaining reviewer should also have the incentive to do likewise. Suppose we use the logarithmic rewards above, and the PC uses the average of the 2 reported probabilities, to decide on the paper. The following problem arises: suppose it's a great paper and p=0⋅999. A reviewer might be tempted to report (say) p=0⋅5, since that way, the PC will use a probability of about 0⋅75 to accept the paper, exposing the other reviewer to a big risk of a large penalty. The assumption here is that a reviewer aims to get a higher reward than the other one (his professional rival); the reward being some sort of credit or esteem rather than money.

Let q be the other reviewer's probability, and we seek a function f(p,q) that should be used by the PC as probability to accept the paper; we have noted that (p+q)/2 is not a good choice of f, in conjunction with the logarithmic rewards.

The reviewer’s utility u(p) is his expected reward minus his opponent’s expected reward:

u(p) = f(p,q)(log p - log q) + (1 - f(p,q))(log(1-p) - log(1-q))


We now notice that the above must be identically zero: the scheme should not incentivize the reviewer to move away from his value when q is incorrect, nor deter him from reporting it when q is correct, and making him indifferent achieves both. Setting the above to zero tells us the function f should be

f(p,q) = (log(1-p) - log(1-q)) / (log q - log p + log(1-p) - log(1-q))


It just remains for the PC to read the paper with any probability ε>0, and in the event that they read it, accept with the correct probability. If they read the paper, the reviewers are incentivized to tell the truth, and if they don’t, (and use the above f) the reviewers have no incentive to lie, so overall their incentive will indeed be to tell the truth.
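As a sanity check, the derived f really does make the utility u identically zero, so neither reviewer gains by deviating when the rival reports truthfully. A quick numerical sketch of my own (avoiding p = q, where f is an indeterminate 0/0):

```python
import math

def f(p, q):
    # The aggregation rule derived above: the probability the PC
    # should use to accept, given the two reports p and q.
    num = math.log(1 - p) - math.log(1 - q)
    den = math.log(q) - math.log(p) + math.log(1 - p) - math.log(1 - q)
    return num / den

def u(p, q):
    # Reviewer's utility: his expected reward minus his rival's.
    fpq = f(p, q)
    return (fpq * (math.log(p) - math.log(q))
            + (1 - fpq) * (math.log(1 - p) - math.log(1 - q)))

for p, q in [(0.2, 0.7), (0.999, 0.5), (0.4, 0.41)]:
    assert abs(u(p, q)) < 1e-9
print("u(p,q) = 0 on all tested pairs")
```

Writing A = log p - log q and B = log(1-p) - log(1-q), the rule is f = B/(B - A), so u = fA + (1-f)B = (AB - AB)/(B - A) = 0 identically, which the assertions confirm numerically.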

(Added 6.6.11: at the iAGT workshop, I heard about 2 papers that relate (so now, an answer to the first comment below). Only valuable experts can be valued by Moshe Babaioff, Liad Blumrosen, Nicolas S. Lambert and Omer Reingold, about contracts that will be accepted by self-proclaimed experts, provided that they really do have expert knowledge (and will be declined by a rational charlatan). And, Tight Bounds for Strategyproof Classification by Reshef Meir, Shaull Almagor, Assaf Michaely and Jeff Rosenschein, about learning classifiers where the class labels of data have been provided by agents who may try to lie about the labels. The latter paper is closer to the “self-fulfilling prophecy” situation described above.)

(Added 22.8.11: This article on “decision fatigue” suggests another reason why it may be better to ask people to try to predict the outcome than to influence it (assuming you believe it puts less strain on someone to make a prediction than to make a decision). It does sometimes stress me out a bit to make an accept/reject recommendation for a paper.)

Thursday, March 03, 2011

game theory/tuition fees

(I found some of the following links via a Facebook post.) Here is a link to an attempt to write down a game that expresses the setting of tuition fees. In the comments, there is a link to a speech by David Willetts, "The ideas that are changing politics" (dated 20.2.08, so a few years ago now), in which he is enthusiastic about game theory (follow the link to the PDF of the speech at the bottom of the page). This new article, “What are David Willetts' options for limiting spending on student loans?”, is worth reading by anyone who would like to keep track of this issue. The previous article gives the text of a speech on universities made by Willetts; the speech mentions game theory, and so too does one of the annotations that have been inserted. Here’s a quote from the speech:
Of course, academics approach these issues with great sophistication, and I have been warned that we face a dilemma from game theory in which the incentives for individual institutions are different from the interests of the sector as a whole. But it's not the dilemma in its classic form, because this is not a one-off. You need to think of subsequent years – not just in terms of funding levels but also the challenges you will face from new competitors if you come in at such a high fee level. And you also need to think of the collective interests of students.

Tuesday, February 22, 2011

IIT student seeks internship at your esteemed institute

A question: has anyone taken on one of these prospective interns? (And if so, how was it?) Most of them would clearly be hopeless, but there are some that look like they might be OK. The trouble is, there's some kind of economic principle at work here, which says that in a market that's flooded with bad eggs, the good eggs cannot be sold. In this case, what happens is that we end up deleting all these emails without reading them.

One such email that I bothered to read yesterday was typical: the sender claims to have read the recipient's work in detail, then goes on to profess an interest in a range of topics, none of which relates to anything I know about. I suppose that email was sent off indiscriminately to a large number of academics, and you might be fooled if, by chance, you have the same research interests as the ones mentioned.

I'm vaguely disturbed by the way these emails produce a kind of stereotype of these students; eventually you cease to regard them as individuals. I'm curious as to why it is Indian Institutes of Technology that produce them, and no other countries seem to do so. (I could not find much from a brief perusal of the web. Here is a related earlier discussion.)

Saturday, February 12, 2011

Academia and the deterioration of jobs

A recent article in the New York Review of Books, The Grim Threat to British Universities makes interesting reading since it discusses a UK problem from a US perspective. It highlights the key difference between the British and American systems, namely the vulnerability of the British one to central control and manipulation, and seeks to draw lessons that could be learned within the American one. It blames certain US business schools for generating the managerialist ideas that have subsequently beset British universities. It goes on, via a critique of research assessment, to study the growth (in the USA) in use of short-term, untenured faculty to provide teaching.

Some thoughts on the article:

It makes a mistake that is common to most polemics against the decline in academic working conditions, namely that the author has not tried to place himself in the position of his adversary, and consider how it looks to the opposition. Consider the following quote:
The growth of the contingent academic workforce brings the labor economics of the call center and the Wal-Mart store to higher education. With these contingent academics, few of whom have firm contracts, managers now have at their disposal a flexible, low-cost workforce that can be hired and fired at will, that can be made to work longer or shorter hours as the market dictates, and that is in a poor position to demand higher pay.

The problem with this observation is that the situation it describes looks pretty good to anyone whose job is not university teaching. Taken out of context, it could pass for high praise for the trend that it criticizes. I can't see call-centre workers, or most other people, losing much sleep about the problem being highlighted. Furthermore, many jobs and professions have suffered from debasement over the past few decades. I have heard from a flight attendant about how the growth of low-cost airlines has led to poorer working conditions, poorer safety training, and more disagreeable passengers. And, a general background story of the past few decades, both in the USA and Europe, is the decline of the steady, well-paid, blue-collar job.

In the presence of a working public whose on-the-job performance is measured and assessed in various onerous ways, it is pretty hard to rail against research assessment. That does not mean we should not criticize excessive performance monitoring and managerialism, but let's not do it in the name of “scholarship” and expect to be taken seriously. The following cri de coeur quoted from the article is not helpful:
The bureaucratization of scholarship in the humanities is simply spirit-crushing. I may prepare an article on extremism, my research area, for publication in a learned journal, and my RAE line manager focuses immediately on the influence of the journal, the number of citations of my text, the amount of pages written, or the journal’s publisher. Interference by these academic managers is pervasive and creeping. Whether my article is any good, or advances scholarship in the field, are quickly becoming secondary issues. All this may add to academic “productivity,” but is it worth selling our collective soul for?


If a flight attendant used the word “flightmanship” the way we use “scholarship”, they would be laughed out of the room. And by the way, I don't know what “good” is being used to mean in the phrase “whether my article is any good”. None of which is to say that we should give up trying to defend our working conditions, only that we should do so using the levels of intellectual rigour that we urge on our own students.

Added 1.3.11: This article is very relevant: it gives a bigger picture to the problem, explaining the general threat to “middle class” jobs in Western countries.

Added 15.3.11: just spotted this review of The Global Auction: The Broken Promises of Education, Jobs, and Incomes a book that studies this problem. Rahul Savani pointed me to this new-ish article by Paul Krugman that also considers it. Both of these seem to be about people going to university, expecting it to be their ticket to a middle-class lifestyle, and getting disappointed by the outcome. This new post at the Fortnow/Gasarch blog has relevant links, including the Krugman article, but focuses on the impact of CS/AI on future jobs.

Sunday, January 30, 2011

Obscurantist jargon

Nick Cohen takes academics to task for writing sentences like the following
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony is bound up with the contingent sites and strategies of the rearticulation of power.
The above sentence won the bad writing contest a few years ago. Cohen takes it as evidence of academia’s failure to engage with the public, and deduces that we academics are the architects of our own misfortune through this failure.

My first thought on reading Cohen’s article was: Has he ever seen the stuff that I come up with? Loads of complicated mathematical formulae couched in the definition-theorem-proof style, surely even worse than the above. My only excuse for writing that stuff is that I was not actually trying to impress the wider public, and I never thought the above quoted sentence was pitched at the man in the street, either. As with legalese, other people are paid to read it and determine whether it's any good, and according to the division-of-labour principle advertised on the twenty-pound note, I should get on with my own work and trust them to know what they are doing.

Scrolling through the comments, I found that I’ve already been outed by fellow computer-scientist Ross Anderson who wrote
The same sorts of criticism can be made of much academic writing even in "respectable" disciplines such as mathematics and computer science... Believe me; the median paper has a tiny idea (if any) dressed up in fifteen pages of stuff that looks like mathematics.
Damn. Mind you, I would hope that most mathematics papers would indeed look like mathematics, even if it’s unreadable to most people. And let’s admit that it’s hard to come up with a major idea in every paper.

However, I can’t possibly leave the topic of obscurantist jargon without complaining about cricket commentary. They’re endlessly dropping phrases like “403 for 3 not out” without explaining what those numbers refer to, and whether it represents good news or bad news for the team being discussed. After I left school and people stopped making me play cricket, I assumed that this sort of thing was now someone else’s problem, or alternatively that sooner or later, someone on the radio would have a spare minute or two to explain their jargon to the uninitiated. But it hasn't happened yet.

Thursday, January 20, 2011

Professor Fluffy

Professor Fluffy is the official name of the mascot shown below.


We are told that universities may only charge the upper limit of £9,000 per year in fees if they commit to “widening access”, but like many aspects of the new fees regime, the details are unclear. It seems natural to consider whether scholarships could be used for this purpose, or alternatively just as a way to attract good applicants, and if so, how to design the scholarships.

Yesterday Liverpool University held a kind of open-house session to discuss organisational strategy with staff, with reference to the funding regime. I went over to find out what the thinking is on scholarships. My own observation on the topic is that the university has, for PhD study, just one scholarship that is not restricted to any particular kind of applicant; despite the fact that this particular scholarship can hardly do much to increase student numbers, it has the effect of attracting applications from ambitious students, who may then be available to pick up alternative PhD studentships.

My thought was that we should consider having one or two undergraduate scholarships based entirely on academic results, making no reference to a student’s background. What I learned, talking to our widening-access expert Tricia Jenkins, is that the idea may not apply so well at the undergraduate level. The difference is that at postgraduate level the student is deemed to be responsible for his academic track-record, but someone with high-school qualifications is not. This refers to the well-known fact that a student from a poorly-performing school will, on average, outperform (at university) a student from a very good school who has the same A level grades. Indeed, apparently 3 B's at A level from a weak school are better than 3 A's from a strong school, in this sense. Consequently, if a scholarship is based on academic excellence, it should be biased this way.

The downside is that such a scholarship could not possibly be used to attract students from good schools, no matter how smart or committed those students were. And these scholarships would require decision-making about how exactly to design the bias in favour of certain types of applicant (it is not just about schools — should you also give a student credit for having been in foster care for some number of years?)

Anyway, I couldn't get away without learning about Professor Fluffy, who as the picture suggests is a sort of mascot, designed to get primary-school children interested in going to university, based on the observation that by the time they are in their teens, you've left it too late. Fluffy was born in Liverpool in 2004. Apparently he (or she) makes more money for the university than any other professor, due to licensing fees — it would appear that Fluffy is quite widespread. By now, Fluffy has a Chinese sister, Professor Long Long. And the plan is that Fluffy (along with other colleagues, TBA) will shortly feature in mobile phone apps designed for young children. I don't know the details; presumably not “tap Fluffy to hear her complain about the rejection of her last research grant application.”

Saturday, January 15, 2011

Wikipedia hard to edit

Jimmy Wales says Wikipedia too complicated for many. From the article:
He said a lot of people were "afraid" to contribute to the site by the sometimes complicated code - known as Wiki mark-up - needed to format entries.

"If you click edit and you see some Wiki syntax and some bizarre table structure - a lot of people are literally afraid.

and a good thing too! The last thing Wikipedia needs is to get hacked about by people who are too clueless to figure out a bit of syntax.

(And congrats to Wikipedia on reaching age 10.)

Wednesday, January 12, 2011

EU research proposals

“Don’t ever agree to serve as coordinator of an EU-funded multi-site research project” is something I have heard many times. The trouble is, if every day on your way to work, you pass a big lever with the words DO NOT PULL written on it, then sooner or later any self-respecting scientist... Right now we're still in the proposal-writing stage. Communication overhead is the fundamental problem. One's email habits start to resemble nicotine addiction (“He's a 40-a-day man, he'll be off to an early grave”).

One thing I'll note for people who criticise the European Commission for wasting money is that projects funded by the Research Executive Agency do not, seemingly, provide much opportunity for lining the pockets of the organizations involved. Then there are various audits and monitoring which I plan to worry about nearer the time, i.e. if we get the grant. And we need to have industrial partners, and schemes that add value to the PhD studentships that would be supported. I'm starting to believe that it would be a pretty good opportunity for the students.

The upside, implicit in some of the above discussion, is that it's a pretty good exercise in networking. And maybe this sort of thing is easier the second time you do it, than the first. And if it gets funded, the university gets some credit for increased research income. And it might actually result in some good research...

The submission deadline is the 26th. Whatever's wrong with the proposal at that point, fortunately I won't be able to do anything about it afterwards, and will have to move on to other things.

Monday, December 13, 2010

Another critique of tuition fees

The problem with the Internet is that for nearly all topics, something is available out there that does a better job than one's own efforts could achieve. This new post at the excellent Exquisite Life blog does a great job of criticizing the dire state of higher education policy in the UK, and should be mandatory reading for anyone who wants to engage with the associated debate. The main focus of the article is on the failure of the tuition fees to fix the fiscal problems that supposedly motivated them, and it also explains the flaws in the arguments that the new fees regime is more “progressive” than the current one.

The Campaign for the Public University (which I also found out about via the Exquisite Life blog) has a collection of articles and resources on the topic.

Sunday, December 12, 2010

Game-theoretic board games

'Tis the season (almost) to sit around the fire playing board games and pretending to enjoy it. I recommend two board games, namely Poleconomy and Apocalypse, that I played a few times when I was a student; these came up in a conversation recently due to having some interesting game-theoretic content. I gave a talk at Microsoft Research (Cambridge) on Thursday, and Yoram Bachrach told me about the ripoff game, which has been played by human volunteers for cash prizes. Each individual game takes about one minute and works as follows. Each player is allocated a number, a fraction in the range [0,1], which is his “weight”. A subset of the players can form a winning team if their weights add up to at least 1. However, the winning team has not won the round until they have agreed on how to share the prize (worth 1 pound). For this purpose, each player gets to control another number in the range [0,1], which is the fraction of the prize that he requests from the winning team. And, the team does not win (and share the prize) until those fractions add up to at most 1. Apparently, the players sit in the same room but interact via computers, so the negotiation is somewhat stylized. Anyway, it turns out that the Shapley value is quite a good predictor of the winnings associated with weights (although there is variation from round to round, and some players are better at the game than others). A computational agent was implemented, which computed its Shapley value and then added 10%, and it performed well in competition.
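Out of curiosity: for a handful of players, the Shapley values of such a weighted game can be computed exactly by enumerating permutations and counting how often each player is pivotal. Below is a minimal Python sketch of my own (not Bachrach's implementation; the `bid_like_agent` function merely mimics the “Shapley value plus 10%” heuristic described above).

```python
from itertools import permutations
from math import factorial

def shapley_values(weights, quota=1.0):
    """Exact Shapley values for a weighted voting game: a coalition
    wins iff its members' weights sum to at least the quota."""
    n = len(weights)
    pivot_counts = [0] * n
    for order in permutations(range(n)):
        total = 0.0
        for player in order:
            total += weights[player]
            if total >= quota:            # this player tips the coalition over
                pivot_counts[player] += 1
                break
    return [c / factorial(n) for c in pivot_counts]

def bid_like_agent(weights, me, quota=1.0):
    """Request your Shapley value plus 10%, capped at the whole prize."""
    return min(1.0, 1.1 * shapley_values(weights, quota)[me])
```

With three players of weight 0.5, the second player in any ordering is the pivotal one, so each player's Shapley value comes out as 1/3, and the agent would request a 0.367 share of the prize.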

This reminded me of an aspect of Poleconomy, a board game that simulates the interactions of politicians who happen to occupy corporate directorships “on the side”. (The game was developed in the early '80s.) From time to time, an “election” takes place, in which the players cast dice to determine the number of “votes” they obtain, and a Government may be formed by any subset of players who happen to have received a majority of the votes between them. There is some advantage to being in Government, so the immediate outcome of an election is a flurry of mental arithmetic and bargaining amongst the players to identify and agree upon a winning subset.

In Apocalypse, a player's move consists of a sequence of attacks. In each attack, the player chooses a number of units with which to attack another player, a whole number in the range 1...6, which is identified by placing a standard die with the chosen number uppermost underneath a cup; the player being attacked tries to guess the number. If the attacked player guesses correctly, that is the number of units that are lost by the attacking player; otherwise, the attacked player loses a single unit (and the attacking player gains a nuclear weapon, or part thereof, so there is an incentive to make lots of attacks.) Thus, each attack is a kind of generalized matching pennies, where the probability of choosing a smaller number is clearly larger than the probability of choosing a larger number, but all probabilities are positive.
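As an aside, if one models a single attack as a zero-sum game in which the attacker loses k units when the defender guesses his number k correctly, and the defender loses 1 unit otherwise (a simplification that ignores the nuclear-weapon bonus), the equilibrium can be derived by an indifference argument: each side should play number k with probability proportional to 1/(k+1). A quick sketch under that assumption:

```python
def equilibrium_distribution(max_units=6):
    """Mixed strategy making the opponent indifferent: choosing k costs the
    attacker k units if matched, versus gaining 1 if unmatched, so the
    defender's expected gain from guessing k is p_k*(k+1) - 1; indifference
    across guesses forces p_k proportional to 1/(k+1)."""
    weights = [1.0 / (k + 1) for k in range(1, max_units + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

Attacking with one unit then gets probability about 0.31 and attacking with six about 0.09: smaller numbers are more likely, but every number is played with positive probability, as claimed above.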

Are there any other board games out there with interesting game-theoretic aspects?

Wednesday, December 01, 2010

CS as STEM

In the Guardian, Stefan Collini imagines an alternative world where the British government is proposing to withdraw all financial support for the teaching of sciences, as opposed to humanities. Meanwhile, a recent flurry of email on the CPHC mailing list1 addresses the status of Computer Science as a STEM field. (That's “Science, Technology, Engineering and Mathematics”.) The background is that these STEM subjects are the ones that in fact benefit from favouritism2 from Government due to their economic importance.

In more detail, STEM subjects will continue to receive a fee premium from Government. CS is STEM, right? Well, maybe not, if STEM is a shorthand for “HEFCE band A or B”; CS narrowly misses out on band B, as a result of reductions in the cost of computing facilities. The Browne review hints that “priority programmes” are primarily bands A and B, with a bit of representation from band C.

The email was about lobbying for recognition as a STEM subject, or alternatively, as “strategically important and vulnerable (SIV)”. Quoting the UUK report Changes in student choices and graduate employment:
The strategically important and vulnerable (SIV) subjects are chemistry, engineering, mathematics, physics and some area studies, quantitative social science and modern foreign languages (HEFCE, 2010a). They are considered so in relation to the anticipated demands of the economy rather than the exercise of student choice.


While CS is not sufficiently “vulnerable” to qualify as SIV, one email argued that computer programming in particular perhaps should be, giving some anecdotal evidence of a decline of computer programming in UK schools, blaming it on the teaching of ICT, and contrasting it with a relatively high interest in programming in India. ICT, as currently taught here, tends to evade the interesting technical challenges and is an example of the bad driving out the good.

One lesson we should learn: if we really care about our STEM status, it's not just a matter of lobbying to retain that recognition. It's a matter of ensuring that the intellectual content justifies the claims. We need to keep both programming and rigorous mathematics right at the heart of CS.

1Council of Professors and Heads of Computing. I sometimes consider getting myself demoted so that I don't have to be on the mailing list.

2Relative, not absolute!

Thursday, November 25, 2010

Student protests

It's pretty hard to come up with anything very original to say about the student protests yesterday and earlier. The fact that there is no major political party here that would reverse this increase in university fees leaves me with the sense that the protesters are praying to a god that does not exist. As an atheist, I fully acknowledge the benefits some people derive from praying to gods that do not exist, but I don't think the protests will do any good politically.

This comment article in the Guardian makes most of the points that are worth making. This article in the Daily Mash is, I must admit, a trenchant and effective critique of the protest movement. Any case in favour of "higher education as public good" should take it into account. This blog post calls attention to the unresolved issues with the current proposals. We learn that Nick Clegg is "haunted" by his election promise. (That's good of him. I wish Tony Blair and David Blunkett were haunted by their 2001 manifesto commitment not to introduce tuition fees. Or should we all accept that making and breaking promises is the price of winning an election?)

I actually struggle to make a principled case for HE as public good. It's relatively easy to make a practical case: e.g. that there's a slippery slope -- next we'll expect people to pay for post-16 school education, and (following Dearing's logic), since the main beneficiaries of the NHS are the patients, then patients should pay back the costs of their treatment. But the slippery-slope argument evades the question of what's wrong with high tuition fees themselves. The other practical (not principled) case is that the fees make us out-of-step with most of the industrialized world (as I hinted previously). To elaborate on that, one could accept that high tuition fees are correct in principle, but also accept that nations compete amongst each other for academic talent and skills. Hence we risk a new brain drain, not just of staff but of students. Although, that may help with the government's efforts to reduce net immigration.

Sunday, November 07, 2010

Universities, conservatism and Hotelling's Law

I'm not sure what to make of Simon Jenkins' recent article in the Guardian, in which he accuses universities of being lazy, wasteful, addicted to public money, and resistant to any form of change or innovation. He is probably being more harsh than he really feels, in order to provoke a reaction in the comments contributed by readers, and any reader who takes his article at face value will find that the comments that follow are a useful antidote.

Let's take the article seriously for a moment, and consider whether there is any truth in the central charge that conservatism is the driving force behind academic life. Unreasonable resistance to change is so widespread that it would be amazing if universities were free of it. You only have to look at some of the reaction to Obama's health care reforms to verify this: the hysteria, fear and anger of the rhetoric against these highly incremental reforms is evidence of a psychological inability to cope with any form of change and innovation, one that affects tens of millions of Americans.

But there's another reason why universities might not make the changes that Jenkins urges, and that's Hotelling's Law. That is the observation that in a lot of competitive marketplaces, the rival providers (of goods or services) tend to position themselves very close to each other, rather than go for product differentiation. If one provider does decide to aim at a particular section of the market (say, lower-cost goods) then in order to do so, he just offers goods at a very slightly lower cost than his rivals: he does not consider his cost in isolation, but relative to the competition. (As an aside, I'm not sure that hotels obey Hotelling's law, but other things do, e.g. political parties.)

In an increasingly global marketplace for staff and students, universities are understandably reluctant to go to extremes in order to capture some segment of that market. Despite Jenkins' call for 2-year degrees and year-round teaching, even if there is indeed a big market for those things, universities should be reluctant to go there. British ones are already taking a fairly extreme position on short degree timescales (most countries have 4 or more years for first degrees). And they are, reluctantly, taking an extreme position on low levels of state support for their activities. They are already well into the danger zone and should go no further.

Sunday, October 31, 2010

complexity and elections

I just read an interesting review article (Using Complexity to Protect Elections, by Piotr Faliszewski, Edith Hemaspaandra, Lane A. Hemaspaandra) in the CACM on the complexity of election schemes, and various "attacks" on them, such as searching for voter manipulation opportunities. The article is aimed at a general computer-science audience, and written in quite a colourful style. It touches on a problem with this line of research, one that has bothered me since I started following its progress:
... perhaps the greatest single worry related to using NP-hardness to protect elections—a worry that applies to NP-hardness results not just about manipulation, but also about control and bribery. That worry is that NP-hardness is a worst-case theory, and it is in concept possible that NP-hard sets may be easily solved on many of their input instances even if P and NP differ.
They go on to mention the theory of average-case complexity, but it seems not to have been applied in this context. Is it ever going to be worth using a highly-artificial voting system, just to ensure that a potential manipulator faces a problem that is hard in the worst case, but may generally be easy in practice? Perhaps one should look at other sources of difficulty for the manipulator, such as uncertainty about the other voters' preferences.
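To illustrate why the worst case may be beside the point: for a scoring rule such as Borda, a single manipulator can often succeed with the obvious greedy strategy of ranking the favoured candidate first and then giving the fewest points to the currently strongest rivals. A toy sketch (candidate names and scores invented for illustration):

```python
def greedy_borda_manipulation(base_scores, favourite):
    """base_scores: Borda scores from the other voters' (truthful) ballots.
    Returns the manipulator's ballot (favourite first, then rivals ordered so
    the biggest threats get the fewest points), final scores, and the winner."""
    rivals = sorted((c for c in base_scores if c != favourite),
                    key=lambda c: base_scores[c])     # weakest rival first
    ballot = [favourite] + rivals   # position i is worth (m - 1 - i) points
    m = len(ballot)
    final = dict(base_scores)
    for pos, cand in enumerate(ballot):
        final[cand] += m - 1 - pos
    winner = max(final, key=lambda c: (final[c], c))  # ties broken by name
    return ballot, final, winner
```

Here, with base scores A=4, B=5, C=3, the honest winner would be B, but the greedy ballot [A, C, B] pushes A to 6 points and victory. The hardness results mostly concern harder variants (coalitions of manipulators, weighted voters), which is exactly where the worry about worst-case analysis bites.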

Added 1.11.10: Another nice survey that Ariel Procaccia pointed me to ("AI’s War on Manipulation: Are We Winning?" by Piotr Faliszewski and Ariel D. Procaccia) considers in more detail the problem of NP-hardness being just a worst-case obstacle for the manipulator, and reviews some work giving negative results, i.e. fundamental obstacles to the project of designing voting systems that are typically computationally resistant to manipulation.

Thursday, October 21, 2010

SAGT 2010

Here are some notes from the 2010 Symposium on Algorithmic Game Theory (SAGT), mostly typed up at Athens airport. SAGT had about 60 participants and next year will take place in Amalfi, near Naples. If you want to host it in 2012 then I believe the steering committee would like to hear from you. In the following I mention some of the papers that caught my eye; it is by no means a complete overview, being biased towards my own interests, plus I suspect that "conceptual" papers tend to be more eye-catching than "technical" ones.

A special tutorial session on "Games played in physics", along with one of the regular papers, "Mixing time and stationary expected social welfare of logit dynamics", highlights a line of research that looks like it could go a long way. Given a game, there's an associated Markov chain whose states are pure-strategy profiles, and transitions consist of the selection of a random player, who then updates his strategy in a way that makes better responses more likely than worse ones (although it is possible for him to switch to a worse response). Specifically, the probability assigned to a response with payoff x is proportional to exp(βx), where the parameter β is the "noise rate": β=0 is entirely noisy and players move at random; as β increases the noise goes down and players prefer better responses. The solution concept for the game is the stationary distribution of the chain. The topic of interest is the mixing rate (a function of β). A connection with physics is that the Glauber dynamics on the Ising model (a topic of enduring interest in the math/CS community) corresponds to a party affiliation game where the players lie on a grid, and in the ferromagnetic version you want to agree with your neighbours, and in the antiferromagnetic version you want to disagree with them.
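In code, the update rule is very simple. Here is a minimal sketch (my own illustration, not taken from the paper) of one step of logit dynamics for a ferromagnetic party-affiliation game, where a player's payoff from an affiliation is the number of neighbours who share it:

```python
import math
import random

def logit_choice(payoffs, beta, rng=random):
    """Sample a strategy index with probability proportional to exp(beta * payoff).
    beta = 0 gives a uniform choice; large beta approaches best response."""
    weights = [math.exp(beta * u) for u in payoffs]
    r = rng.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def logit_step(state, neighbours, beta, rng=random):
    """One transition of the chain: a random player resamples his affiliation.
    His payoff for affiliation s is the number of neighbours also playing s."""
    i = rng.randrange(len(state))
    payoffs = [sum(1 for j in neighbours[i] if state[j] == s) for s in (0, 1)]
    state[i] = logit_choice(payoffs, beta, rng)
    return state
```

With β=0 the chosen player picks an affiliation uniformly at random; for large β he almost always copies the majority of his neighbours, so the chain concentrates on near-consensus states, and the question of interest is how quickly it mixes between them.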

2 papers from the "Learning and dynamics" session: one is "On the rate of convergence of Fictitious Play", which I have mentioned earlier. An important point: the lower bounds on convergence rate (for 2-player zero-sum games) apply to convergence to the "correct" strategies, as opposed to convergence to the value of the game. The paper "On learning algorithms for Nash equilibria" forms part of a proposed search for general lower-bound results stating that a broad class of algorithms should fail to find Nash equilibria, even in the 2-player case. They get some negative results for iterative weight-update approaches. They mention that the question of the convergence rate of FP for zero-sum 2-player games to the value of the game is something of a long-standing open question, and seemingly the only convergence rate known is the very slow one that results from the 1951 proof of Julia Robinson.
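As a concrete reminder of what is being measured, here is a hedged sketch (my own, not from either paper) of simultaneous fictitious play for a two-player zero-sum game given by a row-player payoff matrix, with ties broken towards the lowest-numbered action. On matching pennies the empirical frequencies do approach the (1/2, 1/2) equilibrium, though via the characteristic slowly-lengthening cycles:

```python
def fictitious_play(A, rounds):
    """Simultaneous fictitious play on a zero-sum game with row payoffs A.
    Each round both players best-respond to the opponent's empirical
    distribution of past play; returns the empirical mixed strategies."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_play, col_play = 0, 0          # both start with their first action
    for _ in range(rounds):
        row_counts[row_play] += 1
        col_counts[col_play] += 1
        # best responses (max returns the first maximiser, i.e. lowest index)
        row_play = max(range(m),
                       key=lambda i: sum(A[i][j] * col_counts[j] for j in range(n)))
        col_play = max(range(n),
                       key=lambda j: -sum(A[i][j] * row_counts[i] for i in range(m)))
    return ([c / rounds for c in row_counts], [c / rounds for c in col_counts])
```

Running this on matching pennies, A = [[1, -1], [-1, 1]], the empirical mixtures creep towards (1/2, 1/2), illustrating the distinction drawn above: the running average of the payoffs approaches the value of the game well before the empirical strategies settle down.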

A cute result: "Responsive lotteries" by Feige and Tennenholtz considers the problem of incentivising someone to reveal their true preference-ranking of a set of items, by awarding one of them to the person, selected from a distribution that is derived from their declared ranking. You have to design the distribution carefully.

Peter Bro Miltersen's talk was about NP-hardness and square-root-sum hardness of testing equilibria for being trembling-hand stable; I like the topic since it relates to the complexity of finding equilibria that are restricted to comply with some equilibrium selection/refinement concept. The focus on the "doomsday game" example was nice, and he gave a nice overview of the 3-player hide-and-seek game of Borgs et al (The myth of the folk theorem).

2 papers on network congestion I will mention: Yonatan Aumann's talk on "Pareto efficiency and approximate Pareto efficiency in routing and load balancing games" started by noting that the concept of Pareto efficiency can be used to distinguish the inefficiency of the Braess paradox network from that of the Pigou network - in the Pigou network, when you move from equilibrium to optimum flow, some but not all agents benefit; in Braess, all benefit. They continue by studying a notion of approximate Pareto efficiency, focussing on parallel links. Then a very nice talk by Martin Macko studied a model of dynamic flows on networks, and showed Braess-paradox-like results... In the model, backlogs may develop at nodes in a flow network, like the build-up of a traffic jam at a junction whose capacity is lower than the rate of incoming traffic. Assuming everyone has up-to-date traffic information, there comes a point in time where some other route is preferred (and possibly, itself becomes overloaded as a result, again not only due to the usage it attracts, but possibly due to a build-up of backlog at some point along it...). And they get some results characterising the topology of networks that produce Braess-like paradoxes, which differ from the characterisation for the standard Braess paradox.

OK, so I'm a bit of a sucker for cake-cutting... Omer Tamuz ("Truthful fair division") gave a nice talk on a protocol where you get the players to declare their valuations of items to be shared, and then you share them out in an envy-free manner. Of course, you want the sharing rule to incentivise the players to be truthful (at a high level, the set-up is similar to the "responsive lotteries" topic noted above). So, this can be done, but if you want a "super-fair" truthful mechanism, it cannot be deterministic.

Thursday, October 14, 2010

Dearing Report

I took a look, by way of reminder, at a copy of the Dearing Report, which has a lot in common with the Browne Review discussed previously. I reckon that you need to know about Dearing in order to write a sensible analysis of Browne, and understand the mess we're in and how we got there. They are both reports with recommendations on higher education funding that got commissioned by unpopular Governments shortly before an election that they duly lost. The Dearing Report got a lot of coverage and discussion in the Times Higher at the time, especially in the run-up to its publication in 1997. Reading it now, it seems terribly overtaken by events -- despite being written only about 13 years ago, it seems to address a situation a lifetime ago, when the basic assumptions about HE finance were totally different. It can be blamed for playing a part in the process of change and decay that it purported to manage. The underlying narrative of UK higher education funding over the past few decades, has been a path of least resistance. It has not been guided by principles; what has happened instead is that certain principles have been retro-fitted to the lazy decisions that have been made.

Dearing's report purported to set the HE funding scene for the next 20 years, but was in practice mainly in the business of short-term fixes rather than an attempt to settle the matter decisively. Its context was a decline in the unit of resource for teaching of about 40% over the previous 20-odd years, this decline having been billed an "efficiency gain". Universities (some of them) were making noises about charging top-up fees. The report concluded that the sector could manage a further 2% cut over the next 2 years, but the planned 6.5% cut would lead to a loss of quality. The report had some good stuff: a recognition that international competitiveness was important, and a recommendation that public spending on HE should increase in real terms by 1% a year. It broached the subject of income-contingent repayment of tuition costs, decided that a graduate tax is unworkable, and proposed a scheme for repaying 25% of a student's tuition costs. Obviously it's a shame we did not stick to that figure of 25%; it looks benign by present standards.

Some other stuff that caught my eye: It recommended a target of 45% participation in HE, which for some reason was rounded up to 50% by the subsequent Govt, but wasn't achieved. With regard to the short-term funding gap resulting from student loans coming from Govt, it recommended that national accounting rules should be changed so that the loans could be treated as a Govt asset.

Here's a point made by Dearing whose dread significance was probably not so very apparent at the time. The Dearing commission considered the question of the benefits to society arising from higher education, and concluded that the main beneficiaries are the graduates themselves, due to improved job opportunities. Not a very remarkable or surprising conclusion to reach, but it has allowed that aspect to crowd out all other considerations of the wider social benefit. Participation in HE has been reduced to an act of economic rationality, an act of selfishness.

Tuesday, October 12, 2010

Browne Review

The Browne Review (wikipedia page) came out today and is being widely discussed. Here is a link to one of many news articles. (Added later: this blog post reports on a devastating critique of the Browne report. Added still later: another great critique in the London Review of Books. Added 20.8.11: another critique (by the same writer, Stefan Collini) with more overview of the historic context.)

On the plus side, it does an efficient job of demolishing the ludicrous "graduate tax" idea. Also, it acknowledges that
Other countries are increasing investment in their HEIs and educating more people to higher standards
And the graphic design is striking in a rather retro way. It would have been improved by being embellished with the dirty fingerprints of assorted Labour party politicians, since the previous Government commissioned the report, but although the fingerprints are missing, the following quote serves that purpose:
Students do not pay charges, only graduates do; and then only if they are successful. The system of payments is highly progressive. No one earning under £21,000 will pay anything.

We estimate that only the top 40% of earners on average will pay back all the charges paid on their behalf by the Government upfront; and the 20% of lowest earners will pay less than today. For all students, studying for a degree will be a risk free activity. The return to graduates for studying will be on average around 400%.
Of course, the above is too good to be true. It reeks of Gordon Brownism: no-one pays anything, you get much more back than you pay in, and Government can print all the money needed to fill the short-term funding gap. There's also some stuff about student charters that looks suitably new-Labourish.

However, the present government seem happy to accept this gift. The headline figure of 7000 pounds per year to study at a UK university is dismaying many people, although not nearly as much as the lack of any limit on the fees that may be charged.

Here's a David Blunkett quote that I found here
This is a complete betrayal by the Liberal Democrats of everything that they have ever said on higher education and of the platform they stood on at the general election. The Tories have already performed a volte-face on their previous policy. This leaves only the Labour party with any credibility on student funding and the future of our great universities ...

The fact that the Labour party introduced university fees in the first place, and commissioned this report, seems to have escaped his attention! And here is the single take-home message of this blog post: Labour is to blame (or, if you like fees, they get the credit). Don't ever forget that. And don't ever forgive people like Blunkett for trying to pass the blame on to their opponents.

What happens next? i.e., more specifically, how much will universities charge? Probably there will be a high “sticker price” embellished with a system of discounts and bursaries for students with good exam results. It is tempting to assume that Oxford and Cambridge will gleefully impose very high fees, but they will be reluctant to be seen to be shutting out poorer candidates. Below them, prestigious universities will want to be seen to have a relatively high fee, since university degrees are a Veblen good, but then they will have concerns about being able to attract enough students at the basic sticker price. If high fees do not deter too many UK students, then overseas students may be a casualty, at least if they no longer pay substantially higher fees than home students.

Friday, October 01, 2010

trip to Glasgow

Congratulations to Eric McDermid, who successfully defended his PhD at his viva yesterday at the School of Computing Science, University of Glasgow (myself as external examiner, Patrick Prosser as internal examiner). Title of thesis: A Structural Approach to Matching Problems with Preferences; I thought it was very nice work. I was also fortunate to be able to attend Rob Irving's retirement presentation, which coincidentally took place on the same day.

Thursday, September 23, 2010

GAMES 2010

The games-for-verification crowd seems to be a different community from those of us who do algorithmic game theory (coming from economics). The topic is not covered in the textbook (fair enough I suppose; the book is big enough already). I dropped in on them to give a tutorial on Nash equilibria and computational complexity, at the meeting GAMES 2010, the annual workshop of the ESF Networking Programme on Games for Design and Verification. I got the impression that France and Poland are well-represented in that community, although that may just be an artifact of the composition of the network.

They have the main open problem of: what is the complexity of solving parity games? (Google: did you mean party games?) Parity games are known to be in both NP and co-NP, and also known to be mildly subexponential (Jurdziński, Paterson and Zwick), so are believed to be in P. Currently there are no proposed algorithms that are conjectured to do the job; apparently there used to be one, but it was recently shown to be exponential in the worst case, for an artificial class of examples. The parity games problem is derived from the problem of checking whether a formula in modal mu-calculus is satisfied by a given Büchi automaton, and I gather that is a poly-time reduction. There is also an exponential-time reduction if instead the formula is in monadic second-order logic. However, the work on parity games seems to take on a life of its own, without the need to refer to model-checking. So for example, I am not sure that the question of finding a mixed equilibrium for a parity game relates to solving a model-checking problem, but it's an interesting topic.

Thursday, September 16, 2010

cap on non-EU migrants

I got an email a few days ago that was circulated to all staff, notifying us that all of a sudden, the university will only be able to employ a very small number of people from outside the EU. It applies to all universities, see this article. It's a big problem for universities that deserves wider attention, but as yet I haven't seen it discussed in the Times Higher. The article linked-to above quotes the chief executive of Universities UK who spells out the problem clearly:
The proposed cap will be difficult for universities as a significant proportion of the academic workforce is, and always has been, international. In the UK, over 10 per cent of all our academic staff are non-EU nationals and many are working in key subject areas such as science, technology and engineering.

...

The success of the UK’s higher education sector depends on our ability to attract the most highly talented people to work and study here. Anything that diminishes our ability to do this will undermine the quality of what we do and our ability to compete internationally.

This lamentably crude approach to reducing immigration to the UK does not make one optimistic that they will come up with a more sensible way to regulate overseas students. If they end up choking off the flow of overseas students, that would be a complete catastrophe (a short-term one; the impact on staff hiring is a long-term one).

Monday, August 30, 2010

Peer to peer lending

I've been reading about Funding Circle in
this article in the Guardian (Funding Circle's home page also contains links to other news articles that have featured it). It's a new internet-based market that allows individuals to lend money to small businesses. So, it is similar to Zopa, which supports lending of money amongst individuals.

I've not tried it myself but am tempted... you just need a few hundred pounds to make available for loans, and your money gets divided amongst 20 different businesses so as to spread the risk; the claim is that you then make on average about 6% on your savings, as opposed to the 2-3% you currently make in a savings account. Interesting features include: lenders can bid their interest rate, resulting in a trade-off whereby if you set a higher rate, it may take longer for your funds to be accepted. You can also set up or join a "circle" (see the help centre) to specialise which businesses you lend to. If you want your money back, you have to sell your outstanding loans to another lender (I assume that effectively results in a penalty for early withdrawal). The system supports "autobid", which manages the division of your funds amongst distinct businesses and recycles repayments into new loans so that your funds continue to earn interest. Finally, on the downside, my impression from reading the help page is that the tax treatment of income is not very lenient.
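To see why the tax point matters, here is a back-of-envelope sketch in Python, with entirely made-up numbers (this is not Funding Circle's actual fee or tax treatment), of how defaults, fees and tax can eat into a headline lending rate:

```python
# Back-of-envelope only: all numbers are assumptions for illustration,
# not Funding Circle's actual rates, fees or tax rules.
gross_rate = 0.08      # average interest rate bid by the lender (assumed)
default_loss = 0.02    # expected annual loss of capital to bad debt (assumed)
fee = 0.01             # platform's annual servicing fee (assumed)

net_before_tax = gross_rate - default_loss - fee          # about 5%

# If bad-debt losses cannot be offset against interest for tax purposes,
# income tax applies to the interest received, not to the net return:
tax_rate = 0.20
tax = tax_rate * (gross_rate - fee)
net_after_tax = net_before_tax - tax                      # about 3.6%

print(f"net before tax: {net_before_tax:.1%}  after tax: {net_after_tax:.1%}")
```

The gap between 5% and 3.6% is the cost of the ungenerous tax treatment: you are taxed on the interest even though part of it merely compensates for lost capital.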

Monday, August 16, 2010

Two shocks to the (British university) system

Two recent news articles caught my attention: this one in yesterday's Observer, and this one in today's Guardian. In different ways, they raise profound questions about how British universities will evolve over the next ten years.

The first of the above articles draws attention to a “recruitment drive” by EU universities, aimed at would-be university students in this country. It may come as news to many such students (or their parents) that they have the option to study for degrees in various other EU countries, where the fees are lower than in the UK, and get tuition in English. The article focuses on Maastricht University, by way of example. It's an attractive option: nice town, English language tuition in some subjects, strong research. The recruitment drive is claimed to be driven by our own supply-and-demand problem: too many UK students chasing too few places. The interesting question is whether foreign universities can attract students who actually could get places over here. That potentially reverses higher education's role as a net exporter for the UK. Can this be used as an argument for better state support for universities?

The second article (similar one here), entitled London is most cost-effective UK city for students, publishes a league table, the “2010 NatWest Student Living Index”, which evaluates the cost of living for students for 25 cities in the UK, and favours London, surprisingly — the catch is that cost-effectiveness is a function of a student's earning potential in the city, rather than just cost of living. Seemingly, students are now “meant” to be working part-time while studying (at least, in order for the ranking to be meaningful). Of course, many students get support from their parents and so some of them have the option to study full-time, but (as mentioned in both articles) 46% do not. Universities themselves, meanwhile, remain pretty much oblivious to these inequities — if a student fails an exam, you can cut him some slack on the grounds that his grandmother died recently, but you can't make allowances for him having been pulling pints in the local bar every evening. I think the best fix that is likely to be achievable would be for universities to be more flexible about the duration of study. The current rigid three-year schedule is dictated by the (declining) Government funding scheme for students, as devised when full-time study was the standard. We could move towards a more German approach of getting your degree in however long it takes you (and we should accept a higher drop-out rate).

added 17.8.10: A new Guardian article:
A landmark review into university finance is expected to recommend that student loans, now only available to those on full-time courses, are extended to part-time students to cover the fees they must currently pay upfront, the Guardian has learned. Such a move would pave the way for a major change in the way university education is viewed, with a three-year stint in a new city no longer a given.
Part-time study is a way forward, but I wonder whether you need a binary divide between full-time and part-time study, or whether we should be able to allow for study schedules that fall between the two.

Tuesday, August 10, 2010

Fictitious play

Fictitious play is a very simple iterative procedure for finding a Nash equilibrium of a game. Before stating an open problem of interest, let me give a quick summary of how it works (or, you can read the Wikipedia page for a more careful description). It's an iterative process that, if it converges, must converge to a Nash equilibrium; unfortunately it doesn't always converge. However, it always works for 2-player zero-sum games; more on that below.

Assume you have a 2-player game. To begin with, each player chooses a single pure strategy. Then, both players construct sequences of strategies according to the following rule: At each step, a player considers the sequence chosen by the other player, he supposes that the other player will randomize uniformly over that sequence, and he chooses a best response to that mixed strategy. (So, a player's sequence is treated as a multiset of strategies, one of which is selected uniformly at random.) Technically, we should explain how to choose a best response in the event of a tie (2 equally good responses) - there are various options, but I won't discuss them. Anyway, the chosen best-response is added to a player's sequence, and that's how the sequences get extended.
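The description above translates almost line-for-line into code. Here is a minimal sketch in Python, for a bimatrix game given by payoff matrices A (row player) and B (column player), breaking ties in favour of the lowest-indexed strategy (one of the "various options" just mentioned):

```python
import numpy as np

# Fictitious play: each player best-responds to the empirical mixture of
# the other player's sequence so far. Tie-breaking: np.argmax returns the
# first (lowest-indexed) maximiser.
def fictitious_play(A, B, steps, first=(0, 0)):
    A, B = np.asarray(A, float), np.asarray(B, float)
    row_counts = np.zeros(A.shape[0])
    col_counts = np.zeros(A.shape[1])
    row_counts[first[0]] += 1     # the initial pure strategies
    col_counts[first[1]] += 1
    for _ in range(steps):
        r = np.argmax(A @ (col_counts / col_counts.sum()))
        c = np.argmax((row_counts / row_counts.sum()) @ B)
        row_counts[r] += 1
        col_counts[c] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies, a zero-sum game: the empirical frequencies approach the
# unique equilibrium (1/2, 1/2) for each player, albeit slowly.
A = np.array([[1, -1], [-1, 1]])
x, y = fictitious_play(A, -A, 5000)
print(x, y)   # both close to [0.5, 0.5]
```

Note that what converges here is the empirical frequency of play, not the play itself: on matching pennies the players cycle forever through their pure strategies, in ever-longer runs.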

I have recently been pestering various people with the following question: What is the complexity of the following problem:

input: a bimatrix game. For each player, an initial pure strategy.

question: does FP converge when those initial strategies are chosen at the start?

Additional question: If it converges, give the NE that it converges to.

(and just to be clear, this is not the same thing as asking about the complexity of FP itself.)

I recently read one of the early papers on the topic, namely: Robinson, J. (1951) "An Iterative Method of Solving a Game", Annals of Mathematics 54, 296-301. It is a beautiful paper, just 6 pages; I would recommend it to anyone. It proves the result mentioned above, that for any zero-sum game, the payoffs obtained by the players do indeed converge to the value of the game. (The convergence rate that is implicitly obtained is very slow - maybe it can be improved?) Note, that is not the same thing as showing that the players play the correct strategies after any given number of iterations - a new paper in SAGT (Felix Brandt, Felix Fischer and Paul Harrenstein "On the Rate of Convergence of Fictitious Play") indicates it may take arbitrarily long to find the right strategies, depending on the payoff values.

Another nice question follows up on a paper by Vince Conitzer "Approximation guarantees for fictitious play" (Allerton 09). This paper studies the quality of approximate equilibria obtained by FP, in a scenario where the number of iterations is less than the number of pure strategies per player. But, if it's applied to an n×n game, and allowed to run for more than n iterations, for all we know it may actually do better than the best currently-known polynomial-time approximation algorithms (which obtain roughly a 1/3-approximation, if all payoffs are in [0,1]). Although, I will pessimistically guess that some troublesome games that haven't yet been identified have it obtaining just a 1/2-approximation.

Wednesday, July 28, 2010

Mathematical conversation-starters

It comes up, from time to time, in discussions we have with each other. You're chatting with that long-suffering creature, the Intelligent Layperson, and you feel the urge to explain your professional interests to him/her. And, it has been established that n×n chessboards just don't cut it, or even 100×100 chessboards.

It's time to identify some topics that really work - here's one I tried recently. Consider the following quote from the beginning of the paper Minimal Subsidies in Expense Sharing Games by Meir, Bachrach and Rosenschein, to appear in SAGT.
Three private hospitals in a large city plan to purchase an X-ray machine. The standard type of such machine costs $5 million, and can fulfill the needs of up to two hospitals. There is also a more advanced machine which is capable of serving all three hospitals, but it costs $9 million. The hospital managers understand that the right thing to do is to buy the more expensive machine, which will serve all three hospitals and cost less than two standard machines, but cannot agree on how to allocate the cost of the more expensive machine among the hospitals. There will always be a pair of hospitals that (together) need to pay at least $6 million, and would then rather split off and buy the cheaper machine for themselves.
The question you ask your audience is, what will be the outcome of the negotiation between the hospitals? Hopefully, someone will begin by saying that 2 hospitals will share a $5M machine, and with any luck, someone else will suggest that the 3rd hospital will offer to share a $5M machine with one of the first two, and pay more than 50%. At this stage, you are in good shape.
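Incidentally, the impossibility is easy to verify mechanically. This brute-force sketch (my own framing, on a $0.1M grid) confirms that every split of the $9M cost is blocked by some pair who together pay more than the $5M a standard machine would cost them; in cooperative-game terms, the core of the game is empty:

```python
from itertools import combinations

# Work in units of $0.1M: the big machine costs 90 units, the standard one 50.
total, cheap = 90, 50

stable = []
for x in range(total + 1):
    for y in range(total + 1 - x):
        z = total - x - y
        shares = {"A": x, "B": y, "C": z}
        # a pair "blocks" the split if together they pay more than the
        # standard machine would cost them
        blocked = any(shares[i] + shares[j] > cheap
                      for i, j in combinations(shares, 2))
        if not blocked:
            stable.append((x, y, z))

print(stable)   # [] : every split is blocked, i.e. the core is empty
```

(The enumeration just confirms a two-line argument: keeping every pair's total at or below $5M would force each hospital to pay at least $4M, i.e. $12M in total, which exceeds $9M.)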

Now, you might object that this has nothing to do with computational complexity, which is sort of true, however you can introduce some later on in the discussion if you feel the urge (non-constant number of hospitals or machines). What makes this a nice mathematical topic is that - assuming your audience start to consider a sequence of offers and counter-offers - it raises the issue of making a proper mathematical model of the negotiation (so, if 2 hospitals make an agreement, is it meant to be binding? Presumably not if the 3rd one can "attack" it with a more attractive offer. But if it's not binding, how can the process come to an end?) Finally, despite the fact the problem is ill-posed, there is still a cute answer that is analogous to the answer to this ill-posed problem: when asked what the outcome should be, you say "by symmetry, the hospitals will share a $9M machine". (Actually, don't use the phrase "by symmetry".)

Saturday, July 10, 2010

Research funding, pensions

Quite a flurry of genuinely important material for the Times Higher. The story that the REF will be delayed a year to resolve the dispute over "impact" is good news; regardless of whether this aspect of the REF is a good thing, any delay is a win, simply because of the enormous expense (mainly on academics' time) of national research assessment. Lots of comments follow the article, with Philip Moriarty in good form. (added 12.7.10: 2 more interesting links: This article (5.12.09) discusses the "James Ladyman index" and attracted a lot of comments; this one (1.7.10) has a comment by Ladyman)

The outcome of the USS meeting I mentioned previously is that probably the USS will switch to career-average earnings to compute pensions. In an attempt to make that fact sound interesting, let me put it this way: In the future, your pension contributions statement will not state the number of years of service you have accrued, but it will state your pension entitlement as a sum of money. Oh well, if that bores you, maybe you are wiser than I am.
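For what it's worth, a toy calculation (made-up salary path and accrual rate, not the USS's actual rules, and ignoring the revaluation of past earnings that a real career-average scheme applies) shows why the switch matters for anyone whose salary rises over their career:

```python
# Made-up numbers for illustration only: a 40-year career, salary rising
# steadily, pension accruing at 1/80th of salary per year of service.
salaries = [30 + 2 * year for year in range(40)]   # £30k rising to £108k
accrual = 1 / 80

# Final-salary: years of service x accrual x last salary.
final_salary_pension = len(salaries) * accrual * salaries[-1]
# Career-average: accrue a slice of each year's salary as you go.
career_average_pension = sum(accrual * s for s in salaries)

print(f"final salary:   £{final_salary_pension:.1f}k/yr")    # £54.0k
print(f"career average: £{career_average_pension:.1f}k/yr")  # £34.5k
```

With a rising salary the career-average figure is necessarily lower, which is the point of the change; how much lower depends on the revaluation rule applied to past earnings.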

This article reports on a speech by David Willetts on the case for science funding. So, he believes in it, that's a good thing. From the article:
Mr Willetts said he could not talk about spending commitments until the Comprehensive Spending Review is published this autumn, but warned that the UK could not afford to emulate the examples of the US, Canada and France, which had reacted to the recession by spending more on science.
Full marks, at any rate, for admitting that these other countries are raising spending on science.

Finally, sort of a weak forecast: maybe full economic costing of research grants is not going to collapse under the weight of its own stupidity, as I thought it would. Here's the argument. The Government wants to cut spending on universities but at the same time, wants to protect the strong against the weak. Now, one way to do that is to identify specific universities for preferential treatment, and I guess that's sort of what they're doing in Germany, but it's delicate, to say the least. Alternatively, you can identify characteristics of "strong" and show favoritism to universities having those characteristics. And research grants are quite handy, for that purpose. So, artificially inflate the value of all research grants, and the Matthew effect is duly enforced.

Thursday, July 01, 2010

USS changes

The Times Higher is a good magazine to read if you're feeling a bit euphoric, there's an unaccountable spring in your step, and you need to simmer down a bit. Last week's issue raised the alarm about the sustainability of the Universities Superannuation Scheme (USS). The article notes a forthcoming meeting (7th July) between representatives of the University and College Union (UCU) and USS to negotiate changes. The UCU's question-and-answer page on changes to the USS argues that no change is needed. Trouble is, we all know that very few final-salary pension schemes still exist; in the private sector most of them closed to new members during the past 10 years. In a recent blog post, Robert Peston gives an overview of cuts to the benefits from the BBC's pension scheme (which, like the USS and private-sector schemes, is a funded pension scheme). From Peston's post:
Even those on relatively high salaries, who don't expect their earnings to rise, will have to think twice about whether it makes sense to continue with contributions.
This presents scheme managers with a dilemma - they have to keep the deal sweet enough to retain members and attract new ones, a problem that becomes much more urgent when a scheme is in deficit. When that happens, the whole thing is at risk of being sustained by promises that managers cannot ensure they will be able to keep.

To be honest, I think the UCU is obstructing the necessary dose of bad news that will be necessary to ensure the USS's sustainability. One problem that's come up: the USS's 3rd-largest equity holding is BP, with a market value of £693.7M (the list I got that information from is updated only every 3 months, so BP is likely to be lower down by now). USS has a total of about £22bn in invested capital. (Not that I blame them for owning some BP. It looked like a good bet a few months ago.)

Here's a quote from that UCU question-and-answer page.
Q: Aren't all public sector pension funds going to have to make changes to save money? Why should academics have special treatment?

A: USS is not a public service pension scheme, so it has to meet the same targets as other funded occupational pension schemes in the private sector. Its big advantages over such schemes are that the HE sector as a whole is less likely to go bust than any individual group of companies, so that it can afford to take a longer view. That does not mean that the politics of envy will not give traction to attacks on USS; but that is no good reason for the preemptive cringe that is part of what characterises the stance of the employers' negotiators.

Full marks for emphasizing that the USS is not public-sector, but the usage of "politics of envy" made me cringe. Finally, here's a fragment of the USS investments page to show that in some respects they have impeccable taste.

(It's the Go game that caught my eye.)

Monday, June 28, 2010

Accepted papers for SAGT

The web site for SAGT 2010 (3rd Symposium on Algorithmic Game Theory) now has the list of 28 accepted papers (and a nice facility to show the abstracts). Lots of interesting topics, especially if you're into equilibrium computation. (30.6.10. Hmm, their link to the papers seems to have disappeared. Here it is.)

Friday, June 11, 2010

Competition for rank

It looks like today's the day that my co-author Carmine Ventre gives a talk at ACM-EC on the paper Ranking Games that Have Competitiveness-based Strategies, our take on competition for rank in league tables. At the same time, news that Liverpool University has gone up 8 places to 39th in the Guardian's league table of UK universities feels a bit like the news that BP's share price has gone up by 5% this morning. Historically, the university has performed poorly in the Guardian's league table. Regardless of that, positive progress is always welcome.

As I've sometimes mentioned in talks on the topic, the thing about competition for rank is, that when you start thinking about it, you start to see it absolutely everywhere. The games that we consider in the paper seem to capture a lot of the features of this type of competition, and (so far!) we have positive results, by which I mean polynomial-time algorithms. So what we've got is, a class of games that describe a lot of real-world competitions, but for which Nash equilibria (or at any rate, approximate ones) can be found in polynomial time. Of course, my hope is for more work to appear on this topic.

Sunday, May 16, 2010

Research in an age of austerity

Coincidence: the new British government is a coalition that is united mainly by the belief that the budget deficit should be tackled sooner rather than later. At the same time, the crisis in the Eurozone has led to severe austerity measures in Greece, Portugal and Spain. All of a sudden, it seems like cost-cutting is "in the air"; our own government will no doubt be encouraged in cutting spending by being able to answer its critics "go to Spain and see how much worse things are over there", or perhaps "if we don't cut now, we'll end up like Greece."

I expect research to be in the line of fire; an open question is whether cuts to the research councils' budgets will be "game-changing" ie will they have an impact on the way funding is handed out, and researchers' approach to competing for it. A bit of reform could be something of a silver lining on the cloud.

Let's try to foresee some changes.

Increased importance of EU funding. This is easy, just look at the figures. European Research Council funds have been spiralling up in recent years, just as EPSRC is facing a cut.

Full economic costing of research grants has got to go. In a nutshell, the background is this. About 10 years ago, universities complained to the Government that their research activities were bankrupting them, because research grants did not cover the expenses incurred. It is important to note that this problem did not actually stop them from fighting like rats in a sack over these supposedly inadequate research grants. And the Govt duly boosted Britain's science budget, but... the way the funds were handed out was essentially by doubling the cost of individual grants. The result: heightened competition for grants, and a boom/bust pattern to peoples' (and universities') research incomes. Reviewers of research proposals are told to ignore all value-for-money considerations, and comment only on the quality of the research, leading to anomalies like successful applicants getting more than 100% of their salary paid by their grants... enough of this!

A simplified research funding regime. I'm entering the wishful-thinking zone here. The rationale is, that if you've only got ten quid to dish out to the research community, there's no point making them spend hours poring over impact statements, and peer-reviewing each others' long and highly-technical grant proposals. You should just give it to a subset of universities, or share it equally amongst anyone who can exhibit a decent track-record, or something. The new government is likely to cause a welcome delay to the Research Excellence Framework.

Theoretical research may be well-placed to survive a worst-case scenario. This is appropriate, since a lot of theoretical CS addresses worst-case performance of algorithms. Actually, CS generally may be in better shape than other sciences, since computers are cheap.

And finally, a tongue-in-cheek suggestion: Since London-based academics receive London weightings to their salaries, they end up doing the same work at a higher cost. It makes sense to support academic research (and indeed teaching) in less expensive locations.

Wednesday, May 12, 2010

UK election game

Here's a fun game-theory problem; Martin Gairing helped me find a solution during lunch, which I will add later.

There are 2 political parties and N constituencies; each party wants to win as many of them as possible. Both parties have an amount M of money (to spend on election campaigning) which they split amongst the N constituencies; for each constituency, it is won by the party that allocated it the larger amount of money. A party's payoff is the number of constituencies it wins, so it's a zero-sum game. The problem is to find a Nash equilibrium. You can assume that M is infinitely divisible, or if not, you're allowed to find an approximate solution with error proportional to 1/M.

Note that there is no pure equilibrium: if a party fails to randomize, the other one will be able to narrowly defeat it in nearly all constituencies while allocating no money to the one(s) that it loses.
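That claim can be checked by brute force for a small discrete version of the game. In this sketch I take N=3 constituencies and M=6 indivisible units of money, and (since the rules above leave ties unspecified) assume a tied constituency is split half-and-half:

```python
from itertools import product

# N=3 constituencies, M=6 indivisible units; a tied constituency counts as
# half a seat to each party (the tie rule is my own assumption).
N, M = 3, 6
allocs = [a for a in product(range(M + 1), repeat=N) if sum(a) == M]

def seats(a, b):
    """Seats won by the party playing allocation a against allocation b."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x, y in zip(a, b))

# A pure equilibrium is a pair (a, b) where each allocation is a best
# response to the other; enumerate all pairs and all deviations.
pure_equilibria = [
    (a, b) for a in allocs for b in allocs
    if all(seats(c, b) <= seats(a, b) for c in allocs)
    and all(seats(c, a) <= seats(b, a) for c in allocs)
]
print(pure_equilibria)   # [] : no pure equilibrium exists
```

The same enumeration approach extends to computing best responses against a candidate mixed strategy, which is one way to sanity-check a proposed equilibrium before proving it.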

Obviously lots of generalizations are possible...