Sunday, October 31, 2010

complexity and elections

I just read an interesting review article (Using Complexity to Protect Elections, by Piotr Faliszewski, Edith Hemaspaandra, Lane A. Hemaspaandra) in the CACM on the complexity of election schemes, and various "attacks" on them, such as searching for voter manipulation opportunities. The article is aimed at a general computer-science audience, and written in quite a colourful style. It touches on a problem with this line of research, one that has bothered me since I started following its progress:
... perhaps the greatest single worry related to using NP-hardness to protect elections—a worry that applies to NP-hardness results not just about manipulation, but also about control and bribery. That worry is that NP-hardness is a worst-case theory, and it is in concept possible that NP-hard sets may be easily solved on many of their input instances even if P and NP differ.
They go on to mention the theory of average-case complexity, but it seems not to have been applied in this context. Is it ever going to be worth using a highly artificial voting system just to ensure that a potential manipulator faces a problem that is hard in the worst case but may generally be easy in practice? Perhaps one should look at other sources of difficulty for the manipulator, such as uncertainty about the other voters' preferences.
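To make the worry concrete, here is a minimal sketch (my own illustration, not anything from the article) of the classic greedy heuristic for constructive manipulation of a scoring rule such as Borda: a single manipulator who knows the other ballots and wants candidate p to win fills in their ballot from the top, never placing a candidate high enough to overtake p. On a great many instances this sort of simple procedure just works, which is exactly why worst-case hardness alone is a weak shield.

```python
# Illustrative sketch only: greedy construction of a single manipulator's
# Borda ballot.  Ties are counted in p's favour (the usual co-winner model).

def borda_scores(ballots, candidates):
    """Borda score of each candidate from a list of ballots (rankings)."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        for pos, c in enumerate(ballot):
            scores[c] += m - 1 - pos
    return scores

def greedy_manipulation(others, candidates, p):
    """Try to build a ballot that makes p a Borda (co-)winner.

    Put p first, then fill each later position with a candidate that can
    take that position's points without overtaking p.  Returns the ballot,
    or None if the greedy construction gets stuck.
    """
    m = len(candidates)
    base = borda_scores(others, candidates)
    ballot = [p]
    p_total = base[p] + (m - 1)          # p gets the top position's points
    remaining = [c for c in candidates if c != p]
    for pos in range(1, m):
        points = m - 1 - pos
        safe = [c for c in remaining if base[c] + points <= p_total]
        if not safe:
            return None                  # greedy construction fails here
        c = max(safe, key=lambda d: base[d])   # place the most dangerous safe candidate
        ballot.append(c)
        remaining.remove(c)
    return ballot

if __name__ == "__main__":
    candidates = ["a", "b", "c", "d"]
    others = [["b", "a", "c", "d"], ["c", "b", "d", "a"], ["a", "c", "b", "d"]]
    print(greedy_manipulation(others, candidates, "a"))
```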

Added 1.11.10: Another nice survey that Ariel Procaccia pointed me to ("AI’s War on Manipulation: Are We Winning?" by Piotr Faliszewski and Ariel D. Procaccia) considers in more detail the problem that NP-hardness is only a worst-case obstacle for the manipulator. It reviews some work giving negative results, i.e. fundamental obstacles to the project of designing voting systems that are typically computationally resistant to manipulation.

Thursday, October 21, 2010

SAGT 2010

Here are some notes from the 2010 Symposium on Algorithmic Game Theory (SAGT), mostly typed up at Athens airport. SAGT had about 60 participants and next year will take place in Amalfi, near Naples. If you want to host it in 2012 then I believe the steering committee would like to hear from you. In the following I mention some of the papers that caught my eye; it is by no means a complete overview, being biased towards my own interests, plus I suspect that "conceptual" papers tend to be more eye-catching than "technical" ones.

A special tutorial session on "Games played in physics", along with one of the regular papers, "Mixing time and stationary expected social welfare of logit dynamics", highlights a line of research that looks like it could go a long way. Given a game, there's an associated Markov chain whose states are pure-strategy profiles, and transitions consist of the selection of a random player, who then updates his strategy in a way that makes better responses more likely than worse ones (although it is possible for him to switch to a worse response). Specifically, the probability assigned to a response with payoff x is proportional to exp(βx), where the parameter β controls the noise: β=0 is entirely noisy and players move at random; as β increases the noise goes down and players prefer better responses. The solution concept for the game is the stationary distribution of the chain. The topic of interest is the mixing rate (a function of β). A connection with physics is that the Glauber dynamics on the Ising model (a topic of enduring interest in the math/CS community) corresponds to a party affiliation game where the players lie on a grid; in the ferromagnetic version you want to agree with your neighbours, and in the antiferromagnetic version you want to disagree with them.
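Here is a minimal sketch of these dynamics (my own illustration, not code from the paper), for a party affiliation game on a ring of players: each player holds a spin in {+1, −1} and, in the ferromagnetic version, earns a point for every neighbour they agree with; at each step a random player resamples their strategy with probability proportional to exp(β·payoff).

```python
# Logit dynamics on a ring party-affiliation game: beta = 0 is pure noise,
# large beta approaches best response; the stationary distribution is the
# solution concept discussed above.

import math
import random

def payoff(state, i, s, ferromagnetic=True):
    """Payoff to player i on a ring for playing spin s against the current state."""
    n = len(state)
    agree = (s == state[(i - 1) % n]) + (s == state[(i + 1) % n])
    return agree if ferromagnetic else 2 - agree

def logit_step(state, beta, ferromagnetic=True):
    """One step of logit dynamics: pick a random player, resample their spin."""
    i = random.randrange(len(state))
    weights = {s: math.exp(beta * payoff(state, i, s, ferromagnetic)) for s in (+1, -1)}
    r = random.random() * sum(weights.values())
    state[i] = +1 if r < weights[+1] else -1
    return state

if __name__ == "__main__":
    random.seed(0)
    state = [random.choice((+1, -1)) for _ in range(20)]
    for _ in range(10000):
        logit_step(state, beta=2.0)
    print(state)   # for large beta the ring tends towards long runs of agreeing spins
```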

2 papers from the "Learning and dynamics" session: one is "On the rate of convergence of Fictitious Play", which I have mentioned earlier. An important point: the lower bounds on convergence rate (for 2-player zero-sum games) apply to convergence to the "correct" strategies, as opposed to convergence to the value of the game. The paper "On learning algorithms for Nash equilibria" forms part of a proposed search for general lower-bound results stating that a broad class of algorithms should fail to find Nash equilibria, even in the 2-player case. They get some negative results for iterative weight-update approaches. They mention that the convergence rate of FP to the value of the game, for zero-sum 2-player games, is something of a long-standing open question; seemingly the only rate known is the very slow one that results from the 1951 proof of Julia Robinson.
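For readers who haven't met fictitious play, here is a rough sketch (my own, not from either paper): each player best-responds to the opponent's empirical mixture of past play. One can then watch how quickly the empirical mixtures approach the optimal strategies versus how quickly the realised payoffs bracket the value of the game, which is the distinction drawn above.

```python
# Fictitious play on rock-paper-scissors (a zero-sum game with value 0).
# Robinson's theorem guarantees the empirical frequencies converge to
# optimal strategies, though possibly very slowly.

A = [[0, -1, 1],      # payoff matrix for the row player
     [1, 0, -1],
     [-1, 1, 0]]

def best_response_row(col_counts):
    """Row action maximising payoff against the column player's empirical mix."""
    return max(range(3), key=lambda i: sum(A[i][j] * col_counts[j] for j in range(3)))

def best_response_col(row_counts):
    """Column action minimising the row player's payoff against the row mix."""
    return min(range(3), key=lambda j: sum(A[i][j] * row_counts[i] for i in range(3)))

def fictitious_play(rounds=10000):
    row_counts = [1, 0, 0]   # arbitrary initial beliefs
    col_counts = [1, 0, 0]
    for _ in range(rounds):
        i = best_response_row(col_counts)
        j = best_response_col(row_counts)
        row_counts[i] += 1
        col_counts[j] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

if __name__ == "__main__":
    print(fictitious_play())   # both empirical mixtures drift towards (1/3, 1/3, 1/3)
```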

A cute result: "Responsive lotteries" by Feige and Tennenholtz considers the problem of incentivising someone to reveal their true preference-ranking of a set of items, by awarding one of the items to the person, selected from a distribution derived from their declared ranking. You have to design the distribution carefully.

Peter Bro Miltersen's talk was about NP-hardness and square-root-sum hardness of testing equilibria for being trembling-hand stable; I like the topic since it relates to the complexity of finding equilibria that are restricted to comply with some equilibrium selection/refinement concept. The focus on the "doomsday game" example was nice, and he gave a good overview of the 3-player hide-and-seek game of Borgs et al (The myth of the folk theorem).

2 papers on network congestion I will mention: Yonatan Aumann's talk on "Pareto efficiency and approximate Pareto efficiency in routing and load balancing games" started by noting that the concept of Pareto efficiency can be used to distinguish the inefficiency of the Braess paradox network from that of the Pigou network: in the Pigou network, when you move from equilibrium to optimum flow, some but not all agents benefit; in Braess, all benefit. They continue by studying a notion of approximate Pareto efficiency, focussing on parallel links. Then a very nice talk by Martin Macko studied a model of dynamic flows on networks, and showed Braess-paradox-like results. In the model, backlogs may develop at nodes in a flow network, like the build-up of a traffic jam at a junction whose capacity is lower than the rate of incoming traffic. Assuming everyone has up-to-date traffic information, there comes a point in time where some other route is preferred (and possibly itself becomes overloaded as a result, again not only due to the usage it attracts, but possibly due to a build-up of backlog at some point along it). They also get some results characterising the topology of networks that produce these Braess-like paradoxes, a characterisation that differs from the one for the standard Braess paradox.
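As a back-of-the-envelope check of the contrast Aumann drew (my own numbers, not from the talk), here are the two standard examples worked through.

```python
# Pigou network: unit flow from s to t over two parallel links, one with
# constant latency 1, one with latency x (the fraction of traffic using it).

def pigou(x):
    """Per-link latencies and average cost when a fraction x uses the variable link."""
    return {"variable link": x, "constant link": 1, "average": x * x + (1 - x) * 1}

# Equilibrium: everyone takes the variable link (x = 1), so every agent pays 1.
# Optimum: x = 1/2 minimises the average cost (3/4), but the half left on the
# constant link still pay 1 -- so only some agents benefit from the move.
print(pigou(1), pigou(0.5))

# Braess network: links s->a and b->t have latency x (fraction using them),
# a->t and s->b have constant latency 1, and a->b is a zero-latency shortcut.
# With the shortcut, equilibrium sends everyone via s->a->b->t at cost 2;
# without it, traffic splits evenly and everyone pays 3/2 -- so *all* agents
# benefit when the paradoxical route is removed.
cost_with_shortcut = 1 + 0 + 1        # x = 1 on both variable links
cost_without_shortcut = 0.5 + 1       # each route: x = 1/2 plus the constant link
print(cost_with_shortcut, cost_without_shortcut)
```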

OK, so I'm a bit of a sucker for cake-cutting... Omer Tamuz ("Truthful fair division") gave a nice talk on a protocol where you get the players to declare their valuations of items to be shared, and then you share them out in an envy-free manner. Of course, you want the sharing rule to incentivise the players to be truthful (at a high level, the set-up is similar to the "responsive lotteries" topic noted above). So, this can be done, but if you want a "super-fair" truthful mechanism, it cannot be deterministic.

Thursday, October 14, 2010

Dearing Report

I took a look, by way of reminder, at a copy of the Dearing Report, which has a lot in common with the Browne Review discussed previously. I reckon that you need to know about Dearing in order to write a sensible analysis of Browne, and understand the mess we're in and how we got there. They are both reports with recommendations on higher education funding that got commissioned by unpopular Governments shortly before an election that they duly lost. The Dearing Report got a lot of coverage and discussion in the Times Higher at the time, especially in the run-up to its publication in 1997. Reading it now, it seems terribly overtaken by events -- despite being written only about 15 years ago, it seems to address a situation a lifetime ago, when the basic assumptions about HE finance were totally different. It can be blamed for playing a part in the process of change and decay that it purported to manage. The underlying narrative of UK higher education funding over the past few decades has been a path of least resistance. It has not been guided by principles; what has happened instead is that certain principles have been retro-fitted to the lazy decisions that have been made.

Dearing's report purported to set the HE funding scene for the next 20 years, but was in practice mainly in the business of short-term fixes rather than an attempt to settle the matter decisively. Its context was a decline in the unit of resource for teaching of about 40% over the previous 20-odd years, this decline having been billed an "efficiency gain". Universities (some of them) were making noises about charging top-up fees. The report concluded that the sector could manage a further 2% cut over the next 2 years, but the planned 6.5% cut would lead to a loss of quality. The report had some good stuff: a recognition that international competitiveness was important, and a recommendation that public spending on HE should increase in real terms by 1% a year. It broached the subject of income-contingent repayment of tuition costs, decided that a graduate tax is unworkable, and proposed a scheme for repaying 25% of a student's tuition costs. Obviously it's a shame we did not stick to that figure of 25%; it looks benign by present standards.

Some other stuff that caught my eye: It recommended a target of 45% participation in HE, which for some reason was rounded up to 50% by the subsequent Govt, but wasn't achieved. With regard to the short-term funding gap resulting from student loans coming from Govt, it recommended that national accounting rules should be changed so that the loans could be treated as a Govt asset.

Here's a point made by Dearing whose dread significance was probably not so very apparent at the time. The Dearing commission considered the question of the benefits to society arising from higher education, and concluded that the main beneficiaries are the graduates themselves, due to improved job opportunities. Not a very remarkable or surprising conclusion to reach, but it has allowed that aspect to crowd out all other considerations of the wider social benefit. Participation in HE has been reduced to an act of economic rationality, an act of selfishness.

Tuesday, October 12, 2010

Browne Review

The Browne Review (wikipedia page) came out today and is being widely discussed. Here is a link to one of many news articles. (Added later: this blog post reports on a devastating critique of the Browne report. Added still later: another great critique in the London Review of Books. Added 20.8.11: another critique (by the same writer, Stefan Collini) with more overview of the historic context.)

On the plus side, it does an efficient job of demolishing the ludicrous "graduate tax" idea. Also, it acknowledges that
Other countries are increasing investment in their HEIs and educating more people to higher standards
And the graphic design is striking in a rather retro way. It would have been improved by being embellished with the dirty fingerprints of assorted Labour party politicians, since the previous Government commissioned the report, but although the fingerprints are missing, the following quote serves that purpose:
Students do not pay charges, only graduates do; and then only if they are successful. The system of payments is highly progressive. No one earning under £21,000 will pay anything.

We estimate that only the top 40% of earners on average will pay back all the charges paid on their behalf by the Government upfront; and the 20% of lowest earners will pay less than today. For all students, studying for a degree will be a risk free activity. The return to graduates for studying will be on average around 400%.
Of course, the above is too good to be true. It reeks of Gordon Brownism: no-one pays anything, you get much more back than you pay in later, and Government can print all the money needed to fill the short-term funding gap. There's also some stuff about student charters that looks suitably new-Labourish.

However, the present government seem happy to accept this gift. The headline figure of 7000 pounds per year to study at a UK university is dismaying many people, although not nearly as much as the lack of any limit on the fees that may be charged.

Here's a David Blunkett quote that I found here:
This is a complete betrayal by the Liberal Democrats of everything that they have ever said on higher education and of the platform they stood on at the general election. The Tories have already performed a volte-face on their previous policy. This leaves only the Labour party with any credibility on student funding and the future of our great universities ...

The fact that the Labour party introduced university fees in the first place, and commissioned this report, seems to have escaped his attention! And here is the single take-home message of this blog post: Labour is to blame (or, if you like fees, they get the credit). Don't ever forget that. And don't ever forgive people like Blunkett for trying to pass the blame on to their opponents.

What happens next? More specifically, how much will universities charge? Probably there will be a high "sticker price" embellished with a system of discounts and bursaries for students with good exam results. It is tempting to assume that Oxford and Cambridge will gleefully impose very high fees, but they will be reluctant to be seen to be shutting out poorer candidates. Below them, prestigious universities will want to be seen to have a relatively high fee, since university degrees are a Veblen good, but then they will have concerns about being able to attract enough students at the basic sticker price. If high fees do not deter too many UK students, then overseas students may be a casualty, at least if they no longer pay substantially higher fees than home students.

Friday, October 01, 2010

trip to Glasgow

Congratulations to Eric McDermid, who successfully defended his PhD at his viva yesterday at the School of Computing Science, University of Glasgow (myself as external examiner, Patrick Prosser as internal examiner). Title of thesis: A Structural Approach to Matching Problems with Preferences; I thought it was very nice work. I was also fortunate to be able to attend Rob Irving's retirement presentation, which coincidentally took place on the same day.