Thursday, July 31, 2008

COMSOC-2008 progress

Ulle and I recently finished making the program for the 2nd International Workshop on Computational Social Choice (COMSOC-2008), a task that is a bit like solving a sudoku. Then I took a look at some CS conferences that I have been involved with recently, to see how much social choice theory has appeared there. (Not much in CS theory conferences; see below for a summary. There's more in AI and multi-agent systems conferences; indeed, a few of the COMSOC papers appeared at conferences like AAAI.)

I looked at conferences that attract a lot of computational game theory. So, what is the difference between game theory and social choice theory? Game theory studies the outcomes of interactions amongst multiple agents. Social choice theory is about aggregating multiple opinions/world views/preferences into an overall one. Elections are the most obvious example of such a process. (For elections to be interesting to study, you have to solicit more than just a single preferred candidate from each voter. If voters provide rank-ordered lists, life gets interesting. It gets more interesting still if the decision is more than just a single winning candidate, e.g. if the decision is a set of winners, or a rank-ordered list.) For computer scientists, game theory is our opportunity to play at being economists. With social choice theory, we get to diversify into politics and philosophy as well. (As an aside, at the 2000 BCTCS conference (held here at Liverpool), Les Valiant gave an invited talk in which he proposed an answer to the question: "What academic discipline would have attracted today's computer scientists if computer science had not yet been invented?" I thought he was going to say mathematics, but his answer was philosophy. Which makes a lot of sense, when you think about it.)
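
To make the rank-ordered-lists point concrete, here is a toy sketch (my own illustration, not something from any of the papers mentioned here) of one classical way of aggregating rankings into a single winner, the Borda count:

```python
from collections import defaultdict

def borda_winner(ballots):
    """Each ballot is a list of candidates, most preferred first.
    A candidate in position p on a ballot over k candidates scores
    k - 1 - p points; the highest total wins (ties broken arbitrarily)."""
    scores = defaultdict(int)
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] += len(ballot) - 1 - position
    return max(scores, key=scores.get)

# Three voters ranking candidates a, b and c:
print(borda_winner([["a", "b", "c"], ["b", "c", "a"], ["b", "a", "c"]]))  # prints b
```

Borda is of course just one rule among many; the interesting questions start when you compare rules, or ask how easily they can be manipulated.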

EC 2007: 42 papers, nothing obviously about social choice. Mostly game theory and mechanism design.

EC 2008: 38 papers, one of which is "A Sufficient Condition for Voting Rules to Be Frequently Manipulable" by Xia and Conitzer.

WINE 2007: 31 regular papers and 30 short papers, one of which is "PageRank as a Weak Tournament Solution" by Felix Brandt and Felix Fischer.

Then I took a look at STOC 2008, and was reminded that it is home to a wonderful paper: "The Chow Parameters Problem", by Ryan O'Donnell and Rocco Servedio. This is the paper I was trying to write when I wrote this precursor. Quoting their paper: "In the realm of social choice and voting theory the Chow Parameters represent the Banzhaf power indices of the n voters -- a measure of each one's influence over the outcome." Besides the big technical contribution of this paper, I think it's the first one to explain the importance of Chow parameters to learning theory as well as to voting theory.
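
For readers who haven't met them: the Chow parameters of a Boolean function f from {-1,+1}^n to {-1,+1} are simply E[f(x)] together with the n correlations E[f(x)*x_i] under a uniformly random x; for a monotone voting rule, the i-th correlation is the probability that voter i is pivotal, which is where the Banzhaf interpretation comes from. The following brute-force sketch (my own toy code, nothing to do with the algorithms in the O'Donnell-Servedio paper) just evaluates the definition:

```python
from itertools import product

def chow_parameters(f, n):
    """Brute-force Chow parameters of f: {-1,+1}^n -> {-1,+1}.
    Returns [E[f(x)], E[f(x)*x_1], ..., E[f(x)*x_n]] over uniform random x."""
    points = list(product([-1, 1], repeat=n))
    chow = [sum(f(x) for x in points) / len(points)]
    for i in range(n):
        chow.append(sum(f(x) * x[i] for x in points) / len(points))
    return chow

# A weighted majority vote among 4 voters with weights 3, 2, 1, 1:
rule = lambda x: 1 if 3 * x[0] + 2 * x[1] + x[2] + x[3] > 0 else -1
print(chow_parameters(rule, 4))
# prints [0.0, 0.75, 0.25, 0.25, 0.25]: the weight-2 voter turns out to have
# no more influence over the outcome than the two weight-1 voters.
```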

Thursday, July 24, 2008

John Savage's "Models of Computation"

I read in DMAnet that John Savage's book "Models of Computation" is now available here (as PDF) under a Creative Commons license. I have my own traditional paper copy, but it's very nice to see such a comprehensive heavyweight textbook become freely available, and be rescued from the twilight zone inhabited by many books that remain in copyright while becoming harder and harder to obtain. Now I can think of a bunch of other books for which this would be nice...

Thursday, July 17, 2008

GAMES over

The conference ended today, with the last plenary session being the Shapley Lecture, given by Tim Roughgarden, followed by one last technical session. Among the technical talks I heard, James Schummer gave a nice presentation on "voting with money" (joint work with Vohra). I guess I like theorems that begin with the statement "Suppose taxation is 100% wasteful."

Afterwards, I had a chat with Lloyd Shapley, originally with a view to finding out some details about the game So Long Sucker, although in the event quite a lot of other topics were covered, including a fiendishly complicated chess puzzle. (This is Shapley of the classical Gale-Shapley algorithm (1962) for the stable marriage problem, which has been worked on extensively in the CS community, and which we hopefully all teach in our undergraduate classes.)

Here is the chess puzzle; I think it's one that he just took an interest in rather than constructed himself. The question is "can white castle?" More formally, the question is whether there is a sequence of legal moves that lead up to the position shown, such that at this point, white can still castle.

The follow-up question is: if the pawn at F5 were at F6, could white still castle? Which leads me to suspect that the answer is different. For myself, it's tough enough to reconstruct any kind of sequence of moves that leads to these positions, without worrying much about whether it can or cannot be done while allowing white to still be able to castle. (This puzzle seems to be an example of retrograde analysis.)

black
r k
p p p p p
p p
P
P
P
B P P P P
R K
WHITE

White pieces are denoted with capital letters, black with lower-case. K (or k) denotes a king, not a knight (all the knights have been captured). A1 is bottom-left, H1 is bottom-right, and so on.

And finally, this post at Lance Fortnow's blog (as well as some adjacent ones) is also about GAMES 2008; he's the only other person I know of who's been cluttering up the internet with impressions of this conference, so if you're interested, take a look there (if you haven't already been there).

Wednesday, July 16, 2008

Poster sessions

Yesterday I attended a poster session at GAMES 2008, and today went downtown to AAAI 2008 to meet up with colleagues and attend the poster session over there.

So, about poster sessions. I am reminded that, treated correctly, they can be one of the fastest ways of learning about someone's research, since they provide the opportunity to quiz the presenter about any aspect that is unclear to you. At talks, it is usual to fail to take in some crucial aspect of the mathematical model being analysed. At a poster session, that problem can be eliminated, if you don't mind interrupting the presenter frequently. At GAMES 2008 I took in a nice poster presentation by Liad Wagman (Vincent Conitzer was co-author) on "Optimal False-Name-Proof Voting Rules with Costly Voting", which also happened to be a paper at AAAI that picked up an Outstanding Paper award (I missed the talk because the room was packed out). I think the other two posters at GAMES that I took in in detail were "The Theory of Collusion under Financial Constraints" (Yosuke Yasuda) and "On loss aversion in a bargaining game with alternating offers" (Bram Driesen). At AAAI, where the subject matter was, shall we say, highly diversified, I found myself gravitating towards the posters on graphics and image processing: there was one on rendering scenes based on natural-language descriptions, another on scribble analysis (given a scribble over an element of a sketch, is it a filling-in or a deletion?), and another on domino tiling (given a fixed collection of dominoes, arrange them to form the best possible picture of a target image). More relevant to my own research, I took in a poster by Piotr Faliszewski about how, despite the NP-hardness of manipulating elections, there is an approximation algorithm.

At GAMES, I attended an entertaining performance by stand-up economist Yoram Bauman.

Tuesday, July 15, 2008

The future of Game Theory

I have attended Peyton Young's presidential address, "STRATEGIC LEARNING: Recent Advances and Open Problems", and also Sergiu Hart's plenary talk addressing the question of when there exist natural dynamics leading to Nash equilibrium (Hart is president-elect). Also, yesterday I attended the "Nobel Panel", which is, as you might guess, a panel discussion by Nobel laureates (Aumann, Maskin and Myerson). Notwithstanding Aumann's delightful rendition of "Que Sera, Sera", there are some take-home messages.

Computer Science, and the notions that we in the CS community work with, are of great interest to mainstream game theory; a paper by Papadimitriou used the phrase "as the ice between game theory and computer science continues to melt..." -- well I can report that it is well and truly melting. This is a great time for computer scientists to continue to work at making contributions to game theory. Sergiu Hart himself is very conversant with the relevant terminology and concepts from CS, perhaps in part due to his work with Yishay Mansour (to appear in GEB).

It was claimed that the time is ripe for more emphasis on cooperative solution concepts, as opposed to non-cooperative ones. (I would note that work in the AI community is addressing this topic.)

Generally, the word "decentralized" is a big buzz-word around here. When I wrote my most recent research proposal, "efficient decentralised algorithms for computational game theory", I did not foresee quite how mainstream the general topics I was proposing to work on would turn out to be. Indeed, if anything the situation is "too good", in that my research program is being pursued by some of the top people in the game theory field, and in a future version of the proposal I will have to take care to identify a distinctive agenda. (Added 21.7.08: by the way, this web page describes a long-running conference series on "decentralization", based in the economics community, that I just found out about.)

Sunday, July 13, 2008

Game Theory and Computer Science Prize

OK, it's the main reason I attended the conference, since it was awarded to Constantinos Daskalakis, myself and Christos Papadimitriou for our paper “The Complexity of Computing a Nash Equilibrium”, which first appeared at STOC in 2006. The plan is to award this prize once every world congress of the Game Theory Society (so, every 4 years) for a recent CS/game theory paper. It's an encouraging sign of the interest that the game theory community has for work going on in CS.

Constantinos Daskalakis gave the talk; in the audience was John Nash himself. (Nash equilibrium is named after him since he was the one to prove, back in 1950, that any game has one, or to be more precise, at least one.) Nash is now 80 years old, so he was 22 when he obtained that famous result.

Other bits and pieces: I attended a very nice talk at a session on "Roommate and Marriage Models", given by Thayer Morrill, on "The Roommates Problem Revisited". From memory, here's the idea. The problem with looking for a solution to the Stable Roommates Problem (in contrast with the Stable Marriage Problem) is that for some instances of the problem, every proposed solution has at least one blocking pair. (That is, a pair of participants who prefer each other to the partners to which they have been assigned.) So what we need is a new solution concept. Morrill notes that shared rooms are scarce: the members of a blocking pair typically cannot go off together and move in with each other unilaterally. Instead, a solution should be deemed stable, essentially, if there is no group of participants who can redistribute the keys to their rooms in such a way that they are all happier than before. A solution should be Pareto optimal in this sense. Given this notion of what it means to be a solution, there is a simple algorithm that finds solutions efficiently, called something like "iterated random dictator", in which at each step a participant is chosen, who gets to select his/her roommate; those two are paired off and we continue with the remaining participants.
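
Here is a minimal sketch of that kind of procedure, assuming complete strict preference lists; the function name and the details are my own reconstruction from memory, so treat it as illustrative rather than as Morrill's algorithm:

```python
import random

def iterated_random_dictator(prefs):
    """prefs[p] is p's complete preference list over the other participants,
    most preferred first.  Repeatedly pick a remaining participant at random;
    they choose their favourite remaining participant, and the pair leaves."""
    unmatched = set(prefs)
    matching = {}
    while len(unmatched) > 1:
        dictator = random.choice(sorted(unmatched))
        unmatched.remove(dictator)
        roommate = next(q for q in prefs[dictator] if q in unmatched)
        unmatched.remove(roommate)
        matching[dictator] = roommate
        matching[roommate] = dictator
    return matching  # an odd participant left over stays single

# A classic instance with no blocking-pair-stable matching:
prefs = {
    "a": ["b", "c", "d"],
    "b": ["c", "a", "d"],
    "c": ["a", "b", "d"],
    "d": ["a", "b", "c"],
}
print(iterated_random_dictator(prefs))
```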

Another session I attended was on "Algorithms and Games", where I got to attend talks by COMSOC invited speakers Tuomas Sandholm and Rohit Parikh. (It clashed with a session on "War", which I was sorry to miss.)

Outgoing president of the Game Theory Society, Peyton Young, gave a great plenary talk about viewing the search for a Nash equilibrium as an iterative learning process in which a player should interact with the other players solely by trying to improve his own payoff and not by attempting to model the behaviour of the other players. The wonderful thing about this talk, as far as I'm concerned, is how much it overlaps (coincides?) with my own research agenda. It is great to hear that the mainstream game theory community is very interested in these issues. I hope to talk to him later in more detail, but already I have some very interesting literature to look into, and will certainly write my next research proposal with that much more conviction.
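
The crudest caricature of that style of dynamics (certainly not the more sophisticated learning procedures from the talk) is plain best-response dynamics in a two-player game, where each player repeatedly switches to whatever action currently maximises their own payoff, with no attempt to model the other player; this sketch is my own illustration:

```python
def best_response_dynamics(payoffs1, payoffs2, max_rounds=100):
    """payoffs1[i][j], payoffs2[i][j]: payoffs to players 1 and 2 when they
    play actions i and j.  Each round, player 1 best-responds to player 2's
    current action, then player 2 best-responds to player 1's new action;
    stop as soon as neither wants to move."""
    i, j = 0, 0
    for _ in range(max_rounds):
        new_i = max(range(len(payoffs1)), key=lambda a: payoffs1[a][j])
        new_j = max(range(len(payoffs2[0])), key=lambda b: payoffs2[new_i][b])
        if (new_i, new_j) == (i, j):
            return (i, j)  # a pure Nash equilibrium
        i, j = new_i, new_j
    return None  # no convergence within the allotted rounds

# A 2x2 coordination game: both players just want to choose the same action.
A = [[2, 0], [0, 1]]
print(best_response_dynamics(A, A))  # prints (0, 0)
```

This only finds pure equilibria and can cycle forever, which is exactly why the question of what natural dynamics can achieve in general is the interesting one.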

This evening I got to attend a dinner hosted by Ehud Kalai; I came away feeling slightly star-struck by all the eminent game theorists who were in attendance...

Saturday, July 12, 2008

GAMES 2008

I arrive in Chicago after a fairly trouble-free flight. When I am asked at immigration what game theory is, I say it's a branch of economics. I feel like I am "going native" (the process by which computer scientists migrate from "computational X" to "X").

I meet Trevor and Katie in the immigration queue; they are going to AAAI, which is "co-located" with GAMES 2008, but is in fact 15 miles away. They are the ones who had the sense to be in the city centre; my conference is out in the boonies of Evanston.

Thursday, July 10, 2008

ACM-EC 2008

The ACM EC'08 conference has now started; I am not there myself, but one of my papers is. And I am happy to say that it was selected as one of two "outstanding papers" at the conference; the papers that received the award are

  • Self-Financed Wagering Mechanisms for Forecasting, by Nicolas Lambert, John Langford, Jennifer Wortman, Yiling Chen, Daniel Reeves, Yoav Shoham and David Pennock

  • Uncoordinated Two-Sided Matching Markets, by Heiner Ackermann, myself, Vahab Mirrokni, Heiko Roeglin and Berthold Voecking



Our paper, Uncoordinated Two-Sided Matching Markets, considers the stable marriage problem, which, as we know, is efficiently solved by the Gale-Shapley algorithm. We consider two simple randomised algorithms that have the advantage over GS that they are arguably less "centralised" (participants don't have to be told what to do by a central controller), and which are guaranteed to terminate. We show that unfortunately, for carefully-chosen preference lists, these algorithms take (in expectation) exponential time. I can imagine that further work might involve identifying conditions (hopefully not too onerous) on the preference lists that would imply fast convergence of these algorithms.
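
To give the flavour of what a decentralised process looks like here, the following sketch (my own simplification, not necessarily either of the two algorithms in the paper) starts from the empty matching and repeatedly satisfies a uniformly random blocking pair until none remains:

```python
import random

def random_blocking_pair_dynamics(men_prefs, women_prefs, max_steps=10**6):
    """Preference lists are complete, most preferred first.  Starting from the
    empty matching, repeatedly pick a uniformly random blocking pair and match
    them, divorcing their current partners, until the matching is stable."""
    rank_m = {m: {w: r for r, w in enumerate(p)} for m, p in men_prefs.items()}
    rank_w = {w: {m: r for r, m in enumerate(p)} for w, p in women_prefs.items()}
    partner = {}  # current partner of everyone who is matched
    for _ in range(max_steps):
        blocking = []
        for m in men_prefs:
            for w in women_prefs:
                m_gains = m not in partner or rank_m[m][w] < rank_m[m][partner[m]]
                w_gains = w not in partner or rank_w[w][m] < rank_w[w][partner[w]]
                if m_gains and w_gains:
                    blocking.append((m, w))
        if not blocking:
            return partner  # no blocking pair left: a stable matching
        m, w = random.choice(blocking)
        for x in (m, w):  # divorce current partners, if any
            if x in partner:
                del partner[partner[x]]
        partner[m], partner[w] = w, m
    return None  # gave up

men = {"m1": ["w1", "w2"], "m2": ["w2", "w1"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(random_blocking_pair_dynamics(men, women))
```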

Monday, July 07, 2008

COMSOC notification

Program co-chair Ulle Endriss and I finally finish choosing papers for presentation at COMSOC-2008. Thanks to the program committee members for their work, and also to everyone who submitted papers. We accepted 36 out of 55 submissions (not including a handful of off-topic or duplicate submissions that were deleted at the outset). Authors have now been notified of the decisions on their papers. Here are some thoughts about picking papers for this particular meeting.

The process of choosing papers was done in computer-science style, i.e. with the aid of a program committee of established researchers in the field, who reviewed the papers in detail. We obtained three reviews per paper, which were mostly passed back to the authors. This approach is of course geared towards a system whereby you are looking mainly for the technical merit of papers.

Computer science conferences are treated as publication forums, but in economics (and I believe in most other disciplines) conferences are much more informal. There is a proceedings volume, but papers may be in a preliminary state. Fundamentally, COMSOC is a meeting, not a publication forum, and we accepted papers that had appeared elsewhere. But we compromised a bit with the computer science way of doing things.

In CS conferences, at least the ones where I have reviewed papers, the aim is to select (from the papers deemed relevant to the meeting) the ones of highest technical quality. The discussion of which papers to select is aimed at producing a rank-ordering of the submissions in terms of quality, and the "best" ones are taken. For COMSOC, we had that bias, but we also aimed for representation of different sub-topics, and had a bias towards younger authors and papers that had not so far been presented anywhere else (actually, for CS conferences, that last aspect is not so much a bias as a requirement; since they are publication forums, a paper is disqualified if it was given at a previous conference).

Grounds for rejection included "not being about computation", which applied to a sizeable minority of papers, including some quite interesting ones. (You have to be careful to make a broad interpretation of what it means to be about computation, however.) "Not being about social choice" was applied to only one paper that I know of, where it seemed to be more about a standard optimization problem that happened to be discussed in the context of a particular social choice setting.

I liked the approach of attempting to build a program that we believe should develop new interactions and attract interesting participants, rather than just select for technical quality. There's something quite liberating about being able to admit that it's OK to take one paper and reject a "better" one.

Due to the compromise that we adopted, there is a risk that some contributors from CS, as well as some from (say) economics, especially ones whose papers were rejected, will be taken aback by the approach to selecting papers. An obvious topic for the business meeting is whether we want a subsequent COMSOC to veer towards one or the other type of conference.

Adam Smith

A few months ago he made it onto the twenty-pound note. Now, Adam Smith gets a statue in Edinburgh. It is ironic that this event coincides with recent government pronouncements that hold his big ideas in disrespect. Thus, Gordon Brown urges us to stop wasting food, because waste is contributing to price rises. As I see it, that's the whole point of these price rises: to stop us wasting food! We need the price rises in order to change our behaviour. A similar point holds regarding the Government's plans to ban the use of incandescent light bulbs in order to save energy (and thus, lower its cost.) It's topsy-turvy thinking. Price should be allowed to do its job.

Here's a more complex example that affects me professionally. This article attempts to explain why the Government is right to introduce the "full economic cost" regime for research grants. Essentially, the effect of this regime (by comparison with the previous one) is that when you get a grant, they pay you more money for doing the same amount of work. The rationale is that universities were trying too hard to attract under-funded research projects, which leads to infrastructure decline, and is supposed to be unsustainable. But, hang on a moment: if universities were already fighting like rats in a sack to get these insufficient research grants, why stop them? I'm not just a researcher, I'm also a taxpayer, and I like the idea of my money going further, even when it's being spent on research. And from a selfish academic perspective, if each research project only attracted half the money it ought to, it would presumably double my chances of getting one.