Here are some notes from the 2010 Symposium on Algorithmic Game Theory (SAGT), mostly typed up at Athens airport. SAGT had about 60 participants, and next year it will take place in Amalfi, near Naples. If you want to host it in 2012, then I believe the steering committee would like to hear from you. In the following I mention some of the papers that caught my eye; it is by no means a complete overview, being biased towards my own interests, plus I suspect that "conceptual" papers tend to be more eye-catching than "technical" ones.
A special tutorial session on "Games played in physics", along with one of the regular papers, "Mixing time and stationary expected social welfare of logit dynamics", highlighted a line of research that looks like it could go a long way. Given a game, there's an associated Markov chain whose states are pure-strategy profiles, and a transition consists of selecting a random player, who then updates his strategy in a way that makes better responses more likely than worse ones (although it is possible for him to switch to a worse response). Specifically, the probability assigned to a response with payoff x is proportional to exp(βx), where the parameter β is the "noise rate": at β=0 behaviour is entirely noisy and players move at random; as β increases the noise goes down and players increasingly prefer better responses. The solution concept for the game is the stationary distribution of the chain, and the topic of interest is the mixing rate (a function of β). A connection with physics is that the Glauber dynamics on the Ising model (a topic of enduring interest in the math/CS community) corresponds to a party affiliation game where the players lie on a grid: in the ferromagnetic version you want to agree with your neighbours, and in the antiferromagnetic version you want to disagree with them.
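Here is a minimal sketch of these dynamics in Python (my own illustration, not code from the talk); for brevity the players sit on a ring rather than a grid, and each step resamples one player's party with probability proportional to exp(β·payoff), exactly as described above.

```python
import math
import random

def logit_step(strategies, neighbours, beta):
    """One step of logit dynamics: pick a random player and resample
    their strategy with probability proportional to exp(beta * payoff)."""
    i = random.randrange(len(strategies))
    weights = []
    for s in (0, 1):
        # ferromagnetic party affiliation: payoff = number of agreeing neighbours
        payoff = sum(1 for j in neighbours[i] if strategies[j] == s)
        weights.append(math.exp(beta * payoff))
    strategies[i] = 0 if random.random() < weights[0] / sum(weights) else 1

# players on a ring of n nodes; beta = 0 is pure noise, large beta ~ best response
n = 20
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
strategies = [random.randrange(2) for _ in range(n)]
for _ in range(10000):
    logit_step(strategies, neighbours, 2.0)
print(strategies)  # at high beta, long runs of agreeing players emerge
```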
2 papers from the "Learning and dynamics" session: one is "On the rate of convergence of Fictitious Play", which I have mentioned earlier. An important point: the lower bounds on the convergence rate (for 2-player zero-sum games) apply to convergence to the "correct" strategies, as opposed to convergence to the value of the game. The paper "On learning algorithms for Nash equilibria" forms part of a proposed search for general lower-bound results stating that a broad class of algorithms should fail to find Nash equilibria, even in the 2-player case; they get some negative results for iterative weight-update approaches. They mention that the convergence rate of FP to the value of the game, for 2-player zero-sum games, is something of a long-standing open question, and seemingly the only known rate is the very slow one that follows from Julia Robinson's 1951 convergence proof.
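For concreteness, here is a quick sketch of fictitious play on rock-paper-scissors (again my own illustration, not from either paper); it tracks both the empirical mixed strategies and the payoff under them, the two notions of convergence distinguished above.

```python
import numpy as np

def fictitious_play(A, rounds=10000):
    """Fictitious play in a zero-sum game with row-player payoff matrix A.
    Each round, both players best-respond to the opponent's empirical mixture."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] += 1  # seed with an arbitrary first move
    col_counts[0] += 1
    for _ in range(rounds):
        # row player maximises against the column player's empirical mixture
        row_counts[np.argmax(A @ col_counts)] += 1
        # column player minimises against the row player's empirical mixture
        col_counts[np.argmin(row_counts @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# rock-paper-scissors: value 0, unique equilibrium (1/3, 1/3, 1/3)
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
x, y = fictitious_play(A)
print(x, y, x @ A @ y)  # empirical mixtures drift towards (1/3, 1/3, 1/3)
```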
A cute result: "Responsive lotteries" by Feige and Tennenholtz considers the problem of incentivising someone to reveal their true preference-ranking of a set of items, by awarding one of the items to the person, selected from a distribution that is derived from their declared ranking. You have to design the distribution carefully.
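To see why the distribution matters, here is a toy illustration (mine, not the paper's construction, which achieves more): if the award probabilities are strictly decreasing in declared rank, then by the rearrangement inequality a truthful ranking maximises the agent's expected utility.

```python
import itertools
import random

def expected_utility(utilities, declared_order, probs):
    """Expected utility when the item declared at rank r is awarded
    with probability probs[r]."""
    return sum(p * utilities[item] for p, item in zip(probs, declared_order))

probs = [0.5, 0.3, 0.15, 0.05]                 # strictly decreasing in rank
utilities = [random.random() for _ in range(4)]  # agent's private values

truthful = tuple(sorted(range(4), key=lambda i: -utilities[i]))
best = max(itertools.permutations(range(4)),
           key=lambda order: expected_utility(utilities, order, probs))
# by the rearrangement inequality, truthful reporting is a best response
assert expected_utility(utilities, truthful, probs) == \
       expected_utility(utilities, best, probs)
```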
Peter Bro Miltersen's talk was about NP-hardness and square-root-sum hardness of testing equilibria for being trembling-hand stable; I like the topic since it relates to the complexity of finding equilibria that are restricted to comply with some equilibrium selection/refinement concept. The focus on the "doomsday game" example was nice, and he gave a nice overview of the 3-player hide-and-seek game of Borgs et al ("The myth of the folk theorem").
2 papers on network congestion I will mention: Yonatan Aumann's talk on "Pareto efficiency and approximate Pareto efficiency in routing and load balancing games" started by noting that the concept of Pareto efficiency can be used to distinguish the inefficiency of the Braess paradox network from that of the Pigou network: in the Pigou network, when you move from the equilibrium to the optimum flow, some but not all agents benefit, whereas in the Braess network, all of them benefit (a worked example follows below). They continue by studying a notion of approximate Pareto efficiency, focussing on parallel links. Then a very nice talk by Martin Macko presented a model of dynamic flows on networks, with Braess-paradox-like results. In the model, backlogs may develop at nodes in a flow network, like the build-up of a traffic jam at a junction whose capacity is lower than the rate of incoming traffic. Assuming everyone has up-to-date traffic information, there comes a point in time when some other route is preferred (and that route may itself become overloaded as a result, not only due to the usage it attracts, but possibly due to a build-up of backlog at some point along it). They get some results characterising the topology of networks that produce Braess-like paradoxes, and the characterisation differs from the one for the standard Braess paradox.
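Here is the Pareto-efficiency distinction worked through on the standard textbook networks (my numbers, not taken from the talk):

```python
# Pigou network: unit flow over two parallel links with latencies l1(x) = 1
# and l2(x) = x.  At equilibrium everyone uses the x-link and pays 1;
# at the optimum the flow splits evenly.
eq_costs = (1.0, 1.0)    # at equilibrium, both halves of the traffic pay 1
opt_costs = (1.0, 0.5)   # at optimum: constant-link users still pay 1, x-link users pay 1/2
# moving to the optimum helps some agents but not others -> not a Pareto improvement

# Braess network: unit flow from s to t via a or b, with latencies x then 1
# on the path through a, and 1 then x through b, plus a zero-latency edge a-b.
cost_with_shortcut = 1 + 0 + 1   # everyone routes s-a-b-t: both x-edges carry all the flow
cost_without = 0.5 + 1           # flow splits evenly: 1/2 on each x-edge, plus the 1-edge
print(cost_with_shortcut, cost_without)  # 2 vs 1.5: removing the edge helps everyone
```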
OK, so I'm a bit of a sucker for cake-cutting... Omer Tamuz ("Truthful fair division") gave a nice talk on a protocol where you get the players to declare their valuations of the items to be shared, and then the items are shared out in an envy-free manner. Of course, you want the sharing rule to incentivise the players to be truthful (at a high level, the set-up is similar to the "responsive lotteries" topic noted above). So, this can be done, but if you want a "super-fair" truthful mechanism, it cannot be deterministic.
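For reference, here is the envy-freeness condition such a mechanism targets, written out as a small Python check (this is just the definition, not the paper's mechanism):

```python
def is_envy_free(values, allocation):
    """Check envy-freeness of an allocation of divisible items.

    values[i][k]     -- player i's value for all of item k
    allocation[i][k] -- fraction of item k given to player i
    Player i is envy-free if, by her own valuation, her bundle is worth
    at least as much as any other player's bundle."""
    n = len(values)
    def worth(i, j):  # player i's value for player j's bundle
        return sum(values[i][k] * allocation[j][k] for k in range(len(values[i])))
    return all(worth(i, i) >= worth(i, j) for i in range(n) for j in range(n))

# two players, two items: splitting everything in half is always envy-free
values = [[3.0, 1.0], [1.0, 3.0]]
half = [[0.5, 0.5], [0.5, 0.5]]
print(is_envy_free(values, half))  # True
```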