
Thursday, December 12, 2019

A competitive foraging/gathering game

Here’s an interesting-looking game that occurred to me recently; let me know if you have seen an analysis of it or something similar. Suppose that two or more competing agents can move around in an environment containing a number of valuable items, and each agent wants to gather more items than her opponents. (One thinks of the arena in the vicinity of the Cornucopia in The Hunger Games, although a better story may be a collection of robots who can each obtain credit for performing tasks that have arisen in various locations.) How hard is it to figure out what each agent should do, and how many items she can collect? The analysis looks challenging. What follows are a few observations; A and B denote the players.

It looks like a player faces an NP-hard problem, since one can confront player A with a Travelling Salesman Problem: collect n items in his vicinity before player B (initially located some distance away) has time to reach the area and take some of them.

One question that is not about computational difficulty is: can such a game ever have a standoff in which the players stop moving and one or more items remain uncollected? The answer seems to be yes, subject to certain modelling assumptions.

Suppose that the items are located at the vertices of a graph, and players may traverse the edges. If the graph is allowed to be directed, a standoff is easy to construct; see the diagram below. All edges take 1 time unit to traverse. Suppose both players A and B start at the central vertex, and there is an item of value 1 on each of the other vertices. If player A moves towards the top vertex, player B will move towards the bottom-left vertex just downstream (according to the edge direction), and will be able to collect 2 of the 3 items, exploiting the directedness of the edges. Hence it looks like both players will be reluctant to leave the central vertex (especially if we assume that the aim is to get more items than the opponent, as opposed to maximising the number of items collected).
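
For concreteness, here is a minimal Python sketch of one graph with this behaviour (the particular edges are chosen for illustration and may differ from the diagram): a central vertex C with length-1 spokes to three outer vertices T, L and R, each holding an item, plus a directed 3-cycle T -> L -> R -> T of length-1 edges.

from heapq import heappop, heappush

# Directed adjacency list; every edge takes 1 time unit.  The exact graph is
# an assumption standing in for the diagram: spokes C->T, C->L, C->R plus the
# outer directed cycle T->L->R->T, with an item of value 1 on each of T, L, R.
EDGES = {
    "C": ["T", "L", "R"],
    "T": ["L"],
    "L": ["R"],
    "R": ["T"],
}

def travel_time(src, dst):
    """Shortest directed travel time from src to dst."""
    frontier, seen = [(0, src)], set()
    while frontier:
        d, v = heappop(frontier)
        if v == dst:
            return d
        if v in seen:
            continue
        seen.add(v)
        for w in EDGES[v]:
            heappush(frontier, (d + 1, w))
    return float("inf")

# If A commits to T, B replies by heading to L, the vertex just downstream of
# T; both arrive at time 1 and collect the items there.  The last item is at R.
time_A_reaches_R = 1 + travel_time("T", "R")   # A must go the long way round
time_B_reaches_R = 1 + travel_time("L", "R")   # B goes directly
print(time_A_reaches_R, time_B_reaches_R)      # 3 versus 2, so B ends up with 2 items

By symmetry the same thing happens whichever outer vertex A heads for first, so whoever leaves the centre first loses the count 2 to 1.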

Can we do something similar for undirected graphs? The following example is inspired by a secure protocol for bilateral trade of physical goods, which I used once or twice as a young child. The parties to the transaction place the items on the ground a few metres apart, and walk towards each other, passing at the halfway point. Assuming they want the exchange to go ahead, they continue towards their preferred item; the point is that neither trader can grab both items, under the reasonable assumption that neither can run 3 times as fast as the other.
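
A quick back-of-the-envelope check of that factor of 3, assuming the items sit at the two ends of a straight line of length d and both traders walk at the same speed until they meet at the midpoint:

d = 1.0                   # distance between the two items
cheat = d / 2 + d         # from the midpoint: on to the far item, then all the way back
honest = d / 2            # from the midpoint: just the remaining half to the nearer item
print(cheat / honest)     # 3.0, so a cheat would have to move 3 times as fast to grab both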

The figure below shows an undirected graph where player A is very close to an item worth 1.5 (we can assume he grabs it immediately), while player B is at distance 1 from each of the remaining two items of value 1. Player A is at distance 1.5 from each of these two remaining items.

Assuming that the aim of the game is to get the lion’s share of the items, A needs one more item, and B needs both remaining items. If A moves towards the top item, then B can move towards that same item, get there first, and then go back for the bottom item. If players can move freely (and reverse direction) on edges, B can stop A from collecting either item, by staying slightly closer than A to whichever item A is currently closer to. However, if B goes ahead and collects (say) the top item, then A can go for the bottom one as soon as B is close to the top one. The result is an impasse, especially if we assume that such an impasse means the game is tied and the overall prize is split.
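
Here is a rough numerical check of that reasoning in Python. The figure is not reproduced here, so the distances are assumptions consistent with the description: B’s vertex is at distance 1 from each remaining item, A’s vertex is at distance 1.5 from each, and travel between the two items goes back through the respective starting vertices.

D_A = 1.5   # A's distance to either remaining item
D_B = 1.0   # B's distance to either remaining item

# If A commits to the top item at time 0, B mirrors and collects it at time
# D_B; A, who has walked D_B towards it, must backtrack to his own vertex and
# then head for the bottom item.
a_reaches_bottom = D_B + D_B + D_A        # 3.5
b_reaches_bottom = D_B + (D_B + D_B)      # 3.0: B doubles back through his vertex
print(b_reaches_bottom < a_reaches_bottom)   # True: if A commits fully, B gets both items

# Conversely, once B has advanced x > 0.5 towards the top item, B would need
# x + D_B > 1.5 to double back, so A can safely head straight for the bottom one.
x = 0.6
print(x + D_B > D_A)                         # True: if B commits, A secures the bottom item

So each player prefers to wait for the other to commit first, which is exactly the impasse.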

Notice that you do not get this deadlock if players can move freely in the plane: in that version, A can move directly towards B, and if B approaches (say) the top item then A should approach the bottom one. So A wins with no impasse. It would be interesting to know whether some configuration of items and starting positions can produce an impasse in this setting of free movement in the plane.

Thursday, January 28, 2016

Go-playing AI

Back when I was a student I was a fairly enthusiastic Go player, and I always liked the fact that the game seemed resistant to efforts to make a strong Go-playing computer program. (At any rate, it resisted my own effort to write one.) Having followed the progress of Go-playing programs, I was of course interested in the success of the Google DeepMind program (articles in the BBC and Guardian) against Fan Hui, Europe’s top Go player. As noted in Neil Lawrence’s Guardian article, the DeepMind program doesn’t achieve the data efficiency of human players, so there is still work to do. And for those of us who like to imagine that Go is really supposed to be hard to program, there is a glimmer of hope: previous Go-playing programs performed better against a human opponent during the first few games, until the human opponent learned their weaknesses. Could Fan Hui eventually start winning, with a bit more practice?

Wednesday, October 05, 2011

Project Waterloo

I tried playing Microsoft Research’s Project Waterloo on Facebook; details are here, see also here. It’s a Colonel Blotto game: each of the two players has 100 soldiers to distribute in attacks on 5 hills [1], and you capture a hill if you attack it with more troops than your opponent sends there. In each round you divide your soldiers between the hills, doing your best to guess how the opponent is dividing his, and you win the round if you capture 3 or more hills. Quoting the MSRC site:
The game is complex from a game-theoretic perspective, involves randomized strategies, and can be approached by reasoning about the opponent's reasoning. We have also found it to be fun, engaging, and slightly addictive. It is thus a great test case for studying actual strategic behaviour of people on Facebook.

I’m wondering whether it’s meant to be straightforward to devise an optimal strategy... it’s a zero-sum game; the catch is that there is a huge number of pure strategies. But do you have to mix over a large number of pure strategies to guarantee breaking even over time? Maybe there’s a set of 5 good numbers (e.g. 34, 26, 21, 15, 4) that work if you allocate them at random to the hills; those suggested numbers look like a good bet against opponents who split their troops about equally amongst 3, 4 or 5 hills.
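
Out of curiosity, here is a quick Monte-Carlo sketch in Python (a sanity check rather than an equilibrium computation) of how that allocation does against opponents who split their troops roughly equally among 3, 4 or 5 hills. The opponents’ exact splits, and the convention that a tied hill is captured by nobody, are assumptions.

import random

MY_SPLIT = [34, 26, 21, 15, 4]          # the allocation suggested above (sums to 100)
OPPONENTS = {                           # assumed "split roughly equally" opponents
    "5-way": [20, 20, 20, 20, 20],
    "4-way": [25, 25, 25, 25, 0],
    "3-way": [34, 33, 33, 0, 0],
}

def play_round(mine, theirs):
    """Both sides assign their numbers to the 5 hills in a random order;
    a hill is captured with strictly more troops, and 3 captures win the round."""
    mine = random.sample(mine, len(mine))
    theirs = random.sample(theirs, len(theirs))
    captured = sum(1 for m, t in zip(mine, theirs) if m > t)
    return captured >= 3

random.seed(0)
for name, split in OPPONENTS.items():
    wins = sum(play_round(MY_SPLIT, split) for _ in range(100000))
    print(name, wins / 100000.0)

Against the 5-way split it wins every time; against the 4-way and 3-way splits the outcome depends on where the opponent’s zeros land, so the allocation is good but not a guaranteed winner even against these naive opponents.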

Next I start thinking up ideas for improvement, usually after losing a round... how about a multi-round variant, in which the surviving troops get to defend the hills they captured? In a subsequent round, a defender would cancel out (say) three attackers. You would also be allowed to send additional troops to a hill you already captured; they would first have to cancel out the opponent’s attackers at that hill, and any that survive would join its defenders. It looks like eventually all hills will end up with so many defenders that further attacks are futile, but there is no guarantee of how long it will take to reach that state.

[1] “Hill” comes from the Gross and Wagner paper linked to from the Wikipedia page. More abstractly, they are sometimes called battlefields or sites.

Monday, June 20, 2011

Computer games as a lens on higher education policy

Trying out the Facebook app "global warfare", I find that a player can equip his town with various facilities, one of which is a university. Of course, my own biases being what they are, I promptly acquire one and proceed to spend all my resources on upgrading it (not something that usually happens in real life, but computer games are there to allow players to indulge their fantasies). Anyway, given a university, a player can then click on "research" and select from a list of topics on which to spend one's (virtual) academic efforts. Unfortunately, "computational complexity" was not on the list. Instead, I settled for "drilling", which, when completed, causes your oil production to increase by 10%. This helps to make sense of the various calls for proposals that emanate from EPSRC; I have a vivid vision of our political leaders clicking on topics like "construction" or "military science" in an effort to obtain a quick reward. Maybe next time I will select "espionage", which could perhaps lead to cryptography, and thence to computational complexity.

Sunday, December 12, 2010

Game-theoretic board games

'Tis the season (almost) to sit around the fire playing board games and pretending to enjoy it. I recommend two board games, namely Poleconomy and Apocalypse, that I played a few times when I was a student; they came up in a recent conversation because both have some interesting game-theoretic content. I gave a talk at Microsoft Research (Cambridge) on Thursday, and Yoram Bachrach told me about the ripoff game, which has been played by human volunteers for cash prizes. Each individual game takes about one minute and works as follows. Each player is allocated a number, a fraction in the range [0,1], which is his “weight”. A subset of the players can form a winning team if their weights add up to at least 1. However, the winning team has not won the round until they have agreed on how to share the prize (worth 1 pound). For this purpose, each player controls another number in the range [0,1], the fraction of the prize that he requests from the winning team, and the team does not win (and share the prize) until its members’ requests add up to at most 1. Apparently the players sit in the same room but interact via computers, so the negotiation is somewhat stylized. Anyway, it turns out that the Shapley value is quite a good predictor of the winnings associated with the weights (although there is variation from round to round, and some players are better at the game than others). A computational agent that requested its Shapley value plus 10% performed well in competition.
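
For anyone who wants to experiment, here is a minimal Python sketch of the Shapley-value heuristic for a game of this type; the weights below are made up purely for illustration, and the exact rules used in the actual experiments may of course differ.

from itertools import permutations
from math import factorial

def shapley_values(weights, threshold=1.0):
    """Exact Shapley values for the simple game in which a coalition is worth 1
    if its weights sum to at least the threshold, and 0 otherwise."""
    n = len(weights)
    def value(coalition):
        return 1.0 if sum(weights[i] for i in coalition) >= threshold else 0.0
    phi = [0.0] * n
    for order in permutations(range(n)):          # average marginal contributions
        coalition = []
        for player in order:
            before = value(coalition)
            coalition.append(player)
            phi[player] += value(coalition) - before
    return [p / factorial(n) for p in phi]

weights = [0.6, 0.5, 0.4, 0.3]                    # illustrative weights only
phi = shapley_values(weights)
requests = [round(1.1 * v, 3) for v in phi]       # the "Shapley value plus 10%" bid
print([round(v, 3) for v in phi])
print(requests)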

This reminded me of an aspect of Poleconomy, a board game that simulates the interactions of politicians who happen to occupy corporate directorships “on the side”. (The game was developed in the early '80s.) From time to time, an “election” takes place, in which the players cast dice to determine the number of “votes” they obtain, and a Government may be formed by any subset of players who happen to have received a majority of the votes between them. There is some advantage to being in Government, so the immediate outcome of an election is a flurry of mental arithmetic and bargaining amongst the players to identify and agree upon a winning subset.

In Apocalypse, a player's move consists of a sequence of attacks. In each attack, the player chooses the number of units with which to attack another player, a whole number in the range 1...6, indicated by placing a standard die with the chosen number uppermost underneath a cup; the player being attacked tries to guess the number. If the attacked player guesses correctly, that is the number of units lost by the attacking player; otherwise, the attacked player loses a single unit (and the attacking player gains a nuclear weapon, or part thereof, so there is an incentive to make lots of attacks). Thus each attack is a kind of generalized matching pennies, in which the attacker should choose smaller numbers with higher probability than larger ones, though every number should get positive probability.
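
To make that concrete, treat each attack as an isolated zero-sum game in units (ignoring the nuclear-weapon bonus, which is a simplifying assumption): if the defender guesses the attacker's number k, the attacker loses k units; otherwise the defender loses 1. Making the defender indifferent between guesses then forces the attacker to choose k with probability proportional to 1/(k+1), which the following short Python check confirms.

# Attacker's equilibrium mix under the simplified model described above.
weights = {k: 1.0 / (k + 1) for k in range(1, 7)}
total = sum(weights.values())
attack_probs = {k: w / total for k, w in weights.items()}

# Sanity check: the defender's expected gain is the same for every guess,
# so no particular guess can be exploited.
for guess in range(1, 7):
    p = attack_probs[guess]
    defender_gain = p * guess - (1 - p)
    print(guess, round(attack_probs[guess], 3), round(defender_gain, 3))

Under this simplified model the probabilities come out roughly 0.31, 0.21, 0.16, 0.13, 0.10, 0.09, so small attacks are indeed much more likely than large ones, but a six-unit attack still happens about 9% of the time.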

Are there any other board games out there with interesting game-theoretic aspects?