Wednesday, October 23, 2013

article on privatisation of universities

I recommend the article Sold Out by Stefan Collini, in the London Review of Books. It reviews two books about the process of privatisation/marketisation of higher education that has taken place in the UK over roughly the past 30 years. Collini’s article provides a highly informative overview of the development and current state of play of private-sector provision of HE in the USA and UK. It then explains very lucidly the problems inherent in the current system of student funding, and is very convincing on why it cannot be expected to last. My only criticism: the article is passionate, but doesn’t propose remedial action. That may be because there’s a bigger picture (missing from the article) that includes, for example, the recent Royal Mail privatisation. According to this bigger-picture theory, remedial action would not be specific to universities, but would involve a restoration of trust in public institutions and the public service ethic.

(Added later:) This article (Economics students need to be taught more than neoclassical theory) in the Guardian looks relevant to the general topic; see also this earlier article (Economics students aim to tear up free-market syllabus). The students’ criticism of neoclassical economics is mainly that it failed to predict the financial crisis. It could also be blamed for an approach to student funding that consists of, firstly, assuming all students are selfish agents, and secondly, repeatedly raising the price of higher education in search of some kind of market equilibrium.

Wednesday, October 09, 2013

What are the equilibria of this game?

In the course of writing a paper, we often construct examples of the sort of mathematical objects under analysis, as a means to figure out what’s true in general. Then the paper gets written, and the examples are stripped away, like the scaffolding from a completed building. Before it gets forgotten, I now provide an online description of the following game (which came up while writing this paper), since it sets you thinking about alternative solution concepts (exact/approximate Nash/correlated equilibria, well-supported or otherwise), and which one(s) we are supposed to like best. Or maybe it could be used as part of a game theory exercise. (It may be helpful to point out that a well-supported ε-approximate equilibrium is one in which no player ever uses a strategy that pays more than ε less than his best response. An approximate equilibrium that is NOT well-supported may allow some player to play a very poor strategy, provided it is played with low probability.)

Suppose there are n players, numbered 1 through n. Each player has 2 pure strategies, call them a and b. Payoffs will be either 0 or 1, according to the following rule. Player 1 gets paid 1 to play a and 0 to play b, UNLESS all other players play a, in which case it’s the other way around. Meanwhile, for i>1, player i gets paid 1 to play the same way as player i−1 (otherwise is paid 0).
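For concreteness, here is a minimal Python sketch of the payoff rule just described (the function name and the string encoding of profiles are my own, not from the paper):

```python
def payoff(i, profile):
    """Payoff (0 or 1) of player i (1-indexed) in a pure-strategy profile,
    encoded as a string of 'a'/'b' characters; profile[0] is player 1."""
    if i == 1:
        # Player 1 is paid to play 'a', unless all the other players
        # play 'a', in which case it's the other way around.
        others_all_a = all(c == 'a' for c in profile[1:])
        paid_action = 'b' if others_all_a else 'a'
        return 1 if profile[0] == paid_action else 0
    # For i > 1, player i is paid to play the same way as player i-1.
    return 1 if profile[i - 1] == profile[i - 2] else 0
```

For instance, with n = 3, `payoff(1, 'aaa')` is 0 (the others all played a, so player 1 should have played b), while `payoff(1, 'baa')` is 1.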

(If instead player 1 were paid to differ from player n, we would have a generalized Jordan game, with a unique Nash equilibrium in which all players flip fair coins to choose a or b.)

So — spoiler alert — I reckon the answers are as follows. For exact Nash equilibrium, it’s not hard to see that players 1 and 2 can’t play pure strategies, and in fact players 1 and 2 should flip fair coins, while the other players all play a. For exact correlated equilibrium, one solution (different from the NE) is for player 1 to flip a fair coin, while all the other players flip a single shared fair coin, i.e. they play the all-a’s vector or the all-b’s vector with equal probability. It looks like there is some freedom for any player to deviate slightly from this, while keeping it well-supported. In the neighbourhood of the original NE, only players 1 and 2 can deviate slightly, if we want the approximate equilibrium to be well-supported. (In fact, I think we can say more strongly that all the well-supported Nash equilibria are in the vicinity of the exact NE.)
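The claimed exact Nash equilibrium can be checked numerically by brute force for small n. The sketch below (my own code, under my reading of the payoff rule above) computes each player’s expected payoff for each pure strategy under a product distribution, for the profile in which players 1 and 2 flip fair coins and everyone else plays a:

```python
from itertools import product

def payoff(i, profile):
    """Payoff (0 or 1) of player i (1-indexed) in a pure profile ('a'/'b' string)."""
    if i == 1:
        paid = 'b' if all(c == 'a' for c in profile[1:]) else 'a'
        return 1 if profile[0] == paid else 0
    return 1 if profile[i - 1] == profile[i - 2] else 0

def expected_payoff(i, action, mixed):
    """Expected payoff to player i of the pure strategy `action`, when each
    other player j independently plays 'a' with probability mixed[j-1]."""
    n = len(mixed)
    others = [j for j in range(n) if j != i - 1]
    total = 0.0
    for bits in product('ab', repeat=n - 1):
        prof = [''] * n
        prof[i - 1] = action
        p = 1.0
        for j, c in zip(others, bits):
            prof[j] = c
            p *= mixed[j] if c == 'a' else 1.0 - mixed[j]
        total += p * payoff(i, ''.join(prof))
    return total

n = 5
mixed = [0.5, 0.5] + [1.0] * (n - 2)  # players 1, 2 flip fair coins; the rest play a
for i in range(1, n + 1):
    pa = expected_payoff(i, 'a', mixed)
    pb = expected_payoff(i, 'b', mixed)
    print(i, pa, pb)  # players 1-3 get 0.5 either way; players 4, 5 strictly prefer a
```

Every strategy in each player’s support achieves the maximum of the two expected payoffs (players 1–3 are indifferent; the rest strictly prefer a), so no unilateral deviation is profitable, confirming the exact NE for this value of n.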

Interestingly, when we give up on the “well-supported” requirement for correlated equilibria, lots of other things become possible. Notably, all the players can play a with high probability 1−O(1/n), and this is what would start to happen if natural learning dynamics were applied to learn an equilibrium. Eventually, the dynamics should do something different; maybe converge to something in the region of the exact NE.

Tuesday, October 01, 2013

2 research posts in Algorithms and Complexity, Oxford

2 research positions in algorithms and complexity at the University of Oxford, each tenable for up to five years

Professor Leslie Ann Goldberg is looking for two postdoctoral researchers to join the ERC project Mapping the Complexity of Counting at the University of Oxford. (This post at Luca Aceto’s blog contains a link to the announcement of this and other ERC grants.) This is a five-year project from 1 Mar 2014 to 28 Feb 2019. The positions are available for the duration of the project (subject to satisfactory performance).

The salary range is £29,541 - £36,298, depending on experience.

Deadline for applications: 15 Nov 2013
Details and application form at:
https://www.recruit.ox.ac.uk/pls/hrisliverecruit/erq_jobspec_version_4.jobspec?p_id=110002

The overall objectives of the project are:
  • To map out the landscape of computational counting problems (both exact and approximate), determining which problems are tractable, and which are intractable (quantifying the extent of tractability).
  • To discover complexity characterisations which elucidate the features that make counting problems tractable or intractable (telling us why each problem is tractable or intractable).

Within the context of these overall objectives, the goal is to study classes of counting problems which are as wide as possible (obviously including the problems that arise in practice), and within these classes, to develop complexity dichotomies/characterisations which are as precise as possible. The key problem domains that will be considered are graph homomorphism problems (including generalised weighted homomorphism problems), counting constraint satisfaction problems and holant problems. Key challenges include determining the effect of restrictions such as planarity, bipartiteness and bounded-degree constraints on the complexity of graph-theoretic counting problems.

Candidates should have a PhD in algorithms and complexity or in a closely related area of discrete mathematics.

Expertise in one or more of the following would be helpful: combinatorics or discrete probability, graph polynomials or partition functions, mixing rates of Markov chains, constraint satisfaction problems including algebraic methods for classifying their difficulty, holographic algorithms or holant problems.

REF submission deadline approaching

At the New Insights into Computational Intractability workshop, I find that some of our distinguished overseas speakers are blissfully unaware (or maybe: blissfully vaguely-aware) of ongoing work on submissions to the Research Excellence Framework. They’re unfamiliar with the heavy-duty administrative machinery that UK universities have set up to process the submissions, which will finally fall silent on the deadline of November 29th. There are gaps in their knowledge about the functional dependence of the number of researchers a ‘unit of assessment’ (i.e., academic department) can submit on the number of impact cases. They don’t know that “evidence” is a verb, not just a noun. They don’t know the word “REF-able”. And so on.

My advice: it’s a time to indulge your UK colleagues when they gripe about the REF. Especially if they’ve taken on any kind of responsibility for preparing a department’s REF submission. Buy them drinks. Appear sympathetic, and remind them that it’ll all be over in a few weeks.