This post on the Fortnow/Gasarch blog caught my eye, partly because some of my own work is cited towards the end. In the post, Lance Fortnow argues that proofs of NP-completeness do not constitute work in computational complexity, but rather in algorithms. This is because a typical NP-completeness proof consists of an efficient algorithm that translates an instance of some pre-existing NP-complete problem into a corresponding instance of the problem of interest.
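To see the point concretely, here is a minimal sketch (my own illustration, not from Lance's post) of just how algorithmic such a reduction is. It uses the textbook reduction from Independent Set to Vertex Cover: in a graph with n vertices, S is an independent set of size k exactly when its complement is a vertex cover of size n - k, so the "algorithm" does nothing more than rewrite the target size.

```python
def independent_set_to_vertex_cover(num_vertices, edges, k):
    """Map an Independent Set instance (G, k) to an equivalent
    Vertex Cover instance (G, n - k).

    S is an independent set of size k in G iff V \\ S is a
    vertex cover of size n - k, so the graph is passed through
    unchanged and only the parameter is adjusted.
    """
    return num_vertices, edges, num_vertices - k
```

The reduction runs in time polynomial in the input size (here, trivially so), which is what makes it an "efficient algorithm" in the sense of the argument above.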
Lance's dividing-line between "algorithms" and "computational complexity" disagrees with a passage of text I wrote for one of the pages of my research group, in which I said: "This kind of analysis belongs to two closely-related research fields: algorithmics, the design of algorithms with provable performance guarantees, and computational complexity, the classification of problems according to the kinds of algorithms that can solve them." According to what I wrote, an NP-completeness result is a work of computational complexity, albeit not usually a very interesting one.
I would not be that interested in getting into a debate about where exactly to draw this dividing-line. But I tend to prefer my own definition, mainly because it defines computational complexity in terms of what it is, rather than what it isn't. The other problem with Lance's definition is that one can imagine some Grand Inquisitor of Computational Complexity managing to exile all sorts of results from the hallowed halls of computational complexity to the chilly uplands of algorithms, on the grounds that they are tainted with algorithmic content. Take, for example, Savitch's Theorem, and its corollary that nondeterministic polynomial space is equivalent to deterministic polynomial space. The proof uses the reachability method, which is clearly an algorithm.
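For readers who haven't seen it, the reachability method at the heart of Savitch's proof can be sketched in a few lines (a toy illustration of the standard recursion, not the space-bounded Turing-machine formulation). The question "is v reachable from u in at most 2^i steps?" is answered by guessing a midpoint w and recursing on both halves; the recursion depth is logarithmic, which is what yields the quadratic space bound.

```python
def reachable(vertices, adj, u, v, i):
    """True iff v is reachable from u in at most 2**i steps,
    where adj maps each vertex to the set of its out-neighbours.

    Each level of recursion halves the allowed path length and
    stores only the current midpoint, mirroring the O(log^2 n)
    space analysis in Savitch's Theorem.
    """
    if i == 0:
        # A path of length at most 1: either u == v or a single edge.
        return u == v or v in adj.get(u, set())
    # Guess a midpoint w and solve both halves recursively.
    return any(reachable(vertices, adj, u, w, i - 1)
               and reachable(vertices, adj, w, v, i - 1)
               for w in vertices)
```

Deciding whether t is reachable from s in an n-vertex graph then amounts to calling `reachable` with i = ceil(log2(n)), since any simple path has length at most n - 1. It is hard to look at this and deny that it is an algorithm, which is exactly the point.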