A recent experience in my own research seems particularly relevant here, and I'm going to try to explain what happened (this will, at the very least, satisfy the promise I made to Michael Brough at about the same time to get some more serious mathematics into my blog). This is pretty much the first time I've ever tried to explain something technical in a non-technical setting, so let's see how it goes...

I have been working for a long time on the problem of determining the complexity of counting the number of homomorphisms between two graphs in modular counting classes. Although it isn't entirely the point of the story, I'll try to explain what this problem is.

A "homomorphism" between two objects is a map from one to the other that, in some sense, preserves their structure. A homomorphism that everyone who has studied any maths has seen is the homomorphism from the integers to the integers modulo 2. The integers modulo 2 (which we will be talking about a lot in this post) is the set {0,1} with the following rules:

0 + 0 = 0, 1 + 0 = 1, 1 + 1 = 0;

0*0 = 0, 1*0 = 0, 1*1 = 1.

You can think of 0 as meaning "even" and 1 as meaning "odd" (check that these rules make sense with those meanings). Now a homomorphism from the integers into this set is "send the even integers to 0, send the odd integers to 1". The "structure" preserved by this homomorphism is addition and multiplication. What do I mean by "preserved"? I mean that if you first send two integers to {0,1} and then add them, you get the same answer as if you add them together first and then send the answer to {0,1}. You can check that this is true using the rules you already know for adding integers (the sum of two odd integers is even, for example). We'll just do one example to make it clear.

13 + 14 = 27

Let's see what our homomorphism does to this equation. 13 is sent to 1 and 14 is sent to 0, and 1 + 0 = 1. The homomorphism does indeed send 27 to 1, so we're ok.
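The preservation rule is easy enough to check by machine, too. Here's a small Python sketch (the function name is my own) verifying that the parity map preserves both addition and multiplication over a range of integers:

```python
# A machine check of the worked example above: the "parity" map
# h(n) = n mod 2 is a homomorphism, i.e. it preserves + and *.
def h(n):
    return n % 2

for a in range(-20, 21):
    for b in range(-20, 21):
        # map-then-add agrees with add-then-map (using the {0,1} rules)
        assert (h(a) + h(b)) % 2 == h(a + b)
        # and likewise for multiplication
        assert (h(a) * h(b)) % 2 == h(a * b)

print(h(13), h(14), h(27))  # 1 0 1, matching 13 + 14 = 27 above
```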

Ok, so now you know what a homomorphism is, I'm going to confuse you by saying that this is entirely *not* the best way to think about graph homomorphisms. The best way to think about graph homomorphisms is the inspiration for another word we use for the same problem: "H-colouring". Here, "H" is the graph that the homomorphism goes into (the graph we're colouring will usually be called G). To see the reason for this, let's look at what a homomorphism into a complete graph (a graph with all edges present) looks like. Such a homomorphism maps each vertex of G to some vertex of H, and does so in such a way that no two adjacent vertices of G are mapped to the same vertex of H. If we colour the vertices of H Red, Green and Blue, then we can think of a homomorphism into H as a way of colouring the vertices of G Red, Green and Blue so that no two vertices of the same colour are adjacent (see picture). In other words, every edge in G is taken to an edge in H (the "structure" we preserve is the edges of G).

More generally, we can think of graph homomorphism as a way of colouring the vertices of G with the vertices of H. The rule then becomes that you can only have Red vertices adjacent to Blue vertices if Red is adjacent to Blue in H. In this sense, H-colouring really is a generalisation of colouring - we're still colouring the vertices of G, it's just that the rules are now more complicated than just "don't put two vertices of the same colour next to each other".
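To make the rule concrete, here is a small Python sketch (the names are my own invention) that checks whether a given map of vertices is a valid H-colouring, i.e. sends every edge of G to an edge of H:

```python
# A sketch of the H-colouring rule: a map f from the vertices of G to
# the vertices of H is an H-colouring exactly when every edge of G
# lands on an edge of H.
def is_h_colouring(f, g_edges, h_edges):
    h = {frozenset(e) for e in h_edges}
    return all(frozenset((f[u], f[v])) in h for u, v in g_edges)

# Example: G is a path a-b-c, H is the triangle on colours R, G, B.
G_EDGES = [("a", "b"), ("b", "c")]
K3_EDGES = [("R", "G"), ("G", "B"), ("B", "R")]

print(is_h_colouring({"a": "R", "b": "G", "c": "R"}, G_EDGES, K3_EDGES))  # True
print(is_h_colouring({"a": "R", "b": "R", "c": "G"}, G_EDGES, K3_EDGES))  # False: a-b is monochromatic
```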

Now, for any given graph G, a question we can sensibly ask is "how many H-colourings are there?". Actually, I've been asking a slightly easier (?) question - "is the number of H-colourings odd or even?". Now, we want to know how hard it is to decide whether the number of H-colourings is odd or even for *any* G. Here's a warm-up question. If H is the triangle from the picture above (K_3 for the mathematicians), then is there any graph for which it's hard to decide whether the number of H-colourings is odd or even?
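As a baseline, here's what brute-force counting looks like (a Python sketch of my own): try every one of the |V(H)|^|V(G)| maps from V(G) to V(H) and keep the valid ones. The exponential cost is why fast answers, even just about parity, are not obvious.

```python
from itertools import product

# Naive counting: try every map from V(G) to V(H), keep the ones that
# send every edge of G to an edge of H.
def count_colourings(g_vertices, g_edges, h_vertices, h_edges):
    h = {frozenset(e) for e in h_edges}
    total = 0
    for values in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, values))
        if all(frozenset((f[u], f[v])) in h for u, v in g_edges):
            total += 1
    return total

# G = path a-b-c, H = K_3 on colours R, G, B
n = count_colourings("abc", [("a", "b"), ("b", "c")],
                     "RGB", [("R", "G"), ("G", "B"), ("B", "R")])
print(n, "odd" if n % 2 else "even")  # 12 even
```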

Answer - no. For any graph*, the number of colourings with K_3 is even. Let's look at why. Take a colouring of the graph. Let's say it colours some of the vertices Red and some of the vertices Green. Well then we can build another colouring by

*switching Red and Green*. That is - we colour all of the vertices which are Red with Green, and all of the vertices which are Green with Red. This must still be a colouring, as there are no Reds next to Reds and no Greens next to Greens - this will still be true after we've switched them. So what? Well, given any colouring of G, we can find another colouring which just has Red and Green swapped. Importantly, if we swap Red and Green again, we get back to our original colouring. In other words, the colourings of G

*come in pairs*. Well - if I can divide the colourings up into sets of size two with none left over, then there must be an even number of them (that's pretty much what "even" means).
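The pairing argument can be checked mechanically. The sketch below (the example graph G and all names are my own choice) lists every K_3-colouring of a triangle and verifies that swapping Red and Green pairs each colouring with a different one, forcing the count to be even:

```python
from itertools import product

# The pairing argument, checked by machine: swapping Red and Green
# pairs every colouring with a different valid colouring, and swapping
# twice undoes it, so the colourings pair off and the count is even.
EDGES = [("a", "b"), ("b", "c"), ("c", "a")]  # example G: a triangle
K3 = {frozenset(e) for e in [("R", "G"), ("G", "B"), ("B", "R")]}

cols = []
for vals in product("RGB", repeat=3):
    f = dict(zip("abc", vals))
    if all(frozenset((f[u], f[v])) in K3 for u, v in EDGES):
        cols.append(f)

SWAP = {"R": "G", "G": "R", "B": "B"}
for f in cols:
    g = {v: SWAP[c] for v, c in f.items()}
    assert g != f and g in cols                      # a different, valid partner
    assert {v: SWAP[c] for v, c in g.items()} == f   # swapping twice undoes it

print(len(cols))  # 6, an even number
```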

So now we've learned a new trick for counting colourings. If there are two colours, like Red and Green in the triangle, which "look the same" as far as the graph is concerned, then we know there is an even number of colourings that use these colours. This leads to an important insight - remember the rules for adding even and odd numbers? Even + odd = odd and even + even = even. Even behaves like 0. In other words, modulo 2, the total number of colourings, including those which use the vertices we can "swap", is *the same* as the number of colourings which don't use those vertices. If there are two vertices which we can "swap", then

*we might as well delete them before we count the colourings*. (We call such a swap an "involution" of the graph - I figure I might as well say this now, as I'm probably going to use the word again later anyway, so you might as well know what it means.)
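Here is that deletion rule checked by brute force on an example of my own: in the graph H below, vertices 4 and 6 have identical neighbourhoods, so they can be swapped without changing the structure, and deleting both leaves the parity of the number of H-colourings unchanged for each test graph G:

```python
from itertools import product

# In H below, vertices 4 and 6 both have neighbourhood {3, 5}, so
# swapping them is a symmetry. Deleting both should leave the parity
# of the number of H-colourings of any G unchanged.
def count_homs(gv, ge, hv, he):
    h = {frozenset(e) for e in he}
    return sum(
        all(frozenset((f[u], f[v])) in h for u, v in ge)
        for vals in product(hv, repeat=len(gv))
        for f in [dict(zip(gv, vals))]
    )

HV = [1, 2, 3, 4, 5, 6]
HE = [(1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (6, 5)]
HV2 = [1, 2, 3, 5]          # H with the swappable pair 4, 6 deleted
HE2 = [(1, 2), (2, 3)]      # (their edges go with them)

for ge in [[("a", "b")], [("a", "b"), ("b", "c")]]:
    gv = sorted({x for e in ge for x in e})
    before, after = count_homs(gv, ge, HV, HE), count_homs(gv, ge, HV2, HE2)
    assert before % 2 == after % 2
    print(before, after)  # different counts, same parity
```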

Now we're getting to the theorem that I wanted to explain (remember when I mentioned Lipton about 3 pages ago? I'm getting there).

Consider the following process. You pick some vertices which you can "swap" without changing the structure of the graph, and you delete them. If there are any vertices in the new graph which you can swap, you delete them, and so on. Eventually, you'll get to a graph with no vertices which can be swapped (we'll count the graph with no vertices as a graph). We call a symmetry of the graph which just swaps pairs of vertices like this an "involution". Now, I have a theorem that says, essentially, that it doesn't matter in what order you do this process: you'll always get to the same place. Consider the graph below. You can start by swapping A and B, or you can start by just going all out and swapping A and F, B and E, and C and D, making the whole thing disappear. Notice that if you start by swapping A and B, you can then swap E and F, then C and D, and again the whole thing disappears.

Now. Why should this theorem be true? And why should I care? Well, it should be true for the following reason: remember that the process we're describing preserves the number of colourings modulo 2 for every graph G. So the number of ways of colouring any given graph G with H has the same parity as the number of ways of colouring G with the graph you get from H by deleting vertices in the above manner. So imagine that the theorem wasn't true; then from some H you could get to two different involution-free graphs. Then you'd have two different graphs, H and H', with the property that

*for every single graph in the world* the number of H-colourings of G had the same parity as the number of H'-colourings of G. This is just too big a coincidence to be possible. It can't be true. Except... why not? Graph automorphisms are complicated, and besides, mathematicians prove things.

I did this nearly two years ago. It was fairly technical. I used something called Newman's Lemma and had to learn the theory of something called reduction systems. Basically, I worked out in great detail how to go from the graph you get by deleting one involution to the graph you get by deleting a different involution (it was actually slightly more complicated than that, but that's beside the point). Along the way, I had to use some non-trivial group theory, and a rather bizarre fact about the lowest common multiple of a set of odd numbers as well as some tedious case analysis on the lengths of paths in G. What I'm trying to get across is that, whilst it was technically correct, the ideas in this proof were a very long way from the idea I had for *why* the theorem should be true.

I was in quite a bizarre epistemic situation. I knew the theorem was true, and I also knew why. Unfortunately, the reason I knew it was true and the reason *why* it was true were not the same!

If you're attentive, you'll have noticed I used the past tense in that previous paragraph. That's because this story has a happy ending. A year or so later, I came across something called the Lovász vector of a graph (there is more than one thing called the Lovász vector of a graph, so don't worry if you think you've heard of it and don't understand the following sentence). Essentially, the Lovász vector of a graph H tells us the number of H-colourings of every finite graph. There is a theorem in a book by Hell and Nešetřil which says that two graphs which share the same Lovász vector are the same. Remember my intuitive reasoning for why my theorem should be true? It would just be *silly* if there were two different H such that the number of H-colourings was the same for

*every single graph in the world*. Well, here were Hell and Nešetřil telling me that I was right!

There were a few details to iron out, but essentially the same proof as they give in their book tells us that if two graphs have no involutions and they have the same Lovász vector, then they are the same. Note that this is *why* the reduction system I had on graphs just had to be confluent. Intuitively, there is no possible reason why two different graphs should have the same Lovász vector. I always knew that this was the reason, and now I could prove it!
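I can't reproduce Hell and Nešetřil's proof here, but the object itself is easy to play with. The Python toy below (entirely my own sketch; "truncated" because the real vector has one entry for every finite graph G) computes the first few entries of the hom-count vector for two different four-vertex graphs and watches them disagree:

```python
from itertools import product

# A toy "truncated Lovász vector": the number of homomorphisms into H
# from each of a few small test graphs G.
def count_homs(gv, ge, hv, he):
    h = {frozenset(e) for e in he}
    return sum(
        all(frozenset((f[u], f[v])) in h for u, v in ge)
        for vals in product(hv, repeat=len(gv))
        for f in [dict(zip(gv, vals))]
    )

TEST_GRAPHS = [
    (["a"], []),                                              # a single vertex
    (["a", "b"], [("a", "b")]),                               # an edge
    (["a", "b", "c"], [("a", "b"), ("b", "c")]),              # a path
    (["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")]),  # a triangle
]

def truncated_vector(hv, he):
    return tuple(count_homs(gv, ge, hv, he) for gv, ge in TEST_GRAPHS)

# Two different graphs on 4 vertices with the same number of edges:
path = truncated_vector([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
star = truncated_vector([1, 2, 3, 4], [(1, 2), (1, 3), (1, 4)])
print(path)  # (4, 6, 10, 0)
print(star)  # (4, 6, 12, 0) - already distinguished by the path entry
```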

The new proof had a lot of other advantages over the old one (most notably, it works for any prime p, not just 2 - so, for p = 3, if you have 3 vertices which "look the same" to the graph, and delete them, you get an analogous result) but, more importantly, it is the

*right* proof. The proof follows the intuition exactly, and once you've seen the new proof, you know that the theorem is true, and you also believe that it has to be true (the new proof also deleted about 12-15 pages from my thesis, but oh well...).
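The mod-3 version of the deletion trick can be checked in the same brute-force style as before. In the sketch below (graphs my own choice, and my own reading of "3 vertices which look the same" as three leaves that a symmetry of H rotates into each other), deleting all three leaves preserves the number of colourings modulo 3:

```python
from itertools import product

# The p = 3 version: H is a star whose three leaves "look the same"
# (any rotation of the leaves is a symmetry of H). Deleting all three
# should preserve the count of H-colourings of any G modulo 3.
def count_homs(gv, ge, hv, he):
    h = {frozenset(e) for e in he}
    return sum(
        all(frozenset((f[u], f[v])) in h for u, v in ge)
        for vals in product(hv, repeat=len(gv))
        for f in [dict(zip(gv, vals))]
    )

HV, HE = [0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]  # centre 0, leaves 1, 2, 3
HV2, HE2 = [0], []                               # the three leaves deleted

for gv, ge in [(["a"], []),
               (["a", "b"], [("a", "b")]),
               (["a", "b", "c"], [("a", "b"), ("b", "c")])]:
    before, after = count_homs(gv, ge, HV, HE), count_homs(gv, ge, HV2, HE2)
    assert before % 3 == after % 3
    print(before, after)  # 4 1, then 6 0, then 12 0 - equal modulo 3
```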

So, that's the story of my proof, and an attempt to give some idea of at least one reason why proofs in general are a good idea.

* There is actually one exception - see if you can spot where the exception hides itself in the argument given.
