Coffee Space


Detecting TSP


After watching the film “Travelling Salesman” [1], I was interested in the implications of the Travelling Salesman Problem (TSP) and what would happen to systems that rely on the assumption that P ≠ NP. Running under the assumption that somebody has already solved the problem, how would we know? What tests or mechanisms are in place to detect whether somebody has in fact solved it?

I think the first point to consider is that anybody who has in fact solved the problem will certainly not want anybody else to know of their success. Look at the Enigma code, for example: when the Allies discovered a way to break Nazi codes in little time, they then had a duty to make sure the Germans didn’t find out, otherwise the encryption would have been improved. As long as your enemy thinks they are winning, they are likely to continue doing what they are doing. It’s highly likely that the same scenario would play out if somebody were to make significant progress on TSP.

To me this now turns into a question of “how do you detect somebody who does not want to be detected?”. There are several answers to this question, and I’ll discuss them below in their relevant sections.

Honey Pot

When you want to find out how hackers are breaking into a system and what they do once inside, it’s often the case that a honey pot will be used. After all, the best way to find out how they gain root access on a system is to simply watch them gaining root access on a system. This works particularly well, with many tutorials existing on how to do this yourself and videos showing some of the interesting results from such experiments.

Back to our question of TSP: an attacker who has solved it may be able to log in as some kind of legitimate user without raising suspicion, but interestingly, they still cannot access a system without first logging in. It’s not difficult to bridge the gap: purposely set up an unused server and wait for the first person to log in. This would at least allow you to know that somebody has solved TSP.
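The key property of such a server is that it has no legitimate users at all, so any successful login is an alert. A minimal sketch of that idea, assuming a hypothetical auth log loosely modelled on sshd-style output (the log lines and hostnames below are made up for illustration):

```python
# Minimal honey-pot alert sketch: the decoy server has no legitimate
# users, so *any* successful login is evidence of a break-in.
# The log format here is hypothetical, loosely based on sshd output.

SAMPLE_AUTH_LOG = """\
Jan 10 03:12:44 decoy sshd[412]: Failed password for root from 203.0.113.7
Jan 10 03:13:02 decoy sshd[415]: Accepted password for admin from 203.0.113.7
Jan 10 03:13:30 decoy sshd[418]: Accepted publickey for backup from 198.51.100.2
"""

def detect_intrusions(log_text):
    """Return every successful-login line from the decoy's auth log."""
    return [line for line in log_text.splitlines()
            if "Accepted" in line]

for alert in detect_intrusions(SAMPLE_AUTH_LOG):
    print("ALERT:", alert)
```

Failed attempts are ignored here on purpose: on a decoy box it is only a *successful* login that proves the attacker got past the cryptography.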


The next issue is that your enemy will then know that you know they can solve the problem, meaning the advantage is theirs alone. If you want the advantage in such a situation then it’s in your best interest to make sure your enemy does not know that you know.

In reality this translates to replicating an “undetected” system break-in, revealing sensitive-looking information on a real system. In order to gain from the knowledge that your enemies have solved TSP, you must allow them into a real, used system and let them obtain compromised information. This means that you must be able to tell the difference between an actual user and an attacker. This could be done by asking users to log in at set times or from set geographic locations, making any non-planned access obvious.
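The set-times-and-locations idea above can be sketched as a simple policy check. The user names, windows, and country codes below are entirely hypothetical, a minimal illustration rather than a real access-control scheme:

```python
# Sketch of telling planned user logins apart from attacker logins,
# assuming users have agreed in advance to fixed login windows and
# locations. All names and values here are hypothetical examples.

from datetime import time

# Each user: (allowed start, allowed end, allowed country code)
LOGIN_POLICY = {
    "alice": (time(9, 0), time(17, 0), "GB"),
    "bob":   (time(8, 0), time(12, 0), "DE"),
}

def is_suspicious(user, login_time, country):
    """A login outside the agreed window or location is flagged."""
    policy = LOGIN_POLICY.get(user)
    if policy is None:
        return True  # unknown user: always suspicious
    start, end, allowed_country = policy
    return not (start <= login_time <= end and country == allowed_country)

# A planned login passes; a 3am login from elsewhere is flagged.
print(is_suspicious("alice", time(10, 30), "GB"))  # False
print(is_suspicious("alice", time(3, 0), "RU"))    # True
```

Crucially, a flagged login would not be blocked: blocking it would tell the attacker they were detected, which defeats the whole point.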

Once you know your enemy has solved TSP, without them knowing that you know, you can then start to play games with the information they read, i.e. leading them off the trail of significant events so that your side strategically benefits, whilst making this look like accidental success. It’s also important that in some cases you lose.

Of course, history is not without lessons, and we should look at what the great mathematicians at Bletchley Park (Station X) did to see how best to act on the information. It’s important to consider that your enemy is playing a very similar game, trying not to let you think they have solved TSP, so it’s extremely important to get this balance right and not alert your enemy.


[1] Wikipedia