Tuesday, February 24, 2009

Failures

Theoretical computer scientists can fail (as researchers) if their proofs are wrong. As an applied algorithms person, though, your proofs can be correct and your theorems deep, and you can still fail. It is like being an engineer: your assumptions have to hold. Here are some examples:
  • I once gave a talk on scheduling problems that arise when you use broadcasts from the base station to wireless laptops. Broadcast means that if you have to send the same bits to many laptops, you can do it with a single copy, so it is efficient. It also gives nice abstract problems, and I was on a roll, showing 2-opts or PTASes. In the audience was a systems researcher who slayed me with a simple question: how do you know the bits were received correctly? Well, each of the laptops should ack. Then chances are a subset of them will fail, so you have to rebroadcast, now to a specific subset of laptops, and iterate. The model then becomes messier, the results less clear.
  • I once heard someone give a talk on network routing in the presence of edge deletions. The motivation was that if a truck ran over a cable and cut it, the link between the two routers would disappear, so you want to compute reroutes. A systems researcher in the audience said: but physical links are bunched together into large cables, so (a) if there is a cut, a bunch of links will go down simultaneously, and (b) while you can determine the logical links between routers easily, say by looking at the routing table, it is difficult to get the operations people to tell you how they bunched the wires together. That is, the physical world of wires is different from the logical world of links, and indecipherable from it. Alas.
  • Then there are times when you look for a theory and fail. I looked at problems where the longer you run the algorithm, the better the approximation. The quality of the solution found by the algorithm is like a counter getting progressively closer to OPT, and you can stop the computation any time you are happy with the accuracy. A bunch of us called these "progressive algorithms" and spent a long time searching for a theory of such algorithms (a toy sketch of the idea is just below). No luck.
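To give a flavor of what we meant, here is a minimal sketch of a progressive algorithm (a made-up toy example, not one of the problems we actually studied): a local search for MAX-CUT that reports its best cut after every improving move, so the reported value only goes up and you can stop whenever the answer is good enough for you.

```python
import random

def progressive_max_cut(n, edges, seed=0):
    """Toy progressive algorithm: single-flip local search for MAX-CUT that
    yields its best cut value after every improving move. The reported value
    never decreases, and the caller may stop consuming it at any time."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]   # random initial partition

    def cut_value():
        return sum(1 for u, v in edges if side[u] != side[v])

    best = cut_value()
    yield best                                     # first, crude answer
    improved = True
    while improved:
        improved = False
        for v in range(n):
            side[v] ^= 1                           # tentatively flip vertex v
            value = cut_value()
            if value > best:                       # keep the flip only if the cut grows
                best, improved = value, True
                yield best                         # a progressively better answer
            else:
                side[v] ^= 1                       # undo the flip

# Stop the computation any time the current answer is accurate enough.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
for value in progressive_max_cut(4, edges):
    print("cut value so far:", value)
    if value >= 4:    # 4 is the optimum for this little graph; stop if we reach it
        break
```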
I can tell you more stories: instances where the theory triumphs in theory, and a question or comment from the sidelines derails the purpose.

5 Comments:

Anonymous said...

Your "progressive algorithms" have also been studied a lot under the name of "anytime algorithms". Although in that case you never know the quality of the approximation, only that it gets better and will eventually hit OPT. In your case, is there a quality guarantee that gets better?

7:33 PM  
Anonymous said...

As somebody said, in theory, there is no difference between theory and practice. In practice, there is.

7:53 PM  
Anonymous said...

Although in that case you never know the quality of the approximation, only that it gets better and will eventually hit OPT.

If you have an FPTAS then you can compute a worst case bound on the quality of the approximation as time goes by. This is done by running the FPTAS at increasing lengths, as mentioned here:

http://www.cs.uwaterloo.ca/~alopez-o/cspap/contract_algorithms/index.html

7:11 AM  
Anonymous said...

The meta observation in anytime/progressive algorithms and others of this ilk (online algorithms in some database papers) is that they take an underlying approximation algorithm and run it for (geometrically) increasing times/accuracies/..., getting bicriteria guarantees (e.g., running the wrapper for time T gets something as good as the best you could do with a budget of T/2, etc).
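Roughly, the wrapper looks like this (a sketch only, with hypothetical names; the contract algorithm and objective below are stand-ins, not taken from any particular paper):

```python
import random
import time

def toy_contract_alg(budget_seconds):
    """Stand-in for a contract algorithm (e.g. an FPTAS whose epsilon is
    chosen from the budget): it must be given its time budget up front,
    and its answer improves with more time. Here it just samples candidates
    for the whole budget and returns the best objective value seen."""
    deadline = time.monotonic() + budget_seconds
    best = float("-inf")
    while time.monotonic() < deadline:
        x = random.uniform(0.0, 10.0)
        best = max(best, -(x - 7.3) ** 2)   # toy objective, maximized at x = 7.3
    return best

def anytime_from_contract(contract_alg, total_time, first_budget=0.001):
    """Doubling wrapper: run the contract algorithm to completion with budgets
    first_budget, 2*first_budget, 4*first_budget, ... and keep the best
    completed answer. Whenever we stop, the best answer is as good as what the
    contract algorithm produces on a budget that is a constant fraction of the
    total time spent -- the bicriteria guarantee mentioned above."""
    best = None
    budget = first_budget
    start = time.monotonic()
    # Start another run only if it can finish before the overall deadline.
    while time.monotonic() - start + budget <= total_time:
        value = contract_alg(budget)        # this run completes within its budget
        best = value if best is None else max(best, value)
        budget *= 2                         # geometric doubling
    return best

print(anytime_from_contract(toy_contract_alg, total_time=0.25))
```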

-- metoo

6:42 AM  
Anonymous said...

Correct. It would be nice to produce the same type of guarantees without having to restart the FPTAS for each time length, but that seems much harder.

5:27 AM  
