I managed to identify the cause of the discrepancies between two of our products. We fixed one of them; the other was a messy hack we didn't try to emulate.
One of the problems with our system is that it's often difficult to determine right answers. We can detect crashing, and that's definitely a problem. We can even be sure about infinite loops and assertion failures. But to wildly overgeneralize, when it comes to deciding "is this set of results better than this other set of results? Which one is 'right'?" we wouldn't know the right answer if it came up to us, sang a jaunty little song, and started dancing a jubilant little i-am-the-right-answer dance.
This is not quite true. We do have some methods of determining what's right, but they involve lots of user testing and evaluations. And often we're working on demo time scales, which means no time for rigorous evaluations. And so questions of system behavior get decided by whatever seems to suit whatever demo is in mind at the moment. And these decisions linger and linger.
Shortly before Clairvoyance went down the tubes, a fit of hack-frenzy (as described earlier) came over me as I wrote a piece of e-mail, with the result that the mail ended up taking the following form:
Yes, it would be better to solve that problem in way X instead of way Y. But X is extremely difficult for these reasons...
Damn. I see how to do it. I'll do it, but under these conditions: 1) I do it during the day, not hacking late at night. ... n) Someone else decides whether this is the right thing to do, not me.
No one ever did resolve whether it was the right answer; it was implemented, and therefore it was so.
If I get another job, I would like one with well-defined answers; it would make a pleasant change.
The upshot of all this: today was a day of reasonably productive hacking. It might have been better if I had gone to bed at a more reasonable hour the night before.