Quote Originally Posted by jMcCranie
First, I have several log files that I will send after the last one finishes.

Secondly, why is it necessary to redo all of the old ones? We know that all exponents below a certain limit (27,700,000) have been checked - I don't see a need to redo those.
We don't know which ones were double-checked -- and at PrimeGrid I've got a really good window into the quality -- or lack thereof -- of the computers used in distributed computing. In general, we no longer trust any result unless it has been double-checked. The problem with not double-checking results immediately is that when a computer starts going bad, you have no way of detecting it, so any result that doesn't have matching residues from two different computers is suspect. Unless we get really lucky with the log files, we have no residues at all for 4 of the 6 k's.
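For what it's worth, the rule is simple enough to state in code. This is only an illustrative sketch in Python (the tuple layout and names are mine, not PrimeGrid's actual bookkeeping): a result counts as verified only when at least two different machines report the same residue for the same candidate.

[code]
from collections import defaultdict

def verified(results):
    """results: iterable of (candidate, machine_id, residue) tuples."""
    reports = defaultdict(set)
    for cand, machine, residue in results:
        reports[cand].add((machine, residue))
    good = set()
    for cand, rep in reports.items():
        machines = {m for m, _ in rep}
        residues = {r for _, r in rep}
        # trusted only if distinct machines agree on a single residue
        if len(machines) >= 2 and len(residues) == 1:
            good.add(cand)
    return good
[/code]

Everything that falls outside that set -- which right now is nearly everything on 4 of the 6 k's -- is suspect.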

Calculation errors are also proportionally more likely to occur on larger candidates: a bigger n means a much longer test, so even a fairly low, but non-zero, error rate per iteration translates into a much higher chance that the whole test comes back wrong.
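To put a rough number on that, treat each iteration of a test as having a small independent chance p of silently corrupting the result (the p = 1e-9 below is an arbitrary stand-in, not a measured error rate):

[code]
def p_bad_test(n, p=1e-9):
    # chance that at least one of n iterations silently goes wrong,
    # assuming independent errors at a fixed rate p per iteration
    return 1 - (1 - p) ** n

for n in (10**6, 10**7, 10**8):
    print(f"n = {n:>9}: P(bad test) ~ {p_bad_test(n):.1%}")
[/code]

With those made-up numbers, a test at n = 10^6 goes bad about 0.1% of the time, while a test at n = 10^8 goes bad nearly 10% of the time. The errors concentrate exactly where a miss is most expensive.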

Our position on double-checking is especially rigid when it comes to conjectures like SoB. Consider a hypothetical k where the first prime is at n=100,000 and the second prime is at n=100,000,000. If you miss the first prime because of an undetected computation error, you waste many years of unnecessary computing searching for the second prime.
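Here's a back-of-envelope version of that cost, under an assumed model (an LLR-style test of k*2^n+1 runs roughly n iterations of an FFT whose size grows with n, so call a single test ~n^2*log(n) work; the candidate density after sieving is also a guess):

[code]
from math import log

def test_cost(n):
    # assumed cost model: ~n FFT iterations of size ~n per test
    return n * n * log(n)

STEP = 10_000   # pretend one candidate survives sieving per 10,000 n
one_missed = test_cost(100_000)
wasted = sum(test_cost(n) for n in range(100_000, 100_000_000, STEP))
print(f"wasted work ~ {wasted / one_missed:.0e} times the missed test")
[/code]

However crude the constants are, the conclusion isn't sensitive to them: the work thrown away searching up to n = 100,000,000 dwarfs the cost of the one botched test by many orders of magnitude.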

It's actually not as horrible as it might seem at first glance. The vast majority of candidates are small, and small candidates test quickly, so the recheck goes much faster than the original search did.
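Under the same assumed ~n^2*log(n) cost model as above (and ignoring that current hardware is faster than whatever ran the original tests), redoing everything below the old limit of n = 27,700,000 costs roughly as much as pushing the first-time search just a few million n higher:

[code]
from math import log

def test_cost(n):
    return n * n * log(n)   # same assumed per-test cost model as above

STEP = 10_000
recheck = sum(test_cost(n) for n in range(STEP, 27_700_000, STEP))
n, extended = 27_700_000, 0.0
while extended < recheck:      # how far the same effort pushes new territory
    extended += test_cost(n)
    n += STEP
print(f"recheck below 27.7M ~ searching on to n ~ {n / 1e6:.1f}M")
[/code]

So the double check is a modest, fixed investment compared to the open-ended search still ahead.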