retesting?
I am currently running k=55459 n=2761006. It has to be a re-test, because according to the status page k=55459 has been tested through over 7M. Can anyone point me to a thread or a FAQ/Wiki entry which would explain the situation, in particular why retests are necessary? Does it mean that the first test wasn't reliable?
Thanks
Igor
Greenbank
10-05-2005, 11:48 AM
See the 5th or 6th post (by Alien88) of the "If you can't get a block" thread in this forum.
Alien88 wrote:
"
The main reason we're running second-pass for a little bit is because we never intended for it to get as far behind as it is. Now is just a decent time to get caught up on it..
as for how long? probably not too long.. i can't give a 'real' estimate as to when we'll switch back over as of yet.
"
Retesting is required to make sure that a prime hasn't been missed. There's a small chance that a PRP test already performed was incorrect. The causes can be anything from a hardware error to cosmic radiation: all it takes is one bit flipped in the middle of a calculation for the whole result to be wrong.
By double-checking we make sure the same residue is reported back on two different executions.
The chances of exactly the same strangeness occurring in two separate tests are absolutely tiny, but still possible.
Greenbank's answer is perfect... here is another version.
For every completed test (k/n pair) a residue is sent to the server. If this residue is 000000000...00, the k/n pair is prime; if it's anything else, it's not prime. In order to get the correct residue the computer must not make any mistakes for the entire time the test is being computed (this could be a week). If an error is made at any point, the resulting residue will be incorrect. Two matching residues are good enough to say that a particular k/n pair is not prime.
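To make the residue idea concrete, here is a minimal sketch. It assumes a simple Fermat PRP test with base 3; the actual client uses FFT-based big-number arithmetic and reports a 64-bit residue (all zeros signalling a prime), so this is only an illustration of why one flipped bit ruins the whole result:

```python
def prp_residue(n, flip_at=None):
    """Compute 3^(n-1) mod n by square-and-multiply.
    If flip_at is set, flip one bit of the accumulator at that step,
    simulating a hardware error in the middle of the computation."""
    result = 1
    base = 3 % n
    e = n - 1
    step = 0
    while e:
        if e & 1:
            result = (result * base) % n
        base = (base * base) % n
        e >>= 1
        step += 1
        if step == flip_at:
            result ^= 1 << 5          # a single flipped bit

    return result

n = 10**9 + 7                          # a known prime, so the clean residue is 1
clean = prp_residue(n)
assert clean == prp_residue(n)         # two error-free runs always agree
corrupted = prp_residue(n, flip_at=10)
assert corrupted != clean              # one bit flip changes the final residue
```

Because the corrupted residue is essentially random, two independent runs hitting an error and *still* agreeing is the "absolutely tiny" chance mentioned above.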
So what are the chances that this residue is incorrect??? Let's say a 2% chance of error...
errors from: overclocking, malicious users, overheating, cosmic rays, faulty clients, aliens, etc...
So if there is a 2% chance of error, this basically means 1 in 50 tests is wrong, and that wrong test could be hiding a prime.
So as long as we can run 50 smaller tests in the time it takes to run one test, we are breaking even effort-wise.
Now add to this...
The probability a k/n pair is prime decreases with increasing n; the error rate may be higher than 2%; if we miss a prime at low n, it may take a very long time before we find a prime at higher n; and the more we sieve and factor, the fewer total tests we will have to do at higher n...
What this comes down to is that 1 in 50 is much too high. It's probably more like: if we can run 10-20 second-pass tests in the time it takes to run 1 first-pass test, we are breaking even.
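The arithmetic above can be sketched in a few lines. The 2% error rate and the rough heuristic that a candidate's chance of being prime falls off like 1/n are illustrative assumptions from this thread, not project measurements:

```python
# Back-of-envelope version of the break-even argument.
error_rate = 0.02                       # assumed chance a first-pass test is wrong
n_low, n_high = 2_761_006, 7_000_000    # retest exponent vs. first-pass front

# Naive break-even: 1 wrong result per 1/error_rate tests, so retesting
# pays off if 50 retests fit in the time of one first-pass test.
naive = 1 / error_rate                  # 50.0

# But a candidate at lower n is more likely to be prime (roughly ~1/n),
# so a recovered low-n result is worth more than a fresh high-n test,
# which lowers the break-even threshold:
weight = n_high / n_low                 # ~2.5x more likely to be prime
adjusted = naive / weight               # ~20 retests per first-pass test
print(round(naive), round(adjusted))    # 50 20
```

Under these assumed numbers the threshold drops from 50 to roughly 20, which is where the "10-20 second-pass tests" figure comes from.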
Read up on a couple of posts; search for terms such as error rate, probability, missed prime, etc... It's been discussed before at great length.