Originally posted by Keroberts1
The number of people running the secret account may not actually be enough. Although a double check is only 1/100th as likely to find a prime, as was mentioned before (and that figure may not hold at this depth), there was a much better chance of finding a prime in the first place at this level, and since these tests can be finished a lot faster, the optimal level at which to keep the double check may be far beyond our current level. Does anyone have more info on the error rate and how it depends on the size of the N value? Also, what is the likelihood of a number with a given N value being prime? I believe it is somehow related to the inverse of the natural log, but it is different because of the attributes of Proth numbers. The final figure I would need is the amount of time required to do a test at different N values. I heard it stated on the forum before that doubling the N value would quadruple the time required. Is this true, or is it more complicated?
You'll find answers to these questions in the thread on a Resource Allocation Model. The best estimates of error rates at that time were extremely low; we were experiencing much lower error rates than GIMPS, perhaps because we haven't attracted as many aggressive overclockers. If error rates have remained low, double checking is a poor use of resources until we get to much higher N values.
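To make the trade-off concrete, here is a minimal back-of-envelope sketch, not the actual Resource Allocation Model. It assumes the standard heuristic that a Proth candidate k*2^n + 1 is prime with probability roughly C / ln(k*2^n + 1), and that a single test costs time roughly proportional to n^2 (so doubling n roughly quadruples the time). The constant C, the example k, the error rate, and the N values below are illustrative placeholders, not project figures.

[code]
# Back-of-envelope sketch (illustrative assumptions only, see above).
import math

C = 1.0              # assumed constant absorbing sieving / Proth-specific effects
K = 21181            # example multiplier k, purely illustrative
ERROR_RATE = 0.003   # assumed per-test error rate; observed rates were lower

def prime_probability(n, k=K, c=C):
    """Heuristic chance that k*2^n + 1 is prime: ~ c / ln(k*2^n + 1)."""
    return c / (n * math.log(2) + math.log(k))

def test_cost(n):
    """Relative test time, assuming cost scales like n^2."""
    return n ** 2

def primes_per_unit_work(n):
    """Expected primes found per unit of CPU time at exponent n."""
    return prime_probability(n) / test_cost(n)

def double_check_value(n_first, n_double, error_rate=ERROR_RATE):
    """Compare expected primes per unit work from first-time tests at n_first
    against double checks at n_double. A double check only 'finds' a prime if
    the first test erroneously missed it, so its yield is scaled by the
    error rate."""
    first = primes_per_unit_work(n_first)
    double = error_rate * primes_per_unit_work(n_double)
    return first, double

if __name__ == "__main__":
    first, double = double_check_value(n_first=4_000_000, n_double=1_000_000)
    print(f"first-time tests : {first:.3e} expected primes per unit work")
    print(f"double checks    : {double:.3e} expected primes per unit work")
[/code]

With a low error rate, the double-check line comes out far below the first-time line, which is the point made above: until error rates rise or the first-time frontier moves to much larger N, double checking is a poor use of resources.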

William