Moo_the_cow
05-29-2004, 01:21 AM
This isn't really a new idea, but I would like to have it reconsidered.
I think that we should try skipping a range of 0.3 G or so every time we find a factor. Yes, I heard all the arguments against it last year: that factors are distributed randomly, that skipped ranges will be very hard to keep track of, etc. However, I still think we should apply this method, and here is why:
First, skipping a range after a factor is found will increase the rate at which factors are found. After all, how often do you find 2 factors within 0.3 G of each other? Most likely, you haven't found a pair that close together in the past month. Skipping ranges after each factor will let people sieve a range faster, with only a very small percentage of factors lost.
However, many say that factors are distributed randomly, so the chance of one factor landing very close to another is the same as the chance of it landing far away. While this is true theoretically, it doesn't seem to hold in practice. In the range of 61-62T, where factors were over 4x as dense as they are now, skipping 0.3 G after each factor would have cost just 5% of the factors, even though over 9.2% of the range would have been skipped.
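To make the "randomly distributed" baseline concrete, here is a minimal Monte Carlo sketch (Python; the factor density is an illustrative assumption, roughly matching the 61-62T era, not a measured value). It models factor locations as a Poisson process along the sieve range and measures what skipping 0.3 G after every hit costs. Under the purely random model, the fraction of factors lost comes out equal to the fraction of range skipped, so the 5%-loss-vs-9.2%-skipped numbers above are exactly the kind of empirical deviation this baseline lets you check for.

```python
import random

def simulate(range_g=1000.0, density_per_g=0.34, skip_g=0.3,
             trials=200, seed=1):
    """Sketch: factors as a Poisson process along the sieve range.

    range_g       -- length of the sieved range in G (illustrative)
    density_per_g -- expected factors per G (assumed, not measured)
    skip_g        -- range skipped after each factor found
    Returns (fraction of factors lost, fraction of range skipped).
    """
    rng = random.Random(seed)
    lost = found = total = 0
    skipped = 0.0
    for _ in range(trials):
        # Draw factor positions: exponential gaps -> Poisson process.
        pos, factors = 0.0, []
        while True:
            pos += rng.expovariate(density_per_g)
            if pos > range_g:
                break
            factors.append(pos)
        total += len(factors)
        horizon = 0.0  # everything below this point has been skipped
        for f in factors:
            if f < horizon:
                lost += 1  # factor fell inside a skipped stretch
            else:
                found += 1
                horizon = f + skip_g
                skipped += min(skip_g, range_g - f)
    return lost / total, skipped / (range_g * trials)

loss_frac, skip_frac = simulate()
```

With a density chosen so that about 9.2% of the range gets skipped, the random model also loses about 9.2% of the factors; a real-world loss of only 5% would mean factors avoid each other's neighborhoods more than pure chance predicts.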
Secondly, skipping ranges not only increases the rate at which factors are found but will also eliminate many PRP tests. Even though some say that skipped ranges are hard to keep track of, we don't actually need to keep track of them: at the current p value of 300T, the factor density is already so low that resieving those small gaps would never be worthwhile, so sieving is no longer beneficial at all unless we use some way of increasing the rate at which factors are found.
Without a way to increase the rate at which factors are found, sieving will probably die off at around 400T instead of possibly continuing to 600T. Considering these things, we should give this method a try :)