This is the start of the high range sieving thread.

What is proposed is to start sieving a higher n range.

Facts:

1. There are more factors at lower p.
2. A 1m<n<100m dat file is quite large and may not even work with cmov.
3. Starting a new dat file from 1T wouldn't find additional factors below n=20m until we get to ~500T, since that range has already been sieved to about that depth.
4. Some people believe sieving above 20m right now is pointless.
5. Our factor density is decreasing, making it harder to determine whether a range has been sieved or not.
6. This project will have to continue past n=20m if it is ever to finish.
7. Sieving a 20m<n<80m range runs at about 60% of the speed of sieving 1m<n<20m.
8. That speed is probably representative of a 1m<n<60m file as well (see the rough throughput sketch after this list).
9. Eliminating k's for which a prime has been found will speed up sieving.
10. A new client that hasn't made it to beta yet may increase sieve speeds by 5x.
11. Factor files don't take up a lot of space; if they ever do, we can find space for them as a community.

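To put facts 7 and 8 in perspective, here is a rough back-of-envelope sketch (Python, purely illustrative). The only number taken from the list above is the ~60% relative speed; the idea that factors are spread roughly evenly across n at a given p is just a heuristic assumption, not something measured.

[code]
# Rough throughput comparison: narrow file vs. wide file.
# Assumption (heuristic, not a project figure): at a given p, factors are
# spread roughly evenly across n, so factor yield per unit of wall-clock
# time scales with (width of the n range) * (p-rate).

narrow_width = 20 - 1      # 1m < n < 20m  -> ~19m of n
wide_width = 80 - 20       # 20m < n < 80m -> ~60m of n
relative_speed = 0.60      # fact 7: the wide range sieves at ~60% of the p-rate

narrow_throughput = narrow_width * 1.0
wide_throughput = wide_width * relative_speed

print(f"narrow file: {narrow_throughput:.0f} m-units of n covered per time unit")
print(f"wide file:   {wide_throughput:.0f} m-units of n covered per time unit")
print(f"ratio: {wide_throughput / narrow_throughput:.1f}x")
[/code]

Even at 60% of the p-rate, the wider file covers roughly twice as much n per unit of wall-clock time, so under that assumption it removes candidates (and future primality tests) faster overall.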

It is my belief that we should soon start sieving a larger range, one that always includes the n's above the double-check limit.

For example: stop the current [dc (double check) < n < 20m] sieve at some depth, say 600T, or once we find a prime, or once we get a new client.

Continue from that point forward with a new *.dat (dc < n < some higher limit); I suggest 80m, others believe 100m.

I don't think it's right to sieve 1-20, then dc-40, then dc-60, then dc-80, etc... It doesn't make sense. Widen the sieve range to the maximum we are ever going to do as soon as possible (dc<n<100m), then continue sieving to very deep p (a rough comparison is sketched below).
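To illustrate why one wide pass looks better than repeated widening, here is a sketch under one stated assumption: that the sieve's work per prime grows roughly like the square root of the n-range width (as it would for a baby-step/giant-step style sieve). That assumption is at least consistent with fact 7, since sqrt(19/60) is about 0.56, close to the observed ~60%. The slice boundaries below are only illustrative.

[code]
import math

# Assumption: work per prime grows roughly like sqrt(n-range width),
# i.e. baby-step/giant-step behaviour. Consistent with fact 7:
# sqrt(19/60) ~ 0.56, close to the observed ~60% relative speed.
def relative_cost(width_m):
    """Relative sieving cost for a range of the given width (millions of n)."""
    return math.sqrt(width_m)

# Option A: one wide pass, dc < n < 100m (~99m wide, taking dc to be ~1m),
# sieved to the target depth.
one_pass = relative_cost(99)

# Option B: the same n space sieved later as separate ~20m slices,
# each taken to the same depth on its own.
slices = [19, 20, 20, 20, 20]   # dc-20m, 20-40m, 40-60m, 60-80m, 80-100m
piecemeal = sum(relative_cost(w) for w in slices)

print(f"one wide pass: {one_pass:.1f} relative units")
print(f"five slices:   {piecemeal:.1f} relative units")
print(f"piecemeal is ~{piecemeal / one_pass:.1f}x the work")
# Widening by re-sieving overlapping dc-40m, dc-60m, ... files would be
# worse still: the low-n portion finds no new factors below its existing depth.
[/code]

Under that model, doing the same n space piecemeal costs a bit over twice the total sieving work of one wide pass to the same depth.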

A lot of factors will be removed from n=20m+ at low p, as sketched below. This can be a second effort for those who wish to do it now, or we could start the low p range once we get to n=18m.
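As a rough idea of how much that low-p pass is worth, here is the standard Mertens-style heuristic: the fraction of a fresh range's candidates that survives sieving to depth p shrinks roughly like 1/ln(p). The 1M starting depth below is an assumed illustrative value, not a project figure.

[code]
import math

# Mertens-style heuristic: the fraction of candidates in a fresh n range
# surviving trial factoring to depth p is roughly proportional to 1/ln(p),
# so the fraction removed between depths p1 and p2 is about 1 - ln(p1)/ln(p2).
def fraction_removed(p1, p2):
    return 1.0 - math.log(p1) / math.log(p2)

# Illustrative depths only:
print(f"1M -> 1T   : ~{fraction_removed(1e6, 1e12):.0%} of remaining candidates removed")
print(f"1T -> 600T : ~{fraction_removed(1e12, 6e14):.0%} of remaining candidates removed")
# Roughly half the candidates fall to the cheap low-p pass, while pushing
# from 1T to 600T removes only about another fifth -- which is why the
# low-p work on the new range is attractive as a separate effort.
[/code]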

Thoughts, comments?

We could also use the low p-range to test the new client.

Thanks to all