HHH,
Part of the reason we are sieving in 1T chunks is, as you stated, to make record keeping easy.
But those low-p ranges also have a lot of factors in them, so getting them done in 1T chunks from the beginning was important: it reduced the dat file's size a LOT, and we wanted to ensure that people would get them done in a reasonable time frame.
Why?
If you look at Mike's pages, particularly this one: http://www.aooq73.dsl.pipex.com/scores_p.htm
And in particular this table...
Code:
2^40 - 2^41 ( 1T - 2T) 0.00% 0.00% 18387/18387
2^41 - 2^42 ( 2T - 4T) 0.00% 0.00% 17343/17343
2^42 - 2^43 ( 4T - 9T) 0.00% 0.00% 16275/16275
2^43 - 2^44 ( 9T - 18T) 0.00% 0.00% 15617/15617
2^44 - 2^45 ( 18T - 35T) 0.00% 0.00% 14907/14907
2^45 - 2^46 ( 35T - 70T) 0.00% 0.00% 14108/14108
2^46 - 2^47 ( 70T - 141T) 0.00% 0.00% 13784/13784
2^47 - 2^48 ( 141T - 281T) 0.00% 0.00% 11750/11750
2^48 - 2^49 ( 281T - 563T) 1.20% 1.01% 10926/11038
2^49 - 2^50 ( 563T - 1126T) 51.07% 46.71% 5594/10498
2^50 - 2^51 (1126T - 2252T) 99.72% 99.09% 96/10498
2^51 - 2^52 (2252T - 4504T) 99.96% 99.25% 79/10498
You can see that the p level at which you sieve a 1T chunk makes a big difference in how many factors are found. (THIS IS THE HEART OF FACTOR DENSITY)
The number of factors a range yields depends strongly on how many units of n (where p = 2^n) it covers, not on how many T wide it is; a rough model of this is sketched below.
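A minimal sketch of that heuristic in Python: each doubling of p yields a roughly similar factor count, so a range's expected yield scales with log2(p2/p1) rather than with its width in T. The function name and the per-doubling constant (fitted by eye to the first table row) are mine, and the real yield drifts downward slowly as p grows, so treat the numbers as rough.
Code:
import math

def expected_factors(p1, p2, per_doubling=18387):
    # Heuristic: the factor yield of a range scales with the number of
    # doublings of p it covers, log2(p2/p1). per_doubling is fitted to
    # the 2^40-2^41 row above; in practice it decays slowly at higher p.
    return per_doubling * math.log2(p2 / p1)

T = 1e12
print(expected_factors(1 * T, 2 * T))      # one doubling       -> 18387
print(expected_factors(141 * T, 281 * T))  # ~one doubling      -> ~18300
print(expected_factors(100 * T, 101 * T))  # a 1T chunk at 100T -> ~264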
O.K. so what am I talking about????
Each line of the above table represents a sieve range split at powers of two, p = 2^n. For example, from n = 47 to 48 (i.e. 141T-281T) we found 11750 factors, or ~83 factors per 1T sieved. From 1T to 2T, by contrast, a single 1T chunk yielded all 18387 factors, i.e. 18387 factors per 1T sieved.
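Here is that per-1T arithmetic as a quick check, with row data copied from the table above. Note I use the exact 2^n bounds for the widths, so the low row comes out a bit under the rounded 18387-per-1T figure quoted above (2^40-2^41 is ~1.1T wide, not exactly 1T).
Code:
low_rate  = 18387 / ((2**41 - 2**40) / 1e12)   # 2^40-2^41, ~1.1T wide
high_rate = 11750 / ((2**48 - 2**47) / 1e12)   # 2^47-2^48, ~140.7T wide
print(round(low_rate))              # ~16700 factors per 1T
print(round(high_rate))             # ~83 factors per 1T
print(round(low_rate / high_rate))  # ~200x denser at the low end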
Thus far we have sieved the 991-50M dat up to about 70T, which is about 46n worth (2^46 ≈ 70T), with about 4n remaining (to 2^50 ≈ 1126T). The 1-20M dat has been sieved to >49.5n, with less than 0.5n to go.
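For anyone who wants to check the n accounting, converting a depth in T to n (where p = 2^n) is just a base-2 log; a quick sketch (the helper name is mine):
Code:
import math

def n_of(p_in_T):
    # Express a sieve depth p (given in T = 1e12) as n, where p = 2^n.
    return math.log2(p_in_T * 1e12)

print(n_of(70))        # ~46.0: the ~70T depth of the 991-50M dat
print(2**50 / 1e12)    # ~1126T: where 4 more n (doublings) would land
print(2**49.5 / 1e12)  # ~796T: the >49.5n depth of the 1-20M dat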
What this basically means is that we could skip all p from 100T up to the current sieve point and not miss a lot of factors; we would then return to those ranges later, when n approaches 20M, or when we run out of ranges for proth.
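To put a rough number on "not miss a lot": under the same per-doubling heuristic, deferring everything from 100T up to some point P costs on the order of per_doubling * log2(P / 100T) factors. In the sketch below, the 900T endpoint and the ~11000 per-doubling yield are both my own illustrative guesses (the real yields at those depths are in the table's lower rows).
Code:
import math

T = 1e12
per_doubling = 11000   # rough per-doubling yield near 100T+, eyeballed
                       # from the lower rows of the table (11750, ~11000)
P = 900 * T            # hypothetical current sieve point, illustration only
deferred = per_doubling * math.log2(P / (100 * T))
print(round(deferred)) # ~34900 factors deferred, vs ~110000 already found
                       # below 141T in the table above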