Update on the high-n sieving,
For the past few weeks Joe_O, myself, and a few others have been doing some high-n sieving. We have gone through various dats, etc., as you can see from previous discussions, and are currently sieving a dat with a size of ~50M. (This is near optimal from a pure sieving standpoint; the optimum is ~53M.)
The dat we are using covers 991<n<50M, ~2.7 times larger than the 1.5M<n<20M dat currently in use by the main effort. This 2.7x size increase reduces the client speed by roughly 15% when compared to the 18.5M-wide dat in current use.
Since our dat overlaps the current effort's range, we have found several factors missed by both the previous and current efforts.
What we have sieved so far: all of p=~64K to p=7T with the 991<n<50M dat. Several holes exist, but basically all p<25T is either in progress or complete.
In addition to this low-p sieving, we have been collecting large factors (n>20M) and small factors (n<1.5M) from the factrange.txt files submitted by those who chose to send them to factrange@yahoo.com. We have taken these factors, as well as the factrange output from our own efforts, and applied them to Joe's database of all unfactored k/n pairs in the range 0<n<100M. Currently the smallest unfactored n is n=991.
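For anyone curious what "applying" the factrange factors looks like, here is a minimal sketch. The `p | k*2^n+1` line format, the sample lines, and the function name are assumptions for illustration; only the n cut-offs (1.5M and 20M) come from the description above.

```python
import re

# Assumed proth_sieve-style factor line: "p | k*2^n+1"
FACTOR_RE = re.compile(r"^(\d+)\s*\|\s*(\d+)\*2\^(\d+)\+1$")

def split_factors(lines, n_low=1500000, n_high=20000000):
    """Keep only factors outside the main effort's 1.5M<n<20M window."""
    small, large = [], []
    for line in lines:
        m = FACTOR_RE.match(line.strip())
        if not m:
            continue  # skip malformed or blank lines
        p, k, n = (int(g) for g in m.groups())
        if n < n_low:
            small.append((p, k, n))   # small-n factor, n<1.5M
        elif n > n_high:
            large.append((p, k, n))   # large-n factor, n>20M
    return small, large

# Hypothetical example lines, for illustration only:
sample = [
    "1000003 | 21181*2^1000+1",
    "2000029 | 22699*2^25000000+1",
]
small, large = split_factors(sample)
```

The surviving (p, k, n) triples would then be checked against the database of unfactored pairs and removed where the factor verifies.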
In order to reduce the total number of unfactored k/n pairs in this database, we have also sieved 50M<n<100M to approximately 3T, because the factor density is so high at low p.
Our results...
We have managed to reduce the 991<n<50M dat file from >20MB to approximately 8.1MB. The memory requirement for running proth_sieve has also been reduced, from a high of ~50MB to ~32.5MB, while the client has actually gotten faster.
Table of results...
See the post above for a description of the table columns.
Code:
(n>) (n<)   Start     Now  T.Fact     10K    2.5T      3T     3T+     5T+
   0    1   28187   27609     578      39      17     139     251     132
   1    3   53908   53158     750      23      22      76     378     251
   3    8  131984  131369     615       0       0     284     143     188
   8   10   53115   52847     268       0       0     124      61      83
  10   20  265330  264119    1211     240     335       0     284     352
  20   30  648872  300372  348500  331271    6492       0    7520    3217
  30   40  648663  301172  347491  330829    6236       0    7610    2816
  40   50  649463  302275  347188  330923    6099       0    7371    2795
  50   60  649117  312789  336328  318159   11629       0    5938     602
  60   70  648603  315006  333597  315355   12319       0    5696     227
  70   80  648590  315497  333093  310861   16388       0    5712     132
  80   90  648497  314856  333641  310689   17239       0    5639      74
  90  100  648923  315669  333254  310061   17379       0    5792      22
   -  Sum 5723252 3006738 2716514 2558450   94155     623   52395   10891
(n>) (n<)   Start     Now  T.Fact     10K    2.5T      3T     3T+     5T+
   0    1   28187   27609     578      39      17     139     251     132
    dat %     100   97.95    2.05    0.14    0.06    0.49    0.89    0.47
   1   20  504337  501493    2844     263     357     484     866     874
    dat %     100   99.44    0.56    0.05    0.07    0.10    0.17    0.17
   0   50 2479522 1432921 1046601  993325   19201     623   23618    9834
    dat %     100   57.79   42.21   40.06    0.77    0.03    0.95    0.40
  20   50 1946998  903819 1043179  993023   18827       0   22501    8828
    dat %     100   46.42   53.58   51.00    0.97    0.00    1.16    0.45
  50  100 3243730 1573817 1669913 1565125   74954       0   28777    1057
    dat %     100   48.52   51.48   48.25    2.31    0.00    0.89    0.03
   0  100 5723252 3006738 2716514 2558450   94155     623   52395   10891
    dat %     100   52.54   47.46   44.70    1.65    0.01    0.92    0.19
As you can see, the total number of unfactored k/n pairs for 991<n<100M has been reduced by a little more than 47%.
The number of k/n pairs in the range 20M<n<50M has been reduced by 53.5%, more than half.
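The headline percentages follow directly from the summary table sums; a quick check (all numbers copied from the table above):

```python
total_start = 5723252   # 0<n<100M pairs at start
total_now = 3006738     # pairs remaining now
reduction = 100 * (total_start - total_now) / total_start

mid_start = 1946998     # 20M<n<50M row, Start
mid_fact = 1043179      # 20M<n<50M row, T.Fact
mid_reduction = 100 * mid_fact / mid_start

print(round(reduction, 2), round(mid_reduction, 2))  # 47.46 53.58
```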
Perhaps Joe can comment regarding the factors missed by the main effort which would require testing. I know we found >100, but a lot of them were between first pass and second pass (n<3M); however, more than a handful have been above first pass.
Any questions or comments?
P.S. I'd like to personally thank all of you who have submitted your factrange.txt files.