1143439251189941 | 67607*2^9009131+1
1 gone, 31 to go
... for the number of k/n pairs at k=67607 and 9M < n < 10M
459030938766785899 | 67607*2^9036611+1
2 gone, 30 to go
Nuri, what bounds are you using?? Also, are you keeping your stage 1 file???
Originally posted by vjs: nuri what bounds are you using??
B1=255000, B2=4080000. I thought something around this would be sufficient, together with 3-4 factors from sieving up to 2^50.
Originally posted by vjs: Also are you keeping your stage1 file???
Yes, but only after somewhere around 9040000. I was out of town for a month or so, and could not change it before that..
Oops, maybe I've set it wrong..
Could you please confirm how to generate the save files? thx.
You are using prime95, right? The way I know is:
You use the line
Pminus1=k,2,n,1,B1,B1
Let it compute, take the save file of Stage1, store it somewhere else, run the line
Pminus1=k,2,n,1,B1,B2
and get the savefile of Stage2. If you only want the latter, you run the last line only.
So, I have still the Stage1-savefiles of the largest prime factoring effort, I'm back home and have enough mem, so I will run them now.
Yours H.
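To make the two-line recipe above concrete, here is a small Python sketch that just builds the worktodo lines, assuming Prime95's `Pminus1=k,b,n,c,B1,B2` format as described. The k/n values are taken from this thread; setting B2 equal to B1 is what makes the first run stop after stage 1, which is the moment to copy the save file aside.

```python
# Sketch of the two-step worktodo workflow described above,
# assuming Prime95's Pminus1=k,b,n,c,B1,B2 line format.

def pminus1_line(k, n, b1, b2):
    """Build a Pminus1 worktodo line for k*2^n+1 (so c=1)."""
    return f"Pminus1={k},2,{n},1,{b1},{b2}"

k, n = 67607, 9046211          # example values from this thread
b1, b2 = 100000, 1000000

stage1_only = pminus1_line(k, n, b1, b1)  # B2=B1: run stops after stage 1
full_run    = pminus1_line(k, n, b1, b2)  # picks up stage 2 from the save file

print(stage1_only)
print(full_run)
```

Run the first line, stash the stage 1 save file somewhere safe, then run the second.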
I see.... thx.
So, I need to do that stuff manually. I thought there was a way to do it by inserting some lines into prime.ini etc... I guess that's why I couldn't do it..
On second thought, maybe I should change the strategy of attack...
Is this possible?
1- Run all k/n pairs with, say B1=100000 and keep the save files
Question: If I do that, will I have to store the save files somewhere else after each test, or can I just continue without manual intervention?
2a- Rerun the pairs with, say B1=200000 and keep the save files
Question: Will 1 + 2a take the same time as running at B1=200000 at once, or will it take longer?
2b- Rerun the pairs with, say B2=1000000 and keep the save files
Question: Will I be able to rerun k/n pairs with, say B1=200000 or B2=2000000, etc after that, without additional burden w.r.t. running B2=2000000 at the very beginning?
In short, what I have in mind is, if possible, to first sweep through all pairs with smaller bounds, increase the bounds gradually if not enough factors are found, repeat the process a third time, etc., until all remaining 30 factors are found. This is feasible, of course, only if it is possible to build on the save files more than once.
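A rough cost model for the staged plan above (questions 1, 2a, 2b): if stage 1 time grows roughly linearly with B1, stage 2 roughly linearly with B2, and resuming from a save file only pays for the increment, then the staged sweeps cost about the same total time as one big run. The per-unit constants here are purely illustrative, not measured.

```python
# Rough cost model for the staged P-1 plan, assuming stage 1 time is
# ~linear in B1, stage 2 ~linear in B2, and that resuming from a save
# file only costs the increment over the previous bound.
# The constants are hypothetical placeholders, not measurements.

S1_COST = 1.0e-3   # hypothetical seconds per unit of B1
S2_COST = 1.0e-4   # hypothetical seconds per unit of B2

def staged_cost(b1_steps, b2_steps):
    """Total time when each step resumes from the previous save file."""
    t = b1_steps[0] * S1_COST + b2_steps[0] * S2_COST
    for prev, cur in zip(b1_steps, b1_steps[1:]):
        t += (cur - prev) * S1_COST      # only the B1 increment is repaid
    for prev, cur in zip(b2_steps, b2_steps[1:]):
        t += (cur - prev) * S2_COST      # only the B2 increment is repaid
    return t

one_shot = staged_cost([200000], [2000000])
staged   = staged_cost([100000, 200000], [1000000, 2000000])
print(one_shot, staged)   # under this model the two come out equal
```

So under these assumptions the staged approach costs nothing extra in CPU time; the real-world overhead is the per-run startup and the bookkeeping of the save files.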
It looks like it is possible to do it, with minimal additional CPU time cost. I'll report if I was mistaken and that's not the case.
I decided to do the first run with B1=100000 and B2=1000000, starting at n=9046211.
For lower n, I had B1=255000 and B2=4080000 and unfortunately no save files.
Nuri,
I wouldn't bother doing a stage 1 then a larger stage 1 etc. Just go with the largest B1 you think it will require to get enough factors, with say B2=10*B1.
If you don't get enough factors you could then try a larger B2, say B2=20*B1.
I believe that if you simply run using Pminus1= you do get a temp file (z**** or something like that). If you finish a B1, B2=10*B1 run and want to do a second B2 run, the command is something like B2'-B2, where B2 is from the first run.
Example
Pminus1=....,1,B1,B2
Pminus1=....,1,B1,B2'-B2
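Whatever the exact syntax turns out to be (the B2'-B2 form above is a recollection and should be checked against the Prime95 docs), the idea is that a second stage 2 pass only needs to cover the primes between the old and new bound. A small sketch that just emits the two raw-bound lines, with k/n borrowed from this thread:

```python
# Hedged sketch of the B2 extension idea: after a run to (B1, B2_old),
# a follow-up run to B2_new should only have to cover B2_old..B2_new,
# provided the stage save file is still around. The line format below
# is an assumption (Pminus1=k,b,n,c,B1,B2) and is not guaranteed to
# match what any given Prime95 version expects for continuations.

def extension_lines(k, n, b1, b2_old, b2_new):
    first  = f"Pminus1={k},2,{n},1,{b1},{b2_old}"
    second = f"Pminus1={k},2,{n},1,{b1},{b2_new}"  # same B1, larger B2
    return first, second

first, second = extension_lines(67607, 9046211, 100000, 1000000, 2000000)
print(first)
print(second)
```

As vjs says below, try it on one test first before committing a whole queue to it.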
Himmm, I didn't know the B2'-B2 stuff, thx.
I'm pretty sure it works for Prime95; it certainly does for ECM 6.0. Give it a try on one test first.
I'm just thinking about another option: what about 15M<n<16M? I think that range only has something like 1046 factors. Also it hasn't been touched by P-1 yet. Might be faster to bring that one below 1000. Also, we could declare all k=67947?? reserved for that range now? Just a thought.
Another point is that the 9M<n<10M range has actually been tested once already. Project-wise it makes more sense to do the higher one.
Yeah, I know that... It does not make any sense project wise. Think of it as a kind of obsession...
Yup that's sort of what I figured in the first place. Remember there is also ecm but of course I'd do P-1 with fairly high bounds first.
I'd assume that all 15-digit factors have been found for these numbers up to p=1T (we are practically there already).
So if you did want to run ECM, it seems reasonable to start at no less than the 20-digit level, probably more likely 25, or a B1 somewhere in between the 25- and 30-digit levels. I think this would depend more on the memory requirements for stage 2.
digits    B1     curves
  20     11K       90
  25     50K      240
  30    250K      500
That seems like a lot of curves... and certainly a lot of work. Looking at Greenbank's page will give you some idea of how many factors you can expect to find with given B1, B2 values. It seems like the optimal B1:B2 ratio was somewhere around 12 to 14. Also remember you're only getting an additional 50% of the factors for doubling the amount of work at any B1:B2 ratio.
Example:
Case 1: B1=50K,  B2=700K  -> factors found: X,  time required: T
Case 2: B1=200K, B2=2800K -> factors found: 2X, time required: 4T
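The two cases above are roughly consistent with the 50%-per-doubling rule of thumb: Case 2 costs 4T, i.e. two doublings over Case 1, so the rule predicts X * 1.5^2 = 2.25X, close to the 2X quoted. A one-liner check:

```python
# Check the "+50% factors per doubling of work" rule of thumb against
# the Case 1 / Case 2 example: 4x the work is two doublings.
import math

doublings = math.log2(4)              # Case 2 does 4x the work of Case 1
yield_multiplier = 1.5 ** doublings   # expected factor-count multiplier
print(yield_multiplier)               # 2.25, close to the quoted 2X
```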
P-1 found a factor in stage #2, B1=100000, B2=1000000.
67607*2^9063851+1 has a factor: 3726912393954779
3 gone, 29 to go
Not bad. You might want to start with a few of these first and see what happens:
9110000 9150000 [passed by]
9209000 9350000 [passed by]
9380000 9400000 ? ? [passed by]
9680000 9700000 ? ? [passed by]
Thx for the idea vjs... I've also thought of running the untouched ranges first, but when I realized that running only those would not be sufficient to achieve the goal, I decided on a sweep-all approach. This also decreases the burden of keeping track of what I've done, what's next, etc.
I've allocated two machines to the project.
One is a PC which is relatively faster at Stage 2, and the other is a laptop which is relatively faster at Stage 1.
For B1=100000, B2=1000000
- The PC finishes Stage 1 in around 11000 sec, whereas the laptop takes 5500 sec.
- The PC finishes Stage 2 in around 8500 sec, whereas the laptop has to spend 22000 sec.
So, no Stage 2 work on the laptop and minimal Stage 1 work on the PC (only until the laptop prepares enough Stage 1 save files to feed the PC).
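As a rough sanity check of this split, the quoted timings confirm that piping laptop Stage 1 save files into PC Stage 2 beats letting each machine run full tests on its own (this sketch assumes one test at a time per machine and steady queues):

```python
# Sanity check of the two-machine split, using the timings quoted
# above (seconds per test).

PC_S1, PC_S2 = 11000, 8500
LAPTOP_S1, LAPTOP_S2 = 5500, 22000

# Each machine doing full tests independently (pairs per second):
independent = 1 / (PC_S1 + PC_S2) + 1 / (LAPTOP_S1 + LAPTOP_S2)

# Pipeline: the laptop emits a Stage 1 save file every 5500 s, the PC
# consumes one every 8500 s, so the PC's Stage 2 is the bottleneck.
pipelined = 1 / PC_S2

print(pipelined > independent)   # the pipeline wins
print(LAPTOP_S1 < PC_S2)         # laptop outpaces the PC, so a backlog
                                 # of Stage 1 save files builds up
```

The second comparison matches the observation below that the PC will eventually not be able to keep up with the laptop.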
Current work queues and thereafter.
- The PC crunches B1=100000, B2=1000000 work for 9045000-9100000 (9000000-9045000 was already done with larger bounds and no save files, and I do not intend to go back there unless I cannot drop below 1000 after a couple of sweep-throughs of the remaining ranges). After 9100000, the PC will take over Stage 1-finished tests from the laptop in 100k chunks.
- The laptop crunches B1=B2=100000 (Stage 1 only) for 9100000-10000000, which will then be fed to the PC queue in 100k chunks.
I have to spend some time on ETA of these queues to decide on what to do next.
It looks like, when the laptop finishes its B1=100000 queue, it will get the save files from the PC for B1=100000, B2=1000000 and crunch them to B1=200000, B2=1000000, and then pass them back to the PC for B1=200000, B2=2000000, etc.
There will be a time when the PC will not be able to keep up with the laptop. As I mentioned above, I have to spend some time on ETA of the queues to decide the details of the action plan.
Nuri I was curious if you finished this sieve range???
860000-862000 Nuri (with 991-50M dat)
I guess you already thought about the holes in the reservations...
I guess I did... I was out of town for a couple of weeks and could not find the time to cross-check reservations, send finished ranges to factrange, etc.
Well, if it's possible to double-check or send the factors again to factrange, that would be cool. In the meantime I'll mark it as complete.
Jason
I'll do so as soon as I visit my friend whose PC did the other half of that range.
Especially for the single k specialists I have created a set of pages like the all users page, but for individual k.
k=4847 is http://www.aooq73.dsl.pipex.com/2005/ui/19998.htm
k=5359 is http://www.aooq73.dsl.pipex.com/2005/ui/19997.htm (prime, so not much this year!)
...
k=67607 is http://www.aooq73.dsl.pipex.com/2005/ui/19987.htm
Enjoy
I like it Mike
The more info we have on factors for k's etc the more interesting it is.
I'm surprised by the limited number of factors for secondpass w.r.t. 67607.
I guess that k and 19249 are pretty light however...
Good work.
67607*2^9175347+1 has a factor: 398193466655495081803
398193466655495081803 | 67607*2^9175347+1
4 gone, 28 to go
progress so far
P-1 found a factor in stage #1, B1=100000.
67607*2^9445707+1 has a factor: 2609790718264729
2609790718264729 | 67607*2^9445707+1
5 gone, 27 to go
P-1 found a factor in stage #1, B1=100000.
67607*2^9528651+1 has a factor: 49713463448254842283
49713463448254842283 | 67607*2^9528651+1
6 gone, 26 to go
P-1 found a factor in stage #1, B1=100000.
67607*2^9541251+1 has a factor: 74908262920873179613
74908262920873179613 | 67607*2^9541251+1
7 gone, 25 to go
P-1 found a factor in stage #1, B1=100000.
67607*2^9697707+1 has a factor: 110710470704730133
110710470704730133 | 67607*2^9697707+1
8 gone, 24 to go
Finally, a contribution from the sieve...
872.252T 67607 9426731 8.723 Fri 18-Nov-2005 145333.400 (1) minbari
9 gone, 23 to go
P-1 found a factor in stage #2, B1=100000, B2=1000000.
67607*2^9485451+1 has a factor: 76453102778865746489
76453102778865746489 | 67607*2^9485451+1
10 gone, 22 to go
P-1 found a factor in stage #2, B1=100000, B2=1000000.
67607*2^9504971+1 has a factor: 149184001413027233
149184001413027233 | 67607*2^9504971+1
11 gone, 21 to go
P-1 found a factor in stage #1, B1=100000.
67607*2^9835371+1 has a factor: 933294164025961
933294164025961 | 67607*2^9835371+1
1 of 1 verified in 0.03 secs.
0 of the results were new results and saved to the database.
This factor lies within Keroberts1's sieving range. I guess he's recently found and submitted it as well. I cannot check right now, as Mike's pages are currently offline.
Anyway, if so, that would be the second contribution from the sieve, and most importantly...
12 gone, 20 to go
Himmm, it was found by garo on September 3rd.. It's strange that this k/n pair was left in the queue.
285.072P 22699 9835390 296847.911 Sat 03-Sep-2005 2
3748.840T 10223 9823517 295280.835 Sat 03-Sep-2005 2
933.294T 67607 9835371 282130.517 Sat 03-Sep-2005 2
So, it's still 11 gone, 21 to go..
Just make sure you have the latest results.txt file
http://www.seventeenorbust.com/sieve/results.txt.bz2
Quad 2.5GHz G5 PowerMac. Mmmmm.
My Current Sieve Progress: http://www.greenbank.org/cgi-bin/proth.cgi
I guess that single k/n pair simply slipped away during the first few weeks when I was out of town.
Since I'm focused on the whole million range for a single k, it is possible to check what's going on through time.
At the beginning, I created the ini file and checked number of k/n pairs with the figure in Mike's page, they were the same.
And I kept checking regularly for changes in the number of remaining k/n pairs, so that it would be possible to track any new factors (such as the 872.252T 67607 9426731 factor from minbari).
The problem, I suppose, is that I did not notice the single new factor while I was out of town during those first couple of weeks.
And, of course, I'll recreate the worktodo.ini with the latest results.txt file once I start the second tour (i.e., B1=200k etc.) within a week or so.