
Thread: P-1 coordination thread - discussion

  1. #41
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Quote Originally Posted by glennpat
    In my range, 14000000 to 14010000, I have 444 numbers left to do.
    Glenn
    That's funny, I only count 216 k n pairs in that interval. See Attached.
    Attached Files
    Joe O

  2. #42
    Quote Originally Posted by Joe O
    That's funny, I only count 216 k n pairs in that interval. See Attached.
    My data was in my file twice. I must have run makewtd twice. I just experimented with it, and it appends the data if run again. The one you attached has a few for 19249 which I don't have and believe I don't need.

    Thanks for finding this!!!

  3. #43
    Quote Originally Posted by glennpat
    My data was in my file twice. I must have run makewtd twice. I just experimented with it, and it appends the data if run again. The one you attached has a few for 19249 which I don't have and believe I don't need.

    Thanks for finding this!!!
    Whoops. It's mine that has the 19249 entries, which I shouldn't have.

  4. #44
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    I guess you figured out that you don't need to P-1 those that are prime.

    Also, you didn't post which B1, B2 values you are using.

    You shouldn't have too much of a problem staying ahead of the testing, especially if you decide to put both CPUs on the project once everything is working.

    Let us know where you're at from time to time with respect to n-level; we should also be able to tell by the factors you submit. If you start falling behind, I'm sure some others will start helping out with the P-1 effort.

    Jason

  5. #45
    I started running on the other core. I moved what was between 14004000 and 14007999 over to it. I should be in good shape, but I will keep a close eye on it. Now to find some good stuff.

  6. #46
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Again I'm curious about what B1 and B2 values you are using.

    I have a core2duo I might be able to help out with if you start to fall behind.

  7. #47
    Quote Originally Posted by vjs
    Again I'm curious about what B1 and B2 values you are using.

    I have a core2duo I might be able to help out with if you start to fall behind.
    B1 is 120000 and B2 is 1700000. I took what you had in your example back on 6/30. I hope those are good numbers to use. I have been reading about B1/B2, but need to do some more reading on how they work.

    The 1st core I started is working on 14001058 right now.

  8. #48
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    PRP is at 13641092 at the time I started to write this post. Between that value and 14000000 there are 6949 k n pairs. At approximately 100 pairs/day that is 70 days of work. glennpat said that he is factoring at approximately 7 pairs/day/core, and we know that there are 216 pairs in his interval, which means that he will finish that interval in 31 days, or less now that he is using two cores.
    What we need to worry about is the next interval, and the one after that. To stay ahead of PRP would take 14 or 15 cores running at the same speed as glennpat's machine. Looks like it's time to start P-1 factoring in earnest.
    Joe O
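    [Editor's note: a quick check of the arithmetic above, as a minimal Python sketch. The pair counts and per-day rates are exactly the ones Joe O quotes; nothing else is assumed.]

    # Rough throughput check for the P-1 vs. PRP race, using the numbers
    # quoted in the post above (reported figures, not measurements).
    pairs_to_14M = 6949          # k/n pairs between the PRP leading edge and 14000000
    prp_rate = 100               # pairs/day consumed by PRP testing
    p1_rate_per_core = 7         # pairs/day/core that glennpat reports factoring
    pairs_in_interval = 216      # pairs left in glennpat's reserved interval

    days_until_prp_reaches_14M = pairs_to_14M / prp_rate             # ~70 days
    days_to_finish_interval = pairs_in_interval / p1_rate_per_core   # ~31 days on one core
    cores_needed_to_keep_pace = prp_rate / p1_rate_per_core          # ~14-15 cores

    print(f"PRP reaches 14M in ~{days_until_prp_reaches_14M:.0f} days")
    print(f"glennpat's interval done in ~{days_to_finish_interval:.0f} days on one core")
    print(f"cores needed to stay ahead of PRP: ~{cores_needed_to_keep_pace:.0f}")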

  9. #49
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Thanks Glennpat and Joe,

    Glennpat, if you're in a team or something, try to recruit a few more people for P-1.

    And to Joe

    As always, Joe, your analysis is right on the money. I am currently working on some stuff with Lars but have no problem returning for some P-1 when necessary.

    Might be time to start the drum roll.

    _----------_

    The other issue or comment is that a little P-1 is better than no P-1, so if push comes to shove we can always run slightly lower B1/B2 values (although this would not be my choice).


    According to Joe's math we would need another 6 people like Glennpat. I wonder if Louie is up to doing a little more P-1 as well. He threw quite a bit of CPU at the project a while back.

  10. #50
    Old Timer jasong's Avatar
    Join Date
    Oct 2004
    Location
    Arkansas(US)
    Posts
    1,778
    I've got a 2.8GHz dual-core Pentium-D I could throw at it, with about 1200MB of RAM available.

    If any of the mods (I don't know who the main people running the project are, so I'm just going to listen to the mods) think I'm needed, just decide whether I should use one or both cores with the 1200MB of RAM, give me some B1/B2 values (if there isn't a way to get the computer to figure it out), tell me how big a range I should reserve, and post the info here.

    If that's too much, but I am needed, just point me in the right direction.

    (I have no idea how much RAM is needed, so help with that would be appreciated)

  11. #51
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Jasong,
    First question: Do you have Prime95 set up on your machine? If so, do you have one directory or two? If not, you need to set up Prime95 on your machine. When it asks if you want to use Primenet, just say no.
    Let's start with that.
    Joe O

  12. #52
    Old Timer jasong's Avatar
    Join Date
    Oct 2004
    Location
    Arkansas(US)
    Posts
    1,778
    Never mind, I'm just going to jump in. I'll reserve a tiny range and time it tomorrow afternoon.

  13. #53
    For stage 1, RAM is not an issue; for stage 2, the more the better, though the impact gets smaller the more you have.

    I have a theory that running two copies of prime95/mprime on a dual-core/hyperthreaded machine doesn't give twice the speed, but still a bit more than a single copy.
    So I once tried to figure out how to let one instance of prime95 run stage 1 (the command with B1=B2), and the other one stage 2, with lots of RAM. The idea was to have the time for stage 1 a little less than the time for stage 2, so that the first instance is always a bit ahead, and to have the savefiles in one shared directory.

    There are commands for the prime.ini etc. to specify the work directory, but I got stuck somehow and never got it working. If somebody else has some spare time to figure things out, it may work though.

    H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.
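    [Editor's note: hhh's split-stage idea can be sketched as a small helper script. This is only an illustration of the workflow he describes, not something the project ships: it assumes the newer Pminus1=k,b,n,c,B1,B2 worktodo line format (check your own client; the parsing here is an assumption), and the file names are made up. It writes one worktodo file with B2 forced equal to B1 for the stage-1-only instance, and a second file with the original bounds for the stage-2 instance, which would then pick up the stage-1 save files from the shared directory.]

    # Hypothetical helper for hhh's two-instance idea. Assumes
    # Pminus1=k,b,n,c,B1,B2 lines; adjust the parsing for your client.
    def split_worktodo(src="worktodo.ini",
                       stage1_out="worktodo_stage1.ini",
                       stage2_out="worktodo_stage2.ini"):
        stage1_lines, stage2_lines = [], []
        with open(src) as f:
            for line in f:
                line = line.strip()
                if not line.startswith("Pminus1="):
                    continue
                k, b, n, c, b1, b2 = line[len("Pminus1="):].split(",")[:6]
                # Stage 1 only: setting B2 equal to B1 makes the client stop after stage 1.
                stage1_lines.append(f"Pminus1={k},{b},{n},{c},{b1},{b1}")
                # Stage 2 instance keeps the full bounds and resumes from the
                # save file left behind by the stage-1 instance.
                stage2_lines.append(f"Pminus1={k},{b},{n},{c},{b1},{b2}")
        with open(stage1_out, "w") as f:
            f.write("\n".join(stage1_lines) + "\n")
        with open(stage2_out, "w") as f:
            f.write("\n".join(stage2_lines) + "\n")

    if __name__ == "__main__":
        split_worktodo()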

  14. #54
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Humm,

    I was just at Fry's; they have the quad Q6600 for 300 bucks, with a board and 2G for 50 dollars with rebate. I passed it up, but now I'm wondering -- that could make quite a good factoring box with hhh's suggestion.

    Considering those quads are dropping to 260 in a few weeks and will overclock to ~3G... I may turn into an Intel fanboy.

  15. #55
    Quote Originally Posted by vjs
    Thanks Glennpat and Joe,

    Glennpat, if you're in a team or something, try to recruit a few more people for P-1.

    And to Joe

    As always, Joe, your analysis is right on the money. I am currently working on some stuff with Lars but have no problem returning for some P-1 when necessary.

    Might be time to start the drum roll.

    _----------_

    The other issue or comment is that a little P-1 is better than no P-1, so if push comes to shove we can always run slightly lower B1/B2 values (although this would not be my choice).


    According to Joe's math we would need another 6 people like Glennpat. I wonder if Louie is up to doing a little more P-1 as well. He threw quite a bit of CPU at the project a while back.

    I started a P-1 thread on XtreamSystem asking for some P-1 help and how to get started. Thanks for putting the worktodo.ini files in the other coordination thread. I don't think they were there when I started, and they will help me and new people.

    Now for the quad prices to drop.

  16. #56
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Quote Originally Posted by glennpat
    I started a P-1 thread on XtreamSystem asking for some P-1 help and how to get started. Thanks for putting the worktodo.ini files in the other coordination thread. I don't think they were there when I started, and they will help me and new people.

    Now for the quad prices to drop.
    No, the worktodo.ini files were not there when you started. Your experience starting up, and that of jasong and others, prompted me to do that. It is also the easiest way to provide just the unfactored k n pairs, and keep it relatively up to date. You are welcome.
    Joe O

  17. #57
    I want to start another range. The next available range starts at 14037000. The next PRP is at 13900361. I am only going to run on 1 core. Should I take the next available range or should I skip ahead to something like 14100000? Thanks

  18. #58
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Quote Originally Posted by glennpat
    I want to start another range. The next available range starts at 14037000. The next PRP is at 13900361. I am only going to run on 1 core. Should I take the next available range or should I skip ahead to something like 14100000? Thanks

    I think you'll be okay unless somebody joins and starts grabbing hundreds of WUs, or there's another runaway proxy.


    e

  19. #59
    Thanks. I took the next available.

  20. #60
    Can someone tell me how to use multiple worker threads in Prime95 on Windows XP? I am using mprime on another machine, and the nice thing about it is that I only start one instance of mprime and it creates two worker threads for the dual-core processor I have in that machine.

    I have a dual-core processor on this machine as well, but it runs Windows, and I have had a heck of a time trying to figure out how to get Prime95 to do the same thing. I think I found something that said it's possible to do this on the Windows version, but there aren't any details on how. People also make reference to stress testing where multiple threads are executed simultaneously.

    One other nice thing about the multiple threads is that when one thread is in stage 2 and using lots of memory, the other thread will try to find other stage 1 work to do so as not to use up even more system memory.

    My question basically revolves around the worktodo.ini file structure. In the mprime version of the program, the structure is as follows:

    [Worker #1]
    Pminus1=blah blah blah
    Pminus1=blah blah blah

    [Worker #2]
    Pminus1=blah
    Pminus1=blah

    and the program automatically recognizes what work is assigned to which thread. I tried putting this structure into the Prime95 worktodo.ini, but to no avail. Has anyone gotten this to work?

    Update: Never mind. I now find that the official release version of Prime95 does not support multithreading, while the latest version (25.4), available on the Mersenne Forums, does. This is nice from the standpoint that it can start multiple threads and one can designate each thread to run on a separate core, but apparently the author had to make a change to the P-1 save file format to allow for larger B2 bounds, so the save file formats are not compatible. Anyone considering upgrading should keep this in mind -- finish up both stages of a number using the old client and THEN switch.

    On another note, does anyone have any information regarding the justification behind choosing B1 and B2 bounds? How do we estimate the probability that, for a given number of the form k*2^n+1, a B1/B2 pair of bounds gives ____ % chance of finding a factor? A screenshot from an old version of Prime95 on someone's web site shows this estimate in the output of the program, but it appears that this is not in the final version. Even a rough estimate would be useful for getting a hold on these B1 and B2 bounds...
    Last edited by SlicerAce; 11-19-2007 at 08:08 PM.

  21. #61
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Joe,

    When you created the worktodo for P-1 you used B1=130000, B2=2200000 as the bounds. Is that the general consensus on what the bounds should be at this level? I've always used sieve depth and factor value. It's just a tad higher than when I was factoring around 13.5M. Just wondering.

    Also, it has been a while since I factored, and I do not remember getting residues from the factored WUs. Now, as each one completes, it leaves a residue. What do we do with it? Throw it in the trash bin? Thanks.

    e

  22. #62
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    e,

    The residue is from the first stage. You could reuse that residue to run the second stage on a different machine, or with different bounds than planned when you ran the first stage.

    In all reality, for our purposes it goes in the trash bin if you decide to run the tests in the normal way: stage one, then stage two, with reasonable bounds from the beginning.

  23. #63
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Quote Originally Posted by vjs View Post
    e,

    The residue is from the first stage. You could reuse that residue to run the second stage on a different machine, or with different bounds than planned when you ran the first stage.

    In all reality, for our purposes it goes in the trash bin if you decide to run the tests in the normal way: stage one, then stage two, with reasonable bounds from the beginning.
    Normal way meaning letting it run stages 1 and 2, which on my older Xeon 2.8 means about 9+ hours per WU. Trash the residue. I tried lowering the memory allocation in Prime95 from 840MB to 720MB and did not notice much, if any, difference. I probably need to change it while it is running stage 2. OK, will let it go for now. I should not expect many factors at this level, correct?

  24. #64
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Quote Originally Posted by engracio View Post
    Normal way meaning letting it run stages 1 and 2, which on my older Xeon 2.8 means about 9+ hours per WU. Trash the residue. I tried lowering the memory allocation in Prime95 from 840MB to 720MB and did not notice much, if any, difference. I probably need to change it while it is running stage 2. OK, will let it go for now. I should not expect many factors at this level, correct?
    Quote Originally Posted by engracio View Post
    Joe,

    When you created the worktodo for P-1 you used B1=130000, B2=2200000 as the bounds. Is that the general consensus on what the bounds should be at this level? I've always used sieve depth and factor value. It's just a tad higher than when I was factoring around 13.5M. Just wondering.

    Also, it has been a while since I factored, and I do not remember getting residues from the factored WUs. Now, as each one completes, it leaves a residue. What do we do with it? Throw it in the trash bin? Thanks.

    e
    e
    It sounds like you were using much higher B1 and B2 before. What exactly were you using? These numbers were the minimum that I calculated would find factors. Did I drop a decimal point?
    Joe O

  25. #65
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Quote Originally Posted by Joe O View Post
    e
    It sounds like you were using much higher B1 and B2 before. What exactly were you using? These numbers were the minimum that I calculated would find factors. Did I drop a decimal point?
    Joe,

    this is the record I managed to find:

    [Tue Jul 24 15:19:41 2007]
    P-1 found a factor in stage #2, B1=80000, B2=920000.
    22699*2^13996774+1 has a factor: 3026831589545851

    [Fri Jul 27 12:50:12 2007]
    P-1 found a factor in stage #2, B1=80000, B2=920000.
    33661*2^13997880+1 has a factor: 1393159544349241

    [Tue Jul 10 21:46:15 2007]
    P-1 found a factor in stage #2, B1=80000, B2=920000.
    24737*2^13991551+1 has a factor: 147185688671410446786862117

    I don't know, do you think we jumped a little too high? Then again, remember it is almost 1 mil higher.

    e

  26. #66
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    e,

    If you are talking in general: no, we will not find that many factors using P-1, but this was always the case. Overall we will also find fewer factors since the sieve is always progressing. Before, we had a good shot at finding factors above 900T. Now that we have sieved out to around 1300T, and bionic has sieved a good portion between 1500T and 2000T, our chances of finding factors below 2000T (or 2P) are almost zero, since the sieve finds all factors, including those that would be found by P-1 factoring.

    Now, in regard to your settings and the current settings:

    You were using
    B1=80K and B2=920K; this is a B1 to B2 ratio of 11.5.

    Currently we are suggesting
    B1=130K and B2=2200K; this is a B1 to B2 ratio of 16.9.

    The suggested settings will find more factors than your old settings, but they may find fewer factors per unit of time. (Not sure; we will see. Also, if we start to fall behind in P-1 we will probably reduce those B1/B2 values.)

    (BTW, the stage that uses the most memory is stage 2; memory requirements are based upon the size of the B2 value.)

    The higher the B1 and B2, the better the chance we have of finding a factor, but the longer each test will take. There are some efficiency issues here involving the value of n, the sieve level, B1, B2, and of course whether we are testing each k/n pair prior to prime testing.

    The biggest issue here is that we do at least some minimal P-1 testing of each k/n pair prior to primality testing.

    Now, I think in your case we had talked and I suggested that you use a smaller B1:B2 ratio since you were memory limited.

    If that is your case, simply decrease the B2 value, down to a minimum B1:B2 ratio of 12 (B1=130K, B2=1600K), until you do not have any memory issues.

    I hope this helps; I know a lot of what I said above you already know.
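    [Editor's note: the ratios vjs quotes, and a very rough feel for the relative work each choice of bounds implies, can be reproduced with a few lines of Python. The cost model here is only a back-of-the-envelope assumption (stage-1 work roughly proportional to B1, stage-2 work roughly proportional to the number of primes between B1 and B2), not how Prime95 actually budgets its time.]

    from math import log

    def prime_count(x):
        """Rough prime-counting estimate pi(x) ~ x / ln(x)."""
        return x / log(x)

    def describe(b1, b2):
        ratio = b2 / b1
        # Back-of-the-envelope stage-2 cost: primes in (B1, B2].
        stage2_primes = prime_count(b2) - prime_count(b1)
        return ratio, stage2_primes

    bounds = {
        "old 80K/920K": (80_000, 920_000),             # engracio's previous bounds
        "suggested 130K/2200K": (130_000, 2_200_000),  # current suggestion
        "low-memory 130K/1600K": (130_000, 1_600_000), # reduced-B2 fallback
    }
    for name, (b1, b2) in bounds.items():
        ratio, s2 = describe(b1, b2)
        print(f"{name}: B2/B1 = {ratio:.1f}, ~{s2:,.0f} stage-2 primes")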

  27. #67
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Quote Originally Posted by vjs View Post
    e,

    If you are talking in general: no, we will not find that many factors using P-1, but this was always the case. Overall we will also find fewer factors since the sieve is always progressing. Before, we had a good shot at finding factors above 900T. Now that we have sieved out to around 1300T, and bionic has sieved a good portion between 1500T and 2000T, our chances of finding factors below 2000T (or 2P) are almost zero, since the sieve finds all factors, including those that would be found by P-1 factoring.

    Now, in regard to your settings and the current settings:

    You were using
    B1=80K and B2=920K; this is a B1 to B2 ratio of 11.5.

    Currently we are suggesting
    B1=130K and B2=2200K; this is a B1 to B2 ratio of 16.9.

    The suggested settings will find more factors than your old settings, but they may find fewer factors per unit of time. (Not sure; we will see. Also, if we start to fall behind in P-1 we will probably reduce those B1/B2 values.)

    (BTW, the stage that uses the most memory is stage 2; memory requirements are based upon the size of the B2 value.)

    The higher the B1 and B2, the better the chance we have of finding a factor, but the longer each test will take. There are some efficiency issues here involving the value of n, the sieve level, B1, B2, and of course whether we are testing each k/n pair prior to prime testing.

    The biggest issue here is that we do at least some minimal P-1 testing of each k/n pair prior to primality testing.

    Now, I think in your case we had talked and I suggested that you use a smaller B1:B2 ratio since you were memory limited.

    If that is your case, simply decrease the B2 value, down to a minimum B1:B2 ratio of 12 (B1=130K, B2=1600K), until you do not have any memory issues.

    I hope this helps; I know a lot of what I said above you already know.
    Joe, vjs

    Thanks for the info. Yes, most of the info you stated above I already knew; I'm just kind of rusty. My biggest thing is that last year I was completing 1 WU per 3 to 4 hours per CPU and finding a factor every 6 to 8 WUs. Now it seems like everything has doubled, but with very little result if any. So far, across 6 CPUs/cores, I have found no factor yet. Is the cost/benefit ratio worth it? We all knew P-1 would eventually run into this wall sooner or later. Is it now? I will complete my reserved range and see if we did hit the wall. My next reservation will tell me how I feel about it. Thanks.

    e

  28. #68
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    e,

    As far as cost/benefit goes, you were finding really a lot of factors last year, a lot more than I expected. The cost/benefit really comes in when you look at how many factors you find per unit of time, and how many tests you could have done in that same time.

    The ratio, although some could argue, should be 2 factors for every 3 tests... Yup, really!!!

    For the simple fact that we will probably test each k/n pair twice.

    Where we were before is probably around a 1:1 ratio, if not more. This ratio was totally sub-par, since we could factor tests out quicker than we could test them. I'd say stick with the P-1 for now and look at the time requirements. Besides, we could always use those CPUs in the testing fold if that's where they are best suited.
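    [Editor's note: vjs's cost/benefit argument can be put into rough numbers. The sketch below is only an illustration of the reasoning: the ~9 hours per attempt and roughly one factor per 6-8 attempts come from engracio's posts in this thread, the two tests saved per factor comes from vjs's point that each k/n pair will probably be tested twice, and the PRP test time is a made-up placeholder since no figure is given here.]

    # Rough break-even check for P-1, following the argument above.
    # Only the first two figures come from the thread; the rest are assumptions.
    hours_per_p1_attempt = 9.0     # engracio's figure for his Xeon 2.8
    attempts_per_factor = 7.0      # roughly one factor every 6-8 attempts (last year's rate)
    tests_saved_per_factor = 2     # first-pass test plus double-check
    hours_per_prp_test = 300.0     # PLACEHOLDER: not stated in the thread

    cpu_hours_per_factor = hours_per_p1_attempt * attempts_per_factor
    cpu_hours_saved_per_factor = tests_saved_per_factor * hours_per_prp_test

    print(f"CPU-hours spent per factor found: {cpu_hours_per_factor:.0f}")
    print(f"CPU-hours of PRP work saved per factor: {cpu_hours_saved_per_factor:.0f}")
    print("P-1 pays off" if cpu_hours_saved_per_factor > cpu_hours_per_factor
          else "P-1 does not pay off at these rates")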

  29. #69
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Just a reminder for the factorers: once you find a factor and submit it manually, if the normal page states that 0 factors were found, do not automatically assume it is not a valid factor or that somebody has already found it. I've noticed that the current factor sizes are close to being "large factor size" instead of "normal factor size".

    Last year I was generally able to "eyeball" the factor size, and when in doubt I submitted the "normal factor" on the large factor page.

    e

    Oops, I think I posted this in the wrong thread; please move it. Thanks.

  30. #70
    If I let prime95 run with the Pfactor= line, I get much lower values for B1 and B2, probably because I have only 400 MB assigned.

    Is anybody interested in taking my files after stage 1 and finishing stage 2 on a machine with lots of RAM?

    H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  31. #71
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    hhh,

    What B1 value are you doing them out to?

    Depending on the B1 you're running, I'll do the B2 portion. Only problem: if I recall, the residual is something like 1MB in size, if not more????

  32. #72
    I am running at B1=130000 at the moment. Yes, the residuals are big, but I could zip them together with the worktodo.ini and host them on rapidshare or so. (Just need to figure out how that works.) H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  33. #73
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Sure, I'll run them with a fairly large B2 value. I have 2G, so I should be able to run fairly large. How many G of memory do you have?

    You know that you can break the second stage into parts if you wish.

  34. #74
    I have 512MB on the P4, and 1GB on the Core2Duo. I had 400MB assigned per instance on the C2D, but Windows became really slow, so I preferred to reduce it a bit.

    Thanks anyway; when I have enough residues together, I'll think about a way to send them to you. H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  35. #75
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    hhh,

    I have 2G on my quad core, with generally 1500MB free. This is free memory even while running two instances of PRP and two instances of LLR.

    I'm thinking I could run one instance of stage 2 on your residues pretty easily.

    Not sure where you live, but in the US 2G of memory can often be purchased for 20-30 dollars after mail-in rebate. Just a thought.

  36. #76
    It's a notebook, underclocked and undervolted when on battery. 2GB more would basically just empty my battery faster. Furthermore, waking the computer up from hibernation would take twice as long.

    Sometimes, less is more. H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  37. #77
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Laptop... that explains it. When you're ready, let me know. Looks like we have some time for those tests with the recent repopulation.

  38. #78
    Unholy Undead Death's Avatar
    Join Date
    Sep 2003
    Location
    Kyiv, Ukraine
    Posts
    907
    Blog Entries
    1
    Just got a Q:
    if we can sieve together with PSP, can we P-1 together?
    wbr, Me. Dead J. Dona \


  39. #79
    Quote Originally Posted by Death View Post
    Just got a Q:
    if we can sieve together with PSP, can we P-1 together?
    Nope.

    First pass at SoB is at nearly 15M, while it's only around 5M for PSP. And at 5M, P-1 isn't worth it, fortunately, thanks to the deep sieving.

    H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  40. #80
    Unholy Undead Death's Avatar
    Join Date
    Sep 2003
    Location
    Kyiv, Ukraine
    Posts
    907
    Blog Entries
    1
    Thank you.

    And another question: I'm trying to run P-1 for Riesel, just to see some stats for my team.

    The latest version of prime95 doesn't "continue" after makewdt.exe.

    What's wrong with it, and what version do you use?
    wbr, Me. Dead J. Dona \

