
Thread: Optimal depth?

  1. #1

    Optimal depth?

    I understand that sieving is used to reduce the number of candidates (k·2^n+1 values) that must undergo Proth tests, and is (at this point) more efficient than running the Proth tests themselves. However, I was wondering what the optimal sieve depth is for SB and how close we are to that point.

  2. #2
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    If you look at the paper for the combined effort recently posted on the main screen and read between the lines, you'll see that the optimal depth is nowhere near where we are now. So please keep sieving. I haven't looked at P-1 lately, but that may be near the point of no longer being optimal :couch: .



    I don't have any, and I mean ANY, figures on this.

    You would have to ask the P-1 people how long it takes them to find a factor (not to run a test, but to find a factor)...

    Then multiply that time by 1.7-ish and see if they can complete a first-pass test in that time.

    The factors found by P-1 at this point will probably never be found by sieving. Yeah, I know, never say never at PrimeGrid's rate.

  3. #3
    I did some more or less thorough calculations on this, and with very optimistic estimates about the primes we are going to find, the optimal sieve depth would be around 100P. The current sieve level is ~25P. I believe not much work would be wasted if we sieved to 200P. I hope that at some time I can get better estimates, and hence more accurate figures. Yet I know that 100P is a lower bound. H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  4. #4
    As a reality check: sieving from 25P to 200P will eliminate only roughly 1 - ln(25P)/ln(200P) = 5.2% of the remaining candidates. So, as a rule of thumb, no more than 5% of resources should be spent on sieving.
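That estimate follows from the Mertens-style heuristic that the number of candidates surviving a sieve to depth p shrinks roughly in proportion to 1/ln(p). A quick sketch of the arithmetic, using the 25P and 200P depths from the post above (the function name is just for illustration):

```python
import math

P = 10**15  # 1 peta

def fraction_removed(depth_from, depth_to):
    """Fraction of surviving candidates a deeper sieve eliminates,
    assuming survivors scale as 1/ln(depth)."""
    return 1 - math.log(depth_from) / math.log(depth_to)

removed = fraction_removed(25 * P, 200 * P)
print(f"{removed:.1%}")  # about 5.2%
```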

  5. #5
    To make sure I'm understanding this correctly: sieving saves time because it finds factors, making it unnecessary to run the Proth test on those numbers. Also, at this point, sieving is eliminating candidates faster than Proth tests are. The optimal sieve depth is the point at which sieving removes candidates at the same rate as Proth tests do. Are these statements accurate?

  6. #6
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Quote Originally Posted by Bananaman42 View Post
    To make sure I'm understanding this correctly: sieving saves time because it finds factors, making it unnecessary to run the Proth test on those numbers. Also, at this point, sieving is eliminating candidates faster than Proth tests are. The optimal sieve depth is the point at which sieving removes candidates at the same rate as Proth tests do. Are these statements accurate?
    More or less!
    Since we are running two Proth tests (first pass and double check) on most k/n pairs, you could argue that the optimal sieving depth is the point at which sieving removes candidates at half the rate that Proth tests complete them.
    Joe O
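Joe O's halved break-even point amounts to a simple rate comparison: a factor is worth roughly two Proth tests, so sieving pays off while factor-finding throughput times two beats test throughput. A minimal sketch (the function name and the example rates are hypothetical, for illustration only):

```python
def sieving_worthwhile(factors_per_cpu_day, tests_per_cpu_day,
                       tests_saved_per_factor=2.0):
    """Sieving stays optimal while each CPU-day of sieving eliminates
    more Proth-test work than a CPU-day of testing completes.
    tests_saved_per_factor ~= 2 because each surviving candidate gets
    a first-pass test and a double check."""
    return factors_per_cpu_day * tests_saved_per_factor > tests_per_cpu_day

# e.g. 1.2 factors/day vs 2.0 tests/day: 1.2 * 2 = 2.4 > 2.0, keep sieving
print(sieving_worthwhile(1.2, 2.0))  # True
```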

  7. #7
    Quote Originally Posted by axn View Post
    As a reality check: sieving from 25P to 200P will eliminate roughly 1-ln(25P)/ln(200P) = 5.2% candidates only. So as a rule of thumb, no more than 5% of resource should be spent in sieving.
    While this is correct, this 5% of the work should be done before the other 95%. So, for now, the higher the percentage of work spent on sieving, the better. Later, sieving will be stopped, and LLR together with P-1 will be 100% of the work.
    Quote:
    This begs a question that's been bugging me the last couple of days: is it possible, while sieving, for a memory or CPU error to remove a (potentially prime) candidate in error? Is there some sort of built-in check that prevents this?
    Yes; checking for this is very easy, and it is done by the server once factors are submitted.
    Quote Originally Posted by vjs View Post
    I haven't looked at P-1 lately but that may be near no longer optimal :couch:
    I have. We are near the threshold of efficiency, but Prime95 still agrees to run P-1 when started with

    Pfactor=k,2,17M,1,55,1.6,0

    Meaning that even at a sieve depth of 2^55, it will be worthwhile. Currently we are a little above 2^54. Yet when sieving approaches 40P or 50P, we will have to reevaluate the question. Later, when first pass reaches around n = 22M, P-1 will be effective even if we have sieved to 200P or 300P.

    H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.
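The P-1 bounds above are quoted in bits (2^55) while sieve progress is reported in P (10^15); a quick conversion shows the two scales line up with the figures in this thread:

```python
import math

P = 10**15

# 25P, the sieve depth mentioned earlier, expressed in bits:
print(f"25P = 2^{math.log2(25 * P):.2f}")  # a bit above 2^54

# 2^55, the depth in the Pfactor line above, in P units:
print(f"2^55 = {2**55 / P:.1f}P")          # about 36P
```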

  8. #8
    Quote Originally Posted by hhh View Post
    While this is correct, this 5% of the work should be done before the other 95%. So, for now, the higher the percentage of work spent on sieving, the better. Later, sieving will be stopped, and LLR together with P-1 will be 100% of the work.
    Only if you plan to test all the k/n pairs being sieved. However, in a project like SoB, where a found prime will eliminate a whole k, this is not true. Here, you're better off putting your resources into PRP:sieve at a 20:1 ratio and being done with it. Here's the reasoning:

    Sieving will make the project 5% faster (assuming the current 25P, a future 200P, and zero cost of sieving). How else can we make the project 5% faster? Why, by putting 5% more resources into the PRP stage, of course. So if you're using more than 5% of resources on sieving, move them to PRP and it will achieve the same thing.

    Admittedly, this is not a thorough analysis of optimality, but it is good enough. People are seriously overestimating the value of sieve, and losing sight of the big picture.

  9. #9
    Quote Originally Posted by vjs View Post
    If you look at the paper for the combined effort recently posted on the main screen and read between the lines, you'll see that the optimal depth is nowhere near where we are now. So please keep sieving. I haven't looked at P-1 lately, but that may be near the point of no longer being optimal :couch: .

    I don't have any, and I mean ANY, figures on this.

    You would have to ask the P-1 people how long it takes them to find a factor (not to run a test, but to find a factor)...

    Then multiply that time by 1.7-ish and see if they can complete a first-pass test in that time.

    The factors found by P-1 at this point will probably never be found by sieving. Yeah, I know, never say never at PrimeGrid's rate.
    The P-1 code has a built-in algorithm that *can* determine whether a test is optimal to run, but it's true that many P-1 tests have been run with static bounds that may partially ignore the sieve depth / tests saved vs. probability of a factor.

    Especially for "near" first-pass tests, the number of tests saved is arguably over 2, since there will almost certainly be a double check, and in the case of a mismatch, a third test to confirm one of the first two residues.

    However, we're winding down separate P-1 testing now in favor of strictly optimal, in-client P-1 testing. When tests get over n > 17.2M, the client will start doing an optimal test. These will probably be a little shorter than the current tests done by stand-alone testers. As input, I currently specify a sieve depth of 2^55 and tests saved as 2.1. If the sieve depth gets up to 2^56 or higher, someone should email me to remind me to update that figure, as it is hard-coded into the new server.

    Cheers,
    Louie

  10. #10
    OK... we are past 50P and almost halfway through 2^55.

    can I wake this sleeping dog

    slatz

  11. #11
    Quote Originally Posted by Slatz View Post
    OK...we are past 50 P and almost halfway through 2^55

    can I wake this sleeping dog

    slatz
    Are you saying you think it would be correct to update the P-1 sieve depth constant in the SB server now to 2^56? It seems a little early if we're only halfway through 2^55.

    But I'll consider it if you think there's a good reason to.


    Cheers,
    Louie

  12. #12
    we are now officially past 2^55
