# Thread: Optimal depth?

1. ## Optimal depth?

I understand that sieving is used to reduce the number of candidates (k/n pairs) that must undergo Proth tests, and is (at this point) more efficient than running the Proth tests themselves. However, I was wondering what the optimal sieve depth is for SB and how close we are to that point.

2. If you look at the paper for the combined effort recently posted on the main screen and read between the lines, the optimal depth is nowhere near where we are now. So please keep sieving. I haven't looked at P-1 lately, but that may no longer be near optimal :couch:.

I don't have any and I mean ANY figures on this.

You would have to ask the P-1 people how long it takes them to find a factor, not a test but a factor...

Then multiply that time by 1.7-ish and see if they can complete a first-pass test in that time.

Those factors found by P-1 at this point will probably never be found by sieving. Yeah, I know, never say never with PrimeGrid's rate.

3. I did some more or less thorough calculations on this, and with very optimistic estimates about the primes we are going to find, the optimal sieve depth would be around 100P. The current sieve level is ~25P. I believe not much work would be wasted if we sieved to 200P. I hope that at some point I can get better estimates, and hence more accurate figures. Yet I know that 100P is a lower bound. H.

4. As a reality check: sieving from 25P to 200P will eliminate only roughly 1-ln(25P)/ln(200P) = 5.2% of the remaining candidates. So as a rule of thumb, no more than 5% of resources should be spent on sieving.
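The rule of thumb above follows from the heuristic that the density of candidates surviving a sieve to depth p falls off roughly as 1/ln(p). A minimal Python sketch (mine, not from the thread) of the elimination fraction:

```python
import math

def fraction_eliminated(p1, p2):
    """Approximate fraction of the surviving candidates removed by
    deepening the sieve from p1 to p2, using the 1/ln(p) heuristic."""
    return 1 - math.log(p1) / math.log(p2)

P = 10**15  # 1P = 10^15
print(round(fraction_eliminated(25 * P, 200 * P), 3))  # ≈ 0.052, i.e. ~5.2%
```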

5. To make sure I'm understanding this correctly: sieving saves time because it finds factors, making it unnecessary to run the Proth test on that number. Also, at this point, sieving is eliminating n faster than Proth tests are. The optimal sieve depth is the point at which sieving removes candidates at the same rate as Proth tests. Are these statements accurate?

6. Originally Posted by Bananaman42
To make sure I'm understanding this correctly: sieving saves time because it finds factors, making it unnecessary to run the Proth test on that number. Also, at this point, sieving is eliminating n faster than Proth tests are. The optimal sieve depth is the point at which sieving removes candidates at the same rate as Proth tests. Are these statements accurate?
More or less!
Since we are running 2 Proth tests on most k/n pairs, you could argue that the optimal sieving depth is the point at which sieving removes candidates at half the rate that Proth tests do.

7. This raises a question that's been bugging me for the last couple of days: is it possible, while sieving, for a memory or CPU error to remove a (potentially prime) candidate in error?

Is there some sort of built-in check that prevents this?

8. Originally Posted by axn
As a reality check: sieving from 25P to 200P will eliminate only roughly 1-ln(25P)/ln(200P) = 5.2% of the remaining candidates. So as a rule of thumb, no more than 5% of resources should be spent on sieving.
While this is correct, this 5% of the work should be done before the other 95%. So, right now, the higher the percentage of work spent on sieving, the better. Later, sieving will be stopped, and LLR together with P-1 will be 100% of the work.
This raises a question that's been bugging me for the last couple of days: is it possible, while sieving, for a memory or CPU error to remove a (potentially prime) candidate in error?

Is there some sort of built-in check that prevents this?
Yes; checking for this is very easy, and is done by the server once factors are submitted.
I haven't looked at P-1 lately, but that may no longer be near optimal :couch:
I have. We are near the threshold of efficiency, but Prime95 still agrees to run P-1 when started with

Pfactor=k,2,17M,1,55,1.6,0

Meaning that even at a sieve depth of 2^55, it will be worthwhile. Currently, we are at around 2^54.*. Yet when sieving approaches 40P or 50P, we will have to reevaluate the question. Later, when first-pass reaches around 22M, P-1 will be effective even if we have sieved to 200P or 300P.

H.

9. Originally Posted by hhh
While this is correct, this 5% of the work should be done before the other 95%. So, right now, the higher the percentage of work spent on sieving, the better. Later, sieving will be stopped, and LLR together with P-1 will be 100% of the work.
Only if you plan to test all the k/n pairs being sieved. However, in a project like SoB, where a prime found eliminates a whole k, this is not true. Here, you're better off putting your resources into PRP:sieve at a 20:1 ratio and being done with it. Here's the reasoning:

Sieving will make the project 5% faster (assuming the current 25P to a future 200P, and zero cost of sieving). How else can we make the project 5% faster? Why, by putting 5% more resources into the PRP stage, of course. So if you're using more than 5% of resources on sieving, move it to PRP, and it will achieve the same thing.

Admittedly, this is not a thorough analysis of optimality, but it is good enough. People are seriously overestimating the value of sieve, and losing sight of the big picture.

10. Originally Posted by axn
Admittedly, this is not a thorough analysis of optimality, but it is good enough. People are seriously overestimating the value of sieve, and losing sight of the big picture.
And you are assuming that all the people who sieve could or would run PRP. That is not true. There are many who enjoy sieving but do not enjoy the longer PRP units and will not do them.

11. Originally Posted by axn
Only if you plan to test all the k/n pairs being sieved. However, in a project like SoB, where a prime found will eliminate a whole k, this is not true.
Or if you take that fact into account in your calculations of the optimal sieve depth. Which I did. Basically: even after throwing tons of factors out of the window due to the primes found, we are still better off sieving to 100P first and then starting the PRP tests.

Originally Posted by axn
Here, you're better off putting your resources into PRP:Sieve at 20:1 ratio and be done with it. Here's the reasoning:

Sieving will make the project 5% faster (assuming the current 25P to a future 200P, and zero cost of sieving). How else can we make the project 5% faster? Why, by putting 5% more resources into the PRP stage, of course. So if you're using more than 5% of resources on sieving, move it to PRP, and it will achieve the same thing.
You do not define what you mean by faster. Of course, if you don't sieve, at the beginning you fly through the ranges faster. Later, you realize that you should perhaps sieve a little. But that work would have saved much of your earlier work if it had been done earlier. This is especially true at the current levels, where primes are starting to become rare.

Originally Posted by axn
Admittedly, this is not a thorough analysis of optimality, but it is good enough. People are seriously overestimating the value of sieve, and losing sight of the big picture.
IMO, people love to start crunching and to find the easy primes without proper preparation.

Anyway, there is no need to start a flamewar over this topic. Our little discussion will have no impact whatsoever on the average behaviour. PrimeGrid is happily sieving, SoB happily PRPing; I think everything is just fine as it is.

H.

12. ... ... ...

We are probably nowhere near 5% of the total project's resources spent on the sieve. If we have spent 1% of our computational effort thus far on sieving, I'd be amazed.

13. Originally Posted by vjs
... ... ...

We are probably nowhere near 5% of the total project's resources spent on the sieve. If we have spent 1% of our computational effort thus far on sieving, I'd be amazed.
Are you considering the contribution of PrimeGrid to sieving? They have added a significant boost to the sieve effort. Do we have any figures to compare?

14. Originally Posted by Matt
Are you considering the contribution of PrimeGrid to sieving? They have added a significant boost to the sieve effort. Do we have any figures to compare?
Ya vjs, got any figures, or is that just your best WAG?

I would not consider PrimeGrid's contribution insignificant. Individual contributions prior to PrimeGrid were definitely less than 1% compared to individual PRP contributions. PrimeGrid changes that.

15. A bit of a WAG, LOL...

But I took some offense at axn's post (sorry to point a finger directly; that's not my intent).

As a reality check: sieving from 25P to 200P will eliminate only roughly 1-ln(25P)/ln(200P) = 5.2% of the remaining candidates. So as a rule of thumb, no more than 5% of resources should be spent on sieving.

So, as a reality check, could someone run some numbers for me?

1-ln(50P)/ln(200P) =
1-ln(100P)/ln(200P) =
1-ln(150P)/ln(200P) =

and

1-ln(400T)/ln(1P) =
1-ln(500T)/ln(1P) =
1-ln(400T)/ln(600T) =

Also, as to PrimeGrid...

16. Originally Posted by vjs

So as a reality check could someone run some numbers for me.

1-ln(50P)/ln(200P) =
1-ln(100P)/ln(200P) =
1-ln(150P)/ln(200P) =

and

1-ln(400T)/ln(1P) =
1-ln(500T)/ln(1P) =
1-ln(400T)/ln(600T) =

1-ln(50P)/ln(200P) = 0.0348
1-ln(100P)/ln(200P) = 0.0174
1-ln(150P)/ln(200P) = 0.0072

1-ln(400T)/ln(1P) = 0.0265
1-ln(500T)/ln(1P) = 0.0201
1-ln(400T)/ln(600T) = 0.0119

The above results are rounded to 4 decimal places.
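The figures above can be reproduced with a short Python snippet (a sketch of my own, not from the thread; here P = 10^15 and T = 10^12):

```python
import math

P = 10**15  # peta
T = 10**12  # tera

# (current depth, target depth) pairs requested by vjs above
pairs = [
    (50 * P, 200 * P),
    (100 * P, 200 * P),
    (150 * P, 200 * P),
    (400 * T, 1 * P),
    (500 * T, 1 * P),
    (400 * T, 600 * T),
]

for p1, p2 in pairs:
    # fraction of surviving candidates removed by deepening the sieve
    print(round(1 - math.log(p1) / math.log(p2), 4))
```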

17. 1-ln(400T)/ln(600T) = 0.0119

So if we had currently sieved to 400T and were thinking about sieving to 600T, we should devote <3% of our total resources? Just checking the logic.

18. I can see the logic of all the logarithms, but I prefer a faster way to gauge efficiency: right now I'm popping out a factor a day or so when sieving, and a full PRP test takes about a month or more (running 4 cores, so I should correct for that... let's say a week). That means that sieving is about 7 times as efficient right now, so I'll continue sieving :P

@vjs: just a bit more than 1% of total resources... and probably not worth the effort... But hey, we're not even close yet. Once sieving reaches 50P we can reevaluate; we can always switch, but it'd be much better if the PRP-ers switched to us first.
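The rough efficiency comparison above can be sketched in Python (the numbers are the poster's own estimates, not measured figures):

```python
# Poster's estimates: one factor found per day while sieving, and one
# PRP test per ~week of 4-core-corrected machine time on the same box.
factors_per_day = 1.0       # each factor eliminates one future PRP test
days_per_prp_test = 7.0     # ~a month on one core, so ~a week on 4 cores

# tests eliminated by sieving during the time one PRP test would take
efficiency_ratio = factors_per_day * days_per_prp_test
print(efficiency_ratio)  # 7.0: sieving removes candidates ~7x faster here
```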

19. Yup...

Simple math is generally pretty good.

Way back when, a few of us were screaming for sievers... We were pretty much ignored except for the faithful few, those we begged, those we bribed, and those who knew...

Now we are starting to approach, APPROACH, optimal numbers.

20. Can someone let me know if my "back of the napkin" figuring here makes sense?

According to primestats.net, I am averaging about 2.5 factors/day when I run the BOINC PSP Sieve all-out on this machine. 6 out of the 15 k being sieved are applicable to SoB (40%), so essentially I am finding about 1 factor per day that benefits SoB, which saves 1 future PRP test. (Assuming a prime isn't found for that k before then, in which case the work would be "wasted".)

Running Prime95 all-out on this same machine, I can complete 4 PRP tests in about 5.5 days, so about 0.73 tests per day.

So, by sieving instead of running the client on this machine, I am reducing SoB's queue by 0.27 additional tests/day, and the other 1.5 factors/day go to benefit PSP. (Bonus for them!)

Does this make sense, or am I miscalculating or misunderstanding something somewhere? (I am new to sieving and trying to get a better understanding of what I am actually accomplishing here.)
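The napkin math above checks out; here's a quick sketch (my own, using only the figures from the post):

```python
# Figures from the post above (one machine's observed rates)
factors_per_day = 2.5            # BOINC PSP Sieve yield
sob_fraction = 6 / 15            # 6 of the 15 k being sieved apply to SoB
prp_tests_per_day = 4 / 5.5      # Prime95 throughput on the same machine

sob_factors_per_day = factors_per_day * sob_fraction   # SoB tests saved/day
net_gain = sob_factors_per_day - prp_tests_per_day     # extra tests removed/day
psp_factors_per_day = factors_per_day * (1 - sob_fraction)

print(round(sob_factors_per_day, 2))  # 1.0
print(round(net_gain, 2))             # 0.27
print(round(psp_factors_per_day, 2))  # 1.5
```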

21. Originally Posted by vjs
If you look at the paper for the combined effort recently posted on the main screen and read between the lines, the optimal depth is nowhere near where we are now. So please keep sieving. I haven't looked at P-1 lately, but that may no longer be near optimal :couch:.

I don't have any and I mean ANY figures on this.

You would have to ask the P-1 people how long it takes them to find a factor, not a test but a factor...

Then multiply that time by 1.7-ish and see if they can complete a first-pass test in that time.

Those factors found by P-1 at this point will probably never be found by sieving. Yeah, I know, never say never with PrimeGrid's rate.
The P-1 code has a built-in algorithm that *can* determine whether the test is optimal to run, but it's true that many P-1 tests have been run with static bounds that may partially ignore the sieve depth and tests saved versus the probability of a factor.

Especially for "near", first-pass tests, the number of tests saved is arguably over 2, since there will almost certainly be a double check and, in the case of a mismatch, a third test to confirm one of the first two residues.

However, we're winding down separate P-1 testing now in favor of strictly optimal, in-client P-1 testing. When the tests get over n > 17.2M, the client will start doing an optimal test. These will probably be a little shorter than the current tests done by stand-alone testers. As input, I currently specify a sieve depth of 2^55 and tests saved as 2.1. If the sieve depth gets up to 2^56 or higher, someone should email me to remind me to update that figure, as it is hard-coded into the new server.

Cheers,
Louie

22. OK... we are past 50P and almost halfway through 2^55.

Can I wake this sleeping dog?

slatz

23. Originally Posted by Slatz
OK... we are past 50P and almost halfway through 2^55.

Can I wake this sleeping dog?

slatz
Are you saying you think it would be correct to update the P-1 sieve depth constant in the SB server now to 2^56? It seems a little early if we're only halfway through 2^55.

But I'll consider it if you think there's a good reason to.

Cheers,
Louie

24. we are now officially past 2^55

25. Originally Posted by Slatz
we are now officially past 2^55
Thanks mate! I updated the server to reflect the depth!

Cheers,
Louie
