
View Full Version : p-1 works



Xeltrix
08-31-2003, 08:42 AM
I've been PRPing more than sieving of late, and have finally found P-1 useful.

Three times now I have run P-1 on my current test [when not part of a reserved/completed range] before PRPing it, and today, on the third attempt, I got a factor. So I will continue with this approach, as it seems to save some testing!

So my doubt as to the usefulness of P-1 has been cured.

Troodon
08-31-2003, 03:28 PM
It's highly recommended to P-1 factor an assigned k/n pair before testing it! But first, check in the coordination thread that it hasn't already been factored ;) .
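For anyone wondering why P-1 is so cheap per candidate: stage 1 is essentially one big modular exponentiation followed by a gcd. A toy Python sketch of the idea (illustrative only; the names `primes_up_to` and `p1_stage1` are mine, and real clients like SBFactor use optimized big-number arithmetic plus a stage 2):

```python
from math import gcd

def primes_up_to(b):
    """Sieve of Eratosthenes: all primes <= b."""
    sieve = bytearray([1]) * (b + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(b ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(sieve[i * i :: i])
    return [i for i in range(2, b + 1) if sieve[i]]

def p1_stage1(n, b1):
    """Stage 1 of Pollard's P-1: finds a prime factor p of n whenever
    p-1 is b1-smooth, i.e. every prime power dividing p-1 is <= b1."""
    a = 2
    for q in primes_up_to(b1):
        e = q
        while e * q <= b1:   # largest power of q not exceeding b1
            e *= q
        a = pow(a, e, n)     # a <- a^e mod n
    d = gcd(a - 1, n)
    return d if 1 < d < n else None
```

For example, with b1 = 16 this pulls the factor 241 out of 241·251 = 60491, because 241 - 1 = 240 = 2^4·3·5 is 16-smooth, while 251 - 1 = 250 = 2·5^3 is not.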

Xeltrix
08-31-2003, 03:37 PM
Yep, I agree, hence: "[when not part of a reserved/completed range]". ;)

There's only a small area this applies to at the moment, so hopefully [if there's enough power on P-1] I won't need to run P-1 myself before every test.

A question: I use 2.0 and 256 MB. Is it worth using 2.5 [I did once], or should I stick with what I've been using?

Moo_the_cow
09-01-2003, 01:37 AM
I got this from page 4 of the sieve client thread:

Anyway, back on topic: I noticed mention of candidate-by-candidate factoring algorithms in the thread, and it's worth analysing what that would actually buy us:

If we pluck a figure out of the air for sieving depth, say 10T, and pretend that the average machine does <100000 p/s, then we conclude that the job's >1000 CPU days. What else could we do in that number of CPU days? If we were to sieve half way, then we'd have 500 days spare for factoring. Say we have 700000 candidates left (I'm in 1 s.f. land here), then that's about 1 minute per candidate. You can't do anything useful in one minute. Not even 10 minutes would be useful. Once you know no factors are below 5e12, and you're time limited, you're probably looking at just running Pollard's P-1 with B1>10000, B2=B1*30, and on numbers of this size that's just prohibitive.

i.e. sieving 700000 candidates is _much_ more cost-effective than factoring.

The reason GIMPS uses P-1 and ECM to factor candidates is that it _can't_ sieve. Even the old-style trial-division NewPGen was probably more effective than factoring, and we're 100 times faster than that now.

Phil
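As a quick sanity check, Phil's back-of-envelope numbers do hold up (his figures throughout; the variable names are just mine):

```python
# Phil's assumed figures: sieve to 10T at <100000 p/s per machine,
# ~700000 candidates left, "500 days spare" after sieving half way.
depth = 10e12                     # sieve depth, p = 10T
rate = 100_000                    # p per second on an average machine
cpu_days = depth / rate / 86400
assert cpu_days > 1000            # "the job's >1000 CPU days"

spare_days = 500                  # sieve half way, factor with the rest
per_candidate = spare_days * 86400 / 700_000
assert 55 < per_candidate < 65    # ~1 minute per candidate
```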

smh
09-05-2003, 08:40 AM
Yes, sieving is much more cost-effective for the whole project at this moment. But there is still a lot of sieving to be done on the exponents currently assigned.

So to be cost-effective, a user would have to do some sieving on just the single exponent he's testing, which is obviously not efficient.

Another way of increasing the overall throughput is doing some P-1 on exponents which have not been sieved far enough before their PRP test.

Not ideal, but it does increase overall project throughput, so P-1 should be done on exponents before a PRP test.

MikeH
09-05-2003, 02:50 PM
Hopefully with the current scoring scheme, this factoring vs. sieving debate should sort itself out.

Some very rough numbers here. Sieving a 1T range at p~60T yields ~300 factors. That's ~17 factors per 1M of n, and only ~3.5 in the 200K target window. Given that the PRP effort is moving at about 100 days per 1M of n, most of us would consider (say) 50 of those 300 factors near enough to have a reasonable degree of confidence that they will score. Sieving that 1T range on an XP2100+ will take about 30 days.

On that same XP2100+ you could P-1 factor for 30 days and find about 6 factors. So right now P-1 will score more on-target factors than sieving, but sieving gives more 'almost' factors that gut feel says will be useful.

At 120T the sieve numbers will halve, while the P-1 numbers will remain fairly constant. At 250T there will be less than one on-target factor per 1T (a depressing thought) and about 10 in that good gut-feel region. By then, all things being equal, I'd expect a good number of PCs to have moved over to P-1. But then again, some PCs with little memory are not appropriate for P-1, maybe we'll find a prime ;) , there may be more sieve client improvements ;) , P-1 client improvements...
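The falloff in those projections follows the standard 1/p density heuristic (Mertens-style): doubling the sieve depth roughly halves the factor yield per unit range. A sketch using MikeH's anchor numbers (his figures; the model itself is the assumption):

```python
# Factor yield per 1T of sieving, modelled as proportional to 1/p,
# anchored at ~300 factors per 1T at p ~ 60T, with ~3.5 of those 300
# landing in the 200K target window.
def factors_per_1t(p_tera):
    return 300.0 * 60.0 / p_tera

assert round(factors_per_1t(120)) == 150        # yield halves at 120T
on_target = factors_per_1t(250) * 3.5 / 300.0   # scale the on-target share
assert on_target < 1.0                          # <1 on-target factor per 1T at 250T
```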

But at some point sieving will become too depressing to consider. That is a fact.

Mystwalker
09-05-2003, 03:02 PM
I've also been thinking in that direction over the last few days.
But for P4s, the picture changes a bit. The question for me is: by how much?

And maybe:
Is it wise to devote resources to sieving while there is too little P-1 power to keep up with PRP?
Plus, one has to consider that P-1 uses default bounds that are a tradeoff between the optimal factor rate and keeping up with PRP...

Moo_the_cow
09-06-2003, 12:36 AM
Originally posted by MikeH

At 120T the sieve numbers will halve while the P-1 will remain fairly constant.

But at some point sieving will become too depressing to consider. That is a fact.

At the same time, P-1 factoring will also take longer per candidate as time goes on since the exponents being PRP'ed will grow larger.

Nuri
09-06-2003, 01:27 PM
The point is, sieving depth will double much faster.

It's highly likely that the sieving depth will double within the next 3 to 4 months. On the other hand, PRP will probably only advance by ~25-30% in that time.

And so on....

MikeH
11-01-2003, 03:19 PM
hc_grove, I am very interested to know what PC power you have on P-1 right now. You seem to be finding a factor per day, which will put you 2nd in the 14-day average if you keep this up.:smoking:

I really want to know so I can get an updated view on when is a good time to start moving some of my resources from sieving to factoring.

hc_grove
11-01-2003, 06:12 PM
I have a dual Xeon P4 @ 2GHz (supports hyperthreading, so it's like 4 CPUs), which does work for others too (it's a server). Until yesterday I only ran one copy of SBFactor on it (24/7), but yesterday I decided that, at least for the weekend, I could run two copies. Tomorrow, when people get back to work, I'll see if it's too heavily loaded to run two copies on weekdays.

When I don't use it for other things, I set my laptop (which has a mobile P4 @ 2GHz) to do some tests during the day. It normally ends up being around 6 tests (about 140 minutes each), which is also the reason for some of the weird intervals I've been reserving.

Last weekend I did some tests (which found no factors) on a Xeon P4 @ 2.4GHz. If PRPing comes too close to catching up with one of my ranges, I might use that to do some of the tests, but otherwise I don't plan on using it too often (another server).

Since SBFactor seems to require far less nursing (it doesn't stop at random times), I have considered moving an Athlon XP1800+ (@1533) from PRPing to factoring. But after all, I am in this to find a prime! :D

I do think that having found six factors already is very lucky.
:notworthy

EDIT:

I just thought it might be useful to mention that the first machine has 2GB of RAM, but I only use 512 MB of it.

The laptop has 512 MB of RAM, and I've set it to use 448 MB (I don't really know if that makes any sense).

In all cases the other parameters are 45 and 1.5.

Mystwalker
11-02-2003, 03:23 PM
Maybe you could put the Athlon to sieving, as that's where its strength lies.


When I don't use it for other things, I set my laptop (which has a mobile P4 @ 2GHz) to do some tests during the day. It normally ends up being around 6 tests (about 140 minutes each)

Seems like it's very memory dependent, as my 3 GHz P4 completes 2 tests in ~85 minutes using PSB800. A 533 MHz memory clock only yielded 2 tests every 103 minutes, though...

Keroberts1
11-02-2003, 05:04 PM
Well, when using larger amounts of memory I thought it was a given that you would find more factors. I believe the chance of finding a factor doubles when you use twice as much memory. I don't think the tests take twice as long; if anything they go faster. When I was doing P-1 factoring I only had 256 MB on my machine, and I never found a factor. Perhaps someone with more memory flexibility can let me know how the factors-per-time ratio adds up with different settings.

hc_grove
11-02-2003, 05:27 PM
Originally posted by Mystwalker
Maybe you could put the Athlon to sieving, as that's where its strength lies.


I'll consider that. What about the single P3 I have PRPing? Should that be put to sieving or factoring if I get too tired of the hangs in the PRP client?



Seems like it's very memory dependent, as my 3 GHz P4 completes 2 tests in ~85 minutes using PSB800. A 533 MHz memory clock only yielded 2 tests every 103 minutes, though...

I'm more than willing to believe that the RAM in my laptop is not the fastest possible.

.Henrik

Mystwalker
11-02-2003, 09:10 PM
What about the single P3 I have PRPing? Should that be put to sieving or factoring if I get too tired of the hangs in the PRP client?

I'd prefer sieving if I were you...


Well, when using larger amounts of memory I thought it was a given that you would find more factors.

The chance of finding a factor should only depend on the bounds.
More memory speeds things up, but in my experience it's not that big a performance increase.
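The reason the bounds alone matter: a P-1 hit requires p-1 to be smooth with respect to the bounds, and smoothness probability is usually approximated by Dickman's rho, roughly u^-u with u = ln(x)/ln(B1). Memory never appears in that formula. A sketch (the approximation and the sample figures are my illustrative assumptions):

```python
import math

def smooth_prob(x, b1):
    """Rough chance that a random number near x is b1-smooth,
    using the crude u**-u approximation to Dickman's rho."""
    u = math.log(x) / math.log(b1)
    return u ** -u

# Raising B1 raises the hit rate; adding memory only affects speed.
assert smooth_prob(1e15, 100_000) > smooth_prob(1e15, 10_000)
```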