PDA

View Full Version : P-1 thread discussions



prime95
06-06-2004, 09:23 PM
My new code was doing about a curve an hour at 128 MB and what you call a 2.0 setting. This translated to B1=40000 and B2=340000. I'm not sure how long sbfactor would take on my 2.1 GHz P4.

If there is interest I can package up this executable for you guys to test. There are limitations: P4/A64 only, Windows only, different user interface. It will take quite a while before the gwnum library is ready for linking into sbfactor.

P.S. I'm amazed at how deep you guys have sieved. I'm used to having a 3 to 5% chance of a P-1 hit on GIMPS and here I'm getting less than a 1% chance!

Keroberts1
06-07-2004, 12:21 AM
When more memory is used the chances improve greatly. Perhaps you might have some input on the optimal bounds selection. I am curious whether the likelihood of finding a factor comes from experiments or from theory. Perhaps using the results file we could see exactly how many smooth factors exist for each range. That might be a better way of determining the likelihood of finding a factor than theory, which is often unreliable in extreme circumstances like the ones we have here. This isn't a question just for prime95; it's for anyone with input. How do the current optimal bounds selectors work?
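Keroberts1's empirical idea can be sketched directly: P-1 with bounds (B1, B2) finds a prime factor p exactly when p-1 is B1-smooth apart from at most one extra prime up to B2. A minimal, hypothetical checker (not part of any client) that could be run over the factors in a results file:

```python
def factorize(m):
    """Trial-divide m; return its prime factorization as {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def pm1_would_find(p, B1, B2):
    """True if standard P-1 with bounds B1/B2 would recover the prime p:
    every prime power dividing p-1 must be <= B1, except at most one
    extra prime (to the first power) in (B1, B2]."""
    big = [(q, e) for q, e in factorize(p - 1).items() if q ** e > B1]
    if not big:
        return True                       # stage 1 alone finds p
    if len(big) == 1:
        q, e = big[0]
        return e == 1 and B1 < q <= B2    # stage 2 catches one large prime
    return False
```

Counting how many known factors pass this test for candidate B1/B2 values would give the empirical hit rates Keroberts1 is asking about, instead of relying on theory alone.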

Nuri
06-07-2004, 01:54 AM
Originally posted by prime95
If there is interest I can package up this executable for you guys to test.

That would be wonderful. Maybe I'm wrong, but it sounds as if it's much faster. Thanks in advance.

prime95
06-07-2004, 04:08 PM
Here is the Windows SSE2-only pre-beta P-1 factorer. You can get it from ftp://mersenne.org/gimps/sobpm1.zip

There is a text file you'll need to read before using it.

Please try a few known factors before giving it a go. This is only lightly tested. Bug reports are most welcome.

Nuri
06-07-2004, 07:17 PM
My PC requires the mfc70d.dll file to run the client (and probably some others).

EDIT: Found the file here (http://codeplanet.userhost.de/). I'm posting the link in case somebody else requires the file (download at your own risk). It's really hard to find elsewhere.

By the way, what is 1 in Pfactor=k,2,n,1,49,0? Does it refer to the +1 in k*2^n+1, or is it something else?

prime95
06-07-2004, 07:35 PM
The zip file now contains two executables, a debug build and a release build.

The first four arguments in "Pfactor=k,2,n,1" are k,b,n,c in k*b^n+c.
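As a sketch, the mapping can be made explicit with a tiny parser; the dictionary keys are my own naming, and the reading of the last two fields ("how far factored" in bits and the times-PRP setting) follows the discussion in this thread:

```python
def parse_pfactor(line):
    """Split a 'Pfactor=k,b,n,c,bits,times_prp' worktodo.ini entry into the
    parameters of the candidate k*b^n+c."""
    assert line.startswith("Pfactor=")
    k, b, n, c, bits, times_prp = map(int, line[len("Pfactor="):].split(","))
    return {"k": k, "b": b, "n": n, "c": c,
            "sieved_to_bits": bits, "times_prp": times_prp}

# Example from this thread: 21181*2^5777180+1, assumed sieved to 2^49
entry = parse_pfactor("Pfactor=21181,2,5777180,1,49,0")
```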

Nuri
06-07-2004, 08:26 PM
First (very early) comments.

- Thanks for the new client beta. I'm sure we'll soon have a solid and efficient client.

- The client seems to be much faster. The estimated completion time for a k/n=6M pair with B1=30000, B2=270000 is ~60 min, whereas it took ~100 min for B1=10000, B2=100000 with the original client. PS: It might take longer; the expected completion time in the status window keeps increasing. Still, the new client looks faster.

- Where are the factors written? If they're written to the results.txt file as in the Mersenne project, could you please use another name for the file? (fact.txt would be very nice.) We also have a results.txt file, which contains our factors for p>25T.

- I first tried with:
Pfactor=24737,2,1008463,1,49,0
Pfactor=27653,2,5992989,1,49,0
in worktodo.ini. The client simply skips the first one and starts the second. We do not need to factor n=1M candidates, but it might signal a bug.


- Is it possible to use numbers like 1.4 as the last input in Pfactor=k,2,n,1,49,0? I tried some alternatives, but the client seems to skip the k/n pair (and delete it from worktodo.ini) whenever there is a decimal input there.


More will come later.

Nuri
06-07-2004, 08:39 PM
I've run Pfactor=27653,2,5992989,1,49,0 to 43% on stage 1. Stopped the client, added Pminus1=21181,2,5777180,1,10000,100000 to the first line of worktodo.ini (pushing Pfactor=27653,2,5992989,1,49,0 to the second line).

Here's what status looks like at that stage.

prime95
06-07-2004, 09:00 PM
Try putting this line in prime.ini: "results.txt=fact.txt"
That feature should rename the output file.


The Pfactor=k,2,1000000,1,49,1 line not getting executed is simply due to the program concluding that P-1 doesn't make sense for such a small number that has already been factored to 2^49. I'll look at outputting a line rather than silently skipping the worktodo entry.

It is a known limitation that Pminus1= lines are not estimated properly by Test/Status.

Keroberts1
06-07-2004, 09:10 PM
Well, this should help the P-1 effort maintain pace with the PRP effort, no? Unless of course some of these optimizations could be carried over to the PRP code too. I do remember hearing that much of the code between the two was very similar.

Nuri
06-07-2004, 09:26 PM
Thanks for quick answers George. I'll do some more tests in the following days and post here if I encounter something interesting.

It took 40 mins to find:

[Tue Jun 08 04:13:00 2004]
P-1 found a factor in stage #2, B1=10000, B2=100000.
21181*2^5777180+1 has a factor: 407337342926141

IIRC, it took 60 mins for SBFactor to finish the same k/n pair with B1=10000, B2=65000 on the same machine. So, significantly faster.


Notes:

- We need a program that will create the input lines for worktodo.ini when the user enters nmin and nmax.

- We need a second program that converts SOBPM1 output into the sieve submission format.
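A sketch of the first helper Nuri describes. The file names and exact input format are assumptions; the "factor | k*2^n+1" pattern mirrors the listings posted later in this thread:

```python
import re

def pairs_from_results(text):
    """Extract (k, n) from lines like '762098342145967 | 21181*2^5122772+1'."""
    return [(int(m.group(1)), int(m.group(2)))
            for m in re.finditer(r"\|\s*(\d+)\*2\^(\d+)\+1", text)]

def make_worktodo(pairs, nmin, nmax, bits=49, times_prp=0):
    """Emit a Pfactor= line for every (k, n) pair with nmin < n < nmax,
    sorted by n, ready to paste into worktodo.ini."""
    return ["Pfactor=%d,2,%d,1,%d,%d" % (k, n, bits, times_prp)
            for k, n in sorted(pairs, key=lambda kn: kn[1])
            if nmin < n < nmax]
```

With this, the user only supplies nmin and nmax; everything else comes from the existing factor listing.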

prime95
06-08-2004, 10:19 AM
Originally posted by Keroberts1
Unless of course some of these optimizations could be transcribed over to the PRP too. I do remember hearing that much of the code between the two was very similar.

These optimizations will help PRP too. However, I need to put the new code through a lot more testing before that happens. If I introduce a bug in P-1 factoring, the worst that happens is you miss a factor. A bug in PRPing could miss a prime!

Keroberts1
06-08-2004, 01:19 PM
great to hear :cheers:

Nuri
06-08-2004, 07:59 PM
I've created a work queue from the P-1 factors we've found so far.

Parameters are:
- 2^49 < p < 2^64
- 4000000 < n < 6365000
- factors found through sieve are excluded
- duplicates are included (there's only one such case)

There are 212 k/n pairs of such.

The client did not factor anything below n=4926821 for the setting 0 (two times PRP). Pfactor=10223,2,4926821,1,49,0 is the first work accepted by the client with a 160 MB RAM allocation on a PIV-1700.

So, there's a set of 140 k/n pairs available for testing.


I started testing them, but the whole set is too much for me.

I can post some blocks of k/n pairs (in Pfactor=k,2,n,1,49,0 format) if anyone is interested in testing the client.


BTW:

- putting the results.txt=fact.txt line in prime.ini seems to work

- as far as the times-PRP setting is concerned, it's not only decimals that do not work. Integers like 3, 4, etc. do not work either (not that they'll be used).

Mystwalker
06-09-2004, 05:00 AM
Originally posted by Nuri
I can post some blocks of k/n pairs (in Pfactor=k,2,n,1,49,0 format) if anyone is interested in testing the client.

*Interested* ;)

Nuri
06-09-2004, 06:29 AM
Sure, I'll post some when I go home tonight.

Nuri
06-09-2004, 03:02 PM
First, my test results so far.

The client skipped all of the 72 tests below n=4926821.

I tested 53 k/n pairs from 4926821 to 5673943 (both included).

As far as I can see, the results fall into two groups.

1. Tests below n=5480000:
This area is tricky. The client
- skipped 23 tests,
- found the factors for 4 tests, and
- could not find factors for 3 tests.

2. Tests above 5480000:
The client found all 23 factors.

-----

Please find below the detailed results. The format for the results below is:

The factor we found by P-1
Test client worktodo.ini input line
Test client result (or my comment if the test was skipped by the client)

1642178249068589 | 10223*2^4926821+1
Pfactor=10223,2,4926821,1,49,0
10223*2^4926821+1 completed P-1, B1=20000, B2=190000, WZ1: 7FA28ADC

298757939128829629 | 5359*2^4932486+1
Pfactor=5359,2,4932486,1,49,0
5359*2^4932486+1 has a factor: 298757939128829629

3022753468710323 | 5359*2^4985022+1
Pfactor=5359,2,4985022,1,49,0
5359*2^4985022+1 has a factor: 3022753468710323

7088096697598673 | 55459*2^5101474+1
Pfactor=55459,2,5101474,1,49,0
the client skipped this test

232926929846780599 | 4847*2^5102007+1
Pfactor=4847,2,5102007,1,49,0
4847*2^5102007+1 completed P-1, B1=20000, B2=190000, WZ1: 840FF103

1616859400969889249 | 10223*2^5107709+1
Pfactor=10223,2,5107709,1,49,0
the client skipped this test

1478855049607009 | 22699*2^5120614+1
Pfactor=22699,2,5120614,1,49,0
the client skipped this test

589241016128671903 | 4847*2^5121303+1
Pfactor=4847,2,5121303,1,49,0
4847*2^5121303+1 has a factor: 589241016128671903

762098342145967 | 21181*2^5122772+1
Pfactor=21181,2,5122772,1,49,0
the client skipped this test

17915702690728657 | 55459*2^5123038+1
Pfactor=55459,2,5123038,1,49,0
the client skipped this test

968947423096291 | 5359*2^5123766+1
Pfactor=5359,2,5123766,1,49,0
5359*2^5123766+1 has a factor: 968947423096291

1250773527381738389 | 24737*2^5124367+1
Pfactor=24737,2,5124367,1,49,0
the client skipped this test

3890219409956567 | 55459*2^5124826+1
Pfactor=55459,2,5124826,1,49,0
the client skipped this test

3448105179147583 | 33661*2^5205048+1
Pfactor=33661,2,5205048,1,49,0
the client skipped this test

1890202050860579 | 4847*2^5205111+1
Pfactor=4847,2,5205111,1,49,0
4847*2^5205111+1 completed P-1, B1=25000, B2=212500, WZ1: 86AB56F3

3892606265176382447 | 28433*2^5245033+1
Pfactor=28433,2,5245033,1,49,0
the client skipped this test

1006791864086011 | 22699*2^5301910+1
Pfactor=22699,2,5301910,1,49,0
the client skipped this test

2777060416448011 | 24737*2^5303767+1
Pfactor=24737,2,5303767,1,49,0
the client skipped this test

1120067536731277 | 21181*2^5304212+1
Pfactor=21181,2,5304212,1,49,0
the client skipped this test

5346081339902940419 | 28433*2^5307673+1
Pfactor=28433,2,5307673,1,49,0
the client skipped this test

2498287469935123 | 10223*2^5308541+1
Pfactor=10223,2,5308541,1,49,0
the client skipped this test

888917795346331 | 55459*2^5317534+1
Pfactor=55459,2,5317534,1,49,0
the client skipped this test

798402259594481 | 24737*2^5318911+1
Pfactor=24737,2,5318911,1,49,0
the client skipped this test

88310736740738929 | 24737*2^5320951+1
Pfactor=24737,2,5320951,1,49,0
the client skipped this test

340365599269738517 | 19249*2^5322218+1
Pfactor=19249,2,5322218,1,49,0
the client skipped this test

216324352986783397 | 10223*2^5326121+1
Pfactor=10223,2,5326121,1,49,0
the client skipped this test

333344685330888353 | 19249*2^5326862+1
Pfactor=19249,2,5326862,1,49,0
the client skipped this test

2271657032710253 | 21181*2^5329652+1
Pfactor=21181,2,5329652,1,49,0
the client skipped this test

2711637975518893 | 10223*2^5330405+1
Pfactor=10223,2,5330405,1,49,0
the client skipped this test

1298422551990943 | 55459*2^5336866+1
Pfactor=55459,2,5336866,1,49,0
the client skipped this test

1702062035206919 | 27653*2^5480061+1
Pfactor=27653,2,5480061,1,49,0
27653*2^5480061+1 has a factor: 1702062035206919

797778229476377 | 22699*2^5480470+1
Pfactor=22699,2,5480470,1,49,0
22699*2^5480470+1 has a factor: 797778229476377

783357760490677 | 55459*2^5481334+1
Pfactor=55459,2,5481334,1,49,0
55459*2^5481334+1 has a factor: 783357760490677

2490175855835303 | 4847*2^5481927+1
Pfactor=4847,2,5481927,1,49,0
4847*2^5481927+1 has a factor: 2490175855835303

384493174147192573 | 67607*2^5484267+1
Pfactor=67607,2,5484267,1,49,0
67607*2^5484267+1 has a factor: 384493174147192573

17227108922055551 | 19249*2^5486126+1
Pfactor=19249,2,5486126,1,49,0
19249*2^5486126+1 has a factor: 17227108922055551

19030971212858639 | 19249*2^5491598+1
Pfactor=19249,2,5491598,1,49,0
19249*2^5491598+1 has a factor: 19030971212858639

103596681290541143 | 33661*2^5550432+1
Pfactor=33661,2,5550432,1,49,0
33661*2^5550432+1 has a factor: 103596681290541143

2186703185472067 | 10223*2^5595929+1
Pfactor=10223,2,5595929,1,49,0
10223*2^5595929+1 has a factor: 2186703185472067

62447276586432949 | 28433*2^5598025+1
Pfactor=28433,2,5598025,1,49,0
28433*2^5598025+1 has a factor: 62447276586432949

412016110183932931 | 22699*2^5602654+1
Pfactor=22699,2,5602654,1,49,0
22699*2^5602654+1 has a factor: 412016110183932931

315608255140912871 | 24737*2^5606887+1
Pfactor=24737,2,5606887,1,49,0
24737*2^5606887+1 has a factor: 315608255140912871

46736570240095201 | 24737*2^5611207+1
Pfactor=24737,2,5611207,1,49,0
24737*2^5611207+1 has a factor: 46736570240095201

2272180866513823 | 21181*2^5616452+1
Pfactor=21181,2,5616452,1,49,0
21181*2^5616452+1 has a factor: 2272180866513823

5013169446620738903 | 4847*2^5621151+1
Pfactor=4847,2,5621151,1,49,0
4847*2^5621151+1 has a factor: 5013169446620738903

1042382178159409 | 21181*2^5621468+1
Pfactor=21181,2,5621468,1,49,0
21181*2^5621468+1 has a factor: 1042382178159409

1328735974435881607 | 22699*2^5627854+1
Pfactor=22699,2,5627854,1,49,0
22699*2^5627854+1 has a factor: 1328735974435881607

208629015721752851 | 10223*2^5629241+1
Pfactor=10223,2,5629241,1,49,0
10223*2^5629241+1 has a factor: 208629015721752851

28195479692754961 | 67607*2^5635211+1
Pfactor=67607,2,5635211,1,49,0
67607*2^5635211+1 has a factor: 28195479692754961

22851488069566757 | 5359*2^5646262+1
Pfactor=5359,2,5646262,1,49,0
5359*2^5646262+1 has a factor: 22851488069566757

11974770314169263 | 21181*2^5648732+1
Pfactor=21181,2,5648732,1,49,0
21181*2^5648732+1 has a factor: 11974770314169263

6051228394673221 | 24737*2^5666431+1
Pfactor=24737,2,5666431,1,49,0
24737*2^5666431+1 has a factor: 6051228394673221

9811491219057001 | 24737*2^5673943+1
Pfactor=24737,2,5673943,1,49,0
24737*2^5673943+1 has a factor: 9811491219057001

Nuri
06-09-2004, 03:05 PM
Please find below the input lines for the 72 tests that the client skipped in the first place, in case somebody else wants to give them a try with different hardware and RAM settings (mine was 160 MB).

Pfactor=19249,2,4003538,1,49,0
Pfactor=10223,2,4005017,1,49,0
Pfactor=5359,2,4006302,1,49,0
Pfactor=67607,2,4022171,1,49,0
Pfactor=28433,2,4027873,1,49,0
Pfactor=10223,2,4029221,1,49,0
Pfactor=5359,2,4038382,1,49,0
Pfactor=55459,2,4040698,1,49,0
Pfactor=24737,2,4054831,1,49,0
Pfactor=5359,2,4055790,1,49,0
Pfactor=33661,2,4064496,1,49,0
Pfactor=21181,2,4065980,1,49,0
Pfactor=55459,2,4069738,1,49,0
Pfactor=10223,2,4073981,1,49,0
Pfactor=67607,2,4091211,1,49,0
Pfactor=28433,2,4091425,1,49,0
Pfactor=21181,2,4108364,1,49,0
Pfactor=27653,2,4114797,1,49,0
Pfactor=4847,2,4122567,1,49,0
Pfactor=55459,2,4132918,1,49,0
Pfactor=21181,2,4134692,1,49,0
Pfactor=24737,2,4149607,1,49,0
Pfactor=67607,2,4150107,1,49,0
Pfactor=28433,2,4150417,1,49,0
Pfactor=55459,2,4151566,1,49,0
Pfactor=4847,2,4155831,1,49,0
Pfactor=21181,2,4156100,1,49,0
Pfactor=21181,2,4162724,1,49,0
Pfactor=5359,2,4164942,1,49,0
Pfactor=5359,2,4169766,1,49,0
Pfactor=10223,2,4172189,1,49,0
Pfactor=33661,2,4172496,1,49,0
Pfactor=55459,2,4173886,1,49,0
Pfactor=4847,2,4177407,1,49,0
Pfactor=27653,2,4184205,1,49,0
Pfactor=33661,2,4188096,1,49,0
Pfactor=22699,2,4252294,1,49,0
Pfactor=28433,2,4300225,1,49,0
Pfactor=33661,2,4447032,1,49,0
Pfactor=10223,2,4530041,1,49,0
Pfactor=33661,2,4532712,1,49,0
Pfactor=55459,2,4534834,1,49,0
Pfactor=27653,2,4538445,1,49,0
Pfactor=33661,2,4541544,1,49,0
Pfactor=21181,2,4543700,1,49,0
Pfactor=5359,2,4544686,1,49,0
Pfactor=10223,2,4545869,1,49,0
Pfactor=4847,2,4546887,1,49,0
Pfactor=28433,2,4546993,1,49,0
Pfactor=33661,2,4570008,1,49,0
Pfactor=5359,2,4571326,1,49,0
Pfactor=10223,2,4571705,1,49,0
Pfactor=27653,2,4571709,1,49,0
Pfactor=55459,2,4574206,1,49,0
Pfactor=28433,2,4620625,1,49,0
Pfactor=22699,2,4621942,1,49,0
Pfactor=21181,2,4647212,1,49,0
Pfactor=10223,2,4753865,1,49,0
Pfactor=21181,2,4754228,1,49,0
Pfactor=10223,2,4755461,1,49,0
Pfactor=33661,2,4761600,1,49,0
Pfactor=28433,2,4762105,1,49,0
Pfactor=5359,2,4765870,1,49,0
Pfactor=27653,2,4766289,1,49,0
Pfactor=10223,2,4780349,1,49,0
Pfactor=10223,2,4780397,1,49,0
Pfactor=33661,2,4780752,1,49,0
Pfactor=4847,2,4820583,1,49,0
Pfactor=10223,2,4822349,1,49,0
Pfactor=4847,2,4830063,1,49,0
Pfactor=10223,2,4832441,1,49,0
Pfactor=21181,2,4922300,1,49,0

Nuri
06-09-2004, 03:08 PM
Please find below the input lines for the remaining 87 tests. I'm not planning further tests for the time being, so feel free to grab anything you like.

Pfactor=33661,2,5674272,1,49,0
Pfactor=27653,2,5690625,1,49,0
Pfactor=24737,2,5705071,1,49,0
Pfactor=27653,2,5705673,1,49,0
Pfactor=4847,2,5707551,1,49,0
Pfactor=24737,2,5709871,1,49,0
Pfactor=27653,2,5713017,1,49,0
Pfactor=55459,2,5720278,1,49,0
Pfactor=67607,2,5726411,1,49,0
Pfactor=21181,2,5726972,1,49,0
Pfactor=55459,2,5730718,1,49,0
Pfactor=10223,2,5732837,1,49,0
Pfactor=10223,2,5733485,1,49,0
Pfactor=27653,2,5737929,1,49,0
Pfactor=28433,2,5741185,1,49,0
Pfactor=24737,2,5754367,1,49,0
Pfactor=4847,2,5754687,1,49,0
Pfactor=55459,2,5758438,1,49,0
Pfactor=33661,2,5767488,1,49,0
Pfactor=28433,2,5768017,1,49,0
Pfactor=10223,2,5774585,1,49,0
Pfactor=10223,2,5781065,1,49,0
Pfactor=19249,2,5794718,1,49,0
Pfactor=4847,2,5801991,1,49,0
Pfactor=21181,2,5804804,1,49,0
Pfactor=55459,2,5843506,1,49,0
Pfactor=4847,2,5846367,1,49,0
Pfactor=55459,2,5868118,1,49,0
Pfactor=10223,2,5869769,1,49,0
Pfactor=22699,2,5874598,1,49,0
Pfactor=24737,2,5880343,1,49,0
Pfactor=10223,2,5884637,1,49,0
Pfactor=27653,2,5888841,1,49,0
Pfactor=24737,2,5951647,1,49,0
Pfactor=33661,2,5963328,1,49,0
Pfactor=67607,2,5965947,1,49,0
Pfactor=22699,2,5989654,1,49,0
Pfactor=27653,2,5992989,1,49,0
Pfactor=27653,2,5998713,1,49,0
Pfactor=4847,2,6001023,1,49,0
Pfactor=24737,2,6003031,1,49,0
Pfactor=55459,2,6004246,1,49,0
Pfactor=21181,2,6025868,1,49,0
Pfactor=33661,2,6027912,1,49,0
Pfactor=55459,2,6032314,1,49,0
Pfactor=24737,2,6053911,1,49,0
Pfactor=33661,2,6055008,1,49,0
Pfactor=55459,2,6077758,1,49,0
Pfactor=24737,2,6079111,1,49,0
Pfactor=21181,2,6094532,1,49,0
Pfactor=24737,2,6094663,1,49,0
Pfactor=33661,2,6095592,1,49,0
Pfactor=10223,2,6096761,1,49,0
Pfactor=21181,2,6103172,1,49,0
Pfactor=24737,2,6104311,1,49,0
Pfactor=24737,2,6107023,1,49,0
Pfactor=28433,2,6107737,1,49,0
Pfactor=27653,2,6111897,1,49,0
Pfactor=10223,2,6113177,1,49,0
Pfactor=28433,2,6118297,1,49,0
Pfactor=4847,2,6161007,1,49,0
Pfactor=24737,2,6161263,1,49,0
Pfactor=24737,2,6162487,1,49,0
Pfactor=10223,2,6165605,1,49,0
Pfactor=21181,2,6167900,1,49,0
Pfactor=28433,2,6170953,1,49,0
Pfactor=55459,2,6172258,1,49,0
Pfactor=33661,2,6173496,1,49,0
Pfactor=4847,2,6174351,1,49,0
Pfactor=28433,2,6180505,1,49,0
Pfactor=28433,2,6181297,1,49,0
Pfactor=10223,2,6182117,1,49,0
Pfactor=55459,2,6182770,1,49,0
Pfactor=19249,2,6185138,1,49,0
Pfactor=19249,2,6185858,1,49,0
Pfactor=55459,2,6187594,1,49,0
Pfactor=22699,2,6191038,1,49,0
Pfactor=28433,2,6195265,1,49,0
Pfactor=4847,2,6220431,1,49,0
Pfactor=10223,2,6226937,1,49,0
Pfactor=4847,2,6281463,1,49,0
Pfactor=28433,2,6345265,1,49,0
Pfactor=28433,2,6350497,1,49,0
Pfactor=55459,2,6358546,1,49,0
Pfactor=10223,2,6360365,1,49,0
Pfactor=24737,2,6363511,1,49,0
Pfactor=19249,2,6364058,1,49,0

prime95
06-09-2004, 05:47 PM
Thanks Nuri! The 3 cases where the program did not find the factor are easily explained: the factor did not lie within the B1/B2 bounds chosen by the optimal bounds checker.

Remember, when P-1 was first run on these numbers, sieving had only been done to 2^47 or so. I'll bet if you changed the 49 to 47 in the Pfactor= lines, these factors would be rediscovered too.
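prime95's 49-to-47 retest amounts to rewriting the fifth field of each worktodo line. A sketch (the field position follows the Pfactor=k,b,n,c,bits,times_prp layout used throughout this thread):

```python
def lower_sieve_depth(line, new_bits=47):
    """Rewrite the 'how far factored' field of a Pfactor= line, e.g. 49 -> 47,
    so the optimal-bounds chooser assumes a shallower sieve and therefore
    picks deeper B1/B2 values."""
    name, body = line.split("=", 1)
    fields = body.split(",")
    fields[4] = str(new_bits)         # k,b,n,c,[bits],times_prp
    return name + "=" + ",".join(fields)
```

Running every skipped QA line through this before re-queuing it reproduces the retest prime95 describes.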

In summary, the program found 27 out of 27 factors that it should have found. That is encouraging. Feel free to use it on new k/n pairs.

I'll retest the 4,000,000 to 4,100,000 range you posted with different settings.

Mystwalker
06-09-2004, 06:49 PM
I started to test the last 3 numbers:

Pfactor=10223,2,6360365,1,49,0
Pfactor=24737,2,6363511,1,49,0
Pfactor=19249,2,6364058,1,49,0

The first one was correct, plus it

- used higher bounds,
- completed faster (~45 mins vs. ~65 mins) and
- was used for higher n's than before (6.3M vs. 6.0M).

Very good results, indeed! :cheers:

But the second test gave me a SUMOUT error right at the beginning. I first thought it was my hardware, but the problem occurs right from the start and is reproducible - but only for this test (of the 3 I have)!

As the other 2 use a 512K FFT and the one in question uses a 640K FFT, maybe that is the problem.
Additionally, the line "Zero-padded FFTs not coded yet!" is written for the problematic test.

On a sidenote, I found a crash bug. Once the program wanted to restart after the 5-minute penalty, it gave me this little fellow:

http://www.mystwalker.de/error.png

garo
06-10-2004, 05:56 AM
George,
Could you please explain why some tests are being skipped? There doesn't seem to be any pattern...

Mystwalker
06-10-2004, 11:41 AM
Testing 20 more candidates from the end of the list (starting from Pfactor=21181,2,6167900,1,49,0, inclusive).

Nuri
06-10-2004, 12:39 PM
49 means the client assumes the k/n pair was sieved to 2^49, right?

If so, I guess we should use 48 there. What would you recommend?

prime95
06-10-2004, 02:43 PM
I retested the 4000000 to 4100000 range with 47 instead of 49 and found all the factors.

I can reproduce the "Pfactor=24737,2,6363511,1,49,0" bug. Thanks for finding it! You've hit a case where I haven't finished the code yet. I didn't think we'd hit that case, but apparently there is a way.... I'll work on that soon.

prime95
06-10-2004, 02:49 PM
Originally posted by garo
Could you please explain why some tests are being skipped? There doesn't seem to be any pattern...

The main "problem" is using 49 instead of 47 for these QA tests.

However, you are probably referring to the fact that some n values are skipped even though a slightly smaller n and a slightly larger n are not. The short answer is that, unlike the previous version, the value of k affects the FFT size chosen. Larger k values switch to a larger FFT size sooner than smaller k values. Larger FFT sizes affect how many temporary variables can be allocated in the memory the program is allowed to use, which in turn affects the optimal bounds selection.

prime95
06-10-2004, 02:52 PM
Originally posted by Nuri
49 means the client assumes the k/n pair was sieved to 2^49, right?
If so, I guess we should use 48 there. What would you recommend?

For QA purposes we should probably use 48 or 47, whatever was commonly used at the time the factors were originally found.

For new work, I think you should use 49 as, from what I've read, you seem to have sieved to 2^49.

Frodo42
06-10-2004, 02:54 PM
Will it be possible to make a new Linux factorer as well?

Mystwalker
06-10-2004, 03:22 PM
Originally posted by prime95
For new work, I think you should use 49 as, from what I've read, you seem to have sieved to 2^49.

Sieving levels up to certain exponents can be seen here (http://www.aooq73.dsl.pipex.com/scores_p.htm) (second table). The range up to 2^48 is almost completely sieved; 2^48 - 2^49 is less than 10% done.
Or is the important point that everything up to 2^49 has been sieved to approx. 55% overall?

prime95
06-10-2004, 03:42 PM
I've uploaded a new sobpm1.zip that works around the 640K FFT problem. Instead you'll see several warnings about using a larger FFT size instead. The warnings can be ignored for now.

I've fixed the current_time crash bug after the 5 minute waiting period.

I'll see if I can build a Linux version.

As to whether you should use 49 or 48: Technically, you should use whatever value the siever will have reached by the time PRP testing will begin. So if you P-1 a 6 million exponent you should select 48 or 49 (either is fine as you are now at 48.55). If you were testing an exponent around 13 million, you might use 51 or 52.
In any event, don't get hung up on it. The smaller the number, the deeper the bounds that will be chosen.

prime95
06-10-2004, 04:46 PM
Linux version, totally untested: ftp://mersenne.org/gimps/sobpm1.tgz

Troodon
06-10-2004, 05:24 PM
Can we expect any improvement in the future for non-SSE2 processors?

Frodo42
06-10-2004, 06:29 PM
Seems it works OK for Linux also.

[Fri Jun 11 00:05:52 2004]
P-1 found a factor in stage #1, B1=30000.
4847*2^5801991+1 has a factor: 2015485189907779
[Fri Jun 11 00:34:42 2004]
P-1 found a factor in stage #2, B1=30000, B2=277500.
21181*2^5804804+1 has a factor: 5172107658035209


I'll keep it running for some more hours ...

Mystwalker
06-10-2004, 07:01 PM
Originally posted by prime95
I've uploaded a new sobpm1.zip that works around the 640K FFT problem. Instead you'll see several warnings about using a larger FFT size instead. The warnings can be ignored for now.

Trying to factor "Pfactor=55459,2,6172258,1,49,0", I first got this output:
http://mystwalker.de/status.png

followed by another crash:
http://mystwalker.de/error2.png


The problematic test from yesterday does work now, although it uses an FFT size of 768K instead of the 640K it tried earlier. Is this OK?
Of course, it takes a lot more time now - maybe 75% more...

Another thing that came to mind: assuming there are two tests with the same n value, the save file of the first test ("I<n>" - or is it "l<n>"?) will be overwritten, won't it?

Just for curiosity:
Is the release version any faster than the debug version?

Nuri
06-11-2004, 01:41 AM
Oooops!

That's what happens when I try to use Test/Status with the SOBPM1R client.

[Fri Jun 11 08:33:36 2004]
Zero-padded FFTs not coded yet! Using larger FFT size.
(the same line repeated 129 more times)

garo
06-11-2004, 09:30 AM
Thanks George. I was referring to the skipping. It is interesting to see that k affects the optimal bounds so much and makes P-1 "seem" even more non-deterministic than in GIMPS.

I've read the source code that chooses P-1 bounds several times. I believe I know how it works. And it is not intuitive. So near FFT boundaries one cannot really say what bounds will be chosen. But I am convinced that it is optimal.

Finally, I believe that 48 is the correct bound for the moment (i.e. for 6M exponents) as only 10% of the range above 2^48 has been sieved. So we are really at 48.1 and not 48.55.

prime95
06-11-2004, 02:31 PM
Mystwalker's latest debug assertion error is not good. For now, I'd stay away from those tests that are raising the "zero padded fft not coded yet" warning. I'll work on that soon.

Answering other questions:

The release version is not faster than the debug version.

The same improvements will come to the x87 code too. It will take quite a while. Since coding for SSE2 processors is a lot easier, I try out new ideas there first.

I'll fix the save file name problem and test/status bugs too. Thanks for the valuable QA testing.

Nuri
06-11-2004, 06:06 PM
???

55459*2^6399406+1 does not need P-1 factoring.

prime95
06-11-2004, 11:21 PM
Originally posted by Nuri
???

55459*2^6399406+1 does not need P-1 factoring.

What are your memory settings in Options/CPU?

Nuri
06-12-2004, 06:04 AM
This later happened 12 more times.

13 occurrences in a set of 30 workunits.

PS: Pfactor= settings were as usual.

ceselb
06-12-2004, 08:08 AM
How sure are we that these higher tests done with the beta client are correct?
I'm not sure these should be added until it's stable.

Mystwalker
06-13-2004, 07:00 AM
Maybe the memory is not seen as enough for a 768K FFT? :confused:

Apart from that, I finished the last 23 tests of Nuri's list.
Except for 3 tests that gave me the error message shown somewhere above, all factors were found.

Sieving depth was 49, granted memory 512MB.

prime95
06-14-2004, 05:27 PM
Thanks everyone for your testing. I'm now writing the zero-padded 640K FFT code, the lack of which is causing the current errors. This will take at least a week.

Nuri, try dropping to 48 bits instead of 49. This will probably eliminate the skipping of tests.

Y'all are welcome to continue using the pre-beta code. I'll post a new version when I can.

Nuri
06-14-2004, 06:15 PM
I tried it too.

A total of 70 k/n pairs tried: 30 pairs with 48 and 40 pairs with 49. All had the 0 (two times PRP) setting.

I'll also try 1 times PRP with the skipped tests later when I'm done with the batch I have.

This might be wrong, but what I see is:

All of the pairs where k <= 21181 (i.e. 4847, 10223, 19249, and 21181) are tested without a problem for both 48 and 49.

All of the pairs where k >= 22699 are skipped for both 48 and 49.


PS: If 1 times PRP does not work either, I'm planning to force the client with predefined B1 and B2 values (Pminus1=.....).


BTW, 235703760784331778069847 | 19249*2^6405242+1 :thumbs:
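
For anyone who wants to double-check a reported factor like this one, a few lines of Python suffice; the built-in three-argument pow does modular exponentiation, so the arithmetic stays fast even with n in the millions:

```python
# Check the reported P-1 hit: does f divide 19249*2^6405242 + 1?
k, n = 19249, 6405242
f = 235703760784331778069847

# Reduce k*2^n + 1 modulo f without ever building the full number.
residue = (k * pow(2, n, f) + 1) % f
print(residue == 0)  # True exactly when f is a factor
```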

garo
06-15-2004, 05:12 AM
Nuri, using a setting of 1 will halve the amount of time the program wants to allocate to P-1. So a setting of 1 will cause more tests to be skipped, not fewer. But please do go ahead and see what happens - I just wanted to let you know what you should expect.

Nuri
06-15-2004, 03:26 PM
Thanks garo. Yes, I know it should normally behave like that.

I consider this as help with the beta testing, and I feel there might be something wrong with the FFT size selection logic (not necessarily, but probably).

It is not normal that everything with k >= 22699 is skipped for both 48 and 49, while everything with k <= 21181 is tested for both 48 and 49.

In the end, if n is similar, the difference in digits between the highest k and lowest k is not more than 2 (3?). On the other hand, pairs with smaller n are tested successfully, whereas larger ones are skipped (or vice versa).

Anyway, trials with the 1 setting might show the same (or a different) pattern. I feel both might lead to useful conclusions.

Nuri
06-25-2004, 05:44 PM
Just out of curiosity.

Is there any progress on the client, or is this the final version?

Is anyone testing/using the current version?

prime95
06-25-2004, 08:30 PM
I'm still working on the code. I'm not near a final version. There is plenty more code to write and test. When it is done it can be added to the old P-1 factorer.

Your testing was useful in finding several minor problems. The current client should be stable enough for you to use in further factoring. I'll post improved clients as they become available.

dmbrubac
06-26-2004, 09:56 AM
It's getting lonely here :confused:
Nuri, hc_grove & dmbrubac - the black sheep of SOB

Nuri
06-26-2004, 01:30 PM
I guess nobody has the right to complain about P-1 lagging behind.

dmbrubac
06-26-2004, 01:46 PM
Originally posted by Nuri
I guess nobody has the right to complain about P-1 lagging behind.

That doesn't mean they won't though!

Mystwalker
06-26-2004, 02:01 PM
The bad thing is that I only have one P4. :(
All the computers in the Condor Pool are P3s, which are a lot better for sieving. With a little luck, there will be P4 Celerons in the Pool someday. Then I can devote them to factoring...

Nuri
06-26-2004, 02:26 PM
Breakdown of ranges between 5,870,000 and 6,440,000

garo
06-27-2004, 02:30 PM
I'm really waiting for George's new client. That should up the P-1 speed a bit. I'll probably put in a P4 for some time when the new client is released. And I think some others from TPR will also join in to help TPR improve its factoring rankings.

So gimme about a week or so....

prime95
07-29-2004, 12:29 PM
OK, here's a new beta to try. It is much more thoroughly tested and supports those so-called zero-padded FFTs. Yes, it is still SSE2 only.

ftp://mersenne.org/gimps/sobpm1.zip

If you Windows guys don't find any problems then I'll build a Linux version.

Mystwalker
07-29-2004, 03:47 PM
Got a crash right at the start - well, right at the start of the factoring, to be specific. Without a worktodo.ini, the program starts.

The last output is "Chance of finding a factor[...]"

Then, I get "instruction 0x004094f7 tried to write on memory segment 0x0110d000..." (the text might not be exact, but the values are - maybe they help you). :(

Using WinXP SP1 (german) on a P4c 2.4 GHz

prime95
07-29-2004, 05:09 PM
That doesn't make sense. Without a worktodo.ini the program should just idle. Can you double-check what worktodo.ini file it might have found? Also, try deleting old save files - they may not be compatible.

Mystwalker
07-29-2004, 06:56 PM
Ok, I did not express myself in a clear way. :(

Case 1 (worktodo.ini does exist):
Crash as described (basically, it also starts here (naturally))

Case 2 (worktodo.ini does not exist):
program starts, but does nothing / idles --> no crash (so far)
when creating worktodo.ini and selecting "Continue" --> See Case 1

prime95
07-29-2004, 11:19 PM
What is the first line in worktodo.ini?

Mystwalker
07-30-2004, 10:33 AM
Pfactor=24737,2,6704023,1,49,0

Sorry for forgetting to report that. :(
Guess I had other things in mind: wrote an exam today - the last of this semester! :elephant:

edit:
The next ones don't work, either:
Pfactor=19249,2,6704042,1,49,0
Pfactor=22699,2,6704110,1,49,0

Using the last version (SOBPM1.EXE - HE1), it works... :confused:

hc_grove
07-30-2004, 10:59 AM
The previous version (HE-1) has worked for me so far, and I even found a new factor with it, but now it has started failing. (I use the Linux version, so I can't try the new version yet.)

When I start the program with `./mprime -m` and enter 5 to resume factoring, it runs for a while but then it segfaults:


Your choice: 5

Mersenne number primality test program version HE-1
Starting P-1 factoring with B1=60000, B2=750000
Chance of finding a factor is an estimated 1.68%
P-1 on 24737*2^6825703+1 with B1=60000, B2=750000
Using FFT length 768K

zsh: segmentation fault ./mprime -m


The first line of my worktodo.ini reads:


Pfactor=24737,2,6825703,1,48,0


It's configured to use a maximum of 384 MB of RAM (I have 512).

garo
07-30-2004, 11:16 AM
OK I tried the same and I get an error too.
In the exception information section the code was: 0xc00000005
Address: 0x0000004094f7

Mystwalker
07-30-2004, 02:20 PM
Originally posted by garo
Address: 0x0000004094f7

The same address I had...

prime95
07-30-2004, 11:33 PM
The debug version worked great, but the release version crashes. Go figure.

I tracked it down and fixed it. Please try downloading again. Thanks.

Mystwalker
07-31-2004, 10:18 AM
Looks good - just started processing. :thumbs:
Let's see if everything else works as expected. *testing*

edit: I checked some old tests with known factors - all of them were found. :)
Meanwhile, I've started the search for factors no man has seen before. :D

Mystwalker
08-03-2004, 06:30 PM
Is there any way to determine what FFT size is going to be used a priori?
This way, one could concentrate on k/n pairs with a lower FFT size (e.g. 512K instead of 768K), which saves approx. 30% time.
As each test is done independently, there wouldn't be an effort-increasing side-effect...

prime95
08-03-2004, 09:52 PM
There isn't an easy formula for computing the FFT size. But maybe I can get close.

The formula is based on n/FFTLEN + log2(k)/2. So if you start with smallest k and find where it switches from 512K to 768K, then you should be able to figure out the n value that a different k value would switch over.

This is complicated slightly by the zero-padded 640K FFT. Sometimes this will be used before switching to the 768K FFT. IIRC, the zero padded FFT looks at 2*n/FFTLEN - 0.3.
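
George's rule of thumb can be turned into a rough estimator. This is only a sketch based on the n/FFTLEN + log2(k)/2 formula quoted above - the actual cutoff constants inside the library aren't given here, so it predicts the shift of the switchover point between two k values rather than the switchover itself:

```python
from math import log2

def switchover_shift(k_ref, k_other, fftlen=512 * 1024):
    # The per-word load is roughly n/fftlen + log2(k)/2, so at a fixed
    # FFT size, a smaller k leaves room for a larger n before the
    # switch to the next size. Equating the loads gives the shift in n.
    return fftlen * (log2(k_ref) - log2(k_other)) / 2

# If k=28433 switches from 512K to 768K at some n, then k=4847 should
# switch roughly this many n higher:
print(round(switchover_shift(28433, 4847)))  # on the order of 650,000-670,000
```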

garo
08-06-2004, 06:08 AM
George,
Any chance of a linux version?
Thanks.

Keroberts1
08-12-2004, 02:29 PM
I tried running the new version of the P-1 factorer, but it kept crashing at, I believe, exactly 50% of the way through stage 1. Anyone else have this problem? Any ideas what I might be doing wrong?

6881007 4847 are the numbers I'm using, plus 48 for sieve depth and 256 for memory. I tried running much smaller numbers and those seem to go fine.

I've also tried a few other values above the PRP boundary; that seems to be the only k/n pair that has the problem. Could someone else try it out and see if they get any sort of an error?

prime95
08-13-2004, 11:14 AM
4847*2^6881007+1 completed P-1, B1=50000, B2=537500, WZ1: B21DFCCE

Did this problem happen after a long run? If so, maybe there is a memory leak. Use task manager to see if memory used is significantly increasing over time.

prime95
08-13-2004, 11:18 AM
Originally posted by garo
Any chance of a linux version?


Yes. Download ftp://mersenne.org/gimps/sobpm1.tgz

It is totally untested - I don't have a linux P4 machine.

jjjjL
08-14-2004, 12:07 AM
i should read the forum more! i just worked on making a new factorer and come here to share the good news and george has been working on it for the past few months. :) can't complain... it's my own fault i've let my job consume all my time. hopefully that can stop for a bit.

any interest in testing my inferior, old command line style factoring client with these new improvements? ;)

i have a windows build. email me at lhelm@seventeenorbust.com if you'd like it. oh, i've also been covertly testing out the regular SB client with this code for awhile. should be releasing that soon once we test a few more large numbers and get matching residues. everything looks good, i just didn't want to get too much buzz going until the client was validated and closer to release.

Cheers,
Louie

Keroberts1
08-15-2004, 12:37 AM
Have you tried setting it on secret testing? A significantly faster client could speed up the double check, and there would be no risk since it's only seeing if the residues match anyway. I'd be willing to put a couple machines to work on the secret account if I could test the new client. I only have AMD machines though, and only a couple. I could probably manage a few hours a day on a single P4 also, but that's only a few hours a day and isn't likely to make much difference. I don't use it at all now because I rarely have time to finish many secret tests. I only use it for the occasional factoring.

prime95
08-15-2004, 08:17 PM
Originally posted by jjjjL
any interest in testing my inferior, old command line style factoring client with these new improvements? ;)


I think most of the folks here would prefer to use the old command line style client. It has some advantages, like it understands the SOB data files and supports settings other than 1.0 and 2.0 for that-parameter-I-can't-remember-the-name-of.

Thanks again to all the P-1 testers here. You found a couple of key bugs that made Louie's integration job much easier. I like testing early versions of changes to the FFT code with P-1 as a mistake (you miss finding a factor) is not a catastrophe (compared to missing a prime find).

Frodo42
08-16-2004, 01:26 PM
Louie, any chance of making a Linux version of your new factorer?
I would sure like to use it when I get back home tomorrow and start factoring another range.

cedricvonck
08-16-2004, 04:24 PM
sbfactor.exe 7000000 7005000 47 1.5 1 1 + 256

Is this correct?
Please PM me.

Regards

garo
08-17-2004, 11:35 AM
Hi George,
I got a segmentation fault on running the mprime version you posted. The binary size is 2326620 bytes and the timestamp Aug 13 16:14 (timezone GMT+2).

The worktodo consisted of only one line:
Pfactor=21181,2,6799052,1,48,0

This works on Windows but on linux this is what I get:

Mersenne number primality test program version HE-2
Starting P-1 factoring with B1=50000, B2=575000
Chance of finding a factor is an estimated 1.26%
P-1 on 21181*2^6799052+1 with B1=50000, B2=575000
Using FFT length 768K
Segmentation fault

I know you are probably busy with the cleanup so take your time fixing it.

vjs
08-17-2004, 12:21 PM
Louie,

I have about 3 machines working on supersecret right now. I also noticed you stated you have a new client for SoB.

If you'd like, I could install the new client on the supersecret boxes I have running. I won't use it on normal PRPs or distribute it, promise.

E-mail me a copy and directions if you wish:

mustang (no spaces here all one word) 35157L @ yahoo dot com

prime95
08-17-2004, 04:48 PM
Originally posted by garo
I got a segmentation fault on running the mprime version you posted.

Try the new one. The data segment was not properly aligned.

garo
08-18-2004, 12:46 PM
It is working for me now. I have not tested it with any known factors yet. Since the range I'm doing would not get done anyway, I do not think much will be lost. Will report back after finishing the range to see if any factors were found.

One question for you, George. I know you answered something similar on the mersenneforums recently, but I didn't fully understand your answer.

While testing 28433*2^6799153+1 your program chose the 768K FFT, and while testing 4847*2^6799167+1 it chose the 512K FFT. Can you shed some light on why a smaller FFT was chosen even though the difference in the number of digits in the entire number was minimal (in fact the second number was bigger)? I presume it has to do with the k value, and that k has a much greater influence on the FFT than the n value does. Can you briefly explain the math behind it?

Lastly, in the version I'm testing, I tried the tests with factoring depths of 48 and 49, which gave B1/B2 bounds of 50,000/575,000 and 40,000/430,000 respectively. However, the time taken to complete these tests with such different bounds was almost the same. Does this have to do with zero-padded FFTs? Or is there something wrong with this picture? Here is the relevant log snippet:


Starting P-1 factoring with B1=40000, B2=430000
Chance of finding a factor is an estimated 0.911%
P-1 on 33661*2^6799128+1 with B1=40000, B2=430000
Using FFT length 768K
[Aug 18 12:47] 33661*2^6799128+1 stage 1 is 13.93% complete. Time: 299.320 sec.
[Aug 18 12:52] 33661*2^6799128+1 stage 1 is 27.79% complete. Time: 299.368 sec.
.....
Mersenne number primality test program version HE-2
Starting P-1 factoring with B1=50000, B2=575000
Chance of finding a factor is an estimated 1.26%
P-1 on 33661*2^6799128+1 with B1=50000, B2=575000
Using FFT length 768K
[Aug 18 13:04] 33661*2^6799128+1 stage 1 is 51.35% complete. Time: 302.847 sec.
[Aug 18 13:09] 33661*2^6799128+1 stage 1 is 65.21% complete. Time: 302.871 sec.
[Aug 18 13:14] 33661*2^6799128+1 stage 1 is 79.07% complete. Time: 303.078 sec.


Are we getting something for free here? At least in stage 1 we are getting an increase of 10,000 in the bounds with a 1% increase in time.

garo
08-19-2004, 11:37 AM
Lastly, in the version I'm testing, I tried the tests with factoring depths of 48 and 49, which gave B1/B2 bounds of 50,000/575,000 and 40,000/430,000 respectively. However, the time taken to complete these tests with such different bounds was almost the same. Does this have to do with zero-padded FFTs? Or is there something wrong with this picture? Here is the relevant log snippet:

George,
Do not worry about my second question - I was not able to replicate it! I do not know why it happened the first time, but it doesn't seem to be happening now. With lower bounds the program does indeed take less time.

Finally, I checked the Linux version with some known factors, stage 1 and stage 2, and they were all found. Pity I haven't found any new factors yet :(

prime95
08-19-2004, 01:54 PM
You weren't imagining things. When you started the test with B1=40000, prime95 created a save file. When you resumed with B1=50000 it had to first complete the B1=40000 to not lose the work you had already done. Then it should have gone from 40000 to 50000.

Now it could well be that the % complete lines are inaccurate in this case.

As to k and FFT size. Yes, k now has a big effect on FFT size selection. Log2(k)/2 bits must be reserved in each FFT data word. Since log2(28433)/2 is roughly 7.4 and log2(4847)/2 is roughly 6.1 you have 1.3 more bits per FFT word available. Thus, for FFT length of 512K, you should get 512K*1.3 (about 650,000) higher values of n before changing to a larger FFT size.

garo
08-19-2004, 03:40 PM
Thanks George. I vaguely remember the log2(k)/2 expression from your other post.

About the 40,000 -> 50,000 confusion, I'm pretty sure it's the savefiles that caused the confusion. The rate at which the program "apparently" proceeded at the 40,000 bounds was exactly the same as the actual rate at the 50,000 bounds. But I was starting and stopping the program a lot to test bounds etc. so evidently there was some mixup in the % complete figure.

One more question :)

The sieving in SOB is about 31% of the way through 2^48-2^49. So a factoring depth of 48 is too little and 49 is too high. Does that also mean that the "chance of finding a factor" figure is a tad high for 48 (I'm getting 1.26%) and a tad low for 49 (0.911%)?

I ask this because, since a lot of numbers in SOB are not getting any P-1 done on them, I want to stick with the 49 limit, which will be about 20% faster. The factor probabilities of 1.26% and 0.911% imply that I'll actually find fewer factors per unit time with the 49 limit. But I'm pretty sure this is not the case, since the actual factoring depth of 48.3 implies that those probabilities are incorrect.

To wit, will I find more factors per unit time with the factor depth of 49 giving me bounds of B1=40k and B2=430k and taking 20% less time than a factor depth of 48 giving B1=50k and B2=575k?

Final question, though I think others on this forum can probably answer this as well. As George's last post indicated, the k value makes a huge difference in the FFT size for P-1. But since the SOB primality testing code is older, does that mean it does not have the same dependency on k? (Edit: I just did a search and found a post by kugano stating that indeed it does not.) Hence, k=4847 is much better for P-1, as it uses the 512K FFT for the 6.8M range and so takes much less time for the P-1 while having the same probability of finding a factor and saving the same amount of time for a primality test.

[makes a dash for the P-1 reservation thread]

prime95
08-19-2004, 08:53 PM
Yes, the probability is between 0.9 and 1.2%

I've been thinking about P-1 on SoB. Unlike GIMPS, the number of PRP tests saved if a factor is found is not clear. It all depends on how far behind double-checking is and when you think a prime will be found that will stop the double-checking effort.

I've just changed prime95 so that it takes a floating point value for how-far-factored. The last argument is no longer a double-check flag; it is now a floating point value representing the number of PRP tests that will be saved if a factor is found.

Furthermore, since SoB does not have enough P-1 clients, is it better to do more exponents at a 1.0 (or less?) PRP tests saved setting or fewer exponents at a higher PRP tests saved setting? I don't know. My gut reaction says it doesn't matter. The difference in efficiency is likely to be very, very slight.

I'll try to upload the new prime95 in the next day or two.

garo
08-20-2004, 08:46 AM
Thanks George. Yes, I think that in the long run it does not matter. A lot of discussion in the forum when P-1 was first introduced settled on the value of 1.25 as the probable number of tests saved. Personally, given the lack of P-1 right now, I'd go with whatever setting gives the maximum number of factors per unit time regardless of "optimality". In either case, the amount of time saved is likely to be more or less the same but more factors is always better, no? :)

Mystwalker
08-23-2004, 03:49 PM
Like garo said:

As there is (more than) enough work to do, it is best to put the setting at a value that generates the most factors per unit time. Plus, it makes no sense to P-1 factor the remaining tests once they have been reached by PRPing.

vjs
08-23-2004, 08:44 PM
I had a question about P-1 factoring.

I have a couple machines on double checks right now.
Has any P-1 factoring been done for 900k<n<1M?

I only see P-1 for n>4M.

Assuming the answer is no for 900k<n<1M, would it make sense to do P-1 factoring on these? How many n per day would one expect to remove with a P-833?

And what settings would I use, considering 300k<n<3M was done up to ???75T??? 2^46, and the new sieve doesn't cover values n < 1M.

I realize this may also be a moot point since I'm not sure how n values correspond to p. Does sieving everything <300T eliminate all factors for n<1M?

How do p and n relate anyway?

garo
08-24-2004, 06:21 AM
P-1 only makes sense for numbers that are reasonably large - say above 4M - especially when the numbers have already been tested once. So the short answer is: no, it does not make sense to do P-1 for numbers under 3M.

vjs
08-24-2004, 10:16 AM
I was just thinking that if I could eliminate more tests through P-1 than by actually PRPing, it would make sense.

I actually tried it out yesterday for a few n. I was getting a rate of ~240 sec per pair and a probability of 0.014, which would mean I should get one factor every 4.7 hrs. That would be pretty good for that machine, since I don't think I can do one PRP in that time.

One problem was I needed to use a setting of <42. In addition, we have already sieved everything below 75T.

Now it brings up the other question of how p and n relate.

And is that part of the reason why we switched to a 1M<n<20M file for ranges above 75T?

garo
08-24-2004, 03:16 PM
Yes, but when you use a setting of 42 the factor probability number you get is inaccurate. So you would be very lucky to get a factor every 4.7 hours. More than likely it would take about a day to find a factor.

prime95
08-24-2004, 04:07 PM
If prime95 is refusing to do P-1 when you use a setting larger than 42, then it is telling you that P-1 does not make sense - you will eliminate candidates faster by just doing the PRP double-check.

This is not surprising for your small exponents. P-1 barely makes sense for current exponents in the 6 millions, given the deep sieving that's been done.

vjs
08-24-2004, 06:25 PM
sbfactor.exe 7000000 7005000 47 1.5 1 1 + 256 is this correct

cedricvonck,


Yes and no. First, you don't have to specify the 1 1, since it will default to one processor, one instance.

Second, I suggest using 49 instead of 47.

This number basically signifies the sieve depth.

Since all of the factors below 2^47 have been found through sieving, and <0.36% remain between 2^47 and 2^48, a setting of 49 would be best.

It might/will be too soon to use 50, but 2^49 will probably be reached by the time PRP reaches 7M.

The 256 is the amount of memory you have, correct?

So go with

sbfactor.exe 7000000 7005000 49 1.5 256

hc_grove
08-25-2004, 10:28 AM
Originally posted by vjs
Does sieveing everything <300T eliminate all factors for n<1m????


Using the simple fact that at least one factor of a composite number has to be smaller than the square root of the number, it can easily be shown that sieving everything below 300T only eliminates candidates with n<96, and as we have factors for all candidates with n<1000 except one, that doesn't really help us.
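
The arithmetic behind that bound, as a quick sanity check (taking 300T as 3*10^14):

```python
from math import log2

p_limit = 300e12  # sieve depth "300T": all prime factors below this are known

# A composite k*2^n+1 must have a prime factor no larger than its
# square root, so the sieve alone only settles numbers up to
# p_limit**2 - i.e. those with n + log2(k) <= 2*log2(p_limit).
max_bits = 2 * log2(p_limit)
print(max_bits)  # about 96.2, hence the "all n < 96" above
```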

vjs
08-25-2004, 03:40 PM
Thanks HC_grove,

The major reason for my question was that I didn't believe sieving to 300T would eliminate all n<1M. So it really is as simple as: if p=300T, the number is 300T.

For example, if p=6M, the number is 6,000,000.

I was confused because if n=3 the number is actually 8. :rolleyes:

2^n
n=3

So 2x2x2=8 :rolleyes:


one factor of a number has to be smaller than the square root of the number

Of course as soon as you have one factor it's not prime.

Thanks very much for the explanation.


So sbfactor wouldn't let me run any n<1M unless I reduced the 48 value to something very small. The reason is that sbfactor actually makes a calculation based on n, 2^?, the 1.0-1.5 setting, etc., to check if you're wasting time.

garo
08-25-2004, 03:46 PM
vjs,
The setting is not based on whether all those numbers have already been factored, but on the calculation the P-1 code makes of the probability of finding a factor given that the number has been sieved to 2^48. Clearly, if you tell sbfactor that the sieving has only happened up to 2^42, the probability of finding a factor it calculates will be larger than at 2^48, and as a result it may think it is worthwhile to do P-1 factoring. But in reality it will not be worth it, as the factor probability is based on incorrect information.

vjs
08-25-2004, 06:07 PM
Sorry garo, but now your response has confused me even further about the P-1 settings, because changing the sieved-to value changes the time it takes to complete a k,n pair. There is also the other variable that one can set between 1.2-1.5 etc. This also has an effect on completion time.

I did a series of tests to find the best setting (I have 512 MB):

sbfactor 6840100 6840110 48 1.5 400 (k=10223, n=6840101)
Yields:
B1=30k
B2=255k
Prob Suc=0.008214
Sq=65375
stage 1 trans, time (s) 86450, 3421
stage 2 trans, time (s) 37154, 5514
total time 92 min

sbfactor 6840110 6840120 49 1.5 400 (k=55459, n=6840118)
Yields:
B1=20k
B2=155k
Prob Suc=0.004921
Sq=42832
stage 1 trans, time (s) 57640, 1865
stage 2 trans, time (s) 23540, 2907
total time 49 min

sbfactor 6840120 6840130 48 1.2 400 (k=10223, n=6840125)
Yields:
B1=20k
B2=155k
Prob Suc=0.005879
Sq=43069
stage 1 trans, time (s) 57640, 2282
stage 2 trans, time (s) 23540, 3609
total time 61 min

sbfactor 6840100 6840110 49 1.2 400
Yields:

Program won't run

And it won't start running until n=7172069 with a 49 1.2 setting.

sbfactor 7172000 7172100 49 1.2 256 <-- note 256 used here
B1=15k
B2=98k
Prob Suc=0.003879
Sq=31386

Didn't let it run.

I realise that to do this correctly I should have run the same k,n pair, but the numbers do show trends.

It looks like the best setting for the probability of finding a factor per unit time (Prob Suc / total time) is actually 49 1.5.

So what do the 48/49 and 1.2-1.5 settings do exactly?
Do they simply change the B1 and B2 values and the number of squarings...

If so, this would mean that sieving really has nothing to do with P-1 efficiency.

In other words, by sieving everything below 2^48, all it does is decrease the probability of finding a P-1 factor in that range. To counteract this effect, P-1 simply changes settings from 48 to 49, so that less time is spent on any one n in an n-range, thereby increasing the n's per unit time. :confused:

I know this is not the case.

vjs
08-30-2004, 11:18 AM
I did some investigation this weekend, and as far as I can tell the 48/49/50 value basically tells the program not to look for factors below that depth, because they have already been searched for and found by the sieve.

So by using 48, it simply doesn't investigate/spend time on factors below 2^48.

So the best setting currently is 49, since a large portion of the factors between 2^48 and 2^49 have already been found. So using 49 1.5 is better than 48 1.2.

The question is: when should we switch to 49 with some setting >1.5?

Mystwalker
08-31-2004, 05:22 PM
Originally posted by prime95
I'll try to upload the new prime95 in the next day or two.

As far as I can see, it's still the old version... :(
Could you upload the new one? Many thanks! :cheers:

garo
09-02-2004, 07:23 AM
Yes George, new version!!! :)

OK vjs, let me have another go at this. I was out of town this past week so sorry for the delay.

The 48,49,50 etc. you enter is only used by the factoring program to calculate the probability of finding a factor. P-1 works differently from sieving so it cannot control the bit range in which factors are searched. The principle of P-1 is that if the factor found is P then the factors of P-1 are all below B1 except for one which may be between B1 and B2. In fact, if all the factors of P-1 (i.e., the factor - 1) are below B1 then the factor is found in stage 1.
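
That principle can be illustrated with a toy stage 1 in Python. This is only a sketch of the idea - not prime95's optimized code - and the small composite used here is picked just so the example works at a glance:

```python
from math import gcd

def pminus1_stage1(N, B1):
    """Toy P-1 stage 1: raise a base to the product of all prime powers
    <= B1, mod N. If N has a factor P such that P-1 is built only from
    primes <= B1, the result is 1 mod P, and a gcd exposes P."""
    a = 3
    for q in range(2, B1 + 1):
        # crude trial-division primality check; fine for a toy B1
        if all(q % d for d in range(2, int(q**0.5) + 1)):
            qpow = q
            while qpow * q <= B1:  # lift q to its largest power <= B1
                qpow *= q
            a = pow(a, qpow, N)
    return gcd(a - 1, N)

# 2227 = 17 * 131, and 131 - 1 = 2 * 5 * 13 is 13-smooth,
# so B1 = 13 is already enough to pull out the factor 131.
print(pminus1_stage1(2227, 13))  # 131
```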

So essentially, sieving only affects factoring in that the sieving depth entered into the worktodo line (48,49 etc.) changes the probability with which the factoring program thinks a factor will be found and hence it affects the B1 and B2 values chosen by the factoring program.

The "real" probability of finding a factor depends on the real sieving depth and not the number you have entered. Right now the sieving has been completed to 48.37 or so. Therefore, when you enter 48, the factoring program overestimates the likelihood of finding a factor and thus chooses higher B1,B2 bounds. Whereas if you enter 49 then the factoring program underestimates the probability of factor being found and chooses lower B1, B2 bounds.

So how are the B1, B2 bounds chosen and how is the amount of memory allocated to factoring change this choice?

Memory: Given a number and fixed B1,B2 bounds, more allocated memory decreases the processing time, as there is more space for temporary variables. That is, the old space-time tradeoff comes into effect. However, the effect is NOT linear or anything even close to that. George has himself stated several times that additional memory above a certain point - and I would put the number at about 256MB for the 7M numbers that are currently being tested, though he would put it even lower - will have diminishing benefits, and it certainly is not worth buying an extra stick of memory only for the purpose of P-1.

However, increasing the memory can sometimes increase the amount of time taken for P-1 as the extra memory can cause the B1,B2 bounds to be raised. The reason for this lies in how optimal B1,B2 bounds are chosen.

Optimality: The ultimate objective of P-1 factoring is to increase the throughput of the project as a whole by eliminating numbers faster. It is not to find factors as quickly as possible. I repeat: it is not finding factors at the greatest possible speed.

Let me illustrate this. Suppose you have a choice between two B1,B2 settings. The first takes 60 minutes to complete and finds a factor with probability 1%. The second takes 80 minutes and finds a factor with probability 1.25%. Notice that the first setting will find a factor every 6000 minutes on average, while the second will find one every 6400 minutes. So should we choose setting 1 over setting 2?

NO : This is counter-intuitive, but our choice really depends on how much time a primality test takes and, if a factor is not found, the average number of tests that will be required. Now, the average number of Lucas-Lehmer tests required for a Mersenne number is about 2.1 - I may be wrong about the exact number - since each number is double-checked and each test has a certain probability of being faulty due to hardware errors. In SoB, on the other hand, this number is not known, as the project is not interested in finding every prime number but only one prime for each k. Louie once speculated that this number is 1.25; the reason is that numbers will not need to be double-checked if a prime is found for that k. Note that you were using 1.2 and 1.5 as input values instead of 1.25.

So, for the sake of our analysis, let us go with 1.25. Let us also assume that each test takes 10,000 minutes, so each factor found will save us 12,500 minutes. Let us now look at the average time taken to eliminate a number if the P-1 test is performed at settings 1 and 2 respectively.
Remember that a P-1 test is now performed for every number, and a primality test is saved whenever a factor is found.

Setting 1:
Time for P-1 test: 60
Prob of finding factor: 1%
Average time spent in primality testing: 12,500 - (12,500 * 0.01), since 1% of the time we do not primality-test the number because a factor was found.
Therefore, average time spent per number =

60 + 12,500 - (12,500 * 0.01) = 12,435.

Setting 2:
Time: 80
Prob: 1.25%
Avg time in primality testing: 12,500 - (12,500 * 0.0125)

Total avg time: 80 + 12,500 - (12,500 * 0.0125) = 12,423.75 minutes.


So you can see that with setting 1 we save 65 minutes per number on average, but with setting 2 we save 76.25 minutes - despite the fact that setting 1 finds factors at a rate of one per 6000 minutes while setting 2 finds them at only one per 6400 minutes.
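The calculation above fits in a few lines of code. This sketch uses the same assumed numbers as the post (10,000-minute test, 1.25 tests saved per factor):

```python
# Sketch of the throughput argument: the best P-1 setting minimizes the
# expected total time per candidate, not the time per factor found.
def expected_time_per_number(pm1_minutes, factor_prob,
                             test_minutes=10_000, tests_saved=1.25):
    # Every candidate pays the P-1 cost; with probability factor_prob
    # the PRP work (test_minutes * tests_saved) is skipped entirely.
    prp_cost = test_minutes * tests_saved
    return pm1_minutes + prp_cost * (1 - factor_prob)

setting1 = expected_time_per_number(60, 0.01)      # ~12,435 minutes
setting2 = expected_time_per_number(80, 0.0125)    # ~12,423.75 minutes
```

Setting 2 wins on expected time per number even though it finds factors at a slower rate.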

Hope this clears it up. As you can see, the complexity of the issue necessitated the length of the post. But if you have any more questions, please feel free to ask.

BTW, at the end of all this I should note that since not every number gets a P-1 right now in SoB - in fact more than half of them don't - all this analysis goes out the window, and one should simply choose the sieve depth/factor value setting that gives us the maximum number of factors per unit time. This will hold as long as P-1 does not match or exceed the rate of primality testing.

But this brings us back to the problem that entering a sieve depth of 48 or 49 will not accurately compute the probability of finding a factor, so we'll have to wait till George uploads the new version that takes floats as input for the sieve depth.

Frodo42
09-02-2004, 08:29 AM
Great post garo :thumbs:

I now understand a whole lot more about P-1 factoring.

I would suggest this post be added to hc_grove's page on P-1.

So George, let's get the new version (also Linux version please :Pokes: )

garo
09-02-2004, 09:12 AM
I'm glad you liked it. Methinks I will also post a modified version on the mersenneforums, as people have raised questions about P-1 so many times before. It took me a while to understand this - including looking at the Prime95 source code and asking George many questions.

hc_grove
09-02-2004, 04:37 PM
Originally posted by Frodo42
Great post garo :thumbs:


I agree!



I would suggest this post be added to hc_grove's page on P-1.


I'd like to do that, but only if garo accepts it.

The page is Factoring for Seventeen or Bust (http://www.sslug.dk/~grove/sbfactor/)



So George, let's get the new version (also Linux version please :Pokes: )

If the binary object-files for doing the math will work with the old factorer I'd be happy to see if I make a new version, if he'll release those.

.Henrik

vjs
09-02-2004, 06:42 PM
Wow Garo thanks a whole bunch,

This really clears things up a great deal. I'll have to re-read what you said a few times to understand P-1 better, but at least it makes more sense and of course poses more questions.

I'll have to do an internet search on optimal B1 and B2 values vs. memory allocation, unless you'd like to go into more detail.

From what you stated it's actually more a question of B1, B2 and the 1.2-1.5 variable.

Again I personally thank-you and appreciate your efforts, nice write up.

garo
09-03-2004, 11:33 AM
hc_grove: yeah, you can go ahead and add it to your pages as you see fit!

prime95
09-17-2004, 03:22 PM
The new prime95 for SoB can now be downloaded. Several FFT bugs were fixed that should not have impacted SoB. The only change of importance is accepting floating point values in the Pfactor= line of worktodo.ini

You can get the versions from:

Windows: ftp://mersenne.org/gimps/sobpm1.zip
Linux: ftp://mersenne.org/gimps/sobpm1.tgz

The linux version is untested; I do not have Linux running on any P4s here.

Frodo42
09-18-2004, 01:50 PM
Wow.
I've just switched to George's new Linux version from the version hc_grove modified.
With the same B1 and B2 it takes something like half the time.
So I guess it's worth the trouble of making the worktodo.ini file (which was rather cumbersome - any ideas on how to make it fast for a given range?)

garo
09-18-2004, 06:52 PM
Yeah, if you have a flavour of unix or cygwin installed you can use awk! Otherwise you could also use perl. Actually, in the worst case, you can just open up your favourite text editor, cut and paste all the lines generated by sbfactor that show the numbers to be P-1 factored, and do a search and replace:
say, replace "Estimating for " with "Pfactor", "k=" with "=2,", and so on. I can post an awk script if that helps.

Mystwalker
09-19-2004, 07:57 AM
Originally posted by garo
I can post an awk script if that helps.

That helps. ;)

Well, such a script would be really helpful. :thumbs:

garo
09-19-2004, 08:18 AM
Ok! Here it is. Cut and paste all the lines starting with "Estimating" into a file, say j.

cat j | awk '{print "Pfactor=" substr($3,3,length($3)-2) ",2," substr($4,3,7) ",1,48.5,1.25"}'

I'm assuming that the exponents are all of length 7, i.e. less than 10 million; if not, the change is trivial. This works with George's latest version, uploaded a couple of days ago. In the older version the last two fields had different meanings: in George's newest version the second-last field can be a float, whereas before you had to choose between 48 and 49, both incorrect; and the last field now means the number of tests a factor is worth, instead of a zero or one indicating whether the number of tests saved was 2 or 1 (a holdover from GIMPS).

I've put in the figures 48.5 - because that is the current sieving status - and 1.25 - because that was recommended a while back, since SoB does not necessarily double-check every number if a prime is found for that k.

You can increase/decrease these numbers as you like but I think this is the optimal setting for the moment especially considering that P-1 is not keeping up with PRP testing.
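For those without awk or cygwin, here is a rough Python equivalent of the one-liner above. It assumes, as the awk command does, that sbfactor prints lines like "Estimating for k=4847 n=7200144" (the k=/n= prefixes are inferred from the awk field offsets); adjust the pattern if your output differs.

```python
# Turn sbfactor's "Estimating for ..." lines into Pfactor= worktodo entries.
import re

def to_pfactor(line, sieve_depth="48.5", factor_value="1.25"):
    """Return a Pfactor= line for one 'Estimating' line, or None to skip."""
    m = re.search(r"k=(\d+)\s+n=(\d+)", line)
    if m is None:
        return None              # not an 'Estimating' line
    k, n = m.groups()
    return f"Pfactor={k},2,{n},1,{sieve_depth},{factor_value}"
```

Unlike the awk version, the regex does not care whether the exponent is exactly 7 digits long.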

The bounds I got for testing numbers around 6.9M were:


sievedepth factorvalue B1 B2 chanceoffactor
Old Version
48 2 50k 575k 1.26%
49 2 40k 430k .911%
50 2 30k 322k .626%

New Version
49 2 40k 430k .925%
48.5 2 45k 506k 1.1%
48.5 1.9 45k 483k 1.08%
48.5 1.8 40k 430k 1%
48.5 1.5 30k 285k .793%
48.5 1.4 25k 231k .687%
48.5 1.3 20k 180k .573%
48.5 1.25 20k 170k .564%


Notice that the bounds at 49,2 and 48.5,1.8 are the same, but the chance of finding a factor goes up from .925% to 1%. This confirms my hypothesis that the previous version was underestimating the chance of finding a factor at sieve depth 49 or 50 and overestimating it at 48. So my recommendation to all is to use 48.5,1.25 and George's latest code! With these bounds each P-1 test will take about 30 minutes on a P4 2.8.

Nuri
09-19-2004, 05:22 PM
Thanks garo.

And of course, thanks George. :thumbs:

Nuri
09-19-2004, 06:19 PM
I tried 48.5,1.25 and it did not work for me.

I guess it's because the result also depends on memory allocated to P-1.

PS: I use 200MB

Frodo42
09-20-2004, 03:40 AM
thanks garo, that script speeds up things a lot.

garo
09-20-2004, 04:24 AM
Aha! I use 500MB. For 200MB, the smallest value that works is 48.5,1.5. Alternatively try 48,1.35. The bounds are similar in both cases.

This shows how close we still are to the point where P-1 does not save us any time. As n increases, the smaller values will work as well.

Nuri
09-20-2004, 06:27 AM
I agree. Still, I guess one has to check PRP progress vs. sieve progress. As sieve boundaries go higher, it may take a bit longer than expected for the smaller values to work as well.

garo
09-20-2004, 08:54 AM
That is true as well. I think it would be best if a recommendation could be posted in the P-1 reservation thread. That would be one centralized place where we could monitor and change the bounds as required. This would go a long way towards helping P-1 catch up.

prime95
09-20-2004, 10:15 AM
One more thing to consider in selecting parameters. The P-1 code assumes you are using the same math libraries for P-1 and PRPing. This is not the case right now. You should compare the time it takes PRP3 to test a number vs. the current SoB client. Then enter SoB_client_time / PRP3_time as the number of PRP tests saved.

garo
09-20-2004, 12:24 PM
You are right! And I think that the current SoB client takes longer because it does not have the latest code from PRP3. Correct me if I am wrong. Assuming that the current PRP3 is 20% faster, the tests saved goes up from 1.25 to 1.5. Anybody care to deliver some hard numbers?

Still I believe that we should stick with 1.25 at least till P-1 progress is slower than PRP testing.
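George's adjustment above is just a rescaling of the tests-saved parameter. A sketch, with made-up timings for illustration (the 120/100 ratio below is hypothetical, not a measurement):

```python
# If the current SoB client takes sob_client_time per test and PRP3 takes
# prp3_time, each factor saves proportionally more PRP3-equivalent tests,
# which is the unit the P-1 bounds calculation assumes.
def adjusted_tests_saved(sob_client_time, prp3_time, tests_saved=1.25):
    return tests_saved * (sob_client_time / prp3_time)
```

With a hypothetical 20% speedup (ratio 1.2), the 1.25 figure becomes 1.5, matching garo's arithmetic.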

vjs
09-20-2004, 03:24 PM
Still I believe that we should stick with 1.25 at least till P-1 progress is slower than PRP testing.

I know this is the wrong thread to make this statement in :slap: but I'm wondering if we shouldn't do the following.

Invest all of our effort into sieving, try to reach the point of diminishing returns, and then stop sieving for values of n<20M. We could then re-investigate sieving for 20M<n<100M once we get to an n value around 18M or so. I'm pretty sure everyone will agree that this project will reach 20M...

As for the P-1 effort, incorporate it into the main client so that everyone P-1's their number before testing. Yes, this would delay the PRP3 implementation, but by placing the majority of effort into sieving right now, we could drive up the testing bounds for P-1 by the time the new client is released.

If optimal bounds are currently 48.5,1.5 (not taking lack of resources into consideration, because this obviously wouldn't be the case if it were part of the main effort), we could set the bounds to 50,1.5 by then, etc.

As for the main effort, people wouldn't see a big change: the new PRP3 client is faster, reducing time, but the P-1 step adds some back. In addition, the client could report back a no-factor-found for k/n with x,y bounds; the server could then look at the time a P-1 test required and reassign the test if necessary. This may be a major advantage once we get to n=>10M. I don't think we would lose people at all, especially if we could integrate a P-1 factor score into PRP scores. Also, it may be an added plus, since every once in a while people would notice their test completed in record time due to a factor. And if the bounds were set correctly, their personal number of tests completed would increase on average.

My 0.02 hopefully I don't get change back.

VJS

smh
09-20-2004, 05:01 PM
It's true that sieving is still the most effective way of eliminating possible candidates, and a considerable amount of effort should be put into sieving.

But one single prime found eliminates a lot of possible candidates to test, and besides, most are in here to find a prime.

I agree that P-1 must be put into the client ASAP. A small speedup for the project as a whole is still a speedup, and this will only get bigger as tests get larger.

It would be better if sieving could also be integrated into the client and if ranges could be automatically downloaded from the server. This would save a lot of administration.

vjs
09-20-2004, 05:52 PM
It would be better if sieving could also be intergrated into the client and if ranges could be automatically downloaded from the server. This would save a lot of administration

Agreed, but wouldn't integrating sieving require quite a bit more effort?

Also, one will never find a prime with either P-1 or sieving, but they both help the project. I think it's easier to justify P-1 on a particular number someone is testing, b/c it would increase the user's chance of finding a prime through optimization...

In my mind it seems more beneficial to implement P-1 in the client first, get it working, etc., then do the same for sieving. Granted, the project would be best suited if we could somehow have the client and server communicate: the client could tell the server how fast it works; if it falls below a certain threshold, it gets a small 1-2 week sieve range rather than a PRP/P-1 test, and we could then also expire sieve ranges and put them back in the system.

I have a feeling sieving, P-1, automation and incorporation are emotionally hot topics; perhaps we need a poll or something.

Mystwalker
09-21-2004, 07:22 AM
Integrating the sieve into the client would be possible IMHO - it takes some time to accomplish it, of course.
But you're right, factoring directly helps the people searching for a prime, as a factor found prior to the PRP run makes the latter obsolete. Of course, sieving basically has the same properties as mentioned above, but it works on all k/n pairs, not on the current one specifically.

Hm, sounds a bit like capitalism vs. communism to me... :jester:

Sieving has the advantage that the computational effort can be set with a very fine granularity (down to just a bit more than 1p). One can choose whether it takes 5 minutes or 5 days on a certain PC.

Integrating sieving into the main client makes it possible to use the optimal tool for the respective PC architecture - assuming the user sets a preference like "help the project most".
Maybe slow computers and non-SSE2 CPUs are used mainly for sieving, whereas SSE2-equipped ones get to factor and PRP. Thinking of a resource allocation model, I guess a lot of the (faster) non-SSE2 CPUs will factor/PRP as well...

vjs
09-21-2004, 12:48 PM
From Garo,


sievedepth factorvalue B1 B2 chanceoffactor
Old Version
48 2 50k 575k 1.26%
49 2 40k 430k .911%
50 2 30k 322k .626%

New Version
49 2 40k 430k .925%
48.5 2 45k 506k 1.1%
48.5 1.9 45k 483k 1.08%
48.5 1.8 40k 430k 1%
48.5 1.5 30k 285k .793%
48.5 1.4 25k 231k .687%
48.5 1.3 20k 180k .573%
48.5 1.25 20k 170k .564%


Perhaps we should make this upper portion a sticky,

Also, is there a benefit difference for these settings with respect to cache size?

For example, a lot of computers have either 256K or 512K of L1 cache

48.5 1.25 20k 170k .564%
was quoted to be the best setting, which would be 150K.

However,

48.5 1.5 30k 285k .793% would fit into a 256K cache,

and custom B1,B2 bounds, or potentially some 49,1.x, could yield bounds of

20K 530K or something; this may be best for 512K Bartons etc.

Of course it changes P-1 time, but certain bounds may be best suited to particular processor optimizations.

garo
09-21-2004, 01:18 PM
Certainly. My results are from a dual-processor P4 2.8 Xeon with 500MB allocated to sobpm1. Other systems may get different results.

Mystwalker
09-21-2004, 05:33 PM
Originally posted by vjs
For example, alot of computer have either 256k or 512k L1 cache

L2 cache, I guess... :D


However,
48.5 1.5 30k 285k .793% would fit into a 256K cache

I don't have insight into the factoring algorithm and its memory needs, but I don't think you can equate the B2 value (a bound on the primes tried) with the L2 cache size (an amount of memory).
For stage 2 you need a lot of RAM - 256MB is good, 512MB maybe even a bit better...

vjs
09-21-2004, 06:27 PM
I don't have insights into the factoring algorithm and its memory needs, but I don't think that you can equal B2 value (in digits) and L2 cache size (in Bits).

Arrgh, :bonk: , disregard my comments looks like I'm getting change back from my 0.02 this time for sure.

biwema
09-28-2004, 04:43 PM
While P-1 factoring, a probable bug in Prime95 appeared: after a few seconds, the factorisation stops and the error "SUMOUT error occurred." is raised.

The program uses a 1024K FFT and can use 300MB of memory (220MB on a different computer).
The problem is reproducible and appears with exponents from 9104171 up to about 11400000; near that limit there are roundoff errors every iteration. At 11450000 everything is fine again with the next larger FFT (1280K).

I used k=67607, but I think that is not important.

I agree that this is not that important at the moment, because the factoring limit is at 7.2M, but later it could be a problem.


:lawn:

regards,
Reto

hc_grove
09-29-2004, 05:49 PM
Now I finally found some time to try George's new code.

As I needed a way to generate a worktodo.ini, I thought I could pull the code from the old factorer that reads the SoB.dat, so I could avoid having to start the old factorer, stop it again, copy part of its output to a temporary file, and use garo's awk command.

Well, it was no problem finding the code, but it seemed fu..... ugly, so I decided to start over in Perl. The result is the following Perl script for appending new tests to worktodo.ini (i.e. you can add new tests before it's done with the old ones):


#! /usr/bin/perl

use strict;
use warnings;

my $kval;
my $current_n;
my $count = 0;
my @tests;

if ($#ARGV != 3) {
    print("Usage: make_worktodo.pl <n_low> <n_high> <factor depth> <factor value>\n");
    exit(-1);
}

my $n_low  = $ARGV[0];
my $n_high = $ARGV[1];
my $depth  = $ARGV[2];
my $value  = $ARGV[3];

open(DATFILE, "<SoB.dat") or die "Couldn't open dat file\n";

# Skip the three header lines
<DATFILE>;
<DATFILE>;
<DATFILE>;

my $line;
while ($line = <DATFILE>) {
    chomp $line;
    if ($line =~ /^k=(\d+)/) {
        # New k; the next line holds the first n for this k
        $kval = $1;
        $current_n = <DATFILE>;
        chomp $current_n;    # keep the stored n free of stray newlines
    } else {
        # Subsequent lines hold deltas: +<offset> from the previous n
        $line =~ m/\+(\d+)/;
        $current_n += $1;
    }
    if (($current_n < $n_high) and ($current_n >= $n_low)) {
        push @tests, { 'k' => $kval, 'n' => $current_n };
        $count++;
    }
}
close(DATFILE);

print "$count numbers with $n_low <= n < $n_high\n";

my $removed = 0;

print "Searching for known factors in results.txt...\n";
open(RESULTSFILE, "<results.txt") or die "Failed to open factor file!\n";
<RESULTSFILE>;
while ($line = <RESULTSFILE>) {
    next unless $line =~ m/^\d+\s+(\d+)\s+(\d+)/;
    my $factor_k = $1;
    my $factor_n = $2;

    next if (($factor_n >= $n_high) or ($factor_n < $n_low));

    # Loop over the whole array, not 0..$count-1: $count shrinks as
    # entries are deleted, which would otherwise skip the last tests.
    for my $i (0 .. $#tests) {
        next unless defined($tests[$i]);
        if (($tests[$i]{'k'} == $factor_k) and ($tests[$i]{'n'} == $factor_n)) {
            delete $tests[$i];
            $count--;
            $removed++;
        }
    }
}
close(RESULTSFILE);
print "Removed $removed numbers using the factor file\n";

print "$count numbers with $n_low <= n < $n_high\n";
@tests = sort { $$a{'n'} <=> $$b{'n'} } grep { defined } @tests;

open(WORKFILE, '>>worktodo.ini') or die "Couldn't open worktodo.ini\n";
foreach my $test (@tests) {
    print WORKFILE "Pfactor=$$test{'k'},2,$$test{'n'},1,$depth,$value\n";
}
close(WORKFILE);


It takes four arguments: n_low, n_high, factor_depth and factor_value - e.g. `perl make_worktodo.pl 7100000 7200000 48.5 1.25`.

It works on my Linux box, and just might be portable (the only thing I can see that might need changing on windows is \n as end-of-line).

I find this quite easy, and thought some of you might like it too. As usual this is free software.

Frodo42
09-29-2004, 08:10 PM
thanks so much hc_grove, your script works like a charm.

I had to remove a single line-break and an empty line in the third-last line, to make it look like this before the factorer accepted the input:


print WORKFILE "Pfactor=$$test{'k'},2,$$test{'n'},1,$depth,$value\n";

*edit* it seems the line-break and the extra space are something this forum does to the script *edit*

hc_grove
09-30-2004, 03:50 AM
Originally posted by Frodo42

*edit* it seems the linebreak is something this forum does and extra space is something this forum does to the script *edit*

It also looks wrong here, but if I try to edit it, the code looks fine - which just shows one of the reasons I hate web-based forums. :swear:

Tonight I'll upload a copy to my page on factoring, that should make it possible to get a copy without having to fix this.

hc_grove
09-30-2004, 03:39 PM
Originally posted by hc_grove
Tonight I'll upload a copy to my page on factoring, that should make it possible to get a copy without having to fix this.

Done. You can now download make_worktodo.pl (http://www.sslug.dk/~grove/sbfactor/make_worktodo.pl).

hc_grove
09-30-2004, 04:39 PM
I'm having some trouble making multiple copies of George's new factorer run on a dual P4 Xeon. To make it easier to run a number of prp/sieve/factoring jobs on a bunch of machines, I've created some scripts to start the programs on a given machine.
My scripts for starting the new factorer look like this:


#! /bin/sh
cd ~/17orbust/p-1_1
./mprime -A1 &

and


#! /bin/sh
cd ~/17orbust/p-1_2
./mprime -A2 &

(I left out the parts that just make it easier for me to keep track of which jobs run on which machines and vice versa)

When I try to run these two scripts on the same machine (called shannon), the following happens:


grove@galois > ./on shannon p-1_1
grove@galois > ./on shannon p-1_2
grove@galois > Another mprime is already running!


What am I doing wrong? Or is it just totally impossible to run multiple copies?

garo
09-30-2004, 04:50 PM
I run two copies on a dual machine without any problem. However, I do not think you need the -A1 command if you run in two different directories. Check your scripts again! I bet the problem is there.

hc_grove
09-30-2004, 04:55 PM
Originally posted by garo
I run two copies on a dual machine without any problem.


Good to hear. How?



However, I do not think you need the -A1 command if you run in two different directories.

Removing them changes nothing. :(

prime95
09-30-2004, 06:01 PM
Running two mprimes ought to work. Try copying mprime to another directory and running the second mprime from there.

Or you could ask the linux gurus at mersenneforum.org for more help.

hc_grove
10-01-2004, 01:59 PM
Originally posted by prime95
Running two mprimes ought to work. Try copying mprime to another directory and running the second mprime from there.


As my scripts show, I already run them from different directories.



Or you could ask the linux gurus at mersenneforum.org for more help.

I'll try that.

garo
10-01-2004, 02:43 PM
OK, a basic sanity check but did you do a ps -ef to see if there is some long lost mprime process that might be hung or something?

Also your method for invoking the scripts looks a bit funny:

grove@galois > ./on shannon p-1_1
grove@galois > ./on shannon p-1_2
grove@galois > Another mprime is already running!

Are the scripts really called "on" and do they take two arguments? And what directories are you invoking the shell scripts from? Please post more details.

hc_grove
10-01-2004, 06:02 PM
Originally posted by garo
OK, a basic sanity check but did you do a ps -ef to see if there is some long lost mprime process that might be hung or something?


I used `ps x`, but yes I checked that.



Also your method for invoking the scripts looks a bit funny:

grove@galois > ./on shannon p-1_1
grove@galois > ./on shannon p-1_2
grove@galois > Another mprime is already running!

Are the scripts really called "on" and do they take two arguments? And what directories are you invoking the shell scripts from? Please post more details.

"on" is a script that takes two arguments, a hostname and the name of a directory. It updates some status information on what jobs run on which machines, and then runs
"$HOME/17orbust/<directory>/job.sh" on the machine given by the hostname. job.sh is the scripts I showed earlier, and as can be seen they change the working directory to the directory they are placed in. This is possible because my home directory is NFS mounted on all of the machines.

The point of this scheme is that it allows me to use "on" to start prp clients, sieve clients and factoring clients on any machine -- as the machines are a mix of athlons, P3's and P4's it could distribute the clients quite silly, but it still makes sense to have one command do it all so I don't have to worry about the specifics of each client.

pixl97
10-01-2004, 07:17 PM
[Wed Sep 29 01:48:27 2004 - ver HE-3]
Error: Work-to-do file contained composite exponent: 4847

I'm getting this message when I'm running mprime; this is after I used the worktodo .pl script posted on the board.

Is this normal?

Pfactor=33661,2,7200144,1,48.5,1.25
Pfactor=4847,2,7200183,1,48.5,1.25
Pfactor=28433,2,7200193,1,48.5,1.25

Frodo42
10-03-2004, 07:08 AM
Done. You can now download make_worktodo.pl.
Thanks.

I was wondering if you could implement some kind of probability calculation ... I miss that from the old version of sbfactor, even though I'm not sure that one was very exact.

hc_grove
10-03-2004, 08:03 AM
Originally posted by Frodo42
I was wondering if you could implement some kind of probability calculation ... I miss that from the old version of sbfactor, even though I'm not sure that one was very exact.

I never really tried to understand that part of the code, so I don't know how the probabilities were calculated. Of course I could just copy the code, but I suspect that would be quite a lot of work, so even though I miss it too, I have no plans to do it.

I still hope that George will release the binary objects that do the work, for use with sbfactor.

dmbrubac
10-03-2004, 09:32 AM
Hi all

I asked about an optimized (as in PRP V2) client for P-1 and was directed back here. Since I've not participated in this thread yet I feel a bit confused.

I assume the new P-1 client is 'George's new client'. Although I have no idea who George is, I assume the client is Prime95.exe. I've downloaded sobpm1.zip expecting to find instructions of some sort (as implied in some early posts) but did not. I also gather I have to build a worktodo.ini and will try the .pl script above - hopefully it works on Windows. Is this P4 only? I guess I will find out...

Could someone confirm, deny, explain, refute, crystalise or obfuscate where necessary? Thanks!

dmbrubac
10-03-2004, 10:08 AM
OK. I generated a worktodo.ini using the perl script - looks fine. Started Prime95 in stress test mode, then clicked File...Continue. It looked like it started processing, then it 'encountered a problem and needed to close'. Since this is a P3, I guess Prime95.exe is P4 only.

Frodo42
10-03-2004, 10:30 AM
Originally posted by hc_grove
I never really tried to understand that part of the code, so I don't know how the probabilties were calculated. Of course I could just copy the code, but I suspect that to be quite a lot of work, so even though I miss that too, I have no plans to do it.

OK.
I have no clue how these probabilities are calculated so I can't be of much help there, I just liked the output.

Mystwalker
10-03-2004, 10:35 AM
That special version of prime95 is for SSE2-capable CPUs only, right.
"George" is George Woltman, the guy behind the PRP part of the client for GIMPS (at least - most likely not restricted to that), and thus basically behind the core code of SoB. His FFT routines also power P-1 factoring.

He used this prime95.exe version to test the new FFT routines, as a missed factor due to a bug is not as severe as a missed prime when PRPing. Some errors were found and corrected in this process.
The current version has shown only the bug mentioned by biwema so far.

I guess the schedule for Louie et al. is the following:

- Get SBv2 on route (it takes some effort, but increases performance somewhat)

I don't know if they will wait until the x87 code is available; I guess not. The machines that would benefit from it are those that don't benefit from the SSE2 enhancements anyway, so a v2 --> v2.1 update doesn't make much sense...
More likely seems to be a v2.1
However, they will wait until the 1M FFT bug is fixed.

- Finish SBv3
either with integrated P-1 factoring or:

- Integrate P-1 factoring into the client as a plugin

vjs
10-04-2004, 06:42 PM
All P-1'ers should head on over to the sieve section - Mike has updated the dat file. Pretty small d/l compared to results.txt on a daily basis.

MikeH
10-08-2004, 10:33 AM
All P-1'ers should head on over to the sieve section - Mike has updated the dat file. Pretty small d/l compared to results.txt on a daily basis.
Following on from this, I've been doing a few experiments.

Since most of the P-1 work is done within 500M of the PRP leading edge, I wondered how large an sob.dat file containing just those candidates would be. The answer is 82KB, 16KB when zipped.

Seems better to me to regularly download a 16KB file instead of a 505KB file (my daily-updated 'full' sob.dat) or a 1.83MB file (the 6-hourly updated results.txt).

So if I generated this new file daily, would anyone be interested?

Mystwalker
10-08-2004, 10:38 AM
Originally posted by MikeH
So if I generated this new file daily, would anyone be interested?

Sounds great! Fabulous idea! :cheers:

MikeH
10-08-2004, 12:00 PM
Sounds great! Fabulous idea!

OK, the new small zipped sob.dat file (http://www.aooq73.dsl.pipex.com/sobdat/SobDat_P1.zip) is about 16KB in size. As with the other similar files it will be updated daily at about 03:00 UK time.

The current file spans n = 7081949 to 7581949.

I've validated the file by picking a few ranges and ensuring that the k/n pairs that are spat out to be tested are the same with this new file as with an existing sob.dat and new results.txt combination.

If you download this file regularly, you won't need to download the results.txt file.

DO NOT USE THIS FILE TO SIEVE. I'm sure it will make the sieve very fast, but you will find very few factors! I repeat: DO NOT USE THIS FILE TO SIEVE. :taz:

Frodo42
10-10-2004, 03:10 PM
OK, I think I found some kind of bug in the new factorer. I started testing the k,n pairs that run on my machines with very high bounds, and apparently I found a very big factor for one of them, but it seems that it can't output the factor the right way - maybe it's just the output variable that can't contain the factor.



[Sun Oct 10 20:35:46 2004]
P-1 found a factor in stage #2, B1=135000, B2=1991250.
19249*2^7084433+1 has a factor: 100985139

So I'm stuck with this k,n pair that I know has a factor but I don't know that factor. Now I don't want to release the k,n pair again before I have submitted the factor so that someone else won't get this test.

It could also be some other kind of bug in the factorer.

btw. this is run using the Linux version of George's factorer ... I think I will try to run the test with the old factorer and the same bounds to see what output it gives me, even though that will take something like twice as long as the current factorer.

(added a few minutes later)

:bang: :bang: :bang: :bang:
Oops, typo in the input to the factorer. The input should have been
19249*2^7084418+1
Sorry for creating all this havoc ... I just wasted a few CPU-hours here :o

vjs
10-12-2004, 12:08 PM
Just wondering if it makes any sense to try P-1 with very large bounds on some of the k/n pairs below the PRP leading edge - for example those that are holding back the n upper bound, or those in the 90-day window that are 5-10% done at a very low rate, and again those above the upper bound?

Mystwalker
10-12-2004, 05:14 PM
AFAIK, factoring is more effective (compared to PRPing) the bigger n is.

vjs
11-17-2004, 11:33 AM
Another quick question about P-1, I asked this before but I'd like to ask again,

I know sieve and P-1 are different animals but...

If there is a factor for a particular k/n pair at 557T, sieve will definitely find that factor.

However is P-1 also going to find it 100% of the time???

I think the answer is no, b/c P-1 only finds smooth factors.

So what are the chances of the factor being missed by P-1, if a factor around 557T exists?

garo
11-17-2004, 12:44 PM
vjs, you are correct. P-1 will miss a factor if it is not smooth. The chances of the factor being missed depend on the bounds (B1, B2) chosen for P-1. The math is a bit complex. I would recommend "A Practical Analysis of the Elliptic Curve Factoring Algorithm" by Robert Silverman and Samuel Wagstaff, Mathematics of Computation, Vol. 61, No. 203, pp. 445-462.

Your question is related to the Dickman function and Mertens' theorems discussed in that paper. Essentially, the chance that a factor p will be missed, given that it exists, is simply the chance that p-1 does not satisfy the bounds.
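To make the smoothness condition concrete, here is a rough Python sketch (my own illustration, not code from the paper or from any of the factorers discussed here) of when P-1 with given bounds would find a known prime factor p: stage 1 succeeds when p-1 is B1-powersmooth, and stage 2 additionally allows one extra prime factor up to B2.

```python
def factorize(m):
    """Trial-divide m into a {prime: exponent} map (fine for small m)."""
    fac = {}
    d = 2
    while d * d <= m:
        while m % d == 0:
            fac[d] = fac.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        fac[m] = fac.get(m, 0) + 1
    return fac


def p_minus_1_finds(p, b1, b2):
    """True if P-1 with bounds b1/b2 would find the prime factor p:
    stage 1 needs p-1 to be b1-powersmooth (every prime power <= b1);
    stage 2 additionally allows one extra prime, to the first power,
    in the interval (b1, b2]."""
    fac = factorize(p - 1)
    large = [q for q, e in fac.items() if q ** e > b1]
    if not large:
        return True  # stage 1 alone suffices
    if len(large) == 1:
        q = large[0]
        return fac[q] == 1 and b1 < q <= b2
    return False
```

For example, p=59 has p-1 = 58 = 2*29, so it needs a stage 2 with B2 >= 29; p=13 has p-1 = 12 = 2^2*3 and falls to stage 1 alone once B1 >= 4.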

prime95
12-07-2004, 07:32 PM
A new prime95 is available that should work on non-SSE2 machines too. This also fixes the bug where prime95 blew up running P-1 on exponents above 9 million or so. The new version is at ftp://mersenne.org/gimps/p95v246.zip Let me know if there are any problems. If all goes well I'll upload a Linux version soon.

pixl97
12-17-2004, 01:21 AM
prime95, can you test your latest version of mprime on kernel 2.6.10-rc or above (rc3-bk10 currently)? On my Opteron boxes running mprime segfaults, where on 2.6.9 it does not. I think something about the virtual memory layout has changed, causing the program to crash. Unfortunately I don't have a P4 that I can test with a 2.6.10 kernel to see if this is only affecting 32-bit programs on the 64-bit platform.

...bunch of stuff before here...
open("/proc/meminfo", O_RDONLY) = 3
fstat64(0x3, 0xffffd3e4) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, 0x3 /* MAP_??? */, 34, 0xffffffff) = 0x55555000
read(3, "MemTotal: 1026248 kB\nMemFre"..., 1024) = 600
close(3) = 0
munmap(0x55555000, 4096) = 0
open("/proc/meminfo", O_RDONLY) = 3
fstat64(0x3, 0xffffd3e4) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, 0x3 /* MAP_??? */, 34, 0xffffffff) = 0x55555000
read(3, "MemTotal: 1026248 kB\nMemFre"..., 1024) = 600
close(3) = 0
munmap(0x55555000, 4096) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
[root@james pixl]#
Script done on Thu 16 Dec 2004 12:16:13 AM CST
gdb mprime /start/step/ info
Program received signal SIGSEGV, Segmentation fault.
0x0828e745 in _ecpuidsupport ()

garo
12-17-2004, 05:16 AM
pixl97, if you want a quicker reply you should probably post in the mersenneforum as well. Though George(prime95) will eventually check this thread again so if you are patient, no worries.

Frodo42
02-28-2005, 04:38 AM
Originally posted by prime95
A new prime95 is available that should work on non-SSE2 machines too. This also fixes the bug where prime95 blew up running P-1 on exponents above 9 million or so. The new version is at ftp://mersenne.org/gimps/p95v246.zip Let me know if there are any problems. If all goes well I'll upload a Linux version soon.
Did we ever get that Linux version? ... we are closing in on the 9 million border now and I would like to keep my two P4's busy factoring ...

garo
02-28-2005, 07:43 AM
Yes, the 24.6 Linux version is available at http://mersenne.org/gimps but I'd wait a day or two before downloading it, as the mersenne server seems swamped by the attention due to the new prime find M42.

Frodo42
02-28-2005, 09:52 AM
Thank you Garo ... I guess I should keep an eye on the mersenne forum also.

vjs
03-28-2005, 12:26 PM
Humm,

I did 8910000 through 8910500, 10 k/n pairs, and didn't find a factor...

I was doing stage1 testing only with B1=B2=150000

I was sort of expecting to find something...

Any suggestions for a B2=xxxx pass

Not looking to do a large number of factorings just find one large factor in the range of 8910000 - 8911000 ...

Should I go with the standard B2=B1x100 for stage 2?

P.S. BTW I'm using sbfactor 1.2.5.5

Frodo42
03-29-2005, 12:01 AM
I would guess that you would find a factor within an n-range of 2000 with those bounds ... adding a decent B2 might lower that range to 1000 ... but this is only based on my "gut feeling" from doing a lot of P-1 factoring, not on any calculations.

vjs
03-29-2005, 12:06 AM
So, what would you suggest as a B2 setting for stage 2, given that this "Barton" machine has about 330 MB of memory to spare without swapping?

Frodo42
03-29-2005, 12:16 AM
Well I'd guess a decent B2 would be something around 1000000

This is just what I am used to the factorer setting for me automatically when I set a higher factor value and a sieve limit ... I don't think memory has all too much effect except on the efficiency of stage 2 (so it sets a somewhat higher B2 for higher amounts of memory).

All this is just guesswork built on experience ... so I might very well be wrong.

vjs
05-15-2006, 02:08 PM
By E,

Quote:
Originally Posted by Frodo42
10865000 10885982 frodo42 5 [completed]
10885983 10890000 frodo42 ? [abandoned]

Sorry folks, but I have to stop crunching for the summer, I don't need the CPU-heat in this season


If you don't mind I'll take the rest of your reserved range. Thanks.


10885983 10890000 engracio ? [reserved] Looks like this range has been bypassed. Never mind.


11015000 11050000 engracio ? [reserved]



e

The 11000000 11015000 range has been completed by jmblazek, so if you all don't mind I'll start from there. Besides, jmblazek reminded me I was reserving a range that had been bypassed. Thanks jmblazek. btw, I am keeping an eye on the current prp; I sure don't want to factor something prp has already passed.

=====================================

By VJS,

I'm a little amazed by how quickly prp is advancing. I thought there might be a runaway client, or people caching a lot of k/n pairs and advancing the prp, but I don't think this is the case.

Currently there are 4867 tests pending, with a project-wide rate of approximately 2500 McEMs/s; on average that means a little more than 500 kcEMs/s per machine.

Does this sound reasonable? I think it does; anyone want to check the math or come up with another number?

========================

By grobie,

the Makewtd zip is corrupted, and the one under it doesn't exist. I was going to try out factoring... but I know it's pretty slow with AMDs. Was going to see what she can do on a dually.

11050000-11050400 reserved Dagger ?

b1=60000 b2=255000 These okay settings?

____________________________
AMD 64 X2 4200+
1GB RAM

Should have mentioned this earlier, but when I click on the results.txt.bz2 link it just brings up a large page full of gibberish. It doesn't actually let you download the file.


Quote:
Originally Posted by KWSN_Dagger
Should have mentioned this earlier, but when I click on the results.txt.bz2 link it just brings up a large page full of gibberish. It doesn't actually let you download the file.

What you have to do is right-click on the link, then Save As; after you've done that, open the dir you saved it in and then open the file.

================================

by E,

I have an XP2400+ dually on this box with 1 GB of memory. I put down 460 MB for each of the prime95 processes running, with a 1.36% probability of finding a factor. The B1=70000, B2=822500. Give it a try, see if the box likes it.




Quote:
Originally Posted by KWSN_Dagger
the Makewtd zip is corrupted, and the the one under it doesn't exist. I was going to try out factoring.. but I know it's pretty slow with AMD's Was going to see what she can do on a dually.

11050000-11050400 reserved Dagger ?

b1=60000 b2=255000 These okay settings?

____________________________
AMD 64 X2 4200+
1GB RAM

================================

By KWSN_Dagger,

I'm running with 512MB of mem on each. I'll try those settings after I run this range with my settings.

[Edit] So far after 1 test it's 72 mins for B1, 15 mins for B2; total time 87 mins for 1 test. Ideal is what? 120 mins total B1+B2? [/Edit]


Quote:
Originally Posted by engracio
I have a XP2400+ dually on this box with GB of memory. I put down 460mb each of the prime95 process running with a 1.36% probability of finding a factor. The b1=70000, b2=822500. Give it a try see if the box like it.

===============================

By E,

Originally Posted by KWSN_Dagger
I'm running with 512MB of mem on each. I'll try those settings after I run this range with my settings.

[Edit] So far after 1 test it's 72 mins for B1. 15 mins for B2. Total time 87 mins for 1 test. Ideal is what? 120mins total B1+B2? [\Edit]



I found that with that setting and only 1 GB of memory, if both p95 instances are in the B2 stage the box slows down quite a bit. With 460 MB per p95 the percentage does not increase any more than with 512 MB. Play with it, see what you get. In B2/stage 2 the time doubled.

================================

Quote:
Originally Posted by engracio
I found that with that setting and only have 1GB of memory, if both p95 are on B2 stage the box slows down quite a bit. With 460mb per p95 the percentage does not increase any more than 512mb. Play with it see what you get. On the b2/stage 2 the time doubled.


e


I changed it to your settings. Running 1 instance on each core after I split my numbers in half. It doesn't like running 2, but as long as I don't look at the output it's alright. Knocked my mem setting down to 460 as well, as I had both at 512.

==============================
By KWSN_Dagger ,


n is only at 10917965, so you should be okay. AFAIK it's checked every hour, or else it's updated with every factor submitted.

==============================

By E,

I found a factor a few minutes ago and submitted it, but the database said it was not valid. Looked at the stats, which are updated every 15 mins; it said the current n is 10919832. Am I wasting my time factoring so close to the prp point?

[Sat May 13 16:09:18 2006]
P-1 found a factor in stage #2, B1=70000, B2=822500.
10223*2^10920569+1 has a factor: 321510543325791301

I already jumped to 10930000 a while back; hopefully that gives me enough time to finish the range. Too bad I know I had two factors which were found and did not count. Oh well. Had to rearrange the list to make sure the lowest got crunched first. The pickup in prp'ing really caught me off guard. Just hope it is not a runaway client. Does not matter now.
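As an aside, a reported factor like the one in the log above can be sanity-checked locally via modular exponentiation, without ever constructing the multi-million-digit number. A minimal Python sketch (my own illustration, not part of sbfactor or prime95):

```python
def divides_proth(k, n, f):
    """Check whether f divides k*2^n + 1, using three-argument pow
    for fast modular exponentiation so the full number is never built."""
    return (k * pow(2, n, f) + 1) % f == 0

# Tiny sanity check: 3*2^2 + 1 = 13, so 13 divides it.
print(divides_proth(3, 2, 13))  # → True
```

The same call with the k, n, and factor from a P-1 result line takes only milliseconds even for n in the ten-million range.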

==========================

By VJS,

Sorry E,

I thought you said you were abandoning the range in a previous post. Stay ahead of prp and let me know how much you finish, etc.

Submitted factors are removed from the db right away, so you're fine as long as you submit before the pairs are handed out.

============================

vjs
05-18-2006, 12:17 PM
There have been some questions as to which B1 and B2 values to use. One option is to always use the Prime95 defaults. Currently those settings should probably use a sieve depth of 52 and a factor value of about 1.7 per test. The 1.7 is my opinion.

Others can chime in with what works best...

The other way to look at it is setting B1 and B2 manually. Ideally, equal time should be spent in stage 1 (B1) and stage 2 (B2), although stage 2 is quite memory-dependent and this may not be possible with only 512 MB of RAM. To get equal time, the ratio B1:B2 should be about 1:14, if memory serves me correctly.

Simply adjust your ratio until you get equal times.

-------------------------------

A good choice of B1 is somewhere between 30K and 80K; if you're much outside this range, you're probably wasting time.

Using a value of 60K is probably about right; I'd suggest staying between 50K and 70K.

Then choose a B2 value so that you don't run out of memory.
No larger than 14x the B1 value and no smaller than 8x.

Example

B1=60000 B2=480000
or if you have enough memory
B1=60000 B2=840000

Just watch your task manager for max memory usage.


B2=822500 will use 446MB of Memory
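The rules of thumb above can be encoded in a few lines of Python (purely illustrative; the 50K-70K window and the 8x-14x multipliers are the forum suggestions above, not project policy):

```python
def suggest_bounds(b1=60000, ratio=14, low_memory=False):
    """Clamp B1 to the suggested 50K-70K window and pick B2 as a
    multiple of B1 between 8x (tight memory) and 14x (plenty of RAM)."""
    b1 = max(50000, min(70000, b1))
    ratio = 8 if low_memory else max(8, min(14, ratio))
    return b1, b1 * ratio
```

With the defaults this reproduces the B1=60000, B2=840000 example, and with `low_memory=True` the B1=60000, B2=480000 one.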

grobie
05-18-2006, 07:08 PM
vjs, can you check out the links to makewtd on the reservation forum and also on the tutorial page? I get them as corrupt. I had to do a Google search to get a good copy. If there are new folks wanting to try P-1 and can't get the programs they need here, they will move on to other projects. Here is the link I got it from: http://users.skynet.be/bk261068/Makewtd.exe

Thanks

Joe O
05-18-2006, 11:16 PM
vjs can you check out the links to makewtd on the reservation forum and also on the tutoral page. I get them as corrupt. I had to do google search to get a good copy. If there are new folks wanting to try p-1 and cant get the programs they need here, they will move on to other projects. Here is the link I got it from: http://users.skynet.be/bk261068/Makewtd.exe

Thanks
Only the second link on the reservation thread needed to be replaced.

grobie
05-19-2006, 05:28 AM
Only the second link on the reservation thread needed to be replaced.
When I try to unzip the 1st link this is what I get: ! C:\Documents and Settings\Owner\Local Settings\Temp\Makewtd.zip: Unexpected end of archive

same thing on the tutorial page.

Joe O
05-19-2006, 06:45 AM
When I try to unzip the 1st link this is what I get: ! C:\Documents and Settings\Owner\Local Settings\Temp\Makewtd.zip: Unexpected end of archive

same thing on tutoral page.


Contact the tutorial writer directly.

grobie
05-23-2006, 05:30 PM
oops, I think I messed up. I submitted a factor yesterday & I think I might not have been logged in; my total should be 2 now but stats show only 1 :blush: Is there any way to fix it, or do I chalk it up to "pay attention next time" lol. It should be this one: 55459*2^11055910+1

Well now it should be 4, I just now submitted one :)

grobie
05-27-2006, 12:36 PM
Ok then!!!

vjs
06-10-2006, 08:58 AM
Cleaning up reservation thread:

Posts

===========================

By E,


11145000 11155000 shauge 0 [complete]

========================================
When first trying out factoring, we were close to PRP. I therefore ran it as recommended in the "start here" link. When, on the second try, I tried to do optimized factoring, I discovered that I had too little RAM for the second stage. I have 1GB. The factoring I have done may therefore have been worthless.

shauge

I do not think 1GB is too little RAM. I personally run 1GB of memory on a dual XP 2400. They crunch a unit about every 2 1/2 hours. When I am surfing on it, the only time it slows down for me is when both WUs are doing stage 2. If they are staggered I do not notice any slowdown.

On my current reservation I increased the value in my makewtd.bat from 1.7 to 1.8. Still using 49 as the other setting. I got the same number of WUs to do on either 1.7 or 1.8. The only difference is that with 460 MB assigned to each P95 on my dually, stage 1 was about 3.5 minutes faster to complete.

BTW I made the same mistake you did: stayed with the Prime95 default, which is, I think, 8 MB allocated to Prime95. It only completed stage 1, no stage 2. Too late now. Like vjs said, at least you did stage 1. The factors I found are evenly spread between stage 1 and stage 2. Depends on the WU. Some are found in stage 1; some must be completed up to stage 2. No biggie, thanks for your help.:cheers:

If you've got any questions, holler, and I hope I've already run into them. Like a wall, it makes you remember things.


e:)

==========================

By jmblazek ,

fyi...I contacted omboohankvald several weeks ago to update his site to mention the Prime95 default RAM oversight. I fell victim to the same error and only did stage 1 through half my range before figuring it out. I then went back and started over.

I haven't heard back from him nor have I seen any changes on his site. Just a simple mention to modify the RAM would be fine.

If you notice on his page, the screenshot shows B1=30000, B2=30000, which means stage 2 will not run. However, in the "Submitting the results" section, you'll notice that he used a different example where B1=30000, B2=255000. Therefore, stage 2 will run.

Also, vjs, what's the format for plugging B1/B2 values into worktodo.ini? I presume this is after you have run make_worktodo and generated work???

==========================================

By VJS,

Quote:
Originally Posted by jmblazek
Also, vjs, what's the format in plugging in B1/B2 values into worktodo.ini? I presume this is after you have ran make_worktodo and generated work???


It's been a while... but I believe you simply replace the 50,1.7 with the desired B1,B2 values and that's it. If you reply with a worktodo I can modify one as an example.

==========================

By Shauge,

I looked at the link to the other thread in the first post in this thread. There I found this method of modifying worktodo:

Quote:
Pfactor=19249,2,9947282,1,49.8,1.7
Pminus1=19249,2,9947282,1,50000,700000


Whatever I tried for B1 and B2 I got the message:
"Not enough RAM to ever run stage 2".
Keeping in mind that B2 should be at least 8*B1, I went down to B1=30000 and B2=240000, still with no success. I guess the memory can be fragmented. The PC is not used for anything else.

=======================================

By engracio,

Shauge

I am able to create a new wtd by substituting the B1 and B2. I typed "makewtd 11180000 11181000 70000 840000" at the command prompt per the instructions on the website and got these wtd units. I plugged them into worktodo.ini and it started crunching. Hope this helps.



Pminus1=21181,2,11180108,1,70000,840000
Pminus1=55459,2,11180110,1,70000,840000
Pminus1=19249,2,11180138,1,70000,840000
Pminus1=24737,2,11180407,1,70000,840000


====================

BY JoeO,

Quote:
Originally Posted by shauge
I looked at the link to the other thread in the first post in this thread. There I found this method of modifying worktodo:

Whatever I tried for B1 and B2 I got the message:
"Not enough RAM to ever run stage 2".
Keeping in mind that B2 should at least be 8*B1 I went down to B1=30000 and B2=240000, with still no success. I guess the memory can be fragmented. The PC is not used for anything else.

Under options CPU you must set the amount of memory that Prime95 is allowed to use.

=============================================
shauge

Aha, I didn't know that. Thanks.

===============================================