
View Full Version : P-1 coordination thread - discussion



engracio
06-09-2006, 07:53 PM
Gonna try secondpass on my xeons and sieving on my XP for a while until everything settles down. Hate doing duplicate work.


e:)

jmblazek
06-09-2006, 08:51 PM
Something happened...that was a huge jump in n...to 11786612????? That does not seem right.

Joe O
06-09-2006, 09:27 PM
I just barely started on my new updated P-1 factoring reservation, and looking at the stats it looks like in a day or so it will be passed by. Should I even try to factor anything, or wait to see what the new sheriff in town produces? Any ideas?
Give it a day or so until things sort themselves out. If it is just a runaway client (or something) these tests will be re-released. If it is really a massive influx of computing power, then there is no way for P-1 to keep up with it.

Sieving anyone?

engracio
06-09-2006, 09:32 PM
Give it a day or so until things sort themselves out. If it is just a runaway client (or something) these tests will be re-released. If it is really a massive influx of computing power, then there is no way for P-1 to keep up with it.

Sieving anyone?

Thanks Joe, it's about time for sieving again anyway. Was just waiting for the new siever; guess it's working fine now.

e

Joe O
06-09-2006, 09:35 PM
Something happened...that was a huge jump in n...to 11786612????? That does not seem right.
One user (http://www.seventeenorbust.com/stats/users/user.mhtml?userID=10165) outperformed the ARS team yesterday, all by his/her lonesome. Today he/she slowed down a little, but would still have come in third place as a team.

ShoeLace
06-10-2006, 06:48 AM
did you see they have 14140 pending tests ?!?!?

Nuri
06-12-2006, 02:33 AM
With account age of 227 days and 19 tests completed overall... :bang:

vjs
06-12-2006, 05:46 PM
With the latest issues in the PRP queue I'm placed in an awkward position of not knowing the best solution to the issues.

I'm going to make the suggestion that people skip a range of P-1 up to 12M.

Reason being, I'm uncertain whether Louie or Dave will transfer the dropped tests back into the firstpass queue or not. Also, even if they can/do, we have some legit assignments in the range of 11.3 to 12M....


Simply leaving a gap would solve a bunch of issues:
- It would ensure P-1 factors score properly and give P-1 a headstart/lead on PRP again.

In either case reserve as you wish.

KWSN_Dagger
06-13-2006, 04:31 AM
Makes sense to me, because the next.txt file is blank. But it is at 11975918 taken from Mike's stats page. http://www.aooq73.dsl.pipex.com/2006/scores_t.htm

vjs
06-13-2006, 12:00 PM
Yeah...
If you go by this page, http://www.seventeenorbust.com/stats/rangeStatsEx.mhtml
it's pretty obvious everything was handed out to n=12M for firstpass.


:cry:

engracio
06-13-2006, 06:57 PM
Yeah...
If you go by this page, http://www.seventeenorbust.com/stats/rangeStatsEx.mhtml
it's pretty obvious everything was handed out to n=12M for first pass.


:cry:


13470000 13485000 engracio ? [reserved]


Looks like he went above that, all the way to n=13.5M. Now the big question, after everything settles down and the runaway client's tests are dropped, is whether the admins will put the dropped tests back into first pass. If they do, would we be able to factor them and still get the benefits of factoring? I.e., if we find a factor, will the server consider it a passed test?

Personally I do not care where we start over again, as long as we do not do double work or work for nothing. As everybody has been saying, the longer the tests become, the more beneficial P-1 factoring is. With my ancient CPUs they are getting very long.

e

vjs
06-14-2006, 10:34 AM
E,

Those tests around 13.4M are few and far between; they are actually from a very old special queue, 13367 or something like that. I think a total of ~10 were completed.

I can assure you that no tests >12M were assigned recently.

Regardless, do the range you specified if you wish, but if you haven't got that far I'd suggest taking something a little lower. By the time we get to 13.4M I'd like to see another 2 k's eliminated.

engracio
06-14-2006, 11:17 AM
E,

Those tests around 13.4M are few and far between; they are actually from a very old special queue, 13367 or something like that. I think a total of ~10 were completed.

I can assure you that no tests >12M were assigned recently.

Regardless, do the range you specified if you wish, but if you haven't got that far I'd suggest taking something a little lower. By the time we get to 13.4M I'd like to see another 2 k's eliminated.

Okay buddy, makes sense to me, was just not sure. I was just prepping the WUs so that when the boxes running the sieving and secondpass PRP complete, P-1 can continue.

Let me reserve this range then:


12000000 12020000 engracio ? [reserved]

I can always go back down to the correct prp range when this set is complete. Thanks.

e

======================================

Sounds good e.

VJS

jasong
07-08-2006, 01:27 PM
12050000 12051000 jasong [complete] 1 factor

(the very first p-1 in the group, and my very first p-1, period, found a factor. I thought that was a good sign, but of the 22 k/n pairs in that range, it was the only factor. go figure)

Nuri
07-11-2006, 04:12 AM
1 out of 22 is a very lucky shot... You should expect 1-2% success rate, unless the limits you're using are on the extremes.

vjs
07-11-2006, 07:29 PM
Yup, one out of 22 is pretty lucky. I believe the first time I tried P-1 factoring I found one on the second or third number. Then not again for quite some time.

jasong
07-12-2006, 04:13 PM
I gave Prime95 an 800MB maximum which, unless I totally misunderstood something I read about that, should be more than enough.

Question: If you run p-1 more than once, it does the problem slightly differently each time because of randomization, right? So, would that make it possible that another p-1 could find a factor that the previous one missed? Even with exactly the same parameters?

Nuri
07-13-2006, 04:41 AM
Question: If you run p-1 more than once, it does the problem slightly differently each time because of randomization, right? So, would that make it possible that another p-1 could find a factor that the previous one missed? Even with exactly the same parameters?

To the best of my knowledge, this is not how it works. It's true for ECM though, and this is why we run many curves (trials) in ECM at each boundary.

hhh
07-13-2006, 04:58 AM
Consider a prime factor p of your number k*2^n+1. p-1 is not prime, and has a factorisation p-1 = p_1*p_2*p_3*...*q*r, where q and r are the two largest prime factors.
If r<B1, the factor p will be found in stage 1; if r<B2 AND q<B1, it will be found in stage 2. No random parameters.
In other words: if all prime factors of p-1 are below B1, it's a stage-1 hit; if the largest (and only the largest) is between B1 and B2, it's a stage-2 hit.
Yours H.
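hhh's smoothness condition translates directly into code. Below is a minimal stage-1 sketch (my own illustration, not SoB or Prime95 code, and the toy numbers are hypothetical): raise a base to every prime power up to B1, then take a gcd.

```python
from math import gcd, isqrt

def primes_up_to(B):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (B + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(B) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray((B - p * p) // p + 1)
    return [p for p in range(2, B + 1) if sieve[p]]

def pminus1_stage1(N, B1, base=3):
    """P-1 stage 1: finds a prime factor p of N whenever every prime
    power dividing p-1 is <= B1 (hhh's 'all prime factors below B1' case)."""
    a = base
    for q in primes_up_to(B1):
        qe = q
        while qe * q <= B1:   # largest power of q not exceeding B1
            qe *= q
        a = pow(a, qe, N)     # a = base^E mod N, E = lcm of prime powers <= B1
    g = gcd(a - 1, N)
    return g if 1 < g < N else None

# toy example: 2311 is prime with 2311-1 = 2*3*5*7*11 (smooth for B1=100),
# while 2579-1 = 2*1289 has the large factor 1289 > B1, so only 2311 is found
print(pminus1_stage1(2311 * 2579, B1=100))   # -> 2311
```

Stage 2 (the r<B2 case) extends this by multiplying in one prime between B1 and B2 at a time; Prime95 does all of this with far more efficient arithmetic.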

engracio
08-04-2006, 06:13 PM
11400000 11500000 engracio ? [reserved]


e:)

=========================

E, you might be working behind prp...

-VJS


Not really, had to dump a few G here and there, but I factored about 95% of this range. Too bad I did not get as many factors as last time. Maybe this next range. :) e


11400000 11500000 engracio 17 [complete]


E
17 factors is not bad. You didn't have to dump all those G because PRP is back down again. Want to go back and get them?

So far you have a lock on the 500000 points and above:

p (T) k n Score Factor found Score changed Score was Score could be Reqd bias PRP saved
19387.196T 21181 11625332 528170.686 Mon 17-Jul-2006 Fri 28-Jul-2006 35.000 8187960.721 484.68 (2) engracio
75720.086Z 21181 11606444 526455.813 Mon 17-Jul-2006 Fri 28-Jul-2006 35.000 9999999999999.998 9999.99 (2) engracio
41158.945P 10223 11593949 525322.903 Mon 17-Jul-2006 Fri 28-Jul-2006 35.000 17289284572.827 9999.99 (2) engracio
2277.338T 55459 11561530 522389.195 Mon 17-Jul-2006 Fri 28-Jul-2006 22.773 951279.378 56.93 (2) engracio
84710.325P 10223 11553701 521681.953 Wed 26-Jul-2006 Fri 28-Jul-2006 35.000 35336914058.002 9999.99 (2) engracio
72082.420T 55459 11547478 521120.132 Sat 08-Jul-2006 Fri 28-Jul-2006 35.000 30036800.128 1802.06 (2) engracio
895.728P 19249 11546366 521019.771 Sun 09-Jul-2006 Fri 28-Jul-2006 35.000 373178529.588 9999.99 (2) engracio
45671.456T 33661 11542752 520693.665 Fri 07-Jul-2006 Fri 28-Jul-2006 35.000 19015756.551 1141.79 (2) engracio
137.145Y 10223 11538761 520333.659 Thu 06-Jul-2006 Fri 28-Jul-2006 35.000 9999999999999.998 9999.99 (2) engracio
2683.231T 24737 11537191 520192.073 Fri 07-Jul-2006 Fri 28-Jul-2006 26.832 1116113.234 67.08 (2) engracio
131.898Z 10223 11536997 520174.578 Fri 07-Jul-2006 Fri 28-Jul-2006 35.000 9999999999999.998 9999.99 (2) engracio
3146.918T 21181 11532404 519760.487 Thu 06-Jul-2006 Fri 28-Jul-2006 31.469 1307901.811 78.67 (2) engracio
265.611P 19249 11531282 519659.356 Sat 08-Jul-2006 Fri 28-Jul-2006 35.000 110369928.217 6640.27 (2) engracio
1391.878T 21181 11531228 519654.489 Sat 08-Jul-2006 Fri 28-Jul-2006 13.919 578365.586 34.80 (2) engracio
211.787P 67607 11531187 519650.793 Sat 08-Jul-2006 Fri 28-Jul-2006 35.000 88003001.266 5294.68 (2) engracio
1269.580T 55459 11529694 519516.239 Tue 04-Jul-2006 Fri 28-Jul-2006 12.696 527406.758 31.74 (2) engracio
126.246P 67607 11528667 519423.692 Thu 06-Jul-2006 Fri 28-Jul-2006 35.000 52435361.164 3156.14 (2) engracio
80445.754P 10223 11526077 519190.333 Thu 06-Jul-2006 Fri 28-Jul-2006 35.000 33397671039.595 9999.99 (2) engracio
2321.674T 10223 11525789 519164.388 Tue 04-Jul-2006 Fri 28-Jul-2006 23.217 963812.435 58.04 (2) engracio
24789.747T 55459 11517574 518424.583 Sat 01-Jul-2006 Fri 28-Jul-2006 35.000 10276474.214 619.74 (2) engracio
33213.948E 55459 11513278 518037.915 Mon 03-Jul-2006 Sun 09-Jul-2006 35.000 9999999999999.998 9999.99 (2) engracio
3721.833T 21181 11511932 517916.796 Tue 04-Jul-2006 Sun 09-Jul-2006 35.000 1541357.340 93.05 (2) engracio
4929.227E 10223 11506649 517441.546 Fri 30-Jun-2006 Sun 09-Jul-2006 35.000 2039513603842.301 9999.99 (2) engracio
57274.696T 24737 11506087 517391.002 Fri 30-Jun-2006 Sun 09-Jul-2006 35.000 23695622.414 1431.87 (2) engracio
5757.564T 24737 11502511 517069.450 Fri 30-Jun-2006 Sun 09-Jul-2006 35.000 2380532.438 143.94 (2) engracio
99999.999Y 10223 11501405 516970.019 Fri 30-Jun-2006 Sun 09-Jul-2006 35.000 9999999999999.998 9999.99 (2) engracio
230.196P 24737 11493487 516258.461 Fri 04-Aug-2006 95028016.772 5754.90 (2) engracio
44479.372T 24737 11487751 515743.296 Thu 03-Aug-2006 18343351.824 1111.98 (2) engracio
38834.518T 24737 11474743 514575.968 Wed 02-Aug-2006 15979157.491 970.86 (2) engracio
1747.465T 22699 11469934 514144.746 Wed 02-Aug-2006 718423.334 43.69 (2) engracio
11592.538T 67607 11469851 514137.305 Wed 02-Aug-2006 4765891.026 289.81 (2) engracio
4418.233T 10223 11457101 512994.900 Sat 29-Jul-2006 1812375.068 110.46 (2) engracio
1075.644P 33661 11451408 512485.216 Wed 26-Jul-2006 440794860.563 9999.99 (2) engracio
49946.562T 10223 11450345 512390.075 Wed 26-Jul-2006 20464105.477 1248.66 (2) engracio
1942.552T 55459 11448094 512188.635 Fri 28-Jul-2006 795589.458 48.56 (2) engracio
13872.847T 22699 11445958 511997.524 Fri 28-Jul-2006 5679628.468 346.82 (2) engracio
2935.332T 10223 11441177 511569.888 Wed 26-Jul-2006 1200739.131 73.38 (2) engracio
163.862P 19249 11432642 510806.921 Wed 26-Jul-2006 66930288.095 4096.56 (2) engracio
14941.227T 21181 11432228 510769.927 Wed 26-Jul-2006 6102363.182 373.53 (2) engracio
16420.073T 55459 11430346 505465.178 Tue 18-Jul-2006 2 engracio
6890.336P 33661 11420280 504575.306 Tue 18-Jul-2006 2 engracio
7804.304T 10223 11416709 504259.805 Wed 26-Jul-2006 2 engracio

Joe O

engracio
08-04-2006, 09:47 PM
Joe,

What I meant about dumping several G's (or is it M's?) is that when I see PRP being issued at or just behind what I am factoring, I dump a few of them until I am just above it. Granted, some might have been reissued again, but at least I know someone is PRP'ing it and I don't want to duplicate work.

Yes, the PRP going back and forth gave me enough time to, like I said, factor about 95% of this range.

My next range will give me plenty of headroom; I would not even worry about it.


e:)

vjs
08-23-2006, 01:19 PM
E,

I'm curious what you are currently using for B1 and B2 bounds... You seem to be doing pretty well, IMHO.

engracio
08-23-2006, 06:40 PM
I'll be glad when I finish my last range under 12M; hopefully it will give me lots of breathing room. I have to constantly check the assignment page for the current n. Last week I had to skip a bunch of n just to stay ahead of PRP. I try to complete as many n as I can. Alas, most of them yield no factor, but you do not know that unless you test them. With the 12M range, I should be able to just stick 100 or so n's per CPU and not worry about them getting passed by PRP. Harvest them and send out the factors.

Which reminds me, I feel the assignment page is the most current source for the n being handed out. I now prefer looking at it instead of the next.txt page. I guess when we get ahead of the current n being handed out again, the next.txt page will be relevant again.

vjs, I use B1=70000, B2=822500 bounds. With my duallies I reserve 560MB per CPU, with a 1.17% chance of finding a factor. I have 1.5GB of memory per box.


e:)

BTW, what sieve depth and factor worth should be used for the 12M range? Right now I am using 50.1 and 1.8 with the 1160-1175 range. How much to change, if any?


Looks like next.txt is working again. I still suggest you check the assignment queues from time to time.

Currently there are 6K tests left before 12M and there are roughly 20k tests per million n. This would suggest the actual n value is somewhere around 11.6-11.7M.

Regardless, the only person this currently affects is E, and he is generally on top of his ranges. The good news is the lowest reservable range is >12M, so we're safe for now.

Keep up the good work people.

vjs
08-23-2006, 11:12 PM
You seem to be doing great on your own, better than expected.

:clap:

I wouldn't change a thing. :cheers:

Might change your luck.

engracio
09-11-2006, 05:51 PM
Louie, can you post the B1/B2 settings you are currently using for your P-1 factoring? Thanks.


e

jjjjL
09-14-2006, 01:48 PM
Louie, can you post the B1/B2 settings you are currently using for your P-1 factoring? Thanks.


e

I'm not using B1/B2. I'm using prime95 24.14 with a sieve depth of 50 and a factor value of 2.1 and specify 256MB of mem for B2. These settings end up causing approximately B1=95000, B2=1092500.

Somewhat overkill. I'm trying to find more large factors like

19750058751527901255535231 | 21181*2^12447884+1

19750058751527901255535230 = 2 x 3 x 5 x 7 ^ 2 x 11 x 17 ^ 2 x 2963 x
32191 x 90863 x 487649

What are you using?

Cheers,
Louie
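Louie's example factor checks out, and it also illustrates hhh's stage-1/stage-2 rule from earlier in the thread. A quick verification in Python (plain integer arithmetic, nothing project-specific):

```python
# Check Louie's factor: the listed primes multiply to exactly p-1,
# and p divides 21181*2^12447884 + 1.
p = 19750058751527901255535231
parts = [2, 3, 5, 7, 7, 11, 17, 17, 2963, 32191, 90863, 487649]

prod = 1
for q in parts:
    prod *= q
assert prod == p - 1                                # p-1 is 487649-smooth

# modular exponentiation makes the divisibility check instant
assert (21181 * pow(2, 12447884, p) + 1) % p == 0

# By hhh's rule this was a stage-2 hit for Louie's bounds: the
# second-largest factor 90863 <= B1=95000, and 487649 <= B2=1092500.
# With B1=75000 it would have been missed, since 90863 > 75000.
print("factor verified")
```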

vjs
09-14-2006, 07:20 PM
Louie,

I believe most people choose B1 between 60k-85k and a B2:B1 ratio between 10-16.

Larger than this range your time spent per factor drops quite a bit, but you do find larger factors of course.

The B2:B1 ratio is based upon suggestions from Dr. Silverman that roughly equal time should be spent in stage 1 and stage 2.

engracio
09-14-2006, 10:58 PM
I'm not using B1/B2. I'm using prime95 24.14 with a sieve depth of 50 and a factor value of 2.1 and specify 256MB of mem for B2. These settings end up causing approximately B1=95000, B2=1092500.

Somewhat overkill. I'm trying to find more large factors like

19750058751527901255535231 | 21181*2^12447884+1

19750058751527901255535230 = 2 x 3 x 5 x 7 ^ 2 x 11 x 17 ^ 2 x 2963 x
32191 x 90863 x 487649

What are you using?

Cheers,
Louie


That is the reason I asked you the question: why are you getting lots of large factors? I am also using Prime95 24.14, with 50.1 and a 1.8 factor value, B1=75000, B2=825000. Since I have 1.5GB of memory per duallie I am using 560MB per instance of Prime95. It seems that the amount of memory does not really make completion per unit any faster. I am just looking for different configurations I can use for factoring. Thanks.

vjs,

Comparing his factors found and mine, seems like he is finding a little bit more. Nothing close to being scientific but a Pluto wag (out there guess)

e

vjs
09-15-2006, 10:58 AM
Interesting, let's compare Louie's and yours...

12250000 12500000 louie 107
11780000 11930000 louie 44

Basically Louie P-1'ed a range of 400000 n and produced 151 factors.

0.3775 factors per 1000 n-range

11600000 11770000 engracio 30
12000000 12020000 engracio 8
11400000 11600000 engracio 51

E P-1'ed a total range of 390000 n and produced 89 factors.

0.228 factor per 1000 n-range.

Louie used a B1=95K with B2=B1x11.5
E used a B1=75K with B2=B1x11

Basically Louie spent close to 30% more processing power per k/n pair, yielding a 65% increase in factor probability.

-------------------

Very interesting. E, you might want to try higher bounds, say B1=110k, and if you are not limited in memory and have dual channel, try upping your B2:B1 ratio. I wouldn't go much beyond 14.

B2=1500000

Are you still running those MPX's?

The most important point to remember is a little P-1 is better than no P-1 so as long as we are not passing ranges...
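The comparison above is just rate arithmetic; it's easy to reproduce (reservation spans and factor counts copied from the post):

```python
# Louie: 151 factors over 400000 n; engracio (E): 89 factors over 390000 n
louie_factors = 107 + 44
louie_span = (12500000 - 12250000) + (11930000 - 11780000)                       # 400000
e_factors = 30 + 8 + 51
e_span = (11770000 - 11600000) + (12020000 - 12000000) + (11600000 - 11400000)   # 390000

louie_rate = louie_factors / (louie_span / 1000)   # factors per 1000 n
e_rate = e_factors / (e_span / 1000)

print(f"Louie: {louie_rate:.4f}, E: {e_rate:.4f} factors per 1000 n")
print(f"relative yield: {louie_rate / e_rate - 1:.0%} more")   # ~65% more
```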

Joe O
09-15-2006, 01:39 PM
To try and translate VJS's recommendation into the parameters that you are using, I would suggest that you try changing your 1.8 factor value to the 2.1 that Louie is using. Depending on how that goes, I would try 2.2 as well. I would recommend that you leave everything else as is, unless you want to lower your factor depth to 50.0 from 50.1 as well or instead of changing the factor value. It would be instructive to see what these changes do to your B1 and B2 values, as well as your run time.

engracio
09-15-2006, 09:52 PM
Interesting, let's compare Louie's and yours...

12250000 12500000 louie 107
11780000 11930000 louie 44

Basically Louie P-1'ed a range of 400000 n and produced 151 factors.

0.3775 factors per 1000 n-range

11600000 11770000 engracio 30
12000000 12020000 engracio 8
11400000 11600000 engracio 51

E P-1'ed a total range of 390000 n and produced 89 factors.

0.228 factor per 1000 n-range.

Louie used a B1=95K with B2=B1x11.5
E used a B1=75K with B2=B1x11

Basically Louie spent close to 30% more processing power per k/n pair, yielding a 65% increase in factor probability.

-------------------

Very interesting. E, you might want to try higher bounds, say B1=110k, and if you are not limited in memory and have dual channel, try upping your B2:B1 ratio. I wouldn't go much beyond 14.

B2=1500000

Are you still running those MPX's?

The most important point to remember is a little P-1 is better than no P-1 so as long as we are not passing ranges...


Vjs,

Yes on the MPXs, but they are now sieving and dying slowly. I took their memory to make the memory dual channel on the Xeons and a Pentium D 915 oc'd to 3500 MHz. I probably could get a few more MHz, but did not want to spend money on memory and better cooling.

Did not know you guys were going to break it down and throw off my galactic wag. Portions of my range were skipped due to the random fluctuations of the prp. I know I skipped a few factors.

I really just wanted to know how Louie was able to complete those range so quick. One quick answer was he had more machines to crunch. doh!

I'll slowly increase the B2 and see if it really makes that much difference. My P-1 machines are all Intel CPUs and I try to have at least 1GB of memory while factoring. What is odd is I have a P4 2.8 with only 512MB of memory, with 452MB allotted (the max memory Prime95 would let me allocate). It can crunch a factor every two hours, but my Xeon 3.1 can only crunch one every two and a half hours with 560MB allotted per instance.

Most of the time it is not used. Ummm.

e

vjs
09-16-2006, 12:48 AM
Humm,

I forgot about that in the calcs; I guess that throws everything above out the window.

Still using my MPX as a home machine, dual 2.4G Bartons and 2G of RAM, sieving exclusively. Yup, it's getting old. I should have sold it off 6 months ago; I could have got ~$400 for just the board and processors :cry: . But it's tough to find something decent with U320 or PCI-X slots.

glennpat
06-29-2007, 09:34 PM
Since this thread is a little old, what sieve depth and factor value should be used when doing a range?
Thanks.

vjs
06-30-2007, 02:40 AM
I wouldn't use sieve depth and factor worth.
I'd simply specify the B1 and B2 values.

Since it will be a while until we get to 14M I would suggest you use.

B1=100k to 120k with B2=B1x12 to B2=B1x14, depending on how much memory you have.

in other words,

minimum
B1=100000 b2=1200000

maximum
B1=120000 b2=1700000

What CPU and how much memory do you have?

glennpat
06-30-2007, 07:21 AM
The one I will run on is an Intel dual core 6600 running 2.4 GHz (stock) with 2 Gig of memory. SOB Sieve will also be running on it.

To use the B1/B2 do I use the AdvanceFactor command for Prime95? I assume I edit the worktodo.ini to use the B1/B2.

Joe O
06-30-2007, 09:57 AM
To specify the desired B1 and B2 values, place the following line(s) in your worktodo.ini file:
Pminus1=10223,2,14000237,1,120000,1700000
where
Pminus1=k,2,n,1,B1,B2

Do not use the AdvanceFactor command, that is for something else.
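For longer k/n lists it's easy to generate these lines with a throwaway script. This is a hypothetical helper, shown only to make the field order concrete (makewtd is the usual tool):

```python
def worktodo_lines(pairs, B1, B2):
    """Format k/n pairs as Prime95 P-1 entries: Pminus1=k,2,n,1,B1,B2."""
    return [f"Pminus1={k},2,{n},1,{B1},{B2}" for k, n in pairs]

# example pairs in Joe O's format, with vjs's suggested maximum bounds
lines = worktodo_lines([(10223, 14000237), (55459, 14000278)], 120000, 1700000)
for line in lines:
    print(line)
# Pminus1=10223,2,14000237,1,120000,1700000
# Pminus1=55459,2,14000278,1,120000,1700000
```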

glennpat
06-30-2007, 10:59 AM
Thank You! I now see that was in the instructions at the top of the reserve thread.

The first few lines of my worktodo are:


Pminus1=21181,2,14000180,1,120000,1700000
Pminus1=10223,2,14000237,1,120000,1700000
Pminus1=21181,2,14000252,1,120000,1700000
Pminus1=10223,2,14000261,1,120000,1700000
Pminus1=55459,2,14000278,1,120000,1700000

It is running. :banana:

vjs
07-01-2007, 12:19 AM
That looks good to me.

Let us know what your times are per run, it would be interesting to see.

glennpat
07-01-2007, 12:31 PM
That looks good to me.

Let us know what your times are per run, it would be interesting to see.

The last two stage 1 times were 6800 and 6893 seconds.
The last two stage 2 times were 5078 and 5119 seconds.

In my range, 14000000 to 14010000, I have 444 numbers left to do. With the one core I can do about 7 per day. From the user stats it has:


"Factors next to enter (main) 'active window' " which is at n=13813352".

From the link http://www.seventeenorbust.com/sieve/next.txt it has:


k n
24737 13629607


Which number do I have to stay ahead of? Looks like I may have to spread the work around on some of my other boxes.

Glenn

engracio
07-01-2007, 03:56 PM
The last two stage 1 times were 6800 and 6893 seconds.
The last two stage 2 times were 5078 and 5119 seconds.

In my range, 14000000 to 14010000, I have 444 numbers left to do. With the one core I can do about 7 per day. From the user stats it has:


"Factors next to enter (main) 'active window' " which is at n=13813352".

From the link http://www.seventeenorbust.com/sieve/next.txt it has:


k n
24737 13629607


Which number do I have to stay ahead of? Looks like I may have to spread the work around on some of my other boxes.

Glenn

Yes, you might have to spread the wealth across several CPUs/cores to stay ahead. The number fluctuates back and forth from the highest down to some lower k and n, mainly because somebody dropped a test and it went back to the first-pass queue. If you stay ahead of the "next to enter" value you'll be fine. At the current rate 2 or 3 CPUs/cores should be enough to stay ahead while you figure out the system/P-1'ing.

On my PD945 3.4 oc'd to 3.9 I get 5767 secs on stage 1, and the other core does 5484 on stage 2. I am using 840MB per core since I have 2GB of memory. 1.34% probability of finding a factor.

e

Joe O
07-01-2007, 04:38 PM
In my range, 14000000 to 14010000, I have 444 numbers left to do.
Glenn
That's funny, I only count 216 k n pairs in that interval. See Attached.

glennpat
07-01-2007, 06:23 PM
That's funny, I only count 216 k n pairs in that interval. See Attached.

My data was in my file twice. I must have run makewtd twice. I just experimented with it, and it appends the data if run again. The one you attached has a few for 19249, which I don't have and believe I don't need.

Thanks for finding this!!!

glennpat
07-01-2007, 06:48 PM
My data was in my file twice. I must have run makewtd twice. I just experimented with it, and it appends the data if run again. The one you attached has a few for 19249, which I don't have and believe I don't need.

Thanks for finding this!!!

Woops. It's mine that has the 19249 which I shouldn't have.

vjs
07-02-2007, 11:42 AM
I guess you figured out that you don't need to P-1 those that are prime. :blush:

Also you didn't post which B1, b2 values you are using.

You shouldn't have too much of a problem staying ahead of the testing. Especially if you decide to put both CPU's on the project once everything is working.

Let us know where you're at from time to time with respect to n-level; we should be able to tell by the factors you submit. If you start falling behind I'm sure some others will start helping out with the P-1 effort.

Jason

glennpat
07-02-2007, 06:31 PM
I started running on the other core. I moved what was between 14004000 and 14007999 to the other core. I should be in good shape, but I will keep a close eye on it. Now to find some good stuff :) .

vjs
07-03-2007, 11:45 AM
Again I'm curious about what B1 and B2 values you are using.

I have a core2duo I might be able to help out with if you start to fall behind.

glennpat
07-03-2007, 04:43 PM
Again I'm curious about what B1 and B2 values you are using.

I have a core2duo I might be able to help out with if you start to fall behind.

B1 is 120000 and B2 is 1700000. I took what you had in your example back on 6/30. I hope those are good numbers to use. I have been reading about B1/B2, but need to do some more reading on how they work.

The 1st core I started is working on 14001058 right now.

Joe O
07-03-2007, 05:25 PM
PRP is at 13641092 at the time I started to write this post. Between that value and 14000000 there are 6949 k/n pairs. At approximately 100 pairs/day that is 70 days of work. glennpat said that he is factoring at approximately 7 pairs/day/core, and we know that there are 216 pairs in his interval. That means he will finish that interval in 31 days, or less now that he is using two cores.
What we need to worry about is the next interval, and the one after that. To stay ahead of PRP would take 14 or 15 cores running at the same speed as glennpat's machine. Looks like it's time to start P-1 factoring in earnest.
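Joe O's back-of-the-envelope numbers can be checked directly (figures from his post):

```python
from math import ceil

prp_pairs_ahead = 6949      # k/n pairs between the PRP front (13641092) and 14M
prp_pairs_per_day = 100     # approximate PRP completion rate
range_pairs = 216           # pairs in glennpat's 14000000-14010000 interval
p1_pairs_per_core_day = 7   # glennpat's observed P-1 rate

print(ceil(prp_pairs_ahead / prp_pairs_per_day), "days of PRP work ahead")        # 70
print(ceil(range_pairs / p1_pairs_per_core_day), "days for the range on 1 core")  # 31
print(ceil(prp_pairs_per_day / p1_pairs_per_core_day), "cores to match PRP")      # 15
```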

vjs
07-03-2007, 08:03 PM
Thanks Glennpat and Joe,

Glennpat, if you're in a team or something, try to recruit a few more people for P-1.

And to Joe

As always Joe, your analysis is right on the money. I am currently working on some stuff with Lars, but have no problem returning for some P-1 when necessary.

Might be time to start the drum roll.

_----------_

The other issue or comment is that a little P-1 is better than no P-1, so if push comes to shove we can always run slightly lower B1/B2 values (although this would not be my choice).


According to Joe's math we would need another 6 people like Glennpat. I wonder if Louie is up to doing a little more P-1 as well. He threw quite a bit of CPU at the project a while back.

jasong
07-03-2007, 09:30 PM
I've got a 2.8GHz dual-core Pentium-D I could throw at it, with about 1200MB of RAM available.

If any of the mods (I don't know who the main people running the project are, so I'm just going to listen to the mods) think I'm needed, just decide whether I should use one or both cores with the 1200MB of RAM, give me some B1/B2 values (if there isn't a way to get the computer to figure it out), tell me how big a range I should reserve, and post the info here.

If that's too much, but I am needed, just point me in the right direction.

(I have no idea how much RAM is needed, so help with that would be appreciated)

Joe O
07-03-2007, 10:04 PM
Jasong,
First question: Do you have Prime95 set up on your machine? If so, do you have one directory or two? If not, you need to set up Prime95 on your machine. When it asks if you want to use Primenet, just say no.
Let's start with that.

jasong
07-03-2007, 10:13 PM
Never mind, I'm just going to jump in. I'll reserve a tiny range and time it tomorrow afternoon.

hhh
07-04-2007, 04:32 AM
For stage 1 RAM is not an issue, for stage 2, the more the better, though the impact gets smaller the more you have.

I have a theory that running two copies of prime95/mprime on a dualcore/hyperthreaded machine gives not quite twice the speed, but more than one copy nevertheless.
So I once tried to figure out how to let one instance of prime95 run stage 1 (the command with B2=B1), and the other one stage 2, with lots of RAM. The idea was to have the time for stage 1 a little less than the time for stage 2, so that the first instance is always a bit ahead, and to have the savefiles in one shared directory.

There are commands for the prime.ini etc. to specify the work directory, but I got stuck somehow and never got it working. If somebody else has some spare time to figure things out, it may work though.

H.

vjs
07-04-2007, 02:34 PM
Humm,

I was just at Fry's; they have the quad Q6600 for 300 bucks, with a board and 2G for 50 dollars with rebate. I passed it up, but now I'm wondering: that could make quite a good factoring box with hhh's suggestion.

Considering those quads are dropping to 260 in a few weeks and will overclock to ~3G... I may turn into an Intel fanboy.

glennpat
07-07-2007, 05:54 PM
Thanks Glennpat and Joe,

Glennpat, if you're in a team or something, try to recruit a few more people for P-1.

And to Joe

As always Joe, your analysis is right on the money. I am currently working on some stuff with Lars, but have no problem returning for some P-1 when necessary.

Might be time to start the drum roll.

_----------_

The other issue or comment is that a little P-1 is better than no P-1, so if push comes to shove we can always run slightly lower B1/B2 values (although this would not be my choice).


According to Joe's math we would need another 6 people like Glennpat. I wonder if Louie is up to doing a little more P-1 as well. He threw quite a bit of CPU at the project a while back.


I started a P-1 thread on XtreamSystem asking for some P-1 help and how to get started. Thanks for putting the worktodo.ini files in the other coordination thread. I don't think they were there when I started; they will help me and new people.

Now for the quad prices to drop :) .

Joe O
08-01-2007, 10:21 AM
I started a P-1 thread on XtreamSystem asking for some P-1 help and how to get started. Thanks for putting the worktodo.ini files in the other coordination thread. I don't think they were there when I started; they will help me and new people.

Now for the quad prices to drop :) .

No, the worktodo.ini files were not there when you started. Your experience starting up, and that of jasong and others, prompted me to do that. It is also the easiest way to provide just the unfactored k n pairs, and keep it relatively up to date. You are welcome.

glennpat
08-16-2007, 04:53 PM
I want to start another range. The next available range starts at 14037000. The next PRP is at 13900361. I am only going to run on 1 core. Should I take the next available range or should I skip ahead to something like 14100000? Thanks

engracio
08-16-2007, 05:08 PM
I want to start another range. The next available range starts at 14037000. The next PRP is at 13900361. I am only going to run on 1 core. Should I take the next available range or should I skip ahead to something like 14100000? Thanks


I think you'll be okay unless somebody joins and starts grabbing hundreds of WUs. Or another runaway proxy...


e

glennpat
08-16-2007, 06:13 PM
Thanks. I took the next available.

SlicerAce
11-18-2007, 06:15 PM
Can someone tell me how to use multiple worker threads in Prime95 on Windows XP? I am using mprime on another machine, and the nice thing about it is that I only start one instance of mprime and it creates two worker threads for the dual-core processor I have in that machine.

I have a dual-core processor on this machine as well, but it runs windows, and I have had a heck of a time trying to figure out how to get Prime95 to do the same thing. I think I found something that said that it's possible to do this on the Windows version, but there aren't any details how. People also make reference to stress testing where multiple threads are executed simultaneously.

One other nice thing about the multiple threads is that when one thread is in stage 2 and using lots of memory, the other thread will try to find other stage 1 work to do so as not to use up even more system memory.

My question basically revolves around the worktodo.ini file structure. In the mprime version of the program, the structure is as follows:

[Worker #1]
Pminus1=blah blah blah
Pminus1=blah blah blah

[Worker #2]
Pminus1=blah
Pminus1=blah

and the program automatically recognizes what work is assigned to which thread. I tried putting this structure into the Prime95 worktodo.ini, but to no avail. Has anyone gotten this to work?

Update: Nevermind. I now find that the official release version of Prime95 does not support multithreading, while the latest version (25.4), available on the Mersenne Forums, does support multithreading. This is nice from the standpoint that it can start multiple threads and one can designate each thread to run on a separate core, but apparently the author had to make a change to the p-1 save file format to allow for larger B2 bounds, so the save file formats are not compatible. Anyone considering upgrading should keep this in mind -- finish up both stages of a number using the old client and THEN switch :D

On another note, does anyone have any information regarding the justification behind choosing B1 and B2 bounds? How do we estimate the probability that on a given number of the form k*2^n+1, a B1/B2 pair of bounds gives ____ % chance of finding a factor? A screenshot from an old version of Prime95 on someone's web site shows this estimate in the output of the program, but it appears that this is not in the final version. Even a rough estimate would be useful for getting a hold on these B1 and B2 bounds...
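To the bounds question: Prime95's old probability display seems to be gone, but a rough stage 1 estimate is possible. This is my own back-of-envelope sketch, not Prime95's internal formula (which also models stage 2 and the sieve depth): stage 1 finds a prime factor p roughly when p-1 is B1-smooth, and that probability is approximated by Dickman's rho function at u = ln(p)/ln(B1).

```python
import math

def dickman_rho(u, h=0.001):
    """Approximate Dickman's rho by Euler-stepping rho'(t) = -rho(t-1)/t,
    with rho(t) = 1 for 0 <= t <= 1."""
    if u <= 1:
        return 1.0
    lag = int(round(1 / h))      # index offset corresponding to t - 1
    n = int(round(u / h))
    vals = [1.0] * (lag + 1)     # rho on [0, 1]
    for i in range(lag, n):
        t = i * h
        vals.append(vals[-1] - h * vals[i - lag] / t)
    return vals[-1]

# e.g. a hypothetical 60-bit prime factor with B1 = 130000 (stage 1 only):
u = 60 * math.log(2) / math.log(130000)
print(f"u = {u:.2f}, stage 1 smoothness chance ~ {dickman_rho(u):.4f}")
```

Summing this over candidate factor sizes above the sieve depth gives a ballpark per-test success rate; stage 2 roughly behaves as if B1 were extended toward B2 for one extra prime factor of p-1, which is why raising B2 is comparatively cheap.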

engracio
01-02-2008, 05:55 PM
Joe,

When you created the worktodo for P-1 you used B1=130000, B2=2200000 as the bounds. Is that the general consensus on what the bounds should be at this level? I've always used sieve depth and factor value. It's just a tad higher than when I was factoring around 13.5M. Just wondering? :)

Also, it has been a while since I factored, and I do not remember getting residues from the factored WUs. Now as each one completes it leaves a residue. What do we do with it? Throw it in the trash bin? Thanks.

e

vjs
01-03-2008, 07:36 AM
e,

The residue is from the first stage. You could reuse that residue to calculate the second stage on a different machine, or with different bounds than the first stage.

In all reality, for our purposes it's trash-bin material if you run the tests in the normal way: stage one then stage two, with reasonable bounds from the beginning.

engracio
01-03-2008, 02:19 PM
e,

The residue is from the first stage. You could reuse that residue to calculate the second stage on a different machine, or with different bounds than the first stage.

In all reality, for our purposes it's trash-bin material if you run the tests in the normal way: stage one then stage two, with reasonable bounds from the beginning.

Normal way meaning letting it run stage 1 and 2, which on my older Xeon 2.8 means about 9+ hours per WU, then trash the residue. I tried lowering the memory allocation in Prime95 from 840MB to 720MB and did not notice much if any difference. I probably need to change it while it is running stage 2. OK, will let it go for now. I should not expect many factors at this level, correct?

Joe O
01-03-2008, 10:33 PM
Normal way meaning letting it run stage 1 and 2, which on my older Xeon 2.8 means about 9+ hours per WU, then trash the residue. I tried lowering the memory allocation in Prime95 from 840MB to 720MB and did not notice much if any difference. I probably need to change it while it is running stage 2. OK, will let it go for now. I should not expect many factors at this level, correct?


Joe,

When you created the worktodo for P-1 you used B1=130000, B2=2200000 as the bounds. Is that the general consensus on what the bounds should be at this level? I've always used sieve depth and factor value. It's just a tad higher than when I was factoring around 13.5M. Just wondering? :)

Also, it has been a while since I factored, and I do not remember getting residues from the factored WUs. Now as each one completes it leaves a residue. What do we do with it? Throw it in the trash bin? Thanks.

e

e
It sounds like you were using much higher B1 and B2 before. What exactly were you using? These numbers were the minimum that I calculated would find factors. Did I drop a decimal point?

engracio
01-03-2008, 10:53 PM
e
It sounds like you were using much higher B1 and B2 before. What exactly were you using? These numbers were the minimum that I calculated would find factors. Did I drop a decimal point?

Joe,

this is the record I managed to find:

[Tue Jul 24 15:19:41 2007]
P-1 found a factor in stage #2, B1=80000, B2=920000.
22699*2^13996774+1 has a factor: 3026831589545851

[Fri Jul 27 12:50:12 2007]
P-1 found a factor in stage #2, B1=80000, B2=920000.
33661*2^13997880+1 has a factor: 1393159544349241

[Tue Jul 10 21:46:15 2007]
P-1 found a factor in stage #2, B1=80000, B2=920000.
24737*2^13991551+1 has a factor: 147185688671410446786862117

I don't know, do you think we jumped a little too high? Remember, it is almost 1 mil higher.

e

vjs
01-04-2008, 09:17 AM
e,

If you are talking about in general: no, we will not find that many factors using P-1, but this was always the case. Overall we will also find fewer factors since the sieve is always progressing. Before, we had a good shot at finding factors above 900T. Now that we have sieved out to around 1300T, and BOINC has sieved a good portion between 1500T and 2000T, our chances of finding factors less than 2000T (i.e. 2P) are almost zero, since the sieve finds all factors, including those found by P-1 factoring.

Now in regards to your settings and the current settings:

You were using
B1=80K and B2=920K; this is a B1 to B2 ratio of 11.5.

Currently we are suggesting
B1=130K and B2=2200K; this is a B1 to B2 ratio of 16.9.

The suggested setting will find more factors than your old setting, but it may find fewer factors per unit of time. (Not sure; we will see. Also, if we start to fall behind in P-1 we will probably reduce those B1/B2 values.)

(BTW, the stage that uses the most memory is stage 2; memory requirements are based upon the size of the B2 value.)

The higher the B1 and B2, the better the chance of finding a factor, but the longer each test will take. There are some efficiency trade-offs here between the value of n, the sieve level, B1, B2, and of course whether we P-1 test each k/n pair prior to prime testing.

The biggest issue here is that we do at least some minimal testing of each k/n pair with P-1 prior to primality testing.

Now, I think in your case we had talked and I suggested that you use a smaller B1:B2 ratio since you were memory limited.

If that is the case, simply decrease the B2 value down to a minimum B1:B2 ratio of 12, i.e. B1=130K, B2=1600K, until you do not have any memory issues.

I hope this helps, and I know a lot of what I said above you already know.
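For reference, the bound ratios quoted in this post work out as follows (a trivial check; the 1600K figure is the exact 12:1 minimum of 1,560,000 rounded up):

```python
# Old and currently suggested P-1 bounds from this thread:
old_b1, old_b2 = 80_000, 920_000
new_b1, new_b2 = 130_000, 2_200_000

print(old_b2 / old_b1)   # 11.5
print(new_b2 / new_b1)   # ~16.92
print(12 * new_b1)       # B2 at the minimum 12:1 ratio: 1560000 (~1600K)
```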

engracio
01-04-2008, 01:14 PM
e,

If you are talking about in general: no, we will not find that many factors using P-1, but this was always the case. Overall we will also find fewer factors since the sieve is always progressing. Before, we had a good shot at finding factors above 900T. Now that we have sieved out to around 1300T, and BOINC has sieved a good portion between 1500T and 2000T, our chances of finding factors less than 2000T (i.e. 2P) are almost zero, since the sieve finds all factors, including those found by P-1 factoring.

Now in regards to your settings and the current settings:

You were using
B1=80K and B2=920K; this is a B1 to B2 ratio of 11.5.

Currently we are suggesting
B1=130K and B2=2200K; this is a B1 to B2 ratio of 16.9.

The suggested setting will find more factors than your old setting, but it may find fewer factors per unit of time. (Not sure; we will see. Also, if we start to fall behind in P-1 we will probably reduce those B1/B2 values.)

(BTW, the stage that uses the most memory is stage 2; memory requirements are based upon the size of the B2 value.)

The higher the B1 and B2, the better the chance of finding a factor, but the longer each test will take. There are some efficiency trade-offs here between the value of n, the sieve level, B1, B2, and of course whether we P-1 test each k/n pair prior to prime testing.

The biggest issue here is that we do at least some minimal testing of each k/n pair with P-1 prior to primality testing.

Now, I think in your case we had talked and I suggested that you use a smaller B1:B2 ratio since you were memory limited.

If that is the case, simply decrease the B2 value down to a minimum B1:B2 ratio of 12, i.e. B1=130K, B2=1600K, until you do not have any memory issues.

I hope this helps, and I know a lot of what I said above you already know.

Joe, vjs

Thanks for the info. Yes, most of the info you stated above I already knew; I'm just kind of rusty. My biggest thing is that last year I was completing 1 WU per 3 to 4 hours per CPU and finding a factor every 6 to 8 WUs. Now it seems like everything doubled, but with very little result if any. So far, out of 6 CPUs/cores, I've found no factor yet. Is the cost/benefit ratio worth it? We all knew P-1 would eventually run into this wall sooner or later. Is it sooner now? :) I will complete my reserved range and see if we did hit the wall. My next reservation will tell me how I feel about it. Thanks. :)

e

vjs
01-04-2008, 08:01 PM
e,

As far as cost-benefit goes, you were finding really a lot of factors last year. A lot more than I expected. The cost-benefit really comes in when you look at how many factors you find per unit of time, and how many tests you could have done in that same time.

Although some could argue the ratio should be 2 factors for every 3 tests... :confused: Yup, really!!!

For the simple fact that we will probably test each k/n pair twice.

Where we were before is probably around a 1:1 ratio, if not more. This ratio was totally sub-par, since we could factor tests out quicker than we could test them. I'd say stick with the P-1 for now and look at the time requirements. Besides, we could always use those CPUs in the testing fold if that's where they are best suited.
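A quick expected-value sketch of that "one factor saves two tests" argument (the 9-hour P-1 time is engracio's figure from above; the PRP time and factor probability are illustrative placeholders, not project measurements):

```python
def p1_worth_it(p1_hours, prp_hours, factor_prob, tests_saved=2):
    """P-1 pays off when the expected PRP time saved exceeds its own cost.
    tests_saved=2 because each k/n pair gets a firstpass and a doublecheck."""
    return factor_prob * tests_saved * prp_hours > p1_hours

# 9-hour P-1 runs, hypothetical 120-hour PRP tests, ~4% factor chance:
print(p1_worth_it(9, 120, 0.04))   # True: 0.04 * 2 * 120 = 9.6 hours saved
```

By this framing, the break-even factor rate falls as n grows (PRP tests lengthen faster than P-1 runs), which is why P-1 stays worthwhile even as raw factor counts drop.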

engracio
01-11-2008, 10:18 AM
Just a reminder for the factorers: once you find a factor and are manually submitting it, if the normal page states that 0 factors were found, do not automatically assume it is not a valid factor or that somebody has already found it. I've noticed that the current factor sizes are close to being "large factor size" instead of "normal factor size".

Last year I was able to generally "eyeball" the factor size, and when in doubt I submitted the "normal factor" on the large factor page. :)

e

oops I think I posted this on the wrong thread, please move. Thanks

hhh
02-08-2008, 11:23 AM
If I let run prime95 with the Pfactor= line, I get much lower values for B1 and B2, probably because I have only 400 MB assigned.

Is anybody interested to take my files after stage 1 and to let finish stage2 on a machine with lots of RAM?

H.

vjs
02-08-2008, 06:08 PM
hhh,

What B1 value are you running them out to?

Depending on the B1 you're running, I'll do the B2 portion. Only problem: if I recall, the residual is something like 1MB in size, if not more.

hhh
02-09-2008, 11:09 AM
I am running B1=130000 at the moment. Yes, the residual is big, but I could zip them together with the worktodo.ini and host them on RapidShare or so. (I just need to figure out how that works.) H.

vjs
02-11-2008, 06:35 AM
Sure, I'll run them with a fairly large B2 value. I have 2GB, so I should be able to go fairly large. How many GB of memory do you have?

You know that you can break the second stage into parts if you wish.

hhh
02-11-2008, 11:34 AM
I have 512MB on the P4, and 1GB on the Core2Duo. I had 400MB assigned per instance on the C2D, but Windows became really slow, so I preferred to reduce it a bit.

Thanks, anyway, when I have enough residues together, I'll think about a way to send them to you. H.

vjs
02-12-2008, 12:10 AM
hhh,

I have 2GB on my quad core, with generally 1500MB free. This is free memory even while running two instances of PRP and two instances of LLR.

I'm thinking I could run one instance of stage 2 on your residues pretty easy.

Not sure where you live, but in the US 2G of memory can often be purchased for 20-30 dollars after mail in rebate. Just a thought.

hhh
02-12-2008, 02:47 PM
It's a notebook, underclocked and undervolted when on battery. 2GB more will basically just empty my battery faster. Furthermore, waking the computer from hibernate would take twice as long.

Sometimes, less is more. H.

vjs
02-14-2008, 02:02 PM
Laptop... that explains it. When you're ready, let me know. Looks like we have some time for those tests with the recent repopulation.

Death
05-30-2008, 05:37 AM
Just got a Q: if we can sieve together with PSP, can we P-1 together?

hhh
05-30-2008, 01:43 PM
Just got a Q: if we can sieve together with PSP, can we P-1 together?

Nope.

Firstpass at SoB is at nearly 15M, while it's only around 5M for PSP. And at 5M, P-1 isn't worth it, fortunately, thanks to the deep sieving.

H.

Death
06-09-2008, 05:33 AM
Thank you.

And another question: I'm trying to run P-1 for Riesel, just to see some stats for my team, but the latest version of Prime95 doesn't "continue" after makewdt.exe.

What's wrong with it, and what version do you use?

glennpat
06-10-2008, 07:55 PM
I've been running Linux lately, so I have not been using Prime95 much. The newer Prime95 version 25.3 has changed some of the file names. From the whatsnew:


Several files have been renamed so that they are not changed by Windows
save/restore. Prime.ini is now prime.txt. Local.ini is now local.txt.
Worktodo.ini is now worktodo.txt. The primenet.ini file has been
deleted - it is now a section within the prime.txt file.

Death
06-11-2008, 03:28 AM
hmmmm.... maybe I should check this out.

hhh
06-11-2008, 04:24 AM
Thank you.

And another question: I'm trying to run P-1 for Riesel, just to see some stats for my team, but the latest version of Prime95 doesn't "continue" after makewdt.exe.

What's wrong with it, and what version do you use?

P-1 for Riesel will not be worth the CPU cycles: they have sieved deeper than we have, and have even lower n.

H.

Death
06-12-2008, 03:55 AM
This is just for fun. I want to see team Ukraine in the stats; I don't care about place or scores, just presence. I also want to understand how to run P-1, since this knowledge can be useful in other projects.

hhh
06-18-2008, 04:04 PM
14485000-14490000 Cow_tipping [Reserved]

Your results are going to be useless, given the advance of firstpass, see here (http://www.seventeenorbust.com/secret/) or here (http://www.seventeenorbust.com/sieve/next.txt). A reservation above 15M will make more sense. H.

PS: Yes, I agree, somebody could clean up thread. H.

Max Dettweiler
06-21-2008, 12:12 AM
Your results are going to be useless, given the advance of firstpass, see here (http://www.seventeenorbust.com/secret/) or here (http://www.seventeenorbust.com/sieve/next.txt). A reservation above 15M will make more sense. H.

PS: Yes, I agree, somebody could clean up thread. H.
Won't they still be useful for secondpass, though?

hhh
06-26-2008, 04:06 AM
Won't they still be useful for secondpass, though?

Yes, but the whole purpose of P-1 is to save tests at a last stand, isn't it? H.

Max Dettweiler
06-28-2008, 12:14 PM
Yes, but the whole purpose of P-1 is to save tests at a last stand, isn't it? H.
I would think that even if the number has been tested once, a factor for that number can still save us from having to do the secondpass test.

hhh
06-29-2008, 05:45 AM
I would think that even if the number has been tested once, a factor for that number can still save us from having to do the secondpass test.

That's right. Yet it will only happen in a few years, and by that time there is a good chance that a prime will have been found or that sieving will have found that factor. And then, P-1 is not superfast: in the time it takes to find such a factor, one could have done one LLR test, but not two.

So, if one wanted to look for factors of doublecheck numbers, one would do that right before doublecheck hits them; and then again, one would not do it at all, because you are better off with LLR directly.

I hope I don't look too wise-assed now... H.

balachmar
07-07-2008, 02:58 AM
Does mprime have to do the stage 2 work as well? On this machine I have had it running for a while, but I didn't change the memory setting, so it doesn't do stage 2. How bad is that?
On my other machine, I actually opted for a range that seems to have been passed by PRP. But it will be finished by now, so I will send the results in this evening.

hhh
07-07-2008, 04:16 AM
Does mprime have to do the stage 2 work as well? On this machine I have had it running for a while, but I didn't change the memory setting, so it doesn't do stage 2. How bad is that?
On my other machine, I actually opted for a range that seems to have been passed by PRP. But it will be finished by now, so I will send the results in this evening.

Stage 2 is needed to find factors. In your mprime directory, look for a bunch of files named (one letter)(long number); I hope you get what I mean. It could be something like
z5123875 or
f1329834

If they are there, you can probably just put the entries into worktodo.txt again, change your memory settings, and it will start doing all the stage 2 work for those tests. If that fails, it is probably better to drop that range and do another one with larger memory settings, rather than redoing all the stage 1 work.

H.
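If you want to check programmatically which residues you have, a small sketch (the one-letter-plus-digits naming is inferred from hhh's description above; treat the exact pattern as an assumption):

```python
import os
import re

def find_save_files(directory="."):
    """List files that look like Prime95/mprime P-1 save files:
    a single letter followed by a long run of digits (e.g. z5123875)."""
    pattern = re.compile(r"^[A-Za-z]\d{6,}$")
    return sorted(f for f in os.listdir(directory) if pattern.match(f))
```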

balachmar
07-07-2008, 07:36 AM
OK, thanks for the clarification. I know what you mean about the files.
I was going to copy the lot to another machine, since this machine is doing some other work at the moment, and hope the other machine would be able to finish it. But I first tried to get it running on this machine, and that works.
So I will complete it normally.

balachmar
07-08-2008, 10:38 AM
I'll post here not to trash the other thread, but my range resulted in one factor, not two!
14920000 14921000 balachmar 1 [complete]

Joh14vers6
07-15-2008, 03:48 PM
I'll post here not to trash the other thread, but my range resulted in one factor, not two!
14920000 14921000 balachmar 1 [complete]
I can not see the factor in the stats. Is the factor big? Then post it here (http://www.seventeenorbust.com/largesieve/).

glennpat
08-02-2008, 07:02 PM
I had posted that I had 2 factors in the range 15060000-15070000. Maybe it should really have been only 1. The factor 8058533177897161|24737*2^14910223+1 I turned in was new and saved to the database, but I see today I did not show up in my stats. I looked in the stats for everyone and saw:

5189.734T 21181 15307772 3800310.858 Sat 12-Jul-2008 Mon 21-Jul-2008 35.000 PrimeGrid

Mine was found after the one PrimeGrid had found, so it didn't really count for me.

balachmar
08-12-2008, 03:12 AM
I can not see the factor in the stats. Is the factor big? Then post it here (http://www.seventeenorbust.com/largesieve/).
Too bad I only read this now. I don't think I have the value of the factor anymore...
And as far as I know, the submission page accepted the factor...
Hmm, strange...
Anyway, I'm getting a bit fed up with P-1: I did something like 40 lines of work and still no result... (as in points for the team and tests saved...)

[DPC]Frentik
09-17-2008, 01:44 PM
Sorry if this was asked before, but these are the steps I performed to do P-1 factoring, and I want to make sure it's done properly:

- Created an account at www.seventeenorbust.com and joined my team there.
- Downloaded the latest Prime95 version from www.mersenne.org.
- Reserved a range in the appropriate thread and saved it in the Prime95 folder.
- Started Prime95 and chose "Join GIMPS" (then I had to fill in a username/email etc., but I assume this is only used if you do Mersenne prime finding? <- unsure on this part).
- Stopped Prime95, added the results.txt=fact.txt line to the prime.ini file, and restarted Prime95.

It's running now, and since it's working on the first value from the worktodo.ini file, I assume this part is correct so far.

Now when I find a factor I only have to submit it at www.seventeenorbust.com/sieve? And when the range is done, I'll post it in the coordination thread.

Joe O
09-17-2008, 08:22 PM
If the factor is larger than 2^64 then you have to submit it here. (http://www.seventeenorbust.com/largesieve/)
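A quick way to tell which page a factor belongs on, instead of eyeballing the digit count (my own helper; the 2^64 cutoff is from the post above):

```python
def needs_large_page(factor: int) -> bool:
    """True if the factor exceeds 64 bits and must go to /largesieve/."""
    return factor.bit_length() > 64

# Factors posted earlier in this thread:
print(needs_large_page(3026831589545851))             # False (52-bit)
print(needs_large_page(147185688671410446786862117))  # True (87-bit)
```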

Joh14vers6
11-01-2008, 11:18 AM
I switched over from prime v25.6 to v25.7.
Since I switched over I see an extra text at the result.txt.
With 25.6 I had results like:

[Fri Oct 10 20:25:04 2008]
21181*2^15854852+1 completed P-1, B1=130000, B2=2200000, Wd1: 9A15E8A8

And now with 25.7 I mostly get results like:

[Tue Oct 14 22:45:12 2008]
55459*2^16041334+1 completed P-1, B1=130000, B2=2200000, E=6, Wd1: 9F09E1E8

What does E=6 mean?

opyrt
11-03-2008, 08:41 AM
I switched over from prime v25.6 to v25.7.
Since I switched over I see an extra text at the result.txt.
With 25.6 I had results like:

[Fri Oct 10 20:25:04 2008]
21181*2^15854852+1 completed P-1, B1=130000, B2=2200000, Wd1: 9A15E8A8

And now with 25.7 I mostly get results like:

[Tue Oct 14 22:45:12 2008]
55459*2^16041334+1 completed P-1, B1=130000, B2=2200000, E=6, Wd1: 9F09E1E8

What does E=6 mean?

I'm getting the same thing on my 32 bit client, but not on the 64 bit client.

Edit: I now see that this has been answered on mersenneforum: http://www.mersenneforum.org/showthread.php?t=10902

Joh14vers6
03-28-2009, 12:10 PM
Today I found the factor:
P-1 found a factor in stage #2, B1=130000, B2=2200000.
21181*2^16496492+1 has a factor: 13825033371936547

But I will not get any credit for it because user 13796 (PrimeGrid) already found this factor. :(
I found that out here: http://www.seventeenorbust.com/sieve/results.txt.bz2
I searched for more tests with already found factors in my worktodo.txt and found another two already found factors by PrimeGrid. So I deleted them from the worktodo.txt.

How can I prevent PrimeGrid finding my factors first?

shauge
03-28-2009, 01:21 PM
It is always annoying not getting credit for a found factor. There is of course no way to prevent sieving from finding factors in your factoring range; it is only possible to check for found factors before starting factoring. The factor you post is 13.8P, which is in a sieve range PrimeGrid did some time ago, and when I look in the "worktodo_16000000_16500000.zip" posted in the "P1 coordination thread", the test you post has been excluded. You should use an updated worktodo file.

Joe O
03-28-2009, 01:22 PM
How can I prevent PrimeGrid finding my factors first?

By checking your worktodo.txt against results.txt on a daily basis and removing k n pairs from your worktodo.txt that have already been factored and posted in results.txt.

Edit:
You should use an updated worktodo file.
Thanks shauge. I wrote my answer before seeing your post. I did not consider that Joh14vers6 would have used an obsolete worktodo file. Actually, even my advice to check it daily would not always work. You are correct in saying that there is no way to prevent this from happening.
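The daily check described above is easy to script. Here is a sketch; the line formats are assumptions inferred from examples earlier in this thread (results lines like "8058533177897161|24737*2^14910223+1", worktodo lines like "Pminus1=21181,2,15854852,1,130000,2200000"):

```python
import re

def factored_pairs(results_path):
    """Collect (k, n) pairs already factored from a results.txt-style file."""
    pairs = set()
    pat = re.compile(r"\|(\d+)\*2\^(\d+)\+1")
    with open(results_path) as f:
        for line in f:
            m = pat.search(line)
            if m:
                pairs.add((int(m.group(1)), int(m.group(2))))
    return pairs

def prune_worktodo(worktodo_path, done):
    """Return the Pminus1 lines whose (k, n) pair is not yet factored."""
    kept = []
    pat = re.compile(r"Pminus1=(\d+),2,(\d+),")
    with open(worktodo_path) as f:
        for line in f:
            m = pat.search(line)
            if m and (int(m.group(1)), int(m.group(2))) in done:
                continue   # already factored: drop this assignment
            kept.append(line)
    return kept
```

Write the `kept` lines back to worktodo.txt (with Prime95 stopped) and the already-factored assignments are gone.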

shauge
03-28-2009, 05:59 PM
I can recommend the "make worktodo" (http://tools.1up.no/mkwork/) site my teammate runesk made to get a worktodo file which eliminates the tests factored in the current online results.txt.bz2 file. I will also mention my own tool (http://www.dump.no/files/b9d50030fd5b/SobFactResultsWorkToDoCompare.zip), which checks a local worktodo.txt against a locally copied version of results.txt and displays already-found factors (take care not to overwrite the Prime95 output file "results.txt").
