Yes, that's true, the problem exists for P4 only.
It also works on AMD Athlon without any problem. But, to be honest, I do not want to give up 515 kp/sec for P-1 at the moment.
Still, I think it would be nice if this bug is repaired. P-1 needs P4 power as well.
Originally posted by Nuri
Yes, it seems so. I've just tried on Athlon 2400, and it works fine.
As far as I remember, it was mentioned somewhere that PIV is the most efficient machine for this task. Too bad if it actually is the case.
I hope Louie can have the time to take a look at it.
I most heartily concur!
Originally posted by Nuri
Still, I think it would be nice if this bug is repaired. P-1 needs P4 power as well.
Joe O
My P4 does a P-1 test at 1.5 in < 45 minutes.
I assume P4s are quite an important factor in P-1 factoring.
Posted in the coordination thread:
Could everyone with PIVs please post the details of their machine and the point at which it seems to start failing.
Originally posted by Frodo42
4930000 4935000 Frodo42 169 2.483284 3 [completed]
5100000 5110000 Frodo42 317 5.049642 ? [reserved]
I don't seem to have any problem for n>4980670 with ver. 1.1 on P4.
Joe O
Nuri, did you actually compute 4980670, or is that the one that failed?
Joe O
I did. Maybe it was not clear; I had to emphasize it.
From reservation thread:
Please replace
4980000 4982000 Nuri 91 1.624419 ? [reserved]
4982000 4990000 ? ???? ?.?????? ? [soon to be passed]
with
4980000 4980670 Nuri 28 0.295272 1 [completed]
4980726 4990000 ? ???? ?.?????? ? [soon to be passed]
PS: 4980670 is the upper limit for my P4.
I didn't want to miss any, so I tried the interval 4980670 4981000, and the first one it tried was 4980670. I wasn't sure if the endpoint was included in the range or the start point. I know that Louie posted it somewhere, but I haven't been able to find it. If you make your endpoint 4980700 and I make that my start point, then it won't look like there is a gap between our ranges, and there should be no confusion.
On another note, have you tried ranges way above 5000000? Frodo42 has posted that he has no difficulty with 5100000 5110000 on a PIV.
Joe O
My success might be because I use the Linux version?
I have now done something like 15 tests at 5.1M. They take 1.5 times the time needed at 4.9M with the same parameters, but that's probably because a factor counts that much more at this point; their probability of success is also somewhat higher.
Sure, I agree with you. Of course that should be the case normally.
Originally posted by Joe O
If you make your endpoint 4980700 and I make that my start point, then it won't look like there is a gap between our ranges, and there should be no confusion.
Since that was a specific case, I wanted to post it that way in the first place to show the exact maximum n in the dat file that it can handle, and the exact minimum n at which it fails to run (to make it clear where exactly the problem starts). But I see it created some confusion on your side.
Yes, I tried many alternatives. It takes just a few seconds for the client to exit for those ranges, so it was very easy to try tens of them up to 20M in a couple of minutes per client version.
Originally posted by Joe O
On another note, have you tried ranges way above 5000000? Frodo42 has posted that he has no difficulty with 5100000 5110000 on a PIV.
I get the same behaviour on an ATHLON!
Originally posted by Nuri
ecm.exe <n low> <n high> <factor depth> <factor value> (mem usage at local.ini) ==> exits starting at 4980726.
Joe O
It seems like everyone with the P4/Windows combination has the problem.
Originally posted by Joe O
Could everyone with PIVs please post the details of their machine and the point at which it seems to start failing.
Maybe someone can fire up a Knoppix version and start the P-1 factorer? (I won't be able to get to my PC in the next few days...)
What happened here?
3 | 33661*2^4993368+1
bounds 20000 140000 with 128 and 1.5
I ran the same test again and got nothing. Should I play with the bounds? Is there a factor that I'm just barely missing, or is this just a bug?
Probably a bug. This has happened recently to Frodo42, and one other time as well.
Originally posted by Keroberts1
I ran the same test again and got nothing. Should I play with the bounds? Is there a factor that I'm just barely missing, or is this just a bug?
Joe O
Same thing again.
Now I have two:
3 | 33661*2^4993368+1
3 | 10223*2^4993901+1
Does anyone know what causes this?
Does anyone know when we can expect Louie back? We kind of need him, as he is probably also the only one who can fix these bugs in the P-1 factorer.
If he does not expect to be able to give some time to this project within a reasonable timeframe, perhaps it would be a good idea if someone else began looking at the source code for the P-1 factorer (not me, I study physics, not maths).
I think it happened again, except this time it's
49 | 21181*2^4995332+1
That's three in a range of about two thousand. Is anyone else getting this error this often? I don't know what's wrong with my computer; maybe it's because I only use 128 MB of RAM. Could someone tell me if there is something wrong with it happening this often? For now I'm stopping; my ranges are done up to 4997000. I haven't found any factors, but I have had that error three times. I think I'd be better off putting all of my resources into sieving.
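A quick way to sanity-check reports like these, for anyone curious: the small plain-Python helper below (just an illustrative sketch; divides() is not part of any client) computes k*2^n+1 modulo the reported factor with modular exponentiation, so the huge numbers never have to be built. All three of the reports above come back False, i.e. none of the reported factors actually divides its candidate, which fits the "probably a bug" explanation rather than a barely-missed factor.

def divides(f, k, n):
    # does f divide k*2^n + 1?  pow() with a modulus keeps everything tiny
    return (k * pow(2, n, f) + 1) % f == 0

print(divides(3, 33661, 4993368))    # False
print(divides(3, 10223, 4993901))    # False
print(divides(49, 21181, 4995332))   # False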
Please check out my last post here and let me know if this is a good idea (it should give a speed boost of 2x).
Team Anandtech DF!
I'm having a little trouble reporting the last factor that I have found.
261252098359999236987971694398316499 | 28433*2^5314225+1
The factor is larger than 2^117 and the Sieve Result Submission page only accepts factors less than 2^64.
Without the limit on the bias in the score calculation, this factor would score around 2.3*10^25 points.
I think a factor of that size calls for a couple of smilies:
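For anyone who wants to double-check the size claims, a couple of lines of plain Python (purely an illustrative check) confirm them: the factor has 36 decimal digits and 118 bits, so it is indeed above 2^117 and far beyond the page's 2^64 cutoff.

f = 261252098359999236987971694398316499
print(len(str(f)))     # 36 decimal digits
print(f.bit_length())  # 118 bits, i.e. f > 2^117
print(f > 2**64)       # True -- well over the submission page's limit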
Wow, is that 36 digits?
What is the upper limit of a factor size that can be found using the P-1 factorer?
That one's a real trophy.
I assume you've tried submitting to
http://www.seventeenorbust.com/largesieve/
Now I have. (I had completely forgotten about that page).
Originally posted by MikeH
I assume you've tried submitting to
http://www.seventeenorbust.com/largesieve/
That's what I counted it to.
Originally posted by Keroberts1
Wow, is that 36 digits?
Congratulations, but that factor isn't prime (Mr. Killjoy here).
261252098359999236987971694398316499 =
1695263162018308849 * 154107105146414851
Keep this up, and you'll make it into the record books:
http://www.loria.fr/~zimmerma/records/Pminus1.html
Oh, and I might need to make some hasty changes to the sieve scoring; I'm not sure it can cope with displaying that many digits.
Just in case anyone is interested, here is the factorization of p-1:
2*3*457*86491*8622413*13809557*9251514854849
But that's not really what the P-1 algorithm found. Rather it found two factors (simultaneously, since the bounds covered both):
1695263162018308849-1:
2^4 * 3^4 * 7^2 * 157 * 317 * 2237 * 239779
and
154107105146414851-1:
2 * 3 * 5^2 * 17 * 37 * 3217 * 10271 * 49433
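Those numbers are easy to verify with a short plain-Python check (a sketch using only the values quoted in this thread, nothing from the client): the two primes multiply back to the submitted 36-digit factor, and each listed factorization of p-1 multiplies out exactly. The largest primes involved, 239779 and 49433, are what the stage 2 bound had to reach; everything else is well under a typical B1.

from math import prod

f  = 261252098359999236987971694398316499
p1 = 1695263162018308849
p2 = 154107105146414851

assert p1 * p2 == f                               # the 36-digit factor is their product
assert prod([2]*4 + [3]*4 + [7]*2 +
            [157, 317, 2237, 239779]) == p1 - 1   # 2^4 * 3^4 * 7^2 * 157 * 317 * 2237 * 239779
assert prod([2, 3, 5, 5, 17, 37,
             3217, 10271, 49433]) == p2 - 1       # 2 * 3 * 5^2 * 17 * 37 * 3217 * 10271 * 49433
print("all checks pass")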
I was trying to redirect the output of sbfactor to a file with something like
./run.sh 5322000 5323000 > progress
but there is nothing written to progress.
The idea behind this is to start sbfactor at system startup, but later monitor it under kde with tail -f ...
Can anybody help me with this?
I redirect both STDOUT and STDERR with
Originally posted by rosebud
I was trying to redirect the output of sbfactor to a file with something like
./run.sh 5322000 5323000 > progress
but there is nothing written to progress.
The idea behind this is to start sbfactor at system startup, but later monitor it under kde with tail -f ...
Can anybody help me with this?
./sbfactor 5134000 5136000 45 1.5 512 > uddata 2>&1
and that works.
I use 47 instead of 45 as 2^47=140.74*10^12 and the sieve 90% point has passed.
./sbfactor 5134000 5136000 45 1.5 512 > uddata 2>&1
As I understand it, this just makes sure that the P-1 factorer does not check for or find duplicates.
Well, I tried that too, but it equally just creates an empty file and doesn't write anything into it.
Originally posted by hc_grove
I redirect both STDOUT and STDERR with
./sbfactor 5134000 5136000 45 1.5 512 > uddata 2>&1
and that works.
I thought about that, too, but it gives me lower bounds than with 45. Isn't it actually less likely to find factors with smaller bounds, especially when the sieving point goes up? So shouldn't the bounds be increased instead? Please correct me if I'm wrong...
I use 47 instead of 45 as 2^47=140.74*10^12 and the sieve 90% point has passed.
As I understand it, this just makes sure that the P-1 factorer does not check for or find duplicates.
Rosebud,
run.sh invokes sbfactor. So you need to redirect the output of sbfactor. Try to edit run.sh and it should work.
Thanks for your help. I just found out that I was simply not patient enough.
Apparently some buffer is written to the output file in 4 KB blocks; I just didn't wait long enough to see it happen before I killed the program again.
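For what it's worth, that is ordinary stdio behaviour: output to a terminal is typically flushed line by line, but once stdout is redirected to a file the runtime usually switches to block buffering, so nothing appears until a buffer of a few kilobytes fills or the program exits. A tiny Python illustration of the same effect (not sbfactor itself, and the exact buffer size depends on the platform):

import time

for i in range(10):
    print("progress line", i)  # held in a block buffer while stdout is a file
    time.sleep(1)              # add flush=True to print() to see lines immediately

Run it with its output redirected to a file and tail -f that file to see the difference.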
I used version 11 dev.
This is what I found.
[Mon Dec 01 11:50:44 2003]
ECM found a factor in curve #1, stage #1
Sigma=7999200192601744, B1=2000, B2=200000.
29567966167154775638017847917911703792658688246294608644159465271302712647174668122482338029971328475423878344513 | 265711*2^199920+1
[Mon Dec 01 13:21:36 2003]
ECM found a factor in curve #1, stage #1
Sigma=636312383509189, B1=2000, B2=200000.
453696229622164192714306768330898875898888916174816736249304693155093236170413062337 | 265711*2^199920+1
Both of the factors are wrong.
k=265711 is being used on the PSP project.
Citrix
What is the limit on the size of a factor we could expect to find using P-1? Could we find a record factor? Very soon sieving will run past its usefulness, and many sievers will probably be switching CPUs towards P-1. Or at least I will be. At that point we'll have a rather significant amount of CPU power running there. Just wondering what our chances are of hitting a record.
Hypothetically the maximum size is unlimited, but I think the odds of finding a record factor are pretty small. I run sbfactor with B1=30000 and B2=320000, which is what the bounds optimizer gives me.
All record P-1 factors (40+ digits) were found using much higher bounds: B1 > 10^7, B2 > 10^8.
But my 36-digit factor was found using B1=35000, B2=420000, so B1 and B2 don't have to be that big.
Originally posted by rosebud
Hypothetically the maximum size is unlimited, but I think the odds of finding a record factor are pretty small. I run sbfactor with B1=30000 and B2=320000, which is what the bounds optimizer gives me.
All record P-1 factors (40+ digits) were found using much higher bounds: B1 > 10^7, B2 > 10^8.
Having said that, I do think my factor will be our largest for quite a while. Maybe MikeH should change the stats to show the 3 or 5 largest factors so others have a chance of having big factors recognized?
I'm sorry, but I thought that had been found to be a composite of two factors? Does that matter?
Yes, that matters. The reason hc_grove's bounds didn't need to be so high is that the bounds only had to be sufficient for a 19-digit factor (and an 18-digit one).
The chance that a set of small factors completely factors a number decreases as the number gets bigger.
And if you're aiming for a record I'm sure nothing but a prime factor is accepted.
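To make the link between bounds and factor size concrete, here is a deliberately small P-1 sketch in plain Python (an illustration of the method only; primes_up_to() and pminus1() are toy functions, not how sbfactor does it). Stage 1 raises a base to every prime power up to B1; a naive stage 2 then allows one extra prime between B1 and B2; a prime p dividing N drops out as soon as p-1 is covered. Feeding it the product of the two primes behind the 36-digit factor, with the B1=35000/B2=420000 bounds mentioned above, recovers one of them in a fraction of a second: the bounds only have to match the smoothness of p-1, not the size of the number being attacked, which is why a 19-digit and an 18-digit factor fell to such modest bounds.

from math import gcd, isqrt

def primes_up_to(n):
    # plain sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = b"\x00" * len(range(i*i, n + 1, i))
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1(N, B1, B2, base=3):
    # stage 1: raise the base to every prime power <= B1
    a = base
    for q in primes_up_to(B1):
        pk = q
        while pk * q <= B1:
            pk *= q
        a = pow(a, pk, N)
    g = gcd(a - 1, N)
    if 1 < g < N:
        return g
    # naive stage 2: allow one extra prime q with B1 < q <= B2
    # (real implementations do this far more cleverly)
    for q in primes_up_to(B2):
        if q > B1:
            g = gcd(pow(a, q, N) - 1, N)
            if 1 < g < N:
                return g
    return None

# the two primes reported earlier; their product divides 28433*2^5314225+1
N = 1695263162018308849 * 154107105146414851
print(pminus1(N, B1=35000, B2=420000))  # finds one of the two primes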
Are there any news on fixing the P-1 bug for PIVs?
I am aware that it is not of top priority, but some feedback would be useful.