
Thread: P-1 factorer

  1. #81
    It's an original factor - I didn't find it in the latest files posted by Louie, and the n was listed as uncleared in one of the lists posted recently.

    It took me about 5 hours on an Athlon 1333 with bounds of B1=25000, B2=2500000. Looking at the factorization of P-1, one can see that bounds of 6000 and 160000 would have sufficed.

    961094450858074349 - 1 = 2^2 x 47^2 x 61 x 2243 x 5107 x 155663

    I looked at how much P-1 has been done on the numbers in the 4 million range in GIMPS, and how much the client currently tries to P-1 on a P4. Here is the result:

    If the number is not tested and has been trial factored to 62 bits:
    B1=45,000 B2=753,750

    If it has been factored to 63 bits the bounds drop to

    B1=40,000 B2=640,000

    If it has already been tested once, i.e. in the doublecheck phase:

    B1=20,000 B2=260,000


    So it looks like biwema's estimate of keeping B2 at about 10-25 times B1 is correct, as is his suggestion of raising B1. I'd suggest 50,000 and 1 million as reasonable bounds unless someone does an analysis.

    [Edit: If we consider the fact that sieving has only reached 2^45 whereas GIMPS factors these numbers to 2^61, raising B1 even more might be useful. But then we also have to consider the fact that GIMPS factors have a special form, so their bounds can afford to be a little higher.]
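    To make the "would have sufficed" observation concrete: P-1 finds p in stage 1 once every prime power of p-1 is at most B1, and stage 2 additionally allows the single largest prime to lie anywhere up to B2. A minimal C sketch of that check using the factorization above (illustration only, not part of any SoB tool):

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* prime powers of 961094450858074349 - 1 = 2^2 x 47^2 x 61 x 2243 x 5107 x 155663 */
        long pp[] = { 2 * 2, 47 * 47, 61, 2243, 5107, 155663 };
        int n = sizeof pp / sizeof pp[0];
        long largest = 0, second = 0;

        for (int i = 0; i < n; i++) {
            if (pp[i] > largest) { second = largest; largest = pp[i]; }
            else if (pp[i] > second) second = pp[i];
        }
        /* stage 1 must cover everything except the largest prime,
           which stage 2 can pick up as long as B2 reaches it */
        printf("B1 >= %ld and B2 >= %ld would have sufficed\n", second, largest);
        return 0;   /* prints: B1 >= 5107 and B2 >= 155663 -- hence 6000 / 160000 */
    }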
    Last edited by garo; 06-16-2003 at 08:03 PM.

  2. #82
    NEW SBFACTOR v0.8

    this version has the features I was talking about adding, namely:

    -Working optimal bound picker
    -SoB.dat decoder
    -Ability to scan lowresults.txt/results.txt for known factors

    The usage for the optimal-bound/SoB.dat execution is:

    sbfactor.exe SoB.dat lowresults.txt results.txt <n low> <n high> -x <how far factored> <double check?>

    so if I wanted to P-1 factor all unfactored numbers between n=4000000 and n=4000100, I'd enter:

    sbfactor.exe SoB.dat lowresults.txt results.txt 4000000 4000100 -x 45 0

    here's the kinda tricky part: the 45 there means that the numbers in that range have been sieved up to 2^45 ~ 35T. that's not exactly true, but it will produce more accurate bounds than 44, since the majority of p < 35T has been sieved.

    the final 0 just means that the number is completely untested. if you wanted to try factoring a range of numbers around n=3M, you would want to set that to 1 to signify that a factor is worth less, since it will only prevent a double check and not a new test.

    To make this REALLY easy to use, I've included a batch file that only needs the bounds of the range of n that you want to test. This means you just download the new version, unzip it and type:

    run 4010000 4010100

    wait for it to finish, open up fact.txt and submit. another great thing is that even though there is no master file that tracks progress, the cache files will prevent retesting of numbers if you reboot your system or quit the program. like SB, the most you could lose would be 10 minutes. so just add another batch file that calls run.bat to the directory, put it in your startup folder, and you have your very own P-1 factoring system.

    the new download comes packaged with SoB.dat, lowresults.txt, and results.txt so it is much larger than the last versions. It is the original dat file so if you want to factor ranges less than 3M, use a different dat file.

    be warned though, the optimal bound setter will return B1=0 for numbers around n=510k. it looks at the likelihood that a factor will be found and immediately realizes that just testing the number is faster. it uses the same multiplication routines that SB does for squaring numbers, so it doesn't even need to know how fast your computer is. it's pretty nice.

    n=1M will also quit immediately if you indicate there's already a test done for it (which you should). it starts returning non-zero B1 values around n=1.7M. For more reasonably sized tests around n=4M it returns things in the neighborhood of B1=80000, B2=440000. i'll post the code it's using for bounds setting as an attachment here so that people can examine it before we decide that it is what we want for "official" testing of different ranges. once we agree on the fine points of the bounder, it wouldn't be inconceivable for people to simply reserve small ranges of n (for all k) where they will let the factorer run with optimal bounds. the new system is pretty easy to use compared to manually editing batch files. it will get easier too.

    one last thing I want to mention: the program now returns "success values" to help you estimate how many factors it will find. here's an example:

    Code:
    run 4100000 4105000
    
    sbfactor.exe SoB.dat lowresults.txt results.txt 4100000 4105000 -x 45 0
    SBFactor v0.8
    P-1 and ECM factoring for number of the form k*2^n+1.
    Adapted from GIMPS v23.4 by George Woltman and Louis Helm
    Finished parsing SoB.dat
    218 numbers between 4100000 < n < 4105000
    Searching for known factors in lowresults.txt...Done.
    Searching for known factors in results.txt...Done.
    Removed 22 numbers using the factor files
    Testing 196 numbers between 4100000 < n < 4105000
    Expected number of factors for entire range: 4.173580
    B1=80000 B2=440000 Success=0.021294 Squarings=200766
    P-1 on 4847*2^4100151+1 with B1=80000, B2=440000
    initializing test
    sieve finished
    So it expects that if I test all 196 unfactored numbers in that range to the optimally chosen levels, I should find about 4 factors. The expected number of factors is simply the sum of the success values of all 196 tests, which are individually printed before each test starts.
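    A side note on that sum: the "expected number of factors" is just linearity of expectation. A toy C check, pretending every one of the 196 tests has the success value printed above (in reality each test's value differs slightly):

    Code:
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        for (int i = 0; i < 196; i++)
            sum += 0.021294;                 /* per-test success probability from the output above */
        printf("expected factors: %f\n", sum);   /* ~4.17, essentially the reported 4.173580 */
        return 0;
    }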

    Let me know how the new version works and what you think of the optimal bound setter.

    http://www-personal.engin.umich.edu/...sbfactor08.zip

    -Louie

    PS - I am actually doing the "run 4100000 4105000" range for those looking to avoid duplication. thanx.
    Attached Files
    Last edited by jjjjL; 06-16-2003 at 08:23 PM.

  3. #83
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    Louie, could you include a "how to.txt" in the file that explains how to use the client? It took me 2 hours (literally) to figure out how to use the client the first time it was released.

  4. #84
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Louie,
    Code:
    Removed 22 numbers using the factor files
    Testing 196 numbers between 4100000 < n < 4105000
    Expected number of factors for entire range: 4.173580
    Somewhere around here, could you have the program print the 196 k n pairs that will be factored, or write them to a file? I think that it would be very helpful to have this info.
    Joe O

  5. #85
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Louie,
    Would it be possible for the program to check the last updated date of the results.txt file and reprocess it if it has changed? If this could be done between P-1 runs it would allow us to automatically eliminate factors found since the last time we downloaded the results.txt file.
    Joe O

  6. #86
    PS: Using 25000 instead of 1000 together with ecm would probably make each test last ~4 hours.

    Please note that ECM and P-1 are completely different things.

    With ECM you need to run the program several times on the same number to have a reasonable chance to find a factor.

    You'd need to run it about 25 times with B1=2000 to find most 15-digit factors. To find most 20-digit factors you need to run the program 90 times with B1=11000.
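    To see why ECM needs the repetition: each curve is an independent trial, so the chance of missing a factor only shrinks geometrically with the number of curves. A quick C illustration, assuming (reading off the curve counts above) a per-curve hit rate of about 1/25 for a 15-digit factor at B1=2000:

    Code:
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p = 1.0 / 25.0;   /* assumed per-curve chance of hitting a 15-digit factor */
        int curves[] = { 1, 10, 25, 50 };

        for (int i = 0; i < 4; i++) {
            double found = 1.0 - pow(1.0 - p, curves[i]);
            printf("%2d curves: %3.0f%% chance of finding the factor\n",
                   curves[i], 100.0 * found);
        }
        return 0;   /* 1: 4%, 10: 34%, 25: 64%, 50: 87% */
    }

    P-1, by contrast, is deterministic for a given B1/B2: one run either finds the factor or it doesn't, and rerunning with the same bounds gains nothing.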

  7. #87
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Originally posted by smh
    A few questions:

    1) Is there a file available sorted by N with both K and N on one line? This would make it easier to reserve a range.

    2) What bounds are you using?

    3) maybe a moderator can make a reservation list like in the sieving topic and delete all messages below it? Just to make things more clear.
    1) Not needed if you use version 8, available here. Just choose an n range that you want to work on and post it in the coordination thread. Then typing "run.bat n_min n_max" in the directory where you unzipped version 8 and pressing enter will get you started, e.g. "run 4080000 4085000".
    2) Automatically chosen by the program. Look here.
    3) That's what this is. Ceselb/Louie will be along shortly to clear the extra posts.

    For more information look back up to Louie's post.
    Last edited by Joe O; 06-17-2003 at 01:51 PM.
    Joe O

  8. #88
    Louie:

    the program still doesn't run at idle priority by default.

    Last edited by smh; 06-17-2003 at 01:56 PM.

  9. #89
    Originally posted by smh
    Louie:

    the program still doesn't run at idle priority by default.
    Yeah, I know. I already got told on the other thread. They reminded me of the whole idle-thing right after I finished v0.8.

    I have made a new version that runs idle and also processes the result files much faster (0 seconds instead of 3).

    I was going to throw a GUI wrapper around it and add progress bars instead of scrolling messages but it may take a day. I was going to build a new window interface with MFC but after an hour of working with it, I decided I still don't like MFC.

    I'll probably just gut the SB GUI code and start there.

    -Louie

  10. #90
    Moderator ceselb
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    Copied from the coordination thread.
    Originally posted by cjohnsto
    Or if you want to do it as part of a batch file use:
    start /LOW /b /wait ..\ecm -c 500 250000 < in.txt >> out.txt

    Of course use the program you want not ecm.
    Does anybody understand what he means?

  11. #91
    Moderator ceselb
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    C:\sieve\sbfact>sbfactor.exe SoB.dat lowresults.txt results.txt 19999000 200000000 -x 45 0
    SBFactor v0.8
    P-1 and ECM factoring for number of the form k*2^n+1.
    Adapted from GIMPS v23.4 by George Woltman and Louis Helm
    Finished parsing SoB.dat
    37 numbers between 19999000 < n < 200000000
    Searching for known factors in lowresults.txt...Done.
    Searching for known factors in results.txt...Done.
    Removed 2 numbers using the factor files
    Testing 35 numbers between 19999000 < n < 200000000 <-- takes several minutes at this stage. Could it be done quicker?
    Expected number of factors for entire range: 2.425795
    B1=650000 B2=16412500 Success=0.069308 Squarings=2020159
    P-1 on 4847*2^19999071+1 with B1=650000, B2=16412500
    initializing test
    sieve finished

    C:\sieve\sbfact>

    Does it exit because of memory shortage for that large B range, or is it a bug? Not a big problem, since I'm not planning to factor anything there for a couple of years more.

  12. #92
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Could it be because you wrote 200000000 instead of 20000000? Just a thought.

  13. #93
    Moderator ceselb
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    Nope, re-ran it with one 0 less. Still the same result.

  14. #94
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Try it with 19999999 instead of 20000000. It'll work.

    Though I don't know why.

  15. #95
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    How long does a 4m P-1 test take on average?

    I started my first test, and it took 80 minutes to reach 38% of stage 1 (PIV-1700, CPU usage 99%). Is this normal?

    A quick calculation:

    My range has 179 numbers and 5.5 expected factors. Assuming it would take 4 days to finish a 4m prp test, testing 5.5 candidates would mean 22 days. Adding double check for all, and 10% non-matching residues, it would take 46.2 days to get rid of them completely.

    For P-1 to be effective in terms of eliminating candidates (on a PC basis), it should finish each P-1 test in less than 6.2 hours (=46.2*24/179).


    Another question:

    How many boxes should we put on P-1 so as not to be caught by the main project? (Finish at least the same sized n range per day?)

    Currently, the main project finishes ~300 tests per day. Assuming a PC can finish a P-1 test at n=4m in 6 hours, that would suggest we should use 75 PCs 24/7 on P-1 not to be caught (=300/(24/6)).


    Am I wrong somewhere? Or is it still not (or hardly) feasible to use P-1 at n=4m level? (given the current relative client speeds and sieve depth)
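    For what it's worth, both calculations above check out; here they are as a small C program (same inputs as the post, nothing new assumed):

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* break-even P-1 time for this range */
        double prp_days_saved = 5.5 * 4.0 * 2.1;   /* factors * PRP days * (2 tests + 10% third tests) */
        printf("break-even: %.1f hours per P-1 test\n",
               prp_days_saved * 24.0 / 179.0);      /* ~6.2 h */

        /* PCs needed to keep pace with the main project */
        double tests_per_day = 300.0, p1_hours = 6.0;
        printf("PCs needed: %.0f\n", tests_per_day / (24.0 / p1_hours));   /* 75 */
        return 0;
    }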

  16. #96
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    I've only finished 1 P-1 k n pair with version 08 in the 4.075M range. It took just 4 minutes short of 14 hours to complete. This is on a PIII/500 running Win98SE.

    Can someone tell me how long a 4.075M PRP would take on this machine?

    I've just finished the 2nd P-1 k n pair with version 08 in the 4.075M range. It took just 3 minutes short of 13 hours to complete. This is on a PIII/500 running Win98SE.

    I've just finished the 3rd P-1 k n pair with version 08 in the 4.075M range. It took just 3 minutes short of 13 hours to complete. This is on a PIII/500 running Win98SE.

    I've just finished the 4th P-1 k n pair with version 08 in the 4.075M range. It took just 7 minutes short of 13 hours to complete. This is on a PIII/500 running Win98SE.
    Last edited by Joe O; 06-19-2003 at 08:05 AM.
    Joe O

  17. #97
    Yes, I'm getting similarly long times. It took 7 hours to finish a test on a P4-2533. Given that the chance of finding a factor is 3%, this implies 7*33/24 = 9.6 days of factoring per factor found - each factor saving two PRP tests, one of them a double check. Still, that seems pretty high.

    I think the bounds calculator is a bit screwy and is giving bounds that are too large. I experimented with -x 50, and the bounds and the time to finish seem much more reasonable.

    Any comments Louie?

  18. #98
    Originally posted by ceselb
    start /LOW /b /wait ..\ecm -c 500 250000 < in.txt >> out.txt

    Does anybody understand what he means?
    OK, I wrote that quickly but i thought it was not too bad.

    start tells windows to run a command.
    /LOW tells it to run at low priority. /b and /wait make it use the current window and wait for the command to finish before continuing.

    Then you type the command you want to run after that, with any options it needs.

    Why do this?

    It allows the program to start at whatever priority you want without using Task Manager to change it. So put the line in a batch file and you can use it to start the program when Windows starts, without worrying about changing the priority manually.
    Last edited by cjohnsto; 06-17-2003 at 09:22 PM.

  19. #99
    SB Factor v0.9

    improvements include:
    -runs at idle priority
    -parses lowresults.txt/results.txt much quicker
    -allows parsing of larger ranges (10k # limit instead of 1k)
    -now includes the lower bound inclusively

    the larger range, although not useful for actual work, may be useful for estimating the total number of factors in large areas, since you can scan ranges of around 200000 now. this is possible because of the aforementioned faster parsing.

    and the inclusion of the lower bound is just a technical corner case to make sure we don't miss numbers. right now, it would be possible to miss numbers right where two people's ranges meet. i did a quick check and there are none missed yet, but this will make it so we don't have to constantly check for n values on the boundaries.

    i noticed garo is messing with the factor limit to manipulate the optimal bound setter. i think a more reasonable way to do that would be to set the double check flag. this would lower the bounds in a more predictable way. if you can, i'd like you to go through the code i posted and recommend a way to make the program's bound checker truly optimal. the formulas are pretty easy to follow. let me know.

    anyway, here's the new version:
    http://www-personal.engin.umich.edu/...sbfactor09.zip

    -Louie

  20. #100
    Hi Louie,
    You are right, -x 50 was not the right way. In fact I did not employ that technique after all and used the doublecheck flag as you have just suggested.

    Using the double-check flag lowered B1 by a little over half and B2 by two thirds: B1 105000 -> 45000 and B2 1735750 -> 573750.
    The chance of finding a factor per exponent dropped from 3.1% to 1.9%, and the time required to complete a test dropped by 70% or more - I don't have exact numbers.

    My experience with P-1 in GIMPS suggests that bounds depend not only on how far the number has been factored and whether it is a doublecheck, but also on machine type/speed and on the memory available. For instance, P4s usually give higher bounds for the same numbers than other types of computers.

    I also looked at the code, and the only thing I can suggest changing is the constants. I think you might need to do a recalculation to get better constants for the number of squarings required etc. Remember that GIMPS factors have to be of the form 2kp+1, whereas the same does not hold for SoB. So the constants will certainly be different.

  21. #101
    What are the benefits of using P-1 factoring instead of ECM?

    Besides the fact that we already have P-1 implemented.

  22. #102
    i took out the 2 * p factor already. there's a comment where i removed it.

    Code:
    /* What is the average value of k in 2kp+1 s.t. you get a number */
    /* between 2^how_far_factored and 2^(how_far_factored+1)? */
    
    	kz = 1.5 * pow (2.0, how_far_factored); // / 2.0 / p;  removed by Louie since we have no 2*p in proth
    i'm not positive what the factor of 1.5 signifies. i'm fairly certain we should still have it.

    the fft length calculating code used by gimps is replaced by the prp code. so in theory, those two things are correct. the table of gcd constants should be the same since it's only based on the size of the number being factored. i just checked the gcd estimation code and it's correct.

    so what we might want to do is something that wblipp suggested: reduce the double check constants a little. it's likely that we won't completely double check some numbers. instead of valuing a factor as much as 2*(error rate+prp time), which equals 2.1, perhaps it should be closer to 1.5 or so. enabling the double check factor basically does this by reducing the value of a factor to prp time + 2*error rate = 1.1. if someone has a logical reason for believing this factor should be something precise, please explain. if not, i'd recommend we make it 1.5. if it's going to be arbitrary, it should at least be simple.

    -Louie

  23. #103
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Louie,
    Shouldn't the doublecheck flag come from the lowresults.txt and results.txt files' NumTests entry?

    Code:
    p	k	n	UserID	TeamID	NumTests
    3000032566681	55459	15805426	1	3	0
    EDIT:
    I think that we should keep the parameter on the command line, and call it a "relax constraint" parameter, or something like that. Allow it to range between -9 and +9, and add it to the NumTests entry for that test. This would mean different B1 and B2 for each test based on the NumTests value, and an overall bias for the run based on the "rc" parameter. If you want to run fast, lowering the chance of finding a factor by lowering the B1 and B2 values, then you would make this a 1 or 2 or 3 ... or 9. If you have more time and want to raise the chance of finding a factor by raising the B1 and B2 values, then you would make this a -1 or -2 or -3 ... or -9.
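    Purely as an illustration of the proposal (the formula below is a guess, not anything in sbfactor): the rc bias could simply shift the "value of a factor" that the bound picker already uses, reusing Louie's figures from post #102 of 2.1 PRP tests for an untested number and 1.1 once a first test exists:

    Code:
    #include <stdio.h>

    /* hypothetical mapping of NumTests plus rc (-9..+9) to a factor value */
    double factor_value(int num_tests, int rc)
    {
        double v = 2.1 - 1.0 * num_tests - 0.1 * rc;
        return v < 0.1 ? 0.1 : v;   /* keep the value positive */
    }

    int main(void)
    {
        printf("untested, rc=0:  %.1f\n", factor_value(0, 0));   /* 2.1 */
        printf("one test, rc=0:  %.1f\n", factor_value(1, 0));   /* 1.1 */
        printf("one test, rc=+9: %.1f\n", factor_value(1, 9));   /* 0.2: low bounds, fast runs */
        return 0;
    }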
    Last edited by Joe O; 06-19-2003 at 11:05 AM.
    Joe O

  24. #104
    Originally posted by jjjjL
    if someone has a logical reason for believing this factor should be something precise, please explain.
    -Louie
    Suppose our error rate, presently zero, becomes 1%. At that point the earliest time it makes sense to do a double check would be when the new exponents are 3.8 times the double check exponents. At that point the new tests take about 26 times as long (estimated as [x*ln(x)]^2 in the Resource Allocation thread). Primes are scarcer by a factor of 3.8 in the new exponent range because the number of primes is proportional to 1/ln(x), the same proportion as 1/n. Hence it is expected to take 26*3.8=100 times as much work to find a prime among the new exponents as it would take if the double check exponents were untested. This balances the 0.01 chance that there was a prime with an error in the lower range.
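    That arithmetic can be checked mechanically. A small C sketch of the model above, with the double-check exponent normalized to 1 so the new exponents sit at x = 3.8:

    Code:
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 3.8;
        double cost_ratio  = pow(x * log(x), 2.0);  /* ~26: one new test vs one double check */
        double prime_ratio = x;                     /* primes ~3.8x scarcer at the higher exponents */
        printf("relative work per prime: %.0f\n", cost_ratio * prime_ratio);
        return 0;   /* ~100, balancing the 1% error rate (1/0.01) */
    }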

    I think that 3.8 is too soon when you consider the fact that primes were found for other k's, but for the time being let's ignore that.

    If we adopt a "3.8 rule," then at the time a prime is found we will have double-checked from zero to "z" and tested from 0 to 3.8z. The fraction of double tests will be 1/3.8 = 26%. Hence I propose that the correct factor to account for double tests is 1.26.

  25. #105
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Originally posted by wblipp

    I think that 3.8 is too soon when you consider the fact that primes were found for other k's, but for the time being let's ignore that.

    If we adopt a "3.8 rule," then at the time a prime is found we will have double-checked from zero to "z" and tested from 0 to 3.8z. The fraction of double tests will be 1/3.8 = 26%. Hence I propose that the correct factor to account for double tests is 1.26.
    So use a "4.0" rule for simplicity's sake. This gives us 1/4 = 25% for the fraction of double tests, and a factor to account for double tests of 1.25. This is in the spirit of Louie's "arbitrary but simple" criterion.
    Joe O

  26. #106
    On further reflection, the proposal of 1.25 may be too low. While it's true that a "rule of 4" means that when we find a prime we will have double checked 25% of the results, there is the question of when we will reach the prime. Today we are handing out first tests around n=4 million. Under a "rule of 4" these will be double checked unless we find a prime for this k-value before n=16 million. Using Proth weights to estimate this, the probability of NOT finding a prime between 4 and 16 million varies from 61% to 87%, averaging 73%. So if we adopt a "rule of 4," about 75% of the numbers around n=4 million will be double-checked. If we go to a "rule of 10," it drops to 60% double-checking. To get to 50% double-checking, we would need to use a "rule of 25."

    A "rule of 25" would be appropriate if our error rate turns out to be 6*10^(-6). A "rule of 10" would be appropriate for an error rate of 2*10^(-4). I'm still not sure how the knowledge of other k values affects this; I'm pretty sure that having found some primes makes it less likely we missed primes for the other k-values.

    I guess I like "stick with 1.5 until we collect some error data."

  27. #107
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Originally posted by wblipp
    I guess I like "stick with 1.5 until we collect some error data."
    Sounds good to me too!

    Louie,
    The P-1 program currently processes all the k n pairs for 4847 before going on to the pairs for 5359 etc. Would it be possible to sort the k n pairs by increasing n regardless of k? This would allow those people just ahead of the PRP effort to have a better chance of remaining ahead.
    Thanks,
    Joe O

  28. #108
    Originally posted by Joe O
    Louie,
    Shouldn't the doublecheck flag come from the lowresults.txt and results.txt files NumTests entry?
    can't do that since the results.txt file doesn't have unfactored numbers in it. it's just using the result files to know which numbers it shouldn't check at all. maybe i'll make the last number just equal the factor value instead of being a flag. that way you can decide how many prp tests of the number you feel a factor is worth. closer to 1 for speed, closer to 2 for more factors.

    i'll see if i can reorder the tests so it steps through the ranges by n value instead of k/n value. that's a good idea.

    -Louie

  29. #109
    i haven't done exact timings, but it may pay to skip the stage 1 gcd. it takes as long as the stage 2 gcd and has much less chance of finding a factor. it will only prevent the second stage on around 1% of the tests.

    what do you think?

    -Louie

  30. #110
    Moderator ceselb
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    Just found a factor

    626134072137207677 | 28433*2^4150417+1

    2^2*7*4003*8761*17351*36749+1

    Also found a program to calculate prime factors.
    Source and binaries for both DOS and Linux.

  31. #111
    Originally posted by ceselb
    Just found a factor

    626134072137207677 | 28433*2^4150417+1

    2^2*7*4003*8761*17351*36749+1

    Also found a program to calculate prime factors
    Source and binaries for both dos and linux
    nice find.

    i factor my P-1 factors with this page http://www.alpertron.com.ar/ECM.HTM.

    -Louie

  32. #112
    Moderator ceselb
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    Using the above program and a nice little DOS utility called FU, I've compiled a list of most of the prime factors from the new results.txt (up to 10^15, leaving out the last 40 or so).

    I don't have a good statistics program or the knowledge, but maybe someone can figure out more optimal bounds from this data?

    The file can be found here (770kb).

    Also found another factor
    4842232354228897 | 67607*2^4150107+1

    2^5*3^2*11*257*53629*110899+1
    Last edited by ceselb; 06-22-2003 at 03:22 PM.

  33. #113
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Seems like the first tests of my range get assigned in approx. 5 days (and the last only ~5 hours later).

    Question:
    Is it wise to continue factoring k/n pairs once they have already been assigned? As that only saves a double check, the usefulness will be less than half of what it was before...

    btw. what is the best PC architecture for p-1 factoring?
    P3/Athlon or P4? Does it prefer a high FSB?
    Last edited by Mystwalker; 06-23-2003 at 10:45 AM.

  34. #114
    Originally posted by Mystwalker
    Seems like the first tests of my range get assigned in approx. 5 days (and the last only ~5 hours later).
    How did you figure that out? you may be right, but I don't know; it's very hard to tell right now because a few very high tests got assigned during the TC incident a few weeks ago. the real upper edge of testing is much lower than 4M. I'd guess you have >5 days.

    Originally posted by Mystwalker

    Question:
    It is wise to continue factoring k/n-pairs the time they already got assigned? As it only saves a double check then, usefulness will be less than half of what it was before...
    i would say no. it's wise to pick a range that's at least close to being tested in the next month or so, but it'd be equally wise to try and get one you can finish in time for maximum benefit.

    Originally posted by Mystwalker

    btw. what is the best PC architecture for p-1 factoring?
    P3/Athlon or P4? Does it prefer a high FSB?
    The factoring code uses the same squaring routines that SB does, so procs that are good for SB are good for factoring. This means fast FSBs are good and P4s are good too. The only proc that has an overwhelming advantage in factoring is the P4, so once again i'd recommend that any of those stick with SB (or at least P-1) as opposed to sieving.

    -Louie

  35. #115
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Originally posted by jjjjL
    How did you figure that out? you may be right, but I don't know; it's very hard to tell right now because a few very high tests got assigned during the TC incident a few weeks ago. the real upper edge of testing is much lower than 4M. I'd guess you have >5 days.
    Hm, I made a mistake in my calculations...
    Ok, I have more than 5 days left then. But it seems like I need at least all the time I have available...

  36. #116
    i figured out the issue that was making the LINUX code crash.

    i was testing on only P4s and the SSE2 code was unaligned.
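    For anyone wondering what "unaligned" means here: SSE2's aligned load/store instructions (movapd/movdqa) fault unless the address is 16-byte aligned, and plain malloc on 32-bit Linux of this era only guarantees 8 bytes. A generic C illustration of the fix (not sbfactor's actual code):

    Code:
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>
    #include <emmintrin.h>

    int main(void)
    {
        void *p = NULL;
        /* request 16-byte alignment explicitly instead of trusting malloc */
        if (posix_memalign(&p, 16, 1024 * sizeof(double)) != 0)
            return 1;
        double *buf = p;
        buf[0] = 1.0;
        buf[1] = 2.0;
        __m128d v = _mm_load_pd(buf);   /* aligned load: faults on a misaligned address */
        printf("%f\n", _mm_cvtsd_f64(v));
        free(p);
        return 0;
    }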

    a working linux version of SB factor is up in my personal space on the Umich servers now:

    http://www-personal.engin.umich.edu/~lhelm/sbfactor.gz

    -Louie

  37. #117
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Ah, one request:

    Could you tell me how many tests there still are to be handed out before it hits n=4073339?
    So I can time the crunching to be (hopefully) ready in time...

  38. #118
    I looked at the data that ceselb posted. Unfortunately it is not very useful for figuring out optimal bounds, as most factors are not P-1 smooth.

    For instance, a B2 bound of 1 million would have captured 26% of the factors in the file, and a bound of 2 million would have captured 30%.

    Similarly, if only stage 1 were done, a bound of 50k would have captured 9% of the factors, whereas 100k would have captured 12.6% and 200k would have captured 16.3%.

    So that, then, is the distribution of the final (largest) factor of p-1. The penultimate-factor distribution is much better: with B2 = HUGE, a B1 of 25k would capture 88% of the factors, 50k -> 91.6%, and 100k -> 94.6%.

    From this limited data, I would suggest that we keep B1 low, about 25k and keep B2 as high as possible.

    But then this observation is just based on empirical data and may be way off the mark.
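    For anyone who wants to rerun this kind of analysis on ceselb's file, the tabulation boils down to recording, for each factor p, the largest prime of p-1 (the B2 stage 2 would need) and the largest remaining prime (roughly the needed B1). A C sketch; the input format here is an assumption for illustration (one p-1 per line as space-separated prime factors), not the file's actual layout:

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* assumed stdin format: one line per factor, e.g. "2 2 7 4003 8761 17351 36749"
           (repeated primes listed individually, which slightly understates
           the B1 actually needed to cover prime powers) */
        char line[1024];
        long total = 0, b1_25k = 0, b2_1m = 0;

        while (fgets(line, sizeof line, stdin)) {
            long f, largest = 0, second = 0;
            char *s = line;
            int used;
            while (sscanf(s, "%ld%n", &f, &used) == 1) {
                s += used;
                if (f > largest) { second = largest; largest = f; }
                else if (f > second) second = f;
            }
            if (largest == 0) continue;
            total++;
            if (second <= 25000)    b1_25k++;   /* B1=25k covers all but the final prime */
            if (largest <= 1000000) b2_1m++;    /* final prime within B2=1M */
        }
        if (total)
            printf("%ld factors: %.1f%% fit B1<=25k, %.1f%% have final prime <= 1M\n",
                   total, 100.0 * b1_25k / total, 100.0 * b2_1m / total);
        return 0;
    }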

  39. #119
    Moderator ceselb
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    2 more factors found.

    273148210774616633431 | 5359*2^4151886+1
    2*3*5*13^3*29*6599*87103*248621+1

    284412835641643 | 21181*2^4151612+1
    2*3*19*61*71*2953*195071+1

    Am I the only one finding any factors from the coordination thread? I'm using optimal bounds, making a test take 5.5h.

  40. #120
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    I have completed only 2 or 3 tests so far. But 2 are almost ready.
    When does the program report a factor (if there is one within the bounds)? Always at the end of the run (read: step 2 100% done) or in between?
