
Thread: 24737*2^991+1 smallest unfactored k/n pair

  1. #1
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331

    24737*2^991+1 smallest unfactored k/n pair

I just thought we should have a post here tracking the progress of factoring attempts on this number.

    P+1 using ECM6.0

    3 Curves at
    B1=100M
    B2=10G
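(For reference, a P+1 run with those bounds might be launched with GMP-ECM roughly like this; just a sketch, with number.txt as a placeholder file holding the single line 24737*2^991+1:)

Code:
ecm6 -pp1 -c 3 1e8 1e10 < number.txt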

    P-1'ed using Prime95 24.12

B1 = 2^32-1 = 4.2949G (the limit in Prime95 24.12)
B2 = 4.2949G (i.e. no separate stage 2)

    ECM

    Currently working on the 50-digit level, >10% complete
    Last edited by vjs; 06-24-2005 at 09:07 PM.

  2. #2
    After reading this thread, I've started factoring the number using:

    http://www.alpertron.com.ar/ECM.HTM

I'm on curve 44. We should get some sort of distributed effort going on this one using this applet. The page gives instructions on how to distribute the factoring; it just boils down to entering the curve number, entering the number to factor, and pressing enter.

It suggests 10000 as a good curve number, though that seems a bit high. I don't think it really hurts to choose 10000, but hey, I don't know a lot of the math behind this. Perhaps this would be an easy way for people to get into factoring? (Of some sort, at least; how practical this is, I don't know. I just want to factor the smallest k/n pair.)

  3. #3
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    SlicerAce,

We are way, way beyond alpertron; it's a Java applet and relatively slow. It also doesn't use memory very well (32 MB max, I think). It's a great program/website, but only for quick factoring. Your best bet is to download the latest copy of ecm6.

    http://www.mersenneforum.org/forumdisplay.php?f=55

Also, I believe alpertron's limit is B1=11M with a small B2 value (only good for factors up to about 45 digits).

    We are currently testing at the 50-digit level.
    50 digits B1=43M=43000000.

Best bet is to run a combination of Prime95 for stage 1, then ecm6.0 for stage 2, roughly as sketched below.
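(Roughly, the combination looks like this; a sketch assembled from the exact lines that appear later in this thread:)

Code:
rem worktodo.ini entry so Prime95 runs stage 1 only (a B2 below B1 makes it skip stage 2):
ECM2=24737,2,991,1,43000000,10,700
rem then stage 2 with ecm6, resuming the residues Prime95 wrote to results.txt:
ecm6 -resume results.txt 1 43000000-150000000000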

I'm also trying a few curves at the 70-digit level on a borged machine just for kicks. Each curve takes me about 18 hours for stage 1; I'm spending about 4 hours per stage 2 on a second, faster machine with more memory.

As for practical: probably not, but it's fun, and if we do find a factor it will be a record. Also, it's the smallest k/n, so that's something, but there will always be a smallest.

  4. #4
Haha, I suspected something like that. I had always wondered whether the ECM factoring program being implemented in Java would cause trouble (read: slowdowns).

    I'm working on compiling a version of GMP-ECM 6 using MinGW and MSys, just like in the thread you posted. I'll be compiling it for the athlon-xp using the -mtune switch on GCC.

Doesn't this program run into some problems with the stack size, i.e. large numbers like the ones dealt with in SoB causing some sort of overflow? I know that was the case when I downloaded an older, precompiled version of ecm.exe; I'd like to know how to set up GCC to organize the memory allocation correctly.

By the way, are there any instructions anywhere on how to get Prime95 and ECM6 to work together, i.e. how to get Prime95 to do only stage 1 and ECM6 to pick up where Prime95 left off and do only stage 2?

    I have Prime95 setup using:

    ECM2=24737,2,991,1,43000000,4300000000,700

700 is just an arbitrary curve count, so that I don't go back to P-1 factoring of other numbers too soon. In terms of getting Prime95 to do only stage 1: is having that B1*100 limit in there (in the worktodo.ini file above) a smart thing? Shouldn't I set it to 0 or something, or else it will start doing stage 2? I'll have to wait and see.

I also want to make sure of this: are these curves chosen randomly, i.e. are we both doing different pieces of work? I'll assume yes.

    I hope this only does stage 1 ECM factoring heh.

    Update:

    Mystwalker posted a bunch of great information on how to get some of this going

    http://rieselsieve.com/forum/viewtopic.php?p=4878#4878

    Another update:

I've found that I need to set B2 to between 1 and B1 in order for Prime95 to skip stage 2. However, when I had B2 set to 0, Prime95 did not write my first curve to results.txt; instead a file 'd0000991' (apparently an intermediate save file) was written to my Prime95 directory, and it contains a bunch of gibberish. I'm not sure if I can use it or what I'm supposed to do with it!

    I'm now running the below in my worktodo.ini file:

    ECM2=24737,2,991,1,43000000,10,700

Also, I'm looking over some of the options in gmp-ecm: could someone explain what the Brent-Suyama extension is? And what a Dickson polynomial is? I'm guessing it's some special polynomial form, as there's also the option to use x^n as the polynomial for the Brent-Suyama extension.

Can we use any of these options to our advantage? How does one choose correct n values for the x^n polynomial, or the right Dickson polynomial degree?

    Another update:

    Prime95 is now writing Stage 1 residues correctly to results.txt.
I now have my laptop (P4 2.66GHz) running Prime95 stage 1 as well for the same number. As far as stage 2 goes, Mystwalker had posted in a thread somewhere that he used the flag -k 4 for gmp-ecm on a 1024MB system (which is what I have), so I'll be using -k 4 when I finally get around to crunching all these stage 2 curves.
    Last edited by SlicerAce; 06-26-2005 at 12:08 AM.

  5. #5
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
While the optimal choice of the B2 value does depend on the relative speeds of the machines used for stage 1 and stage 2, here is the command line I am using:
    Code:
    ecm6-k7.exe -dickson 12  -resume resume50-5.txt  >>ECM6A-output50-5.txt 1 43000000-150000000000
    -dickson 12 is what the readme file says is optimal for searching for 50 digit factors.
    -dickson 30 is what the readme file says is optimal for searching for 55 digit factors.
-resume points to the file containing the results.txt output lines from Prime95.
>> appends all the output to a file. A single > would write the output of one run but would overwrite it if you started the run again, accidentally or deliberately; >> preserves what is already in the file.
The 1 is the B1 value; it essentially tells ECM6 to use what is in the resume file.
43000000-150000000000 is the B2 range. Note that the 43000000 is the B1 value from Prime95 and must match what was used to create the resume file. Note also that there are other valid choices for the upper bound; the next highest would be 200000000000, for which I have no data. Careful: the readme shows that value, but it is only a rounding of 198,673,134,660, which is the effective B2 value (B2') for 150000000000. This choice gives 7771 as the expected number of curves, so each curve is 1/7771 of the "required" number of curves for these bounds. If you run with the -v option, part of the output you will get is
    Code:
    B2'=198673134660 k=2 b2=99324825600 d=1021020 d2=19 dF=92160, i0=24
    Expected number of curves to find a factor of n digits:
    20	25	30	35	40	45	50	55	60	65
    2	5	15	56	252	1321	7771	51303	372940	2933038
It is important to do at least one run with the -v switch to get these numbers, so that you can report "xxx curves of yyy curves run for 50 digits". You will note that expected curve counts are listed for other digit values too; you could, for example, run 51303 curves with these values to look for 55-digit factors. It would work, but it would be more efficient to run with higher B1 and B2 values for 55 digits if we don't find a 50-digit factor.
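(For example, the same command rerun with -v, using the file names above:)

Code:
ecm6-k7.exe -v -dickson 12 -resume resume50-5.txt >>ECM6A-output50-5.txt 1 43000000-150000000000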

This was run with -k 2 on my machine (the default). If I had used -k 4, less memory would have been used and I would not have seen any slowdown. Or I could have used a higher number for the second B2 value and gotten a different number of expected curves.
    Joe O

  6. #6
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
Originally posted by SlicerAce
As far as stage 2 goes, Mystwalker had posted in a thread somewhere that he used the flag -k 4 for gmp-ecm on a 1024MB system (which is what I have), so I'll be using -k 4 when I finally get around to crunching all these stage 2 curves.
    Actually, the -k 4 option is not necessary right now - it comes into play at the 55 digit level, when bounds (and hence memory requirements) are much higher. For the 50 digit level, computers with 256+ MB RAM should work well without it and be ~5(?)% faster.

    Apart from that, everything seems to be perfect.

  7. #7
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Mystwalker,

I've been running the stage 1's of a few 70-digit curves (B1=2.9G); later I'll transfer the stage 1 results to a Barton. I was thinking about running B2 out to 1.5T, in slices of course.

Any suggestions for command lines using 768MB and 1.5GB of memory?

Yes, I know, I'm doing it just for fun at the 70-digit level.

  8. #8
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
Well, the 55-digit-level curves I did for M1061 had B2=1.98*10^11, which took at most 700-800 MB with k=4.

Here, Alex Kruppa states that memory consumption is proportional to sqrt(B2), at least if I've understood it correctly.

    Thus, B2=1.5*10^12 should take ~2.74 times as much memory --> ~2.2GB.

    For the computer with 1.5GB RAM, halving the memory requirements could work --> k=16

    For the computer with 768MB RAM, another halving is necessary --> k=64

(Memory requirements are inversely proportional to sqrt(k), hence quadrupling the k value halves memory needs.)

Let me know whether this works (and don't forget the treefile option).
Btw.: this should take a bunch of hard disk space, maybe a gig...

As for the sense of 70-digit-level curves, I wouldn't completely discount them. After all, each one counts as a bunch of 50-digit-level curves, so please keep a record of the number of curves done.

  9. #9
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Yup,

    Joe is the keeper of all these curves etc...

On my previous attempts (curves at this level), here is the last .bat I ran, plus a note...

Code:
k7 -v -dickson 30 -resume stage1.txt 1 1550000000000-1650000000000

    655mb consumed (<-- this is total system memory consumed at its peak)

Notice I'm chopping B2 into 0.1T pieces; it seems fast, and I believe it works the same way as P-1 in this regard... but I'm open to options and opinions.

Basically, I was thinking about getting together as many B1=2.9G curves as possible and stepping the B2 value ever higher.

Code:
Using B1=1, B2=1500000000000-1550000000000, polynomial Dickson(30), sigma=2082999507464506
Step 1 took 0ms
B2'=1562843722440 k=2 b2=31426995600 d=570570 d2=17 dF=51840, i0=2628933
Expected number of curves to find a factor of n digits:
20      25      30      35      40      45      50      55      60      65
2       3       5       11      26      65      171     473     1377    4406

Mystwalker... if you write me a *.bat, I'd give it a try and see if it's faster, better, etc.

    not sure how treefile works etc.

    I have a fast 15K scsi on a controller so swapping (disk access) isn't an issue.
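(For illustration, a .bat of the kind vjs describes, continuing the 0.1T slices from the command above; a sketch only, assuming the same binary and resume file:)

Code:
k7 -v -dickson 30 -resume stage1.txt 1 1650000000000-1750000000000 >>stage2.txt
k7 -v -dickson 30 -resume stage1.txt 1 1750000000000-1850000000000 >>stage2.txt
k7 -v -dickson 30 -resume stage1.txt 1 1850000000000-1950000000000 >>stage2.txt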

  10. #10
Code:
ecm6 -dickson 12 -k 2 -n -one -redc -v -resume input_6262005.txt >>ECM6_output1.txt 1 43000000-150000000000

    is the command I just finished running on 55 curves that I completed in Prime95 with B1 = 43000000

I'm wondering if ecm is smart enough to choose the -redc mode on its own, as the README outlines:

    8. How to get the best of GMP-ECM?

    Choice of modular multiplication. The ecm program may choose between 4 kinds
    of modular arithmetic:

    (1) Montgomery's REDC algorithm at the word level (option -modmuln).
    It is quite fast for small numbers, but has quadratic asymptotic
    complexity.
    (2) classical GMP arithmetic (option -mpzmod).
    Has some overhead with respect to (1) for small sizes, but wins
    over (1) for larger sizes since it has quasi-linear asymptotic
    complexity.
    (3) Montgomery's REDC algorithm at high level (option -redc).
    This essentially replaces each division by two multiplications.
    Slower than (1) and (2) for small inputs, but better for large
    or very large inputs.
    (4) base-2 arithmetic for numbers dividing 2^n+1 or 2^n-1.
Each division has only linear time, but the multiplications are
    more expensive since they are done on larger numbers.
Or perhaps for this k/n pair -mpzmod would be the most appropriate? I'm not sure. Perhaps someone here with more mathematics experience can help me.
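(One empirical way to answer this, without knowing the internals: time a single small-B1 curve under each mode with -v and compare. A sketch, with number.txt holding the line 24737*2^991+1:)

Code:
ecm6 -v -modmuln 1e6 < number.txt
ecm6 -v -mpzmod 1e6 < number.txt
ecm6 -v -redc 1e6 < number.txt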


I've attached a copy of my log file for the run above. To properly coordinate this effort, do we all just need to report "Hey, I did x curves on it with B1=43M...", etc.? Then, when the sum of those curves is 7771 or so, we move on to new settings?

I chose a pretty large B2 value; was this incorrect? I've always seen B2=B1*100, but Joe O, you chose this really large value. Shouldn't it be within that B1 to 100*B1 range or something, since stage 2 only works if a single factor lies between B1 and B2?
Attached Files
    Last edited by SlicerAce; 06-27-2005 at 04:53 PM.

  11. #11
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
Originally posted by SlicerAce
Shouldn't it be within that B1 to 100*B1 range or something, since stage 2 only works if a single factor lies between B1 and B2?
Yes, but all "factors" can be less than B1 as well, i.e. found in stage 1. And everything can be less than B1 except one very large piece, so in that respect the bigger the B2 the better. It's just a time trade-off against when to start another curve.


The choice of B2=100*B1 was based upon the probability of finding a factor that was B1-smooth except for one prime with B1 < prime < B2, weighed against the time spent doing stage 1 and stage 2.

What this means is that at one point in time, with some client, B2=100*B1 was the optimal choice.

Now, with ecm6.0, stage 2 is much faster than it used to be. So, to spend the same time on stage 2 as people did in the past, we simply extend the B2 bound.

    At least this is the way I think of it.

Perhaps it might be time to look at this again once the Prime95 client gets to a final release. BTW, 24.12 version 3 is out.

I guess I shouldn't second-guess them, b/c they do such a great job with this client. It's also stress-tested to the extreme IMHO, debugged like crazy, and optimized to the point of making a drag racer envious. Perhaps MS should pay a little notice to the way things should be done?

  12. #12
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Originally posted by vjs
    k7 -v -dickson 30 -resume stage1.txt 1 1550000000000-1650000000000

    655mb consumed (<-- this is total system memory consumed at its peak)

    Notice, I'm chopping the B2 into 0.1T pieces it seems fast and I believe it works the same as P-1 in this regard... but I'm open to options and opinions.
I'd suggest using the -k parameter instead. I'm not sure whether the results are comparable, but my gut feeling says that splitting into blocks with -k works basically the same way as your approach.
You can simply time both methods and see which is faster.

    not sure how treefile works etc.
Just add e.g. "-treefile treefile" somewhere before the bounds. As a result, a number of files called treefile.0, treefile.1 etc. get written at the beginning of a curve and erased again at the end.

    Mystwalker... If you write me a *.bat I'd give it a try see if it's faster better etc.
    All in all, just try
Code:
k7 -v -k 16 -treefile treefile -resume stage1.txt 1 2.9e9-1.5e12
or change "-k 16" to "-k 64" for the 768MB machine.

    I have a fast 15K scsi on a controller so swapping (disk access) isn't an issue.
    Swapping is always an issue, as even the fastest HDD is still several orders of magnitude slower than system memory...

  13. #13
    Originally posted by vjs
The choice of B2=100*B1 was based upon the probability of finding a factor that was B1-smooth except for one prime with B1 < prime < B2, weighed against the time spent doing stage 1 and stage 2.
The reason I bring the subject up is that ECM is based on the same ideas as the P-1 factorization method, and Paul Zimmermann's ECMnet page recommends B2 = 100*B1.

    From http://pw1.netcom.com/~jrhowell/math/ecm.htm,

    If the above calculation (“Phase 1”) fails to find a factor, a “Phase 2” may be performed. For phase 2, we choose a second limit, called B2, which is larger than B1 (perhaps 50 times B1 or so). Phase 2 will find a factor p if p-1 has one factor between B1 and B2, and if all of the other factors of p-1 are less than or equal to B1. B2 is sometimes chosen so that the computation times for phase 1 and phase 2 are approximately the same. The total computation time is much less than would be required to perform phase 1 up to the limit B2. Phase 2 will often find factors that are missed by phase 1.
So if ECM is somewhat similar to P-1, my concern is that by choosing such large B2 values, if more than one factor exists between B1 and B2, then we won't find it (assuming ECM works like P-1 here). We'll probably need someone who actually understands the intricate details of the mathematics, though, to figure this out.


    Also, I've completed another 156 curves using:

Code:
ecm6 -dickson 12 -k 2 -nn -one -redc -resume input_6282005.txt >>ECM6_output2.txt 1 43000000-150000000000

The logfile is attached to this message. I stopped twice during the whole ordeal, but made sure to crop the input file down to the entry with the same sigma value as the one I stopped on.

    Is there an automatic way to do resuming? It sounds like treefiles may be able to do something like this, but I honestly don't know what a treefile is.
Attached Files

  14. #14
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Originally posted by SlicerAce
The reason I bring the subject up is that ECM is based on the same ideas as the P-1 factorization method, and Paul Zimmermann's ECMnet page recommends B2 = 100*B1.

    From http://pw1.netcom.com/~jrhowell/math/ecm.htm
    The computation of stage2 is now asymptotically faster, thus it is optimal to extend it in order to get the best results.
    With gmp-ecm6, you can get the optimal number of curves for certain digit limits of factors.

The default values are close to optimal when both stages get done with gmp-ecm. With Prime95 doing the first stage, the optimal size of B2 is lower than the default one.

So if ECM is somewhat similar to P-1, my concern is that by choosing such large B2 values, if more than one factor exists between B1 and B2, then we won't find it (assuming ECM works like P-1 here). We'll probably need someone who actually understands the intricate details of the mathematics, though, to figure this out.
My math skills are only enough to use gmp-ecm to find the optimal bounds. But this program (incl. the optimal curve statistics) was written by people who know the details.
If you're interested in the math behind it, the mersenneforum is a great place to browse, and questions are welcome as well.
For me, I've decided to stick to "using the knowledge" for now and wait for some free time to get into some of the math.


    Also, I've completed another 156 curves using:

ecm6 -dickson 12 -k 2 -nn -one -redc -resume input_6282005.txt >>ECM6_output2.txt 1 43000000-150000000000
    Nice work.
Btw.: you don't need to specify the polynomial (dickson 12); it's the default for this B1...

    The logfile is attached with this message. I stopped twice during the whole ordeal, but made sure to crop the input file just up to the entry with the same sigma value as the one I stopped on.

    Is there an automatic way to do resuming? It sounds like treefiles may be able to do something like this, but I honestly don't know what a treefile is.
Unfortunately, there is no automated way so far.
And chances are there never will be, but that's not so bad, because the workflow will change sometime in the near future:
gmp-ecm will gain the ability to call Prime95/mprime directly and get stage 1 from it. So, no more hassle swapping residue files, and a stage 1 will get resumed ASAP.

  15. #15
I have 532 curves that are stage 1 complete. I'll be doing stage 2 soon; however, I need to go back to P-1 factoring and finish up a range I reserved that's about to be passed up.

  16. #16
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    O.K. call me crazy

    I may as well add my efforts to the fold...

I've completed an additional 16 stage 1 curves with B1=2.9G (the 70-digit level). They take about 21 hours each on a P4 2.0GHz.

    Thus far
    3 curves at B1=1G
    21 Curves at B1=2.9G

I'm in the process of pushing B2 out with my Barton; currently B2 is around 1P for most of these curves.

Over the weekend I'm going to push it out a little farther using Mystwalker's advice.

    example of one of the cmd lines

Code:
k7 -v -dickson 30 -k 64 -treefile treefile -resume all.txt 1 1e13-2e13 >>stage2.txt

    Should consume about 600mb of memory and take about 8 hours. <--- estimate

    Not sure how far I'll get but I'll keep you posted...

I'm going by the assumption that one should spend as much time on stage 2 as on stage 1. Wow, stage 2 is fast!!!

FYI, B1's per digit level:

50: 43M
55: 110M
60: 260M
65: 850M
70: 2.9G

If I do end up running them all out to B2=3e13, here is what each curve would be worth (the expected-curves figure per digit level):

50: 70
55: 167
60: 430
65: 1125
(no listing for 70 digits in ecm6.0)

    So I guess I have about a

1 in 3.5 shot at a 50-digit factor
1 in 7.9 shot at a 55-digit factor
1 in 20 shot at a 60-digit factor
1 in 53 shot at a 65-digit factor

I think this is the way the curve probabilities work, but I'm new to ecm.

    Also I'm still not sure about why we have to do all of the 50's before the 55's etc.

  17. #17
    Originally posted by vjs
    Also I'm still not sure about why we have to do all of the 50's before the 55's etc.
First of all: I'm no ECM expert or anything, but I think I can answer this question.

    You don't have to do all the 50's before the 55's etc., but it is recommended. This is because you would need more curves on a 55-digit-level to find for example a 49-digit-factor than on a 50-digit-level.

All in all, it's just a question of efficiency.

(Sure, if you know that no factor below 50 digits exists, you can just start with a 55-digit level.)

    ciao
    Zahme

  18. #18
    Originally posted by Zahmekoses
    .... you would need more curves on a 55-digit-level to find for example a 49-digit-factor than on a 50-digit-level.
This is not true, actually; you need fewer curves, but each curve will take longer.

Try playing with the -v option in GMP-ECM and you'll find that for every digit level (45, 46, 47, 48, ...) there's an optimal B1 and B2. In practice, however, it's easier to make 5-digit steps when choosing bounds. This also makes it easier to keep track of the curves done.

When I'm searching for smaller factors, I usually let the bounds increase with each curve.

  19. #19
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Originally posted by vjs
I think this is the way the curve probabilities work, but I'm new to ecm.
    Actually, after the "required number of curves", the chance that a factor of corresponding size is there, but hasn't been found, is exp(-1) or ~37%. Thus, it can still occur that there's a missed factor. It's even possible that 24737*2^991+1 has a 20 digit factor - it's just extremely unlikely.
Using higher bounds more than substitutes for lower bounds, though, hence there is no need to continue a certain digit level past the optimal point.
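(To put numbers on that: if N is the expected-curves figure for a digit level, the chance that a factor of that size survives t curves is roughly exp(-t/N). With N=7771 for 50 digits, as in Joe O's table earlier in the thread, 2000 curves would leave about exp(-2000/7771) ≈ 77% chance that an existing 50-digit factor is still hiding; the full 7771 curves leave exp(-1) ≈ 37%, as above. These example figures are mine, not from the original posts.)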

  20. #20
I'm still chugging away at those curves... my output logfile shows 321 curves completed through stage 2, though I still have 209 stage 1 results left in the input file and another 190 stage 1 results in a different input file. It's all moving closer to completion: 720 curves total, plus whatever else I added before (see above). Man, I wish this thing would just give me a factor already and get it over with!!

  21. #21

    531 curves

    Attached is the RARed logfile of another 531 curves completed through stage 2 using ECM 6.
Attached Files

  22. #22
    Old Timer jasong's Avatar
    Join Date
    Oct 2004
    Location
    Arkansas(US)
    Posts
    1,778
    Maybe I'll just get you guys mad with my ignorance, but I'm going to try anyway.

I downloaded ElevenSmooth just for the heck of it and discovered it has optimized ecm.exe clients placed in different directories. You just open the appropriate directory and drag-and-drop to the main ElevenSmooth directory. I'm just guessing, but I'm assuming the client is built for ecm in general, and that the ElevenSmooth site is a good way to get a precompiled version.

Assuming that is correct, I'd appreciate some instructions on...

(1) Making a directory for ecm'ing this number with the appropriate files, and...

(2) Running curves so that I can post results back here.

    So if anyone can point me to a good website to get my feet wet, I'd appreciate it.

    /me goes to Google to do some searching.

  23. #23
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
This information may be old, but I thought the ElevenSmooth client still uses the ECM 5 code. Not sure here at all, and it's possible it's been updated.

    The best thing to do is use prime95 for stage1 and download the latest version of ecm6 for stage2. Perhaps I'm just out of touch with ecm lately.

This thread has the command lines for using Prime95 for stage 1, and Mystwalker has given us a few different ecm6 lines depending upon your computer's memory.

  24. #24
    Old Timer jasong's Avatar
    Join Date
    Oct 2004
    Location
    Arkansas(US)
    Posts
    1,778
I've been reading the previous posts, and even though I managed to compile an ecm.exe for my machine, I may not be able to help much. If I'm going to avoid thrashing, I can't spare more than about 200MB (maybe 250 if I don't touch my computer).

If you guys can use this, go ahead and tell me. But please don't leave me hanging; I really like the idea of participating in this project. In other words, if you can't use me, tell me; if you can use me, use me.

  25. #25
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    With 200 MB of "free" RAM, efficiency should drop somewhere near the 50 digit range, I guess.

Nevertheless, it would still be an at least nearly optimal contribution.

    If you have an Athlon, I'd suggest using ecm6.
In conjunction with Prime95, the optimal B2 bound for 50 digits (the current level for 24737*2^991+1) seems to lie at approximately 4e10. If you encounter thrashing, you could either use the -k parameter or a lower bound.
You could also try ecm on the other small k/n pairs; that way, 200MB is quite enough. I'm currently finishing the 35-digit level for all candidates with 1000 < n < 2000. For higher n's, Joe O posted the current progress in the "Small n factoring" thread, IIRC.

If you have a P4 or P-M, it would be a good option to let it run stage 1 only with Prime95. Afterwards, you could send the residues to another person, who then does stage 2 (see the sketch below).
    Stage1 needs next to no RAM, so you won't have a disadvantage there.
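(A sketch of that hand-off, using the ~4e10 bound mentioned above; file names are placeholders:)

Code:
rem On the P4/P-M: run Prime95 stage 1 only, with a worktodo.ini line like the one given earlier in this thread.
rem Then send the results.txt residues to the stage 2 machine, which runs:
ecm6 -resume results.txt 1 43000000-40000000000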
