
Thread: 10,000,000 digit tests?

  1. #41
    There is a way to do stage 1 in prime95 and stage 2 in GMP-ECM. I don't know the switches/options for this, but I have heard that it is the fastest way of testing (at least for ECM, so I think it also applies to P+1/P-1 factoring).
    Probably the simplest way of doing the two stages on different machines is to keep the default of doing both yourself, but if you have a machine with low RAM, post your stage 1 results/save file for someone else to claim and complete.

    Craig.

  2. #42
    Sieve it, baby! (Potsdam, Germany)
    AFAIK, using the prime95/gmp-ecm combination is not possible for P-1 factoring right now (well, I've read it *is* possible, but only by fumbling around in prime95's save file...), because there's only a GmpEcmHook.

    IIRC, it is possible to split the stages across different computers that both run Prime95.
    ECM residues at the end of stage 1 for small numbers (~300 digits) are fairly small (~1 KB per entry). I don't know how this scales with number size, though...

  3. #43
    I just found out that if you don't assign enough memory for stage 2 to prime95, it does only stage 1.

    Pfactor=22699,2,33500134,1,45,1.0 yields

    B1=B2=250000
    2.1% Chance of finding a factor.

    I'll tell you the size of the save file in approximately 33 hours (Celeron 2.0 GHz),
    if the program does not delete it.
    BTW, for me it's not intuitive that one needs a save file from stage 1 in order to perform stage 2. The two stages seemed kind of distinct to me, as they are looking for different factors. But I know that it's not always intuitive... If you have an easy explanation, I would like to hear it; if it is too difficult, don't bother. I believe you. (Belief!!! )
    H.

    PS: I have another machine working on 33500782, too. If you want to play too, please choose another number.
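    (A rough aside on the stage 1 / stage 2 question above: stage 2 does not start from scratch, it continues from the residue that stage 1 computed, and that residue is exactly what the save file carries. The toy Python sketch below shows the idea for the plain Pollard P-1 method; it is only an illustration, not how Prime95 actually implements things, and the SoB candidates are of course far too large for naive arithmetic like this.)

    Code:
    from math import gcd

    def primes_up_to(n):
        """Simple sieve of Eratosthenes."""
        sieve = bytearray([1]) * (n + 1)
        sieve[0:2] = b"\x00\x00"
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
        return [i for i, flag in enumerate(sieve) if flag]

    def p_minus_1(N, B1, B2):
        # Stage 1: raise 3 to every prime power <= B1, all mod N.
        x = 3
        for p in primes_up_to(B1):
            pk = p
            while pk * p <= B1:
                pk *= p
            x = pow(x, pk, N)
        g = gcd(x - 1, N)
        if 1 < g < N:
            return g, "stage 1"
        # Stage 2: reuse the stage 1 residue x (the "save file") and try one
        # extra prime q in (B1, B2] on top of it.
        for q in primes_up_to(B2):
            if q <= B1:
                continue
            g = gcd(pow(x, q, N) - 1, N)
            if 1 < g < N:
                return g, "stage 2"
        return None, "no factor with these bounds"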

  4. #44
    Originally posted by hhh
    The choice of optimal bounds is more difficult than I thought.

    My questions:
    1) Do you think it is a good idea to do stage 1 and 2 on different machines?
    Yes and no. Yes, you'll get a (very) little bit better throughput if all stage 2 work is done on machines with lots of free RAM. But it is a pain to move save files around and coordinate such work among several machines. It would be much easier to do all P-1 work on machines that have some minimum amount of memory available, say 400MB. In other words, there is a big difference in the bounds chosen for 100MB vs. 400MB, and a much smaller difference for 400MB vs. 800MB.


    If yes, the stage 1 machines will not need a lot of memory, anyway.
    But the choice of B1, B2 for optimality on the second machine depends on the memory of that machine.
    Correct. To get the optimal bounds chosen, you need to tell prime95 the amount of memory that will be available on the machine that runs stage 2.

    2c) BTW, is the formula simple enough to post here?
    No. It is real ugly.

    4) BTW, is there a way to tell prime95 to run only stage 1 or stage 2 with chosen bounds? Or any other program?
    It is hard. You would use "Pfactor=..." to compute the best bounds and then replace it with a "Pminus1=..." line where B2=B1. Pminus1 will then run just stage 1 and shouldn't delete the save file when it is done.
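    For example (assuming the Pminus1=k,b,n,c,B1,B2 worktodo syntax used by the Prime95 builds of that era; check your version's documentation before relying on it), the Pfactor line from the earlier post would first report its chosen bounds, and could then be replaced by a stage-1-only entry with B2 set equal to B1:

    Code:
    Pfactor=22699,2,33500134,1,45,1.0
    Pminus1=22699,2,33500134,1,250000,250000

    The 250000 here is just the B1 value reported earlier in the thread.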

    As for GMP-ECM: it's a great program, but not of much use here. The numbers are way too big for its superior stage 2 to handle.


    Anyway, your first chore is to get cracking on sieving. While that completes you can decide how you'll handle the P-1 work.

  5. #45
    vjs (Moderator)
    Back on the topic of sieving and how it actually relates to the 10M digit test queue...

    We have sieved almost all the ranges <40T of 991<n<50M; there are a few straggler ranges (pointing my finger at my chest) that have not yet been completed.

    I'll release more stats once all these ranges below 40T have been completed. Thus far we have reduced the k/n pair count to ~28700 tests per 1M at the 33-34M level.

    To be more specific, here is an example of factors found through sieving from 25T to <40T (approx. 80% done to 40T) for k=22699:

    (p=25T): tests remaining within that range with the sieve at p=25T
    (Current): tests remaining within that range at the current sieve point (approx. 80% done between 25T and 40T)

    Code:
    k	n=xM	p=25T	Current	Factors
    22699	33	1382	1374	8
    22699	34	1367	1353	14
    22699	35	1416	1402	14
    Based upon factor density, a few other considerations, and a very rough approximation, we would expect to find 5-10 factors for the 0.5M range we are considering if we sieve to ~70T (an additional 30T).

    Based upon the previous post's 2.1% factor chance, 33 hours per P-1 test, and 7 factors to find:

    it would take us roughly 500 days to find those (7) factors through P-1.

    So if my rough math is correct (which I think it is), it doesn't really make a lot of sense to P-1 at this point.

    That exact same machine could sieve 15T (half the range) in those 500 days (30G per day), never mind all the other factors we would find for n>20M, plus the missed factors <20M.
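    (A quick back-of-the-envelope check of those two numbers, in Python, using the figures quoted above: 2.1% chance per P-1 test, 33 hours per test, 7 factors wanted, and 30G of sieving per day.)

    Code:
    # Rough sanity check of the ~500 day figures above.
    chance_per_test = 0.021        # P-1 success chance per test
    hours_per_test  = 33.0         # Celeron 2.0 GHz, from the earlier post
    factors_wanted  = 7

    tests_needed = factors_wanted / chance_per_test        # ~333 tests
    days_for_p1  = tests_needed * hours_per_test / 24.0    # ~460 days, i.e. roughly 500

    sieve_rate = 30.0              # G per day on the same machine
    days_to_sieve_15T = 15_000 / sieve_rate                # 15T = 15000G -> 500 days

    print(round(days_for_p1), round(days_to_sieve_15T))    # 458 500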

    Please don't get me wrong, I'm still 100% in favor of the queue for n = 10,000,000 digits and the one-k approach; I'm just not certain that P-1'ing yet is the best thing to do. Start the queue regardless; it's just more beneficial to sieve until, say, p=75T before we start P-1'ing these numbers. "We eliminate the same number of tests", and "not just the smooth factors".

    The point is that even though sieving is not specific to a particular k or n-range, at these low p ranges the factor density is so high that we eliminate enough tests in this range just by factor density and probability to counteract the benefit of a focused P-1 effort. (Hope people follow this / do people follow this???)

    I think the major point is we just start the queue and keep sieving; perhaps P-1 only the first 5 n's in the queue to fairly high bounds? But no further ahead than we test.

  6. #46
    Nuri (I love 67607)
    I agree with vjs' major point.

    Let's simply P-1 test five (maybe ten) k/n pairs and fill up the queue with the unfactored ones after P-1.

    If there is really some demand for 10M+ PRP testing and, say, people actually start (and it looks like they'll finish) some tests on the 10M+ front, simply run another batch (maybe a larger number of k/n pairs) of P-1 tests. And so on. There is also the benefit of time passing between each feeding of the queue, meaning the sieve will progress further each time and there will be fewer tests remaining for P-1.

    If there is not much demand, this is a premature discussion anyway... We'll be at a much better point in terms of sieve depth, client capabilities, and computing power per machine in the future. So, why not simply wait for the right time to come?

  7. #47
    vjs (Moderator)
    Yup,

    The point I'm trying to make here is that reserving ranges like

    33500000-33501000 vjs [reserved] B1=10000 B2=100000

    just doesn't make sense...

    As Nuri and I pointed out, some factoring should be done on numbers about to be tested as the queue advances, but only with a very small lead, say 5-10 tests.

    The other problem I see is the same one we are having with the 31337 queue, where people reserve and then drop. The test isn't reassigned to the queue; it's added to the dropped-tests queue... and not back to the 31337 queue. Then 31337 doesn't pull the minimum untested n, it pulls the next k/n pair in line.

    This could be a major problem: if we factor 5 tests, how long will it take for people to drop 5 tests and for the queue to pull an unfactored one?

    Also, I don't think manually maintaining the dropped tests and the 10M+ queue is viable. It's not being done for 31337...

    Perhaps the better option is just to change the 31337 queue to hand out 33.5M numbers as opposed to 13.4M ones. If people are interested, they can factor the numbers they get themselves and report back that a number was factored if they drop it.

    Regardless, if people want to try factoring a few, here are the first few n for k=22699:

    33500134
    33500710
    33500782
    33501358
    33502510
    33503014
    33503518
    33504814
    33504958
    33506254
    33506614
    33507334
    33508198
    33508774
    33508990
    33509278
    33510574

    If anyone decides to reserve one of these numbers, post it so we're not factoring twice. Also post your experience with the factoring: memory and time taken, and on what machine.

  8. #48
    I hereby solemnly announce that I am factoring the following right now:
    33500134
    33500710
    33500782

    with B1=B2=250000, but I will probably not get a save file because I used Pfactor and not Pminus1; and I am not willing to restart because it's already 33% finished. H.

  9. #49
    Nice thread!!

    When can we start Proth testing?

  10. #50
    I thought about everything and came to the following conclusions.

    1) You are right about cooking all this on a somewhat lower flame. We should keep cool about everything. As for myself, I was a little bit too excited lately.

    2) The optimization thing is in fact no great problem. If we split the work (I still think it is a good idea), it is very easy to get better efficiency, even if it is not optimal. Let me explain.

    What happens if we don't split the work? Machine 1 (with lots of memory) will spend t1 seconds in stage 1 and t2 seconds in stage 2, with bounds b1 and b2.
    In the same time, machine 2 (with little memory) will spend T1 seconds in stage 1, with bound B1.
    b1<B1 and t1<T1.

    Now we split. Machine 2 does the same as before. Machine 1 picks up machine 2's stage 1 result and spends less time than t2 to perform stage 2 with bounds B1 and b2. One test has been done with better bounds, and hypothetically machine 1 has gained some time to perform a stage 1 with better bounds itself (which will not happen in practice).
    Or one could increase the b2 bound so that stage 2 takes the same time as before.
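    (To put invented numbers on this argument: the little Python sketch below uses a crude, purely hypothetical cost model in which stage 1 time grows with B1 and stage 2 time grows with the size of the B1..B2 range. The rates and bounds are made up; it only illustrates why machine 1 gains time when machine 2 hands it a deeper stage 1.)

    Code:
    # Hypothetical cost model with invented constants; only to illustrate
    # the t1/t2/T1 argument above, not measured Prime95 timings.
    S1_RATE = 1.0e-3   # hours per unit of B1 (made up)
    S2_RATE = 1.0e-4   # hours per unit of (B2 - B1) (made up)

    def stage1_hours(B1):
        return S1_RATE * B1

    def stage2_hours(B1, B2):
        return S2_RATE * max(B2 - B1, 0)

    b1, b2 = 150_000, 2_000_000    # bounds machine 1 would pick on its own
    B1     = 250_000               # deeper stage 1 bound from machine 2

    unsplit_machine1 = stage1_hours(b1) + stage2_hours(b1, b2)   # both stages itself
    split_machine1   = stage2_hours(B1, b2)                      # stage 2 only, from B1

    print(unsplit_machine1, split_machine1)
    # The difference is time machine 1 could spend raising b2 instead.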

    So here is what I will do:
    I will start stage 1 tests with the optimal bound B1=250000 (B2=B1) and produce save files. I have 3 machines at 2 GHz, so I will produce about 15 in the first week. Anybody who wants to do stage 2 should post or mail me, and I will send them as many files as they want, as long as stocks last.

    See you soon, H.

  11. #51
    Just finished
    33500782 B1=250000, without a save file. I suggest we leave it like that. I reserve a whole bunch:
    33501358
    33502510
    33503014
    33503518
    33504814
    33504958

    Joh14vers6: be patient. Rome wasn't built in a day either. There is still work to do, and on the other hand, things advance very fast from time to time. I have no idea. 2 months?

    Other question: is there a lawyer in the forum who could think a little bit about the money side? It's not very probable that we will win the prize, but we are doing this in order to get it, so we have to make provisions.
    H.

  12. #52
    Update:
    33500134 done, no savefile
    33500710 done, savefile available
    33500782 done, no savefile
    33501358 and the following are reserved:
    33502510
    33503014
    33503518
    33504814
    33504958

    Mail me to get a file (4MB) and do stage 2. H.

  13. #53
    vjs (Moderator)
    How long are these taking you and with what type of machine?
    Memory consumption?

  14. #54
    IronBits (Morrisville, NC)
    They are NOT machines!!! They need no oil or grease

  15. #55
    Frodo42 (Jutland, Denmark)
    Originally posted by IronBits
    They are NOT machines!!! They need no oil or grease
    Well that depends

    http://hardware.slashdot.org/hardwar....shtml?tid=222

  16. #56
    would it ever work if you pulled it out???

    Does mineral oil not conduct enough electricity to kill the components??

    I may have to try this...

    -Jeff
    Distributed Hold'em


  17. #57
    Originally posted by IronBits
    They are NOT machines!!! They need no oil or grease
    Nature's six simple machines don't need oil or grease.
    They may help some of them work better, but to say that it cannot be a machine if it doesn't require oil or grease is incorrect, IMHO.

    Merriam-Webster supports this:

    Main Entry: ma·chine
    Pronunciation: m&-'shEn
    Function: noun
    Usage: often attributive
    Etymology: Middle French, from Latin machina, from Greek mEchanE (Doric dialect machana), from mEchos means, expedient -- more at MAY
    1 a archaic : a constructed thing whether material or immaterial b : CONVEYANCE, VEHICLE; especially : AUTOMOBILE c archaic : a military engine d : any of various apparatuses formerly used to produce stage effects e (1) : an assemblage of parts that transmit forces, motion, and energy one to another in a predetermined manner (2) : an instrument (as a lever) designed to transmit or modify the application of power, force, or motion f : a mechanically, electrically, or electronically operated device for performing a task <a calculating machine> <a card-sorting machine> g : a coin-operated device <a cigarette machine> h : MACHINERY -- used with the or in plural
    2 a : a living organism or one of its functional systems b : a person or organization that resembles a machine (as in being methodical, tireless, or unemotional) c (1) : a combination of persons acting together for a common end along with the agencies they use (2) : a highly organized political group under the leadership of a boss or small clique
    3 : a literary device or contrivance introduced for dramatic effect

    Besides, I am sure there are modern lubricants in certain components such as the hard drive, power supply, fans, etc.
    Last edited by kelman66; 05-13-2005 at 10:50 PM.

  18. #58
    IronBits
    <--- machine (forward motion)

  19. #59
    vjs (Moderator)

  20. #60
    Update:
    33500134 done, no savefile
    33500710 done, savefile available
    33500782 done, no savefile
    33501358 done, savefile available
    33502510 done, savefile available
    33503014 done, savefile available
    33503518 reserved
    33504814 reserved
    33504958 reserved

    I run the tests with B1=B2=250000 on a Celeron 2.0 GHz and it takes a long time. One and a half days? Two? Something like that. Memory consumption seems to be about 70MB in the beginning, 35MB later. Or maybe I got something wrong. Doesn't matter. The save files are 4MB.
    As I said, you can mail me to get them and run some stage 2s, just for fun.
    I propose that we stop with the reserving and I just reserve everything, because there is no need for anybody else to run stage 1.

    Questions: how is the server set up for accepting residues sent by the client? Does it still accept tests that it didn't assign, or has this been changed after the zombie client (you remember, the one which messed up all the stats)?


    EDIT: update
    Last edited by hhh; 05-15-2005 at 07:51 PM.

  21. #61
    vjs (Moderator)
    AFAIK,

    The server is not accepting residues for tests it doesn't assign. A queue will have to be made for handing out these tests; modifying 31337 is my suggestion. Of course, if they are individually tested and a prime is found, that's another matter.

    As for factors with n>20M, they can always be submitted via factrange@yahoo.com.

    Hopefully we can convince the powers that be to let the server accept n>20M soon.
    I believe the reason this is not done yet is server load and space.
    Since about half of the k/n pairs between 20M and 50M have already been factored out, the density is a lot lower, so accepting factors for 20M<n<50M is less of an issue.

    Currently the server is configured so that factors with p<1T are ignored and p<25T are not updated in results.txt. This could be changed "soon", since everything below p=40T has been double-checked for n<20M and sieved. The chance of finding a factor with p<40T will be almost 0% once the second-pass sieve is complete.

    So the server could be modified so that it won't accept p<40T and won't update the results file for p<40T either. This would offset the additional load created by accepting n up to 50M. The cutoff could also be advanced further as we double-check above p=40T.
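    (Purely as an illustration of the rules described above, not the actual server code, this is how the current vs. proposed acceptance policy could be written down, in Python:)

    Code:
    # Illustrative sketch only; not the real sob server logic.
    T = 10**12   # factor depth "1T"
    M = 10**6    # exponent "1M"

    def accept_factor_current(p, n):
        """Current behaviour as described above."""
        if n > 20 * M:
            return "rejected (n > 20M not accepted yet)"
        if p < 1 * T:
            return "ignored"
        if p < 25 * T:
            return "accepted, but not reflected in results.txt"
        return "accepted"

    def accept_factor_proposed(p, n):
        """Proposed change: cutoff raised to 40T, n accepted up to 50M."""
        if n > 50 * M:
            return "rejected"
        if p < 40 * T:
            return "ignored"
        return "accepted"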

  22. #62
    Update:
    33500134 done, no savefile
    33500710 done, savefile available
    33500782 done, no savefile
    33501358 done, savefile available
    33502510 done, savefile available
    33503014 done, savefile available
    33503518 done, savefile available
    33504814 done, savefile available
    33504958 done, savefile available
    next ones in work.

    Mail me, guys, come on! Just for fun, you can run one of these tests!

  23. #63
    Nah, OK, I will take one.

    I will send you a mail address via PM for one of these 4MB files.

    ciao
    Zahme
    Last edited by Zahmekoses; 05-19-2005 at 04:40 AM.

  24. #64
    Update:
    33500134  no save
    33500710  save, Zahmekoses
    33500782  no save
    33501358  save
    33502510  save
    33503014  save
    33503518  save
    33504814  save
    33504958  save
    33506254  save
    33506614  save
    33507334  save, factor: 282939656582239
    33508198  save
    33508774  save, factor: 68464761281687
    33508990  save
    Next ones: reserved

    Two out of 13, that's quite impressive for a start. And I only did stage 1! It could be a statistical anomaly, though.
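    (For what it's worth, a quick binomial check in Python of how anomalous this is, taking the ~2.1% per-test chance quoted earlier, which was for both stages, so stage 1 alone should really be lower:)

    Code:
    # Chance of seeing 2 or more factors in 13 P-1 tests at ~2.1% per test.
    from math import comb

    p, n = 0.021, 13
    p_at_most_one = comb(n, 0) * (1 - p)**n + comb(n, 1) * p * (1 - p)**(n - 1)
    print(1 - p_at_most_one)   # about 0.03, so a lucky (or anomalous) run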

    So, once again: whoever wants to do some work here can do stage 2 directly.
    I will continue to do stage 1.

    Nuri, vjs: if you have access to newer dat files than I have, please PM me a list with the next n's to do. I extracted these from the last released dat file, but I would hate to do useless work.
    As for the factors, shall I mail them in the future, or can you just collect them from the forum?

    As for me, I think in one month or so I will have tested about 50 n's, and we can seriously consider starting the queue.


    H.

    EDIT: typo.

  25. #65
    vjs (Moderator)
    hhh,

    In all reality, how many 31337 tests have been done?

    I wouldn't suggest going any further with the factoring until other issues are sorted out, like the queue itself. You have done 13 with some good success, so that's enough tests for a start. You could post the factors here or e-mail them to factrange@yahoo.com.

    It would be cool if you did both.

    I've proposed the queue and a bunch of other topics to Louie; he and Dave are reviewing a bunch of ideas this month. I think we should wait and see what happens. Also, Joe and I were planning on an updated dat in a week or two. Might be good to hold off a bit.

  26. #66
    Nuri (I love 67607)
    Originally posted by vjs
    In all reality how many 31337 tests have been done?
    Just for the record. As far as I know, only three.

    Two from the 31337 account itself, and one was done by MikeH under his own ID.

    Originally posted by MikeH
    Looks like I was the only one to continue with a 31337 big test (under my own ID).

    28433*2^13466977+1 completed earlier today

    Knowing my luck it's already been factored out by sieving (I haven't checked).

  27. #67
    Originally posted by Nuri
    Just for the record. As far as I know, only three.

    Two from the 31337 account itself, and one was done by MikeH under his own ID.
    See also this posting. I did 7 and Kristovt did 1.

  28. #68
    Nuri (I love 67607)
    So, it's at least 11 in total. Cool...

    BTW, just out of curiosity, shouldn't the "PRP tests issued and saved by n for 12 k (n<20M)" table show these tests as issued?

    Maybe Mike simply didn't take that info into account. Anyway, no big deal.

  29. #69
    vjs (Moderator)
    Nuri,

    As far as I know, the way the server is set up, unless those tests were assigned under the 31337 account and the residues returned by that account, the test doesn't really show as complete. I'm not sure if the residue is ignored or kept, but I know it messes with the queue system to hack the registry with "random/unassigned" tests.

  30. #70
    Update again
    33500134  no save
    33500710  save, Zahmekoses
    33500782  no save
    33501358  save
    33502510  save
    33503014  save
    33503518  save
    33504814  save
    33504958  save
    33506254  save
    33506614  save
    33507334  save, factor: 282939656582239
    33508198  save
    33508774  save, factor: 68464761281687
    33508990  save
    33509278  save
    33510574  save
    33510862  save, factor: 305440004587457

    And again a new factor. I think even with a queue it will stay a half-manual process, as it would be just stupid not to run a stage 2. So whoever wants to run a test needs my save files (which I could also send to somebody who could host them; they would only be downloaded once).
    These factors would have been found by sieving to 2^50, though.
    So far I'm not really saving any work.

    I will send the factors to the factrange account, with the subject:
    factors from largest prime queue

    and hope that it passes your spam filter. If not, please post what subject is needed.

    Zahmekoses: any news?

    Yours H.

    PS: Don't mind that I am continuing; I will stop at some point, don't worry.
    Last edited by hhh; 05-27-2005 at 07:54 AM.

  31. #71
    vjs (Moderator)
    Hey hhh,

    We don't have a spam filter as such; keywords simply direct your message into a folder. It isn't important, however: if it's a new message we will read it, as long as the subject line matches anything SoB-related.

  32. #72
    Death (Kyiv, Ukraine)

    I'm into it.

    Well, I have no time to read the entire topic, but I'm definitely into it.

    If there are some short directions on what to do, I'll get on board.
    wbr, Me. Dead J. Dona \


  33. #73
    Death,
    we are still preparing the queue; we decided on k=22699, and I ran some stage 1, knocking out some factors that would have been found by sieving later anyway.
    If you want, you can PM me an email address that can take 4MB and I will send you a save file so that you can run a stage 2 on a large-memory machine.
    If you don't want to, that's fine too; the new queue is for fun and a little bit for publicity anyway.
    I will have to check, though, whether it really works.
    Yours H.

  34. #74
    OK, the save file got done with a B2 of ~B1*15 (the optimal one that prime95 gave me).

    No factor found, sadly :/

    It got done on an Athlon XP 2200+ with 768 MB RAM (but I let prime95 use only 608 MB of it).

    CPU calculation time was about 42h; real time was about 4-5 days (I can't run my PC 24/7).

    ciao
    Zahme

    *grabs the next savefile*

  35. #75
    Time to push up this thread again.
    Update:

    33500134  no save
    33500710  Zahmekoses, done, no factor
    33500782  no save
    33501358  Zahmekoses, done, no factor
    33502510  save, Zahmekoses
    33503014  save
    33503518  save
    33504814  save
    33504958  save
    33506254  save
    33506614  save
    33507334  save, factor: 282939656582239
    33508198  save
    33508774  save, factor: 68464761281687
    33508990  save
    33509278  save
    33510574  save
    33510862  save, factor: 305440004587457
    33510934  save

    Zahmekoses is going to continue like this; but if anybody else wants to help finish up these stage 2 runs...
    H.

  36. #76
    Zahmekoses, who BTW is a really nice guy, has to stop his efforts due to personal reasons, but until now has done very well:

    [Fri Jun 03 18:57:06 2005]
    22699*2^33500710+1 completed P-1, B1=250000, B2=3795000, Wa1: 6296B762
    [Sat Jun 11 17:01:11 2005]
    22699*2^33501358+1 completed P-1, B1=250000, B2=3795000, Wa1: 62E9B77A
    [Tue Jun 14 20:02:04 2005]
    22699*2^33502510+1 completed P-1, B1=250000, B2=3795000, Wa1: 62F9B703
    [Mon Jun 20 21:54:02 2005]
    22699*2^33503014+1 completed P-1, B1=250000, B2=3795000, Wa1: 6286B702
    [Thu Jun 23 11:10:48 2005]
    22699*2^33503518+1 completed P-1, B1=250000, B2=3795000, Wa1: 62DDB701
    [Fri Jul 01 12:24:51 2005]
    22699*2^33504814+1 completed P-1, B1=250000, B2=3795000, Wa1: 62E8B711
    [Sun Jul 03 15:43:03 2005]
    22699*2^33504958+1 completed P-1, B1=250000, B2=3795000, Wa1: 62E4B72B

    In addition to this, there are the two tests without a save file, which can be put into the queue immediately, too:
    33500134nosave
    33500782nosave

    So we have 9 tests ready, perhaps enough to put them into the queue?

    If anybody wants to run more stage 2s, I will be pleased to send them some save files.
    H.

  37. #77
    Finally, the following k/n pairs are done factoring up to 250,000 / 4,000,000:

    k=22699, n=

    33500134
    33500710
    33500782
    33501358
    33502510
    33503014
    33503518
    33504814
    33504958
    33506254
    33506614
    33508198
    33508990
    33509278
    33510574
    33510934

    This means we have 16 candidates for the largest prime to feed a largest-prime queue. The enthusiasm was big back when the idea came up; I don't know if it is still like that. I did my best.

    Happy crunching! H.

  38. #78
    I would still like the queue to be populated. How many factors were found, eliminating tests from this bunch?

  39. #79
    Originally posted by Keroberts1
    I would still like the queue to be populated. How many factors were found, eliminating tests from this bunch?
    19 tests - 3 factors = 16 tests remain. All factors were found by stage 1 (B1=250000) and are below 2^50. You can see them above.

    H.
