
Thread: 10,000,000 digit tests?

  1. #1

    10,000,000 digit tests?

    Recently VJS, Joe O and several others have been working on creating a .dat file for n values from 991 to 50,000,000. They have already eliminated a significant portion of tests, and I was wondering, now that these have been sieved to a reasonable depth, whether it would be possible to create an account on which 10,000,000+ digit tests could be handed out. This would of course mean that we would be competing with GIMPS for the first 10,000,000 digit prime and would give us a chance at the $100,000 prize. I believe this could attract a lot more attention to our project, and it would be cool to find a first-place record prime. Anyone who has an opinion on this, please contribute. I'd like to know if I'm the only user who has thought about this.

  2. #2
    I probably would not participate.. but I have no objections..

    I think you would receive 2 types of objections..

    the 'too hard to set up' type
    and the 'inefficient use of resources' type


    What sort of n value are we looking at for 10 million digits? (I'm assuming < 50M)

    I can see no practical reason why the largest-prime queue cannot be cleared (reallocated) and populated with suitable values. (It just takes time/effort.)

    It would also probably require a new "QQQ" directive.


    Would I be correct in saying that ANY prime found for a particular k-value will eliminate that k-value?
    i.e., are we in fact trying to find the smallest prime, or just any prime?

    As for inefficient.. that is really subjective in terms of what you are trying to achieve.

    I would suggest that n<50M has not been sieved enough yet.. I believe it is only complete up to p=25M, whereas I would expect p=100M would be more suitable.


    ShoeLace

  3. #3
    It's actually sieved past 25 BILLION, and that is where the majority of the factors are, since that is where factor density is highest. Also, most of the k values will eventually reach n=33.2 million (where the digit size passes 10,000,000). And if I'm not mistaken, the prize from the EFF should attract some new users; it has for GIMPS, that's for sure. I believe it would be easy to populate the largest-prime queue with 20 or so values and see if any of them are picked up. If it turns out to be a popular option, then perhaps include the option in the new client as a button to press to get a possible largest-prime / money-winning prime. Of course the project coordinators would have to decide on some sort of prize money distribution plan (unless they just wanted to let the user who finds it keep it all). Most likely it would be half to project coordinators and half to that particular user.
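
    (As a quick check in Python: the digit count of k*2^n+1 is floor(n*log10(2) + log10(k)) + 1, so the 10,000,000-digit line sits just above n=33.2M for every k in the project.)

        import math

        def digits(k, n):
            # decimal digits of k*2^n + 1 (the "+1" never changes the count at this size)
            return math.floor(n * math.log10(2) + math.log10(k)) + 1

        for k in (4847, 19249, 22699, 67607):
            # smallest n giving at least 10,000,000 digits for this k
            n = math.ceil((10**7 - 1 - math.log10(k)) / math.log10(2))
            print(k, n, digits(k, n))   # n comes out just above 33.2M in each case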

  4. #4
    Okay, 25 billion.. that's 25,000G, yes?

    This has probably been discussed elsewhere, but:
    a. does the current SoB client even handle n=33M?
    b. if so, what is the expected completion time? Believe it or not, roughly 10x a 9M test:

    9M takes 4-5 days.. 33M will be nearly 10 weeks.

    And how many other things could your PC do in that time..?!?!



    Not dissuading you, just presenting evidence.
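
    (A rough back-of-envelope in Python, assuming PRP time grows roughly as n^2, since a test does ~n squarings of an ~n-bit number; this ignores log factors from the FFT:)

        days_9m = 4.5                        # quoted: an n=9M test takes 4-5 days
        ratio = (33.2e6 / 9e6) ** 2          # ~13.6x from pure n^2 scaling
        print(days_9m * ratio / 7, "weeks")  # ~8.7 weeks; "nearly 10 weeks" above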

  5. #5
    Well, the only thing I'm trying to get at is that these tests will (most likely) have to be done eventually, and the idea of making some money off the effort is very attractive to a lot of people. It would make it possible for us to increase membership dramatically. Also, record primes are cool, and a first-place record would be super cool. That's why the 31337 acct. was created. Sure, it'll take a long time for the tests to be done, but if people wanna do them, then that's good for them. I doubt it'll dramatically hurt our regular user base; I actually believe it'll mostly just attract new users, such as those who have been running the same option with GIMPS. The best reason why it is a good idea, though, is because it offers users more options. People like to have some control over their part in a DC project. This lets people choose their work, and it makes it more fun.

  6. #6
    Sounds interesting!

  7. #7
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Chances are definitely lower for 10M+ digit primes, as

    1) prime density drops with increasing N
    2) sieving is not that deep (though this can be changed)
    3) regarding primes/time, 10M+ digit tests take longer, of course

    But I think if there are people who are willing to test 10M+ digit numbers, they should get a chance to do so. As Keroberts said, it's quite likely that we have to go there for most of the k's anyway.

    On a side note, when it comes to prize distribution, one should really consider GIMPS' Monetary Awards to avoid clashes.

  8. #8
    Frodo42
    Well, I think I would put one or two 3 GHz P4s on it if a queue like this were opened.

    It would probably be a good idea to do quite a heavy P-1 test on each number before PRP'ing it.

  9. #9
    Nuri
    One quick and dirty way that comes to my mind to handle that is:

    - dump the first hundred (thousand?) k/n pairs above the 10M+ digit limit into the database

    - change the usernameQQQlargest-prime directive (change the queue for largest-prime, I mean) to handle those k/n pairs instead of the n=13.4M pairs

    - see if anybody actually finishes those tests

    - if so, cool. Still, finishing 1,000 of those tests will take a really long time. Even if there is a gold rush to those 10M+ tests (which I don't expect for the next couple of years), simply add more tests to the queue.

  10. #10
    vjs (Moderator)
    I've thought about proposing a high-n queue to Louie. We could use one of the lowest-weight k's, like 67607, and start around n=34M? We have already sieved these numbers somewhat, but most important would be a deep P-1.

    Doing the lowest-weight k would be the way to go, since it's the least likely to produce a prime before we get to those numbers.

    I'm not sure if now is the time, however. It would probably be a better idea to focus the P4s on second-pass, trying to catch a missed prime and raise the floor, while promoting high-n sieving on non-P4s right now.

    On the other hand, the 31337 queue (the n=13.4M queue) should definitely be removed; changing it to the 10M+ digit numbers would be cool.

  11. #11
    vjs (Moderator)
    Wow, looks like Nuri writes a lot faster than I do...

    I've given it some extra thought, with numbers etc...

    Let's assume that we first want to have the largest prime on record and we want it to be above 10M digits... also, I wouldn't expect the 10M-digit prime to come at exactly 10M digits, but probably a few more. I propose we start at n=35M, a nice round number.

    Looking at what we have sieved so far for the 35M<n<36M range: originally there were 2731 tests for k=67607 within this range. However, with the high-n sieve (991<n<50M) effort we have managed to reduce this number to 1217 tests... less than half, by only sieving to about 35T (p=2^45).

    Also, we should expect to eliminate roughly 75 tests per doubling of p, so if we could sieve the high-n dat to roughly 2^48, or about 300T, before we start, we could reduce this number to <1000 tests easily. This really isn't very much effort when we start thinking about numbers this large.
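
    (A quick Python check of that arithmetic, under the stated assumption of ~75 eliminations per doubling:)

        tests = 1217                 # k=67607 pairs left in 35M<n<36M at p=2^45
        for doubling in range(3):    # 2^45 -> 2^48 (~281T, "roughly 300T")
            tests -= 75              # ~75 eliminations per doubling of sieve depth
        print(tests)                 # 992, i.e. "<1000 tests easily"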

    I also agree that a deep P-1 would be required prior to testing.

    I know this is starting to sound like a high-n sieve/P-1/ECM promotion, but I just think it's too soon to start PRP'ing that high without wasting effort.

    Working the numbers again: you should be able to eliminate one of these k/n pairs with 500G of sieving with the current high-n dat. That would basically take 2 weeks on a machine that would take well over 1 month to test one number this size.

    The task seems daunting... regardless, I definitely support changing the 31337 queue over to these n=35M numbers. I think it could bring some new faces to SoB, and I'm always in favor of letting people do what they wish with their computers.

    Also, a tiered payment bracket for the prize would be a good idea. I'd actually suggest that we set aside a portion of the prize for optimizations in the client etc., although I think at least 1/3 of the prize should go to the user.

  12. #12
    Another thought.. with tests of this size (timewise):

    some method of notifying users that their test is obsolete because a factor was found could prove quite crucial.

    The amount of time that could be wasted/saved could become significant.

    Which then suggests either some sort of manual process to notify these users OR waiting for v3, where I believe this feature was an option.


    Shoe Lace

  13. #13
    Originally posted by Keroberts1
    Of course the project coordinators would have to decide on some sort of prize money distribution plan (unless they just wanted to let the user who finds it keep it all). Most likely it would be half to project coordinators and half to that particular user.
    Originally posted by Mystwalker
    On a side note, when it comes to prize distribution, one should really consider GIMPS' Monetary Awards to avoid clashes.
    I'm glad you brought that up, Mystwalker. Just so everyone knows, we use George's code; in fact, he does a lot of work to make sure we have the latest and the greatest. The terms of the code release on the website (and the only reason that the Debian GNU/Linux project considers the software to be non-free) are that if you use the code, you are subject to the same rules as the prime95 users. Thus, $50k would not be going to you, and unless GIMPS has already become a non-profit organization, you won't get your $50k until after that happens.

    Don't get me wrong, I think that wiping the current 31337 queue and replacing it with a 10M+ digit queue is a much better idea than leaving it as is, but this will not be our major draw. It may be attractive to some, but prime95 has found quite a few primes recently; they feel another one coming at 10M+ digits, and their tests are much shorter. I think it should be an option if it's easy to implement, but I don't expect it to be a huge draw, especially since we are currently at 1/4 of the n at which we would be testing. @Shoelace: Yes, any prime would eliminate that k, and it is not our current goal to find the smallest prime for each k. However, it is just as likely for us to find a prime in a test at n=8M as in a test at n=33M; the n=8M test will just go faster.

  14. #14
    The license for using the GIMPS source code says that you must follow the GIMPS prize rules if you find a MERSENNE prime. You are looking for Proth primes, so the $100,000 award would be divided however you decide.

    If you look for 10 million digit primes, try the small k value first - it may use a smaller FFT size.
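
    (A sketch in Python of one plausible reason, my own reasoning rather than anything George stated: the transform has to carry roughly n + log2(k) bits, and a larger k can push the total past an FFT-length breakpoint:)

        import math
        n = 33_500_000
        for k in (19249, 22699, 67607):
            # total bits the FFT must represent for k*2^n+1
            print(k, round(n + math.log2(k)))  # 67607 carries ~2 more bits than 19249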

  15. #15
    Mystwalker
    Although it may be legally OK to disregard GIMPS' awards, I don't think it's morally OK. You've put a lot of work into this code and even optimized it for Proth/Riesel numbers. For me, it would be highly unfair for a "sister project" (which depends on this code) to collect the reward...

    However, it is just as likely for us to find a prime in a test at n=8m and a test at n=33m, the n=8m test will just go faster.
    As prime density decreases with increasing n, it's more likely that an n=8M test will yield a prime...
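
    (A minimal sketch in Python, assuming the standard heuristic that a candidate k*2^n+1 is prime with probability proportional to 1/ln(k*2^n), i.e. roughly 1/(n ln 2):)

        import math

        def rel_chance(n):
            # relative chance a candidate at this n is prime, up to a constant
            return 1.0 / (n * math.log(2))

        print(rel_chance(8e6) / rel_chance(33e6))   # ~4.1x in favor of n=8M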

  16. #16
    Originally posted by Mystwalker
    Although it may be legally OK to disregard GIMPS' awards, I don't think it's morally OK. You've put a lot of work into this code and even optimized it for Proth/Riesel numbers. For me, it would be highly unfair for a "sister project" (which depends on this code) to collect the reward
    Nothing like money to make life complicated. The source license is that way to promote its usage. SoB, Primeform, etc. don't need to worry about GIMPS suing them if a user finds a big prime. There will be no hard feelings here no matter how SoB decides to split the award. If you feel morally obligated to share some of the prize money, please make a donation to charity instead.

  17. #17
    Heya! Glad to see you stopped by. I guess I didn't have as close a reading of the terms as I thought. I'm still very disappointed that the prime-net Debian package was considered non-free because of that simple restriction (though the fact that the old DPL pretty much killed the package, because of what I expect was a dislike for non-free packages, is the most disturbing part). I prefer that limitation to someone pretty much taking the credit for the discovery. My hope was that SoB would fall under (or at least agree to use) the prize distribution that you mention, because I think it is a great way to divide the money. Now we just have to convince some people to come up with some mathematical breakthroughs by dangling money in front of their faces...

    Also, thanks, Mystwalker, for checking that prime density information. I knew that it was skewed somewhat from even, but I didn't have the frame of mind to remember in which direction.

  18. #18
    So, do any project coordinators think they could move 31337 to the 10,000,000-digit level? I doubt there would be much activity there, but it would probably get more attention than the original 31337 acct. did. After all, the possibility of prize money has always helped DC projects before. I wouldn't mind running a few tests in that range just for the sake of doing it. Plus, imagine if we found a prime up there and saved the project years' worth of computing time by eliminating that particular k so much earlier! Instead of taking 67607, though, I would take the lowest k value so the lowest FFT size could be used and the tests would take the minimal amount of time for such large digit sizes.

  19. #19
    vjs (Moderator)
    Commenting on this thread from what I know...

    At that level we basically have 29,045 k/n pairs per 1M range of n after sieving to 25T.

    (For comparison purposes) ranges 1M<n<20M have roughly 26,264 k/n pairs per 1M range of n (sieve to ~750T).

    So we haven't sieved as deep, but considering the high-n sieve has already cut the k/n pairs per 1M range roughly in half, that's pretty good.

    The reason I suggested k=67607 is because it has the lowest number of k/n pairs per 1M range, so it's the most likely k to still be getting tested at those high n. I'm not sure how big a difference using a smaller k is going to make on test completion times. 19249 is the 3rd lightest, but I would hate to use 4847 since it's right in the middle weight-wise.

    Also, wouldn't this queue simply be looking for a 10M+ digit prime (note the "+")? So why not start at n=34M? ... nice and round...

    Regardless, the efficiency of this queue for the project, and doing tests way up there... that's not the issue. The issue is interest in doing the tests; it's obviously a better choice than 31337 or testing n=13.3M currently.

    Back to sieve and these numbers: first, we are at roughly 40T now ??? 25T-26T, Keroberts ??? and will have new stats on everything from 25T-40T soon.

    Yes, the factor density is starting to drop off, but we are still finding a lot of factors for tests at n>33.2M and above; we are still much, much better off sieving from a performance standpoint.

    Thoughts...

    If Louie agrees to the queue, what we should probably do is consider a small range of n for a particular low-weight k, then factor each number fairly deep before testing. The sieve has already eliminated 50% and will continue to eliminate tests, but for a particular k/n range etc., a little factoring before we start the queue is probably the way to go.

    Comments on testing only a particular k, or on what n-level to start at?

    If we can come up with a method we'd like to use, I'll propose it to Louie. The major problem right now is that we are not set up server-side for n>20M. Joe and I have been pretty much doing n>20M on our own when it comes to collecting factors, processing dats, creating stats, etc. Of course we have had a lot of help from interested people in the SoB community and from others who wouldn't necessarily have participated in SoB.
    Last edited by vjs; 05-06-2005 at 03:05 PM.

  20. #20
    I've been playing around with low-n factoring for a bit, but I'll finish up that range before I do any more.

  21. #21
    vjs (Moderator)
    Yup, I was there for a while too; it's pretty interesting. I'd actually like to try some P+1 on the 24737/991 number, but I only have 1G of memory.

    I didn't mean to push you on the 25T-26T range, just wanted to see how it was coming... I'm guilty as well. I still have about 2-3 weeks' worth, but Joe and E have finished their ranges already.

  22. #22
    Either way, it isn't like there is any sort of hurry. If I find any factors that look like they might be of any use soon, I'll submit them right away, but I haven't found any yet. The only missed factors I have found are in regions the main prp line has passed already. If I find any ahead of the current line, I'll have them submitted as soon as I find them. Sorry if I'm typing especially badly tonight compared to my usual typing; I'm kinda drunk. We had our Cinco de Mayo party tonight, and although I don't usually celebrate foreign holidays, hey, it's a reason to have a party, and in the summer we don't usually have enough of those. So, whatever. Think about what I said, and think about the other post I've put up lately about the 10,000,000-digit factoring plus PRP'ing. It'd be nice to have our project in contention for the $100,000 prize that GIMPS is soooo... proud that they think they're gonna get. I say take it from them: focus our efforts and make them cry. I love competition; it's what makes something worth doing. Please, someone agree with me, and please ignore my drunken ramblings insofar as they don't apply to your interests. If you are interested in getting money, or in initiating some real competition between GIMPS and us, then say so; we need voices to be heard before anything will happen. One good idea can't be enacted unless the majority of voices heard agree. So... let's hear some voices; some new people who haven't posted before, please join in, we always like to hear new voices. Thanks, everyone, for listening to my ramblings; when I'm drunk no one wants to hear me talk, but for some reason everyone always wants to read what I write. Well, I hope it stays the same this time, and I hope I see a lot of responses.
    Peace... try sieving, it's fun.
    keroberts1

  23. #23
    engracio
    Ummmm, is Keroberts1 saying he was drunk while he was posting that last reply? Good for him, we need that once in a while.

    Bet he will get a good laugh when he wakes up in the morning. hehehehehehe




    e


  24. #24
    Yoohoo, I hear voices, too, from time to time.
    Thank you for your nice post, Keroberts1.
    And: of course we agree with you. Don't worry.
    H.

  25. #25
    vjs (Moderator)
    Let me consult...

    Yup, the three of us are in agreement. I was a little difficult to convince, but myself was all for it, and me cast the deciding vote. So me, myself and I are all in agreement; does that make 3 votes or one?

    I full- (or is it fool-) heartedly agree that we should have an account to test these numbers. I think it will get some attention as well.

    Let me reiterate: we need to propose a plan. The fact that we have sieved some is great, but we will need to factor for sure. I did a quick test with B1=B2=5K just to see how long it would take, etc. ... n=34M was 61 min for stage 1 on a 2.3G Barton and required 90 MB. Also, does our client work "correctly" at these numbers?

    How long would it take to PRP? What are suitable factoring bounds?

    But like I said, and as Keroberts pointed out... we need a plan and interest.

    I propose: one k only, k=67607, n=33.5M-34M; this would be roughly 600 tests.

  26. #26
    The plan...yes.

    I think we should not get started on this sub-project before sieving reaches 40T, so we don't have to think about making lists of factors and such before then.

    As for the choice of k, consider that (right now):

    k        Total Tests    Pending Tests
    19249    16554          272
    22699    15962          257
    67607    13820          233

    prime95 said
    If you look for 10 million digit primes, try the small k value first - it may use a smaller FFT size.
    I am for 19249, and against 67607, because of this. 22699, I don't mind.

    Then ShoeLace wrote:
    9M takes 4-5 days.. 33M will be nearly 10 weeks.
    That's why I think that 600 tests are far too many. 33.5M-33.6M should be enough for the moment. (Remember: we will do the factoring work, right?)
    Though, this doesn't take into account all the candidates eliminated by factoring. If we eliminate 2/3 of them, we need to add more, of course.

    As for the factoring bounds, we should invest at least 12-24 hours per test, considering the 10-week thing. I don't know if stage 2 is really necessary, as we don't know much about what these factors look like (I don't fully understand what I'm writing either).
    Anyway, it would perhaps be a good idea to separate stage 1 and stage 2 because of their different memory usage. I've got two machines with less memory, so I do stage 1; then somebody with 'only' 1 GB picks up the test and does stage 2.

    Question: is the usernameQQQlargest-prime directive actually implemented? I didn't check it out, but thought only secondpass was implemented.

    Finally, we need to discuss the way to share the eventual money. I don't have an opinion concerning that, but I think we should give $25k to George's charity.

    That's all for now... gonna have breakfast.
    H.

  27. #27
    Nuri
    Originally posted by hhh
    Though, this doesn't take into account all the candidates eliminated by factoring. If we eliminate 2/3 of them, we need to add more, of course.
    If I understand this correctly, one should not expect more than 2% of the k/n pairs to be eliminated through factoring, even with very aggressive bounds.

  28. #28
    My head hurts. Well, yes, anyway, I think $25,000 to the charity sounds like a very good idea. And distribute $25k to other people who find primes for the main project after the sub-project has begun? That could build interest in the main project as well. Of course, no one would get any if a 10,000,000-digit prime is not found by us, but still, it's worth having hope.

  29. #29
    Originally posted by Nuri
    If I understand this correctly, one should not expect more than 2% of the k/n pairs to be eliminated through factoring, even with very aggressive bounds.
    Sorry, I don't get it. If you understand what correctly?

    To make it clear: I have no idea what percentage we will eliminate. And I think we should not prepare more than, say, 200 tests, because that would be a waste of time unless working on this queue becomes very popular.

    Question to Nuri: Do you claim we will eliminate only 2% of the tests by factoring, or did you think I claimed that?

    I thought we would find a lot of factors by factoring, because the sieve is not yet advanced and because of the increasing number of smooth factors; but I am perhaps mistaken.

  30. #30
    I believe I heard at one time that larger n values would be more likely to have factors.

  31. #31
    Surely with any number, the bigger it is the more chance of it being factored... Isn't that the whole reason why primes are so rare in large numbers?



  32. #32
    Nuri
    I agree with the idea that larger pairs will have a larger chance of being factored.


    But I'm not sure that will apply to as much as 2/3 of the remaining candidates after sieving to 25T for n~35M. I mean, if you are suggesting that we'll be able to factor 2/3 of the remaining candidates (i.e. ~19,000 per million for the remaining 10 k's), I would doubt that.

    In fact, it's easy to check. One can simply try to factor a set of 10 k/n pairs at 35M, and if 2/3 is in fact the case, one should see at least a couple of factors pop up.

    Of course, if you were not suggesting this, that means I got it wrong.


    Anyway, some number crunching, just out of curiosity.

    ---
    From vjs' post.
    At that level we basically have 29,045 k/n pairs per 1M range of n after sieving to 25T.

    (For comparison purposes) ranges 1M<n<20M have roughly 26,264 k/n pairs per 1M range of n (sieve to ~750T).
    ---

    So, on average, there's a difference of ~2,800 k/n pairs per million between 25T and 750T (let's just put the idea of getting more factors at larger n in the freezer for a moment).

    What was the average density of remaining pairs per million when the sieve was at around 25T (for the remaining k's, of course)? Let's calculate and add back to get comparable figures.

    ---
    From Mike's site
    range                          range remaining   factors remaining   found/estimated total
    2^40 - 2^41 (   1T -    2T)    0.00%             0.00%               18387/18387
    2^41 - 2^42 (   2T -    4T)    0.00%             0.00%               17343/17343
    2^42 - 2^43 (   4T -    9T)    0.00%             0.00%               16275/16275
    2^43 - 2^44 (   9T -   18T)    0.00%             0.00%               15617/15617
    2^44 - 2^45 (  18T -   35T)    0.00%             0.00%               14846/14846
    2^45 - 2^46 (  35T -   70T)    0.00%             0.00%               13976/13976
    2^46 - 2^47 (  70T -  141T)    0.00%             0.00%               13773/13773
    2^47 - 2^48 ( 141T -  281T)    0.00%             0.00%               11750/11750
    2^48 - 2^49 ( 281T -  563T)    1.20%             1.02%               10911/11023
    2^49 - 2^50 ( 563T - 1126T)    60.32%            55.49%              4709/10580
    2^50 - 2^51 (1126T - 2252T)    99.75%            99.17%              88/10580
    2^51 - 2^52 (2252T - 4504T)    99.96%            99.35%              69/10580
    ---

    (69 + 88 + 4709 + 10911 + 11750 + 13773 + 13976 + (14846/2)) = 62699
    (taking half of the 2^44-2^45 range, since ~25T falls in its middle)

    but 717 of these were found by factoring, so subtract to find the sieve-only effect: 61982

    to reach the per-million difference, let's divide this figure by 20 ==> ~3099

    and add back to 26264 ==> ~29363, which is roughly where the density of remaining pairs per million for the remaining 10 k's would have been at ~25T for n<20M.

    Now, compare this with 29045 per million for n<50M.
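
    (The same sums in Python, for anyone who wants to re-run them:)

        found_beyond_25T = [69, 88, 4709, 10911, 11750, 13773, 13976, 14846 // 2]
        total = sum(found_beyond_25T)       # 62699
        sieve_only = total - 717            # subtract factoring-found: 61982
        per_million = sieve_only / 20       # ~3099 pairs per 1M of n
        print(26264 + per_million)          # ~29363, vs 29045 for the high-n dat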


    I know these are quick and dirty calculations, and one should take other parameters into account as well. But still, I guess it gives the idea that the difference between low n and high n would not be incredibly large.


    One last note: 2% was just a quick guess for the ratio of factoring tests that would turn up factors. It might be less or more, but I'd really doubt it would be as high as 67%.

  33. #33
    I sure hope someone knows what you're on about, 'cos most of this is way over my head. I can just about cope with what a prime number is (no factors except 1 and itself) and not a lot else.

    You have to bear in mind that a large percentage of users are similarly knowledgeable when making decisions about how things in the project should be done; we just want to sit there and watch the client chug away, and we don't care what it's actually doing. Maybe it's just me, but that's what I do anyway: pop in and check my daily rate is still nice and even and no primes have been found, and that's it.

    Don't get me wrong, I'm glad you guys have these in-depth, detailed discussions and seem to be really knowledgeable, just don't try and ask me for an opinion



  34. #34
    engracio
    Ya, Matt, don't forget that in addition to what you check, we also make sure that the project is still running, in case it decides to go to another galaxy without notifying the users.

    Opinions? I don't get paid enough for that. At least this particular opinion, anyway. hhehehehhehe



  35. #35
    Originally posted by Matt
    Surely with any number, the bigger it is the more chance of it being factored... Isn't that the whole reason why primes are so rare in large numbers?

    But even the factors get large. Hence, for a large fraction of them, you won't be able to find the factors by sieving or P-1.

  36. #36
    Did some investigation this morning.

    Celeron M, 1.3 GHz, 200 MB assigned.

    Pfactor=22699,2,33500134,1,49,2.5
    (the sieve-depth and tests-saved parameters here are chosen only to coax prime95 into these bounds)
    B1=415000, B2=3631250
    FFT length 3072K

    Memory consumption 90 MB
    Stage 1: 0.1% (!) takes about 270 seconds, so the entire stage 1 would take 75 hours; too long.


    Pfactor=22699,2,33500134,1,47,1.5
    B1=270000, B2=2362500
    FFT length 3072K

    Memory consumption 67 MB
    Stage 1: 0.1% (!) takes about 150 seconds; still too long.


    Pfactor=22699,2,33500134,1,40,0.36
    B1=100000, B2=925000
    FFT length 3072K
    Memory consumption 67 MB
    Stage 1: 0.2% takes about 135 seconds, so stage 1 will take 19 h; just fine.


    Now for a different k:
    Pfactor=19249,2,33500138,1,40,0.36
    B1=100000, B2=925000
    FFT length 3072K
    Memory consumption 67 MB
    Stage 1: 0.2% takes about 135 seconds; no difference.


    Pfactor=67607,2,33500187,1,40,0.36
    B1=100000, B2=800000 (LESS)
    FFT length 3584K
    Memory consumption 76 MB (MORE)
    Stage 1: 0.2% takes about 155 seconds; significantly LONGER.


    Here are my conclusions.
    We should examine k=22699, because it's faster than k=67607 but has fewer candidates than k=19249 at the same speed.
    Appropriate bounds seem to me to be B1=100000, B2=1000000, to have round numbers. For standardization purposes, everybody should run the same bounds, I think.
    Tests will take about two times half a day on a fast machine (one half-day per stage). We should split the tests into stage 1 and stage 2, with machines with less memory doing stage 1 only, etc.
    We will need a new coordination thread for this, with two lists, stage 1 and stage 2; the lists should contain not ranges but raw numbers, in a format ready to paste into a batch file or a DOS box.
    If you want me to set up this thread, I will do that and take care of it.

    Question: What was the command-line factoring program with which you can test single numbers, setting B1 and B2 yourself and without a sob.dat? That would be the appropriate tool, I guess. (Link?)

    Your comments...? H.
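
    (A sanity check of these timings in Python; the model, my assumption rather than anything from the thread, is that P-1 stage 1 costs about B1/ln(2), i.e. ~1.44*B1, modular squarings at PRP-size FFT length:)

        import math
        B1 = 100_000
        squarings = B1 / math.log(2)    # ~144,000 squarings for stage 1
        stage1_secs = 135 / 0.002       # 0.2% took 135 s -> ~67,500 s total
        print(stage1_secs / 3600)       # ~18.8 h, matching the "19 h" above
        print(stage1_secs / squarings)  # ~0.47 s per squaring on this Celeron M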

  37. #37
    I agree that you should test the smaller k's to get the 3072K FFT size.

    As to P-1 bounds, I completely disagree. First, decide how much trial factoring you are going to do. Assuming it is 40T, which is about 2^45, then use the bounds prime95 chooses with parameters 45, 1.0. Any other B1/B2 values will cost you more time in the long run.

    Also, use version 24.11 of prime95.

    I recommend having 350MB of memory for prime95 to use during P-1 stage 2. Double that would probably be better (someday do some timings of stage 2).

  38. #38
    Joe O (Moderator)
    Originally posted by prime95
    As to P-1 bounds, I completely disagree. First, decide how much trial factoring you are going to do. Assuming it is 40T, which is about 2^45, then use the bounds prime95 chooses with parameters 45, 1.0. Any other B1/B2 values will cost you more time in the long run.
    George,
    Why 1.0? A factor will eliminate *two* PRP runs. Wouldn't 2.0 be more appropriate?
    Joe O

  39. #39
    Not for the purpose we're concerned with. This queue will be for finding the largest prime, so we should only count each test being done once. Most numbers this large will never be double-checked: even though it is likely that a prime will not be found below this level for most values, by the time the double-check reaches this level we should be over n=100,000,000 and have eliminated 4 more k values. It is very possible this will be one of them. Also, that is assuming we have not missed any primes yet; it would mean more if the double-checkers find a surprise.

  40. #40
    Please excuse my precipitate post above, inspired by a lack of knowledge and the early morning. I forgot for a moment that we are not doing voodoo but maths.

    Let's see.

    Pfactor=22699,2,33500134,1,45,1.0 gives
    B1=200000, B2=1750000.
    Memory consumption about 70 MB, FFT length 3072K.
    Stage 1 will take about 36 hours (on the 1.3 GHz Celeron M, 200 MB assigned).
    I will run tests on a 2.0 GHz Celeron later.

    The choice of optimal bounds is more difficult than I thought. My former post was complete bullshit, OK. But the information in this one is only informal as well, since prime95's choice of B1 and B2 depends on the amount of memory assigned.

    My questions:
    1) Do you think it is a good idea to do stage 1 and stage 2 on different machines?

    If yes, the stage 1 machines will not need a lot of memory anyway; we can assign them any B1.
    But the choice of B1, B2 for the optimality of the second machine depends on the memory of the second one.

    If we assign a test to a machine with 1 GB, we may get (I'm fudging some numbers) B1=200000, B2=2000000.
    If we assign the same test to a machine with only 300 MB, we may get B1=300000, B2=1000000.

    So for the same test we would have to run different stage 1 B1 values, depending on the second machine. All this quickly becomes very complicated, and additionally, the optimality formula (for the calculation of B1 and B2) does not take into account that we are not going to waste time on a machine not using its memory.
    So if your answer to question 1) was yes:
    2) Is there a) a particular optimality formula for the case of split tasks,
    or b) does the same formula apply?
    2c) BTW, is the formula sufficiently simple to post here?

    One could coordinate, in the future coordination thread, stage 1 factoring with different B1 values so that people with different amounts of memory could pick up what is optimal for them.
    But all this is very complicated, as I said already. So:

    3) If you said YES to 1), do you still say YES?

    One last possibility to save the splitting-tasks idea is to take an average setting on the basis of 400 or 500 MB and apply it to everybody.
    Or everybody has to run the entire test.

    4) BTW, is there a way to tell prime95 to run only stage 1 or only stage 2 with chosen bounds? Or any other program?

    I hope I have made myself understood. If you have other, completely different ideas, I would be more than happy.

    By the way, thank you for the new version, George. Especially the readme is instructive (it was not included in the v24.6 zip file), and the bugs I had seem to be gone, too... great work, again.

    Yours, H.
