
Thread: P-1 thread discussions

  1. #81
    Code:
    Lastly, in the version I'm testing, I tried the tests with factoring depths
    of 48 and 49. I got B1 and B2 bounds of 40,000/50,000 and 430,000/575,000
    respectively. However, the amounts of time taken to complete these tests with such
    different bounds were almost the same. Does this have to do with zero-padded
    FFTs? Or is there something wrong with this picture? Here is the relevant log
    snippet:
    George,
    Do not worry about my second question. I was not able to replicate it! I do not know why it happened the first time but it doesn't seem to be happening now! With lower bounds the program does indeed take less time.

    Finally, I checked the Linux version with some known factors, stage 1 and stage 2, and they were all found. Pity I haven't found any new factors yet

  2. #82
    You weren't imagining things. When you started the test with B1=40000, prime95 created a save file. When you resumed with B1=50000 it had to first complete the B1=40000 to not lose the work you had already done. Then it should have gone from 40000 to 50000.

    Now it could well be that the % complete lines are inaccurate in this case.

    As to k and FFT size. Yes, k now has a big effect on FFT size selection. Log2(k)/2 bits must be reserved in each FFT data word. Since log2(28433)/2 is roughly 7.4 and log2(4847)/2 is roughly 6.1 you have 1.3 more bits per FFT word available. Thus, for FFT length of 512K, you should get 512K*1.3 (about 650,000) higher values of n before changing to a larger FFT size.
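
    Spelled out as a quick calculation (this is only the back-of-the-envelope arithmetic described above, not prime95's actual FFT-selection code):
    Code:
    import math

    def reserved_bits(k):
        # log2(k)/2 bits set aside in each FFT data word
        return math.log2(k) / 2

    fft_len = 512 * 1024                                      # 512K FFT
    extra_bits = reserved_bits(28433) - reserved_bits(4847)   # ~1.3 bits per word
    print(f"extra bits per word: {extra_bits:.2f}")
    print(f"approx. extra n at this FFT size: {fft_len * extra_bits:,.0f}")   # in the ballpark of the ~650,000 quoted above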

  3. #83
    Thanks George. I vaguely remember the log2(k)/2 expression from your other post.

    About the 40,000 -> 50,000 confusion: I'm pretty sure the save files caused it. The rate at which the program "apparently" proceeded at the 40,000 bounds was exactly the same as the actual rate at the 50,000 bounds. But I was starting and stopping the program a lot to test bounds etc., so evidently there was some mixup in the % complete figure.

    One more question

    The sieving in SOB is about 31% of the way through 2^48-2^49. So a factoring depth of 48 is too little and 49 is too high. Does that also mean that the "chance of finding a factor" figure is a tad high for 48 (I'm getting 1.26%) and a tad low for 49 (0.911%)?

    I ask this because, since a lot of numbers in SOB are not getting any P-1 done on them, I want to stick with the 49 limit, which will be about 20% faster. The factor probabilities of 1.26% and 0.911% imply that I'll actually find fewer factors per unit time with the 49 limit. But I'm pretty sure this is not the case, since the actual factoring depth of 48.3 implies that those probabilities are incorrect.

    To wit: will I find more factors per unit time with a factoring depth of 49 (giving me bounds of B1=40k and B2=430k and taking 20% less time) than with a depth of 48 (giving B1=50k and B2=575k)?

    Final question, though I think others on this forum can probably answer this as well. As George's last post indicated, the k value makes a huge difference in the FFT size for P-1. But since the SOB primality testing code is older, does that mean it does not have the same dependency on k? (Edit: I just did a search and found a post by kugano stating that indeed it does not.) Hence, k=4847 is much better for P-1 as it uses the 512K FFT for the 6.8M range and therefore takes much less time for the P-1 while having the same probability of finding a factor and saving the same amount of time for a primality test.

    [makes a dash for the P-1 reservation thread]
    Last edited by garo; 08-19-2004 at 03:46 PM.

  4. #84
    Yes, the probability is between 0.9 and 1.2%

    I've been thinking about P-1 on SoB. Unlike GIMPS, the number of PRP tests saved if a factor is found is not clear. It all depends on how far behind double-checking is and when you think a prime will be found that will stop the double-checking effort.

    I've just changed prime95 so that it takes a floating point value for how-far-factored. The last argument is no longer a double-check flag; it is a floating point value representing the number of PRP tests that will be saved if a factor is found.

    Furthermore, since SoB does not have enough P-1 clients, is it better to do more exponents at a 1.0 (or less?) PRP tests saved setting or fewer exponents at a higher PRP tests saved setting? I don't know. My gut reaction says it doesn't matter. The difference in efficiency is likely to be very, very slight.

    I'll try to upload the new prime95 in the next day or two.

  5. #85
    Thanks George. Yes, I think that in the long run it does not matter. A lot of discussion in the forum when P-1 was first introduced settled on the value of 1.25 as the probable number of tests saved. Personally, given the lack of P-1 right now, I'd go with whatever setting gives the maximum number of factors per unit time regardless of "optimality". In either case, the amount of time saved is likely to be more or less the same but more factors is always better, no?

  6. #86
    Sieve it, baby!
    Like garo said:

    As there is (more than) enough work to do, it is best to put the setting to a value that generates the most factors per unit time. Plus, it makes no sense to trial factor the remaining tests once they have been reached by PRPing.

  7. #87
    Moderator vjs
    I had a question about p-1 factoring.

    I have a couple machines on double checks right now.
    Has any p-1 factoring been done for 900k<n<1m?

    I only see p-1 for n>4m.

    Assuming the answer is no for 900k<n<1m, would it make sense to do P-1 factoring on these? How many n per day would one expect to remove with a P-833?

    And what settings would I use, considering 300k<n<3m was done up to ???75T??? 2^46 and the new sieve doesn't cover values n < 1m?

    I realize this may also be a moot point since I'm not sure how n values correspond to p. Does sieving everything <300T eliminate all factors for n<1m?

    How do p and n relate anyway?
    Last edited by vjs; 08-24-2004 at 01:21 AM.

  8. #88
    P-1 only makes sense for numbers that are reasonably large - say above 4M - especially when the numbers have already been tested once. So the short answer is: no, it does not make sense to do P-1 for numbers under 3M.

  9. #89
    Moderator vjs
    I was just thinking that if I could eliminate more tests through P-1 than I could by actually PRPing them, then it would make sense.

    I actually tried it out yesterday for a few n. I was getting a rate of ~240 sec per pair and a probability of 0.014, which would mean I should get one factor every 4.7 hrs. That would be pretty good for that machine, since I don't think I can do one PRP test in that time.
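
    Spelled out, that estimate is just the time per attempt divided by the probability of success per attempt:
    Code:
    seconds_per_pair = 240      # ~240 sec per k,n pair, as above
    prob_factor = 0.014         # reported probability of a factor per attempt
    hours_per_factor = seconds_per_pair / prob_factor / 3600
    print(f"{hours_per_factor:.1f} hours per factor on average")   # ~4.7-4.8 h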

    One problem was that I needed to use a setting of <42. In addition, we have already sieved everything below 75T.

    Now it brings up the other question of how p and n relate.

    And is that part of the reason why we switched to a 1m<n<20m file for ranges above 75T?

  10. #90
    Yes but when you use a setting of 42 the factor probability number you get is inaccurate. So you will be very lucky to get a factor every 4.7 hours. More than likely it would take about a day to find a factor.

  11. #91
    If prime95 is refusing to do P-1 when you use a setting larger than 42, then it is telling you that P-1 does not make sense - you will eliminate candidates faster by just doing the PRP double-check.

    This is not surprising for your small exponents. P-1 barely makes sense for current exponents in the 6 millions given the deep sieving that's been done.

  12. #92
    Moderator vjs
    Originally posted by cedricvonck
    sbfactor.exe 7000000 7005000 47 1.5 1 1 + 256 is this correct?
    cedricvonck,


    Yes and no. First, you don't have to specify the 1 1, since it will default to one processor, one instance.

    Second, I suggest using 49 instead of 47.

    This number basically signifies the sieve depth.

    Since all of the factors below 47 have been found through sieving and <0.36% remain between 47 and 48, a setting of 49 would be best.

    It might be too soon to use 50, but 2^49 will probably be reached by the time PRP reaches 7m.

    The 256 is the amount of memory you have, correct?

    So go with

    sbfactor.exe 7000000 7005000 49 1.5 256

  13. #93
    Hater of webboards
    Originally posted by vjs
    Does sieving everything <300T eliminate all factors for n<1m?
    Using the simple fact that at least one factor of a number has to be smaller than the square root of the number, it can easily be shown that sieving everything below 300T eliminates all n<96, but as we have factors for all candidates with n<1000 except one, that doesn't really help us.
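
    In numbers (a composite N must have a prime factor no larger than sqrt(N), so sieving all p < 300T only "finishes off" candidates with k*2^n+1 < (300T)^2):
    Code:
    import math
    sieve_limit = 300e12                  # 300T
    print(2 * math.log2(sieve_limit))     # ~96.2, so only n < ~96 (a bit less once k is counted)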

  14. #94
    Moderator vjs
    Thanks HC_grove,

    The major reason for my question was that I didn't believe sieving to 300T would eliminate all n<1m. So it really is as simple as: if p=300T, the number is 300T.

    For example, if p=6m the number is 6,000,000.

    I was confused because if n=3 the number is actually 8.

    2^n
    n=3

    So 2x2x2=8

    one factor of a number has to be smaller than the square root of the number
    Of course as soon as you have one factor it's not prime.

    Thanks very much for the explanation.


    So sbfactor wouldn't let me run any n<1m unless I reduced the 48 value to something very small. The reason is that sbfactor actually makes a calculation based upon n, the 2^? sieve depth, the 1.0-1.5 setting, etc. to check whether you're wasting time.

  15. #95
    vjs,
    The setting is not based on whether all those numbers have already been factored, but on the calculation the P-1 code makes of the probability of finding a factor given that the number has been sieved to 2^48. Clearly, if you tell sbfactor that the sieving has only happened up to 2^42, the probability of finding a factor it calculates will be larger than at 2^48, and as a result it may think it is worthwhile to do a P-1 factoring. But in reality it will not be worth it, as the factor probability is based on incorrect information.

  16. #96
    Moderator vjs
    Sorry garo, but now your response has confused me even further with the P-1 settings. Changing the sieved-to value changes the time it takes to complete a k,n pair. There is also another variable that one can set between 1.2-1.5 etc. This also has an effect on completion time.

    I did a series of tests to find the best setting (I have 512 MB):

    sbfactor 6840100 6840110 48 1.5 400 (10223^6840101)
    Yields:
    B1=30k
    B2=255k
    Prob Suc=0.008214
    Sq=65375
    stage 1 trans, time (s) 86450, 3421
    stage 2 trans, time (s) 37154, 5514
    total time 92 min

    sbfactor 6840110 6840120 49 1.5 400 (55459^6840118)
    Yields:
    B1=20k
    B2=155k
    Prob Suc=0.004921
    Sq=42832
    stage 1 trans, time (s) 57640, 1865
    stage 2 trans, time (s) 23540, 2907
    total time 49 min

    sbfactor 6840120 6840130 48 1.2 400 (10223^6840125)
    Yields:
    B1=20k
    B2=155k
    Prob Suc=0.005879
    Sq=43069
    stage 1 trans, time (s) 57640, 2282
    stage 2 trans, time (s) 23540, 3609
    total time 61 min

    sbfactor 6840100 6840110 49 1.2 400
    Yields:

    Program won't run

    And won't start running until n=7172069 with a 49 1.2 setting

    sbfactor 7172000 7172100 49 1.2 256 <-- note 256 used here
    B1=15k
    B2=98k
    Prob Suc=0.003879
    Sq=31386

    Didn't let it run.

    I realise that to do this correctly I should have run the same k,n pair, but the numbers do show trends.

    It looks like the best setting for the probability of finding a factor per unit of time spent (Prob Suc / total time) is actually the 49 1.5 setting.
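
    For reference, the ratio referred to here, computed from the three completed runs above:
    Code:
    runs = {"48 1.5": (0.008214, 92),   # (Prob Suc, total minutes)
            "49 1.5": (0.004921, 49),
            "48 1.2": (0.005879, 61)}
    for setting, (prob, minutes) in runs.items():
        print(f"{setting}: {prob / minutes:.2e} per minute")
    # 49 1.5 comes out highest (~1.0e-4 per minute), matching the conclusion above.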

    So what do the 48/49 and 1.2-1.5 settings do exactly?
    Do they simply change the B1 and B2 values and the number of squarings...

    If so, this would mean that the sieve really has nothing to do with P-1 efficiency.

    In other words, sieving everything below 2^48 simply decreases the probability of finding a P-1 factor in that range. To counteract this effect, P-1 simply changes its settings from 48 to 49 so that less time is spent on any one n in an n-range, thereby increasing the n's per unit time.

    I know this is not the case.
    Last edited by vjs; 08-26-2004 at 02:22 PM.

  17. #97
    Moderator vjs
    I did some investigation this weekend, and as far as I can tell the 48/49/50 factor basically tells the program not to look for factors below this value, because they have already been searched for and found.

    So by using 48 it simply doesn't investigate or spend time on factors below 48.

    So the best setting currently is 49, since a large portion of the factors between 48 and 49 have already been found. So using 49 1.5 is better than 48 1.2.

    The question is: when should we switch to 49 with some setting >1.5?

  18. #98
    Sieve it, baby!
    Originally posted by prime95
    I'll try to upload the new prime95 in the next day or two.
    AFAI can see, it's still the old version...
    Could you upload the new version? Many thanks!

  19. #99
    Yes George, new version!!!

    OK vjs, let me have another go at this. I was out of town this past week so sorry for the delay.

    The 48,49,50 etc. you enter is only used by the factoring program to calculate the probability of finding a factor. P-1 works differently from sieving so it cannot control the bit range in which factors are searched. The principle of P-1 is that if the factor found is P then the factors of P-1 are all below B1 except for one which may be between B1 and B2. In fact, if all the factors of P-1 (i.e., the factor - 1) are below B1 then the factor is found in stage 1.

    So essentially, sieving only affects factoring in that the sieving depth entered into the worktodo line (48,49 etc.) changes the probability with which the factoring program thinks a factor will be found and hence it affects the B1 and B2 values chosen by the factoring program.

    The "real" probability of finding a factor depends on the real sieving depth and not the number you have entered. Right now the sieving has been completed to 48.37 or so. Therefore, when you enter 48, the factoring program overestimates the likelihood of finding a factor and thus chooses higher B1,B2 bounds. Whereas if you enter 49 then the factoring program underestimates the probability of factor being found and chooses lower B1, B2 bounds.
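
    For anyone who wants to see the stage 1 idea in the smallest possible form, here is a toy Python sketch (illustrative only; this is not the prime95/sbfactor code): raise a base to the product of all prime powers up to B1 modulo N, then take a GCD; any factor P of N whose P-1 is built entirely from primes below B1 drops out.
    Code:
    import math

    def primes_up_to(limit):
        sieve = bytearray([1]) * (limit + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
        return [p for p in range(2, limit + 1) if sieve[p]]

    def pminus1_stage1(N, B1, base=3):
        a = base
        for p in primes_up_to(B1):
            e = p
            while e * p <= B1:        # largest power of p not exceeding B1
                e *= p
            a = pow(a, e, N)          # a = base^(product of prime powers) mod N
        g = math.gcd(a - 1, N)
        return g if 1 < g < N else None

    # 2521 is prime and 2521 - 1 = 2^3 * 3^2 * 5 * 7 is 10-smooth,
    # so this factor falls out of N = 2521 * 3089 already at B1 = 10.
    print(pminus1_stage1(2521 * 3089, 10))    # -> 2521
    Stage 2 extends this by allowing a single extra prime between B1 and B2, which is the case described above.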

    So how are the B1, B2 bounds chosen, and how does the amount of memory allocated to factoring change this choice?

    Memory: Given a number and fixed B1,B2 bounds, increasing the amount of memory allocated decreases the processing time, as there is more space for temporary variables. That is, the old space-time tradeoff comes into effect. However, the effect is NOT linear or anything even close to that. George has himself stated several times that additional memory above a certain point - and I would put that point at about 256MB for the 7M numbers that are currently being tested, though he would put it even lower - will have diminishing benefits, and it certainly is not worth buying an extra stick of memory only for the purpose of P-1.

    However, increasing the memory can sometimes increase the amount of time taken for P-1 as the extra memory can cause the B1,B2 bounds to be raised. The reason for this lies in how optimal B1,B2 bounds are chosen.

    Optimality: The ultimate objective of P-1 factoring is to increase the throughput of the project as a whole by eliminating numbers faster. It is not to find factors as quickly as possible. And I repeat, it is not to find factors at the greatest possible speed.

    Let me illustrate this. Suppose you have a choice between two B1,B2 settings. The first takes 60 minutes to complete and finds a factor with a probability of 1%. The second takes 80 minutes and finds a factor with a probability of 1.25%. Notice that the first setting will find a factor every 6000 minutes while the second will find a factor every 6400 minutes. So should we choose setting 1 over setting 2?

    NO: this is counter-intuitive, but our choice really depends on how much time a primality test will take and, if a factor is not found, on the average number of tests that will be required. Now, the average number of Lucas-Lehmer tests required for a Mersenne number is about 2.1 or so - I may be wrong about the exact number - as each number is double-checked and each test has a certain probability of being faulty due to hardware errors. In SOB, on the other hand, this number is not known, as the project is not interested in finding every prime number but only one prime number for each k. Louie had once speculated that this number is 1.25. The reason for this is that numbers will not need to be double-checked if a prime is found for that k. Note that you were using 1.2 and 1.5 as input values instead of 1.25.

    So, for the sake of our analysis let us go with 1.25. Let us also assume that each test takes 10,000 minutes. So, each factor found will save us 12,500 minutes. Let us now look at the average time taken to eliminate a number if the P-1 test is performed at settings 1 and 2 respectively.
    Remember that now a P-1 test is performed for each number and an LL test is saved whenever a factor is found.

    Setting 1:
    Time for P-1 test: 60
    Prob of finding factor: 1%
    Average time spent in primality testing: 12,500 - (12,500 * 0.01) as 1% of the time we do not do a primality test for the number since a factor was found.
    Therefore, avg amt of time spent per number =

    60 + 12,500 - (12,500 * 0.01) = 12,435.

    Setting 2:
    Time: 80
    Prob: 1.25%
    Avg time in primality testing: 12,500 - (12,500 * 0.0125)

    Total avg time: 80 + 12,500 - (12,500 * 0.0125) = 12,423.75 minutes.


    So you can see that with setting 1 we save 65 minutes per test but with setting 2 we save 76.25 minutes. This despite the fact that with setting 1 we find factors at the rate of one per 6000 minutes but with setting 2 we find factors at the rate of one per 6400 minutes.
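
    The same comparison as a small check (using the assumptions above: 10,000-minute tests and 1.25 tests saved per factor):
    Code:
    def expected_minutes(p1_minutes, prob_factor, prp_minutes=10_000, tests_saved=1.25):
        # P-1 time paid on every candidate, plus the PRP work still needed when no factor shows up
        return p1_minutes + (1 - prob_factor) * prp_minutes * tests_saved

    print(expected_minutes(60, 0.01))      # setting 1 -> 12435.0 minutes per candidate
    print(expected_minutes(80, 0.0125))    # setting 2 -> 12423.75 minutes per candidate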

    Hope this clears it up. As you can see, the complexity of the issue necessitated the length of the post. But if you have any more questions please feel free to ask.

    BTW, at the end of all this I would like to say that since every number does not get a P-1 right now in SOB, in fact more than half of them don't, all this analysis goes out the window and one should simply choose the sieve depth/factor value setting that gives us the maximum number of factors per unit time. This statement will hold as long as P-1 does not match or exceed the rate of primality testing.

    But this brings us back to the problem that entering a sieve depth of 48 or 49 will not accurately compute the probability of finding a factor so we'll have to wait till George uploads the new version that takes floats as an input for the sieve depth.

  20. #100
    Senior Member Frodo42
    Great post garo

    I now understand a whole lot more of P-1-factoring.

    I would suggest this post be added to hc_grove's page on P-1.

    So George, let's get the new version (also Linux version please )

  21. #101
    I'm glad you liked it. Methinks I will also post a modified version on the mersenneforums, as people have raised questions about P-1 so many times before. It took me a while to understand this - including looking at the Prime95 source code and asking George many questions.

  22. #102
    Hater of webboards
    Originally posted by Frodo42
    Great post garo
    I agree!


    I would suggest this post be added to hc_grove's page on P-1.
    I'd like to do that, but only if garo accepts it.

    The page is Factoring for Seventeen or Bust


    So George, let's get the new version (also Linux version please )
    If the binary object files for doing the math will work with the old factorer, I'd be happy to see if I can make a new version, if he'll release those.

    .Henrik

  23. #103
    Moderator vjs
    Wow Garo thanks a whole bunch,

    This really clears things up a great deal. I'll have to re-read what you said a few times to understand P-1 better, but at least it makes more sense and of course poses more questions.

    I'll have to do an internet search on optimal B1 and B2 values vs. memory allocation, unless you'd like to go into more detail.

    From what you stated, it's actually more a question of B1, B2 and the 1.2-1.5 variable.

    Again I personally thank you and appreciate your efforts, nice write-up.


  24. #104
    hc_grove: yeah, you can go ahead and add it to your pages as you see fit!

  25. #105
    The new prime95 for SoB can now be downloaded. Several FFT bugs were fixed that should not have impacted SoB. The only change of importance is accepting floating point values in the Pfactor= line of worktodo.ini

    You can get the versions from:

    Windows: ftp://mersenne.org/gimps/sobpm1.zip
    Linux: ftp://mersenne.org/gimps/sobpm1.tgz

    The Linux version is untested; I do not have Linux running on any P4s here.

  26. #106
    Senior Member Frodo42
    Wow.
    I've just switched to George's new Linux version from the version hc_grove modified.
    With the same B1 and B2 it takes something like half the time.
    So I guess it's worth the trouble of making the worktodo.ini file (which was rather cumbersome - any ideas on how to make it fast for a given range?)

  27. #107
    Yeah, if you have a flavour of Unix or Cygwin installed you can use awk! Otherwise you could also use Perl. Actually, in the worst case, you can just open up your favourite text editor, cut and paste all the lines generated by sbfactor that show the numbers to be P-1 factored, and do a search and replace.
    Say, replace "Estimating for " with "Pfactor", "k=" with "=2,", and so on. I can post an awk script if that helps.

  28. #108
    Sieve it, baby!
    Originally posted by garo
    I can post an awk script if that helps.
    That helps.

    Well, such a script would be really helpful.

  29. #109
    Ok! Here it is. Cut and paste all the lines starting with Estimating into a file, say j.

    cat j | awk '{print "Pfactor=" substr($3,3,length($3)-2) ",2," substr($4,3,7) ",1,48.5,1.25"}'

    I'm assuming that the exponents are all of length 7, i.e. less than 10 million; if not, the change is trivial. This works with George's latest version, which was uploaded a couple of days ago. In the older version the last two fields had different meanings. In George's newest version the second-last field can be a float, whereas before you had to choose between 48 and 49, both incorrect; and the last field now means the number of tests a factor is worth, instead of a zero or one indicating whether the number of tests saved was 2 or 1 (a holdover from GIMPS).
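
    If you don't have awk handy, a rough Python equivalent (it makes the same assumption as the awk script about sbfactor printing lines of the form "Estimating for k=... n=...", but it doesn't care how many digits n has):
    Code:
    import re, sys

    # usage: python makepfactor.py j > worktodo.ini   (makepfactor.py is just a hypothetical name)
    for line in open(sys.argv[1]):
        m = re.search(r"k=(\d+)\s+n=(\d+)", line)
        if m:
            print(f"Pfactor={m.group(1)},2,{m.group(2)},1,48.5,1.25")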

    I've put in the figures 48.5 (because that is the current sieving status) and 1.25 (because that was recommended a while back, since SoB does not necessarily double-check every number if a prime is found for that k).

    You can increase/decrease these numbers as you like but I think this is the optimal setting for the moment especially considering that P-1 is not keeping up with PRP testing.

    The bounds I got for testing numbers around 6.9M were:

    Code:
    sievedepth factorvalue  B1    B2      chanceoffactor
    Old Version
    48         2            50k   575k    1.26%
    49         2            40k   430k    .911%
    50         2            30k   322k    .626%
    
    New Version
    49         2            40k   430k    .925%
    48.5       2            45k   506k    1.1%
    48.5       1.9          45k   483k    1.08%
    48.5       1.8          40k   430k    1%
    48.5       1.5          30k   285k    .793%
    48.5       1.4          25k   231k    .687%
    48.5       1.3          20k   180k    .573%
    48.5       1.25         20k   170k    .564%
    Notice that the bounds at 49,2 and 48.5,1.8 are the same but the chance of finding a factor goes up from .925% to 1%. This supports my hypothesis that the previous version was underestimating the chance of finding a factor at a sieve depth of 49 or 50 and overestimating it at 48. So my recommendation to all is to use 48.5,1.25 and George's latest code! With these bounds each P-1 test will take about 30 minutes on a P4 2.8.
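
    For reference, a resulting worktodo.ini line with the recommended 48.5,1.25 values looks like this (the k,n pair is just an example taken from earlier in the thread):
    Code:
    Pfactor=10223,2,6840125,1,48.5,1.25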
    Last edited by garo; 09-19-2004 at 09:03 AM.

  30. #110
    I love 67607
    Thanks garo.

    And of course, thanks George.

  31. #111
    I love 67607
    I tried 48.5,1.25 and it did not work for me.

    I guess it's because the result also depends on memory allocated to P-1.

    PS: I use 200MB

  32. #112
    Senior Member Frodo42
    Thanks garo, that script speeds things up a lot.

  33. #113
    Aha! I use 500MB. For 200MB, the smallest value that works is 48.5,1.5. Alternatively try 48,1.35. The bounds are similar in both cases.

    This shows how close we still are to the point that P-1 does not save us any time. As n progresses, the smaller values will work as well.

  34. #114
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I agree. Still, I guess one has to check PRP progress vs. sieve progress. As the sieve boundaries go higher, it might take a bit longer than expected for the smaller values to work as well.

  35. #115
    That is true as well. I think it would be best if a recommendation could be posted in the P-1 reservation thread. That would be one centralized place where we could monitor and change the bounds as required. This would go a long way towards helping P-1 catch up.

  36. #116
    One more thing to consider in selecting parameters. The P-1 code assumes you are using the same math libraries for P-1 and PRPing. This is not the case right now. You should compare the time it takes PRP3 to test a number vs. the current SoB client. Then enter SoB_client_time / PRP3_time as the number of PRP tests saved.

  37. #117
    You are right! And I think that the current SoB client takes longer because it does not have the latest code from PRP3. Correct me if I am wrong. Assuming that the current PRP3 is 20% faster, the tests-saved value goes up from 1.25 to 1.5. Anybody care to deliver some hard numbers?

    Still I believe that we should stick with 1.25 at least till P-1 progress is slower than PRP testing.

  38. #118
    Moderator vjs
    Still I believe that we should stick with 1.25 at least till P-1 progress is slower than PRP testing.
    I know this is the wrong thread to make this statement in but I'm wondering if we shouldn't do the following.

    Invest all of our effort into sieving and try to reach the point of diminishing returns, and then stop sieving for values of n<20m. We could then later reinvestigate sieving for 20m<n<100m once we get to an n value around 18m or so. I'm pretty sure everyone will agree that this project will reach 20m...

    As for the P-1 effort, incorporate it into the main client so that everyone P-1's their number before testing. Yes, this would delay the PRP3 implementation, but by placing the majority of our effort into sieving right now, we could drive up the testing bounds for P-1 by the time the new client is released.

    If optimal bounds are currently 48.5 1.5 (not taking the lack of resources into consideration, because this obviously wouldn't be the case if it were part of the main effort), we could set the bounds to 50 1.5 by then, etc.

    As for the main effort, people wouldn't see a big change: the new PRP3 client is faster, reducing time, while the P-1 step adds some back. In addition, the client could report back a "no factor found" for k/n with x,y bounds; the server could then look at the time a P-1 test required and reassign the test if necessary. This may be a major advantage once we get to n=>10m. I don't think we would lose people at all, especially if we could integrate a P-1 factor score into PRP scores. Also it may be an added plus since, every once in a while, people would notice their test completed in record time due to a factor. Also, if the bounds were set correctly, their personal number of tests completed would, on average, increase.

    My 0.02 hopefully I don't get change back.

    VJS

  39. #119
    It's true that sieving is still the most effective way of eliminating possible candidates, and a considerable amount of effort should be put into sieving.

    But one single prime found eliminates a lot of possible candidates to test, and besides, most are in here to find a prime.

    I agree that P-1 must be put into the client ASAP. A small speedup for the project as a whole is still a speedup, and this will only get bigger as tests get larger.

    It would be better if sieving could also be integrated into the client and if ranges could be automatically downloaded from the server. This would save a lot of administration.

  40. #120
    Moderator vjs
    It would be better if sieving could also be integrated into the client and if ranges could be automatically downloaded from the server. This would save a lot of administration
    Agreed, but wouldn't integrating sieving require quite a bit more effort?

    Also, one will never find a prime with either P-1 or sieving, but they both help the project. But I think it's easier to justify P-1 on a particular number they are testing, because it would increase the user's possibility of finding a prime through optimization....

    In my mind it seems more beneficial to implement P-1 into the client first and get it working, then do the same for sieving. Granted, the project would be best suited if we could somehow have the client and server communicate. The client could tell the server how fast it works; if it falls below a certain threshold, then it gets a small 1-2 week sieve range rather than a (PRP/P-1) test. We could then also expire sieve ranges and put them back in the system.

    I have a feeling sieve, P-1, automation, and incorporation are emotionally hot topics; perhaps we need a poll or something.
