
Thread: How many "secret" tests are left?

  1. #41
    I love 67607
    Join Date: Dec 2002
    Location: Istanbul
    Posts: 752
    Originally posted by OberonBob
    This has probably been asked before, but when we are done with secret, when all the secret tests are done, is secret going to switch to doing supersecret tests? I mean, will the secret "user" switch to supersecret tests?

    I would think it would be a good idea to do that. The double checks should be done, but the 20 or so people doing the secret tests are more than enough, and that number will drop when the 1.1.2 client comes out.
    Currently, the upper limit of supersecret is capped at 400k, and anyone running supersecret gets secret tests. If the same logic applies, once there are no secret tests left (provided some supersecret tests have been released by then), both secret and supersecret should start getting supersecret tests.

    My guess is Louie is waiting for the secret tests to approach completion before releasing some supersecret tests, and with the popularity it has now, secret will probably finish within the next two weeks.

    Since we've switched to the 1M < n < 20M dat file for the sieve, I guess he will increase the upper limit of supersecret tests to 1 million sometime towards the new year.

  2. #42
    If people still can't receive credit for supersecret tests, they won't be very popular. With the current number of people running it, secret would never reach 1 million, and on top of that, these are the numbers least likely to yield any results, because the error rate is low in them.

  3. #43
    Well, we have 13 days or so, like you said, and while it's no big deal, I just don't want secret to have any downtime. I hope he increases the supersecret top limit too. Or, for that matter, why have a limit at all? I mean, other than that it shouldn't pass the main effort, but that will never happen anyway.

  4. #44
    Originally posted by Keroberts1
    If people still can't receive credit for supersecret tests, they won't be very popular. With the current number of people running it, secret would never reach 1 million, and on top of that, these are the numbers least likely to yield any results, because the error rate is low in them.
    I like running secret on my slow boxes. It makes them feel like they are doing more for the effort by finishing 2 tests a day, whereas a normal test would take them a month and a half.

  5. #45
    Wouldn't those resources be more efficient if devoted to numbers that haven't already been tested?

    The sieve is good for that too, because they may still find a factor or two a day, and down the line those can save a lot more work for the main effort. Same deal with P-1 factoring.

  6. #46
    I don't know about P-1 factoring, but the sieve takes some non-zero amount of effort, while the client is automatic. Besides, the double check is important too. GIMPS has a 3% error rate, and we will have some errors too. The double check has to be done, and what's 20 PCs doing double checks compared to the 3,500 doing the main task?

    Besides, like I said, once the 1.1.2 client comes out, there will be something like 4 computers doing double checks; that is not a lot of processing power being allocated to cover that 3%. GIMPS has about 6,700 computers doing double checks right now, with 27,000 doing LL tests and 4,300 doing factoring.

  7. #47
    Moderator ceselb
    Join Date: Jun 2002
    Location: Linkoping, Sweden
    Posts: 224
    We probably don't have a 3% error rate on those low ranges, hopefully not even on the current ranges. The 3% figure comes from much bigger numbers than we have done yet. My guess is 1-1.5% or so at 5M and under 0.5% on the current double-check numbers.

  8. #48
    Originally posted by OberonBob
    The double check has to be done, and
    Actually, the double check does not need to be done. Seventeen or Bust is not trying to find the smallest prime; we are trying to find any prime. The last time I checked, we hadn't found any errors yet. If the SoB error rate is one in a thousand, then the double check should be approximately 10 times below the first tests (the first-time tests take about 100 times as long with about one-tenth the probability of finding a prime, so in primes per unit of computing they are about one thousand times slower). Shade it a bit for the changes in FFT size, and perhaps the ratio is 8 or 9 instead of 10.
    Poohbah of the search for Odd Perfect Numbers
    http://OddPerfect.org
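
    To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. It assumes a test's cost grows roughly with n squared and a candidate's chance of being prime falls roughly like 1/n; the 5M leading edge, the 500k double-check level, and the 1-in-1000 error rate are the illustrative figures from this thread, not measured project data.

    ```python
    # Rough sketch of the primes-per-unit-of-work comparison in the post
    # above.  Assumptions (not official SoB figures): test cost ~ n^2,
    # chance a candidate is prime ~ 1/n, first-test error rate ~ 1/1000.

    ERROR_RATE = 1.0 / 1000      # assumed chance a first-time test was wrong
    LEADING_EDGE = 5_000_000     # assumed n of current first-time tests
    DOUBLE_CHECK = 500_000       # n of a secret/double-check test (10x lower)

    def cost(n):
        """Relative cost of one test at exponent n (roughly quadratic)."""
        return n ** 2

    def prime_chance(n):
        """Relative chance that a candidate near n is prime (~ 1/n)."""
        return 1.0 / n

    # Primes found per unit of computing for each kind of work.  A double
    # check only "finds" a prime if the first test was in error.
    first_rate = prime_chance(LEADING_EDGE) / cost(LEADING_EDGE)
    dc_rate = ERROR_RATE * prime_chance(DOUBLE_CHECK) / cost(DOUBLE_CHECK)

    print(f"first-time tests : {first_rate:.2e} primes per unit of work")
    print(f"double checks    : {dc_rate:.2e} primes per unit of work")
    print(f"double check / first-time ratio: {dc_rate / first_rate:.2f}")
    # With these assumptions the ratio comes out to about 1.0: a double
    # check 10x below the leading edge is roughly break-even when the error
    # rate is 1 in 1000, i.e. the "one thousand times slower" cancels out.
    ```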

  9. #49
    But this does suggest that there is an optimal level at which to run the double check. So where would that place it?

  10. #50
    Originally posted by Keroberts1
    But this does suggest that there is an optimal level at which to run the double check. So where would that place it?
    As the example shows, the optimal point is a bit above the cube root of the error rate.
    Poohbah of the search for Odd Perfect Numbers
    http://OddPerfect.org
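
    Spelling that out under the same illustrative assumptions (cost roughly n^2, prime chance roughly 1/n): a double check at a fraction f of the leading edge pays off while error_rate / f^3 >= 1, so the break-even fraction is about the cube root of the error rate.

    ```python
    # Illustrative only: break-even double-check level as a fraction of the
    # leading edge, using the same rough scaling assumptions as the sketch
    # above (cost ~ n^2, prime chance ~ 1/n).  Not official SoB figures.

    ERROR_RATE = 1.0 / 1000
    LEADING_EDGE = 5_000_000     # assumed current first-time test level

    # Relative payoff of a double check at fraction f of the leading edge,
    # compared with a first-time test at the leading edge itself:
    #     payoff(f) = ERROR_RATE * (1/f) / f**2 = ERROR_RATE / f**3
    # It stays worthwhile while payoff(f) >= 1, so the break-even fraction
    # is the cube root of the error rate.
    f_break_even = ERROR_RATE ** (1.0 / 3.0)

    print(f"break-even fraction  : {f_break_even:.3f}")                 # ~0.10
    print(f"double-check ceiling : {f_break_even * LEADING_EDGE:,.0f}") # ~500,000
    ```

    That is where the roughly 500,000 figure a few posts below comes from; the FFT-size adjustment mentioned in #48 nudges the optimal fraction a bit above this cube root.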

  11. #51
    Moderator Joe O
    Join Date: Jul 2002
    Location: West Milford, NJ
    Posts: 643
    Originally posted by wblipp
    The last time I checked, we hadn't found any errors yet.
    Where would one check to see if there are any errors?
    Joe O

  12. #52
    Hater of webboards
    Join Date: Feb 2003
    Location: København, Denmark
    Posts: 205
    Originally posted by OberonBob
    I don't know about P-1 factoring, but sieve takes some non-zero amount of effort.
    It's the same for P-1. You have to manually reserve a range, tell the client the range, and report the factors found.

  13. #53
    Moderator ceselb
    Join Date: Jun 2002
    Location: Linkoping, Sweden
    Posts: 224
    Originally posted by Joe O
    Where would one check to see if there are any errors?
    That information isn't available, but Louie can check on the server.

  14. #54
    Moderator Joe O
    Join Date: Jul 2002
    Location: West Milford, NJ
    Posts: 643
    Originally posted by ceselb
    That information isn't available
    My point exactly!
    Joe O

  15. #55
    Junior Member bagleyd
    Join Date: Mar 2003
    Location: Northern New Jersey - US
    Posts: 9
    Hmmm, 19249 has some interesting secret statistics today....

    Oops, I see the number of tests can be zero, and that makes the "checked once" field "n/a".
    Last edited by bagleyd; 12-19-2003 at 12:29 PM.
    Cheers,
    David Bagley

  16. #56
    So the optimal level would be around 500,000?

    Could the server be programmed to assign secret tests under people's names so they could get credit? That way, regardless of how people decide to use their resources, the work would be allocated properly to find as many primes in the least amount of time. I know this has been discussed before in the resource allocation model thread. Although I still believe that would be the best idea, implementing small tricks to boost efficiency can be a big help too.

    Also, since I believe Louie is the only one with error statistics, could you give us some idea of how the error rates run across different-sized tests? I understand that as long as processors aren't overclocked, it's extremely rare to get a misreported test.
    Last edited by Keroberts1; 12-19-2003 at 11:24 AM.

  17. #57
    Yeah, so you mean, like every say 500 tests, a "supersecret" test goes out in the normal client to normal folks and is counted normally? How often would be statistically determined by our assumed error rate and the chances of finding a prime during a double check?

    Yeah, that might be the best way. It would be automatic, it could be programmed to maximize the odds, and folks would get credit. Or, even better, there could be an option in the new client to accept the double-check tests, so that only folks who want them will get them.
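
    As a purely hypothetical sketch of what that could look like on the server side (the class, queue names, and the every-500 constant are invented for illustration; this is not how the actual SoB server is implemented): hand out one double-check test through the normal client for every so many first-time assignments, and credit whoever completes it.

    ```python
    # Hypothetical sketch of the "every say 500 tests, a supersecret test
    # goes out in the normal client" idea.  Names and the interleaving
    # constant are invented for illustration; not actual SoB server code.

    from collections import deque

    class Assigner:
        """Hands out mostly first-time tests, interleaving double checks."""

        def __init__(self, first_time, double_check, every=500):
            # 'every' would in practice be tuned from the assumed error
            # rate and the relative cost of the two kinds of test.
            self.first_time = deque(first_time)
            self.double_check = deque(double_check)
            self.every = every
            self.handed_out = 0

        def next_test(self):
            """Return the next assignment; the user gets credit either way."""
            self.handed_out += 1
            if self.double_check and self.handed_out % self.every == 0:
                return self.double_check.popleft()   # interleaved double check
            if self.first_time:
                return self.first_time.popleft()     # normal first-time test
            return self.double_check.popleft() if self.double_check else None

    # Tiny usage example with fake work queues.
    assigner = Assigner(first_time=[f"first-{i}" for i in range(1500)],
                        double_check=[f"dc-{i}" for i in range(3)])
    handed = [assigner.next_test() for _ in range(1500)]
    print(sum(1 for t in handed if t and t.startswith("dc")), "double checks",
          "out of", len(handed), "assignments")
    ```

    The opt-in idea from the same post would just mean restricting which users the interleaved double checks may be handed to.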

  18. #58
    yup :thumb:

    It would be nice if we could skip the rest of 400,000 to 1,000,000 unless someone thinks that some of those tests have been misreported. As far as I've heard, we haven't gotten a single non-matching residue yet, which tends to imply an error rate much lower than even 1 in a thousand. Perhaps we should skip ahead to a range that's more likely to have had some problems with errors. It does seem pointless to re-test ranges that haven't shown any errors at all yet.

    I think the biggest thing required, if any number of people are ever to run the double check, is that they must receive credit for their work.

    This is a small side question, in case anyone knows: should we start to expect serious problems with the reliability of the client once we get toward much larger n values? Now that we have found a prime and are getting a boost in membership, we'll probably start running through ranges a bit quicker. Should another prime or two be found, we could possibly even surpass GIMPS in the size of the primes we're searching for. I don't know tons about GIMPS, but I believe they have about half as many numbers as us to test before reaching the magic figure of 10 million. However, 3 or 4 more primes would eliminate half of ours and put us in a position to take the lead.

    According to wblipp's models, we should be able to expect 3 more before reaching 10,000,000-digit primes. And we could always get lucky again. It is December again; maybe this'll be a magic month and we'll pull 4 more out in a hurry.
    Last edited by Keroberts1; 12-19-2003 at 03:38 PM.

