
Thread: 11th Prime!!

  1. #1

    Thumbs up 11th Prime!!

    Well, as our hardcore forum members have already noted, we've officially discovered our 11th prime! This one came from second-pass testing and was discovered by our #2 contributor, sturle, of team Busty Seventeen!

    More information will be posted in the near future, so keep your eye on the homepage. There's a good chance that we will focus on second-pass testing for a couple of months so the gap between the two passes doesn't grow so large.

    Louie and I would also like to congratulate everyone on the speculation and analysis of our stats to work out what was going on. This one didn't get past you guys. And yes, we do enjoy reading your speculation posts while this is happening.

  2. #2
    Senior Member
    Join Date
    Dec 2002
    Location
    australia
    Posts
    118

    Congrats and damn

    I thought it might have been one of mine - I went through there a few weeks ago.

    Glad to see effort rewarded though.

  3. #3
    This announcement has me wondering if there is a bug in the client that hasn't been found yet. If two primes in a row have only been found by double-check, that suggests to me that there's a bug or the error rate is getting bigger.

  4. #4
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Quote Originally Posted by tqft
    I thought it might have been one of mine - I went through there a few weeks ago.

    Glad to see effort rewarded though.

    I was hoping it was one of mine too. Well guys, congrats, excellent job. On to the next prime, and let the squabbling begin.


    e

  5. #5
    As I just said:
    A secondpass prime? 'Tis the season, see Riesel!
    But only #10, not the so-hoped-for #6.
    Anyway, excellent news for the project, and congrats to the discoverer and all the SB teams and participants.
    The ultimate goal of the project might be in sight, especially if we are lucky and the low-weight k's (67607 and 22699) come next.

  6. #6
    Unholy Undead Death's Avatar
    Join Date
    Sep 2003
    Location
    Kyiv, Ukraine
    Posts
    907
    Blog Entries
    1
    congrats all!

    maybe now the SIEVING speed increases as well ;^)
    wbr, Me. Dead J. Dona \


  7. #7

    Smile

    Finally! I have been searching for record primes for ten years now.

    I'm not surprised that double checking reveals errors in previous results. While running GIMPS I have excluded almost 10% of all machines for hardware errors. The main reason for failures has been bad memory. On an entire batch of workstations from HP, the RAM module connectors on the mainboard were of poor quality. When the machines started producing errors, it helped to take the memory modules out, rinse the connectors and put the modules back in. Overclocking is risky as well. The new machine you got for Christmas and overclocked could work fine for months, until dust settles in the cooling fins or a fan and the CPU or RAM gets a degree too hot on a warm summer day. Overclocking benefits throughput and gives you nice stats, but harms "goodput". All you need is one single bit error in one CPU cycle, and your result is invisibly reduced to random garbage.

    My team name, by the way, comes from when I tried to convince some friends to run SB and join me in a team. One of them misread the URL and wondered if this was safe for work. After a few laughs we had a team and a name. The picture on the front page is me in a Sherlock Holmes costume from a movie I made with a couple of friends. Long story. I don't usually dress like that or smoke a pipe. :-)

    Congratulations to everyone! Creators, programmers, organizers and everyone who has patiently donated CPU cycles! It is not your contribution at the moment or over the last few days that counts, but your total contribution over a long time. Be patient and eventually you will find a prime as well.

  8. #8
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Congratulations, Sturle! Congratulations to the diligence of those who recognized the importance of double checking, and congratulations to the entire Seventeen or Bust community as well! Congratulations also to George who now has software credits on all 17 of the largest known primes.

    Any interesting history on this discovery? Had the number been tested just once before, or twice? I was surprised at the size of the prime, because I thought that current double checking was a bit further along. I remember a post from a couple of years ago remarking that there seemed to be an unusually high error rate in the 7M-8M range. Keep in mind that even a 10% error rate implies that about 1% of doubly-checked numbers will still not have returned even one correct residue, and therefore require a triple check.

    Let's find another one for the holidays!

  9. #9
    Sceptical Member
    Join Date
    Jun 2004
    Location
    Oslo, Norway
    Posts
    44
    Woohoo!!! Way to go sturle
    Congratulations to all

    Sceptic
    Violently sceptical!

  10. #10

    Misc

    To respond to an earlier message, I believe that if we have a 10% error rate, approx 10% of the numbers will need to be triple checked, and approx 1% will need to be quadruple checked. An error is only detected by a differing result on the second pass, requiring all of these error results to be redone a third time. Of all the redone error results, 10% will have yet another error result on the third pass (i.e. it will not match the first or second pass), requiring a fourth pass on these, etc. [There are also some other cases that must be considered.]
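
    For the curious, here is a back-of-the-envelope Python sketch of the simplified model above. It assumes (purely for illustration) a uniform 10% per-test error rate and that a k/n pair is settled once two results agree; it is not project code.
    Code:
    # Illustrative only: fraction of tests expected to need a 3rd, 4th, ...
    # pass under the simplified "10% of results are wrong" model above.
    err = 0.10                          # assumed per-test error rate
    for extra_pass in range(3, 7):
        frac = err ** (extra_pass - 2)  # all earlier passes failed to settle it
        print(f"fraction needing pass {extra_pass}: ~{frac:.2%}")
    # -> pass 3: ~10%, pass 4: ~1%, pass 5: ~0.1%, ...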

    My question for the forum experts is: how much will this prime speed up progress on the remaining six numbers? Can one assume it will speed them up by 1/6, i.e. 16.7%, or is there a more accurate estimate?

  11. #11
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Another question: for the current second-pass sub-project, is work only done until we get another residue, or is each k/n redone, as described above, until two matching residues are found and the k/n can be put to bed?

    It's been a while since we had a long-running discussion about double-check/secondpass. Let the gurus put forward their ideas for the last six primes, or do we stay as is?

  12. #12
    Quote Originally Posted by jamroga
    My question for the forum experts is: how much will this prime speed up progress on the remaining six numbers? Can one assume it will speed them up by 1/6, i.e. 16.7%, or is there a more accurate estimate?
    A not-so-bad estimate is based on the Proth weights of each k (see the applet at www.brennen.net/primes/ProthWeight.html):
    0.119 for 10223; 0.098 for 21181; 0.043 for 22699; 0.096 for 24737; 0.141 for 55459 and 0.035 for 67607; total 0.532. The weight of 33661 being 0.098, its removal speeds up the progress by a factor of 0.098/0.532 = 18%. Not so far from 1/6, simply because the Proth weight of 33661 is near the average weight of the others.
    But note that this is the raw speed of the project, all other factors being equal. It is not necessarily a good measure of the expected efficiency in the future, i.e. of the expected effort needed to discover new primes. I mean, when a prime is found for a low-weight k (19249 for the previous one, or I hope 22699 or 67607 asap), the raw speed increases the least (least Proth weight) but the future efficiency increases the most, since the project is rid of the obligation of checking that k to potentially very high n's.
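
    For what it's worth, a minimal Python sketch of the arithmetic above (the weights are the figures quoted from the applet, not authoritative values):
    Code:
    # Relative raw speed-up from eliminating k = 33661, using the Proth
    # weights quoted above (illustrative values only).
    remaining = {10223: 0.119, 21181: 0.098, 22699: 0.043,
                 24737: 0.096, 55459: 0.141, 67607: 0.035}
    removed = 0.098                            # weight of the eliminated k = 33661
    total_remaining = sum(remaining.values())  # 0.532
    print(f"raw speed-up: {removed / total_remaining:.0%}")  # -> 18%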

  13. #13
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    :yawn:

    goodluck guys...

  14. #14
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Here's a real statistic that indicates how many tests have been issued to date, and how many now need to be issued.

    So going from 12M to 13M took 20660 tests, while 15M to 16M will take no more than 16309.
    Code:
                    First time PRP tests   Double Check PRP tests           
                         issued (to be)     issued (to be)
     0M<n< 1M:      40262 (    0)      33521 (    0)
     1M<n< 2M:      42674 (    0)      27282 (    0)
     2M<n< 3M:      40803 (    0)      24700 (    0)
     3M<n< 4M:      36033 (    0)      23227 (    0)
     4M<n< 5M:      34189 (    0)      20924 (    0)
     5M<n< 6M:      30416 (    0)      21182 (    0)
     6M<n< 7M:      28548 (    0)      20802 (    0)
     7M<n< 8M:      27982 (    0)        698 (15827)
     8M<n< 9M:      26356 (    0)         22 (16549) 
     9M<n<10M:      24689 (    0)         19 (16429) 
    10M<n<11M:      20794 (    0)         12 (16353) 
    11M<n<12M:      20799 (    0)         16 (16293) 
    12M<n<13M:      20660 (    0)         10 (16338) 
    13M<n<14M:      19957 (    0)          1 (16391) 
    14M<n<15M:       7451 (10054)          0 ( 6418)
    15M<n<16M:          0 (16309)          0 (    0)    
    16M<n<17M:          0 (16572)          0 (    0)    
    17M<n<18M:          0 (16320)          0 (    0)    
    18M<n<19M:          0 (16781)          0 (    0)    
    19M<n<20M:          0 (16525)          0 (    0)

  15. #15
    Quote Originally Posted by vjs
    :yawn:

    goodluck guys...


    Quote Originally Posted by MikeH
    Here's a real statistic that indicates how many tests have been issued to date, and how many now need to be issued.

    So going from 12M to 13M took 20660 tests, while 15M to 16M will take no more than 16309.
    Thanks, Mike. Food for thought. Where are such statistics available?
    It would help us be more specific and contribute, where we can, to what might be truly relevant. Otherwise some generalities, even if rehashed, are I think better than being asleep...
    OK, about specifics, discussed in another thread but left somewhat unconcluded: would letting a user choose a k (e.g. the lovely 67607) be better for the project, and why?

  16. #16
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Quote Originally Posted by MikeH
    So going from 12M to 13M took 20660 tests, while 15M to 16M will take no more than 16309.
    Nice data, Mike. By way of comparison, GIMPS did around 22500 tests between 15M and 16M, so we are definitely in sparser territory. It looks like 55459 is now accounting for over a quarter of all our tests.

  17. #17
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Thanks, Mike. Food for thought. Where are such statistics available?
    First set of data is on http://www.henleyclan.co.uk/sobsieve/2007/scores_p.htm. To make things easy I stripped off the "saved" and "not saved" columns, because they refer to sieving and factoring, and I guessed that many people reading this won't be interested in that.

  18. #18
    Very interesting. From that it's possible to estimate the mean number of primes and the n for which there is a 50% probability of finding a prime. A rough estimate of the probability that a candidate surviving the sieve is prime is p_pr = 2*ln(S)/ln(N) (S: sieve limit; N = k*2^n+1: candidate), so for a sieve at 2^50, p_pr = 100/n [please correct if wrong]. The estimated mean number of primes is then 100*sum(1/n) over a set of 92000 candidates on firstpass (14.4-20M) and 121000 candidates on secondpass (7.1-14.4M). This leads to mean numbers of 0.54 for firstpass and 1.18 for secondpass. So:
    - the probability of finding a firstpass prime below 20M is ~42% (a little more, since not all firstpass tests below 14.4M are completed), and a 50% probability would lie around 22M;
    - the probability of finding a secondpass prime depends on the "error rate"; for a uniform 10% error (i.e. 90% of the 7.1-14.4M tests confidently show no prime) this probability would be 1-exp(-0.118) = 11%. A very rough estimate though, since the error rate seems to depend strongly on the client version, doesn't it?
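
    A minimal Python sketch of the estimate above (the candidate counts, the 2^50 sieve depth and the 10% error figure are my assumptions from this post, not official project numbers):
    Code:
    import math

    def expected_primes(n_lo, n_hi, candidates):
        # Each surviving candidate k*2^n+1 is prime with probability ~100/n
        # for a sieve depth of 2^50; average 1/n over a uniform n-range.
        avg_inv_n = math.log(n_hi / n_lo) / (n_hi - n_lo)
        return 100 * candidates * avg_inv_n

    first = expected_primes(14.4e6, 20e6, 92000)     # ~0.54
    second = expected_primes(7.1e6, 14.4e6, 121000)  # ~1.18
    print(f"P(firstpass prime below 20M)        ~ {1 - math.exp(-first):.0%}")           # ~42%
    print(f"P(secondpass prime, 10% error rate) ~ {1 - math.exp(-0.1 * second):.0%}")     # ~11%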

    BTW, the most recent stats observations seem to show that the client just switched to secondpass (max n stuck at 14410K for all tests for 2 days, # of secondpass tests growing from 20 to 100 per k, ...). Am I right? [I can't check directly; I just have one running test on one PC.]
    Last edited by Zuzu; 11-05-2007 at 08:55 AM.

  19. #19
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Quote Originally Posted by Zuzu
    BTW, the most recent stats observations seem to show that the client just switched to secondpass (max n stuck at 14410K for all tests for 2 days, # of secondpass tests growing from 20 to 100 per k, ...). Am I right? [I can't check directly; I just have one running test on one PC.]
    Your favorite stat page seems to indicate that we have indeed switched to second pass testing.
    Joe O

  20. #20
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    The other very telltale sign...

    The stats for each k.

    http://www.seventeenorbust.com/stats....mhtml?k=67607

    If you look at the graph you can easily see that the number of secondpass tests has grown substantially. This is usually fewer than 50 tests per k.

    I will poke my head out and make a few suggestions... finding SOAP BOX




    These are suggestions I have made in the past we will see if they are followed.

    First,

    Don't stop the secondpass until at least 8.5M; I could try to find the post from at least a year back when this was suggested. The reason for the suggestion was the flaky client.

    I'm fairly certain that we will continue secondpass to at least 9M this time around, so this shouldn't be an issue.

    What are my recommendations this time, if I were in charge...

    Run secondpass to 11M. Why?
    - there is a fairly large gap in primes from 27653•2^9167433+1 to 19249•2^13018586+1, and no other decent reason.

    But what is the optimal solution, IMHO??? Or, How should we do this...

    1. First, run all k's to 8.5M; this would eliminate the flaky client issue.
    (this is a repointing of the firstpass queue, which has been done)

    2. Second, run one dense k or two light k's up to 14M (this is a partial response to the k=67607 question above).
    (this can be done by populating the "garbage" or whatever queue with only those k's and making it highest priority)
    If we get a prime before that k reaches 14M, start with another k, from 8.5M up to 14M.
    = what this does is establish a known error rate based upon the n-value of the test.
    = Analysis of the mismatched residues:
    - why are they mismatched?
    - server error, particular user, some unknown cheat, bad client, etc...

    3. Third, retest all n's for those users or ranges of n that are questionable.
    (Again, selective population of the garbage queue)

    4. Fourth, make sure all k/n's below 8.5M are tested twice (I know this sounds silly, but do it anyway); if there are two tests pending, assign a third (shouldn't be more than a few hundred tests).

    5. Fifth, decide to what level the secondpass should be pushed, based on 2 and 3, while considering the 6th suggestion.

    6. Sixth, start using the LLR client. PSP is currently using the LLR client, which is compatible with BOINC. Release the new client using LLR and point it to a new firstpass queue. Let this new client start pulling tests >14.4M.

    7. Direct all old clients onto the secondpass queue.

    8. Create BOINC as a specific user and keep its stats separate.
    - people will be able to join BOINC if they wish, or keep on with the new client and their old stats.
    - new BOINC people can see how they rank against other BOINC people. SoB will never have to worry about stats again for BOINC people.

    ------------

    Anyway, comments or suggestions?


    BTW, Zuzu thanks for the probabilities.

  21. #21
    Propositions 6-8 are nice and shiny, but did you think about who is going to do the rest of the doublechecks? Once BOINC is in place, there is a good chance that nobody will want to use the old client, and no way to push people to do doublechecks.

    And finally, why BOINC and not LLRnet? Easier to set up, I guess. And SoB@BOINC would just drain resources from the other nice BOINC projects. I think it is good that there are still non-BOINC projects out there.

    My first concern would be to push sieving to the utmost threshold of computability.

    And yes, DC is nice.

    H.-
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  22. #22
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    hhh,

    Yes, I thought about who was going to do the rest of the doublechecks; good catch.

    I would almost insist we leave a small gap for the old client, since I am sure there are a few computers out there running the SoB client unattended. I personally have two set on doublecheck: one I no longer have access to, the other I have only extremely limited access to.

    As far as LLRnet goes... yes, that would be fine as well. My thought would be a BOINC setup based on LLRnet, which is what PSP is going to do as far as I know.

    The major push at this point is to move away from the older PRP client, BOINC or not, to purely LLR-based testing, while at the same time limiting the number of tests which would need a second test with LLR, followed by a doublecheck with LLR. (You know that's going to happen eventually for some k/n's.)

    Note: The above suggestions were made on Nov 5, 2007.

  23. #23
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Well, vjs, you are the one who first mentioned the high error rate between 7M and 8M, which is enough right there to justify some second-pass work. Louie tells me that the latest prime was originally tested with version 1.1, which was similar to version 1.2. Version 2.5 came out just over two years ago when we were just short of 10M in firstpass testing, and I seem to recall some adjusting of the FFT boundaries between 2.3 and 2.4. On the other hand, we probably don't want to do secondpass for so long that we lose participants. Personally, I don't find the gap between 9167433 and 13018586 all that compelling, as we would expect the discoveries to thin out at larger n's anyway. However, we really can't discount the possibility of another missed prime, and I would like to see secondpass work at least keep up to exponents half the size of firstpass, maybe even a little bit larger.

    Testing with LLR is a good suggestion, not that it would be any faster than the current PRP client, as both use the same code from George Woltman, but LLR does a Proth test rather than a PRP test. However, we would probably require a special version that does one additional squaring at the end of the computation, the reason being that this would make the residues from LLR compatible with the current PRP residues for the purposes of double-checking. On the other hand, I don't see any big disadvantage in sticking with PRP, as any new discovery needs to be double-checked anyway. I'm just curious why you think that LLR would be that big of an improvement.

  24. #24
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Phil,

    My major reason for switching the project over to LLR is compatibility with PSP. If SoB ever decides to go with BOINC, PSP has done more than half the work to make that happen.

    Odd, it's my understanding that the LLR client is faster than the PRP version and that the residues are not compatible. I know the older PRP clients are not LLR residue compatible, and I do see an increase in speed between the PRP and LLR versions of Prime95 on my quad-core.

    I do agree that the gap between 9167433 and 13018586 is not that compelling, but considering we already missed 2…

    On losing participants: the major reason for the potential loss of participants is not the fact that we are doing secondpass testing. The reason is the discrepancy in scoring.

    I would ask for your opinion on testing one or two k's out to a fairly high n value. Do you think it is worth establishing a known error rate for the project between the current second-pass level and first pass?

  25. #25
    Senior Member
    Join Date
    Dec 2002
    Location
    australia
    Posts
    118

    Yes

    To calculate an optimal 1st/2nd pass mix we need to know the error rate, to estimate the costs and benefits.

    This is only beneficial if the powers that be decide to follow the optimal mix calculated.

    Unless we already have a good enough estimate of the error rate, pick the biggest bang-for-buck k and crunch it out.

    How accurate does the error estimate need to be to be fully useful? Within 10%, 50%, or just out by less than an order of magnitude?

  26. #26
    Quote Originally Posted by tqft
    To calculate an optimal 1st/2nd pass mix we need to know the error rate, to estimate the costs and benefits.

    This is only beneficial if the powers that be decide to follow the optimal mix calculated.

    Unless we already have a good enough estimate of the error rate, pick the biggest bang-for-buck k and crunch it out.

    How accurate does the error estimate need to be to be fully useful? Within 10%, 50%, or just out by less than an order of magnitude?
    In order to estimate the mean number of missed primes, you have to estimate the mean error rate on a logarithmic scale and multiply it by the estimated number of primes in the interval (1.18 on 7.1-14.4M before testing). The mean error rate is given by the expression:
    1/ln(nmax/nmin) * integral from nmin to nmax of e(n) dn/n,
    where e(n) is the error rate near the exponent n. For instance, if the error rate is estimated at 30% on the 7.1-8.5M interval (flaky client) and 5% on 8.5-14.4M (better client), then the mean number of missed primes is 1.18*(0.3*ln(8.5/7.1)+0.05*ln(14.4/8.5))/ln(14.4/7.1) = 0.134. The probability of finding a missed prime is thus 12.5%. For a 100% error rate on 7.1-8.5M and 0% on 8.5-14.4M this probability rises to 26%.
    OTOH, having made estimates on future primes, I realize that the SB project has advanced to the point where the real issue now is the next prime, whose k and n values are decisive (good luck = before 20M with k=67607/22699; bad luck = after 30M with k=55459) for the expected intervals of further primes; thus no effort should be spared, and unless the estimate of missed primes is minuscule (< 1%), secondpass testing is worth the effort.
    In that sense the SB project is different from Riesel, where the high number of candidates (66 k's) makes a "statistical treatment" (expect to find x primes in a [y,z] interval) more meaningful. The PSP project lies in between, similar to the SB project in 2003.
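
    A minimal Python sketch of the calculation above (the 30%/5% error rates and the 1.18 expected-prime figure are my assumptions from this thread, not measured project values):
    Code:
    import math

    def p_missed_prime(segments, expected_primes, n_lo, n_hi):
        # segments: list of (lo, hi, error_rate) covering [n_lo, n_hi];
        # the mean error rate is weighted on a logarithmic scale, as above.
        mean_err = sum(e * math.log(hi / lo) for lo, hi, e in segments) / math.log(n_hi / n_lo)
        return 1 - math.exp(-expected_primes * mean_err)

    # 30% error below 8.5M (flaky client), 5% above -> ~12.5%
    print(p_missed_prime([(7.1, 8.5, 0.30), (8.5, 14.4, 0.05)], 1.18, 7.1, 14.4))
    # 100% error below 8.5M, 0% above -> ~26%
    print(p_missed_prime([(7.1, 8.5, 1.00), (8.5, 14.4, 0.00)], 1.18, 7.1, 14.4))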

  27. #27
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Quote Originally Posted by vjs
    Odd, it's my understanding that the LLR client is faster than the PRP version and that the residues are not compatible. I know the older PRP clients are not LLR residue compatible, and I do see an increase in speed between the PRP and LLR versions of Prime95 on my quad-core.
    This surprises me, that you find LLR faster, since my understanding is that LLR and PRP use the same FFT routines. Does anyone else have additional timings that might clarify this?

    The residues are not compatible, but they are almost compatible. To test t=k*2^n+1 for primality, PRP checks whether 3^(k*2^n) mod t is equal to 1. LLR checks whether 3^(k*2^(n-1)) mod t is equal to -1. (Actually, if k is divisible by 3, LLR would use a different base, but for the candidates being tested by PSP and SB, I believe that LLR would use the smallest suitable base 3. We should probably double-check with Jean Penne' on this.) The LLR result can be squared to give the PRP result, i.e.,
    [3^(k*2^(n-1))]^2 = 3^(k*2^n).
    If LLR could be modified to report the 64-bit residue of this result, we could do double-checks with LLR of results that were originally tested with PRP.
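
    A tiny Python illustration of that relationship, on a toy candidate (k=5, n=3, so t = 5*2^3 + 1 = 41, which happens to be prime); this is just an example, not the project's code:
    Code:
    # Toy demonstration: squaring the LLR/Proth result gives the PRP result.
    k, n = 5, 3
    t = k * 2**n + 1                        # 41

    prp_res = pow(3, k * 2**n, t)           # Fermat PRP check: 1 if t is prime
    llr_res = pow(3, k * 2**(n - 1), t)     # Proth/LLR check: t-1 (i.e. -1) if prime

    print(prp_res, llr_res)                 # -> 1 40
    assert pow(llr_res, 2, t) == prp_res    # squaring the LLR result reproduces PRP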

    Quote Originally Posted by vjs
    I would ask for your opinion on testing one or two k's out to a fairly high n value. Do you think it is worth establishing a known error rate for the project between the current second-pass level and first pass?
    Current second-pass error rates would be sufficient to tell us whether or not second pass is at an optimum level, but if we are below optimum levels, some information on known error rates at higher levels would help determine what the optimum level should be. So yes, I think what you are suggesting is worthwhile, but I don't know if it needs to be extended all the way to the current first-pass level.

  28. #28
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    I don't have exact information on LLR vs PRP test times; I could easily run two concurrent tests, one with PRP and one with LLR, to see the time difference. However, I'm sure this must be known; PSP switched from PRP to LLR back when they were testing around 1.5M.

    I know hhh reads this forum... hhh, was there a speed increase with LLR?

    Back to secondpass testing for one k. I wouldn't say that running secondpass for one k up to the first-pass level is overkill. What I would say is: depending on the error rate versus n as we approach firstpass, it may in fact be necessary to test up to or very close to firstpass. For example, an increasing error rate reaching 25% as n approaches 12M. If that were the case, then I would say yes, we should test to see what the error rate is up to 14M.

    Think of it this way...

    There are fewer than 1100 tests per 1M n-range for k=67607; it would only take about 7000 tests to reach firstpass for that one k. This could easily be done with project-wide resources in less than one month.

    Doing this one k would give a project-wide analysis of the error rate and point out any corrupt user/computer. (This assumes the error rates for all k are equal; we could base this on existing information.)

    Then targeted doublechecks based on user or n-range are more than likely what we need, compared to project-wide secondpass testing.

  29. #29
    Quote Originally Posted by vjs
    Back to secondpass testing for one k. I wouldn't say that running secondpass for one k up to the first-pass level is overkill. What I would say is: depending on the error rate versus n as we approach firstpass, it may in fact be necessary to test up to or very close to firstpass. For example, an increasing error rate reaching 25% as n approaches 12M. If that were the case, then I would say yes, we should test to see what the error rate is up to 14M.

    Think of it this way...

    There are fewer than 1100 tests per 1M n-range for k=67607; it would only take about 7000 tests to reach firstpass for that one k. This could easily be done with project-wide resources in less than one month.

    Doing this one k would give a project-wide analysis of the error rate and point out any corrupt user/computer. (This assumes the error rates for all k are equal; we could base this on existing information.)

    Then targeted doublechecks based on user or n-range are more than likely what we need, compared to project-wide secondpass testing.
    I think it would be good to have an error-checking capability in the client, like P95 already has, to see if a machine is capable of producing good work.

  30. #30
    Quote Originally Posted by vjs
    I know hhh reads this forum... hhh, was there a speed increase with LLR?
    I think that was before my time. No idea. You will have to ask Lars.

    H-
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.
