
Thread: PrimeGrid Alliance

  1. #1

    PrimeGrid Alliance

    I was speaking with Rytis @ PrimeGrid today and we agreed in principle to create an SB project on PrimeGrid to run primality tests.

    I think it can be a huge win-win for both our projects: it will offer PrimeGrid users a new class of work to do, and it will offer SB users who are more comfortable with BOINC, or who are interested in competing in PG challenges, a way to do SB tests.

    In general, I'd like to make integration as smooth as possible so I'm looking for any feedback you guys have about how we might approach doing that. I think a painless, simple integration would be preferred to a strictly "optimal" one in most cases but I'm open to hearing any suggestions. Should we put ranges of "n" on PG or possibly an entire k-value? Maybe all n > 20M?

    Are you guys interested in using BOINC? Are there benefits or drawbacks that I am not considering?

    Rytis and I discussed how we could share credit for discoveries.

    Quote Originally Posted by Rytis
    The way we had it with other projects where we joined after their start was: shared credit if PG finds a prime, otherwise credit goes to SB until PG reaches a specific percentage of total work done
    I think this is eminently fair. What are you guys' thoughts? Personally, I'd rather do whatever is best to solve the problem. If working with PG gives more people the opportunity to do tests that lead to solving the Sierpinski problem, I have no problem sharing credit with those who help facilitate the discovery.

    Another interesting topic is double-checking.

    Quote Originally Posted by Rytis
    as for doublechecking, I can offer you 3 options:
    a) Full doublecheck. Each number is fully doublechecked, requiring residues to match
    b) No doublecheck. You do your own doublecheck, I give you the residues
    c) "Smart" doublecheck. A number is not doublechecked if it comes from a computer that has been previously submitting correct residues
    What are you guys' thoughts on double-checking on PG?
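
    To make option (c) a bit more concrete, here is a rough sketch of what a "smart" doublecheck decision could look like on the server side. This is purely illustrative: the field names and the trust threshold are hypothetical, not PrimeGrid's actual validator logic.

        from collections import namedtuple

        # Hypothetical per-host record; not PrimeGrid's real schema.
        Host = namedtuple("Host", "validated_results mismatched_results")

        def needs_doublecheck(host, policy="smart"):
            """Decide whether a result from this host needs a second, independent test."""
            if policy == "full":    # option (a): every number is fully doublechecked
                return True
            if policy == "none":    # option (b): residues handed back, SB doublechecks itself
                return False
            # option (c): trust hosts with a long history of correct residues
            trusted = host.validated_results >= 100 and host.mismatched_results == 0
            return not trusted

        print(needs_doublecheck(Host(500, 0)))   # False: trusted host, skip the second test
        print(needs_doublecheck(Host(10, 0)))    # True:  not enough history yet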

    What about other issues? It's all fair game. I know there are lots of intelligent folks here on the forum so I could use all your opinions on how best to approach a new alliance like this.


    Cheers,
    Louie

  2. #2
    Senior Member Frodo42's Avatar
    Join Date
    Nov 2002
    Location
    Jutland, Denmark
    Posts
    299
    Nice to see something happening whatever it is.
    I just started crunching SB again after getting myself an i7 to replace the old corrupted P4, but I might stop again ... my power bill is going wild and my environmental conscience is evaporating ... but oh well, I'm finishing a few tests again for now.

    Without being able to give any really sensible reasoning, these are my humble thoughts on this:
    - I think it would be nice if we can somehow get some extra computing power to the project
    - I would be sad to see SB become something like a BOINC sub-project and kind of lose its independence ... so handing over only some of the work to BOINC might be a good thing; my first thought is that a range would be better than an entire k-value, because what happens if SoB finishes all the others and BOINC is left with the last standing k?

    Anyhow just my 5 cents.

  3. #3
    Would this somehow utilise the prime95 client (or mprime) from within BOINC? I think it sounds good; since it seems to me that the majority of DC projects are now moving over to BOINC, it would be a good idea to offer users the opportunity to earn BOINC credit for doing SoB work (apart from sieving, which they can do already!).



  4. #4
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    The smart double check seems good; how is that implemented on the server side?

    As far as a whole K or a set range, it would have to be decided by the majority I hope.

  5. #5
    I think double checks are the way to go initially. If that works out, then maybe something more.

  6. #6
    I don't know much about what happens "behind-the-scenes" with the project, but my first thought is I am pretty wary of the idea of "smart" double checks, especially since two of our past discoveries have been a result of double checks. Just because a machine has a history of submitting good results doesn't mean that there won't be a fluke or the computer won't suddenly start returning bad results.

    As for BOINC, I really like the software, and I like PrimeGrid - I occasionally run a few PSPSieve work units over there. On the other hand, I also like SoB's independence and community, and I'd hate to see us get "swallowed up" by PrimeGrid - although the potential for extra computing power is hard to say no to. I know their challenges give a big boost to the projects.

    Would SoB tests be given out along with PSP tests like it is for sieving on BOINC, or would we be a separate project with separate tests? I would greatly prefer a separate project (although, honestly, I would probably continue to use prime95/mprime on most of my machines for the time being, regardless)

    I guess ultimately, I would support whatever increases the work rate of SoB, which I think is a no-brainer in support of PG/BOINC - but I'm not the one that has to worry about setting all this up, feeding it work, and processing/tracking the results.

  7. #7
    I agree with everyone that SB should maintain its independence and keep processing work. That's why I'm suggesting a PrimeGrid Alliance / PrimeGrid Partnership... not a PrimeGrid No-Holds-Barred Takeover!!

    They are a friendly "competitor" of ours with a great distributed platform that can easily run our tests, and it has a few distinct advantages for some users. Using BOINC is easy, and some people are already familiar with BOINC and prefer using it to other distributed computing clients. It has auto-update and a few other nice features going for it. Some people also prefer BOINC / PG credit. They have a good challenge series too.

    I suspect most people will continue to run SB directly. If I'm wrong and everyone switches to PG, I'm sure it would be for a good reason, like PG having a better platform. If that happened, it would obviously be better for our users (or they wouldn't have switched) and would likely mean more effort was going towards discovering our primes. So even in that "doomsday" scenario where PG turned out to be so much better for users that everyone switched, I'd be cool with it, precisely because it would be so much better for the users. What's good for SB users is good for SB. Even running SB through PG.

    My current plan is to chunk out n = 17M - 17.2M. There will be joint credit if any primes are found and PG will do full double checking.


    Thanks for all the feedback and let me know if you have any other ideas or suggestions. And let me know too if you end up using their client and have feedback once things get under way.

    Cheers,
    Louie

  8. #8
    So will credit for work be either PG or SoB, or will work done with BOINC also show up on the SoB stats page?

    I already have BOINC installed for running PSPSieve, so once we get set up through PG, I'll run a few tests and see how it goes.

  9. #9
    Quote Originally Posted by enderak View Post
    So will credit for work be either PG or SoB, or will work done with BOINC also show up on the SoB stats page?
    Scoring will be separate for now.


    Cheers,
    Louie

  10. #10
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    The only way to do this realistically is the way it's done with PSP.

    Talk to Lars...

    My personal suggestion is to give them the heaviest k; the chances of it being the last one are the lowest.
    Once a prime is found for it, we give another k to PG.

    As far as credit goes, users will always be given some credit (there is no monetary prize with SoB AFAIK).

    Full doublecheck is really the only way to go with PG: both tests get full PG credit and testers share credit for the discovery.

    If you still don't want to give them a complete k, a full double check is the only acceptable answer from PG; this way that n range is done/complete/gone/never needs testing again.

  11. #11
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    My vote is for chunking out an n range. Then there is no scramble when a prime is discovered. The single k can quietly be removed from the active set.
    If PG is working on a single k then you either have to continue working on that k, which wastes resources; or drop that k and start on another, which gives away the fact that a prime has been discovered.
    Joe O

  12. #12
    Former QueueMaster Ken_g6[TA]'s Avatar
    Join Date
    Nov 2002
    Location
    Colorado
    Posts
    184
    I completely disagree with vjs. The goal of SoB is to find a prime for each k. Not the lowest prime that can be found, but the fastest. That's why the doublecheck here lags the main n range: if the main range finds a prime, there's no need to do more doublechecks for that k. I really like the smart doublechecking idea; although it would make doublechecking the whole range more pointless, I'm sure it would take a while for doublechecking to reach that size.

    If it's decided that doublechecking needs to go to PrimeGrid at some point, you could either set up a new application for it, or inject new work units with a pseudo-user who has already done the work unit but needs doublechecking, as explained here.

    Joe's probably right about the n range. I like how he says "when a prime is discovered", not "if".
    Proud member of the friendliest team around, Team Anandtech!
    The Queue is dead! (Or not needed.) Long Live George Woltman!

  13. #13
    While I realize that finding the smallest prime for each k is not strictly necessary to solve the Sierpinski problem, I think there is a benefit to being able to say that the prime we find for a particular k is the lowest prime that can be found. If we find a prime, the next logical question (in my mind at least) is then "is that the smallest prime for that k?". Without complete double-checks, we can't answer that question.

    In my opinion, leaving that question open would be a huge oversight of the project - whether we tackle that problem as we go along or we go back and solve it after we solve the Sierpinski problem...

  14. #14
    Senior Member engracio's Avatar
    Join Date
    Jun 2004
    Location
    Illinois
    Posts
    237
    Quote Originally Posted by Ken_g6[TA] View Post
    I completely disagree with vjs. The goal of SoB is to find a prime for each k. Not the lowest prime that can be found, but the fastest. That's why the doublecheck here lags the main n range: if the main range finds a prime, there's no need to do more doublechecks for that k. I really like the smart doublechecking idea; although it would make doublechecking the whole range more pointless, I'm sure it would take a while for doublechecking to reach that size.

    If it's decided that doublechecking needs to go to PrimeGrid at some point, you could either set up a new application for it, or inject new work units with a pseudo-user who has already done the work unit but needs doublechecking, as explained here.

    Joe's probably right about the n range. I like how he says "when a prime is discovered", not "if".

  15. #15
    I think this is cool - it doesn't seem to have hurt PSP to have some work done by PrimeGrid while continuing on its own independent way.

    BTW, having been absent from here for a while: I never deliberately decided to stop doing SoB. The new client was a non-starter on some of the machines I had available, then I changed jobs, and ... and ... etc ... and I never got back round to it. I'll probably start up again, but via PrimeGrid, as it's less hassle for me.

    [Edit: Separate scoring is the way things have been done in the past for such things, including PrimeGrid and YoYo@Home - I don't know of any counter-examples off-hand]
    Last edited by Vato; 09-16-2009 at 05:22 PM.

  16. #16
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Quote Originally Posted by jjjjL View Post
    Scoring will be separate for now.
    Cheers, Louie
    Whatever will induce more people to crunch for the project seems to make the most sense to me.

  17. #17
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    AFAIK, with PSP there is some problem with residuals from PG... perhaps this has changed?? (It's either not gathering/reporting the residuals back, or the residuals are not compatible.) If this could be changed, the best method is to simply assign PG the lowest possible n's as we go along.

    I don't see a good reason why they can't run with a k. The goal of SoB is to eliminate all k's, not necessarily to find the lowest prime. I don't necessarily think it will be a scramble once they find a k's prime; it will be the same as we do here. Some tests are done above that n level for the k, etc. There is always a lag and extra work done when primes are found; we have no way of cancelling pending tests due to primes or factors.

    Also, the reason why the double check lags is not because we want to find the lowest n; that's a totally different topic. The reason for a lagging n is a balancing act between test time, decreasing prime density with increasing n, and the error rate.

    Assigning an n-range is probably the best way to go, BUT!!! we have to match residuals eventually.

  18. #18
    This probably isn't exactly the right place to post this, but part of it might follow the theme. I have a few questions about how the project is progressing, what some of the statistics mean, stuff like that. I'm more interested in questions 2 and 2b if the post looks too long to bother with the whole thing. Thanks in advance for any feedback.

    1. In the sieving status on PrimeGrid's page it says we're working in the 54-55P range. I understand that 54P is 54x10^15, but what does that number represent? It's way too small to be the value of the candidates (k*2^n+1 values) we're excluding, and it's way too big to be our exponent. Is it numbers of this size that we're multiplying together and hoping to end up with one of our candidates? I guess this question kind of relates back to how sieving works. I understand the basic principle (take a prime and cancel out its multiples), but how does sieving work at our scale? Are we really taking a number in the 54P range and crossing out larger multiples of it, hoping we hit one of our candidates?

    2. Are all our candidates probable primes? If not, is a probable prime test any quicker than a Proth test? If it was 10% quicker and could only eliminate 50% of our candidates (I'd think it'd eliminate more if it hasn't been done), and assuming my computer is fairly average in taking 2 weeks to finish a test, it'd save almost a day and a half per test on the tests it completes, but it'd require double tests on the other half, so that wouldn't be worthwhile. But if it was 10% quicker and eliminated 90% of our tests, we'd break even, and with a better number on either, we'd come out ahead. The other question that needs to be asked is: can a probable prime test report a false negative? In other words, does a prime number going through a probable prime test ever report out as NOT being a probable prime? (The break-even arithmetic here is written out in a short sketch at the end of this post.)

    2b. If probable prime testing COULD help the project, kind of like a second sieve, could we have PrimeGrid do our probable prime testing, and then keep our in-house testing doing the Proth tests for the probable primes that PrimeGrid spits out? To keep PrimeGrid interested, we could give shared prime-finding credit to the PrimeGrid user that marked it as a probable prime and the SB user doing the Proth test.

    There are probably a bunch of reasons this doesn't make sense, but I'm just curious.
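
    To write out the break-even arithmetic from question 2 explicitly, here is a minimal sketch. The 10% speedup and the elimination rates are the hypothetical figures from the question, not measured numbers, and (as the replies below explain) a probable prime test is not actually faster than a Proth test:

        # Relative cost per candidate of a hypothetical quicker prescreen test,
        # measured against just running the full Proth test on everything (= 1.0).
        def relative_cost(prescreen_cost, elimination_rate):
            # Every candidate gets the quick test; survivors still need the full test.
            return prescreen_cost + (1.0 - elimination_rate)

        print(relative_cost(0.9, 0.5))  # 1.4: 40% more total work, not worthwhile
        print(relative_cost(0.9, 0.9))  # 1.0: break even, as stated in the question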

  19. #19
    Quote Originally Posted by wolfemancs View Post
    1. In the sieving status on PrimeGrid's page it says we're working in the 54-55P range. I understand that 54P is 54x10^15, but what does that number represent?
    Every candidate is divided by all primes in ascending order; those primes have now reached 54x10^15. No remaining candidate is divisible by a prime smaller than that.

    Quote Originally Posted by wolfemancs View Post
    2. Are all our candidates probable primes? If not, is a probable prime test any quicker than a Proth test? If it was 10% quicker and could only eliminate 50% of our candidates (I'd think it'd eliminate more if it hasn't been done), and assuming my computer is fairly average in taking 2 weeks to finish a test, it'd save almost a day and a half per test on the tests it completes, but it'd require double tests on the other half, so that wouldn't be worthwhile. But if it was 10% quicker and eliminated 90% of our tests, we'd break even, and with a better number on either, we'd come out ahead. The other question that needs to be asked is: can a probable prime test report a false negative? In other words, does a prime number going through a probable prime test ever report out as NOT being a probable prime?
    No, our candidates aren't probable primes. They just have no small factor.
    The fastest tests are being done; those are deterministic now, I think. Basically every probable prime found is prime (like 99.999%), so if there were a faster probable prime test, it would be used for sure. And every prime passes every probable prime test.

  20. #20
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Thommy3 has great answers, but let's make them even simpler.

    1. Yes, you're correct.

    If some number is not prime, it must be divisible by a smaller number...

    Sieving checks these smaller numbers over the whole set of data at once.

    Example:

    Check all k/n pairs and cancel out those divisible by 2, then 3, then 5, then 7, then 11, etc... We are now up to about 55,000,000,000,000,000.
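
    A toy version of that idea, just to make the mechanics concrete. This is purely illustrative; the real sieve software is vastly more efficient and has reached depths around 54x10^15:

        # Toy sieve for k*2^n + 1 candidates (illustration only).
        def small_primes(limit):
            """Primes up to limit by trial division (fine for a toy example)."""
            primes = []
            for m in range(2, limit + 1):
                if all(m % p for p in primes):
                    primes.append(m)
            return primes

        def toy_sieve(ks, n_lo, n_hi, prime_limit):
            candidates = {(k, n) for k in ks for n in range(n_lo, n_hi + 1)}
            for p in small_primes(prime_limit):
                for (k, n) in list(candidates):
                    value = k * pow(2, n, p) + 1          # k*2^n + 1 modulo p
                    if value % p == 0 and k * 2**n + 1 != p:
                        candidates.discard((k, n))        # p divides it: not prime
            return candidates

        # Which k*2^n + 1 with these k and small n survive sieving up to 100?
        print(sorted(toy_sieve([21181, 22699], 1, 20, 100)))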

    As for questions 2 and 2b... (vague, but this might help)

    Let's go on the assumption that the project gurus and math gurus are using the best and fastest method. The double check is there because of one question: how do you know the computer computed correctly? The program will do the job correctly the first time; the computer might not... even a Proth test is run twice to confirm a prime.

  21. #21
    1. Ok. So basically sieving is the counterpart to trial factoring. Instead of dividing each of our numbers by primes and checking for an integer result, we're multiplying primes by large numbers and seeing if we get one of our candidate numbers. Neat.

    2 and 2b. Ok. I was under the impression that for a probable prime test to get to the 99.99...% probable level, it took showing that a number was a probable prime for multiple bases, and I thought that if we just checked one or maybe 2 bases, we could get 90 or maybe 99% confidence out of it, and that those tests might be quicker than a full Proth test. I guess their numbers are probably quite a bit smaller, but I was under the impression that the "5 or Bust" (Dual Sierpinski Problem) tests took significantly less time than ours do, and they're doing full multiple-base probable prime tests. The partial probable prime test was the only part of my thought I even considered might be potentially innovative.

    Thanks for the responses.

  22. #22
    Moderator vjs's Avatar
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Their numbers are smaller.

  23. #23
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    A Proth test takes almost the same amount of time as a probable prime test, with one fewer multiplication. This project has a vast database of probable prime test residues, so for purposes of double-checking, it makes sense to continue to do probable prime tests, but if a number passes a probable prime test, Proth tests are then done with two different programs to check.
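
    For readers following along, here is a minimal sketch of the two tests for N = k*2^n + 1, showing why they cost about the same. Base 3 is used purely for illustration; a real Proth test first picks a base that is a quadratic non-residue mod N (checked via the Jacobi symbol):

        def fermat_prp(N, a=3):
            # Probable prime (Fermat) test: a^(N-1) == 1 (mod N).
            # Composites can pass this, but only very rarely at these sizes.
            return pow(a, N - 1, N) == 1

        def proth_test(N, a=3):
            # Proth's theorem: for N = k*2^n + 1 with k < 2^n, N is prime
            # if a^((N-1)/2) == -1 (mod N). Essentially one squaring fewer
            # than the full a^(N-1) computation above.
            return pow(a, (N - 1) // 2, N) == N - 1

        N = 5 * 2**7 + 1   # 641, a small Proth prime, used here as a toy example
        print(fermat_prp(N), proth_test(N))   # True True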

    Five or Bust is also doing probable prime tests, but after a number passes a probable prime test, we do 20 strong probable prime tests with different bases to check. Those 20 tests would take 20 times as long as the original test, except that we can distribute the tests over multiple processors.

  24. #24
    What about outsourcing only DoubleCheck, single k or not? No problem with matching residues then, and a maximum of independence for SoB for the time being. H.
    ___________________________________________________________________
    Sievers of all projects unite! You have nothing to lose but some PRP-residues.

  25. #25
    Quote Originally Posted by hhh View Post
    What about outsourcing only DoubleCheck, single k or not? No problem with matching residues then, and a maximum of independence for SoB for the time being. H.
    Our plan right now is to chunk out n = 17M - 17.2M for all k and give them to PG. There will be joint credit if any primes are found and PG will do the full double checking.

    The PG folks are currently working on making improvements to the client wrapper and a few other changes that will make our tests more efficient on their system, then they are planning to make our sub-project live.

    Cheers,
    Louie

  26. #26
    Also, I dunno if this was set up by the admins of PG, but if you go over to www.primegrid.com, I'm the PrimeGrid user of the day!



    Cheers,
    Louie

  27. #27
    No setup :-) but there is a definite bias towards users that have recently created a public profile.

  28. #28
    Any progress on this?



  29. #29
    Quote Originally Posted by Matt View Post
    Any progress on this?
    The partnership is still a go! The admins at PrimeGrid are just waiting to install a new custom version of LLR based on v22.12 of the gwnum lib before we start. It will allow tests to run more efficiently. Jean Penne, the author of LLR, is testing the new version now.

    So yeah, I'm still excited about this even though it may be a few more days before it gets going.


    Cheers,
    Louie

  30. #30
    Junior Member Warped's Avatar
    Join Date
    Aug 2008
    Location
    South Africa
    Posts
    27
    Quote Originally Posted by jjjjL View Post
    The partnership is still a go! The admins at PrimeGrid are just waiting to install a new custom version of LLR based on v22.12 of the gwnum lib before we start. It will allow tests to run more efficiently. Jean Penne, the author of LLR, is testing the new version now.

    So yeah, I'm still excited about this even though it may be a few more days before it gets going.


    Cheers,
    Louie
    Having heard nothing in just over a month, I assume this is still the status?
    Warped


  31. #31
    Quote Originally Posted by Warped View Post
    Having heard nothing in just over a month, I assume this is still the status?
    Yeah, still waiting for Jean to finish LLR 3.8.0 before PrimeGrid rolls out the SB project.

    Cheers,
    Louie

  32. #32
    Junior Member Warped's Avatar
    Join Date
    Aug 2008
    Location
    South Africa
    Posts
    27
    Congrats - the partnership is up and running.

    This should give SOB quite a boost.

  33. #33
    Administrator Bok's Avatar
    Join Date
    Oct 2003
    Location
    Wake Forest, North Carolina, United States
    Posts
    24,473
    Blog Entries
    13
    Indeed. Very glad this has happened. Running a few of the tests right now

  34. #34
    Excellent! Can't wait to see how the speeds compare to Prime95/mprime.

  35. #35
    ARGH!

    And Haiku is still behind

    I guess I better work with them to port the Primegrid apps over to Haiku BOINC soon!

  36. #36
    Sounds good. Just don't switch over and mandate BOINC. The reason I chose this project and not some other one is that I don't like BOINC.

  37. #37
    Quote Originally Posted by umccullough View Post
    And Haiku is still behind

    I guess I better work with them to port the Primegrid apps over to Haiku BOINC soon!
    If you don't have llr compiled for Haiku yet, I suggest you start there. The source for version 3.8.0 (which PrimeGrid uses) has not been released yet, but you can probably start with compiling 3.7.1? Anyways, here are the sources: http://pagesperso-orange.fr/jean.penne/index2.html

  38. #38
    Quote Originally Posted by opyrt View Post
    If you don't have llr compiled for Haiku yet, I suggest you start there.
    Yeah, I'll have to give it another shot soon... I did mess with it previously (a couple years ago), but at the time Haiku wasn't nearly as "finished" as it is now, so hopefully it will be a more productive attempt next time

    thx for the link!

  39. #39
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643

    LLR Version 3.8.0 is now available!

    http://www.mersenneforum.org/showthread.php?t=13072

    Hi All,

    The new LLR Version 3.8.0 is now available to download on my site:

    http://jpenne.free.fr/index2.html

    This version uses the most recent release (25.13) of George Woltman's Gwnum
    library to do fast multiplications and squarings of large integers modulo N.

    Since version 25.11, the gwnum library is no longer restricted to base two for
    efficient computation modulo k*b^n+c numbers (but SSE2 is required if b != 2),
    and LLR takes great advantage of this improvement.

    Please see the Readme.txt file attached with the binaries for more details.
    Note: the binary for Mac OS X is not yet available because I cannot build it,
    but you can build it yourselves after downloading the source directory.
    Please let me know if you have any difficulty using this code...

    Best Regards,
    Jean
    Joe O

  40. #40
    Please note that I posted the wrong (old) URL. Joe posted the correct one.
