Page 4 of 18
Results 121 to 160 of 709

Thread: Sieve Client Thread

  1. #121
    For those of you who are real speed-freaks, I attach a console version of the SoBSieve software. Since there is no Windoze overhead, this is about 10% faster on this NT 4 system (!).

    If you run it with no SoBStatus.dat file, it will prompt for the range of p. If there is a SoBStatus.dat file present it will continue from where it left off.

    If you want to use it in an unattended manner, it will go through the range then exit. To set up a range, create a SoBStatus.dat file containing two lines,

    pmin=<start>
    pmax=<end>

    for example

    pmin=100000000000
    pmax=125000000000

    Note that it runs at idle priority.
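    If you want to set ranges up from a script, the two-line format above is trivial to generate. A minimal sketch in Python (the helper name is mine, not part of SoBSieve):

```python
# Sketch: write a SoBStatus.dat range file for the console siever.
# The pmin=/pmax= format is from Paul's post; write_sobstatus is a made-up helper name.

def write_sobstatus(path, pmin, pmax):
    with open(path, "w") as f:
        f.write(f"pmin={pmin}\n")
        f.write(f"pmax={pmax}\n")

write_sobstatus("SoBStatus.dat", 100000000000, 125000000000)
```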

    Regards,

    Paul.
    Attached Files

  2. #122
    can you make it so that we can change the update interval (how long it takes before it prints out the status to the screen)? this one is way too short heh.

  3. #123
    Originally posted by peterzal
    can you make it so that we can change the update interval (how long it takes before it prints out the status to the screen)? this one is way too short heh.
    And the problem is....????

  4. #124
    Aaargh! The console version does not write the pmin value to SoBStatus.dat. If I stop the program using Ctrl-C and restart, it does not remember the work done.

    Also there doesn't seem to be a way to adjust alpha.

  5. #125
    Ah, it writes a new pmin every hour, so you shouldn't lose more than an hour's work.

    As for the alpha bit - you are right. It uses 2.5 by default; I probably ought to allow that to be changed.

    Paul.

  6. #126
    Every hour?? How about making it 10 minutes or something? Or at least writing it when the command window is closed or Ctrl-C is pressed.
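    Writing the checkpoint on Ctrl-C is straightforward in principle. Here is a sketch of the idea (illustrative only, not SoBSieve's actual code) using a signal handler:

```python
# Sketch of checkpoint-on-Ctrl-C (illustrative; not SoBSieve's internals).
import signal
import sys

current_p = 100000000000  # would advance as the sieve runs

def save_checkpoint(path="SoBStatus.dat", pmax=125000000000):
    # Persist the current position so a restart resumes from here.
    with open(path, "w") as f:
        f.write(f"pmin={current_p}\npmax={pmax}\n")

def on_interrupt(signum, frame):
    save_checkpoint()  # flush progress before exiting
    sys.exit(0)

signal.signal(signal.SIGINT, on_interrupt)
```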

  7. #127
    Paul - great job!

    I installed the console sieve on a few computers and it runs great.

    Now that you have the console working, is there any hope of building SoBSieve for Linux again? Right now I'm running NBeGon10 on around 20 dual-proc boxes at the U of M clusters at any given time. It's cool seeing them burn through a 600G range of divisors every 3 days. Of course, it'd still be cooler to see it crunch through 1T every 3 days

    Keep up the good work.

    -Louie

  8. #128
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    quote:

    Every hour?? How about making it 10 minutes or something?

    Well, Paul will probably fix that (unless he's feeling particularly lazy).
    In the meantime, you can type the last pmin value that the console version shows into the SoBStatus file.

  9. #129
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    But be careful:
    When you've started the program via Explorer, the console will close once you press Ctrl-C. In this case, you have to copy the value first and then shut down the program...

  10. #130
    Member
    Join Date
    Dec 2002
    Location
    new york
    Posts
    76
    Congratulations to both Paul and Phil. We now have a pair of very speedy sieving engines that will allow us to hit the 10T mark in a couple of weeks with just a small group of participants.

    Now that things are settling down again, I'm wondering if anybody is thinking about further automating the sieving process. I enjoy the manual work involved in sieving, but at some point it's going to become hard to coordinate.

    For example, I'm splitting work between two machines, and that alone is a source of potential error. Also consider: if we wish to sieve to 200T in 50G increments, the Sieve Coordination List will grow to 4,000 entries.

  11. #131
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    I'm afraid my proposal in the "Sieve Coordination Thread" was deleted before Louie read it. OK, it was a bad idea to post it there...

    Anyway, I suggested using a DB table for the ranges and a scripted web page where one can reserve ranges. It should be implementable with little effort - less effort than keeping the coordination thread up to date. Plus, no one can accidentally reserve ranges already given to someone else.
    Another recent idea (prompted by dudilo's posting) was to let that page generate the proper SoBStatus.dat file - which could be customized to split the range over multiple machines...
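    The splitting step is simple enough to sketch: divide a reserved [pmin, pmax) range into one contiguous chunk per machine. This is just the idea in Python (function name is mine, not anyone's actual script):

```python
# Sketch: split a reserved p-range into one chunk per machine.
def split_range(pmin, pmax, machines):
    step = (pmax - pmin) // machines
    chunks = []
    for i in range(machines):
        lo = pmin + i * step
        hi = pmax if i == machines - 1 else lo + step  # last chunk absorbs the remainder
        chunks.append((lo, hi))
    return chunks

# e.g. a 25G range over two machines:
print(split_range(100_000_000_000, 125_000_000_000, 2))
```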

  12. #132
    we are down to 644675 numbers left.

    -Louie

    EDIT: You can now view the # of tests remaining for n < 3 mill and n > 3 mill on the stats page. Now it is easy for everyone to see how much sieving helps.
    Last edited by jjjjL; 02-12-2003 at 12:38 AM.

  13. #133
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    I'm not sure if there already was a program that gets rid of the pmin & pmax lines of the SoBStatus.dat file (as it's needed when using sobsieveconsole), so here's my version. As it's written in Java, everyone should be able to use it...
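    The attached tool is Java; for anyone who prefers a one-liner, the same logic can be sketched in Python (my sketch, not the attached program):

```python
# Drop the pmin=/pmax= lines from SoBStatus.dat content, keeping everything else.
def strip_range_lines(lines):
    return [ln for ln in lines if not ln.startswith(("pmin=", "pmax="))]

print(strip_range_lines(["pmin=100000000000", "pmax=125000000000", "some other line"]))
# -> ['some other line']
```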
    Attached Files

  14. #134
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Louie said
    EDIT: You can now view the # of tests remaining for n < 3 mill and n > 3 mill on the stats page. Now it is easy for everyone to see how much sieving helps.
    This is great. Any chance you could add a detail page that breaks down the >3000000 range e.g.

    3M - 4M x outstanding
    4M - 5M y outstanding
    ..

    Another really useful addition would be a 'lowest sieve point' indicator. In ~12 days, n > 3M will begin to be distributed. Although the SoB.dat file doesn't need to be updated right away, it would be rather nice to have somewhere to get a new nmin value; then we sievers could just update every now and again, knowing that the value is correct right now.

  15. #135
    Member
    Join Date
    Sep 2002
    Location
    London
    Posts
    94
    Changing the sieve range does not speed up sieving much, but sieving from 3,000,000 will help us with double checking.
    When we find a factor, we will know for sure that the number wasn't prime. Because primes are so rare in these numbers, we can't afford to miss any.

    Yours,

    Nuutti

    P.S. Right now I hope that we are just unlucky and that there is no bug in the client, nor any false report of a prime being not prime (since we have not found any primes for such a long time).

  16. #136
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Nuutti, you're right on leaving the nmin @ 3M. I've just remembered a discussion before the latest sieving effort started, where it was stated that sieving 3M-20M wasn't significantly more effort than 3M-5M (which was the original suggestion). I guess the converse is true.

    On the double check subject, maybe at some point in the future when the current 3M -20M sieving has exhausted the easy factors, it may be worth sieving 0 - 3M deeper as an aid to double checking. We already have the tools, we just need an appropriate sob.dat file, and a new co-ordination thread.

  17. #137
    On the double check subject, maybe at some point in the future when the current 3M -20M sieving has exhausted the easy factors, it may be worth sieving 0 - 3M deeper as an aid to double checking. We already have the tools, we just need an appropriate sob.dat file, and a new co-ordination thread.
    It is *very* easy to produce an appropriate SoB.dat file using SoBSieve - producing the initial file is part of its functionality.

    If anybody is interested in doing this, let me know - or just start the new thread, give me a prod, and I will post the method in there.

    As for the lack of any new results for a while: finding three primes in such a short amount of time was very very lucky IMHO - perhaps it might have been better for the project to find them at longer intervals. In the 3 to 20 million range I expect around 5 or 6 more primes to be discovered. The important thing is that when one is found, it will be one of the largest known primes - perhaps *the* largest.

    Regards,

    Paul.

  18. #138
    Member
    Join Date
    Sep 2002
    Location
    London
    Posts
    94
    I have made that sieving file for 1->3,000,000, and it is sieved up to 5G.

    The file is here :
    http://powersum.dnsq.org/doublesieve/

    Nuutti
    Last edited by nuutti; 02-14-2003 at 02:56 PM.

  19. #139
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Thanks Nuutti. I'll give 5G - 6G a quick shake down just to see how quickly those factors are eliminated.

  20. #140
    Member
    Join Date
    Sep 2002
    Location
    London
    Posts
    94
    I used good old 1.06 because it can create the SoB.dat file.
    Sieving was slow. I don't know whether newer clients are faster with low n values.

    I tested SoBSieve 1.24 with p around 2,000G and n 3,000,000->20,000,000 and got a rate of 28,000 p/s,
    and then
    SoBSieve 1.24 with p around 2,000G and n 1->3,000,000 and got a rate of 13,600 p/s.

    Maybe Phil's software is faster when n is low?

    I have PII 450.

    I think that sieving in the range 3,000,000 -> 20,000,000 should have priority right now, but when we have reached 10T we should consider double check sieving.

    Ranges under 3,000,000 are partly sieved very high, but I guess that the sub-ranges with n<1,000,000 are not sieved very high. Because the speed of sieving does not depend much on the n range, we can use the 1<n<3,000,000 range all the time.
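    As rough arithmetic on the rates reported above, the wall-clock time to cover a 1G slice of p works out as follows (my back-of-envelope calculation, using the PII 450 figures from this post):

```python
# Time to cover a 1G slice of p at the reported rates (PII 450 figures above).
def hours_for_range(range_size_p, rate_p_per_s):
    return range_size_p / rate_p_per_s / 3600

print(round(hours_for_range(10**9, 28_000), 1))  # 3M<n<20M file: ~9.9 hours per 1G
print(round(hours_for_range(10**9, 13_600), 1))  # 1<n<3M file: ~20.4 hours per 1G
```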

    Nuutti

  21. #141
    Member
    Join Date
    Sep 2002
    Location
    London
    Posts
    94
    I made a small test run, SoBSieve vs. NbeGon_010,
    and here are my results:
    The test SoB.dat was a file for 1->3,000,000.
    The range was 20G -> 20G+1,000,000.
    The computer was a PII 450.
    Time to complete the range:
    SoBSieve: 80s
    NbeGon_010: 34s

    It seems that Phil's NbeGon_010 is about 2.3 times faster than Paul's SoBSieve when n values are small.


    Nuutti

  22. #142
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    I see roughly the same.

    Paul's is about half the speed for 1<n<3M as I'd expect to see for 3M<n<20M. Phil's is about 2.5x faster than Paul's in this low n range.

    Now I've started another test for p = 6G - 7G, I'll let it run, submit the factors, and see how it looks.

    In the p = 5G - 6G range, I submitted the first three factors, all of which were new! Does this mean that this area really wasn't sieved for all values of k? Or is this area not reflected in the DB? Or is there just something odd because these three values have been PRP tested? Or am I just missing something?

    I am confused.

  23. #143
    Member
    Join Date
    Sep 2002
    Location
    London
    Posts
    94
    I have to admit that I don't know how Louie's database handles these factors, but you can mail all factors to me. I will keep a list of found factors for a while.

    my e-mail is nuutti_dot_kuosa_at_kolumbus_dot_fi

    Nuutti

  24. #144
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Nuutti said:

    > I think that sieving in the range 3,000,000 -> 20,000,000 should have priority right now, but when we have reached 10T we should consider double check sieving.

    I think it isn't too early for a couple of people to be doing double check sieving for the 12 exponents for n=1 to 3,000,000. Consider: suppose hardware errors cause, for example, one out of every 1000 tests to be in error. Suppose a prime is missed because of a hardware error. The work to test up to n goes roughly as n^3. If a prime was missed at, say, n=2,000,000, we could easily test up to 20,000,000, doing 1000 times as much work, before finding a prime. The small chance of missing a prime means that double checking can lag quite a bit behind first time checking, but there is definitely a point at which there is a better chance of finding a prime by double checking a small value of n than first-time checking a large value of n. On the one hand, it would be nice to get some statistics about error rates, but double checking really doesn't have to start right away; it would be advantageous first to get some well-sieved values before starting.
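    The n^3 cost model above can be checked quickly: a PRP test at exponent n takes roughly n multiplications of n-bit numbers, so the total work to test everything up to n scales like n^3. For a prime missed at n=2M versus first-time testing out at n=20M:

```python
# Relative work under the n^3 model from the post above.
def relative_work(n_high, n_low):
    return (n_high / n_low) ** 3

print(relative_work(20_000_000, 2_000_000))  # -> 1000.0
```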

  25. #145
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    To try to prevent confusion for first time users, I have created a new thread for double check sieving discussions.

  26. #146
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643

    Sieve Performance

    Well the console version does run at least 10% faster than the windows version, just as Paul Jobling stated.

    But what I find very interesting is that both sieves run 11% faster under Windows NT than under Windows 98SE,
    even though the NT machine is a Dual Celeron/450, and the 98SE is a PIII/500.
    So the real advantage of NT over 98SE is 23% (or more if we take the dual-CPU penalty into account!)
    Joe O

  27. #147
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    One tip:
    Don't even think about using a P4 Celeron for this job.
    I temporarily got my hands on a 1.7 GHz Celeron here. The result is very disappointing.
    Even a 800 MHz Duron beats the Celeron for good!

  28. #148
    Member
    Join Date
    Jan 2003
    Location
    Germany
    Posts
    36
    I experience the same with a P4 2.5 GHz.
    It is about 15% slower than my Athlon 1 GHz.

    Any idea how that can be??

  29. #149
    I was reading an RC5-72 web page yesterday, and it said that the P4 does not have a fast hardware rotate instruction. If the sieving makes extensive use of rotates, that would explain it.

    I think the P4 moved to a RISC-style design, with microcode handling the removed instructions. If that is true, the microcode would have to do two shifts and one OR to simulate a rotate. You could get 1/3 of the expected speed.

    I don't know how true all this is. Can anyone point me to a timing chart for P4 instructions? Is there a huge penalty for rotates?
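    For what it's worth, the two-shifts-plus-one-OR emulation described above looks like this (illustrative only; not a claim about what the sieving clients actually do internally):

```python
# Emulating a 32-bit left rotate with two shifts and an OR,
# as described in the post above.
def rotl32(x, r):
    r &= 31  # rotate amount modulo the word size
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

print(hex(rotl32(0x80000001, 1)))  # the top bit wraps around to bit 0
```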

    DP

  30. #150
    The P4 has made lots of changes like that. Another one I know of is that add-with-carry is now approximately 7 cycles (can't remember exactly), while a plain add is only 1 cycle.
    This is bad for numbers > 2^32. I'm not sure whether this affects factoring, but my intuition says it would have at least a small impact.
    Craig.

  31. #151
    Junior Member
    Join Date
    Jan 2003
    Location
    Spain
    Posts
    12
    I have sieved 655G (Moo_the_Cow had already done it), and I found a new factor

    655009757159 | 55459*2^13313098+1


    How could his program skip this number? Could it be a bug in the program??


    I'm using sobsieve 1.24 under Win2000. If anyone can test if this is a real factor...


    (55459*2^13313098+1 has another factor at 330G... it's just a problem with an old SoB.dat)

  32. #152
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    What!?! I can't believe it. I'm using SoBSieve v1.22, so there are 3 possibilities (which you may already have thought of):
    1.) You are using an old SoB.dat file and have just found a duplicate (most likely)
    2.) There is a bug in my version
    3.) My CPU is not calculating accurately (least likely, because I don't overclock my CPUs and this has never happened before)

  33. #153
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    It's a duplicate; I see no problem there. That kind of thing happened to me too within the first two weeks of sieving, when we were frequently changing clients (and their versions) and SoB.dat files. I was cross-checking a small range every time I switched clients, and mistakenly used an older SoB.dat file on one of the clients (and panicked because the client using the older one had two additional factors compared to the other one).

    Anyway, to clarify the 655009757159 | 55459*2^13313098+1 issue, I checked the 655G-655.01G range in all four of the SoB.dat files I have.

    As far as I can tell, the SoB.dat file was updated three times; the files are:

    1- 4595 KB, dated 18/1/03,
    2- 4523 KB, dated 22/1/03,
    3- 4178 KB, dated 25/1/03, and
    4- 3797 KB, dated 30/1/03.

    The first three all have 655009757159 | 55459*2^13313098+1 as a factor, whereas the last one does not.

    So Moo_the_cow is probably using the most current SoB.dat file, while expinete is using one of the first three.

    expinete, as you mentioned, 330714997043 | 55459*2^13313098+1 is another factor. It was found and submitted by Louie sometime between 25/01/03 and 30/01/03.

  34. #154
    Member
    Join Date
    Oct 2002
    Location
    Austria
    Posts
    37
    I am using SoBSieve 1.22 and get the message

    WARNING: 64 bit integers are badly aligned as qwords!

    Do I have a problem? Should I stop sieving?

  35. #155
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    expinete wrote
    I have sieve 655G (Moo_the_Cow already had done it) And I found a new factor
    According to the sieve co-ordination thread, Moo has not yet declared this area as complete, so really this is one area you shouldn't be playing with. My hole finder reports the following (using the results.txt from 10 minutes ago) for that range.

    658.71G - 660.00G, size: 1.29G, est fact: 48 (658712581133-660004372939)

    Yesterday this hole was bigger (3.35G), so in addition to the other suggestions offered above, there is a good chance that Moo simply hadn't submitted anything for this range when you tried.

    Mike.

  36. #156
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    While we are on the subject of holes, I have attached an output from my hole finder for the range 200G - 5000G.

    I have checked the possible holes against the co-ordination thread. Any that are not declared as complete have a single space at the beginning of the line. Those that I have started re-sieving are marked with a '*', any others (those that have no space or '*' at the beginning of the line) are open to takers.

    If anyone is interested in checking these (potential) holes, please double check that ranges are declared as complete, and co-ordinate activity on this thread.
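    The hole finder itself isn't posted, but the idea can be sketched: scan the sorted p-values of submitted factors and flag any gap far larger than the typical spacing, estimating how many factors the gap "should" contain. This is my reconstruction of the idea, not Mike's code; all names and thresholds are made up:

```python
# Sketch of a hole finder: long stretches of p with no submitted factors
# suggest an unsieved hole. Estimate missing factors from average spacing.
def find_holes(factor_ps, threshold, avg_spacing):
    ps = sorted(factor_ps)
    holes = []
    for a, b in zip(ps, ps[1:]):
        if b - a > threshold:
            holes.append((a, b, round((b - a) / avg_spacing)))  # (start, end, est. factors)
    return holes

# Toy data: dense factors, then a suspicious gap.
print(find_holes([10, 12, 15, 100, 103], threshold=50, avg_spacing=3))  # -> [(15, 100, 28)]
```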

    Mike.
    Attached Files

  37. #157
    Junior Member
    Join Date
    Feb 2003
    Location
    Belgium
    Posts
    7
    I've started on the hole 3911.54G - 3915.14G.

    When will SoBSieve 1.24 be posted on the seventeen or bust website?

  38. #158
    From the "holes" file

    SERIES: 3911.54G - 3915.14G, size: 3.60G, est fact: 29 (3911540412763-3915137233187)
    SERIES: 3926.48G - 3930.06G, size: 3.58G, est fact: 32 (3926484634733-3930061625759)
    SERIES: 3941.54G - 3945.05G, size: 3.51G, est fact: 25 (3941536383347-3945047319599)
    SERIES: 3956.30G - 3960.05G, size: 3.75G, est fact: 36 (3956304590099-3960053131451)
    SER+RAT: 3970.03G - 3975.01G, size: 4.98G, est fact: 35 (3970034811803-3975012508177)
    SER+RAT: 3985.51G - 3990.08G, size: 4.57G, est fact: 27 (3985510607761-3990079251871)
    SER+RAT: 4000.01G - 4005.03G, size: 5.02G, est fact: 35 (4000011208327-4005029348777)
    SER+RAT: 4015.30G - 4020.14G, size: 4.85G, est fact: 38 (4015295346311-4020141218923)
    SER+RAT: 4030.54G - 4035.38G, size: 4.84G, est fact: 45 (4030542073183-4035379652009)
    SER+RAT: 4045.39G - 4050.26G, size: 4.87G, est fact: 33 (4045385419291-4050257987239)
    SER+RAT: 4060.46G - 4065.31G, size: 4.85G, est fact: 35 (4060459019593-4065313137757)
    SER+RAT: 4075.53G - 4080.03G, size: 4.50G, est fact: 32 (4075530343427-4080031510823)
    SER+RAT: 4090.31G - 4095.26G, size: 4.95G, est fact: 36 (4090307684017-4095259707527)
    SER+RAT: 4105.45G - 4110.19G, size: 4.75G, est fact: 34 (4105445717053-4110192811271)
    SER+RAT: 4119.92G - 4125.10G, size: 5.18G, est fact: 36 (4119922770037-4125103526893)
    SER+RAT: 4134.97G - 4140.20G, size: 5.23G, est fact: 31 (4134974381183-4140202239367)
    SER+RAT: 4149.40G - 4155.32G, size: 5.91G, est fact: 39 (4149402363289-4155315689257)
    SER+RAT: 4164.53G - 4170.08G, size: 5.55G, est fact: 44 (4164530654129-4170082334729)
    SER+RAT: 4178.82G - 4185.14G, size: 6.32G, est fact: 59 (4178824552081-4185144302257)
    *SER+RAT: 4193.89G - 4200.22G, size: 6.33G, est fact: 54 (4193888050727-4200216868171)
    These are all in a range sieved by Louie.

    Louie: It looks like something has gone wrong. Did you forget to submit these, or were they not sieved one way or another?

  39. #159
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I am writing this for coordination purpose.

    I have previously chatted with Louie on that issue (the holes in 3900G-4200G range). He is aware of that and is taking care of it. So, they will be submitted soon. No problem there.

    Also, I am sieving the holes in the 3650G-3700G range. Just not submitted yet. Waiting for it to finish. Will take a couple of days more.

    Regards,

    Nuri

  40. #160
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Nuri et al,

    I don't want to duplicate effort, so I have stopped my re-sieving, and submitted the factors. This is how far I'd got:

    4193888050727 - 4197649481731
    3649834328981 - 3657925328989

    For the future, can we please agree on a co-ordinated method of declaring holes and deciding to whom each re-sieve is assigned?

    We are getting very close to n>3M being assigned for PRP tests. It would be really nice to have p<5T complete before PRP assignment begins. This is something I think no one would have even contemplated two months ago.

    And for information, the 5T<p<10T is currently 32% complete. Maybe we can get this to 50% before PRP testing begins? Now wouldn't that be good.

    Mike.

