
Thread: Fast track now?

  1. #1

    Fast track now?

    With recent events it certainly appears we are going to be receiving a lot more help soon. We've got a new client offering a boost in efficiency, a new prime boosting membership, and since most of the bugs have been worked out we should be retaining members better. Now we just need a few more primes and we may see a possible passing of GIMPS? Maybe not, I guess they are a lot farther down the line than we are. But still, it'd be nice to find the largest prime ever. By the way, does 31337 still run clients that would achieve the record prime? Does it still run at all?

  2. #2
    Team Anandtech
    Join Date
    Aug 2003
    Location
    New Zealand
    Posts
    50
    You're right - the main problems seem to have all been resolved.

    After SETI ends, I'm sure SOB will see a large boost in membership.

  3. #3
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752

    Re: Fast track now?

    Originally posted by Keroberts1
    By the way, does 31337 still run clients that would achieve the record prime?
    I don't think so. This is mainly because the current record holder has an n value larger than 20m, whereas we are only sieving up to 20m. So even if somebody runs under the 31337 user and finds a prime, it will be the second largest.

  4. #4
    Member
    Join Date
    Feb 2003
    Location
    Lucerne, Switzerland
    Posts
    30
    I doubt that SoB will pass GIMPS. When the primes have about the same size, it takes much more time to complete one single primality test, on top of the lower probability of being prime (due to the lower factoring limit).
    So many people might change to GIMPS, I guess.

  5. #5
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959

    Re: Re: Fast track now?

    Originally posted by Nuri
    I don't think so. This is mainly because the current record holder has an n value larger than 20m, whereas we are only sieving up to 20m. So even if somebody runs under the 31337 user and finds a prime, it will be the second largest.
    The 20M limit is for sieving only. I don't think that means no tests with n > 20M will be issued. A test's likelihood of yielding a prime will be much lower than for n < 20M, though, as there has been no sieving...

  6. #6
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752

    Re: Re: Re: Fast track now?

    Originally posted by Mystwalker
    The 20M limit is for sieving only. I don't think that means no tests with n > 20M will be issued. A test's likelihood of yielding a prime will be much lower than for n < 20M, though, as there has been no sieving...
    Exactly. Of course, such a time will come sooner or later (if we do not find all our primes before 20m).

    Maybe I could not express myself clearly. All I am saying is, I do not think we will start trying any n>20m tests before sieving of those ranges starts.

    One other thing: 31337 did not prove to be very popular, even when it was possible to get the first rank with a 13.x million n test. AFAIK, only MikeH completed a test with 31337, and that was all. With n now larger than 20.9 million, and the range "unsieved", even if Louie enables such an option, I really doubt anybody will try it.

  7. #7
    If a sob.dat was created with n from 3 million to 100 million, how much would the speed of proth sieve change? I've been told in the past that the size of the dat doesn't matter. If that's so, then shouldn't we be using the largest one possible, to start eliminating n values in the higher spectrum now? We will reach 20 million eventually, and it's not too likely that we'll remove more than two or three primes in the meantime.

    Also, once we are testing in that range it'll become necessary to sieve to much greater depths, because each test that gets eliminated will be equivalent to several T worth of sieving. Actually, one test at 100 million would be equivalent to sieving for 10 years. Although machines will be much faster before we reach Ns of this size, so this won't take 10 years, the sieving will be able to be done just as fast. Meaning that we would be able to sieve 142T in the amount of time it would take to do a single test. We'll probably want to sieve to depths of over a million T.

    Perhaps we should include this range in the sieving we're doing now, to get started on it? That is, if it really wouldn't slow us down that much. Maybe we could have a separate co-ordination thread so that we could keep track of who is using this dual-range sieve, and then farther down the line, when we get closer to the changeover point, allow another group to resieve what we've done so far, up to the 300T point, with the upper range.

    I'm just saying this because sieving is approaching the point of no longer being efficient for the 3-20 million range, and since it seems that over a quarter of the factors found are no longer as valuable because they have already been passed over, it'd be nice to get started on sieving for the future. Assuming there is no more than perhaps a 20% slowdown, I would of course like to start using the larger dat, and I expect that many other users would too. This could also permit the 31337 account to be reopened in the near future, by allowing some users to volunteer to start sieving the lower range of the large dat. Even if no one else wanted to participate, I would sieve the first 10T myself, just for the hope of revitalizing our chance at having the world record prime (even if it is a small one).
    Last edited by Keroberts1; 05-11-2004 at 09:02 AM.

  8. #8

    Re: Fast track now?

    Originally posted by Keroberts1
    ...We've got a new client offering a boost in efficiency...
    Wish I could make use of that client with all my machines. http://www.free-dc.org/forum/showthr...&threadid=6202

  9. #9
    How about once Sieving becomes totally inefficient..for the current testing window...everyone jumps in full force into P-1 factoring and kills off as many tests as they can before they are sent out for proth testing?

    You are talking about sieving to 100 million for n? Geez....what is the guesstimate on the last k to hit prime? I thought it was around 30 million for n? Why sieve that deep when it is completely unnecessary? Is it to prove the conjecture or go for a record? Also, I thought the sieve speed was sqrt(number of k's) * sqrt(range of n). A raise in n value to 100 million would most likely make the sieve crawl slower than a snail on Prozac...all extremely unnecessarily. I'd like to see the memory on the computer that is doing multiple k's for n=20 million to 100 million. Throw all that into a bitmap at one time. Sure, once you got to p=1 billion it may fly..but..well...I guess you can newpgen them separately to 1 billion..no biggie. I just think sieving to 100 million for n is a waste of resources when you can P-1 test current numbers and help find a prime and eliminate a k now. I personally think a prime will be soon for you guys.

    Besides, you shouldn't be worried about GIMPS. You should be worried about us Riesel Sieve guys. We are down to 90 k's and moving fast. We will be testing n=1mil REAL soon. After hitting a 420K digit prime the other day we are moving up the project rank pages fast. You guys need to get in gear...we are coming for you. We got guys putting up their double-wides for sale just to buy more rackage...I traded my dog for a new SQL server. We got guys that gave up drinking just to have enough money to upgrade the MoBo/CPU. WE ARE COMING....lmao

    Seriously though...starting a sieve from 20-40 million right now, in my humble opinion, would be a total waste of time. Stick to the basics. Find primes by testing your k/n pairs and reducing the number of tests through sieving until it is inefficient..and then go full force into P-1. Then...someone get creative and come up with a new, faster, improved method of finding factors. Some strange twist on ECM or P+1 or some crazy algorithm you pulled out of your butt while drinking Jager with some chick...who you forgot her name...mary...jennifer...anne...something..anyway..not important. Finding factors for numbers you won't be testing for years just shouldn't be a priority right now.

    Just my 2 cents worth on the subject.

    Lee Stephens
    B2
    www.rieselsieve.com

    And really...we are coming for your #2 spot of prime projects...so you better give it all you got

  10. #10
    Senior Member Frodo42's Avatar
    Join Date
    Nov 2002
    Location
    Jutland, Denmark
    Posts
    299
    Wow, what a rabble-rousing post

    Even though I'm not a mathematician, b2uc's arguments make sense to me.
    I have had 6 slow machines (Linux 450 MHz PII, 14 hours a day) doing prp'ing since n=~500K, and have been thinking about changing them to something other than prp'ing, but I guess they'll stay prp'ing for now, even though it is a matter of months for them to complete tests.

    Btw. rieselsievers are not the only fanatics around. I've been messing up my sleep because I want my two computers to work all 24 hours. I have however realised that my sleep is more important than SoB, so now they only work from my waking up till going to sleep (~18 hours). I've also burned one mobo, probably because of the load from SoB ...

  11. #11
    Originally posted by Frodo42

    Btw. rieselsievers are not the only fanatics around. I've been messing up my sleep because I want my two computers to work all 24 hours. I have however realised that my sleep is more important than SoB, so now they only work from my waking up till going to sleep (~18 hours). I've also burned one mobo, probably because of the load from SoB ...

    Sleep is for the dead...or for those that hate primes. Now on to the part about burning up a motherboard. All the SoB client, LLRnet client, PRP, LLR, Proth....all of them do is...uncover weaknesses that already exist. If the mobo fries...it was going to fry anyway....if the CPU is overheating...you gotta ask yourself why? Probably a dustbunny that looks like a giant rat sitting on top of the heatsink. Bad fans, poor heatsink...something. Computers are like dogs...they love you more when you are firm with them and hand out a little discipline when they are bad. Computers love to have something challenging to do. You have a sheep dog...he's miserable unless he is rounding up sheep...a bird dog is miserable unless he is out on the hunt...proving to his master that he can point a bird back in high grass. My computer is the same way. If it isn't finding factors or busting on possible primes....well, his digital life sucks. He doesn't care about aliens. He doesn't care about some key for encryption. He doesn't even know what a muon is. All he knows is that factors make his master happy...and primes are bad bad bad animals that need to be found to save humanity.

    When SETI is over...I hope SoB gets a ton of new recruits. Louie and Dave are great guys. The more shortcuts you guys find..the easier my project is for us. Now if we could preach the anti-gospel of Dnet..and explain to others why it is wasted cpu cycles right now...a project that came before the technology was ready. Wait 3 years..and then you can do more in a month than you did the previous 3 years. Silly really. 10,000 users doing nothing but wasting precious cycles.

    </rant>

    Lee Stephens
    B2
    www.rieselsieve.com

  12. #12
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Ah, reading this is a nice start to the day!

    Well, moving to the math side of Keroberts1's posting:

    The effort of sieving is approx. O(#k * sqrt(n range)) (although I've heard that proth_sieve takes a slightly smaller hit from a bigger sieving range), so making the sieving range more than 5 times as big results in a slowdown factor of a bit over 2, halving the current speed.
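    To put numbers on that, here's a back-of-the-envelope sketch (plain Python, nothing to do with proth_sieve's actual code) of what the O(#k * sqrt(n range)) rule of thumb predicts, using the 3m-20m vs. 3m-100m ranges from Keroberts1's post:

        import math

        # Rule-of-thumb model: sieve effort ~ #k * sqrt(width of the n range),
        # so speed scales with the inverse of that.
        def relative_speed(new_width, old_width, new_k=11, old_k=11):
            return (old_k / new_k) * math.sqrt(old_width / new_width)

        # 3m-20m (17M wide) vs. 3m-100m (97M wide), same 11 k's:
        print(relative_speed(97e6, 17e6))  # ~0.42x, i.e. a slowdown factor of ~2.4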

    Factors for k/n pairs that have already been prp'ed are still valuable, as they serve as a definite proof that there is no prime. That way, no double check is necessary. Of course, they would have been of more value had they been found earlier.

  13. #13


    Originally posted by b2uc
    ....what is the guesstimate on the last k to hit prime? I thought it was around 30 million for n?
    Whoow Lee, I expected you to know better. Guess you've never seen estimates on how long it takes to solve the Riesel project?

    About a year ago, Wblip wrote somewhere on this forum (before the last prime was found):

    I wasn’t content with the existing estimates, so I built a probability model to estimate when we will find the remaining primes. Based on this model, there is a 50% probability that we will find one more prime before n=3.7 million, and two more primes before n=5.1 million. There is a 95% probability of finding one more before n=7.2 million, and two more before n=13 million. But if the model is reasonable, we may be here a very very long time. There is a 50% probability that we will get all 12 primes by n=2.9 x 10^13, and a 95% probability we will find all twelve by 1.9 x 10^23.
    Or did you mean 30 million million?
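    For the curious, the kind of model Wblip describes can be sketched with the standard heuristic (this is a guess at the general approach, not his actual model): assume k*2^n+1 is prime with probability about w/(n ln 2), where w is a suitably normalized Proth weight for k.

        import math

        # Toy estimate: expected primes of the form k*2^n+1 with n1 <= n <= n2,
        # assuming per-candidate prime probability ~ w / (n * ln 2).
        def expected_primes(w, n1, n2):
            # Integral of w/(n ln 2) from n1 to n2.
            return (w / math.log(2)) * math.log(n2 / n1)

        # Probability of at least one prime (Poisson approximation,
        # treating candidates as independent):
        def prob_at_least_one(w, n1, n2):
            return 1 - math.exp(-expected_primes(w, n1, n2))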

  14. #14
    Originally posted by smh
    Whoow Lee, I expected you to know better. Guess you've never seen estimates on how long it takes to solve the Riesel project?

    About a year ago, Wblip wrote somewhere on this forum (before the last prime was found):



    Or did you mean 30 million million?

    Hey hey...maybe I should have rephrased my question..what is the guesstimate from all the OPTIMISTS? 2.9 x 10^13 is not a very optimistic outlook. I call that worst case myself. Yes, I've developed a reputation for being overly optimistic. According to estimates, we at Riesel Sieve will not find one more prime before n=2^20. We are already at our 'limit'. We have 74 k's that are around 900k for n. More than a few weighty ones. I'm sure we will hit at least one or two.

    Sure...I see the possibility of this thing going to 30 million million n...but I sure as hell hope not. We may have to bust on primes for a while...and then come up with another method to eliminate composites...or tell Moore his law needs an update.

    I've also come to the conclusion that we may simply see a number that has a covering set that is infinite. Which would be a great find...but crazy nonetheless. Can you imagine a k that will never hit prime...and has a covering set that is not only infinite..but virtually unpredictable on where the next factor is? I would name this a Mary Number. For being such a pain in the rear. Cute and sweet at first glance...but deeply complex with no right answer. I think you get the point..heh

    I also believe in primes bunching when forced into a particular form using similarly propertied k's. I haven't had anything to drink tonight...so I can't explain that one. Then you have the theory of the magic factor and Goldilocks and the three primes.

    I have this new method for telling if a number is prime or not...you take the sqrt of it...and then find the last factor you would normally check against it for testing. If that doesn't divide it....it's prime. Since we all know the last factor you try is the magic one

    Anyway...I gotta go find some micron sized hand cuffs...I have some prime candidates to beat tonight.

    Lee Stephens
    B2
    www.rieselsieve.com

  15. #15
    I believe the point that smh was trying to make is that even with the most optimistic outlook this project will continue past 20 million. Even with the most optimistic outlook we will not find all of the primes before 100 million. Therefore this region will need to be sieved eventually. And all I was saying in my rant was that if it wouldn't severely cripple the speed of the sieve, then we should add the extra ranges. I had not thought about the memory difference, but the siever isn't as memory intensive as the factorer, and I doubt it would be even with more ranges. I also remember reading (from MKlasson, I believe) that the range of the n values does not matter much when calculating the siever's speed.

  16. #16
    Originally posted by Keroberts1
    I also remember reading (from MKlasson, I believe) that the range of the n values does not matter much when calculating the siever's speed.
    Sieving a range which is twice as large takes about sqrt(2) = 1.41 times longer.

    So a range 4 times as large takes about double the time.

    At the moment, the numbers which are handed out for testing are still undersieved. I.e., sieving throws out composites faster than prp testing does. And since n's are getting larger, I guess it will take a while (if ever) before sieving becomes ineffective.

    Making the range larger makes this even worse.

    OTOH, we could have saved a lot of time by sieving to 100M from the beginning, although you'd be saving time for prp tests that will only be done in a few years. And finding a prime means we'd have wasted ~10-15% of the time spent sieving that particular k.
    You just have to choose a range at some point. We could also sieve to 1G, or 10G.

    But I agree that we shouldn't wait until N=19M before starting to sieve higher ranges. Maybe when we reach 10-15M, a slow start could be made with sieving the next range.

  17. #17
    Originally posted by b2uc
    Hey hey...maybe I should have rephrased my question..what is the guestimate from all the OPTIMIST. 2.9 x 10^13th is not a very optimistic outlook.
    Actually it is probably quite optimistic.

    If you want a VERY optimistic guess, I would say 1G, but that should only give us something like a 1% chance (give or take a little)

  18. #18
    But remember how quickly we'd be cracking through the ranges if we only had one or two k values to test. We'd be testing 400-500 thousand a day, or at least handing out the equivalent and getting the equivalent back. At higher levels, tests will probably be taking months if not years to complete. Eventually we'll need to start requiring progress to be reported back and saved as tests proceed, to prevent certain tests from never getting finished and to prevent enormous amounts of effort from getting wasted by people who don't have the attention span to finish a whole test.

  19. #19
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    n<20m is already sieved out to a pretty good extent. I know there is still benefit in sieving on a per-PC basis, but the boost effect on the main project is starting to become negligible. Also, if we start sieving ranges above n=20m early enough, we'll already be in good shape when PRP reaches n=20m.

    In my opinion, n=10 million at PRP or p=500T at sieve (whichever comes first) is a good point to start sieving a larger range (give or take 2 million at PRP, and 100T at sieve).

    We might choose the lower end of the n range as the place where PRP double checks are at that time. The difference between choosing such a low range and a larger one (like n=20m as the highest possible nmin) would be less than a 2-3% speed difference. (The speed difference between 300k<n<3m - a 2.7m range - and 300k<n<20m - a 19.7m range - was only 16%.)

    As far as the higher end is concerned, I personally would prefer something within the range of n=100 million to 200 million. A choice of a lower number would not be optimal in terms of output (because we'll have to sieve this new range really deep), and the choice of something higher - like n=1G or more - would create problems like huge .dat files with a size of 100 MB when zipped. That would be even worse if we decide to bundle our sieve efforts with PSP (or maybe even with Riesel Sieve. I know, this second one requires a new client which can handle both + and - numbers, but who knows, maybe Mikael would find a way to do something like that).


    One other note: in my opinion, when we switch to a new range, we should discontinue our current sieve effort (or at least minimize it, like we did for the DC sieve).

  20. #20
    How long do you expect it to take for the PRP to reach 500T? I'm guessing before the end of the year.

  21. #21
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I think you meant sieve, right?

    If so, here are just a few data points.

    When we were trying to reach 200T, we had an average cumulative speed of 983G per day. This is the average from Jan 13th to Feb 17th. That average speed suggested December 2004 to reach 500T.


    After we reached 200T, daily cumulative averages started to drop. Some of the users left sieving, some allocated a portion of their resources to P-1, etc. Currently, we have an average of 745G per day. This is the average from Apr 2nd to May 10th. That average speed suggests March 2005 to reach 500T.


    That said, the sieve should normally reach 500T before PRP reaches 10m.

    The reason I said whichever comes first: who knows, maybe in the following months the sieve will lose some more computing power and output will drop further. Or maybe we get new users at PRP (which I'd love to see) and it starts to climb even faster. So, the "whichever comes first" phrase is used as a precaution in case PRP reaches 10M (8M?) before the sieve reaches 500T.



    And a small note on the nmin and nmax for the range of the new sieve: I guess none of us knows for sure which range would work best. The best course of action would be to choose a set of alternative range sizes, quickly compile dat files for each so that we can see their relative sizes and speeds, and decide accordingly.

    My opinion above on choosing a number below 20m for nmin and 100m (200m?) for nmax is just a guesstimate which I feel would be right. Of course, that might turn out to be wrong when we start to see some data.

  22. #22
    Unholy Undead Death's Avatar
    Join Date
    Sep 2003
    Location
    Kyiv, Ukraine
    Posts
    907
    Blog Entries
    1

    we must start a race

    The best way is to set up .dat files with ranges like 20-30M, 20-40M, 20-50M etc. and choose the one with the best sieving speed.
    wbr, Me. Dead J. Dona \


  23. #23
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I'm testing various ranges. I'll post results within a couple of days.
    Last edited by Nuri; 05-13-2004 at 07:38 PM.

  24. #24
    Well, we'd definitely want to include the ranges we're testing now in the dat, since the larger the range the more effective it is, especially when the n values in the dat are ones you can be fairly certain will need to be PRP'd soon. We'll probably want to include everything right down to 1 million, or wherever we decide the double check is going to be starting at. Maybe skip straight to the bound of secret if it is fairly certain that no primes were missed in the 1-2 million range. Perhaps a 20m-100m (or whatever is chosen) dat should be created just for the sake of sieving ranges already covered for the 1-20 million range, though.

  25. #25
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    I'm not sure we need double checking for sieving. False positives are sorted out immediately; false negatives mean that there will be an additional PRP test. Assuming there are not that many factors falsely missed (and I'm speaking in the region of a few hundred), it would be better not to include a 19M range in the sieving.
    If effort were O(sqrt(n range)), the performance drop would be 11%. It will be less, but I guess it's still around 5%...
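    To show where that 11% could come from (a one-liner, assuming the 20m-100m dat discussed above were extended down to ~1m):

        import math

        # 20m-100m is an 80M-wide range; extending it down to ~1m makes it ~99M wide.
        print(math.sqrt(99e6 / 80e6))  # ~1.11 -> ~11% more effort under the sqrt rule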

  26. #26
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Including anything below n=20m will be meaningful if:

    1. The speed difference between choosing 20m or a lower number as nmin is negligible (which I think it will be).

    2. The new sieve reaches the point (p value) where we abandon our current sieve effort and switch to the new sieve range (500T? 400T? or even earlier?) before the main PRP effort reaches 20m.


    Even if the new sieve cannot catch the main PRP effort before it reaches n=20m, I am pretty sure including n below 20m will make sense, as it will still save some secret/supersecret tests below 20m (which will prove especially useful for the longer-lasting tests as n increases closer to 20m).

    Anyways, we'll see the data soon (at most within a couple of days, depending on how far I try).

  27. #27
    Senior Member
    Join Date
    Dec 2002
    Location
    australia
    Posts
    118
    A lot of this seems focused on the efficiency of sieving versus prp'ing - specifically, how much effort is wasted by sieving not being far enough ahead of prp'ing.

    A slightly different take on this: is there a case for reserving say n=6.2m to 7m/10m?/20m? from prp'ing for the k with the highest proth weight? Sieve it to death with all sievers. Hand the remains to P-1 factoring. Let them loose. Release remaining k's for prp'ing. Does this only "work" if sieving is not testing a number of k's simultaneously (where is the spell check button?)?

    Repeat for the next highest proth weight k.

    Is there a modification to this strategy that would allow sieving to get ahead?

  28. #28
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Originally posted by smh
    But I agree that we shouldn't wait until N=19M before starting to sieve higher ranges. Maybe when we reach 10-15M, a slow start could be made with sieving the next range.
    Feels to me as though 12-18 months of focused sieving on the 20M - ??M range should be enough; this would certainly get us to 2^47, which equates to about 29500 remaining candidates per million. So maybe a slow start should be made 24-30 months before n=20M is reached. Right now we are 5-10 years away from n=20M, so now is not the time.

    All that said, I'll wait for the outcome of Nuri's tests. If we can switch to a 1M - ??M .dat with only (say) a 10% reduction in speed, then that's great, but a 50% hit really would not be beneficial right now. I await those numbers.

  29. #29
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Unfortunately, I have bad news.

    First, the conclusion I reached, then details.

    - After a couple of trials, I get the impression that it might be a bit early to start a higher n range sieve. I feel like we'll need better PCs and more capable clients.

    To be honest, I was hoping to be able to sieve a dat file as large as 150m in range size (like 1m-150m) at a sieve speed slowdown of only ~25%. As you'll see below, this is not the case.

    - On another note, for 11 k's (or fewer, if we find primes), the size of the dat file will not be a problem for whatever range size we choose when the time comes.

    - Empirical data seems to support the idea that speeds at same-sized ranges are almost the same. I tried 300k-3m and then 100.3m-103m (both 2.7m ranges). The speed difference was 1%.

    - Sieve speed increases slowly as p increases and then drops a bit. But I guess that information can simply be ignored. When I say as p increases, I mean the speed at 1T vs. at 10T vs. at 100T vs. at 1000T. Speed at 100T is roughly 3% to 5% faster than at 1T, and 1% to 3% faster than at 1000T.


    Second, some background info:

    I used a PIV-1700, 256MB, nothing else running except the firewall.

    To save time, I sieved the ranges up to p=1m during the dat file creation process.


    Having said those, let's get into range data details:

    It's not possible to sieve a large range like 20m-200m with an average PC today and/or the client we have. I dunno, it might be related to RAM (I used a 256 MB machine). I abandoned the dat file creation process when I returned from work (after leaving the client running for 11 hours) to see it had only reached p=2027 (yes, 2k).


    It was possible to create a 20m-100m dat (took 4.25 hours), but the proth_sieve client was incredibly slow. I really mean it. Though it did not stall/crash at any time, it could not finish even p=10m within a reasonable time span for any of the alternatives I tried with the 80m n range. I mean speeds lower than 5 kp/sec. I'm not sure how much lower, simply because I did not wait for more than half an hour in any of the trials. I tried various p values (1T, 10T, 100T, 1000T), and various numbers-of-k's-left (primes found) scenarios (even down to a single k left). None worked. Even if it might have worked on a 512 MB machine, this would not make sense, simply because sieving is assumed to be the job of old machines which can no longer do anything else.

    I did not try the 80m-sized range of 1m-80m, but I can try it if you think it would be useful (or any other alternative range we might find meaningful).


    20m-80m, a 60m range, is the largest one that worked at a reasonable speed (30% slower than the 1m-20m range). That ratio is a bit better than what the sqrt rule suggests (44%? slowdown).


    20m-50m, a 30m range, resulted in a dat file of 1.5MB, even after only sieving to 1m. The speed I got here was 8% slower when compared to 1m-20m, vs. the 20%? suggested by the sqrt rule.


    I also tried a 10m range (10m-20m) to get a data point on relative speeds. It's only 8% faster when compared to the 1m-20m speed, vs. the 38%? suggested by the sqrt rule.



    And a last note of wishful thinking: maybe the problem with ranges larger than or equal to 80m in size is related to some optimizations in the algorithm etc., and Mikael has a simple solution. If so, I'll be more than happy to conduct more tests.
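    If anyone wants to recompute the sqrt-rule figures above, this is all they are (a quick sketch using only the range widths, with 1m-20m as the baseline):

        import math

        # Observed slowdowns vs. the sqrt rule, 1m-20m (19m wide) as baseline.
        baseline = 19e6
        for label, width, observed in [("20m-80m", 60e6, "30% slower"),
                                       ("20m-50m", 30e6, "8% slower"),
                                       ("10m-20m", 10e6, "8% faster")]:
            predicted = 1 / math.sqrt(width / baseline)  # predicted speed vs. baseline
            print(f"{label}: predicted {predicted:.2f}x baseline speed, observed {observed}")
        # -> 0.56x (44% slower), 0.80x (20% slower), 1.38x (38% faster)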
    Last edited by Nuri; 05-14-2004 at 09:12 PM.

  30. #30
    To be honest, I was hoping to be able to sieve a dat file as large as 150m in range size (like 1m-150m) at a sieve speed slowdown of only ~25%. As you'll see below, this is not the case.
    You expected only a 25% slowdown? Or 25% of the original speed?

    The range is about 9 times larger (compared to 3-20M), so I would expect about a third of the original speed, not 75%.

  31. #31
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    Please keep in mind these things:

    - As pointed out here earlier, as n increases, PRP tests will take much longer, even with better clients and computers. Starting the new sieve when we're, let's say, at n=15M will still give us a lot of time for sieving before reaching n=20M. Don't forget that the current sieve started at, I think, n~2M, and now, a year and a half later, n has only increased 5M and the 90% sieve point is already at ~305T. Plus, we now have much faster sieve clients than a year and a half ago.

    - I think at least another prime will be found before n=15M, and AFAIK sieving 10 k's is faster than sieving 11 k's.

    - A factor found by sieving is not only a saved PRP test, but also a saved P-1 factoring test (considering P-1 factoring will always be a bit ahead of PRP and people will keep their results.txt updated).

    - The factor density is still high enough to sieve 1M<n<20M even deeper.

    Conclusion: I think it's too early to start sieving n>20M; it would be a waste of computing power. Currently, it's much more useful to invest that power in sieving 1M<n<20M deeper and in P-1 factoring (which will become more valuable as n increases).

  32. #32
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Originally posted by Troodon
    Please keep in mind these things:

    - As pointed out here earlier, as n increases, PRP tests will take much longer, even with better clients and computers. Starting the new sieve when we're, let's say, at n=15M will still give us a lot of time for sieving before reaching n=20M. Don't forget that the current sieve started at, I think, n~2M, and now, a year and a half later, n has only increased 5M and the 90% sieve point is already at ~305T. Plus, we now have much faster sieve clients than a year and a half ago.
    I think this sounds quite convincing. Assuming we really start when PRP is at n=15M, I'm not sure whether we need to discuss the exact starting point all that thoroughly, as there will be a lot of changes (PC architecture, number of members, algorithms and clients, ...) during the next 8M. However, a starting point of n=8M would mean that we have to take precautions within the next couple of weeks.

    - I think at least another prime will be found before n=15 M, and AFAIK sieving 10 k's it's faster than sieving 11 k's.
    But PRPing 10 k's is faster than PRPing 11 k's, too. (edit: Hm, :_slap: is not really the right icon --> )

    - A factor found by sieving is not only a saved PRP test, but also a saved P-1 factoring test (considering P-1 factoring will be always a bit ahead than PRP and people will keep updated their results.txt ).
    When you consider P-1 factoring, there is a certain chance that only a factoring test is saved but no PRP test, as the factoring would have found a factor anyway.
    I don't think we should add the factoring effort here. After all, a successfully factored k/n pair is better than a PRP'd one, as it can be re-proven in no time...

  33. #33
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Originally posted by smh
    You expected only 25% slowdown? Or 25% of the original speed?.

    The range is about 9 times larger (compared to 3-20M) so i would expect about 1 third of the speed, not 75%
    I know it was a bit of wishful thinking, but I had my reasons.

    1m-20m speed is 195 kp/sec and 300k-3m speed is 238 kp/sec on my machine.

    This suggests an 18% slowdown for a range that is ~7 times larger.

    Another data point which increased the odds was 3m-20m, which is only 1% faster when compared to 1m-20m.

    I was thinking maybe the same ratio would hold for any choice of ranges. Unfortunately, this promising ratio ceases to exist when we get to really large range sizes.

    I hope it's clear now why I thought so.

  34. #34
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    Originally posted by Mystwalker
    But PRPing 10 k's is faster than PRPing 11 k's, too. (edit: Hm, :_slap: is not really the right icon --> )
    Yes, of course, but I said that from the sieve point of view. Why start sieving n>20M now for 11 k's (with at least one of them "useless") if later we can sieve 10 k's (or fewer, all useful) faster, still with enough time to sieve very deep before PRP reaches n=20M?
    The impact of a new prime on PRP speed depends on the k's "weight". Does this weight have an impact on the sieve speed increase, or would removing one k, no matter which, give the same sieve speed increase?
    PRPing gets slower as n increases, while the sieve gets "slower" (I mean the factor density decreases) as the sieve point increases, so if we start the new sieve at 0T we will be removing factors much faster than we are now at the "normal sieve".

  35. #35
    Nuri, do you still have those .dats? If you don't mind, I would like a copy of some of them just for my own testing purposes. Perhaps on different machines or with different clients we would get different results. Plus, I'd like to see the results first hand.

  36. #36
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Yep, I haven't deleted them yet.

    Keroberts, it would be great if you could try the 20m-100m and 20m-200m on a 512 MB RAM machine.

    Sizes of the dat files (zipped) are:

    300k-3m: 142 Kb
    100.3m-103m: 142 Kb
    10m-20m: 510 Kb
    3m-20m: 863 Kb
    1m-20m: 963 Kb
    20m-50m: 1,515 Kb
    20m-100m: 4,024 Kb
    20m-200m: 10,902 Kb

    Total: ~18.6 MB

    PS: The 20m-200m file is sieved to p=2027, all others to p=1,000,000.


    You can grab them here, under the files section. Simply sign up (and/or sign in if you are already using yahoogroups for other reasons), and join the group to access the files section.


    EDIT: There's also 20m-80m: 3,022 Kb, which I'm not putting up due to space restrictions at yahoogroups.

    EDIT 2: Oops. It does not allow files larger than 5120 Kb. So I'm posting 20m-80m instead of 20m-200m.
    Last edited by Nuri; 05-15-2004 at 09:33 PM.

  37. #37
    Not great results, but I expect there is some algorithmic reason that we are encountering this problem. I believe we just need to hear from Mikael and see what he has to say about this.


    Edit: it's definitely inside the proth_sieve program. I used SoBSieve 1.28 and got a speed of around 78,000 on an Athlon 2400. It's slow, but it's a drastic improvement. It is encouraging, plus it's pumping out about 1000 factors per minute.

    That was with the 20m-100m dat.

    If you can, I'd like to try with a 1m-20m dat, as I believe adding in the lower range should not affect the speed much.

    After finishing the 1G to 2G range I tried to do the 0-1G range. It ran for a little bit until I paused it. After that the system froze, and every time I try to start the program since, it says the sob.dat is invalid (not enough k values). I'm stuck on this one now. If you'd like the factors found in the 1G range, I can send them to you. The fact.txt file I created is 5.5 MB. A lot of factors in there.
    Last edited by Keroberts1; 05-17-2004 at 10:24 PM.

  38. #38
    OK, I've solved some of my own problems: I've created a .dat file with the first 1G sieved out, I have a factors list for the 1-2G range, and I'm working on one for the 2-3G range. Nuri, would you be able to contact mklasson about proth_sieve being modified to handle this range? I don't by any means wish to stop the regular sieve; I just want to start the first 3T of this one (by myself if need be). After that has been done, I'd like to try to get Louie to reactivate 31337 so that we can start contending for a record prime again.

    All I need is some info on how to combine lists of factors and dat files to create a smaller dat file (a rough sketch of the idea is below). The first 1G I was able to eliminate from the dat because when you input 0 and the bottom bound in SoBSieve, it automatically writes over the old .dat. After the first hundred or so G have been sieved, I expect the extremely large size of the dat file to become a bit more reasonable. Perhaps the siever will even speed up a bit without having to strain the active memory so much.
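    For the combining step, here's a rough sketch of the bookkeeping involved, under big assumptions: it works on a plain-text "k n" candidate list and on fact.txt lines of the form "p | k*2^n+1", NOT on the real sob.dat format, so an actual merge would still need a tool that understands that format (the file names are made up for illustration):

        import re

        # Hypothetical formats: candidates.txt has one "k n" pair per line,
        # fact.txt has lines like "1234567 | 21181*2^99990+1".
        FACTOR_RE = re.compile(r"\|\s*(\d+)\*2\^(\d+)\+1")

        def remove_factored(candidates_path, factors_path, out_path):
            # Collect every (k, n) pair that already has a known factor.
            factored = set()
            with open(factors_path) as f:
                for line in f:
                    m = FACTOR_RE.search(line)
                    if m:
                        factored.add((int(m.group(1)), int(m.group(2))))
            # Write out only the candidates that survived.
            with open(candidates_path) as f_in, open(out_path, "w") as f_out:
                for line in f_in:
                    k, n = map(int, line.split())
                    if (k, n) not in factored:
                        f_out.write(line)

        remove_factored("candidates.txt", "fact.txt", "candidates_new.txt")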

  39. #39
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    It's been a good while since I touched proth_sieve now, but I believe there are a few internal parameters that could be tweaked to provide better performance for larger n ranges.

    I agree with the people who said now is not the time to start sieving above 20M, though. It just seems a bit premature and "pointless" right now. PRPing will be there in what, 5-10 years? 2014? Are flying cars finally commonplace then? Not to mention the possibility of a new algorithmic idea being discovered.

  40. #40
    Unholy Undead Death's Avatar
    Join Date
    Sep 2003
    Location
    Kyiv, Ukraine
    Posts
    907
    Blog Entries
    1

    well, how about...

    maybe we should use some out-of-range factors?

    http://www.free-dc.org/forum/showthr...&threadid=5323
    wbr, Me. Dead J. Dona \

