
Thread: Why are we sieving?

  1. #1
    Senior Member dmbrubac's Avatar
    Join Date
    Dec 2002
    Location
    Ontario Canada
    Posts
    112

    Why are we sieving?

    This may be a silly question, but what the heck.

    My understanding - correct me if I'm wrong.
    Sieving finds factors like a shotgun blast - you don't know where the factors will be.
    It has the advantage of being relatively cheap computing-wise, but it is not too thorough.
    Factoring finds factors just ahead (hopefully!) of the PRP wave.
    It has a higher computing cost but is more thorough. In fact, by adjusting the bounds we control the relationship between computing cost and thoroughness.

    Two methods to find the same values.

    There seems to be an excess of resources on sieving and not enough on factoring, so why do we sieve at all, given we have
    a) a method to find factors valuable to the project now, not in 2030.
    b) people who are willing to work on a part of the project that will not produce a prime.

    Couldn't we at least ask some of them (half of them?) to move over to factoring to keep us a little further ahead of the surging PRP wave?

  2. #2
    I'm no expert, but here is my understanding of it:

    Sieving IS thorough. ALL possible factors are tested by trial division. If a range contains a factor, sieving finds it.
    Factoring doesn't guarantee that it will find a factor in a certain range, even if one is really there.
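
    To see what the sieve actually checks for one candidate (a minimal sketch as I understand it - the real sieve client batches this over huge ranges of p far more cleverly):
    Code:
      # p divides k*2^n + 1 exactly when k*2^n == -1 (mod p), so one
      # modular exponentiation settles it for a given k/n pair.
      def divides(p, k, n):
          return (k * pow(2, n, p) + 1) % p == 0

      print(divides(3, 4847, 2))  # True: 3 divides 4847*2^2 + 1 = 19389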

    Computation-wise, there are also some differences.
    P-1 factoring needs lots of memory to be fast and efficient. Sieving doesn't.

    Also, there is the fact that some processors are better at sieving, some at factoring.

    And: factoring at the edge of the testing window is only suited for those who know that their computer will be available enough and fast enough. Since sieving mostly produces factors that will only be useful far into the future, it is safe to put slow computers on it.

  3. #3
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Seems like the old question about a resource allocation model.

    Well, one has to consider that Athlons, P3s & Co. are very good at sieving, but not that fast at factoring.
    At least some time ago, sieving on such machines eliminated more upcoming tests than factoring would have. I don't know how the situation is now, as factors in sieving get sparser and sparser (more and more sparse? ), whereas factoring time doesn't increase that much over time in our situation.

    P4s, though, should be used for factoring, as they are quite slow at sieving (even PSB800 keeps them behind other PCs), but very fast at factoring.
    Well, it would be nice to have a client for HT systems which doesn't let 2 tests run stage 2 in parallel, as this wastes a lot of computing power. A third test in stage 1 would be better...

  4. #4
    Senior Member dmbrubac's Avatar
    Join Date
    Dec 2002
    Location
    Ontario Canada
    Posts
    112
    Yes, you are absolutely correct - it's all about resource allocation.

    larsivi, you make some good points and bring info to the table that I hadn't considered.

    I don't agree with the 'factoring at the edge of the window' argument though. The only reason we are at the edge of the window is that factoring is starved for resources. Five more P4's would quickly move us into safer territory. 20 more and we could adjust our bounds to be more thorough and still be pulling ahead.

    I will grant you that there is value in sieving, contrary to the thread title, but consider that I just found 437621215726002191369 | 4847*2^5803263+1. How many of us will still be alive when sieving gets there? PRP is there tomorrow!
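
    If anyone wants to double-check a factor like that one, a single modular exponentiation does it (just a verification snippet, not part of any client):
    Code:
      # prints True exactly when p divides 4847*2^5803263 + 1
      p = 437621215726002191369
      print((4847 * pow(2, 5803263, p) + 1) % p == 0)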

  5. #5
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    I agree, it's all about that resource allocation model again.

    And just a few things that haven't quite already been said.

    Factoring will only find 'smooth' factors. Yes, I know that by adjusting B1 and B2 it can test for progressively less smooth ones, but it won't find a lot of the sieved factors.
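
    To illustrate what 'smooth' means, here is a toy stage-1 P-1 (my own sketch, nothing like the optimised client we actually run). It can only pull out a prime factor q of N when q-1 splits entirely into prime powers up to B1:
    Code:
      from math import gcd

      def primes_up_to(limit):
          flags = [True] * (limit + 1)
          flags[0:2] = [False, False]
          for i in range(2, int(limit ** 0.5) + 1):
              if flags[i]:
                  flags[i*i::i] = [False] * len(flags[i*i::i])
          return [i for i, f in enumerate(flags) if f]

      def pm1_stage1(N, B1):
          a = 2
          for p in primes_up_to(B1):
              pe = p
              while pe * p <= B1:   # highest power of p not exceeding B1
                  pe *= p
              a = pow(a, pe, N)     # a = 2^lcm(1,...,B1) mod N when done
          g = gcd(a - 1, N)         # catches q when q-1 is B1-smooth
          return g if 1 < g < N else None

      # 23 divides 2323, and 23-1 = 2*11 is 11-smooth, so B1 = 11 suffices:
      print(pm1_stage1(2323, 11))  # -> 23
    Raising B1 (and B2 for stage 2) widens the net, which is exactly the bounds trade-off above.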

    Sieving does factoring a little favour. It allows the 'already checked to...' to be increased. Currently we're at 2^47, soon that'll be 2^48. Not sure how much of a difference this makes, but it must be something.
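
    As a back-of-envelope answer to my own 'not sure how much': a standard heuristic (my assumption, not project data) says a random candidate has a prime factor between 2^a and 2^(a+1) with probability about ln((a+1)/a):
    Code:
      from math import log
      for a in (47, 48):
          print(f"2^{a}..2^{a+1}: ~{100 * log((a + 1) / a):.2f}% of candidates")
      # ~2.11% then ~2.06% - each extra bit of sieve depth still removes
      # about 2% of the remaining tests, just a little less each time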

    Sieving and P-1 live in harmony. I agree that P-1 needs more resources. If I had more suitable resources, I would do more P-1.

    Looked at another way, I have a dusty old P2 that spends most of its life acting as a debug console with HyperTerminal, and it found
    Code:
      255.576T  67607   5937147  28153.104  Sat 20-Mar-2004
    It has 128 MB of RAM, and if it wasn't sieving it would be doing nothing.
    Last edited by MikeH; 03-29-2004 at 11:55 AM.

  6. #6
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    It has 128 MB of RAM
    Ah, that's another reason. Sieving takes considerably less RAM. So all machines with 128 MB (or less) are not really suited for factoring.

  7. #7
    Also, at the current depth we are producing almost as many factors in the near-future range as factoring is, in addition to the hundreds of factors in the distance that will eventually be worth a lot more to the project, because those tests will take much longer to complete than the current ones. In addition, I read somewhere in the forum that factoring, once completed for all n values up to 20,000,000, will only yield about as many factors as sieving pulls out in a few months. If that is true, it is a major reason to sieve at least until factor density drops off a lot more.

    NURI: I'm not sure if there was a mistake in my computer (or my command entry), but assuming the rate of factors is 1%, even if we do P-1 factoring of everything above n=6m up to n=20m (~409,000 k/n pairs), we'll end up with roughly 4,100 factors.
    Also, it is important to keep in mind that PRP is moving forward at a fast enough rate that any factors below 10,000,000 are for the most part guaranteed to save a test within 2 years, and only some of those would have been found by factoring. The only thing that could change these numbers is accounting for primes that may be found in the meantime, but as you may remember we once went several million n without finding a prime, and as we get farther into the spectrum primes will become rarer and rarer. The vast majority of the factors will eventually save 1 if not 2 tests. Granted, factoring is good at finding those huge factors, say over 1000T, that sieving will never get to.
    Last edited by Keroberts1; 03-29-2004 at 03:30 PM.

  8. #8
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Well, here I have to throw in a big plus of factoring (or rather, a reduced advantage of sieving):

    When a prime with n=x is found for a certain k, every factor for that k with n>x is irrelevant.
    Thus, quite a big share of the factors found by sieving won't turn out to be important.
    As factors found by factoring have an n close to the upper bound of the current test window, we won't lose much there.

  9. #9
    How many primes do you expect before 20,000,000? The 7th prime's window extends past 10 million and the 8th's goes past 16 million. It is of course possible that we could find more than 2 primes in this range, but it is just as possible that no more will be found before we pass 20,000,000 (I hope I'm wrong on this though). Even if we were to find the 4 primes with the heaviest weights, three quarters of the factors found would still count.

  10. #10
    Senior Member dmbrubac's Avatar
    Join Date
    Dec 2002
    Location
    Ontario Canada
    Posts
    112
    Don't get me wrong everyone - I'm not suggesting we abandon sieving altogether, just that we shift some (20%?) of the sieve resources over to factoring.

    Even if we don't do anything, at least we had a good discussion today.

  11. #11
    Very good and very active - I've spent almost the last hour poring over old posts for info to back myself up. A question though: when the sieve depth is switched from 48 to 49 and so on, how is factoring efficiency affected? And don't worry, the time will come when sieving is less efficient than factoring. Sieving will never reach 1000T, at least not until the PRPing gets far down the line and even the factoring starts taking vast amounts of time to complete a few tests. That won't happen for a while though. I personally utilize a couple of machines that are not connected to the internet for sieving, plus a couple of low-memory machines, so these will probably stick with sieving for the life of the project.

  12. #12
    I'm not suggesting we abandon sieving altogether, just that we shift some (20%?) of the sieve resources over to factoring.
    As long as factoring stays sufficiently ahead of PRP, so you can choose optimal parameters without being run over, there's no need to shift more resources to factoring.

    It might be a good idea though to temporarily put some extra effort into factoring, to create a buffer of a couple of weeks over PRP. After that, just enough effort to keep that buffer is sufficient. Sieving is still (I guess) a lot more efficient for the project as a whole.

  13. #13
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    I think factoring could take at least 50% more resources. Right now it cannot keep up with PRPing (though it doesn't fall far behind). But I'd like to raise the factoring bounds to increase the likelihood of finding factors.

    This refers to P4 systems only, as others are better suited for sieving anyway...

  14. #14
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I'm planning to add a second P4 machine soon (within two weeks). This would increase my contribution from 15 k/n pairs per day to 25 k/n pairs per day (also taking into account a slight increase in bounds that I'm planning to do).

    What are your current speeds? Maybe if every contributor posts their rates, we can see how far behind we are (with the current settings) and what we need in terms of additional computing power. Just an idea.

  15. #15
    Senior Member dmbrubac's Avatar
    Join Date
    Dec 2002
    Location
    Ontario Canada
    Posts
    112
    P4 = 20 per day
    P3 = 6 per day
    Athlon = 18 per day

    Obviously the Athlon isn't too far off the P4. I leave all parameters at their default values.
    Last edited by dmbrubac; 04-03-2004 at 08:50 AM.

  16. #16
    Senior Member Frodo42's Avatar
    Join Date
    Nov 2002
    Location
    Jutland, Denmark
    Posts
    299
    P4-2.4GHz: approx. 12-30 tests/day, varying the factor value from 1.2 to 1.8 (I change it depending on how close PRP is).

  17. #17
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    P4: ~1 test/hour * 10 hours/day = 10 tests/day

  18. #18
    Hater of webboards
    Join Date
    Feb 2003
    Location
    København, Denmark
    Posts
    205
    Laptop (P4@2GHz): ~8 tests/day
    Other P4 (doing lots of other things - 2*2GHz-HT): ~15 tests/day (varies a lot)

    Using default settings.

  19. #19
    How does shifting the presieved level change the efficiency of the factoring when you go from 47 to 48? Are the bounds decreased or increased? Does this mean larger factors will be found as a result? Fewer factors? Maybe more? And will going from 48 to 49 have an additional effect of this sort - greater, lesser, or the same?

  20. #20
    Comments, anyone?

  21. #21
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I think I discussed this when you asked a similar question on the P-1 coordination thread.

    If you find that insufficient for your purposes, I remember Louie discussing it in more detail at the Riesel Sieve forum.

  22. #22
    Thanks, that was helpful. I was also very excited to see some of the speed improvements that Louie was talking about in that thread. Are these amazing speedups really possible?

  23. #23
    Right now I am at 38-40 min per k/n pair using 48 1.5 256.

    Since they were taking around 48 min per k/n pair using 47 1.5 256, does that mean I should find more factors using 47 instead of 48? It's my understanding that 47 is almost all sieved, so 48 would be optimal. Is this correct?

    Looks like I am back to around 48 min using 48 1.5 256. I must've had some other setting when I was getting 40 min times. Now I am getting around 71 min while also sieving, and I'm getting 275-300 kp/s on my sieving. It seems like sieving and factoring on my P4 3.2GHz is more productive than just factoring.
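
    Rough arithmetic on that trade-off (assuming the per-pair times above and nothing else changing):
    Code:
      # pairs/day at each rate; the slower setup also yields 275-300 kp/s of sieving
      for label, mins in (("factoring only", 48), ("factoring + sieving", 71)):
          print(label, round(24 * 60 / mins, 1), "pairs/day")
      # -> 30.0 vs 20.3, so the sieving throughput costs ~10 P-1 pairs/day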
    Last edited by Paperboy; 05-11-2004 at 09:30 PM.
