
Thread: PSP Sieve + SOB Sieve

  1. #1
    Okay. I only did a few, but I won't do any more. I'm curious, though, about what it means for them to be duplicates. Wouldn't that mean they would all validate as "0 of the results were new results and saved to the database."? Some of the ones I put in did indicate that they were new results (i.e., the value was not 0 in the new-results line).

    What exactly are the three files: fact.txt, factexcl.txt and factrange.txt?

  2. #2
    Moderator Joe O
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Quote Originally Posted by BlisteringSheep
    Okay. I only did a few, but I won't do any more. I'm curious, though, about what it means for them to be duplicates. Wouldn't that mean they would all validate as "0 of the results were new results and saved to the database."? Some of the ones I put in did indicate that they were new results (i.e., the value was not 0 in the new-results line).

    What exactly are the three files: fact.txt, factexcl.txt and factrange.txt?
    Dat is the question! And the answer.
    If a k n pair is in the dat, a factor found for it is written into fact.txt.
    Else, if the n value is outside the dat's nmin and nmax, it is written into factrange.txt.
    Else, it is written into factexcl.txt.

    So a factor ends up in factexcl.txt when its k n pair was left out of the dat because a factor had already been found for it before the dat was created.
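    The routing Joe describes can be sketched like this. The function and variable names (`route_factor`, `dat_pairs`, `nmin`, `nmax`) are illustrative, not the sieve's actual code:

```python
def route_factor(k, n, dat_pairs, nmin, nmax):
    """Return the file a newly found factor for the (k, n) pair goes to."""
    if (k, n) in dat_pairs:
        return "fact.txt"        # pair is still in the dat: a useful new factor
    elif n < nmin or n > nmax:
        return "factrange.txt"   # n falls outside the dat's range
    else:
        return "factexcl.txt"    # n is in range, but the pair was excluded
                                 # (a factor was found before the dat was built)

# Example: a dat covering n in [991, 50_000_000] with two live pairs.
dat_pairs = {(21181, 12_345_678), (22699, 23_456_789)}
print(route_factor(21181, 12_345_678, dat_pairs, 991, 50_000_000))  # fact.txt
print(route_factor(21181, 60_000_000, dat_pairs, 991, 50_000_000))  # factrange.txt
print(route_factor(21181, 1_000_000, dat_pairs, 991, 50_000_000))   # factexcl.txt
```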

    The submission database does not check the dat. It only checks itself. If no previous factor has been submitted for a k n pair, the new factor is accepted. If a factor has been previously submitted for a k n pair, the new factor is not accepted.

    Since the submission database has no knowledge of factors submitted before it was created, it can and does accept factors that are really duplicates. These are detected and marked as duplicates during the scoring process.
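    A minimal sketch of that first-factor-wins rule, assuming a simple key-value store rather than the real database schema:

```python
submitted = {}  # (k, n) -> factor: the submission database's own records

def submit(k, n, factor):
    """Accept a factor only if the db has none yet for this k n pair."""
    if (k, n) in submitted:
        return False              # a factor was already submitted: rejected
    submitted[(k, n)] = factor    # first submission for this pair: accepted
    return True

print(submit(21181, 1_000_000, 17))   # True  - counted as a "new result"
print(submit(21181, 1_000_000, 19))   # False - duplicate within the db
# A factor found *before* the db existed is invisible here, so the first
# submit() above is accepted even if it is really a duplicate; that is
# only detected later, in scoring.
```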
    Joe O

  3. #3
    Joe, thanks so much for the clear explanation. Is there any value in keeping the factrange factors for potential use by other (possibly future) efforts? Or is it just too much trouble for the benefit? Or would they never be useful for anything?

    Also, why doesn't the submission db check the dat? Would it just add too much CPU/memory load?

    Yes, I'm trying to actually understand. You know how dense I can be.

  4. #4
    Moderator Joe O
    Quote Originally Posted by BlisteringSheep
    Joe, thanks so much for the clear explanation. Is there any value in keeping the factrange factors for potential use by other (possibly future) efforts? Or is it just too much trouble for the benefit? Or would they never be useful for anything?

    Also, why doesn't the submission db check the dat? Would it just add too much CPU/memory load?

    Yes, I'm trying to actually understand. You know how dense I can be.
    Keeping factrange will help in initially reducing the size of the new dat for the new range, but it will not reduce the sieving effort beyond that. For example, we would still have to sieve all of 50M to 100M if we need to continue n above 50M. There is no magic number, like 51M or 52M or whatever, that we could use as the minimum for the new effort instead of 50M.
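    Joe's point can be sketched as follows, assuming plain (k, n) pair lists rather than the real dat format: pairs already eliminated by factrange factors are dropped from the new dat, but the sieve still has to cover the full n range.

```python
def build_new_dat(candidate_pairs, factrange_pairs):
    """Drop candidate pairs that a kept factrange factor already eliminated.
    The dat shrinks, but nmin/nmax for the sieve are unchanged."""
    return [pair for pair in candidate_pairs if pair not in factrange_pairs]

# Hypothetical candidates for a new 50M-100M range.
candidates = [(21181, 51_000_000), (22699, 52_000_000), (24737, 53_000_000)]
already_factored = {(22699, 52_000_000)}   # from the kept factrange.txt
print(build_new_dat(candidates, already_factored))
# [(21181, 51000000), (24737, 53000000)]
```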
    I don't know why SB doesn't check the dat, but PSP doesn't check as the factors are submitted because of CPU and memory constraints and a db design issue. PSP does, however, check before storing and scoring the factors.
    Joe O
