
Thread: Error Rates Thus Far

  1. #41
    Moderator vjs
    Join Date
    Apr 2004
    Location
    ARS DC forum
    Posts
    1,331
    Without looking too deeply into what you wrote, it sounds like you have inverted the dependency of your equation on prime density.

    Since the prime density is higher at lower n, it should make f larger (bringing second pass closer to first pass); this is certain.

    A while back we started this exact discussion, and the equation we arrived at was very simple.

    t(firstpass)=[t(secondpass)/error rate]*prime density[n(secondpass)/n(firstpass)]

    Perhaps this is what you already have... all you have to do is base t on n.

    The above is pretty simple: first ignore prime density and assume the error rate is, for example, 5%.

    This would mean you should be able to run 20 second-pass tests in the time it takes to complete one first-pass test. If it were a 10% error rate, 10 second-pass tests in the time for one first-pass test.

    Now consider the prime density, since the prime density is obviously higher for lower n:

    prime density[n(secondpass)/n(firstpass)] > 1

    This would decrease the number of second-pass tests required in a first-pass test time period to yield equality.

    Hope the above makes sense... Does your equation match before simplification?
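    As a rough illustration (assuming, purely as a heuristic, that prime density scales like 1/n for these candidates, which is not established in this thread), the break-even count could be sketched like this:

        # Sketch of the break-even count described above.
        # Assumption (a heuristic, not from this thread): prime density ~ 1/n.

        def breakeven_secondpass_tests(error_rate, n_firstpass, n_secondpass):
            """Second-pass tests needed per first-pass test time period so that
            both passes find primes at the same rate."""
            # Ignoring density: a missed prime survives with probability error_rate,
            # so 1/error_rate second-pass tests match one first-pass test.
            base = 1.0 / error_rate
            # Density is higher at lower n (heuristic density ~ 1/n), so each
            # second-pass test is worth more and fewer are needed.
            density_ratio = n_secondpass / n_firstpass  # = density(first)/density(second) < 1
            return base * density_ratio

        print(breakeven_secondpass_tests(0.05, 10_000_000, 10_000_000))  # 20.0 (density ignored)
        print(breakeven_secondpass_tests(0.05, 10_000_000, 6_000_000))   # 12.0 (density included)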

  2. #42
    Member
    Join Date
    Dec 2002
    Location
    Eugene, Oregon
    Posts
    79
    Yes, that seems to match what I was doing in my second model in the previous post. What I did was use the fact that the probabilities of finding a prime should be the same to compute f, which tells you where second pass would be just keeping pace with first pass. So if f = .6, doing second pass at 6 million should give the same probability of finding a prime in a given time as doing a first-pass test at 10 million.

    The first model did not take into account the probability of first pass finding another prime at higher n if the first prime happened to have been missed because of an error. This is what causes f to decrease in the second model. Now, if the error rate is 8%, f decreases from .666 to .443. If the effective error rate can be reduced to 5% by identifying problematic machines and results with runtime errors, then f decreases from .612 to .375, which means that at present, second-pass tests are way ahead of where they should be for maximum efficiency.
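    For a rough cross-check, assuming test time grows like n^2 and prime density like 1/n (simplifications, not necessarily the same model as above), the break-even condition gives f = error_rate^(1/3), which lands in the same ballpark as those figures:

        # Break-even f under assumed scalings: t(n) ~ n^2, density(n) ~ 1/n,
        # second pass at n2 = f * n1, error rate e.
        # Balance: density(n1)/t(n1) = e * density(n2)/t(n2)  =>  f = e ** (1/3)

        def breakeven_f(error_rate):
            return error_rate ** (1.0 / 3.0)

        for e in (0.08, 0.05):
            print(e, round(breakeven_f(e), 3))
        # 0.08 -> 0.431, 0.05 -> 0.368 -- close to the .443 and .375 quoted above.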

    Look at the first 5 exponents eliminated by SoB: if those exponents had been missed by defective first-pass tests, chances are good that at least two of them would have been eliminated by now by subsequent tests at higher n. If you don't necessarily care about finding the smallest n yielding a prime, lower second-pass limits are the way to go, but if you want the smallest n, the higher f is the one to choose. I personally can see an argument for something in between. An alternative way of looking at it is to ask how much of the project's resources should be devoted to second pass: at f = .48, second pass gets 10% of project resources; at f = .63, 20%; and at f = .69, 25%.
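    Those resource splits are consistent with a simple accounting, assuming per-test cost grows like n^2 and that "keeping pace" means second pass advances through n at f times the first-pass rate; under those assumptions second pass's share comes out to f^3/(1+f^3):

        # Second-pass share of resources if second pass runs at n2 = f * n1,
        # advances f units of n per unit of first-pass n, and per-test cost ~ n^2.
        # Resource rate ~ n^2 * (rate of advance), so the split is f^3 : 1.

        def secondpass_share(f):
            return f**3 / (1 + f**3)

        for f in (0.48, 0.63, 0.69):
            print(f, round(secondpass_share(f), 2))
        # 0.48 -> 0.1, 0.63 -> 0.2, 0.69 -> 0.25, matching the splits above.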

