Page 11 of 18
Results 401 to 440 of 709

Thread: Sieve Client Thread

  1. #401
    yes, it is possible. although unlikely at such high factor levels, it has happened.

    5018720299 divides both 19249*2^5609822+1 and 21181*2^13471484+1.

    This was sieved out with an older version of the sieve, such as v1.04, but there's no reason why the current version wouldn't still check for those.

    -Louie

  2. #402
    Originally posted by Keroberts1
    is it possible for one value to eliminate several different N values, or maybe N values for different Ks? I don't need the technical info on this, just a yes or no, but also, if the answer is yes, does the sieve program check for this?
    The answer is yes, yes, and yes.

    Definitely one p can eliminate n values for different ks. It is also true that one p can eliminate many ns for one k value - particularly when p is small (for example, p=3 eliminates half of the n values for these particular k values). Fortunately, the software finds all of the eliminated values.
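As a quick numerical illustration (a Python sketch, not part of any sieve client): for p = 3, each k not divisible by 3 has the n values of one whole parity class eliminated, which parity depending on k mod 3.

```python
# Illustration (not SoBSieve code): one small p eliminates whole
# residue classes of n for several k values at once.

def eliminated_residues(k, p, ord_p, n_limit=1000):
    """Residues of n modulo ord_p (= order of 2 mod p) with p | k*2^n + 1."""
    return sorted({n % ord_p for n in range(1, n_limit)
                   if (k * pow(2, n, p) + 1) % p == 0})

# order(2, 3) = 2, so divisibility by 3 depends only on n mod 2
for k in (4847, 5359, 10223):   # three of the remaining k values
    print(k, eliminated_residues(k, 3, 2))
```

For each k, exactly one parity of n is removed by p = 3 - the "half of the n values" mentioned above.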


    As for the software finding factors that have already been excluded.... I will look into it. Are we sure that this isn't a case of different SoB.dat files being used?

    Regards,

    Paul.

  3. #403
    Member
    Join Date
    Feb 2003
    Location
    Lucerne, Switzerland
    Posts
    30

    version for big factors

    Hi,

    Is there a version of SoBSieve available which is maybe a little bit slower than 1.33, but can handle factors beyond 281500 G like version 1.22?
    At the moment 1.22 is only half as fast as 1.33.

    Thanks,
    Reto

  4. #404

    Re: version for big factors

    Originally posted by biwema
    Hi,

    Is there a version of SoBSieve available which is maybe a little bit slower than 1.33, but can handle factors beyond 281500 G like version 1.22?
    At the moment 1.22 is only half as fast as 1.33.

    Thanks,
    Reto
    Hi Reto,

    You know...thinking about it, the math in the current version should be good up to 2^60. I will check, and if it is the case (and I am 90% sure that it is) I will ease the restriction.

    Paul.

  5. #405
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    From the gap analysis file:

    SER+RAT: 21011.96G - 21031.60G, size: 19.65G, est fact: 22 (21011958908903-21031604828711)
    ( 19.65G) : 21000-21100 Mystwalker
    RATIO: 21055.60G - 21070.94G, size: 15.33G, est fact: 10 (21055604737771-21070935933361)
    ( 15.33G) : 21000-21100 Mystwalker
    I wonder why the estimation says 22 factors for a 20G range (--> 1.12 factors/G), but only 10 factors for a 15G range (--> 0.65 factors/G). The size difference of the factors is not big enough for this change to be due to the growing factor sparsity, right? Is it SER+RAT vs. RATIO?

  6. #406
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    I wonder why the estimation says 22 factors for a 20G range (--> 1.12 factors/G), but only 10 factors for a 15G range (--> 0.65 factors/G). The size difference of the factors is not big enough for this change to be due to the growing factor sparsity, right? Is it SER+RAT vs. RATIO?
    The estimate of the number of factors is based on the distribution of factors that immediately precede the identified gap (I think; I don't have the code in front of me). To be honest, this is a very, very crude estimate, and shouldn't be taken too seriously. The final estimate of factors in the whole range is even cruder.

    ...and yes, this is kind of characterised by the SER+RAT vs. RATIO distinction. Where there is a SER+RAT, there is a better probability that this really is a hole, and since the estimate of factors in the gap comes from the SERIES detection, which hasn't triggered in the second case, it almost definitely means a lower number of factors will be declared.

    I've also just noticed that the "gap" between the two gaps is only 25G. If there are fewer than 30 factors in this 25G range, then the size of the first gap will in fact distort the estimate of the second gap. If you submit factors for the first gap, I would guess that this will change the estimate of the second gap.

    But to be honest, this gap finding is not an exact science, and I think I've learned quite a lot since I originally wrote it, so when I get time I'll have a go at improving the detection and estimation algorithms.

    Cheers,
    Mike.

  7. #407
    I don't know why I decided to give it a try, but I started sieving from 1 billion to 2 billion just to see how my P4 performed (horrible, as expected). I also discovered an interesting anomaly. I submitted the first 6 factors it found, and it said 3 of them were new. This is very weird, since I thought this range was sieved a long time ago. If these are real new factors, then the old program used to factor these numbers was missing many factors. If not, then many, many false factors are being produced - they were all verified, though. I'm going to finish the range and submit all of the factors. If someone with access to the information could look them over and let me know whether this is just an anomaly in the new program or whether these really are missed factors from the original range, I'd appreciate it.

  8. #408
    the factors were
    1000186877 | 55459*2^6790366+1
    1000455791 | 55459*2^6353590+1
    1000931203 | 22699*2^10251478+1
    1001422771 | 55459*2^18342478+1
    1001707669 | 22699*2^8018542+1
    1001956961 | 55459*2^7158550+1
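Each reported factor is cheap to check with modular exponentiation; a minimal sketch of the kind of verification the submission script performs (the function name is mine):

```python
def is_factor(p, k, n):
    """True if p divides k*2^n + 1, checked via modular exponentiation."""
    return (k * pow(2, n, p) + 1) % p == 0

# the six factors reported above, as (p, k, n) triples
factors = [
    (1000186877, 55459, 6790366),
    (1000455791, 55459, 6353590),
    (1000931203, 22699, 10251478),
    (1001422771, 55459, 18342478),
    (1001707669, 22699, 8018542),
    (1001956961, 55459, 7158550),
]
print(all(is_factor(p, k, n) for p, k, n in factors))
```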

  9. #409
    OK, this is generating way too many factors to enter them all, so I'm going to stop at
    1079729251 | 55459*2^4631734+1
    If it turns out these are real factors that have been missed before and we need to resieve the earlier range, then I'd like to reserve the first 500 now.
    Thanks.
    someone please reply to this soon.

  10. #410
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Which version of SoBSieve did you use? 1.31 (or was it 1.30?) produced quite a lot of "excluded" factors. I'm not totally sure what that means, but these factors are not relevant for SoB - although they get recognized as valid.

  11. #411
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    @MikeH:

    I found a strange error in your scores file. According to it, I have only 100G not declared [complete], but there are two 100G ranges:

    21000-21100 Mystwalker
    22950-23050 Mystwalker


  12. #412
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    someone please reply to this soon.
    I assume you are using SoBSieve 1.32 or 1.33?

    In which case, I think you are confirming the problem that the scoring is indicating - that there is a small problem with these clients generating factors that are not in the sob.dat file. Make a quick note of your various stats, then take a look tomorrow to see what has increased. You can also see exactly what has been excluded from the link from here.

    I think you'll see that just about everyone who is submitting factors is getting excludes. Prior to these latest clients, the only people to have generated excludes were Nuri and myself, and that was because of a duff sob.dat file that only he and I used for a short period. All the others are new.

    if it turns out these are real factors that have been missed before and we need to re-sieve the earlier range then I'd like to reserve the first 500 now.
    This is a little premature. I suggest we analyse the data tomorrow before embarking on any re-sieving effort. I think these will turn out to be factors which were eliminated with p<1G.

    Mike.

  13. #413
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    I found a strange error in your scores file. According to it, I have only 100G not declared [complete], but there are two 100G ranges:
    The allocated ranges bit of the scores relies upon me copying the contents of the sieving thread to a text file - I haven't managed to automate this bit yet. I try to update it every few days, but you may find it a number of days out of date sometimes. Sorry.

  14. #414
    I was using 1.33, and I sure do hope that these aren't real factors, but there sure were a lot of them: nearly 80% of the factors I found were verified and then saved to the database as new.

  15. #415
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    The allocated ranges bit of the scores relies upon me copying the contents of the sieving thread to a text file - I haven't managed to automate this bit yet. I try to update it every few days, but you may find it a number of days out of date sometimes. Sorry.
    No problem, I just thought this was done by a (malfunctioning) script.

  16. #416

    1.34 SoBSieve

    Hi all,

    I found the problem that was causing the extra factors to be found. It has been fixed in the attached release. I strongly recommend that you upgrade to this version!

    Regards,

    Paul.
    Attached Files

  17. #417
    Junior Member
    Join Date
    Jan 2003
    Location
    Spain
    Posts
    12
    What happens if an invalid factor is submitted? Does the script verify that the factors are real factors?

  18. #418
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    Originally posted by expinete
    What happens if an invalid factor is submitted? Does the script verify that the factors are real factors?
    These factors are not "invalid"; they are excluded because they are outside the range of the sieving effort, i.e. they were already known and excluded from the SoB.dat file(s). The script does verify that the submitted values are really factors before including them in the database. AFAIK the only problem the excluded factors cause is wasted effort.
    Joe O

  19. #419
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    the factors were
    1000186877 | 55459*2^6790366+1
    1000455791 | 55459*2^6353590+1
    1000931203 | 22699*2^10251478+1
    1001422771 | 55459*2^18342478+1
    1001707669 | 22699*2^8018542+1
    1001956961 | 55459*2^7158550+1
    These three

    1000186877 | 55459*2^6790366+1
    1000455791 | 55459*2^6353590+1
    1001956961 | 55459*2^7158550+1

    have factors less than 1G.

    These three

    1000931203 | 22699*2^10251478+1
    1001422771 | 55459*2^18342478+1
    1001707669 | 22699*2^8018542+1

    were submitted by Louie some time ago when we sieved the 1G range.

    Thus, as anticipated: no new factors, no candidates removed.

  20. #420
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    I found the problem that was cuasing the extra factors to be found. It has been fixed in the attached release. I strongly recommend that you up grade to this version!
    Many thanks Paul. Do you have the console version?

    Cheers,
    Mike.

  21. #421
    Originally posted by MikeH
    Many thanks Paul. Do you have the console version?

    Cheers,
    Mike.
    Ah, I forgot it. Here it is.

    Regards,

    Paul.
    Attached Files

  22. #422
    To explain the problem: I had some dodgy code that meant that the various k values were sharing the same bitmap in different ways. This meant that for a given k value, the bitmap would contain the n values remaining for that k as well as for other ks. Fortunately this means that all of the real factors in the range were found, but there were occasional reports of a factor being found when in fact the n was for a different k.

    An upgrade is strongly recommended because there is a minuscule chance that a factor won't be found. This can only occur if the n value for a k is removed by the detection of a factor for a different k at that n value, on the same run of SoBSieve, when the actual factor for that n is a p value that is also tested in that run, but later on. The chances are very, very slight (so I hesitate to even mention it), but they do exist. I seriously doubt that any factors have been missed, but the possibility exists.
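The effect can be mimicked with a toy model (all values and names below are hypothetical, not from the SoBSieve source): with one shared bitmap, a surviving n for one k triggers a report for a different k.

```python
# Toy model of the bug: 'remaining' n values tracked per k vs. in one
# shared bitmap. All names and numbers are illustrative, not SoBSieve code.
remaining = {67607: {2, 5, 11}, 55459: {3, 5, 14}}

shared = set().union(*remaining.values())    # buggy: one bitmap for all ks

def report_buggy(k, n):
    return n in shared                       # may fire for the wrong k

def report_fixed(k, n):
    return n in remaining[k]                 # per-k bitmap

print(report_buggy(67607, 14))   # spurious: 14 only survives for 55459
print(report_fixed(67607, 14))
```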

  23. #423
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643

    Paul

    Have you determined if indeed the client can go to p = 2^60 and eased the restriction?


    ps
    Congratulations on finding a prime. What is its current rank, 134?
    Joe O

  24. #424

    Re: Paul

    Originally posted by Joe O
    Have you determined if indeed the client can go to p = 2^60 and eased the restriction?
    That is the next thing that I am going to do. But I wanted to get this fixed version out ASAP first.

    ps
    Congratulations on finding a prime. What is its current rank, 134?
    Thank you! As I recall, 3*2^414840-1's rank was 134 when it was added, and I don't think any larger ones were added at the same time. The real thanks should go to Jean Penne for writing LLR, and Paul Underwood for organising the 3*2^n-1 search and practically forcing me to test a range (I sent him an updated NewPGen for his sieving, using some of the ideas that came out of the SoBSieve work, and he insisted that I should have a range to test ).

    But hopefully it is about time for another hit from the Seventeen or Bust search... and it is guaranteed to be groundbreaking when it comes.

    Regards,

    Paul.

  25. #425
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    quote:
    __________________________________________________
    An upgrade is strongly recommended because there is a minuscule chance that a factor won't be found.
    __________________________________________________

    About how small is that chance? 1 out of 5,000 factors?

  26. #426
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643
    If I understand this correctly, there are at the moment only 183 (k, n) pairs that would need to be retested.
    For example, 15775530631129 | 5359*2^13089790+1 was reported and excluded.
    Only the expression 22699*2^13089790+1 might have had a factor missed. If it already has a reported factor, then there is nothing else to do. If it does not, then we could try to find a factor for it now, or just wait and see whether a factor is found for it, and factor it later if none is found.
    Joe O

  27. #427
    Is there a Linux version of the siever? If there is, could someone post it? I would like to try it out on one of my systems.

  28. #428
    I have updated the sieve page to link to a site where NBeGon can be found. (Mystwalker has mirrored it while Phil's site is down).

    http://www.seventeenorbust.com/sieve/

    or direct link is http://mystwalker.de/sb/fatphil/orgmathssierpinski.html

  29. #429
    BTW, hopefully we can link directly back to Phil's site sometime soon. i'll email him to get a timetable for when his site will be online again. i don't mind linking to his statements about the war... i just mind linking to them indefinitely.

    so if you're on the forum Phil, just let us know at what point you plan to reopen your site to the world and then we can link back. something like "the day Bush calls an end to hostilities" or "the day a new Iraqi gov't is formed" or something, just so we know. i never took the link down during the war since i thought it was creative how you were demonstrating (although i would have appreciated a heads up). but i also expected you to one day open your site back up when the war ended a few months ago. i'm all for protest, but i feel hurt when it turns from a calculated protest against an unjust war into an open-ended protest against America.

    -Louie

    PS - only Phil has permission to talk war here (since it relates to his sieve client). all other war related comments belong here.

  30. #430
    Moderator Joe O's Avatar
    Join Date
    Jul 2002
    Location
    West Milford, NJ
    Posts
    643

    Paul,

    Setting alpha seems to be a hit-or-miss operation. There are a lot of local minima and maxima, so a Runge-Kutta approach does not seem to work. Are there any counters that you can easily and efficiently produce/display that would give us a clue? Maybe the ratio of little steps to big steps?

    Another thought: do you have the "computation costs" of big steps and little steps? That would be very helpful.
    Last edited by Joe O; 06-02-2003 at 09:15 AM.
    Joe O

  31. #431

    Re: Paul,

    Originally posted by Joe O
    Setting alpha seems to be a hit-or-miss operation. There are a lot of local minima and maxima, so a Runge-Kutta approach does not seem to work. Are there any counters that you can easily and efficiently produce/display that would give us a clue? Maybe the ratio of little steps to big steps?
    Well, looking at the code I can see that I get two values: arange, and brange.

    arange = sqrt(alpha * 34 million)

    arange is then rounded up to the nearest multiple of 15840.

    brange = 1 + 17 million / arange


    arange is used for the baby-steps loop; brange for the giant steps loop.

    The rounding to a multiple of 15840 (which is a lot higher than I expected, actually) means that many values of alpha result in the same values of arange and brange being used.
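Putting those formulas into a few lines shows the plateau (a sketch; the exact rounding and integer handling inside SoBSieve are assumptions here):

```python
import math

STEP = 15840          # rounding granularity quoted above
N_RANGE = 34_000_000  # twice the ~17 million range of n

def arange(alpha):
    a = math.sqrt(alpha * N_RANGE)
    # assumed: always round up to the next multiple of 15840
    return ((int(a) // STEP) + 1) * STEP

def brange(alpha):
    # ceiling division assumed for the giant-steps count
    return 1 + math.ceil(17_000_000 / arange(alpha))

# every alpha up to about 7.38 yields the same (arange, brange) pair
print(arange(0.5), arange(7.0), arange(8.0))
```

Since 15840^2 / 34,000,000 is about 7.38, all alpha values below that collapse onto arange = 15840, matching the observation later in the thread that alpha 0.0-7.0 behaves identically.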

    Regards,

    Paul.

  32. #432
    is that rounding behavior only in the newest versions? because i could swear alpha had a noticeable effect on performance when changing it by as little as 0.5 just a few versions back.

    if i'm doing my math right, the behavior you describe would make all alpha values from 0.0 to 7.0 the same.

    -Louie

  33. #433
    Originally posted by jjjjL
    is that rounding behavior only in the newest versions? because i could swear alpha had a noticeable effect on performance when changing it by as little as 0.5 just a few versions back.

    if i'm doing my math right, the behavior you describe would make all alpha values from 0.0 to 7.0 the same.

    -Louie
    In a word, yes, this is comparatively new behaviour. The rounding has always been present, but with the current implementation - where each k value appears multiple times in the sieve - the effect is much more pronounced. The number used to be less than 1000 before, and I am surprised that it is so large now.

    Regards,

    Paul.

  34. #434
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Originally posted by jjjjL
    if i'm doing my math right, the behavior you describe would make all alpha 0.0-7.0 the same.
    If I'm not wrong, it should always round to 15840 (unless alpha is set to 16.7 or more, which, AFAIK, is not possible).

    Paul, sorry if it's a silly question, but is it possible (or maybe "is it worth it" is the right question) to increase that 9.99 alpha limit (or leave it as it is and increase the 34 million)?

  35. #435
    Originally posted by Nuri

    Paul, sorry if it's a silly question, but is it possible (or maybe "is it worth it" is the right question) to increase that 9.99 alpha limit (or leave it as it is and increase the 34 million)?
    It isn't a silly question... it is an easy change to make. If anybody wants to experiment, a version without the 9.99 restriction on alpha is attached.

    The 34 million can't be changed - that is twice the range of n.

    EDIT: I tried it out and performance was worse. I also found a minor bug - you can't stop and restart it without it crashing.

    Regards,

    Paul.
    Last edited by paul.jobling; 06-03-2003 at 05:19 AM.

  36. #436
    Junior Member
    Join Date
    Jun 2003
    Location
    Hungary
    Posts
    2
    Speeding up the general k sieve algorithm.
    The big trick:

    If order(2,p) is divisible by D, where D is an integer, then for every remaining k value there is only one c(k) mod D such that if k*2^n+1 is divisible by p then
    n==c(k) mod D.

    Proof: suppose there exist n1<n2 such that k*2^n1+1 and k*2^n2+1 are both
    divisible by p and n1 is not congruent to n2 mod D. Then k*2^n1+1 == k*2^n2+1 mod p,
    so 2^(n2-n1) == 1 mod p (we supposed that p<>k, so gcd(p,k)=1), which means
    that n2-n1 is divisible by order(2,p). But order(2,p) is divisible by D and (n2-n1)
    is not divisible by D. This is a contradiction.
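The claim is easy to check numerically for a small case (a sketch with arbitrarily chosen p = 7, k = 3): every n with p | k*2^n+1 lies in a single residue class modulo order(2,p), and hence modulo any divisor D of the order.

```python
def mult_order(a, p):
    """Multiplicative order of a modulo prime p (naive loop)."""
    x, d = a % p, 1
    while x != 1:
        x = x * a % p
        d += 1
    return d

p, k = 7, 3
ord2 = mult_order(2, p)   # order(2, 7) = 3
hits = [n for n in range(1, 60) if (k * pow(2, n, p) + 1) % p == 0]
print(ord2, sorted({n % ord2 for n in hits}))  # all hits share one residue class
```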

    We will use the previous trick, with the baby-step giant-step method, for every prime
    power divisor of D, then use the Chinese Remainder Theorem to get the required c(k)
    value for every remaining k. (We'll use a faster method for the power calculation.)

    Fast power calculating method (similar to what mklasson proposed on the 10th
    page, but our method is faster); I'll call this the "binary tree" algorithm:
    Suppose that p-1 is divisible by q(1)^b(1)*q(2)^b(2)*...*q(s)^b(s), where
    q(1)<q(2)<...<q(s) are small prime numbers (the b(i) are positive integers).
    We know that p-1 is even, but we will use this fact only if p-1 is divisible by 4
    (so if p==3 mod 4 then we won't use 2 as a factor).
    Let T(i)=q(i)^b(i) for i=1,2,...,s.
    Suppose that we need the value x^((p-1)/T(j)) mod p (here 1<=j<=s):
    Calculate x^((p-1)/T(1)/T(2)/.../T(s)) mod p (if it hasn't been calculated before)
    and store it in a table.
    In general, suppose that we have calculated G = x^((p-1)/T(a)/T(a+1)/.../T(b))
    mod p (here a<=j<=b holds).

    case g1:
    If j<=round((a+b)/2), then calculating H = x^((p-1)/T(a)/.../T(round((a+b)/2)))
    mod p (if it hasn't been calculated before) is easy, because it is
    H = G^(T(1+round((a+b)/2))*T(2+round((a+b)/2))*...*T(b)) mod p; store
    this value in a table.
    case g2:
    If j>round((a+b)/2), then calculating
    H = x^((p-1)/T(1+round((a+b)/2))/T(2+round((a+b)/2))/.../T(b)) mod p
    (if it hasn't been calculated before) is easy:
    H = G^(T(a)*T(a+1)*...*T(round((a+b)/2))) mod p; store this in a table.

    If we iterate this, in the last step we get x^((p-1)/T(j)) mod p.
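One way to realise this "binary tree" scheme in code (a sketch; the names are mine, and the divisor products are recomputed rather than cached, for brevity):

```python
from math import prod

def binary_tree_powers(x, p, T):
    """Return x^((p-1)//T[j]) mod p for every j, sharing work by splitting
    the divisor list in halves, as in the description above.
    Assumes each T[j] divides p-1."""
    out = [0] * len(T)

    def rec(a, b, g):
        # invariant: g == x^((p-1) // prod(T[a..b])) mod p
        if a == b:
            out[a] = g
            return
        m = (a + b) // 2
        rec(a, m, pow(g, prod(T[m + 1:b + 1]), p))   # clear right-half divisors
        rec(m + 1, b, pow(g, prod(T[a:m + 1]), p))   # clear left-half divisors

    rec(0, len(T) - 1, pow(x, (p - 1) // prod(T), p))
    return out

# example: p = 61, p-1 = 60 = 4 * 3 * 5
print(binary_tree_powers(2, 61, [4, 3, 5]))
```

Each leaf of the recursion produces one of the s required powers, and intermediate results are shared between the two halves instead of restarting from x every time.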

    Table[deleted remainder classes]:
    At the beginning of the algorithm we can build a (small) table. For every
    remaining k value (there are now 12), and for every small prime power,
    determine which remainder classes have already been deleted. So Table_k
    contains the pair {q^b, e} if k*2^n+1 is composite for every n==e mod q^b.
    (I think that prime powers up to 300 are enough.)

    Calculating e(k) values (see pseudo code step 3 for which values have
    already been calculated):

    Suppose that (p-1) is divisible by q^b (where q is a prime number and b is a
    positive integer, but if q=2 then b>1).
    Suppose that 2^((p-1)/q^0),...,2^((p-1)/q^w) are all congruent to 1 mod p
    (the first always is, because 2^(p-1)==1 mod p by Fermat's little theorem),
    and the remaining powers (if they exist, i.e. if w<>b), 2^((p-1)/q^(w+1)),...,
    2^((p-1)/q^b), are incongruent to 1 mod p. This means that order(2,p) is
    divisible by q^(b-w).
    If w>0, then for every remaining k value do:
    suppose that k*2^n+1 is divisible by p, so
    k*2^n == -1 mod p
    (-k)*2^n == 1 mod p; raising this to the ((p-1)/q^w)-th power we get:
    (-k)^((p-1)/q^w)*2^(n*(p-1)/q^w) == (-k)^((p-1)/q^w) == 1 mod p.
    In step 3 of the pseudo code we have calculated H_k_0 = (-k)^((p-1)/q^b)
    mod p, so we can easily get H_k_1 = (-k)^((p-1)/q^(b-1)) mod p = H_k_0^q
    mod p, and so on up to
    H_k_(b-w) = (-k)^((p-1)/q^w) mod p = H_k_(b-w-1)^q mod p.
    So if H_k_(b-w) is not 1, then we can discard that k value.
    If w<b, then order(2,p) is divisible by q (indeed, by q^(b-w)), and then:
    calculate H_k_1 = (-k)^((p-1)/q^(b-1)) mod p = H_k_0^q mod p, if it hasn't
    been calculated before, and store it; and so on up to
    H_k_(b-w-1) = (-k)^((p-1)/q^(1+w)) mod p == H_k_(b-w-2)^q mod p, if it
    hasn't been calculated before, and store it. For every remaining k value
    we'll determine, in this case, that if k*2^n+1 is divisible by p then
    n==e(k) mod q^(b-w). Let z=b-w>0. Let
    D=D*q^z.
    We use induction to determine these e(k) values.
    Suppose that in STEP[j] (for some 0<=j<z) we know:
    n == f(j-1,k)*q^(j-1)+...+f(0,k)*q^0 mod q^j,
    i.e. we know the values f(j-1,k),...,f(0,k) for every remaining k value (with 0<=f(i,k)<q).
    We can therefore write:
    n = u*q^(j+1)+f(j,k)*q^j+f(j-1,k)*q^(j-1)+...+f(0,k)*q^0 (u is an integer),
    and we want to know the value f(j,k) (0<=f(j,k)<q) for every remaining k.
    So if k*2^n+1 is divisible by p, then we can write
    (-k)*2^(u*q^(j+1)+f(j,k)*q^j+f(j-1,k)*q^(j-1)+...+f(0,k)*q^0) == 1 mod p;
    raising this to the ((p-1)/q^(j+1+w))-th power (remark: j+1+w <= z+w = b, so (p-1) is
    divisible by q^(j+1+w)) we get the following equation (1):
    (-k)^((p-1)/q^(j+1+w))*2^((p-1)/q^(j+1+w)*(f(j-1,k)*q^(j-1)+...+f(0,k)*q^0))*
    *2^(f(j,k)*(p-1)/q^(1+w)) == 1 mod p (we have used that
    2^((p-1)/q^w) == 1 mod p). We know
    A(k) = (-k)^((p-1)/q^(j+1+w)) mod p = H_k_(b-j-1-w). We have already
    calculated 2^((p-1)/q^(j+1+w)) mod p in step 3 of the pseudo code. It is
    easy to determine 2^((p-1)/q^(j+1+w)*(f(j-1,k)*q^(j-1)+...+f(0,k)*q^0))
    mod p, because from the induction we know
    f(j-1,k)*q^(j-1)+...+f(0,k)*q^0, and from the pseudo code
    G = 2^((p-1)/q^(j+1+w)) mod p; raise it to the
    (f(j-1,k)*q^(j-1)+...+f(0,k)*q^0)-th power and we get
    B(k) = G^(f(j-1,k)*q^(j-1)+...+f(0,k)*q^0) mod p.
    Let L(k) == A(k)*B(k) mod p. We also know from the pseudo code:
    M = 2^((p-1)/q^(1+w)) mod p.
    So in equation (1) we can write:
    L(k)*2^(f(j,k)*(p-1)/q^(1+w)) == 1 mod p, so
    for every remaining k value, equation (2):
    L(k)*M^f(j,k) == 1 mod p (here 0<=f(j,k)<q).
    We know L(k) for every k value, and we are searching for the f(j,k) values
    (they depend on k). So the previous equation is a simultaneous-in-k
    problem, which can be solved by the general baby-step giant-step method
    (don't forget that 0<=f(j,k)<q). To understand this, it is
    important to remark that L(k) depends on k and STEP[j], but
    M = 2^((p-1)/q^(1+w)) mod p is constant in every STEP[j]. So the
    multiplicative inverse of M mod p only needs to be calculated once for the
    baby-step giant-step method. This can be calculated easily:
    2^((p-1)/q^(1+w)*(q^(1+w)-1)) mod p = M^(q^(1+w)-1) mod p
    (because their product is 2^(p-1) == 1 mod p).
    So the RHS in the baby-step giant-step algorithm only needs to be
    calculated once, but the LHS has to be calculated for every STEP[j].
    In STEP[j] we'll get, for every remaining k value: if k*2^n+1 is divisible
    by p then n == f(j,k)*q^j+...+f(0,k)*q^0 mod q^(j+1). We can discard that k
    value if this remainder class has already been deleted, i.e. if it is in
    Table[deleted remainder classes].
    It is also important to remark that there is only one solution for f(j,k):
    if there were two solutions 0<=f1<f2<q, then M^(f2-f1) == 1 mod p, so
    2^((p-1)/q^(1+w)*(f2-f1)) == 1 mod p, so order(2,p) divides
    (p-1)/q^(1+w)*(f2-f1). It means that q^(1+w)*order(2,p) divides (p-1)*(f2-f1);
    but order(2,p) is divisible by q^(b-w), so q^(b+1) divides (p-1)*(f2-f1); but
    0<f2-f1<q and (p-1) isn't divisible by q^(b+1). This is a contradiction
    (similar to the big trick).
    In STEP[z-1] we'll know, for every k, the remainder of n mod q^z, which is
    what we needed (z is b-w).
    (It is possible that 0<w<b.)

    The (whole) algorithm in pseudo code (do step 0 only once, not after every
    prime p):

    0. Create Table[deleted remainder classes].

    For each prime p do:

    1. For every remaining k value make a fast Kronecker test.

    2. Factorise p-1 a little: (p-1) is divisible by q(1)^b(1)*q(2)^b(2)*...*q(s)^b(s),
    where q(1)<q(2)<...<q(s) are small prime numbers and b(1),...,b(s)>0 are integers
    (if q(1)=2 then b(1)>1). Let D=1. If there are no small prime divisors then
    go to step 4.

    3. for m=1 to s do: calculate U(m,0) = 2^((p-1)/q(m)^b(m)) mod p (store this value)
    by the "binary tree" method.
    For every remaining k value, also calculate by the "binary tree" method
    H_k_0 = (-k)^((p-1)/q(m)^b(m)) mod p, and store it.
    for h=1 to b(m)-1 do: calculating 2^((p-1)/q(m)^(b(m)-h)) mod p is easy,
    because it is U(m,h) = U(m,h-1)^q(m) mod p (store this value)
    next h (if b(m) is 1 then this loop is empty)
    use the "Calculating e(k) values" method for q=q(m) and b=b(m)
    next m

    4. By the Chinese Remainder Theorem we can get, for every remaining k value, that if
    k*2^n+1 is divisible by p then n==c(k) mod D.
    In general, for every remaining k value there are only a few possibilities for n mod
    48. So let new_D = least common multiple of D and 48 (this is D*48/gcd(D,48);
    let D=new_D).
    So for every remaining k value there are only a few possibilities for n mod D
    (these can also be calculated by the Chinese Remainder Theorem).

    5. Do the general baby-step giant-step method.

    I think that we can use, for example, 997 as a factor of (p-1) (if 997 divides it) in
    pseudo code step 3, because computing the required baby-step giant-step
    method doesn't take much time, and we'll know the remainders of n mod 997 for
    every remaining k value.

    A possible speed-up for step 4: we may reuse calculated values if we are using
    higher powers (higher D values), as in the original algorithm.
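The quantity w used throughout the "Calculating e(k) values" section can be found with at most b modular exponentiations; a small sketch (the example values are mine):

```python
def order_exponent_gap(p, q, b):
    """Given q^b | p-1, return w = the largest i with 2^((p-1)//q^i) == 1 mod p,
    so that order(2,p) is divisible by q^(b-w)."""
    w = 0
    while w < b and pow(2, (p - 1) // q ** (w + 1), p) == 1:
        w += 1
    return w

# p = 17: p-1 = 16 = 2^4 and order(2,17) = 8 = 2^3, so w = 1
print(order_exponent_gap(17, 2, 4))
```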

    Robert Gerbicz

  37. #437
    hi Robert. welcome to the discussion.

    i'm still digesting what you've suggested. i'm sure paul will have something to say about it when he reads it. anyone else think he's onto something?

    mklasson - have you done this with your siever? is the math good and does it yield an improvement?

    -Louie

  38. #438
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Originally posted by jjjjL
    mklasson - have you done this with your siever? is the math good and does it yield an improvement?
    I haven't really taken the time to try to get my head around everything Robert wrote, much less implemented it. It was quite a load. I'll try to get around to it soon.

    Anyhow, welcome Robert,
    Mikael

  39. #439
    i optimized the factor database today. factor submission should be noticeably faster now. let me know.

    -Louie

  40. #440
    Yup Yup Yup much better,

