
Thread: Sieving

  1. #1

    Sieving

    Louie,

    How far are the current numbers sieved, and how far do you plan to go?

    I have one or two machines available for sieving (they cannot connect to the SoB server), so if you need any help, just let me know.

  2. #2

    Re: Sieving

    Hi
    Here is all the information which I have about sieves:

    k=33661 is sieved by Louie up to 1000G.

    Almost all of the k values are sieved up to approx. 80G by Phil Carmody and then up to approx. 200G by Joseph Arthur.

    I have attached the final result of the k=4847 sieve. It is sieved up to 96G by Phil and then up to 208G by Joseph.

    You can continue it using Paul Jobling's NewPGen and then report the results to Louie so he can use them.

    Payam
    Attached Files

  3. #3
    I don't mind taking it a bit further, but only after I know that I'm not duplicating any work.

    Also, I have no idea how far it needs to be tested and how long it takes, but I guess we have a couple of weeks before we start testing this k above 2M.

    <edit> I'll let it run overnight to see how fast things go. But I would like to know if anyone else has done more work on this k.
    Last edited by smh; 12-03-2002 at 01:55 PM.

  4. #4
    If you do continue sieving, make sure to save the composites to a delete file; that will make it easier for me to use my current removal scripts. Thanks.
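
    (A hypothetical sketch of the kind of removal step Louie describes — the file layout here is an assumption, not his actual scripts: assume the delete file lists one removed candidate per line with its n value in the last column, and the master file lists one pending test per line ending in n.)

    [code]
    # Hypothetical removal script: drop every n listed in a delete file
    # from a master candidate list. The "n is the last whitespace-separated
    # column" layout is an assumption, not the real SoB/NewPGen format.
    def remove_deleted(master_path, del_path, out_path):
        with open(del_path) as f:
            deleted = {line.split()[-1] for line in f if line.strip()}
        with open(master_path) as f, open(out_path, "w") as out:
            for line in f:
                if not line.strip() or line.split()[-1] not in deleted:
                    out.write(line)

    # hypothetical file names, for illustration only:
    remove_deleted("sob_4847.txt", "sob_4847.del", "sob_4847.new")
    [/code]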

    -Louie
    Last edited by jjjjL; 12-04-2002 at 12:24 AM.

  5. #5
    Originally posted by smh
    I don't mind taking it a bit further, but only after I know that I'm not duplicating any work.
    Rest assured, this is the first time the sieve results for this k have been published.

    Payam

  6. #6
    I'm only sieving N's above 2M.

    I think my PIII450 does 5 to 6 billion a day. I might have some CPU time on other machines during the night to speed things up a little. And if we are getting close to assigning this k, I can do some sieving on a P4.

    I have no idea how much sieving needs to be done, but if someone is willing to help, please contact me so we can avoid duplicating work.

  7. #7
    How large an n have we sieved to? I thought Phil "only" did up to n = 3 million, which could start causing us problems relatively soon, especially if we keep getting new people. Do we need to start a large-scale sieving effort on n > 3 million?

  8. #8
    I think it will take quite a while before we run out of exponents below N=3M. Don't forget that the larger N's take much longer to test.

    OTOH, it doesn't hurt to have at least some candidates available above 3M, since we have to sieve the large exponents pretty deep. Sieving 3 to 4M up to 100G will remove enough numbers to at least have some exponents available when we need them.

    Maybe someone can build a simple website to coordinate this?

    I'm currently sieving k=4847 (2-3M) in a few ranges between 200 and 300G. Although my sample is way too small to draw conclusions about how far to sieve, currently I'm removing candidates between 10 and 15 times faster than I can test them with the SoB client.

  9. #9
    Just to give a little status update.

    I just merged the sieving files from different computers.

    Above N = 2,000,000 there were 4220 N's left after sieving to 208G (including the ones that already had a prp test).

    I sieved from 208G to 280G and removed 37 N's.

    This was mainly done by a PIII 450, with a little help from a PII 400 (last weekend) and a few hours on my home P4.

    If I make a very rough estimate, the runtime was about 12 P3-days, so I removed a bit over 3 numbers a day, which is IIRC more than 10 times faster compared to prp testing numbers of this size.

    (I had one day which removed only one number, and twice two N's were removed within a few minutes.)

    I'll let it run for a while. The P3 makes slow but steady progress of about 6G a day. Help is always welcome, of course.

  10. #10
    Can I compile that sieving program to run on Solaris? Or does it have x86 instruction codes...

  11. #11
    I guess that smh is using this program:
    http://www.utm.edu/research/primes/programs/NewPGen/
    But it should not be a big task to write a general one (ANSI C).
    I am not able to do that, but the problem is a lot easier than writing the
    seventeenorbust client.

    Yours,

    Nuutti

  12. #12

    Sieves

    I've just started to think about this, but it seems to me that it might be more efficient to sieve all 14 k values at the same time. Sieving on n is equivalent to determining, for each prime p, whether k*2^n+1 == 0 mod p, or what is the same thing, 2^n == -1/k mod p. So checking divisibility by p can be done by computing 2^n mod p and seeing if this is equal to any of the 14 values of -1/k mod p. I really don't know a lot about sieving, but my guess is that Paul Jobling and/or Jim Fougeron could tell us if this is feasible.

    What got me thinking about it is that smh reported eliminating exponents 10 times faster than he could prp them, and if I understood smh correctly, that was on just one k value. That says to me that more resources on sieving are needed, and this will become especially important as the project tests larger and larger n values.

    Sieving is what gives this project an advantage over GIMPS when it comes to looking for large primes. No two Mersenne numbers with prime exponents can have a common factor, so trial factoring is a more effective method of removing composite Mersenne candidates. However, GIMPS still ends up running primality tests on 40% of their original candidates; my guess is that SoB ends up actually testing a much smaller percentage. (Does anyone have any statistics on this?)
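
    (To make that concrete, here is a minimal Python sketch of the single-pass check — a simple linear scan standing in for NewPGen's baby-steps/giant-steps discrete log, using just the k values mentioned in this thread:)

    [code]
    # Sketch: test many k against one sieve prime p in a single pass.
    # k*2^n + 1 == 0 (mod p)  <=>  2^n == -1/k (mod p), so one running value
    # of 2^n mod p is compared against a target residue per k.
    # Real sievers solve this discrete log with baby-steps/giant-steps instead.

    def sieve_one_prime(p, ks, n_min, n_max):
        """Return (k, n) pairs with p | k*2^n + 1 for n in [n_min, n_max]."""
        # Target for each k: -1/k mod p (k^(p-2) is the inverse since p is
        # prime; distinct k's are assumed to give distinct targets here).
        targets = {(-pow(k, p - 2, p)) % p: k for k in ks if k % p}
        hits, r = [], pow(2, n_min, p)
        for n in range(n_min, n_max + 1):
            if r in targets:
                hits.append((targets[r], n))
            r = r * 2 % p  # step 2^n -> 2^(n+1)
        return hits

    # Usually empty for a short window -- most primes divide nothing in it:
    print(sieve_one_prime(1000003, [4847, 5359, 28433, 33661], 2000000, 2000100))
    [/code]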

    Phil

  13. #13
    You can mail one of the following mailing lists and ask about that:

    http://groups.yahoo.com/group/openpfgw/
    http://groups.yahoo.com/group/primeform/
    http://groups.yahoo.com/group/primenumbers/

    Or search these mailing list archives using the keywords sieving/sieve.

    Phil Carmody has sieving pages here:
    http://fatphil.org/maths/
    http://fatphil.org/maths/sierpinski/
    and even source code for a factorial sieve:
    http://fatphil.org/maths/factorial/factsv.c

    But I suggest you contact Phil via email if you are going to use his data/code.

    Yours,

    Nuutti
    Last edited by nuutti; 12-10-2002 at 05:26 PM.

  14. #14
    Yes, I'm using NewPGen. As far as I know it's the only program that can handle the fixed-k sieve. If searching for a large Proth prime, it's more efficient to sieve on fixed-n numbers; in that case TwinGen is also available, which has SSE2 instructions, so it should be much faster on the P4. As far as I know, the next version of NewPGen will have SSE2 instructions too.

    How many numbers are left after sieving greatly depends on the weight of the number. All the remaining candidates have a rather low weight (a lot of N's are removed by small primes).

    For k=4847, which I'm working on (on a machine which can't connect to the SoB server), there are 500,000 odd values between 2M and 3M. Sieving to 208G removed 495,780 of them, leaving 4,220 to test (this is where I continued from Payam's file).

    When I merged the files (because it was getting messy) the sieving was at 280G, which is about 35% higher, and this eliminated 37 more numbers (about 0.9%). On the primenumbers list I read that sieving twice as far eliminates about 3% more numbers, which is roughly in line with my small sample.

    So after sieving to 1T there will be almost 4,000 numbers left to test, which is about 0.8% of all numbers.

    The higher n gets, the longer a prp test takes, so it makes sense to sieve further; the percentage should only go down as the numbers get larger.
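
    (A back-of-the-envelope check of those percentages, assuming the standard heuristic that the number of survivors falls off like 1/ln(p):)

    [code]
    import math

    # Heuristic: candidates surviving a sieve to depth p scale like 1/ln(p),
    # so pushing from p1 to p2 removes a fraction 1 - ln(p1)/ln(p2) of them.
    def extra_removed(p1, p2):
        return 1 - math.log(p1) / math.log(p2)

    print(extra_removed(208e9, 280e9))  # ~1.1%, near the observed 0.9%
    print(extra_removed(208e9, 416e9))  # doubling the depth: ~2.6%, near the quoted 3%
    [/code]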

  15. #15
    philmoore
    Thanks, smh, for the interesting data. I was curious how it compares with GIMPS's rate of eliminating primality test candidates. In that same range from n=2,000,000 to 3,000,000, there are 67,883 prime values of n (composite values automatically make 2^n-1 composite), and of that number, I'm guessing from the GIMPS status page that trial factoring eliminated about 57% and the remaining 43% had to be tested, about 29,000 tests. So 93.2% of the exponents are eliminated for free, because they are composite, and of the remaining 6.8%, only about 2.9% of the original number are left after trial factoring.

    For your case of k=4847, the sieving done so far gets you down to about 4200 numbers left to test, or 0.42% of the original number. (You mentioned getting it down to 0.8%, but this really should be considered 0.4%, since the reason you only sieve on odd values of n is that 4847*2^n+1 is divisible by 3 when n is even. For k=5359, we sieve instead on even values of n, since 5359*2^n+1 is divisible by 3 when n is odd. By only working with even/odd values, we can reduce the amount of storage needed for the sieve by half.)

    So that is good, 0.4% is a lot less than 2.9%, except that we are trying to simultaneously search all 14 k-values! When I take into account the Proth values for all 14 k-values, I estimate closer to 52,000 numbers to test for primality in the range n=2,000,000 to 3,000,000, which translates into more tests than GIMPS, not less. Of course, further sieving will continue to reduce this number, but eliminating a couple more k-values would reduce it even more!
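
    (The percentages in the two paragraphs above can be sanity-checked with the figures quoted in the post:)

    [code]
    # Quick arithmetic check of the GIMPS comparison (figures from the post):
    n_range = 1_000_000                # integers n in (2M, 3M)
    prime_n = 67_883                   # prime exponents in that range
    print(prime_n / n_range)           # ~6.8% pass the free compositeness filter
    print(0.43 * prime_n / n_range)    # ~2.9% left for GIMPS after trial factoring
    print(4220 / n_range)              # smh's k=4847 survivors at 208G: ~0.42%
    [/code]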

  16. #16
    Yes, you are right about the even/odd values, I got mixed up with a fixed-n sieve.

    Looks like you've got what you asked for. Another prime!

    Another way to speed up the sieving is searching over a larger range of N's. The cost of sieving scales as the square root of the size of the n range, so it's faster to do one search for 2-3M than 10 searches of 100K each, as the quick calculation below shows.
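
    (The square-root heuristic makes the saving easy to quantify:)

    [code]
    import math

    # Per-prime sieving work scales like sqrt(width of the n range):
    one_pass = math.sqrt(1_000_000)      # one run over n = 2M..3M
    ten_runs = 10 * math.sqrt(100_000)   # ten separate 100K-wide runs
    print(ten_runs / one_pass)           # ~3.16x more work when split up
    [/code]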

    I'm still wondering how far the other numbers are sieved. According to Phil's page, a lot more sieving still needs to be done.

    Sander

  17. #17
    I'd be happy to help with the sieving; I've got a P4 2.2 that I could throw on it. I have NewPGen, just tell me what you want me to do.
    Last edited by Bellbox; 12-12-2002 at 02:11 AM.

  18. #18
    philmoore

    Sieving help!

    I just received an email from Paul Jobling that he asked me to forward to this forum: (My questions to him are marked by >.)

    > My question is, would there be any advantage to
    > sieving all 13 of these candidates simultaneously?

    I have just had a look at the code, and there would be a saving over performing the sieving for each k separately on one machine. There would even be an advantage over sieving on 13 machines, since if you had 13 machines you could give each of them a range of p to test, and then combine the results at the end (though that does mean that you are committed to leaving each of them to run until they complete their range).

    I can definitely see an advantage in sieving all 13 candidates together - indeed, I can see that modifying NewPGen specifically for this task would not be too hard, assuming, of course, that it would be used.

    > My understanding of NewPGen (from Phil Carmody's web-page) is
    > that for large values of p, the program does a "giant steps,
    > baby steps" solution of the discrete logarithm problem.

    That is correct.

    > How do you know whether the equation 2^n==-1/k mod p even has
    > a solution in n? (Phil Carmody apparently refers to an
    > observation of Payam Samidoost that allows one to identify
    > such non-soluble congruences.)

    That is right. Payam observed that for odd n, the Legendre symbol (-2k/p) must be 1 for p to divide any k*2^n+1. For even n, p must satisfy (-k/p) = 1.

    > Perhaps the project is in need of some sort of distributed sieving client that lets
    > different sievers work on different p ranges simultaneously.

    I would have no problem with modifying my code to perform this sort of sieving. The SoB guys are getting some good results, and anything that can be done to help them would be useful.

    > Thanks for any help you can give!

    No problem.

    Regards,

    Paul.

    ------

    And then a second email:

    Phil,

    By the way, I just went along to the SoB forum and saw the thread on sieving. I wanted to post a reply, but I cannot yet - do you think that you could do that for me? It looks like (a) the sieving needs to be coordinated, and (b) a dedicated SoBSieve program would improve performance. I could probably get it written in about 6 hours.

    Regards,

    Paul.
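
    (Payam's solubility criterion quoted in the email above is easy to experiment with. A minimal Python sketch — p is assumed to be an odd prime not dividing 2k, and the Jacobi routine is the textbook one:)

    [code]
    # Payam's solubility pre-test: p can divide k*2^n + 1 for odd n only if
    # the Legendre symbol (-2k/p) = 1, and for even n only if (-k/p) = 1.

    def jacobi(a, n):
        """Jacobi symbol (a/n), n odd and positive; Legendre symbol for prime n."""
        a %= n
        result = 1
        while a:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def p_can_divide(p, k, odd_n):
        """Can p divide k*2^n + 1 for some n of this parity?"""
        return jacobi((-2 * k) % p, p) == 1 if odd_n else jacobi((-k) % p, p) == 1

    print(p_can_divide(1000003, 4847, odd_n=True))
    [/code]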

  19. #19
    What a nice, pleasant and helpful chap!

  20. #20
    philmoore/Paul - a 13k sieve would definitely be cool. If Paul could do that, I would take advantage of it for sure.

    We used to use a custom sieve Dave and I wrote, but it wasn't as efficient as the numbers got higher, so I started using NewPGen. Just to give you an idea, this is the way sieving is done now:

    The sieving is carried out on an (MPI) cluster of Linux computers run by the university; specifically, they are dual P3s. I run 2 k values on each one.

    I used the split function on k=33661 to sieve it across 20 systems for like a week and was able to quickly sieve it up to 1T. However, it's a lot of work to do all the splitting and script writing to submit jobs to the cluster that way, so I'm only running one process on each k now.

    I don't know how feasible it is, but if it were possible, it would be nice to have NewPGen spawn multiple threads. The current split system, while good, is probably not as optimal as it could be, especially since re-merging and splitting the files out occasionally to remove n values from each file is a lot of work... nothing I can automate. In other words, I don't do it.

    If I could sieve all k values at once and have NewPGen automatically spawn 20-40 threads at a time, that would be great. The cluster system uses MPI/PVM, but as long as the threads can be spawned with a little delay between each other, I think regular threading will load balance itself correctly across the cluster. If Paul were interested in attempting this, I would be happy to work with him to test it out or help with the programming.
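
    (The splitting Louie describes amounts to handing each node an independent slice of the p range; a trivial sketch:)

    [code]
    # Divide a p interval among cluster nodes; each slice sieves independently
    # and the delete files are merged afterwards.
    def split_range(p_start, p_end, workers):
        step = (p_end - p_start) // workers
        bounds = [p_start + i * step for i in range(workers)] + [p_end]
        return list(zip(bounds[:-1], bounds[1:]))

    # e.g. k=33661 to 1T across 20 systems:
    for lo, hi in split_range(0, 10**12, 20):
        print(f"sieve p in [{lo}, {hi})")
    [/code]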

    -Louie

  21. #21
    Here are a few thoughts that came to my mind about this issue:

    Quite a lot depends on Louie and Dave. Louie mentioned once that he has some kind of cluster doing the sieving and removing candidates.
    I would like to hear more about the current sieving situation.

    The best solution would be a PC sieving program plus generic 64-bit sieving software that could be compiled on several 64-bit platforms. I guess that Paul's software is based on highly optimized PC assembler. On the other hand, 64-bit sieving should be faster than 32-bit sieving after 4.3G=2^32 (I guess).

    The n limit in sieving should be quite high, because we will reach n=3,000,000 quite soon. We will be "ten or bust" quite soon also.

    If we are going to use Paul's software, then he should get credit when we find a prime.


    Yours,

    Nuutti

  22. #22
    Yes, 64-bits would be cool. I have access to a small lab of Solaris machines that could run it at night.

  23. #23
    Originally posted by Bellbox
    I'd be happy to help with the sieving, I've got a P4 2.2 that I could throw on it. I have newpgen, just tell me what you want me to do.
    You can take Payam's file (attached in the first post) as the input file and start sieving from 350 or 400*10^9 (just change the first line of the input file).

    You can remove all numbers between 1M and 2M from the file, since it's already known that they are all composite. My save file (which has another 40+ numbers removed) is in the office, so I'm not able to retrieve it until early next week. But the 40+ numbers shouldn't really affect the speed of the sieving.

    I'm currently at 293G, and during the weekend I can add another one or two machines, but progress is too slow on 400 MHz machines. I definitely can't make it far enough before we start testing 2M numbers.

    Please post here which starting point you'll take so we won't duplicate anything. Later on we can merge the files.

    Sander

  24. #24
    philmoore

    Sieving limits

    SoB is already using Paul's software, but Paul is just offering to tailor a special version to the project's needs.

    Perhaps Paul can tell us if using 64-bit processors will be a significant advantage to the sieving effort. One advantage of using PCs is that there are plenty of them around, and a comparison of the amount of work to be done with the amount of computing resources available might mean that 32-bit machines make more sense in the end, even though they might not be as optimized to the task.

    Sander (smh) says that he was removing candidates 10 to 15 times faster than they could be tested for probable primality. That suggests that the optimal sieving limit should be 10 to 15 times higher than it currently is, or in the range of 2T to 4T (2e12 to 4e12). (By optimal sieving limit, I mean the upper limit of p values being tested for being factors of any k*2^n+1.) Now, he was sieving the range of n from 2M to 3M. If we take into account that the time to do a prp test goes approximately as the square of n (please correct me, somebody, if I'm making a mistake), then in the range of n from 4M to 6M, we probably want a sieving limit for p of 8T to 16T, four times higher.

    I think it is important at this point to decide what n range needs to be sieved next. Current sieving goes up to about n=3M. We don't know how fast this project will grow, but it could be pushing this limit within a few months. The next stage of sieving should go from 3M (or even 2M) up to some upper limit which will keep the project busy for a couple of years. The cost of sieving is reported to be proportional to the square root of the size of the range in n. (This would follow from the baby-steps, giant-steps part of the sieving algorithm.) What this means is that doubling the size of the n range only increases the effort for a fixed p range by 41% (sqrt(2)=1.41). So it probably makes sense to increase it by as much as possible. n=10M would certainly keep the project busy awhile if it doesn't grow too fast. n>33M could actually put SoB into competition with GIMPS to discover the first 10-million-digit prime.

    However, and this is my main point, for any reasonable choice of new n range, the optimal amount of sieving work is large! For n=10M, I estimate over 20 of Sander's P3 years of work (and this is working on just one k-value), and this jumps to over 100 years for n=20M, since this range will eventually be sieved to higher p. If Louie thinks he can do this quantity of work on the cluster, great, but whether it is distributed on the cluster or distributed as part of the SoB general effort, he brings up a good point about the difficulty of combining work done on different p ranges on different machines. In the early stages of the sieve, many n-values are removed, but in later stages, according to Sander, comparatively few n-values are removed. Wouldn't it be more efficient in these later stages to just use lists of deleted n-values to update a master list? I thought that this sort of thing was what Louie was doing now until I read his last post, but surely there has to be a way of automating the rather tedious process he described.
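
    (A rough version of that scaling argument in numbers — the 12.5x removal/test ratio and the ~300G current depth come from smh's posts; the rest is heuristic:)

    [code]
    # Optimal depth ~ (current depth) x (removal rate / prp test rate);
    # prp time grows roughly as n^2, so doubling n quadruples the target depth.
    removal_vs_test = 12.5      # smh: sieve removes 10-15x faster than prp testing
    depth_now = 300e9
    opt_2_3M = depth_now * removal_vs_test
    print(opt_2_3M / 1e12)      # ~3.75T, i.e. the 2T-4T range suggested above
    print(opt_2_3M * 4 / 1e12)  # n twice as large: ~8-16T
    [/code]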

  25. #25
    Isn't this where SoB becomes more like GIMPS, where the next client will support sieving (as opposed to trial factoring) as an optional work unit type? It sounds like the right way forward - especially if there are other problems to attack afterwards...

  26. #26
    Originally posted by smh
    You can take Payam's file (attached in the first post) as the input file and start sieving from 350 or 400*10^9 (just change the first line of the input file).

    You can remove all numbers between 1M and 2M from the file, since it's already known that they are all composite. My save file (which has another 40+ numbers removed) is in the office, so I'm not able to retrieve it until early next week. But the 40+ numbers shouldn't really affect the speed of the sieving.

    I'm currently at 293G, and during the weekend I can add another one or two machines, but progress is too slow on 400 MHz machines. I definitely can't make it far enough before we start testing 2M numbers.

    Please post here which starting point you'll take so we won't duplicate anything. Later on we can merge the files.

    Sander
    I started sieving at 400*10^9.

  27. #27

    about my post

    Louie and I sent our messages at the same time, and his even came a few minutes before mine, so he actually answered my questions before I posted them to the forum.

    Yours,

    Nuutti

  28. #28
    Originally posted by Bellbox
    I started sieving at 400*10^9.
    Great, your P4 will go much faster than my P3. I'll do the sieving up to 400G and see later what I will do afterwards.

    Isn't this where SoB becomes more like GIMPS, where the next client will support sieving (as opposed to trial factoring) as an optional work unit type?
    Trial factoring is very inefficient on Proth numbers. Sieving is most efficient on a larger range of N's; like Phil said before, sieving a twice-as-large range of N's to the same depth takes only 41% more time. To be efficient, a client should receive a rather large input file (although this can be compressed a lot), sieve for 1 or a couple of G, and return the deleted N's. I'm not sure how much bandwidth this would take. The server should delete the composite N's from the database and generate a new input file.

    Not impossible, but it's probably easier to see this as a separate project, at least for the time being. The client and the server would need new code, and I don't even know if the NewPGen code is publicly available.

    I haven't calculated exactly, but Phil's numbers should be in the ballpark. A P3@450 needs 5 to 6 months to sieve a range of 1M n's to 1T (see the quick check below). A larger range would be more efficient, of course. I guess an average P4 is 6 times faster, and with the use of SSE2 maybe 10 times. But with 10 k's available at 3M there is still a lot of sieving that needs to be done.
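
    (The 5-to-6-month figure checks out against the ~6G/day rate reported earlier in the thread:)

    [code]
    # A P3/450 sieves roughly 6G of p per day (smh's figure), so reaching 1T:
    days = 1e12 / 6e9
    print(days, days / 30.4)  # ~167 days, ~5.5 months
    [/code]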

    I don't know what Louie's plans are, and we haven't reached 3M yet (that will take a while, I guess), but we should definitely think about it. Maybe someone can set up a little web site to coordinate things (download/reserve etc). I might be able to do this if no one else wants to, but it also depends on how Louie feels about this.

  29. #29
    ceselb
    I'd like to help if I can. What would be best, starting 4847 at 500*10^9 or doing another k?

    I have a P4 1.5GHz if it makes any difference.

  30. #30
    I would also like to help.

    But to be honest I am not sure how the first line in the *.fil file should be changed.

    And is the *.del file what I have to set as the output file?

    And one last point: how do I tell the program to stop before reaching the range of the next siever?

    I can offer an AMD TB 800 full time or an AMD XP 2000+ part time.

    Lars

  31. #31
    I'm sieving up to 400G, and Bellbox is starting at 400G.

    I don't know how far he plans to go. He has a P4, so I guess 100G would be a couple of days.

    In the first line of the *.fil file you only need to change the first number. This is the starting point, and it should be about 208xxxxxxxxx in the zip file. If you start at 500G you put 500000000000 in there.

    Now you can use the *.fil file as the input file, and choose another filename as output, something like *.out.

    It's important that you go to Options and then set 'Log the numbers removed'.

    This will put the removed numbers in the *.del file.

    Finally, you set Options => Sieve until => 'sieve up to p =' however far you want to sieve.

    If you stop the search and want to continue later, you can rename the output file to *.in to use it as input, and use *.out for a new output file. Remember to check 'Log the numbers removed' again. (A sketch of the header edit is below.)
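
    (For the header edit mentioned above, something like this would do — assuming, and this is only an assumption, that the first number on the first line is the current p and the fields are colon-separated:)

    [code]
    # Hypothetical helper: set the starting p in the first line of a NewPGen
    # file. The colon-separated header layout is an assumption -- check your
    # own file before trusting this.
    def set_start(path, new_p):
        with open(path) as f:
            lines = f.readlines()
        rest = lines[0].split(":", 1)
        lines[0] = f"{new_p}:{rest[1]}" if len(rest) == 2 else f"{new_p}\n"
        with open(path, "w") as f:
            f.writelines(lines)

    set_start("sob_4847.fil", 500_000_000_000)
    [/code]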

    This forum is not really an easy way to coordinate, but it's the best we've got so far. I don't know about the other k's; I guess Louie is still sieving them to further depths.

  32. #32
    It's cool to see people interested in getting involved here.

    However, I don't want people to get ahead of themselves and duplicate my sieving work. Sieving on all the k values (except 4847) is already being done by me. I have an account on my university's supercomputing clusters. I can queue all kinds of work... it took me a week to sieve 33661 up to 1T. Right now I'm in contact with Paul Jobling and we should be making a threaded, multiple-k siever specifically tailored to our project and my cluster's needs. There is no need for a community of dedicated sievers to pop up yet.

    I understand that some of you have computers that can't run SB for one reason or another and you just want to help. I'm not going to stop you if you want to do random factoring. Anyone who sends me sieve results for any k will help, because I will remove the tests (after verifying, of course)... I'm just guessing that a lot of work will get duped, since I have systems removing factors all the time. As it is, I think almost all the k values are sieved to about the right limit for n < 3000000. Currently I'm removing fewer than 90 tests a day with the 24 systems running the sieving. I could run more, but diminishing returns are setting in, so I don't want to waste too much time.

    Right now I'm thinking about sieving the next set of values on a faster AMD-based cluster with a larger number of processors. I'll probably sieve from n = 3M to 30M or 40M all at once, due to the fact that testing more n values at once takes very little added effort. I may even sieve higher... I'll probably get Paul's input on what he thinks would be a good balance before I start that.

    So for right now, I'll let Sander coordinate the effort to sieve k=4847 from n=2-3 mill. If you can collect all your results together and send them to me a little before those tests start being assigned (a few weeks from now, I'm guessing), that would be great.

    -Louie

  33. #33
    Thanks for the answers smh.

    Because ceselb wants to sieve from 500, I will start from 600 up to 700.

    Lars

  34. #34
    Thanks for your answer Louie.

    As you are removing 90 numbers a day on 24 computers, that's still almost 4 a day per computer. I guess even the fastest P4 available can't do one test in 6 hours with n around 3M, so it might be worth continuing a bit further.

    Sieving all the remaining k's at once would be really cool, especially if it would speed things up a little too. But 30 or 40M seems ambitious. It will take years to get there (GIMPS is not even at 20M, and an LL test is much faster than a Proth prp test!).


    Okay, I don't mind coordinating the sieving for k=4847 for a while.

    I think 100G (100x10^9) makes a nice chunk for one searcher, so just post which chunk you'll take, and once you're done, send the output file and the *.del file to sander3005 @ yahoo.com
    I'll merge the files and send them to Louie once we are getting close to 2M (delete the 1M to 2M part from the input file; there's no need to sieve those further). When you send me the files, please clearly state which range you sieved!

    200 - 300 SMH (Done)
    300 - 400 SMH
    400 - 500 Bellbox
    500 - 600 ceselb ???
    600 - 700 LTD

  35. #35
    ceselb
    Cool. It'll take me ~2 weeks to sieve this chunk. Is there any significant gain if you remove eliminated N's from the other people's ranges as you go?

  36. #36
    philmoore
    How much faster is a Lucas-Lehmer test than a probable primality test on a Proth number of similar size? I would guess not more than twice as fast; maybe someone else can answer that.

    True, 30 to 40 million seems like a long way off. But 10 million seems like a good minimum, and increasing the sieve size to 40 million only doubles the expected amount of sieving time to the same depth. For n around 30 million, the optimum depth is deeper, but as long as the project reaches an optimum depth for n around 3 million by the time those numbers come up, sieving can continue for a long, long time to get to the optimum limit for 30 million. Remember, it's Seventeen OR BUST! Who can predict what this project will look like in 3 years? I don't think it is at all inconceivable that the exponents being tested will be in the 20 millions by then. GIMPS wouldn't be at the point where they are now if it hadn't been for continued hardware improvements, and we can only hope that more improvements will benefit this project.

    One more point: Even though prp tests take longer, as more k values are eliminated, SoB will eventually end up with fewer candidates to test in a given range of n than GIMPS.

  37. #37
    One question:

    As all tested k's have a Proth weight > 0, there's definitely a prime (infinitely many, to be more specific), right? So it's not a question of whether we find one, but when, is it?
    Just for general understanding...

    TIA!

  38. #38
    philmoore
    Good question! If we definitely knew there was a prime for each k, we wouldn't even have to test! From our knowledge of the distribution of primes, we can say that the probability that any of these infinite sets of numbers k*2^n+1 contains no prime value is zero, but that is not a rigorous estimate. On the other hand, the results of Payam Samidoost and others in the "dual Sierpinski project" make it extremely unlikely that 13 of the 14 remaining k values could be Sierpinski numbers. The one value which has not been eliminated as a dual Sierpinski candidate is k=28433, so finding a prime value for 28433*2^n+1 would be sweet! Joseph Arthur is currently testing near n=1,800,000 on this k value.

  39. #39
    Here are the links to the dual Proth searches:

    http://sierpinski.insider.com/28433
    http://sierpinski.insider.com/dual

    Just for those who are not yet involved or informed: dual Proths are numbers of the form k+2^n, which can be considered the negative continuation of the Proth numbers k*2^n+1:

    k*2^(-n) + 1 = k/2^n + 1 = (k + 2^n)/2^n

    For more details see
    http://groups.yahoo.com/group/primenumbers/message/6982
    and the messages which followed it.

    Their most successful application has been to narrow the search for the Sierpinski and Riesel problems. Currently 1 candidate remains for the Sierpinski problem and 11 for the Riesel problem.

    For the dual Riesel search you can see http://sierpinski.insider.com/riesel

    Payam

  40. #40
    Well, it's running slow and steady as I expected; I had a little glitch yesterday, so I didn't really start until about noon today. Hopefully we can get this done in the next few weeks and start looking for those (not-so-)elusive primes in this range.

    Current Status:
    405 billion
    2 numbers eliminated.
